I worry that 7-Zip is going to lose relevance because of its lack of zstd support. zlib's performance is intolerable for large files, and zlib-ng's SIMD implementation only helps a bit here. Which is a shame, because 7-Zip is a pretty amazing container format, especially with its encryption and file-splitting capabilities.
I use ZSTD a ton in my programming work where efficiency matters.
But for sharing files with other people, ZIP is still king. Even 7z or RAR is niche. Everyone can open a ZIP file, and they don't really care if the file is a few MBs bigger.
> Everyone can open a ZIP file, and they don't really care if the file is a few MBs bigger.
You can use ZSTD with ZIP files too! It's compression method 93 (see https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT which is the official ZIP file specification).
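If you're curious what a given archive actually uses, Python's stdlib zipfile will report each entry's method ID even when it can't decompress that method. A minimal sketch (the name map is mine, from my reading of APPNOTE):

```python
import sys
import zipfile

# Method IDs per APPNOTE.TXT; 93 is the Zstandard codec from spec 6.3.8.
METHOD_NAMES = {
    0: "store",
    8: "deflate",
    9: "deflate64",
    12: "bzip2",
    14: "lzma",
    93: "zstd",
    95: "xz",
    99: "aes-wrapped",
}

def list_methods(path):
    """Report the compression method of every entry in a ZIP file."""
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            name = METHOD_NAMES.get(info.compress_type,
                                    f"unknown ({info.compress_type})")
            print(f"{name:>12}  {info.filename}")

if __name__ == "__main__":
    list_methods(sys.argv[1])
```

Anything that prints only store/deflate is in the lowest-common-denominator subset; entries showing lzma, zstd, and friends are the ones "unzip" tends to choke on.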
Which reveals that "everyone can open a ZIP file" is a lie. Sure, everyone can open a ZIP file, as long as that file uses only a limited subset of the ZIP format features. Which is why formats which use ZIP as a base (Java JAR files, OpenDocument files, new Office files) standardize such a subset; but for general-purpose ZIP files, there's no such standard.
(I have encountered such ZIP files in the wild; "unzip" can't decompress them, though p7zip worked for these particular ZIP files.)
> You can use ZSTD with ZIP files too!
Support for which was added in 2020:
> On 15 June 2020, Zstandard was implemented in version 6.3.8 of the zip file format with codec number 93, deprecating the previous codec number of 20 as it was implemented in version 6.3.7, released on 1 June.[36][37]
* https://en.wikipedia.org/wiki/Zstd#Usage
So I'm not sure how widely deployed it would be.
Most Linux distributions have zip support with zstd.
Well, only a lunatic would use ZIP with anything but DEFLATE/DEFLATE64
There are A LOT of zip files using lzma in the wild. Also, how about people learn to use updated software? Should newer video compression technologies not be allowed in mkv/mp4?
If you can't open it, well... then stop using '90s WinZip.
No. You can't get people to use updated software. You can't get a number of people to update past windows 7. This has been and will likely remain a persistent issue, and it's sure not one you're going to fix. All it will do is limit your ability to work with people. This isn't a hill on which you should die.
if they want to open certain files, they will update
>how about people learn to use updated software?
How about software developers learn to keep software working on old OSes and old hardware?
What stops you from running updated zip/unzip on an old OS or on old hardware?
Nothing, but what stops you from using DEFLATE64?
Installing new software has a real time and hassle cost, and how much time are you actually saving over the long run? It depends on your usage patterns.
Supporting old APIs and additional legacy ways of doing things has a real cost in maintenance.
So does not supporting them, but the developer gets to externalize those.
The developer is hired by someone who gets to make that decision. Ultimately, the customer does. That's why some people spend extreme resources on legacy crap: someone has deemed it worth it.
What stops you from installing Win95 and WinZip?
What software doesn't support OSes that are in active SECURITY support?
mkv or mp4 with h264 and aac is good enough. mp3 is good enough. jpeg is good enough. zip with deflate is also good enough.
"Good enough" is not good enough.
h264 is not good enough for many things
> new Office files
I know what you mean, I’m not being pedantic, but I just realized it’s been 19 years. I wonder when we’ll start calling them “Office files”.
> I wonder when we’ll start calling them “Office files”.
Probably around the same time the save icon becomes something other than a 3 1/2" floppy disk.
Nowadays I’ve noticed fewer applications have a save icon at all, relying instead on auto-save.
English is evolving as a hieroglyphic language. That floppy disk icon stands a good chance of becoming simply the glyph meaning "save". The UK still uses an icon of an 1840s-era bellows camera for its speed camera road signs. The origin story will be filed away neatly and only its residual meaning will be salient.
Same thing with "WAV" files. There are at least 3 popular formats for the audio data out there.
A more 'useful' one is WebP. It has both a lossy and a lossless compression algorithm, which have very different strengths and weaknesses. I think nearly every device supports reading both, but so many 'image optimization' libraries and packages don't - often just doing everything as lossy when it could be lossless (icons and whatnot).
It's similarly annoying how many websites take the existence of the lossy format as a license to recompress all WebP uploads, or sometimes other filetypes converted to WebP, even when it causes the filesize to increase. It's like we're returning to ye olden days of JPEG artifacts on every screenshot.
You can and I've done it… but you can't expect anything to be able to decompress it unless you wrote it yourself.
> Copyright (c) 1989 - 2014, 2018, 2019, 2020, 2022
Mostly it seems nutty that, after all these years, they’re still updating the zip spec instead of moving on to a newer format.
The English language is awful, and we keep updating it instead of moving to a newer language.
Some things are used for interoperability, and switching to a newer incompatible thing loses all of its value.
.7z and .tar.* have existed for at least 20 years now, but you are unlikely to see a wild 7z file and .tar.* is isolated to the UNIX space
I don't know about that; I had a dicey situation recently where PowerShell's Compress-Archive couldn't handle archives >4GB and had to use 7-Zip. It's more reliable, and you can ship 7za.exe or create self-extracting archives (wish those were more of a thing outside of the Windows world).
In the realm of POSIX.2 and UNIX relatives, the closest analog would be a "shar" archive.
They are not regarded kindly.
https://en.wikipedia.org/wiki/Shar_(file_format)
I understand that security has to compromise for the real world, but a self-extracting archive is possibly one of the worst things one could use in terms of security.
Use the pigz command for parallel gzip. Mark Adler also has an example floating around somewhere about how to implement basically the same thing using Z_BLOCK.
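pigz keeps the ratio close to single-stream gzip by sharing dictionary state across chunk boundaries (that's where Z_BLOCK comes in). A cruder variant, roughly what pigz --independent does, just compresses each chunk as its own gzip member, since a gzip stream may legally contain multiple concatenated members. A minimal Python sketch of the crude version (the chunk size is an arbitrary choice here):

```python
import gzip
import sys
from concurrent.futures import ProcessPoolExecutor

CHUNK = 8 * 1024 * 1024  # bigger chunks = better ratio, less parallelism

def read_chunks(f):
    while True:
        data = f.read(CHUNK)
        if not data:
            return
        yield data

def parallel_gzip(src, dst, workers=8):
    """Compress each chunk as an independent gzip member.

    Concatenated members form a valid gzip stream, so plain gunzip
    can decompress the result; the cost is a slightly worse ratio,
    since the deflate dictionary resets at every chunk boundary.
    """
    with open(src, "rb") as f, open(dst, "wb") as out, \
            ProcessPoolExecutor(max_workers=workers) as pool:
        for member in pool.map(gzip.compress, read_chunks(f)):
            out.write(member)  # map() yields results in input order

if __name__ == "__main__":
    parallel_gzip(sys.argv[1], sys.argv[2])
```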
My main use case for 7z is bypassing corporate filters that block ZIPs from being sent.
I think gmail is onto you. They blocked one of my 7z files the other day.
What are you compressing with zstd? I had to do this recently and the "xz" utility still blows it away in terms of compression ratio. In terms of memory and CPU usage, zstd wins by a large margin. But in my case I only really cared about compression ratio
People tend to care about decompression speed - xz can be quite slow decompressing super-compressed files, whereas zstd decompression speed is largely independent of that.
People also tend to care about how much time they spend on compression for each incremental % of compression performance, and zstd tends to sit on the Pareto frontier for that (at least among open-source algorithms).
This makes sense. A lot of end-users have internet speeds that can outpace the decompression speeds of heavily compressed files. Seems like there would be an irrational psychological aspect to it as well.
Unfortunately for the hoster, they either have to eat the cost of the added bandwidth from a larger file or have people complain about slow decompression.
Well, the difference is quite a bit more manageable in practice, since you're talking about a single-digit space difference vs a 2-100x difference in decompression performance.
I usually see zstd on max settings outperform xz on speed and very slightly on compression (though that's a tiny difference).
In my experience, using zstd --long --ultra -22 gives a marginally better compression ratio than xz -9 while being significantly faster.
I think it depends on what you're compressing. I experimented with my data, which is full of hex-text XML files. xz -6 is both faster and smaller than zstd -19 by about 10%. For my data, xz -2 and zstd -17 achieve the same compressed size, but xz -2 is 3 times faster than zstd -17. I still use xz for archiving because I rarely need to decompress.
Do you have examples where xz 'blows it away', not just zstd -3?
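Results really do vary by corpus, so it's worth measuring on your own data rather than someone else's. A quick harness; assumes the third-party zstandard package (pip install zstandard), with stdlib lzma standing in for xz, and sample.bin as a placeholder for whatever you actually compress:

```python
import lzma
import time

import zstandard  # third-party: pip install zstandard

def bench(name, compress, data):
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name:>8}: {len(out) / len(data):7.2%} of original in {dt:.2f}s")

data = open("sample.bin", "rb").read()  # substitute your own corpus

for preset in (2, 6, 9):
    bench(f"xz -{preset}", lambda d, p=preset: lzma.compress(d, preset=p), data)

for level in (3, 17, 19):
    cctx = zstandard.ZstdCompressor(level=level)
    bench(f"zstd -{level}", cctx.compress, data)
```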
7-zip is the de-facto tool on Windows and has been for a long time. It's more than fast enough, and compresses well enough, for 99% of people's use cases.
It's not going anywhere anytime soon.
The more likely thing to eat into its relevance is that Windows now has built-in basic support for zipping/unzipping (EDIT: and other formats), which relegates 7-zip to more niche uses.
7-zip is the de-facto tool on Windows and has been for a long time.
Agreed. The only thing I think it has been missing is PAR support. They should consider incorporating one of the par2cmdline forks and porting that code to Windows as well, so that it has recovery options similar to WinRAR's. It's not used by everyone, but that would deprecate any remaining use cases for WinRAR, in my opinion.
Windows has had built in zip/unzip since vista. 7zip is far superior (and the install base proves that)
As mentioned in another comment, zip support actually goes back as far as '98, but only Windows 11 added support for handling other formats like RAR/7-Zip/.tar/.tar.gz/.tar.bz2/etc.
That allows it to be a default that 'just works' for most people without installing anything extra.
The vast majority of users don't care about the extra performance or functionality of a tool like 7-zip. They just need a way to open and send files and the Windows built-in tool is 'good enough' for them.
I agree that 7-zip is better, but most users simply do not care.
Windows zip is not in fact good enough. I've run into weird, buggy behavior, hanging on extract, all sorts of nonsense. I can see the argument that a universally adopted solution is better, but that's different from Windows just not working.
I'm not saying I would ever use it. I'm saying that for casual non-power users, it's good enough. They work with it and if it breaks once in a blue moon they don't care. They just want it to open the files they get and give them a way to send files compressed.
That is enough to bite into 7-Zip's share of users.
Windows unzip is so ungodly slow and terrible! Long live 7zip!
Is there something different about the built in zip context menu functionality now than before? I'm pretty sure you could convert something to a zip file since forever ago by right clicking any file.
It could support basic ZIP files, but only Windows 11 added support for 7-Zip (.7z), RAR (.rar), TAR, and TAR variants (like .tar.gz, .tar.bz2, etc).
That makes it 'good enough' for the vast majority of people, even if it's not as fast or fully-featured as 7-Zip.
7-zip, through its .7z format, also supports AES encryption. I'd argue it's probably the easiest way to encrypt individual file archives that you need to access on both Windows and Linux. I have a script I periodically run that makes an encrypted .7z archive of all of my projects, which I then upload for off-site backup. (On-site, I don't bother encrypting.)
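For anyone wanting the same setup, a minimal sketch of that kind of script (paths are hypothetical; the 7z CLI's -p sets the password, and -mhe=on additionally encrypts the archive headers so even file names need the password):

```python
import datetime
import getpass
import subprocess

SOURCE = "/home/me/projects"  # hypothetical path: adjust to taste
DEST = f"/backups/projects-{datetime.date.today().isoformat()}.7z"

def make_encrypted_archive(password):
    # "a" adds to an archive; -mhe=on encrypts headers too (7z format only).
    subprocess.run(
        ["7z", "a", f"-p{password}", "-mhe=on", DEST, SOURCE],
        check=True,
    )

if __name__ == "__main__":
    make_encrypted_archive(getpass.getpass("Archive password: "))
```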
You are looking for 7-Zip Zstd: https://github.com/mcmilk/7-Zip-Zstd
I don't know what your use case is, but it seems to be quite a niche.
I was curious upon seeing this and found the thread where its inclusion was turned down: https://sourceforge.net/p/sevenzip/discussion/45797/thread/a...
Not that many people care about zstd; I would assume most 7-zip users care about the convenience of the GUI.
It's been a long time since I used Windows, but back in the day I used 7-Zip exactly because it could open more or less $anything. That's also why we installed it on many customer computers.
On Linux bsdtar/libarchive gives a similar experience: "tar xf file" works on most things.
7-Zip is like VLC: maybe not the best, but it’s free (speech and beer) and handles almost anything you throw at it. For personal use, I don’t care much about efficient compression either computationally or in terms of storage; I just want “tar, but won’t make a 700 MB blank ISO9660 image take 700 MB”.
In fact, this is the first time I've even heard about it, and I am semi-IT literate. The prevalence of a compression standard is about how ubiquitous it is. For that one, I would vote "not even on the radar yet".
That's why 7zip should support it. People care about the convenience of the GUI and we all benefit from better compression being accessible with a nice GUI.
That's basically me! I really like 7-Zip because it opens most archive formats I have to work with and also the .7z format has pretty good compression for the stuff I want to store longer term.
I just hope that the recipient will be able to open the file without too much difficulty. I am willing to sacrifice a few megabytes if necessary.
.. but 7-zip has a pretty terrible GUI?
Hence why PeaZip is so popular, and J-Zip used to be before it was stuffed with adware.
If you're expecting a "mobile first" or similar GUI where most of the screen is dedicated to whitespace, basic features involve 7 or more mouse clicks, and for some reason it all gets changed every ~6 months, then yes, the 7zip GUI is terrible.
Desktop software usability peaked sometime in the late 90s, early 2000s. There's a reason why 7zip still looks like it's from ~2004.
When compared to its contemporaries, the 7-zip GUI is noticeably worse. Back in 2004, WinRAR and WinZip were both clearly superior.
Most people won't use that GUI, but will right-click a file or folder -> 7-Zip -> Add To ... and it will spit out a file without questions.
Granted, Windows 11 has started doing the same for its zip and 7z compressors.
Same trick goes for opening archives or executables (Installers) as archives.
Let's chat about the Windows 11 right-click menu. I'm pretty sure they hid all the application menu extensions to avoid worst-case performance issues.
Exactly it. 3rd parties injecting their extensions harmed performance, which people turn around and blame Microsoft for.
All the GUI I need is right click-> extract here or to folder. And 7zip is doing that nicely.
PeaZip is popular? It seems a lot less tested than 7zip; last time I tried to use it, it failed to unpack an archive because the password had a quote character or something like that. I've never had such crazy issues in 7zip myself.
> .. but 7-zip has a pretty terrible GUI?
Since you're asking, the answer is no. 7-Zip has an efficient and elegant UI.
I would never trust PeaZip.
The author updates code in the GitHub repo... by drag-and-drop file uploads. https://github.com/peazip/PeaZip/commits/sources/
If by GUI you mean the ability to right-click a .zip file and unzip it just through the little window that pops up, you're totally right. At least that + the unzipping progress bar is what I appreciate 7zip for.
I use the right click context menu to run 7zip, why would you open a GUI?
That is a GUI!
https://github.com/mcmilk/7-Zip-zstd
https://github.com/M2Team/NanaZip
It includes the above patches as well as a few QoL features.
Thanks! Any ideas why it didn't get merged? Clearly 7-Zip has some development activity going on and so does this fork...
Working with Igor Pavlov, the creator of 7-zip, does not seem very straightforward (understatement).
7-zip's development is very cathedral-style. Igor Pavlov doesn't appear to accept contributions from the public.
Being a bit faster or more efficient won't make most people switch. 7z offers great UX (a convenient GUI and support for many formats) that keeps people around.
If anything, the GUI and UX are terrible compared to WinRAR.
Since Windows 11 incorporated libarchive back in October 2023, there is less reason to use 7-zip on Windows. I would be surprised if any of my friends even knew what a zip file is, let alone zstd.
If you ever try to extract an archive several gigabytes in size with hundreds of thousands of files (I know, it's rare), the built-in one is as slow as a turtle compared to 7z.
As long as it does a better job than whatever Windows team packs into the OS, they're safe.
Even the latest Windows 11 takes minutes to do what 7-Zip does in seconds.
Goes to show how well all those leetcode interviews turn out.
Glad I'm not the only one who feels this way. WinZip is a slow and bloated abomination, especially compared to 7-Zip. The right-click menu context entry for 7-Zip is very convenient and runs lightning fast. WinZip can't compete at all.
Mixing channels here: WinZip is a commercial product, unrelated to Windows 11's 7z support and to my comment.
https://www.winzip.com
There are lots of 7zip-alikes with zstd support (it's effectively a plugin). On [corporate] Windows, NanaZip would be my choice, as it's available in the Windows Store.
On anything else: either zstd directly, or tar.
Why are they not adopting zstd?
It already has been adopted - look up NanaZip.
How does that work? You cannot write to disk before you know the compressed size. Or if you do, you can use a data descriptor, but then you cannot write concurrently.
I guess they buffer the compressed stream to RAM before writing to zip. If they want to keep their zip stable (always the same output given the same input), they also need to keep it a bit longer than necessary in RAM.
I think you get different compressed files depending on how many threads you use to compress
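A minimal sketch of that buffer-in-RAM pattern (plain zlib standing in for whatever 7-Zip actually does internally; the output is just concatenated zlib streams, so this only illustrates the scheduling, not a real archive format). Chunks compress in parallel but are written strictly in input order, so early-finishing chunks wait in memory. With a fixed chunk size the output is byte-stable regardless of worker count; if the chunker instead sizes chunks by thread count, you get the variation described above.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # fixed size keeps output independent of threads

def compress_stream(src, dst, workers=8):
    """Compress chunks in parallel; write strictly in input order.

    Early-finishing chunks sit in RAM until the write cursor reaches
    them. Threads suffice because zlib releases the GIL while working.
    """
    with open(src, "rb") as f, open(dst, "wb") as out, \
            ThreadPoolExecutor(max_workers=workers) as pool:
        futures = []
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            futures.append(pool.submit(zlib.compress, data, 6))
        for fut in futures:           # drain oldest-first, not fastest-first
            out.write(fut.result())
```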
Why was there a limitation on Windows? I can't find any such limit for Linux.
A lot of synchronization primitives in the NT kernel are based on a register-width bitmask of a CPU set, so each collection of 64 hardware threads on 64-bit systems kind of runs in its own instance of the scheduler. It's also unfortunately part of the driver ABI, since these ops were implemented as macros and inline functions.
Because of that, transitioning a software thread to another processor group is a manual process that has to be managed by user space.
Wow. That's surprisingly lame.
The NT kernel dates back to 1993. Computers didn’t exceed 64 logical processors per system until around 2014. And doing it back then required a ridiculously expensive server with 8 Intel CPUs.
The technical decision Microsoft made initially worked well for over two decades. I don’t think it was lame; I believe it was a solid choice back then.
> Computers didn’t exceed 64 logical processors per system until around 2014.
Server systems were available with that since at least the late 90s. Server systems with >10 CPUs were already available in the mid-90s. By the early-to-mid 90s it was pretty obvious that was only going to increase and that the 64-CPU limit was going to be a problem down the line.
That said, development of NT started in 1988, and it may have been less obvious then.
"Server systems" but not server systems that Microsoft targeted. NT4 Enterprise Server (1996) only supported up to 8 sockets (some companies wrote their own HAL to exceed that limit). And 8 sockets was 8 threads with no NUMA back then, not something that would have been an issue for the purposes of this discussion.
SGI Origin did by 1996.
Though MS ported NT to a number of systems (mips, alpha, ppc) it wasn’t able to play in the very big leagues until later.
I agree it was a reasonable choice at the time. Few were getting mileage out of that many CPUs back then.
The Sun E10K (up to 64 physical processors) came out in 1997.
(Now, NT for Sparc never actually became a thing, but it was certainly on Microsoft's radar at one point)
Linux had many similar restrictions in its lifetime; it just has a different compatibility philosophy that allowed it to break all the relevant ABIs. Most recently, dual-socket 192-core Ampere systems were running into a hardcoded 256-processor limit. https://www.tomshardware.com/pc-components/cpus/yes-you-can-...
Tom's Hardware is mistaken in their reporting. That's raising the limit without using CPUMASK_OFFSTACK. The kernel already supported thousands of cores with CPUMASK_OFFSTACK, and has at least since the 2.6.x days.
That was actually the DEC team, from what I understand; Microsoft just hired all of their OS engineers when they collapsed.
Dave Cutler left DEC in 1988 and started working on WINNT at MS, well before the collapse.
I mean, x86 didn't, but other systems had been exceeding 64 cores since the late 90s.
And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.
> And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.
If that were the case the above system wouldn't have needed 8 sockets. With NUMA systems the app needs to be scheduling group aware anyways. The difference here really appears when you have a single socket with more than 64 hardware threads, which took until ~2019 for x86.
Why would an application need to be NUMA aware on Linux? Most software I've ever written or looked at has no concept of NUMA. It works just fine.
The same reasons it would on macOS or Windows; most people just aren't writing software which needs to worry about having a single process running many hundreds of threads across 8 sockets efficiently, so it's fine not to be NUMA aware. It's not that it won't run at all (a multi-socket system is still a superset of a single-socket system), just that it will run much more poorly than it could in such scenarios.
The only difference with Windows is that a single processor group cannot contain more than 64 cores. This is why 7-Zip needed to add processor group support: even though a 96-core Threadripper presents as a single NUMA node, the software has to request assignment to 2x48 processor groups, the same as if it were 2 NUMA nodes with 48 cores each, because of the KAFFINITY limitation.
Examples of common NUMA-aware Linux applications are SAP HANA and Oracle RDBMS. On multi-socket systems it can often be helpful to run postgres and such via https://linux.die.net/man/8/numactl too, even if you're not quite at the scale where you need full NUMA awareness in the DB. You generally also want hypervisors to pass the correct NUMA topologies to guests as well. E.g. if you have a KVM guest with 80 cores assigned on a 2x64 Epyc host setup, then you want to set the guest topology to something like 2x40 cores, or it'll run like crap because the guest sees it can schedule one way but reality is another.
There were single image systems with hundreds of cores in the late 90s and thousands of cores in the early 2000s.
I absolutely stand by the fact that Intel and AMD didn't pursue high core count systems until that point because they were so focused on single-core perf, in part because Windows didn't support high core counts. The end of Dennard scaling forced their hand, and with it Microsoft's processor group hack.
AMD and Intel were focused on single-core performance, because personal desktop computing was the bigger business until around the mid-to-late 2000s.
Single core performance is really important for client computing.
They were absolutely interested in the server market as well.
Do you have anything to say regarding NUMA for the 90s core counts though? As I said, it's not enough that there were a lot of cores - they have to be monolithically scheduled to matter. The largest UMA design I can recall was the CS6400 in 1993, to go past that they started to introduce NUMA designs.
Windows didn't handle NUMA either until they created processor groups, and there are all sorts of reasons why you'd want to run a process that spans NUMA nodes (particularly on Windows, which encourages single-process, high-thread-count software architectures). It's really not that big of a deal for a lot of workloads where your working set fits just fine in cache, or where you take the high-hardware-thread-count approach of just having enough contexts in flight that you can absorb the extra memory latency in exchange for higher throughput.
3.1 (1993) - KAFFINITY bitmask
5.0 (1999) - NUMA scheduling
6.1 (2009) - Processor Groups to have the KAFFINITY limit be per NUMA node
Xeon E7-8800 (2011) - An x86 system exceeding 64 total cores is possible (10x8 -> requires Processor Groups)
Epyc 9004 (2022) - KAFFINITY has created an artificial limit for x86 where you need to split groups more granular than NUMA
If x86 had actually hit a KAFFINITY wall, then the E7-8800 event would have occurred years before processor groups, because >8 core CPUs are desirable regardless of whether you can stick 8 in a single box.
The story is really a bit reverse from the claim: NT in the 90s supported architectures which could scale past the KAFFINITY limit. NT in the late 2000s supported scaling x86 but it wouldn't have mattered until the 2010s. Ultimately KAFFINITY wasn't an annoyance until the 2020s.
> other systems had been exceeding 64 cores since the late 90s.
Windows didn’t run on these other systems, why would Microsoft care about them?
> x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it
For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.
> Windows didn’t run on these other systems, why would Microsoft care about them?
Because it was clear that high core count, single system image platforms were a viable server architecture, and NT was vying for the entire server space, intending to kill off the vendor Unices.
> For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.
Linux wasn't the only OS. Solaris and AIX were NT's competitors too back then, and supported higher core counts.
Windows NT was originally intended to be multi-platform.
NT was and continues to be multi-platform.
That doesn't mean every platform was or would have been profitable. When x86 became 'good enough' to run your mail or web server, it doomed other architectures (and often their OSes), as the cost of x86 was vastly lower than the Alphas, PowerPCs, and so on.
Seems like this is a general Windows thing per https://learn.microsoft.com/en-us/windows/win32/procthread/p... - applications that want to run on more than 64 CPUs need to be written with dedicated support for doing so.
The linked Processor Group documentation also says:
> Applications that do not call any functions that use processor affinity masks or processor numbers will operate correctly on all systems, regardless of the number of processors.
I suspect the limitation 7zip encountered was in how it checked how many logical processors a system has, to determine how many threads to spawn. GetActiveProcessorCount can tell you how many logical processors are on the system if you pass ALL_PROCESSOR_GROUPS, but that API was only added in Windows 7 (that said, that was more than 15 years ago, they probably could've found a moment to add and test a conditional call to it).
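The counts are easy to poke at from ctypes if you have a >64-thread box handy (Windows-only; ALL_PROCESSOR_GROUPS is 0xffff per winnt.h):

```python
import ctypes

kernel32 = ctypes.WinDLL("kernel32")
ALL_PROCESSOR_GROUPS = 0xFFFF  # winnt.h

kernel32.GetActiveProcessorGroupCount.restype = ctypes.c_ushort
kernel32.GetActiveProcessorCount.argtypes = [ctypes.c_ushort]
kernel32.GetActiveProcessorCount.restype = ctypes.c_uint32

groups = kernel32.GetActiveProcessorGroupCount()
total = kernel32.GetActiveProcessorCount(ALL_PROCESSOR_GROUPS)
print(f"{groups} processor group(s), {total} logical processors total")
for g in range(groups):
    print(f"  group {g}: {kernel32.GetActiveProcessorCount(g)} processors")
```

On anything with 64 or fewer logical processors, the per-group and total counts agree, which is exactly why code that only reads the affinity mask never notices the difference.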
It isn't just detecting the extra logical processors, you have to do work to utilise them. From the linked text:
"If there are more than one processor group in Windows (on systems with more than 64 cpu threads), 7-Zip distributes running CPU threads across different processor groups."
The OS does not do that for you under Windows. Other OSs handle that many cores differently.
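Concretely, the native call for that distribution is SetThreadGroupAffinity: each worker thread gets pinned to a target group before it starts. A hedged ctypes sketch of the idea (Windows-only, minimal error handling):

```python
import ctypes
from ctypes import wintypes

class GROUP_AFFINITY(ctypes.Structure):
    _fields_ = [("Mask", ctypes.c_size_t),   # KAFFINITY: one bit per CPU
                ("Group", wintypes.WORD),
                ("Reserved", wintypes.WORD * 3)]

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

def move_current_thread_to_group(group):
    """Pin the calling thread to all processors of one group."""
    count = kernel32.GetActiveProcessorCount(group)
    aff = GROUP_AFFINITY(Mask=(1 << count) - 1, Group=group)
    if not kernel32.SetThreadGroupAffinity(
            kernel32.GetCurrentThread(), ctypes.byref(aff), None):
        raise ctypes.WinError(ctypes.get_last_error())
```

Spawn N workers and hand worker i group i % group_count, and you have the gist of what the 7-Zip release notes describe.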
> more than 15 years ago, they probably could've found a moment to add and test a conditional call to it
I suspect it hasn't been an issue much at all until recently. Any single block of data worth spinning up that many threads for compressing is going to be very large; you don't want to split something into too-small chunks for compression, or you lose some benefit of the dynamic compression dictionary (sharing that between threads would add a lot of inter-thread coordination work, killing any performance gain even if the threads are running local enough on the CPU to share cache). Compression is not an inherently parallelizable task, at least not "embarrassingly" so like some processes.
Even when you do have something to compress that would in theory benefit from more than 64 separate tasks, unless it is all in RAM (or on an incredibly quick & low-latency drive/array) the process is likely to be IO-starved long before it is compute-starved, when you have that much compute resource to hand.
Recent improvements in storage options & CPUs (and the bandwidth between them) have presumably pushed the occurrences of this being worthwhile (outside of artificial tests) from “practically zero” to “near zero, but it happens”, hence the change has been made.
Note that two or more 7-zip instances working on different data could always use more than 64 threads between them, if enough cores to make that useful were available.
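The dictionary cost is easy to measure for yourself: compress a file whole, then as independent chunks, and compare sizes (zlib here for simplicity; sample.bin is a placeholder for your own data):

```python
import zlib

def chunking_overhead(data, chunk_size):
    """Ratio of chunked-compressed size to single-stream size."""
    whole = len(zlib.compress(data, 9))
    chunked = sum(
        len(zlib.compress(data[i:i + chunk_size], 9))
        for i in range(0, len(data), chunk_size))
    return chunked / whole

data = open("sample.bin", "rb").read()  # substitute your own corpus
for kib in (64, 256, 1024, 4096):
    factor = chunking_overhead(data, kib * 1024)
    print(f"{kib:5d} KiB chunks -> {factor:.3f}x the single-stream size")
```

(zlib's 32 KiB window means the penalty flattens out quickly; wider-window codecs like zstd --long keep feeling the chunking at much larger sizes.)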
Are you sure that if you don't attempt to set any affinities, Windows won't schedule 64+ threads over other processor groups? I don't have any system handy that'll produce more than 64 logical processors to test this, but I'd be surprised if Windows' scheduler won't distribute a process's threads over other processor groups if you exceed the number of cores in the group it launches into.
The referenced text suggests applications will "work", but that isn't really explicit.
They're either wrong or thinking about Windows 7/8/10. That page is quite clear.
> starting with Windows 11 and Windows Server 2022 the OS has changed to make processes and their threads span all processors in the system, across all processor groups, by default.
> Each process is assigned a primary group at creation, and by default all of its threads' primary group is the same. Each thread's ideal processor is in the thread's primary group, so threads will preferentially be scheduled to processors on their primary group, but they are able to be scheduled to processors on any other group.
I mean, it seems it's quite clear that a single process and all of its threads will just be assigned to a single processor group, and it'll take manual work for that process to use more than 64 cores.
The difference is just that processes will be assigned a processor group more or less randomly by default, so they'll be balanced at the process level, but not the thread level. Not super helpful for a lot of software systems on Windows, which has historically preferred threads to processes for concurrency.
> it'll take manual work for that process to use more than 64 cores.
No it won't.
It absolutely will. Your process is only assigned a single processor group at process creation time. The only difference now is that it's by default assigned a random processor group rather than inheriting the parent's. For processes that don't require >64 cores, this means better utilization at the system level. However, you're still assigned <=64 cores per process by default.
That's literally why 7-zip is announcing completion of that manual work.
The 7zip code needed to change because it was counting cores by looking at affinity masks, and that limits it to 64.
It also needed to change if you want optimal scheduling, and it needed to change if you want it to be able to use all those cores on something that isn't Windows 11.
But for just the basic functionality of using all the cores:
> Starting with Windows 11 and Windows Server 2022, on a system with more than 64 processors, process and thread affinities span all processors in the system, across all processor groups, by default
That's documentation for a single process messing with its affinity. They're not writing that because they wrote a function to put different processes on different groups. A single process will span groups by default.
That depends on what format you're using. Zip compresses every file separately. Bzip and zstd have pretty small maximum block sizes and gzip doesn't gain much from large blocks anyway. And even when you're making large blocks, you can dump a lot of parallelism into searching for repeat data.
Windows has a concept of processor groups, which can each have up to 64 (hardware) threads. I assume they updated 7zip to support multiple processor groups.
WaitForMultipleObjects is limited to 64... since forever.
Maybe WaitForMultipleObjects' limit of 64 (MAXIMUM_WAIT_OBJECTS) applies?
An ugly limitation on an API that initially looks superior to Linux equivalents.
Windows is a terrible operating system.
I had initially migrated to NanaZip, but with Windows natively supporting the 7z format now, I'm not sure it's needed anymore.
7-zip is one of the pieces of software I've missed since I moved to macOS.
Keka is also really nice!
https://www.keka.io/
Never heard of it, I'll give it a try!
If you're talking about the program you use in the terminal, you can install it via homebrew
No, the GUI. 7-zip integrates well with the shell: select a group of files, right click -> make zip file, and so on. Or right-click a zip file and select extract. If you're accustomed to Linux you might not know what they're talking about.
TortoiseGit (and TortoiseSVN) are similarly convenient. Right click a folder with an SVN repo checked out, and select "SVN update". Right-click an empty space, and select "SVN checkout". SVN was the main distribution method for some modding communities before things like Steam Workshop and Github, specifically because TortoiseSVN made it so convenient. Checkout into your addons folder, and periodically update. What could be simpler?
How about PeaZip?
I've used PeaZip in the past, but only on Windows; I was not aware that a macOS version exists! I'll give it a try. Cheers
This may or may not be a relevant question, but does the terminology of "zip" have the same origin as the zip disk drive?
No. The ZIP format significantly predates the Zip disk.
I've used pbzip2, which takes the same parallel blocked-compression approach 7zip seems to be taking (going by AI's analysis of the changes). Theoretically the compression is less efficient, but I haven't noticed a difference in practice.
7zip has been the greatest tool for Limbo x86 on mobile.
You just use qemu-utils in Termux to convert your qcow2 partitions to IMG, and 7zip can read the IMG file.
Try it yourself and see: you can extract files from your emulated Windows.
https://xkcd.com/619/
Wow, a program that doesn't matter anymore has been very, very minimally enhanced on a platform that doesn't matter anymore, benefiting the 7 users that have more than 64 real cores on Windows and are regularly compressing archives so large that splitting them into more than 64 sections doesn't drastically reduce the compression ratio.
Posting this link to HN has consumed more human potential than the thing it is describing will save up to the end of time.
> a program that doesn’t matter anymore
The rest of this comment, though gratuitously snarky, has a point, but I don't think claiming that 7zip is irrelevant as an independent statement is even remotely coherent.
A 1% speed improvement for 1% of 7zip users is several times more productive than your comment.