What also happened in the same time frame according to this website:
- [1] 1366 x 768 was the strongest growing monitor resolution
- [2] Dual- and quad-core CPUs went up, 6-16 core CPUs went down
- [3] 4 GB and 8 GB of RAM went up, 16/32 GB fell
So it comes down to: more old (ancient?) machines in the dataset. Why?
Unknown, but probably not an indication that the hardware people use in the real World (TM) has changed.
I feel like it could simply be that some big PC recycler, perhaps in a poorer nation reselling Windows Vista machines for $15 a pop, has decided to run this benchmark as part of their recycling process and it has skewed all the results.
silvestrov 4 hours ago [-]
To me it sounds more like cheap Chromebooks, because that is enough for most people's needs.
Very little in education needs more computing power than that.
We've just told ourselves that computing power will always grow and that we will always need that growth.
That might not be true, and computers might be like cars: we don't need (want) faster cars, we want cheaper and safer cars. Top speed no longer sells.
elif 3 hours ago [-]
Ehhh
Having tried to install a modern Linux on a 2012 laptop, I can say it simply is not as cozy as your memory records it.
For reference, even a Linux designed for old hardware takes about 6 minutes to boot, and even just using the console lags.
NikolaNovak 2 hours ago [-]
I have a couple of ThinkPad T420s machines from 2011 in active use. One runs Windows 10, the other Kubuntu. The key part was installing a $40 SATA 2.5" SSD, and both operating systems boot in well under a minute, completely unoptimized.
The big revolution of the last 20 years was the move from spinning drives to SSDs. Otherwise, my daily drivers are all 8th-generation Intel CPUs - I just got a ThinkPad X1 Carbon for $300 CAD from Facebook Marketplace. Seven years old and it runs brilliantly. My wife's daily driver is a T580.
Which is all to say, I have no idea how Linux is taking 6 minutes to boot or, even worse, lagging in the terminal - something is horribly wrong there :-(
onre 2 hours ago [-]
Does not reflect my experience at all. A T420 from 2011 is still completely usable. I haven't clocked the boot-up time, but it's definitely not minutes, more like a couple dozen seconds. Everything worked out of the box with the latest stable Debian. It can do 2560x1600 over the DP connector, too. This is not even a top-spec model but the slowest i5 available back then.
unregistereddev 49 minutes ago [-]
To be fair, they didn't say /what/ computer from 2011 ran terribly. The Intel Core CPUs were the best back then. There were also Atom netbooks, terrible AMD chips, etc. AMD solidly holds the performance crown now, but in 2011 they did not.
Pretty much anything from 2011 that was not a Core i5 or better will probably not run well today unless you use a purpose-built ultralight distro.
samiv 48 minutes ago [-]
Make sure that:
- your framebuffer is hardware accelerated
- your systemd doesn't have any timeouts or service hangups
- you run a lightweight environment such as Fluxbox
And you should really be fine still.
goosedragons 2 hours ago [-]
On what laptop and what distro? Does that PC lack an SSD? I have Ubuntu on a Thinkpad W520 and it does not take anywhere close to 6 minutes to boot, nor does the console lag but it also has an SSD installed.
m3rc 54 minutes ago [-]
I was just using a 2012 X230 running Arch and it's plenty snappy on the command line or even running an Electron-based editor (or as good as such a thing can be).
Moru 43 minutes ago [-]
There was no mention of which distribution of Linux, just "modern", whatever that means. If the person installs the latest test version of some distribution that tries to jam everything in there but doesn't have actual graphics drivers for the right card, it will be very sluggish. In the meantime, my Raspberry Pi has no problems whatsoever with the latest Linux.
hexagonwin 2 hours ago [-]
My 2008 MacBook with a T8100 and 2GB of RAM boots Devuan 4 (Debian 11) in <30 secs and there's no lag. What hardware are you on?
alamortsubite 2 hours ago [-]
You're doing something very wrong. Debian Bullseye boots in under 30 seconds (timed with a stopwatch) on my Dell XPS L321X, which was released in 2012. I use the machine regularly.
josephg 3 hours ago [-]
That’s wild and disheartening. Those machines ran great at the time. I had a 2011 MacBook Air which I adored - it was fast, small, and, for the day, it had great battery life.
There is no reason for computers to get slower over time. Especially on linux.
elif 2 hours ago [-]
Bodhi Linux is the one I used on a Celeron-based system. The distro is purpose-built for retro hardware.
I wish I could explain what is going on, but it is physically painful to use, like you are remote desktopping over an Iridium satellite.
elif 2 hours ago [-]
Technically the chipset was released in 2008 but I believe this is a 2012 laptop (*)
cl3misch 3 hours ago [-]
The computers don't get slower, and light Linux distros should still work fine.
But what matters is the software you're using, and that will be modern software (mostly). Modern browsers visiting JS-heavy websites, or a heavier office suite than 15 years ago.
And if this is the context, then computers must get slower over time, right?
echelon_musk 2 hours ago [-]
> There is no reason for computers to get slower over time. Especially on linux.
I suspect those so-called "zero-cost abstractions".
DiscourseFan 5 hours ago [-]
As another commenter said, it probably has more to do with the growth of computing in the developing world. There's very little labor now that can't be done by someone living in a country where breakfast costs less than 25 cents. I mean, theoretically at least. Most of those people probably aren't learning from professors who spend their whole day teaching and doing research, and that kind of activity can only be supported by a relatively wealthy society. But the internet has also democratized a lot of that activity. Well, it's complicated either way.
keyringlight 4 hours ago [-]
It seems like a similar challenge to looking at the Steam hardware survey stats: it's probably the best public source of information, but it's too much of an overview to make anything but the most general conclusions from it. Within that data there are going to be millions of different stories if you could divide it up a bit - what are common combinations of hardware, what are common combinations in different age ranges, what is used for playing games of different ages, what's common in different locations in the world. Valve could possibly tie hardware to software purchasing activity as well as what is played.
wiredfool 16 hours ago [-]
The chart is updated bi-weekly, but the only data point that may change is the one for the current year. The first few days or weeks of a new year are less accurate compared to the end of a year.
ellisv 16 hours ago [-]
Really makes you wish the chart showed some measure of variance
kqr 9 hours ago [-]
We still know the current data point has about three times the standard error of the previous point, but I agree it's hard to say anything useful without knowledge of the within-point variance.
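To make the scaling concrete, here's a rough Python sketch using hypothetical sample counts (nothing PassMark-specific; it relies only on the standard error of a mean shrinking like 1/sqrt(n)):

    import math

    # Hypothetical sample counts, purely for illustration (not PassMark's real numbers).
    samples_full_year = 450_000   # assumed submissions over a complete year
    samples_so_far    = 50_000    # assumed submissions in the first ~6 weeks of a year

    # The standard error of a mean scales as sigma / sqrt(n), so with a similar
    # underlying spread the ratio of standard errors depends only on the counts.
    se_ratio = math.sqrt(samples_full_year / samples_so_far)
    print(f"current point has ~{se_ratio:.1f}x the standard error of a full-year point")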
throwaway287391 15 hours ago [-]
Or just have the last data point include everything from the full 12 month period before (as the title "year on year" would suggest) and maybe even put it in the correct place on the x-axis (e.g. for today, Feb 12, 2025, about 11.8% of the full year gap width from the (Dec 31) 2024 point).
contravariant 7 hours ago [-]
And then write an article every month about how the year on year difference has(n't) gone up/down yet.
SubiculumCode 8 hours ago [-]
variance.
Have you noticed that the mainstream public-level discussion of almost any topic never progresses farther than a point estimate? Variance implies nuance, and nuance is annoying to those who'd just rather paint a story. Variance isn't even that much nuance, because it is also just a point estimate for the variability of a distribution, not even its shape.
Public discourse is stuck at the mean point estimate, and as such, is constantly misled.
This is all an analogy, but feels very true.
mcmoor 43 minutes ago [-]
Seems like there's only a bandwidth of 4 bits that can be reserved for this, and variance couldn't make the cut. That's actually already generous, since sometimes only 1 bit can barely make it through: whether something is "good" or "bad".
michaelt 4 hours ago [-]
> Have you noticed that the mainstream public-level discussion of almost any topic never progresses farther than a point estimate? [...] Variance isn't even that much nuance, because it is also just a point estimate for the variability of a distribution, not even its shape.
Next time you're doing something that isn't a professional STEM job, see how far you can get through your day without adding or multiplying.
Unless you're totting up your score in a board game or something of that ilk, you'll be amazed at how accommodating our society is to people who can't add or multiply.
Sure, when you're in the supermarket you can add up your purchases as you shop, if you want to. But if you don't want to, you can just buy about the same things every week for about the same price. Or keep a rough total of the big items. Or you can put things back once you see the total at the till. Or you can 'click and collect' to know exactly what the bill will be.
You don't see mainstream discussion of variance because 90% of the population don't know WTF a variance is.
contravariant 7 hours ago [-]
Sometimes even an average is too much to ask. One of my pet peeves is articles about "number changed". Averages and significance never enter the discussion.
Worst are the ones where the number is the number of times some random event happened. In that case there's a decent chance the difference is less than twice the square root, so assuming a Poisson distribution you know the difference is insignificant.
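A minimal sketch of that rule of thumb, with made-up counts (not data from any article):

    import math

    # Made-up counts purely for illustration, e.g. "events" in two consecutive years.
    last_year, this_year = 412, 445

    diff = abs(this_year - last_year)
    # Poisson rule of thumb from the comment above: a count's standard deviation is
    # roughly its square root, so a difference under ~2*sqrt(count) is ordinary noise.
    threshold = 2 * math.sqrt(last_year)

    verdict = "probably just noise" if diff < threshold else "possibly a real change"
    print(f"diff={diff}, noise threshold~{threshold:.0f}: {verdict}")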
KeplerBoy 6 hours ago [-]
This reminds me of the presidential election, where most credible institutions talked about 50/50 odds and were subsequently criticized after Trump's clear victory in terms of electoral votes.
Few people bothered to look at the probability distributions the forecasters published which showed decent probabilities for landslide wins in either direction.
Still, both previous years were notably up after the first month already. This is different and perhaps notable regardless. Either the release schedule is skewed forward or the hardware is genuinely stagnating. Or perhaps the benchmark is hitting some other CPU unrelated bottleneck.
They have more than twice as many samples on their Feb 10 update this year (47810) as they did last year on Feb 14 (22761). They have shown some growth year on year in sample size, but nowhere near doubling.
That suggests this month has been an outlier in having a strangely large number of samples already, which could all be related—maybe their software started being bundled with a specific OEM or was featured on a popular content creator or whatever.
As a sibling notes, less accurate just means less accurate, not necessarily skewed upward. There simply is less data to draw conclusions on than there will be later, so any medium-sized effect that adds 20k extra samples will have a larger effect now than later.
hsuduebc2 12 hours ago [-]
I had the same thought. The common reasons for such a decline - a change of CPU architecture prioritizing energy efficiency over raw performance, limitations in manufacturing processes, or changes in benchmarking methodology - wouldn't produce such a steep drop. So I would guess it's either a change of methodology or, as you said, weaker CPUs in general being measured.
moffkalast 6 hours ago [-]
So if I understand right, more old hardware makes it into the sample now than before, increasing the unreliability of early data? That makes sense I guess.
lolinder 56 minutes ago [-]
Not increasing the unreliability—it's unreliable no matter what this early in the year—but it's possible that more old hardware made it in this past month than in previous early years which would explain why the number went down this time.
palijer 13 hours ago [-]
This is still just a sample size of three for a data period that is stated to be "less accurate", not "has lower values".
Just because the error isn't presented the same here doesn't mean it's error-free...
TheSpiceIsLife 6 hours ago [-]
So they have a dataset, they've cherry-picked a period-to-period comparison, and they haven't done any location-based sorting etc., to suit a narrative.
Whatever pays the bill I suppose.
lynguist 7 hours ago [-]
My hunch is that more people in countries like India use cpubenchmark / buy computers and try out cpubenchmark, and that in countries like those, lower-performing laptops outsell more powerful laptops by a large enough margin that it shows.
It doesn’t have to be literally India, it’s an example for illustration.
baq 6 hours ago [-]
Or the ‘slow CPUs’ are actually truly Fast Enough. Intel N100 and N150 systems are low-key amazing value.
bearjaws 2 hours ago [-]
I bought into the N100 hype train because of Reddit. Sure, it's an amazing value, but I hope anyone reading this isn't convinced it's remotely fast. I ended up going with a Minisforum Ryzen for about 2x the price and 4x the performance.
I was a bit bummed since I wanted to use it as a kind of streaming box, and while it can do it, it is definitely slow.
baq 2 hours ago [-]
I got a 16GB RAM 512GB SSD N100 minipc for less than $150 last month. Yes it could be faster (always true for any computer), but I feel I got way more than I paid for. Certainly a much better deal than an RPi 5.
PS. If you need more power than that I have a hard time coming up with a better deal than M4 Mac Mini, perhaps with some extra usb storage. It's possibly the only non-workstation computer that is worth buying in the not-inexpensive category (in the base version, obviously).
esperent 54 minutes ago [-]
> it could be faster (always true for any computer)
The question isn't whether it could be faster, but whether it's slow enough to be annoying for the tasks you use it for.
I bet most people would find the N100 annoyingly slow when opening documents or browsing the web, or doing crazy things like opening a few programs at the same time.
MarkusWandel 43 minutes ago [-]
About time! Every bit of compute performance increase I've had personally has been eaten up by software bloat. Admittedly, as software grows to use 10x as much compute, it does get better, by maybe 10%, but the explosive increase in compute performance up to, say, the mid-2010s mostly just served to encourage bloat and obsolete older devices.
My main machine is from about 2012 - a garage sale hand-me-down ultra high end game machine of the time (24GB in 2012!) It does suck a fair bit of power, running 24/7 but it's comfortably adequate for everything, and that's with two simultaneous users. Such hand-me-down laptops as have come my way (garage sales or free) are also adequate.
So if the continuing evolution is, instead, in compute per joule rather than absolute compute, I'm all for it. Graphics card power connectors melting at 570W... not for me.
RachelF 16 hours ago [-]
I wonder why? Some possibilities:
1. Extra silicon area being used by NPUs and TPUs instead of extra performance?
2. Passmark runs under Windows, which is probably using increasing overhead with newer versions?
3. Fixes for speculative execution vulnerabilities?
4. Most computers are now "fast enough" for the average user, no need to buy top end.
5. Intel's new CPUs are slower than the old ones
jamesy0ung 16 hours ago [-]
> 1. Extra silicon area being used by NPUs and TPUs instead of extra performance?
I'm not an expert in silicon design, but I recall reading that there's a limit to how much power can be concentrated in a single package due to power density constraints, and that's why they are adding fluff instead of more cores.
>2. Passmark runs under Windows, which is probably using increasing overhead with newer versions?
This is a huge problem as well. I have a 2017 15" MacBook Pro and it's a decent computer, except it is horribly sluggish doing anything on it, even opening Finder. That is with a fresh install of macOS 13.7.1. If I install 10.13, it is snappy. If I bootcamp with Windows 11, it's actually quite snappy as well and I get 2 more hours of battery life somehow despite Windows in bootcamp not being able to control (power down) the dGPU like macOS can. Unfortunately I hate Windows, and Linux is very dodgy on this Mac.
gaudystead 14 hours ago [-]
That is WILD that Windows 11 runs faster than the machine's native OS...
Could this suggest that modern Windows is, dare I say it, MORE resource efficient than modern macOS?! That feels like a ludicrous statement to type, but the results seem to suggest as much.
bboygravity 8 hours ago [-]
My first thought is that Apple is throttling performance of older machines on purpose (again). As they did with the phones.
Would explain why Windows runs faster.
iforgot22 14 hours ago [-]
Modern macOS might be optimized for Apple Silicon CPUs. But even when it was Intel only, there were probably times when Windows was lighter, albeit bad in other ways.
p_ing 11 hours ago [-]
It’s a microkernel OS, it’s going to be slower by its very nature. And on ARM it wastes memory due to its 16KiB pages.
ryao 10 hours ago [-]
Neither of them use microkernels. They are monolithic kernels with loadable module support ("hybrid" kernels). The cores of both Windows NT and XNU were originally microkernels, but then they put all of the servers into the kernel address space to make them into the monolithic kernels that they are today.
imglorp 12 hours ago [-]
The pessimistic viewpoint is the hardware vendor would not mind if you felt your machine was slow and were motivated to upgrade to the latest model every year. The fact that they also control the OS means they have means, motive and opportunity to slow older models if their shareholders demanded.
Nevermark 12 hours ago [-]
Or they simply keep supporting old models with new versions of the OS, even though newer software versions are optimized for, and contain new features enabled by, newer hardware improvements.
If your machines have more RAM, you can use a more RAM-intensive solution to speed many things up, or deliver a more computationally intensive but higher-quality solution, or simply add a previously challenging new feature.
What would be interesting is how fast old but high-spec'd models slow down: which slowdowns are from optimizing for newer architectures vs. optimizing with the expectation of more resources.
jamesy0ung 11 hours ago [-]
The problem seems to be CPU-bound; it's got 16GB of memory and memory pressure is low. The CPU is an i7-7820HQ, though I thought it was interesting that my iPhone XR (Apple A12) scores higher on a synthetic benchmark than my top-of-the-line MacBook Pro from the same time.
jamesy0ung 13 hours ago [-]
I'm just as surprised. Also, I was using Windows 11 LTSC 2024, not the standard version, which could impact the validity of my comparison.
taurknaut 12 hours ago [-]
This doesn't feel terribly surprising to me. macOS has always had impressively performant parts, but their upgrades generally lower responsiveness. On modern hardware it's less perceptible obviously, and they want to sell machines, not software. But the last iteration that felt like it prioritized performance and snappiness was Snow Leopard, now over fifteen years ago.
I will say the problem was a lot worse before the core explosion. It was easy for a single process to bring the entire system to a crawl. These days the computer is a lot better at adapting to variable loads.
I love my Macs, and I'm about 10x as productive as on a Windows machine after decades of daily usage (and probably a good 2x compared to Linux). Performance, however, is not a good reason to use macOS—it's the fact that the keybindings make sense and are coherent across the entire OS. You can use readline (Emacs) bindings in any text field across the OS. And generally speaking, problems have a single solution that's relatively easy to google for. Badly behaved apps aside (looking at you, Zoom and Adobe), administering a Mac is straightforward to reason about, and for the most part it's easy to ignore the App Store, download apps, and run them by double clicking.
I love Linux, but I will absolutely pay to not have to deal with X11 or Wayland for my day-to-day work. I also expect my employer to pay for this or manage my machine for me. I tried Linux full-time for a year on a ThinkPad and never want to go back. The only time that worked for me was working at Google, when someone else broadly managed my OS. Macs are the only Unix I've ever used that felt designed around making my life easier and allowing me to focus on the work I want to do. Linux has made great strides in the last two decades, but the two major changes, systemd and Wayland, both indicate that the community is solving different problems than the ones that will get me to use it as a desktop. Which is fine; I prefer the Mac style to the IBM PC style they're successfully replacing. Like KDE is very nice and usable, but it models the computer and documents and apps in a completely different way than I am used to or want to use.
jamesy0ung 11 hours ago [-]
I love Linux and run it on my servers, but on the desktop, it requires too much tinkering—I often spend more time troubleshooting than working. Laptops are especially problematic, with issues like Wi-Fi, sleep, and battery life. Windows is a mess—I hate the ads, bloat, and lack of a Unix-like environment. WSL2 works but feels like a hack. macOS, on the other hand, gives me full compatibility with Unix tools while also running Office and Adobe apps. Also Command+C for copy, Command+V for paste is much nicer than Control+Shift+C and Control+Shift+V. macOS does have absolutely terrible screen snapping though in comparison to Windows 11. The hardware is solid—great screen, trackpad, speakers, and battery life. I considered getting a high-end PC laptop (based on Rtings’ recommendations), but every option had compromises, either a terrible screen, terrible processor or terrible battery life. By the time I configured it to a non crappy config, it would’ve cost more than a MacBook Pro 16.
caleblloyd 10 hours ago [-]
> Laptops are especially problematic, with issues like Wi-Fi, sleep, and battery life.
100% true, but I love Linux as a daily driver for development. It is the same os+architecture as the servers I am deploying to! I have had to carefully select hardware to ensure things like WiFi work and that the screen resolution does not require fractional scaling. Mac is definitely superior hardware but I enjoy being able to perf test the app on its native OS and skip things like running VMs for docker.
amarant 10 hours ago [-]
Same! That's one less OS whose inner workings I have to remember!
I'm not sure I understand the whole tinkering thing. Whenever I tinker with my Linux, it's because I decided to try something tinkery, and usually mostly because I wanted to tinker.
Like trying out that fancy new tiling Wayland WM I heard about last week...
kelnos 7 hours ago [-]
I feel like the main times Linux requires tinkering are:
1. You're trying to run it on hardware that isn't well-supported. This is a bummer, but you can't just expect any random machine (especially a laptop) to run Linux well. If you're buying a new computer and expect to run Linux on it, do your research beforehand and make sure all the hardware in it is supported.
2. You've picked a distro that isn't boring. I run Debian testing, and then Debian stable for the first six months or so after testing is promoted to stable. (Testing is pretty stable on its own, but then once the current testing release turns into the current stable release, testing gets flooded with lots of package updates that might not be so stable, so I wait.) For people who want something even more boring, they can stick with Debian stable. If you really need a brand-new version of something that stable doesn't have, it might be in backports, or available as a Snap or Flatpak (I'm not a fan of either of these, but they're options).
3. You use a desktop environment that isn't boring. I'm super super biased here[0], but Xfce is boring and doesn't change all that much. I've been using it for more than 20 years and it still looks and behaves very similarly today as it did when I first started using it.
If you use well-supported hardware, and run a distro and desktop environment that's boring, you will generally have very few issues with Linux and rarely (if ever) have to tinker with it.
[0] Full disclosure: I maintain a couple Xfce core components.
porridgeraisin 6 hours ago [-]
First, thanks for Xfce. I'm a (tiny) donor.
Two kinds of linux tinkering often get aliased and cause confusion in conversations.
The first kind is the enthusiast changing their init system, bootloader, package manager, "neofetch/fastfetch", WM, and so on every few weeks.
The second kind is the guy who uses Xfce with a HiDPI display who has to google and try various combinations of xrandr (augmented with the xfwm zoom feature), GDK_SCALE, QT_SCALE_FACTOR, a theme that supports HiDPI in the titlebar, a few icons in the status tray not scaling up (wpa_gui), do all that and find out that apps that draw directly with OpenGL don't respect these settings, deal with multiple monitors, with plugging in HDMI and then unplugging messing up the audio profile and randomly muting Chromium browsers, and decide whether to use xf86-* or modesetting or whatever the fix is to get rid of screen tearing. Bluetooth/wifi. On my laptop, for example, I had to disable USB autosuspend lest the right-hand-side USB-A port stop working.
If our threshold for qualifying as well-supported hardware is "not even a little tinkering required", then we are left with vanishingly few laptops. For the vast, vast majority of laptops, at least the things I mentioned above are required. All in all, it amounted to a couple of kernel parameters, a pipewire config file to stop random static noise in the Bluetooth sink, and then a few Xfce settings-menu tweaks (WM window scaling and display scaling). So not that dramatic, but it is still annoying to deal with.
The second kind of tinkering is annoying, and is required regardless of distro/DE/WM choice, since it's a function of the layers below the DE/WM, mostly the kernel itself.
adrian_b 2 hours ago [-]
I think that for your example of HiDPI problems you might have had another desktop environment than XFCE in mind.
I have been using XFCE for more than 10 years almost exclusively with multiple HiDPI monitors.
After a Linux installation I never had to do anything else except going to XFCE/Settings/Appearance and setting a suitably high value for "Custom DPI Setting".
Besides that essential setting (which scales the fonts of all applications, except for a few Java programs written by morons), it may be desirable to set a more appropriate value in "Desktop/Icons/Icon size". Also in "Panel preferences" you may want to change the size of the taskbar and the size of its icons.
You may have to also choose a suitable desktop theme, but that is a thing that you may want to do after any installation, regardless of having HiDPI monitors.
flomo 9 hours ago [-]
I have a 2016 MBP-15 (sticky keyboard). I suspect Apple changed something so the fans no longer go into turbo vortex mode. Normally it isn't sluggish at all, but when it overheats, everything grinds to a halt.[1] (Presumably this is to keep the defective keyboard from melting again.[0]) Perhaps old OS/bootcamp still has the original fan profiles.
[0] Apple had an unpublicized extended warranty on these, and rebuilt the entire thing twice.
[1] kernel_task suddenly goes to 400%, Hot.app reports 24%. Very little leeway between low energy and almost dead.
torginus 9 hours ago [-]
I think it's that the measure of modern CPU performance - multithreaded performance - is worthless and has been worthless forever.
Most software engineers don't care to write multithreaded programs, as evidenced by the 2 most popular languages - JS and Python - having very little support for it.
And it's no wonder: even when engineers do know how to write such code, most IRL problems outside of benchmarks don't really lend themselves to multithreading, and because the parallelizable part is limited, IRL gains are limited.
The only performance that actually matters is single thread performance. I think users realized this and with manufacturing technology getting more expensive, companies are no longer keen on selling 16 core machines (of which the end user will likely never use more than 2-3 cores) just so they can win benchmark bragging rights.
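A quick Amdahl's-law sketch (with an assumed, not measured, parallelizable fraction) shows why a limited parallel portion caps the real-world gains from extra cores:

    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Upper bound on speedup when only part of the work can be parallelized."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # Assume 60% of a typical desktop workload parallelizes (an assumption for illustration).
    for cores in (2, 4, 8, 16):
        print(f"{cores:>2} cores -> at most {amdahl_speedup(0.6, cores):.2f}x speedup")
    # Even 16 cores are bounded at ~2.3x here, which is why single-thread speed still dominates.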
whilenot-dev 7 hours ago [-]
> The only performance that actually matters is single thread performance. I think users realized this and with manufacturing technology getting more expensive, companies are no longer keen on selling 16 core machines (of which the end user will likely never use more than 2-3 cores) just so they can win benchmark bragging rights.
How can you state something like this in all seriousness? One of the most-used software applications has to be the browser, and right now Firefox runs 107 threads on my machine with 3 tabs open. gnome-shell runs 22 threads, and all I'm doing is reading HN. It's 2025 and multicore matters.
torginus 7 hours ago [-]
Those threads don't necessarily exist for performance reasons - there can be many reasons one starts a thread, from processing UI events to IO completion, etc. I very much doubt Firefox has an easy time saturating your CPU with work outside of benchmarks.
foldr 5 hours ago [-]
If Firefox smears its CPU usage over multiple threads, that leaves more single-threaded performance on the table for other apps that may need it. So there could still be an effect on overall system performance.
windward 6 hours ago [-]
Well yeah - what CPU-bound task do you need to be performant? Beyond many tabs - which is embarrassingly parallel - it's all either GPU, network, or memory bound.
Firefox failing to saturate your CPU is a win-state.
adrian_b 2 hours ago [-]
The fact that multithreaded performance is worthless for you does not prove that this is true for most computer users.
During the last 20 years, i.e. during the time interval when my computers have been multi-core, their multithreaded performance has been much more important for professional uses than the single-threaded performance.
The performance with few active threads is mainly important for gamers and for the professional users who are forced by incompetent managers to use expensive proprietary applications that are licensed to be run only on a small number of cores (because the incompetent managers simultaneously avoid cheaper alternatives while not being willing to pay the license for using more CPU cores).
A decent single-threaded performance is necessary, because otherwise opening Web pages bloated by JS can feel too slow.
However, if the single-threaded performance varies by +/- 50% I do not care much. For most things where single-threaded performance matters, any reasonably recent CPU is able to do instantaneously what I am interested in.
On the other hand, where the execution time is determined strictly by the multithreaded performance, i.e. at compiling software projects or at running various engineering EDA/CAD applications, every percent of extra multithreaded performance may shorten the time until the results are ready, saving from minutes to hours or even days.
silverlinedjik 7 hours ago [-]
>multithreaded performance is worthless and has been worthless forever.
I have the very opposite opinion; single-threaded performance only matters up to the point where any given task isn't unusable. Multithreaded performance is crucial for keeping the system from grinding to a halt, because users always have multiple applications open at the same time. Five browser windows with 4-12 tabs each, on three different browsers, 2-4 Word instances, and some Electron(-equivalent) comms app is much less unusual than I'd like it to be. I have used laptops with only two cores and it gave a new meaning to slow when you tried doing absolutely anything other than waiting for your one application to do something. Only having one application open at a time was somewhat usable.
jart 7 hours ago [-]
Yes, but look at the chart in the article. Both multi-threaded and single-threaded performance are getting slower on laptops. With desktops, multi-threaded is getting slower and single-threaded is staying the same.
znpy 6 hours ago [-]
> The only performance that actually matters is single thread performance.
Strong disagree, particularly on laptops.
Having some Firefox thread compile/interpret/run some JavaScript 500 microseconds faster is not going to change my life very much.
Having four extra cores definitely will: it means I can keep more stuff open at the same time.
The pain is real, particularly on laptops: I've been searching for laptops with a high-end many-core CPU but without a dedicated GPU for years, and still haven't found anything decent.
I do run many virtual machines, containers, databases and stuff. The most "graphics-intensive" thing I run is the browser. Otherwise I spend most of my time in terminals and Emacs.
anal_reactor 7 hours ago [-]
Lots of problems can be nicely parallelized, but the cost of doing so usually isn't worth it, simply because the entity writing the software isn't the entity running it, so the software vendor can just say "get a better PC, I don't care". There was a period when having high requirements was a badge of honor for video games. When a company needs to pay for the computational power, suddenly all the code becomes multithreaded.
moffkalast 6 hours ago [-]
Superscalarity is largely pointless, yes, given that memory access between threads is almost always a pain, so two of them rarely process the same data and can't take advantage of a single core's cache at the same time. It doesn't even make much sense in concept.
But multicore performance does matter significantly unless you're on a microcontroller running only one process on your entire machine. Just Chrome launches a quarter million processes by itself.
dbtc 16 hours ago [-]
Programmers failing to manufacture sufficient inefficiency to force upgrade.
johnnyanmac 14 hours ago [-]
Can't force upgrade with money you don't have. For a non-enthusiast, even the lowest specs are more than good enough for light media consumption and the economy would affect what they invest in.
rpcope1 14 hours ago [-]
As an experiment, I've tried working with a Dell Wyse 5070 with the memory maxed out. Even for development work, outside of some egregious compile times for large projects, it actually worked ok with Debian and XFCE for everything including some video conferencing. Even if you had money for an upgrade, it's not clear it's really necessary outside of a few niche domains. I still use my maxed out Dell Precision T1700 daily and haven't really found a reason to upgrade.
brokenmachine 14 hours ago [-]
I very much doubt that is the cause.
bobthepanda 14 hours ago [-]
There's #4, which has a couple of different components:
* battery life is a lot higher than it used to be and a lot of devices have prioritized efficiency
* we haven't seen a ton of changes to the high end for CPUs. I struggle to think of what the killer application requiring more is right now; every consumer advancement is more GPU-focused
AnthonyMouse 15 hours ago [-]
COVID. People start working from home or otherwise spending more time on computers instead of going outside, so they buy higher-end computers. Lockdowns end, people start picking the lower-end models again. On multi-threaded tasks, the difference between more and fewer cores is larger than the incremental improvements in per-thread performance over a couple years, so replacing older high core count CPUs with newer lower core count CPUs slightly lowers the average.
Macha 13 hours ago [-]
I don't buy this really. COVID to now is less than 1 laptop replacement cycle for non-techie users who will usually use a laptop until it stops functioning, and I don't think the techie users would upgrade at all if their only option is worse.
AnthonyMouse 10 hours ago [-]
A lot of laptops stop functioning because somebody spills coffee in it or runs it over with their car. There are also many organizations that just replace computers on a 3-5 year schedule even if they still work.
And worse is multi-dimensional. If you shelled out for a high-end desktop in 2020 it could have 12 or 16 cores, but a new PC would have DDR5 instead of DDR4 and possibly more of it, a faster SSD, a CPU with faster single thread performance, and then you want that but can't justify 12+ cores this time so you get 6 or 8 and everything is better except the multi-thread performance, which is worse but only slightly.
The reticence to replace them also works against you. The person who spilled coffee in their machine can't justify replacing it with something that nice, so they get a downgrade. Everyone else still has a decent machine and keeps what they have. Then the person who spilled their drink is the only one changing the average.
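A toy illustration of that averaging effect, with invented scores (not PassMark data): one person replacing a 2020 high-core-count box with a newer lower-core-count machine pulls the multi-thread average down even though the new machine is faster per thread.

    # Invented multi-thread benchmark scores, for illustration only.
    fleet_2020    = [30000, 30000, 30000, 30000]   # four high-core-count 2020 desktops
    fleet_current = [30000, 30000, 30000, 19000]   # one owner downsized to a newer 8-core part

    average = lambda scores: sum(scores) / len(scores)
    print(f"before: {average(fleet_2020):.0f}, after: {average(fleet_current):.0f}")
    # The average drops (30000 -> 27250) even though nobody's machine got slower per thread.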
nubinetwork 9 hours ago [-]
> (increased overhead with newer versions of Windows)
> Fixes for speculative execution vulnerabilities?
I don't know if they'll keep doing it, but hardware unboxed had been doing various tests with Windows 10 vs 11, and mitigations on vs off, as well as things like SMT on vs off for things like AMD vcache issues, or P cores vs E cores on newer intels... it's interesting to see how hardware performs 6-12 months after release, because it can really go all over the place for seemingly no reason.
windward 6 hours ago [-]
4 and 1.b: extra silicon being used by GPUs. CPU performance isn't that important anymore.
I know, I know: all the software you use is slow and awful. But it's generally bad thanks to a failure to work around slow network and disk accesses. If you use spinning rust, you're a power user.
It's also a minority of video games that rely more on CPU performance than GPU and memory (usually for shared reasons in niche genres)
dehrmann 16 hours ago [-]
4 has been true for over a decade.
gotoeleven 16 hours ago [-]
Literally the only reason you need a computer less than 10 years old for day-to-day tasks is to keep up with the layers of JavaScript that are larded onto lots of websites. It's sad.
Edit: oh, and Electron apps. Sweet lord, shouldn't we have learned in the 90s with Java applets that you shouldn't use a garbage-collected language for user interfaces?
throwawee 3 hours ago [-]
I hate bloat and don't use Electron apps, but garbage collection is such a silly thing to pin it on. Tossing stuff like Lua and D in the same bin as the massive enterprise framework du jour is throwing the baby out with the bathwater.
pmontra 15 hours ago [-]
My laptop is 11 years old and handles Slack quite well, and Jira. I eventually maxed it out at 32 GB and that probably helps. It's only an i7-4xxx though.
Java in the 90s was really slow. It got much faster with the JIT compiler. JavaScript and browsers got many optimizations too. My laptop feels faster now than it was in 2014 (it has always run some version of Linux.)
The other problem with Java was the non-native widgets, which were subjectively worse than most of today's HTML/JS widgets, but that's a matter of taste.
iforgot22 14 hours ago [-]
I think even a blank Electron app is rather heavy because it's basically a whole web browser, which btw is written in C++. BUT my 2015 baseline MBP still feels fine.
trinix912 3 hours ago [-]
I have a 2005 Core 2 Duo desktop running Windows 7. The only things that make it unusable are today's Internet and Electron apps. Everything else (including Visual Studio 2019!) runs at a performance comparable to my 2020 ThinkPad with Core i7. I know it's anecdotal, I'm just saying, if we fixed the bloat on the web, most people could just keep running their 10-15 year old PCs (using Linux if you want).
nicoburns 10 hours ago [-]
For me the big improvement from upgrading was better responsiveness while on a video call (no small improvement in the modern world)
bitwize 14 hours ago [-]
I use a GC'd language for UI every day -- Emacs -- and it runs well even on potato-tier hardware. The first GUI with all the features we'd recognize was written in a GC'd language, Smalltalk.
GCs introduce some overhead but they alone are not responsible for the bloat in Electron. JavaScript bloat is a development skill issue arising from a combination of factors, including retaining low-skill front end devs, little regard for dependency bloat, ads, and management prioritizing features over addressing performance issues.
jart 7 hours ago [-]
It's not that they lack the skill. It's that they don't care.
If they do care then they probably depend on someone who doesn't.
ajmurmann 4 hours ago [-]
I hate the framing that developers are the people not caring. It's a ridiculous misrepresentation of the dynamics I've seen everywhere. Developers usually love optimizing stuff. However, they are under pressure to deliver functional changes. One could blame "the business" for not caring and not making time for shipping a quality product. However, these business decisions are made because that's what wins customers. Customers might complain that the software isn't more lightweight, but clearly their stated preference doesn't match their revealed preference.
iforgot22 14 hours ago [-]
There's also a lot of fast software written in Golang. GC has an undeserved bad reputation.
TuxSH 4 hours ago [-]
It's all nice and fast until the GC eats up all the CPU time under memory pressure (e.g. container memory limit). GC deserves its bad rep, even if GC implementations have gotten a lot better over time.
GC also disqualifies a language from ever being used for anything but userland programs.
anthk 1 hours ago [-]
Containers for Go software make no sense. You can cross-compile stuff from anywhere to anywhere. And the binaries are self-contained.
whatever1 15 hours ago [-]
What is wrong with garbage collection and UI?
wtallis 15 hours ago [-]
Latency spikes, probably. But most UIs shouldn't be churning through enough allocations to lead to noticeable GC pauses, and not all of today's GCs have major problems with pause times.
I don't think Electron's problems are that simple.
skydhash 14 hours ago [-]
From a non-expert point of view, it's about bringing in so much dead code. Like support for every OS audio subsystem in a graphics app, or the whole sandboxing thing while you're already accessing files and running scripts. Or even the table and float layout while everything is using flex. All those things need to be initialized and hooked up. At least Cordova (mobile) runs only on the system web engine, which was already slimmed down.
mook 10 hours ago [-]
I work on an Electron app, and we have a ticket open to investigate why the heck it's asking the OS for Bluetooth permissions even though we'd never use it. There are, of course, higher priority things to get to first, bugs that have larger effects. I'd love to be able to get to that one…
iforgot22 14 hours ago [-]
The renderer isn't going to pause on JS execution or GC.
grg0 15 hours ago [-]
[dead]
bentcorner 15 hours ago [-]
But has it been true for the type of person who runs CPU benchmarks?
kome 16 hours ago [-]
I use a 10-year-old MacBook Air and honestly I can do everything I need: statistics, light programming, browsing and photo editing. Perfect.
I'm waiting for it to break before moving on; changing for the sake of changing feels like waste.
skyyler 16 hours ago [-]
I use a 10-year-old MacBook Pro and the only things I dislike are the battery life and the heat generated.
I simply can't afford the replacement 16" MBP right now. Hopefully it lasts another couple of years.
lotsofpulp 11 hours ago [-]
Surely the new MacBook Airs offer better performance than a 10-year-old MacBook Pro. The M3 is very reasonable: 16GB/512GB for $1,100, or $1,300 for the 15-inch, I think. Prices could be going lower too, with the M4 MacBook Air purportedly releasing in March or April.
skyyler 51 minutes ago [-]
My ten year old computer has an SD card slot and an HDMI port.
I would like to replace it with something that has an SD card slot and an HDMI port, as I use those frequently and don't want to deal with adapter solutions.
trinix912 3 hours ago [-]
I just wish they upped the storage a bit. 512GB on an $1100 machine in 2025 just feels like a bad deal, especially if you already have a machine from 10 years ago that has the same amount.
omnimus 8 hours ago [-]
I have the cheapest Air with 8 gigs and it's just fine for anything I do if I don't open 80 tabs.
But for sure wait for the M4, as 16 gigs of RAM will be the base model. A coworker just got an M4 Mac Mini and I don't see why that machine wouldn't be enough for 90% of people.
Legend2440 14 hours ago [-]
You’re forgetting the most likely possibility: it’s an artifact of data collection or a bug in the benchmarking software.
p_ing 15 hours ago [-]
> 2. Passmark runs under Windows, which is probably using increasing overhead with newer versions?
Shouldn't be an issue. Foreground applications do get a priority boost; I don't know if Passmark increases its own priority, though. Provided there isn't any background indexing occurring, i.e. letting the OS idle after install.
NBJack 14 hours ago [-]
As someone obsessing over small performance gains for his processor recently, there is definitely overhead to consider. Note by default you get things now like Teams, Copilot, the revised search indexing system, Skype, etc. Passmark in many tests will max out all cores at once; this tends to make it sensitive to 'background' processes (even small ones).
SecretDreams 15 hours ago [-]
ARM?
chvid 10 hours ago [-]
People spending less on new computers.
Anotheroneagain 6 hours ago [-]
Maybe it's all the new Intel CPUs that failed?
giljabeab 6 hours ago [-]
It’s everyone jumping ship and switching from 14th gen to AMD
Anotheroneagain 4 hours ago [-]
I mean the CPUs were tested early on, but later they failed, and no longer raise the average. It should be visible in more detailed statistics if it is so.
bitwize 14 hours ago [-]
People are buying more HP Streams and fewer ASUS ROGs than in previous years?
anal_reactor 7 hours ago [-]
I think it's reason #4. I bought a PC 12 years ago, the only upgrade I did was buying an SSD, and now it's being used by my father, who's perfectly happy with it, because it's fast enough to check mail and run MS Office. My current gaming PC was bought 4 years ago, and it's still going strong; I can play most games on high settings in 4K (thank you, DLSS). I've noticed the same pattern with smartphones - my first smartphone turned into obsolete garbage within a year, while my current smartphone is an el cheapo bought 4 years ago and shows no signs of needing to be replaced.
OK, I lied, my phone is only two years old, but what happened is that two years ago my phone experienced a sudden drop test on concrete followed by a pressure test under a car tyre, and it was easier to buy a new one with nearly identical specs than to repair the old one.
bjourne 14 hours ago [-]
Could it be related to worldwide inflation and decreasing real wages? Salaries have not kept up, hence people can't afford hardware as powerful as they used to. It also seems that top-of-the-line hardware doesn't depreciate as fast as it used to. Many several-year-old GPUs are still expensive, for example.
Nah, I read that but I'm not convinced. The reply to them from palijer is correct: inaccurate does not mean lower, it just means inaccurate. Fewer samples means less data means more room for weird sampling errors skewing the data. There's nothing inherent in that that suggests we ought to assume an upward skew.
The best we can say is that the sampling skew this month is different than the sampling skew in past Januaries. That could be due to some interesting cause, it could be totally random, and there's zero point in speculating about it until we have more data.
epolanski 13 hours ago [-]
That would not change the expectation of having faster hardware with time at the same/lower price.
ekianjo 14 hours ago [-]
> Many several-year-old GPUs are still expensive,
They are propped up by demand and the fact that most of the new GPUs are marginally better than previous ones.
ttt3ts 13 hours ago [-]
New GPUs are quite a bit better than previous ones, but perf per dollar has been flat for a while now.
Also, if you're talking gaming GPUs, old ones work fine given there hasn't been a new PlayStation or Xbox in many years. The minimum spec for many games is 5-year-old tech.
Rury 12 hours ago [-]
Sure they are better... but once you factor in die size, core count, wattage and so on, the improvements being made are less impressive than they first seem.
teaearlgraycold 13 hours ago [-]
For gaming GPUs the 5090 is something like 20-50% better than the 4090 in raster performance.
m4rtink 11 hours ago [-]
Apparently also 50% more likely to make things catch on fire! ;-)
blangk 12 hours ago [-]
Yep. The whole fake frames thing seems like a witch hunt to me. If you can actually get a 5090 and were previously fully utilising a 3090 or 4090, it is a significant upgrade, as long as it doesn't burn your house down.
Dylan16807 6 hours ago [-]
In situations where the budget matters, you need to take that 20-50% better performance and drop it by the 25% higher launch price.
And the generated frames are misleading because they don't work very well if your source framerate is low.
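Back-of-the-envelope with the figures quoted in this subthread (20-50% more raster performance at a roughly 25% higher launch price; treat the exact numbers as assumptions):

    # Figures quoted above: 20-50% better raster performance, ~25% higher launch price.
    price_increase = 0.25

    for perf_gain in (0.20, 0.50):
        value_change = (1 + perf_gain) / (1 + price_increase) - 1
        print(f"+{perf_gain:.0%} perf at +{price_increase:.0%} price -> "
              f"{value_change:+.0%} perf per dollar")
    # Roughly -4% to +20% perf per dollar: a much less dramatic upgrade than the headline number.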
keyringlight 4 hours ago [-]
The generated-frames thing is interesting, as it's an obvious example of the whole discussion around halo products (the x080 or x090 cards) and "the rich get richer": it takes up a lot of air in the room but is only appropriate for a small, vocal proportion of the market.
As you say, you need a good framerate to start with before generated frames make sense, so either you're already running well with a high-tier card and are further able to show off the combination of framerate/resolution/detail level, or you're running a lower-tier card at lower settings, turning the graphics down further, which can be a very obvious trade-off. Less demanding games, which are generally online competitive, usually wouldn't do as well with any extra latency introduced, and the situation where I've heard it would be a good fit is emulators or any kind of game where the CPU is the limiting factor.
7speter 10 hours ago [-]
I just don't think the 5090 is for gamers; it's for hobbyist AI people who want to buy multiple to run high parameter LLMs. Fake frames are irrelevant to this crowd, even if they take one of their 5090s aside to game once in a while.
Given the strong prior pattern, I forecast this is due to
(a) changes in methodology/non-random sampling, with 60% probability,
(b) random sampling effects that go away on longer timelines, with 20% probability,
(c) any actual change in CPUs, with 20% probability.
In my experience, when a metric shows weird behaviour, these are usually the reasons, in roughly those proportions.
dusted 7 hours ago [-]
Interestingly, so did server CPU thread performance. I guess we're now starting to see more direct prioritization of core count versus core performance, which I kind of understand: it's superficially simpler to think about horizontal scaling, and for very many workloads it's ideal, and for many more it's at least doable, though I fear the invisible overhead in both compute and complexity.
mikesabbagh 50 minutes ago [-]
Most new software runs in the cloud. You only need the browser for many tasks. I even code in the cloud. For many people, why pay for something you don't need?
4gotunameagain 48 minutes ago [-]
Because people with this point of view infest us with Electron apps which need 100,000x the processing power that took men to the Moon in order to run a chat app.
aboardRat4 11 hours ago [-]
My experience with recent hardware is that it is very poorly designed, not as "computational hardware", but as a "consumer product".
Buying anything new has been out of the question for me for a few years now, because it just cannot be guaranteed to survive half a year, hence I now always buy second-hand, because this way at least it was tested by the previous owner.
But even so, modern hardware is hellishly unreliable.
I bought a Dell 7 series laptop (17 inches), and:
1. Already had to replace the battery back panel three times, just because it is being held by tiny plastic hooks which are easily torn away.
2. Its disassembly-reassembly process is a nightmare (which used to not be the case for Dell in the past).
3. Already had to replace the fans (the laptop is not even 5 years old).
but bear with me, the rest is more fun:
4. When running with a smaller battery (official), it CANNOT RUN THE GPU AT FULL SPEED EVEN WHEN PLUGGED INTO THE MAINS. Really? This is just ridiculous. A laptop using the battery when plugged in is just insane.
5. You MUST use a smaller battery to install a 2.5" SATA drive. So you must choose: either a SATA drive or a GPU.
6. The power on/off button does not work if your BIOS battery has low charge. This is just maddening! What is the connection?
Maybe it's just me and one single laptop?
Well, in my experience, everything is getting similarly fragile.
My phone has its thermal sensor installed ON THE BATTERY, so if you replace the battery, you have to replace the sensor as well, which is, oh, well, a harder thing to manufacture than a battery and on many non-official batteries always returns 0 (0 Kelvin that is, -273 Centigrade).
The amount of cacti you have to swallow to get used to modern hardware is just staggering.
zeroq 12 hours ago [-]
Some 15 years ago I was playing DOTA on what you'd call today a gaming laptop (HP HDX9200) which would occasionally overheat, causing a BSOD, and every time it happened it was a hail mary, because the boot time took around 5 minutes and that was exactly long enough for the servers to flag me as a leaver and ban me from future games.
These days I have two main machines which boot to Windows in about a minute.
Somehow I remember my old 486 machine was able to boot to DOS within the snap of a power button.
swiftcoder 4 hours ago [-]
Do these graphs not include Macs at all? The text would suggest this to be true, but PassMark has existed in at least some form on Macs for a long time.
layer8 13 hours ago [-]
The chart also shows that single-thread performance hasn’t improved that much in the past twelve or so years, compared to Moore’s law. And this is compounded by Wirth’s law [0].
It's always interesting when I install Linux on a machine, and it can fit all running processes on a single page in top.
altairprime 13 hours ago [-]
It’s remarkable how carefully the page is worded to not incorporate Apple’s desktop and mobile CPU performance changes over time - but they screwed up because they said Linux, which doesn’t exclude Apple Silicon like it used to thanks to Asahi.
talldayo 11 hours ago [-]
Ultimately Apple has hit the same dead end too. Their designs and TSMC's processes aren't putting Moore's law back on the map, you can see it in the year-over-year performance. If tariffs end up hitting any harder, Apple can't just go to TSMC's Arizona plant and get faster silicon than what Taiwan has today.
> they screwed up because they said Linux, which doesn’t exclude Apple Silicon
In the context of the second graph, that means Apple Silicon is also failing to push the envelope forwards.
nxpnsv 15 hours ago [-]
Arguably a majority of improvements in 2025 haven’t happened yet, seems premature to conclude anything in February.
Osiris 16 hours ago [-]
Looking at the top-end chart, which flatlined, it seems that the biggest contributor would be a slower release cycle for the most performant chips. It looks like the scores of the top-end chips were dragging up the average. With the top spot not changing, the average is falling.
I'm curious what median and std dev look like.
kqr 9 hours ago [-]
I am somewhat surprised this has not happened earlier! In my circles there was a period, around when the Eee PC and MacBook Air were new and manufacturers started cranking out ultra-low-power laptops, when I felt we regressed in performance, but I suppose the majority of laptops sold were corporate things that didn't follow trends as closely.
esskay 5 hours ago [-]
Not sure why this would be surprising.
Even if we assume there's a bit of a skew because a recycler has run a bunch of benchmarks on old hardware, it's something that felt pretty inevitable.
Average users don't need any more power than they have right now for their common activities (browsing and such), so they haven't been upgrading.
There's been a large increase in low-power machines, be it the N100s, portables such as the Steam Deck, etc.
Until the next 'big thing' there's not going to be much of a need or desire to upgrade; you're not gaining anything at this point if your activities aren't CPU constrained.
friendzis 5 hours ago [-]
> Average users don't need any more power than they have right now for their common activities (browsing and such), so they haven't been upgrading.
Even on what could easily be called a "beefy" machine, major websites in Chrome make snails look like avatars of agility.
cluckindan 16 hours ago [-]
Hyperthreading and speculative execution are going out of fashion?
Now if we could just get rid of fake frames on the GPU side.
johnnyanmac 14 hours ago [-]
It just got introduced and NVDA needs to inflate its stock more.
I'd suspect some sort of economic factor, or sampling … e.g. just people with lower specs submitting and bringing down the average
utopcell 12 hours ago [-]
Well, Intel abandoned hyperthreading with Lunar Lake, so it's definitely going out of fashion.
esseph 4 hours ago [-]
For Intel, yes.
42lux 4 hours ago [-]
That happens every year after Christmas, when old PCs hit the second-hand market…
Max-q 7 hours ago [-]
Is the steep climb in 2021 due to Apple Silicon entering the scene? Or maybe not, since servers seem to follow the same pattern. It's a remarkable increase in performance in a short time.
kelvinjps10 14 hours ago [-]
I'm using a thinkpad t480 and it runs everything fine
nosioptar 12 hours ago [-]
My favorite computer is my T530. It's showing its age a bit, but the newer ThinkPads I've used all feel like a step down quality-wise.
I'll probably use the t530 as my primary machine for years to come.
noizejoy 16 hours ago [-]
Price increases due to multiple factors from higher interest rates to trade wars, etc. are tempering the financial enthusiasm of customers to buy at the higher end, maybe?
curvaturearth 5 hours ago [-]
Lots of laptops marketed as AI-powered but with lower CPU speeds and less total RAM?
vladms 16 hours ago [-]
People got smarter and buy only what they need?
zabzonk 12 hours ago [-]
Could not get to the page.
But, I dunno. I just bought an Asus Zenbook with an Intel Core Ultra 9 with Arc graphics, 32 GB RAM, and a 1 TB SSD, and it seems pretty nifty to me, especially at the price.
dietr1ch 12 hours ago [-]
so 9 is fast? The rest of the specs are not CPU.
My i7-13700H is shitting the bed due to poor software. It's permanently on power-saving while charging because of "rapid charging". It'd be OK if that didn't render the whole computer useless at 400 MHz; my first computer, from circa 1996, is the only one I've ever had clocked slower than that, and it had a turbo button.
zabzonk 12 hours ago [-]
> The rest of the specs are not CPU
Fair enuf. I was really trying to comment on the ultra/arc chip performance.
> so 9 is fast?
Seems to be - my previous laptop was a Core i7, and my new Ultra 9 one is definitely faster (for what I do with it). It goes into power saving only when it gets below about 30% of battery charge, and Asus gives a lot of switches to fiddle about with that.
reacweb 8 hours ago [-]
A linear scale is a poor fit for a performance graph. Can we switch to a logarithmic scale?
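For what it's worth, that is usually a one-line change in plotting tools. A minimal matplotlib sketch with made-up numbers (assumed ~25% growth per year), just to show that constant percentage growth turns into a straight line on a log axis:

    import matplotlib.pyplot as plt

    # Made-up yearly average scores, assuming ~25% growth per year.
    years = list(range(2010, 2026))
    score = [1000 * 1.25 ** (y - 2010) for y in years]

    fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(8, 3))
    ax_lin.plot(years, score)
    ax_lin.set_title("linear scale")
    ax_log.plot(years, score)
    ax_log.set_yscale("log")   # constant % growth becomes a straight line
    ax_log.set_title("log scale")
    plt.tight_layout()
    plt.show()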
markhahn 12 hours ago [-]
average reflects the population, of course.
max graph shows no such fall - laptops going up even.
how about this interpretation: desktops are fast enough; laptops are getting faster but people tend to buy more usable laptops, not faster ones.
varispeed 15 hours ago [-]
A friend of mine bought a Zephyrus G16 with an Intel 185H and 32 GB RAM. He thought it would help him with his studies, and he chose it because some software he has to use is only available for Windows. He called me to have a look, as the laptop has been sluggish for him.
This thing can barely handle Office, Teams and the browser. Hot, noisy as hell, and performance-wise I see no difference from laptops of a decade ago. Tragic.
To be fair, I don't think I could do anything. Task Manager showed the CPU as under-utilised, yet the fans were blasting and editing a Word document looked like a slideshow.
ASUS tools showed no problems.
I don't know, feels like people are getting scammed with these new laptops.
I still have M1 Max and never experienced anything like this.
magicalhippo 14 hours ago [-]
Had something similar with my work laptop today. Lenovo X1 Carbon, couple of years old. Got reinstalled very recently.
Been fine, but suddenly it was slow af. Near zero CPU, GPU and disk usage in Windows Task Manager, but I could feel it was burning hot, which it wasn't just 15 minutes prior.
Did a reboot and all was fine again.
Surely some firmware that messed up, though no idea why.
Anyway, I'd start by removing crap, that goes for 3rd party anti-virus and vendor tools especially. Use something like Bulk Crap Uninstaller[1] or Revo[2], and then reinstall drivers.
Totally agreed on the sad state of laptops these days.
Windows is a heavy operating system compared to the others and can contribute to that problem, but it's likely not Windows alone (bloatware and third-party AV solutions could also have something to do with it).
varispeed 3 hours ago [-]
1TB NVMe and 4060 GPU
iforgot22 14 hours ago [-]
Sounds a lot like a dedicated GPU is running and heat-throttling the whole thing, since you don't see CPU usage.
I wanted to say gaming laptops are a scam, but the older Intel MBPs with dedicated GPUs suffered too. More trouble than they're worth.
varispeed 3 hours ago [-]
One thing I noticed is that Google Drive was using 15% of the Arc GPU (the laptop also has an NVIDIA 4060). When I shut down that process it cooled down a bit after a while, but the system is still sluggish and the fans run all the time, even after I switched it to "Silent" mode.
truekonrads 15 hours ago [-]
Uninstall all security except for Windows defender and see how it feels.
speed_spread 13 hours ago [-]
It's an Intel powered gaming laptop with discrete GPU running Windows. Of course it's gonna heat like crazy. But that box can also do things an M1 can't do. Like play games... And heat up a small room.
varispeed 3 hours ago [-]
The discrete GPU is barely used, so it shouldn't be getting hot.
speed_spread 1 hours ago [-]
Yeah, that's the Intel part. They do that. AMD would have been much better although still not M1 cool.
walrus01 14 hours ago [-]
I have a theory that cpu performance in laptops plateauing is something we'll see more of in the future, as manufacturers further optimize for battery life and being very thin. 15 years ago I wanted a very powerful MacBook pro. Now I'm fine with a MacBook air and if I need to do something that requires heavy lifting, I have remote access (ssh, vnc over ssh, etc) into a loud, beefy hypervisor located somewhere else with 512GB of RAM in it.
On the consumer level and for non-technical end users, as more functions are offloaded to "the cloud" with people's subscriptions to Office 365, Google Workspace, iCloud, whatever, having a ton of CPU power in the laptop also isn't as necessary anymore. The same goes for things like H.265 and AV1 video encode and decode being done with GPU assist rather than purely in software.
jiggawatts 11 hours ago [-]
Even in data centres, I'm starting to see performance regressions: I just benchmarked the recently released Azure Ddsv6/Edsv6 series of virtual machines, and in several metrics they're slower than the Dadsv5/Eadsv5 series that are more than three years older!
IT hardware performance has definitely been levelling off. The exponential curves have become logarithmic.
mook 9 hours ago [-]
If I'm reading the model correctly, that's comparing Intel machines to AMD machines. I'd imagine it makes sense that there are workloads that AMD does better than Intel (and hopefully vice versa). How do Dadsv6 compare?
jiggawatts 4 hours ago [-]
Azure doesn't have Dadsv6 in Australia yet. Any year now... any year.
Fundamentally, they're both x86 CPUs, both "high end server chips", both in nearly identical (maxed-out) configurations, etc...
I just got used to computers getting so much faster over a three-year time period that it was almost pointless to benchmark them. The vendor hardly mattered, not within such a narrow market segment.
jart 7 hours ago [-]
Sigmoid. That's the word you want.
hulitu 8 hours ago [-]
> The average CPU performance of PCs and notebooks fell for the first time
Well, a lot of HN users care more about battery life than performance, so it shouldn't be any issue. /s
After computers dumbed down (Android, iOS, Windows), we now have computers that do nothing. /s
FollowingTheDao 16 hours ago [-]
Moore's law crushed by capitalism? Does anyone else have a better explanation?
Gigachad 16 hours ago [-]
The most likely explanation is that we are comparing one month's worth of data for 2025 to the full year's worth in previous years. The laptops people will have over 2025 haven't been purchased yet. I'd expect the chart to be basically flat for the year to date, but a tiny drop isn't particularly surprising either.
derkster 12 hours ago [-]
Intel was spending upwards of 15% a quarter on stock buybacks while also being stalled out on 10nm, with the yield improvements "right around the corner" every quarter for at least 18 months. I don't blame capitalism, but I do blame the lack of regulation regarding stock buybacks, especially when talking about the United States' only homegrown fab.
dgfitz 16 hours ago [-]
I am super curious, if you'll indulge me. What kind of economic policy would have let Moore's law continue as it had been? Conversely, do you think Moore's law would have been faster than 18 months if… someone else had identified it? Slower? Would it exist at all?
veltas 8 hours ago [-]
Capitalism started Moore's law.
ZiiS 16 hours ago [-]
:shrug: CPU performance is a rounding error on your GPU and RAM bottlenecks.
Osiris 16 hours ago [-]
The GPU being a bottleneck over the CPU is only in very specific scenarios.
Can you be more specific about what message you're trying to convey?
ZiiS 8 hours ago [-]
In the 90s and 00s, paying more for CPU meant I could work faster. I was extremely grateful for each breakthrough.
Even in the 10s this wasn't really true. Storage and bandwidth took a bigger slice, and CPU was in abundance.
We are now at the point where I could easily serve the DB and app servers of a multi-million SaaS off my mobile phone.
The few remaining areas where compute is still the bottleneck at all (video encoding, 3D rendering, mining, and predominantly now AI) also tend not to focus on the CPU.
[1] https://www.pcbenchmarks.net/displays.html
[2] https://www.pcbenchmarks.net/number-of-cpu-cores.html
[3] https://www.memorybenchmark.net/amount-of-ram-installed.html
[from 3dcenter.org : https://www.3dcenter.org/news/news-des-12-februar-2025-0 [German]]
There is no reason for computers to get slower over time. Especially on linux.
I wish I could explain what is going on, but it is physically painful to use, like you are remote-desktopping over an Iridium satellite.
But what matters is the software you're using, and that will be modern software (mostly). Modern browsers visiting JS-heavy websites, or a heavier office suite than 15 years ago.
And if this is the context, then computers must get slower over time, right?
https://en.wikipedia.org/wiki/Wirth's_law
Next time you're doing something that isn't a professional STEM job, see how far you can get through your day without adding or multiplying.
Unless you're totting up your score in a board game or something of that ilk, you'll be amazed at how accommodating our society is to people who can't add or multiply.
Sure, when you're in the supermarket you can add up your purchases as you shop, if you want to. But if you don't want to, you can just buy about the same things every week for about the same price. Or keep a rough total of the big items. Or you can put things back once you see the total at the till. Or you can 'click and collect' to know exactly what the bill will be.
You don't see mainstream discussion of variance because 90% of the population don't know WTF a variance is.
Worst are the ones where the number is the number of times some random event happened. In that case there's a decent chance the difference is less than twice the square root, in which case, assuming a Poisson distribution, you know the difference is insignificant.
Few people bothered to look at the probability distributions the forecasters published which showed decent probabilities for landslide wins in either direction.
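The "twice the square root" rule of thumb above is easy to make concrete: under a Poisson model each count has variance roughly equal to its mean, so the difference of two counts has a standard deviation of about sqrt(a + b). A rough sketch:

    from math import sqrt

    def looks_significant(a: int, b: int) -> bool:
        # Poisson counts: Var(a) ~ a, Var(b) ~ b, so sd(a - b) ~ sqrt(a + b).
        # A gap under ~2 standard deviations is unremarkable noise.
        return abs(a - b) > 2 * sqrt(a + b)

    print(looks_significant(110, 90))   # False: |20| < 2*sqrt(200) ~ 28.3
    print(looks_significant(150, 90))   # True:  |60| > 2*sqrt(240) ~ 31.0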
January 2023: https://web.archive.org/web/20230130185431/https://www.cpube...
Still, both previous years were notably up after the first month already. This is different and perhaps notable regardless. Either the release schedule is skewed forward or the hardware is genuinely stagnating. Or perhaps the benchmark is hitting some other bottleneck unrelated to the CPU.
They have more than twice as many samples on their Feb 10 update this year (47810) as they did last year on Feb 14 (22761). They have shown some growth year on year in sample size, but nowhere near doubling.
That suggests this month has been an outlier in having a strangely large number of samples already, which could all be related—maybe their software started being bundled with a specific OEM or was featured on a popular content creator or whatever.
As a sibling notes, less accurate just means less accurate, not necessarily skewed upward. There simply is less data to draw conclusions on than there will be later, so any medium-sized effect that adds 20k extra samples will have a larger effect now than later.
Just because the error isn't presented the same here doesn't mean it's error free...
Whatever pays the bill I suppose.
It doesn’t have to be literally India, it’s an example for illustration.
I was a bit bummed since I wanted to use it as a kind of streaming box, and while it can do it, it is definitely slow.
PS. If you need more power than that I have a hard time coming up with a better deal than M4 Mac Mini, perhaps with some extra usb storage. It's possibly the only non-workstation computer that is worth buying in the not-inexpensive category (in the base version, obviously).
The question isn't whether it could be faster, but whether it's slow enough to be annoying for the tasks you use it for.
I bet most people would find the N100 annoyingly slow when opening documents or browsing the web, or doing crazy things like opening a few programs at the same time.
My main machine is from about 2012 - a garage sale hand-me-down ultra high end game machine of the time (24GB in 2012!) It does suck a fair bit of power, running 24/7 but it's comfortably adequate for everything, and that's with two simultaneous users. Such hand-me-down laptops as have come my way (garage sales or free) are also adequate.
So if the continuing evolution is, instead, in compute per joule rather than absolute compute, I'm all for it. Graphics card power connectors melting at 570W... not for me.
1. Extra silicon area being used by NPUs and TPUs instead of extra performance?
2. Passmark runs under Windows, which is probably using increasing overhead with newer versions?
3. Fixes for speculative execution vulnerabilities?
4. Most computers are now "fast enough" for the average user, no need to buy top end.
5. Intel's new CPUs are slower than the old ones
I'm not an expert in silicon design, but I recall reading that there's a limit to how much power can be concentrated in a single package due to power density constraints, and that's why they are adding fluff instead of more cores.
>2. Passmark runs under Windows, which is probably using increasing overhead with newer versions?
This is a huge problem as well. I have a 2017 15" MacBook Pro and it's a decent computer, except it is horribly sluggish doing anything on it, even opening Finder. That is with a fresh install of macOS 13.7.1. If I install 10.13, it is snappy. If I bootcamp with Windows 11, it's actually quite snappy as well and I get 2 more hours of battery life somehow despite Windows in bootcamp not being able to control (power down) the dGPU like macOS can. Unfortunately I hate Windows, and Linux is very dodgy on this Mac.
Could this suggest that modern Windows is, dare I say it, MORE resource efficient than modern macOS?! That feels like a ludicrous statement to type, but the results seem to suggest as much.
Would explain why Windows runs faster.
If your machines have more RAM, you can use a more RAM-intensive solution to speed many things up, deliver a more computationally intensive but higher-quality solution, or simply add a previously challenging feature.
What would be interesting is how fast old but highly-specced models slow down: which slowdowns come from optimizing for newer architectures vs. optimizing with the expectation of more resources.
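One minimal sketch of the RAM-for-speed trade described above is plain caching: a memoized function spends memory to avoid recomputing results, which is only viable once the extra RAM can be assumed to be there.

    from functools import lru_cache
    import time

    @lru_cache(maxsize=None)   # unbounded cache: spends RAM to save CPU time
    def expensive(n: int) -> int:
        time.sleep(0.01)       # stand-in for real work
        return n * n

    start = time.perf_counter()
    for _ in range(100):
        expensive(42)          # computed once, then served from memory
    print(f"{time.perf_counter() - start:.3f}s")   # ~0.01s instead of ~1s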
I will say the problem was a lot worse before the core explosion. It was easy for a single process to bring the entire system to a drag. These days the computer is a lot better at adapting to variable loads.
I love my macs, and I'm about 10x as productive as on a windows machine after decades of daily usage (and probably a good 2x compared to linux). Performance, however, is not a good reason to use macos—it's the fact that the keybindings make sense and are coherent across an entire OS. You can use readline (emacs) bindings in any text field across the OS. And generally speaking, problems have a single solution that's relatively easy to google for. Bad-behaved apps aside (looking at you zoom and adobe) administering a mac is straightforward to reason about, and for the most part it's easy to ignore the app store, download apps, and run them by double clicking.
I love linux, but I will absolutely pay to not have to deal with X11 or Wayland for my day-to-day work. I also expect my employer to pay for this or manage my machine for me. I tried linux full-time for a year on a thinkpad and never want to go back. The only time that worked for me was working at google when someone else broadly managed my OS. macs the only unix I've ever used that felt designed around making my life easier and allowing me to focus on the work I want to do. Linux has made great strides in the last two decades but the two major changes, systemd and wayland, both indicate that the community is solving different problems than will get me to use it as a desktop. Which is fine; I prefer the mac-style to the ibm pc-style they're successfully replacing. Like KDE is very nice and usable, but it models the computer and documents and apps in a completely different way than I am used to or want to use.
100% true, but I love Linux as a daily driver for development. It is the same os+architecture as the servers I am deploying to! I have had to carefully select hardware to ensure things like WiFi work and that the screen resolution does not require fractional scaling. Mac is definitely superior hardware but I enjoy being able to perf test the app on its native OS and skip things like running VMs for docker.
I'm not sure I understand the whole tinkering thing. Whenever I tinker with my Linux, it's because I decided to try something tinkery, and usually mostly because I wanted to tinker.
Like trying out that fancy new tiling Wayland WM I heard about last week...
1. You're trying to run it on hardware that isn't well-supported. This is a bummer, but you can't just expect any random machine (especially a laptop) to run Linux well. If you're buying a new computer and expect to run Linux on it, do your research beforehand and make sure all the hardware in it is supported.
2. You've picked a distro that isn't boring. I run Debian testing, and then Debian stable for the first six months or so after testing is promoted to stable. (Testing is pretty stable on its own, but then once the current testing release turns into the current stable release, testing gets flooded with lots of package updates that might not be so stable, so I wait.) For people who want something even more boring, they can stick with Debian stable. If you really need a brand-new version of something that stable doesn't have, it might be in backports, or available as a Snap or Flatpak (I'm not a fan of either of these, but they're options).
3. You use a desktop environment that isn't boring. I'm super super biased here[0], but Xfce is boring and doesn't change all that much. I've been using it for more than 20 years and it still looks and behaves very similarly today as it did when I first started using it.
If you use well-supported hardware, and run a distro and desktop environment that's boring, you will generally have very few issues with Linux and rarely (if ever) have to tinker with it.
[0] Full disclosure: I maintain a couple Xfce core components.
Two kinds of linux tinkering often get aliased and cause confusion in conversations.
The first kind is the enthusiast changing their init system bootloader and package manager and "neofetch/fastfetch" and WM and... every few weeks.
The second kind is the guy who uses Xfce with a HiDPI display, who has to google and try various combinations of xrandr (augmented with the xfwm zoom feature), GDK_SCALE, QT_SCALE_FACTOR, and a theme that supports HiDPI in the titlebar; deal with a few icons in the status tray not scaling up (wpa_gui); do all that and find out that apps that draw directly with OpenGL don't respect these settings; deal with multiple monitors, and with plugging in HDMI and then unplugging it messing up the audio profile and randomly muting Chromium browsers; and decide whether to use xf86-* or modesetting or whatever the fix is to get rid of screen tearing. Bluetooth/wifi. On my laptop, for example, I had to disable USB autosuspend lest the right-hand USB-A port stop working.
If our threshold for qualifying hardware as well-supported is "not even a little tinkering required", then we are left with vanishingly few laptops. For the vast, vast majority of laptops, at least the things I mentioned above are required. All in all, it amounted to a couple of kernel parameters, a pipewire config file to stop random static noise in the bluetooth sink, and then a few Xfce settings-menu tweaks (WM window scaling and display scaling). So not that dramatic, but it is still annoying to deal with.
The 2nd kind of tinkering is annoying, and is required regardless of distro/de/wm choice since it's a function of the layers below the de/wm, mostly the kernel itself.
I have been using XFCE for more than 10 years almost exclusively with multiple HiDPI monitors.
After Linux installation I never had to do anything else except of going to XFCE/Settings/Appearance and set a suitably high value for "Custom DPI Setting".
Besides that essential setting (which scales the fonts of all applications, except for a few Java programs written by morons), it may be desirable to set a more appropriate value in "Desktop/Icons/Icon size". Also in "Panel preferences" you may want to change the size of the taskbar and the size of its icons.
You may have to also choose a suitable desktop theme, but that is a thing that you may want to do after any installation, regardless of having HiDPI monitors.
[0] Apple had an unpublicized extended warranty on these, and rebuilt the entire thing twice.
[1] kernel_task suddenly goes to 400%, Hot.app reports 24%. Very little leeway between low energy and almost dead.
Most software engineers don't care to write multithreaded programs, as evidenced by the two most popular languages, JS and Python, having very little support for it.
And it's no wonder: even when engineers do know how to write such code, most real-world problems outside of benchmarks don't really lend themselves to multithreading, and because the parallelizable part is limited, real-world gains are limited.
The only performance that actually matters is single thread performance. I think users realized this and with manufacturing technology getting more expensive, companies are no longer keen on selling 16 core machines (of which the end user will likely never use more than 2-3 cores) just so they can win benchmark bragging rights.
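The "limited parallelizable part" point is Amdahl's law; a quick sketch of why extra cores stop paying off once the serial fraction dominates:

    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Upper bound on speedup when only part of the work parallelizes."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # A workload that is only 50% parallelizable tops out below 2x,
    # even on 16 cores; at 95% it still only reaches ~9x.
    print(amdahl_speedup(0.50, 16))   # ~1.88
    print(amdahl_speedup(0.95, 16))   # ~9.14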
How can you state something like this in all seriousness? One of the most-used software applications has to be the browser, and right now Firefox runs 107 threads on my machine with 3 tabs open. gnome-shell runs 22 threads, and all I'm doing is reading HN. It's 2025 and multicore matters.
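For anyone who wants to eyeball the same thing on their own machine, here is a small sketch, assuming the third-party psutil package is installed:

    import psutil  # third-party: pip install psutil

    # Rough per-process-name thread totals.
    counts = {}
    for p in psutil.process_iter(["name", "num_threads"]):
        name = p.info["name"] or "?"
        counts[name] = counts.get(name, 0) + (p.info["num_threads"] or 0)

    # Top ten process names by total thread count.
    for name, threads in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{threads:5d}  {name}")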
Firefox failing to saturate your CPU is a win-state.
During the last 20 years, i.e. during the time interval when my computers have been multi-core, their multithreaded performance has been much more important for professional uses than the single-threaded performance.
The performance with few active threads is mainly important for gamers and for the professional users who are forced by incompetent managers to use expensive proprietary applications that are licensed to be run only on a small number of cores (because the incompetent managers simultaneously avoid cheaper alternatives while not being willing to pay the license for using more CPU cores).
A decent single-threaded performance is necessary, because otherwise opening Web pages bloated by JS can feel too slow.
However, if the single-threaded performance varies by +/- 50% I do not care much. For most things where single-threaded performance matters, any reasonably recent CPU is able to do instantaneously what I am interested in.
On the other hand, where the execution time is determined strictly by the multithreaded performance, i.e. at compiling software projects or at running various engineering EDA/CAD applications, every percent of extra multithreaded performance may shorten the time until the results are ready, saving from minutes to hours or even days.
I have the very opposite opinion: single-threaded performance only matters up to the point where any given task isn't unusable. Multithreaded performance is crucial for keeping the system from grinding to a halt, because users always have multiple applications open at the same time. Five browser windows with 4-12 tabs each, across three different browsers, 2-4 Word instances, and some Electron(-equivalent) comms app is much less unusual than I'd like it to be. I have used laptops with only two cores, and it gave a new meaning to slow when you tried doing absolutely anything other than waiting for your one application to do something. Only having one application open at a time was somewhat usable.
Strong disagree, particularly on laptops.
Having some firefox thread compile/interpret/run some javascript 500 microseconds faster is not going to change my life very much.
Having four extra cores definitely will: it means I can keep more stuff open at the same time.
The pain is real, particularly on laptops: I've been searching for laptops with a high-end many-core CPU but without a dedicated GPU for years, and still haven't found anything decent.
I do run many virtual machines, containers, databases and the like. The most "graphics-intensive" thing I run is the browser. Otherwise I spend most of my time in terminals and Emacs.
But multicore performance does matter significantly unless you're on a microcontroller running only one process on your entire machine. Just Chrome launches a quarter million processes by itself.
* battery life is a lot higher than it used to be and a lot of devices have prioritized efficiency
* we haven't seen a ton of changes to the high-end for CPU. I struggle to think of what the killer application requiring more is right now; every consumer advancement is more GPU focused
And worse is multi-dimensional. If you shelled out for a high-end desktop in 2020 it could have 12 or 16 cores, but a new PC would have DDR5 instead of DDR4 and possibly more of it, a faster SSD, a CPU with faster single thread performance, and then you want that but can't justify 12+ cores this time so you get 6 or 8 and everything is better except the multi-thread performance, which is worse but only slightly.
The reticence to replace them also works against you. The person who spilled coffee in their machine can't justify replacing it with something that nice, so they get a downgrade. Everyone else still has a decent machine and keeps what they have. Then the person who spilled their drink is the only one changing the average.
> Fixes for speculative execution vulnerabilities?
I don't know if they'll keep doing it, but hardware unboxed had been doing various tests with Windows 10 vs 11, and mitigations on vs off, as well as things like SMT on vs off for things like AMD vcache issues, or P cores vs E cores on newer intels... it's interesting to see how hardware performs 6-12 months after release, because it can really go all over the place for seemingly no reason.
I know, I know: all the software you use is slow and awful. But it's generally bad thanks to a failure to work around slow network and disk accesses. If you use spinning rust, you're a power user.
It's also a minority of video games that rely more on CPU performance than GPU and memory (usually for shared reasons in niche genres)
Edit: oh, and Electron apps. Sweet lord, shouldn't we have learned in the 90s with Java applets that you shouldn't use a garbage-collected language for user interfaces?
Java in the 90s was really slow. It got much faster with the JIT compiler. JavaScript and browsers got many optimizations too. My laptop feels faster now than it was in 2014 (it has always run some version of Linux.)
The other problem of Java was the non native widgets, which were subjectively worse than most of today's HTML/JS widgets, but that's a matter of taste.
GCs introduce some overhead but they alone are not responsible for the bloat in Electron. JavaScript bloat is a development skill issue arising from a combination of factors, including retaining low-skill front end devs, little regard for dependency bloat, ads, and management prioritizing features over addressing performance issues.
If they do care then they probably depend on someone who doesn't.
GC also disqualifies a language from ever being used for anything but userland programs.
I don't think Electron's problems are that simple.
I wait for it to break before moving on; changing for the sake of changing feels like a waste.
I simply can't afford the replacement 16" MBP right now. Hopefully it lasts another couple of years.
I would like to replace it with something that has an SD card slot and an HDMI port, as I use those frequently and don't want to deal with adapter solutions.
Shouldn't be an issue. Foreground applications do get a priority boost; I don't know if PassMark raises its own priority, though. Provided there isn't any background indexing occurring, i.e. you let the OS idle after install.
ok I lied, my phone is only two years old, but what happened is that two years ago my phone experienced a sudden drop test on concrete followed by pressure test of a car tyre, and it was easier to buy a new one with nearly identical specs than to repair the old one.
https://news.ycombinator.com/item?id=43030465
It's caused by people failing to read the fine print on the graph.
https://news.ycombinator.com/item?id=43030614
The best we can say is that the sampling skew this month is different than the sampling skew in past Januaries. That could be due to some interesting cause, it could be totally random, and there's zero point in speculating about it until we have more data.
They are propped up by demand and the fact that most of the new GPUs are marginally better than previous ones.
Also, if you're talking gaming GPUs, old ones work fine given there has not been a new PlayStation or Xbox in many years. The minimum spec for many games is 5-year-old tech.
And the generated frames are misleading because they don't work very well if your source framerate is low.
As you say, you need a good framerate to start with before generated frames make sense. So either you're already running well with a high-tier card and can further show off the combination of framerate/resolution/detail level, or you're running a lower-tier card at lower settings and turning the graphics down further, which can be a very obvious trade-off. Less demanding games, which are generally online competitive titles, usually wouldn't do as well with any extra latency introduced; the situation where I've heard it would be a good fit is emulators, or any kind of game where the CPU is the limiting factor.
https://news.ycombinator.com/item?id=43017612
(a) changes in methodology/non-random sampling, with 60 % probability,
(b) random sampling effects that go away on longer timelines, with 20 % probability,
(c) any actual change in CPUs, with 20 % probability.
In my experience, when a metric shows weird behaviour, these are usually the reasons, in roughly those proportions.
Buying anything new has been out of the question for me for a few years now, because it just cannot be guaranteed to survive half a year; so now I always buy second-hand, because that way at least it was tested by the previous owner.
But even so, modern hardware is hellishly unreliable.
I bought a Dell 7 series laptop (17 inches), and:
[0] https://en.wikipedia.org/wiki/Wirth%27s_law
But yes, I agree on principle: https://gist.github.com/bazhenovc/c0aa56cdf50df495fda84de58e...
[1]: https://www.bcuninstaller.com/ (open source)
[2]: https://www.revouninstaller.com/products/revo-uninstaller-fr...