About a year ago I was looking at Crash Bandicoot timer systems and I found that Crash 3 has a constantly incrementing int32. It only resets if you die.
Left for 2.26 years, it will overflow.
When it does finally overflow, we get "minus" time and the game breaks in funny ways. I did a video about it: https://youtu.be/f7ZzoyVLu58
jsheard 208 days ago [-]
There's a weapon in Final Fantasy 9 which can only be obtained by reaching a lategame area in less than 12 hours of play time, or 10 hours on the PAL version due to an oversight. Alternatively you can just leave the game running for two years until the timer wraps around. Slow and steady wins the race.
"The perfect racing car crosses the finish line first and subsequently falls into its component parts."
Games fit this philosophy, unlike many other pieces of software that are expected to be long-lived, receive a lot of maintenance and changes, and evolve.
WJW 207 days ago [-]
The Porsche quote reflects a wider design philosophy that says "Ideally, all components of a system last as long as the design life of the entire system, and there should be no component that lives significantly longer. If there is such a component, it has been overengineered and thus the system will be more expensive to the end consumer than it needs to be." It kinda skips over maintenance, but overall most people find it unobjectionable when stated like this.
But plenty of people will find complaints when they try to drive their car beyond its design specs and more or less everything starts failing at once.
creaturemachine 207 days ago [-]
Porsche was talking about racing, where the primary focus is reaching the finish line faster than anyone else, and over-engineering can easily get in the way of that goal. Back in the real world, no race team would agree that their cars should disintegrate after one race.
AzN1337c0d3r 207 days ago [-]
> Back in the real world, no race team would agree that their cars should disintegrate after one race.
Weren't F1 teams basically doing this by replacing their engines and transmissions until the rules introduced penalties for component swaps in 2014?
jperras 207 days ago [-]
If you go back further than that, teams used to destroy entire engines for a single qualifying session.
The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it.
They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap.
After which the engine was basically unusable, and so they'd put in a new one for the race.
gnatolf 207 days ago [-]
Current examples would be drag racing cars, whose engines are designed and run in such a way that they only survive for about 800 total revolutions.
creaturemachine 207 days ago [-]
Yup, cigarette money enabled all kinds of shenanigans. Engine swaps for qualification, new engines every race, spare third cars, it goes on. 2004 was the first year that specified engines must last the entire race weekend and introduced penalties for swaps.
lostlogin 207 days ago [-]
> cigarette money enabled all kinds of shenanigans.
It still does. New Zealand has a crop of tobacco funded politicians.
lawlessone 207 days ago [-]
>New Zealand has a crop of tobacco funded politicians.
when they leave politics do they just rapidly age and dissolve like that guy in the Indiana Jones film?
TylerE 207 days ago [-]
F1 income is way, way higher than in the 80s.
kllrnohj 207 days ago [-]
Even today F1 teams are allowed 4 engine replacements before taking a grid place penalty, and those penalties still show up regularly enough. So nobody is making "reliable" F1 engines.
You can see this really on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.
pfdietz 207 days ago [-]
Don't highly optimized drag racers do this? I mean, a clutch that in normal operation gets heated until it glows can't be very durable.
ortusdux 207 days ago [-]
Anyone can build a bridge, but it takes an engineer to barely build a bridge.
mikepurvis 207 days ago [-]
Alan Weisman's lovely book The World Without Us speculates a bit about this, basically saying that more recently built structures would be the first to collapse because they've all been engineered so close to the line. Meanwhile, stuff that's already been standing for 100+ years, like the Brooklyn Bridge, will probably still be there in another 100 years even without any maintenance, just on account of how overbuilt it all had to be in an era before finite element analysis.
ortusdux 207 days ago [-]
There was an aluminum extrusion company that falsified test records for years. They got away with it because what's a few % when your customer's safety factor is 2? Once they got into weight-sensitive aerospace applications, where sometimes the factor is 1.2, rockets started blowing up on the launch pad.
Should have resulted in jail time. A monetary fine is no deterrent.
Scramblejams 207 days ago [-]
It did result in jail time. The linked document states that the testing lab supervisor was sentenced to 3 years. (Not sure how much of that time was actually served, apparently he was suffering from dementia.) More info: https://www.oregonlive.com/portland/2018/08/company_supervis...
This is a great quote for the topic, but the quote is normally about a bridge that barely stands.
I'm chuckling at the thought of barely building something. (All in good fun, thank you.)
woliveirajr 206 days ago [-]
In my county, a company asked the mayor if it was possible to improve a bridge, because they needed to carry 40t and the bridge had a sign limiting it to 32t. Their proposal was to do the construction and get tax rebates.
After two weeks, the Infrastructure department changed the sign allowing up to 45t.
signalToNose 207 days ago [-]
Consumer protection laws prevent businesses from following this to its extreme. For many businesses the ideal would be to just sell stuff that breaks down as soon as it's sold. It has then fulfilled its purpose from their point of view.
delichon 207 days ago [-]
I run sous vide cookers 24/7, and they uniformly break within 90 days. But manufacturers don't like to admit the limited duty cycle, so they don't, and keep sending me warranty replacements instead. I keep buying different brands looking for one with a longer life. I'll bet most people do that when their gadgets die, and purposely making products that die as soon as sold isn't often a successful business model.
rlander 207 days ago [-]
That’s not a small cycle count for a normal household.
90 × 24 = 2,160 total hours.
I sous vide now and then, about twice a week for 6 hours each, so around 12 hours a week.
That works out to roughly 15 years of usable machine time for the average person.
Not bad at all.
josephg 207 days ago [-]
Photography is the same way. Most SLR / DSLR / mirrorless cameras have a mechanical shutter which is expected to last around 200k-1m activations. I've had a camera for a bit over a year. I've used it quite heavily, and my shutter count is at about 13k photos. At this rate, the shutter will probably last for 20+ years - which seems fine. If I'm still using the camera by then, spending a few hundred dollars to replace the shutter mechanism sounds totally reasonable.
plywoodShadow 207 days ago [-]
2160/12 is 180 weeks, or roughly 3.5 years, not 15 years
47282847 207 days ago [-]
Assuming linearity, which I doubt is the case.
account42 207 days ago [-]
You think a measly 360 uses at your 6 hours typical operation is even remotely acceptable for a glorified heating element?
And yes, 15 years is bad. I don't want to replace my entire household every 15 years FFS.
cestith 207 days ago [-]
A friend of mine gets new headphones/headsets every six to eighteen months, and hasn’t bought a pair entirely out of pocket in years. For him it’s all down to buying the Microcenter protection plan every time they’re replaced. They fail, he takes them back, he gets store credit for the purchase price, and he buys a new set and a new plan. He doesn’t even care about the manufacturer’s warranty anymore.
Personally, for most of my headphones I look for metal mechanical connections instead of plastic, and I buy refurbished when I can. I think I pay about as much as he does or less, but we haven’t really hashed out the numbers together. I’m typing this while wearing a HyperX gaming headset I bought refurbished that’s old enough that I’ve replaced the earpads while everything else continues to work.
Computers and computer parts often have, in my experience, a better reliability record competently refurbished than when they first leave the factory too. I wonder if sous vide cookers would.
hnuser123456 207 days ago [-]
Are there not industrial ones meant to last longer? Maybe you can buy a used but good condition one of those.
WJW 207 days ago [-]
There are, and if you really have a workload where you need to cook stuff 24/7 (what in god's name is OP cooking, btw?) then you should definitely get one of those. Maybe not even secondhand, but just a new one. The cheap consumer-grade ones are meant for people who use them once or twice a year.
This is a fine example of what I meant about people complaining when they use products beyond their design parameters.
tracker1 207 days ago [-]
I got one that seems to be kind of in the middle, it's better built than most of the consumer models but not quite as "industrial" feeling as some of the commercial models. I use it a few times a week for a few hours each.
I'm on a mostly carnivore, mostly ruminant meat diet and for costs tend to do a lot of ground beef... I sous vide a bunch of burgers in 1/2lb ring molds, refrigerate and sear off when hungry. This lets me have safer burgers that aren't overcooked. I do 133F for 2.5+ hours.
I also do steaks about once or twice a week. I have to say it's probably the best kitchen investment I could have made in terms of impact on the output quality.
elzbardico 207 days ago [-]
It is easy to end up running a bunch of sous vide cookers 24/7 if you have a small restaurant or food delivery business.
compiler-guy 207 days ago [-]
In which case one shouldn't be using consumer-grade kitchen equipment.
elzbardico 207 days ago [-]
Call it vibe cooking.
lawlessone 207 days ago [-]
If the manufacturers keep replacing the machines because they're within warranty isn't this cheaper for OP?
mattkrause 207 days ago [-]
Definitely -- get something meant for a lab. I worked in one that had a 150F water bath running day and night.
account42 207 days ago [-]
Well from an evil business perspective their options are either
- the product doesn't break and you don't buy a replacement from them because you still have a working product
- the product breaks and there is a greater than 0% chance that you will buy a replacement product from them
Of course in practice it's more complicated but I wouldn't be so quick to declare that the math doesn't work out.
It looks like the Breville is the most affordable at $600. Currently I'm paying optimistically $45/90 days or $0.50/day. For the Breville to match that it would need to survive for 3.29 years. Will it?
FuriouslyAdrift 207 days ago [-]
Maybe... the Sammic is made for a high volume commercial kitchen
muzani 207 days ago [-]
What do you sous vide 24/7? It sounds like it would be party grounds for bacteria. Also curious if the bags and other components break as well.
delichon 207 days ago [-]
Beef, lamb, sometimes pork. I have a daily meal of a cheap, tough cut of meat cooked for 48 hours at 150F.
Sous vide is generally not a bacterial growth risk above 140F. At 150F throughout, you get decent pasteurization in under two minutes. Two days of that is such extreme overkill that I'm concerned about the nutritional effect of overcooking.
The Food Saver style vacuum sealers fail fast for me, so I bought a $400 chamber sealer, and I'm on year 5 with it.
Nathanael_M 207 days ago [-]
I think I love you? This is great. Do you have them running in arrays of 3? What’s your favourite cut? What’s the best cost:deliciousness cut? What bags do you use to minimize plastic leaching?
delichon 207 days ago [-]
It's just me, so I only need one running at a time. Every day I take one serving out and put another one in. I clean the tank about once per week, or if something breaks. My favorite is short ribs, my daily drivers are chuck roast or shank. The prices have skyrocketed in the last few years. I buy in bulk on sale and portion it into bags with a chamber style vacuum sealer. It goes straight from the freezer into the tank.
Nathanael_M 207 days ago [-]
Do you take pride in knowing that you eat cooler than anyone else, because you should.
Short rib is shocking where I am. Even chuck is pushing past $15 a pound.
What are you doing for sides/sauce? Generally when I think braise/sous-vide I think some rich, flavourful sauce, but that seems impractical for daily consumption.
delichon 207 days ago [-]
Chuck on sale is now $8 a pound, more than double since Covid started. I am eating less of it and more ground beef, pork and eggs.
I crisp it up in an air fryer before serving. Here's the full ingredient list: meat, butter, salt. After five years I still look forward to every repeat.
I just replaced an air fryer that lasted two years of daily use, a personal record. I was ready to replace it anyway, because they accumulate grease where you can't clean, and the smell gets interesting.
doubled112 207 days ago [-]
When the design spec seems to be a 3 year long lease I can see why people get bothered.
aleks224 207 days ago [-]
There's a quote in the bible that says something similar:
"Verily, verily, I say unto you, Except a corn of wheat fall into the ground and die, it abideth alone: but if it die, it bringeth forth much fruit.”
(John 12:24)
lelandfe 207 days ago [-]
So the invisible 12h timer runs during cutscenes. During Excalibur 2 runs, I used to open and close the PS1 disc tray to skip (normally unskippable) cutscenes. Never knew why that worked.
(I also never managed to get it)
jonhohle 207 days ago [-]
I’m going to wager that the cutscenes are all XA audio/video DMA’d from the disc. Opening the disc kills the DMA and the error recovery is just to end the cutscene and continue. The program is in RAM, so a little interruption on reading doesn’t hurt unless you need to time it to avoid an error reading the file for the next section of gameplay.
ad133 207 days ago [-]
This is significantly better handling than the previous game (Final Fantasy VIII). My disc 1 (it had four discs) got scratched over time (I was a child, after all), and the failure mode was just to crash - thus the game was unplayable. The game had a lot of cutscenes.
Insanity 207 days ago [-]
That’s a solid guess. And if that’s the case, that’s actually pretty good error handling!
Jare 207 days ago [-]
I recall that handling disc eject was an explicit part of the Tech Requirements Doc (things the console manufacturer requires you to comply with). They'd typically check while playing, while loading and while streaming.
p1necone 207 days ago [-]
> Never knew why that worked.
I'm guessing the game probably streams FMV cutscenes off the disc as they play, and the fallback behaviour if it can't find them is to skip rather than crash.
jbreckmckye 208 days ago [-]
Oh yeah. The sword you pick up in Memoria. The problem there is that the PAL version runs slower; the way PSX games "translated" between the two video systems was just to have longer VSync pauses for PAL. So the game is actually slower, not interpolated
reactordev 207 days ago [-]
Longer vsync pauses but larger frame time deltas so it’s basically the same speed of play. The only thing that was even noticeable was the UI lag.
fredoralive 207 days ago [-]
Erm, no. Like lots of games of the era, quite a lot of stuff is tied to the frame rate, so the 50Hz-region game just runs slower than the 60Hz one, as next to nobody bothered to adjust for it. The clock for the hidden weapon does run at the same rate for both, unfortunately, hence it being harder to get in 50Hz regions.
reactordev 207 days ago [-]
Incorrect. I’m looking at the source code. It’s not perfect but it’s not just “slowed down to 50hz” like people claim.
jbreckmckye 207 days ago [-]
When you say looking at the source code, what do you mean here?
AFAIK the source for FF9 PSX (and all the PSX ff games) has been lost as Square just used short term archives
Also, FF9 does not run at a constant framerate. Like all the PSX FF games it runs at various rates, sometimes multiple at a time (example: model animations are 15fps vs 30 for the UI)
In terms of timers, the bios does grant you access to root timers, but these are largely modulated by a hardware oscillator
(Incidentally, the hardware timing component is the reason a chipped PAL console cannot produce good NTSC video. Only a Yaroze can support full multiregion play)
anthk 207 days ago [-]
FF VII-IX were reimplemented under a custom engine.
reactordev 207 days ago [-]
Except I’m looking at the original source - the crappy C/C++ Square engine - not the remake's C# Unity code.
There are a number of timers and things used. But the claim that it runs slower is absolutely false. It’s just perceived that way because it’s “drawn” slower.
jbreckmckye 207 days ago [-]
Firstly, could you elaborate what code you're looking at? Square have never shared the source code for these titles and were not even practicing real version control at this time (see: Eidos FF7/8 debacle)
Secondly, it absolutely will run slower. Animations will take longer to complete; FMVs will play at a different rate; controller sampling will be reduced.
My scepticism isn't coming from hearsay or ignorance: I have written PlayStation software, and PSX software is not parallelised, even though it can support threading and cooperative concurrency. The control flow of the title is very locked into the VSync loop, from your first ResetGraph(0) right to your final DrawOTable(*p).
In addition, I have done a bunch of reversing work on the other two PSX games, and they are not monolithic programs. They can't be because there simply isn't enough RAM to store the .TEXT of the entire thing at once. So when you say "the source code", I'm inclined to ask - for which module? The kernel or one of the overlays?
reactordev 207 days ago [-]
Good for you. When I say the source, I mean all modules, the kernel, the graphics, everything. Cheers. While we didn’t use version control the way we do today, we still had it… some of us also made copies.
It’s not lost except to maybe Square Enix’s corporate but they don’t know where anything is.
reactordev 207 days ago [-]
It’s definitely not lost…
jbreckmckye 207 days ago [-]
What code are you looking at?
FFIX for PSX would have been written in C (or possibly C++) with PSY-Q. It will not be one program - those games were composed of multiple overlays that are banked in / out over the PlayStation's limited memory.
From what I know the PC release was a port to a new framework, which supports the same script engines, but otherwise is fresh code. This is how it can support mobile, widescreen, Steam achievements etc.
mungoman2 207 days ago [-]
Wouldn't a slower tick make it easier, as you get more wall time to do the same challenge?
fredoralive 207 days ago [-]
No? Wall time (that the challenge runs on) is unchanged, game time (Vsync) is running at 83% of full speed (50Hz vs 60Hz), so if something tied to frame rate (animation, walking speed etc.) takes 1 second to do on NTSC, it'll take 1.2 seconds to do on PAL etc.
Forgeties79 207 days ago [-]
Lord have mercy fandom really has become unbearable with the ads and pop ups.
coldpie 207 days ago [-]
Install an ad blocker.
Forgeties79 207 days ago [-]
I opened this on an iPhone which has fewer adblock options. Desktop is better locked down.
Regardless I can still complain about how intrusive the ads are.
coldpie 207 days ago [-]
There are many ad block options on iPhone. I currently use Wipr 2, but in the past I've used both 1Blocker and AdBlock Pro with success.
JustExAWS 207 days ago [-]
I just opened this on my iPhone with 1Blocker installed. I saw no ads. It’s been around since iOS 8.
Forgeties79 207 days ago [-]
Never heard of it, appreciate the recc!
Edit: ah only works on safari
mrguyorama 207 days ago [-]
You are on iOS. There is only Safari. Any other "web browser" is just a skin over Safari.
Forgeties79 207 days ago [-]
Yes I know everything is wrapped around safari. But I like having Firefox syncing across devices.
Edit: ah forgot my vpn was off, usually clears all that up for me. Much better now
JustExAWS 207 days ago [-]
Just a note: If you have a Windows computer, Apple has a plug in for Firefox and Chrome that syncs bookmarks to iCloud and Safari.
Forgeties79 207 days ago [-]
Linux machine but maybe it works anyway?
account42 207 days ago [-]
Don't accept devices that limit your ad blocker options.
Forgeties79 207 days ago [-]
Does this discussion strike you as one where I’m deliberating whether or not to chuck my smartphone and buy into a new ecosystem to avoid ads on fandom?
These types of comments are always very unhelpful.
ogurechny 207 days ago [-]
No, that's just a reminder that you had a choice, and chose empty talk about “ecosystems” over ability to control what you can see on “your” screen. You've stepped on a rake once, you got some experience, why repeat it over and over again?
Forgeties79 207 days ago [-]
Or another option: we could remember that the ultimate offender here is Fandom.
My choice of device is irrelevant when assessing their crappy site.
elcritch 207 days ago [-]
We should rally together to force game companies to use 32 bit timers rather than 64bit ones so we can keep finding these fun little glitches. The time to protect overflows is now! ;)
debo_ 208 days ago [-]
So that's why it's called Excalibur 2!
stevage 208 days ago [-]
You really managed to make the whole video without making a single "crash" pun? (Those freezes come close enough that you could call them crashes...)
xhrpost 207 days ago [-]
Is it common to default to a signed integer for tracking a timer? I realize being unsigned it would still overflow but at least you'd get twice the time, no?
jbreckmckye 207 days ago [-]
Some C programmers take the view that unsigneds have too many disadvantages: undefined behaviour for overflows, and weird type promotion rules. So, they try and avoid uints.
tekne 207 days ago [-]
Umm, signed integers are UB on overflow; unsigned is always fine.
jbreckmckye 206 days ago [-]
Sorry, you are correct - unsigned wrap-around has been well-defined since ANSI C.
Anyway, in answer to the question, I would guess the reason was signed/unsigned type promotion.
aidenn0 207 days ago [-]
If you get to right before you need to be (taking as long as you want), then wait until overflow, then you still have 12h to do the last tiny part if it's unsigned.
jonhohle 207 days ago [-]
I think many games were that way. SotN definitely has a global timer. On a native 32-bit system it makes sense, especially when the life of a game was a few months to a few years on the retail shelf. No player is going to leave their system running for 2.27 years, so what’s the point of even testing it?
Who knew at the time they were creating games that would be disassembled, deconstructed, reverse engineered. Do any of us think about that regarding any program we write?
Gamemaster1379 207 days ago [-]
Can be more than timers too. There's a funny one in Paper Mario where a block technically can be hit so many times that it'll reset and award items again. Hit enough times, it'll eventually crash. Of course, it'd take around 30 years for the first rollover and 400 or so for the crash.
https://n64squid.com/paper-mario-reward-block-glitch/
rybosome 207 days ago [-]
It’s a totally reasonable choice in that context.
I wonder if any sense that this is criticism (or any actual criticism) comes from implementers of SaaS who have it so deeply ingrained that "haha, what if the users of this software did this really extreme thing" is more like "oh shit, what if the users of this software did this really extreme thing".
When I worked on Google cloud storage, I once shipped a feature that briefly broke single-shot uploads of more than 2gb. I didn’t consider this use case because it was so absurd - anything larger than 2mb is recommended to go through a resumable/retryable flow, not a one-shot that either sends it all correctly the first time or fails. Client libraries enforced this, but not the APIs! It was an easy fix with that knowledge, but the lesson remained to me that whatever extreme behaviors you allow in your API will be found, so you have to be very paranoid about what you allow if you don’t want to support it indefinitely (which we tried to do, it was hard).
Anyway, in this case that level of paranoia would make no sense. The programmers of this age made amazing, highly choreographed programs that ran exactly as intended on the right hardware and timing.
technion 207 days ago [-]
Let's say you're pedantic with code. I've been trying to be lately - clippy has an overflow lint for Rust I try to use.
Error: game running for two years, rebooting so you can't cheese a timer.
Does this make the bug any better handled? Bugs like this annoy me because they aren't easily answered.
account42 207 days ago [-]
There are always limits to what a program can do. The only fix is to choose large enough integers (and appropriate units) so that you can represent the longest times / largest sizes / etc. that anyone could reasonably encounter. What sizes make sense also depends on how they impact performance, and for a game from the 32-bit era, a crash (controlled abort or not) after over two years is probably a better choice than slowing everything down by using a 64-bit integer.
jraph 207 days ago [-]
Isn't this common in the computer game scene? Shouldn't you assume your game will be disassembled, deconstructed, reverse engineered?
Although for old games released before internet was widespread in the general population, it might have not been this obvious.
sim7c00 207 days ago [-]
As long as it doesn't lead to online cheats, having such code is fine - if someone wants to reverse the game, find an obscure, almost untriggerable bug, and then trigger it or play with it, so be it. A 2.26-year game session is crazy if it's not a server, and if it is a server, that's still really crazy even for some open-world, open-ended game... it's a long time to keep a server up without restarts or anything (updates?).
Looking at the various comments, there might even be some kind of weird appeal to leaving such things in your game :D for people to find and chuckle about. It doesn't really disrupt the game normally, does it?
lstodd 207 days ago [-]
> if its a server, thats still really crazy even for some open-world open-ended game... its a long time to keep a server up w/o restarts or anything (updates?).
Pretty much doable even without resorting to VM migrations or ksplice. My last one had an uptime in the 1700s (days). Basically I leased it, put Debian on it, and that was that until I didn't need it anymore.
lentil_soup 207 days ago [-]
they're still made like this. Just now I made a frame counter that just increments every frame on an int64. It would eventually wrap around, but I doubt anyone will still be around to see it happen :|
account42 207 days ago [-]
For some games the timer is stored in save files, so it doesn't even have to be continuous play time. 2 years is still longer than anyone is expected to spend on a game.
ThrowawayTestr 207 days ago [-]
Great video, just subscribed
teeray 207 days ago [-]
The true Time Twister unlocked
Insanity 207 days ago [-]
Literally unplayable, someone should fix that.
Doom is actually such a good game, I always go back to it every few years. The 2016 reboot is also pretty fun, but the later two in the series didn’t do it for me.
bitwize 207 days ago [-]
Fun fact: Doom is now a Microsoft property, along with Quake, StarCraft, WarCraft, Overwatch, all of the adventure games from Infocom and Sierra, and of course Halo. Microsoft pretty much owns most of PC gaming. Which is what they've wanted since 1996 or so.
kodarna 207 days ago [-]
They own the past of PC gaming, as well as Call of Duty but that is more popular on consoles than PC nowadays. Those listed are small time compared to Counter-Strike 2, Dota 2, League of Legends, Valorant, Roblox, Apex Legends, Marvel Rivals and a number of hard-hitting games every year such as Witcher 3, Elden Ring, Baldur's Gate 3 etc.
account42 207 days ago [-]
So in other words they own the part of PC gaming that's actually good.
jama211 206 days ago [-]
You’re saying the Witcher 3 and games like it are bad?
Novosell 207 days ago [-]
They own Minecraft as well.
nurettin 207 days ago [-]
> Microsoft pretty much owns most of PC gaming.
So valve next?
Lightkey 207 days ago [-]
They missed that window when Sierra was still the publisher for Half-Life. Besides, Valve is not a publicly traded company and Gabe Newell as former manager at Microsoft has no interest in getting back together. Valve is betting everything on Linux right now to be more independent from Microsoft.
simoncion 207 days ago [-]
> Valve is betting everything on Linux right now...
They've been working on Linux support since at least around the time that Microsoft introduced the Windows Store... so for the last twelve years or so.
And, man, a couple of months ago I figured out how to run Steam as a separate user on my Xorg system. Not-at-all-coincidentally, I haven't booted into Windows in a couple of months. Not every game runs [0], but nearly every game in my library does.
I'm really gladdened by the effort put in to making this work.
[0] Aside from the obvious ones with worryingly-intrusive kernel-level anticheat, sometimes there are weird failures like Highfleet just detonating on startup.
Insanity 207 days ago [-]
I used to game on Linux back in the late 2000s through Wine. And I always found the mouse support to be jarring, even if I could get support to a decent level, for some reason the mouse input was never quite as fluid as it should have been.
And now I'm reluctant to move back to Linux for gaming, even though they've clearly come so far. I guess I should just go ahead and give it another shot.
Spoom 207 days ago [-]
Stating my bias up front, I've been using Linux since Windows Vista, and I'm a fan. That said, I have experienced the same things you did whenever I needed to run Wine for... well, anything. It was clunky as hell.
You should absolutely revisit. Proton has changed the game. Literally the only game I've tried that was remotely difficult to play in SteamOS is Minecraft, likely because Microsoft owns it now. But I was able to get that working too (if anyone's wondering: you want Minecraft Bedrock Launcher, which is in the Discover store if you're on the Steam Deck and here[1] if you're somewhere else; basically it downloads and runs the Android version of Minecraft through a small translation layer, which is essentially identical to the Windows version).
Speed also is greatly improved from previous solutions. Games played through Proton are often very close in terms of performance to playing them natively.
ProtonDB has a feature where you can give it access to your Steam account for reading and it'll give you a full report based on your personal library: https://www.protondb.com/profile
And I find if anything it tends toward the conservative. I've encountered a few things where it was overoptimistic, but it's outweighed by the stuff that was supported even better than ProtonDB said.
In the late 2000s, I played a few things, but I went in with the assumption it either wouldn't work, or wouldn't work without tweaking. Now I go in with the assumption that it will work unless otherwise indicated. Except multiplayer shooters and VR.
minki_the_avali 205 days ago [-]
I use steam with Monado to play VRChat on Linux, it surprisingly just works. There are many options to do this nowadays like wired PCVR headsets (The HTC ones are really well supported under Linux) or ALVR if you have a Quest. The only tricky part is setting up steam to use Monado instead of Steam VR but there is documentation on that. I even had some success with running Beat Saber under FreeBSD once using Monado and Wine.
All the more reason for Microsoft to make a play now while Valve still at least somewhat depends on them.
And Gabe won't be around forever and the guy is already over sixty. Statistically he's got about two decades left to live and not all of that will be at a level where he can lead Valve.
tomwojcik 207 days ago [-]
As long as Gabe is alive, no way.
HeckFeck 207 days ago [-]
We must find a way to extend his life indefinitely.
account42 207 days ago [-]
*in control of Valve
Old age can make him give that up before death.
jjbinx007 207 days ago [-]
This caters for people who prefer the classic Doom style of gameplay in FPS games:
Ahh yes, I'm quite happy that this became a trend a few years ago!
jama211 207 days ago [-]
Same. Something about the metroidvania design with the home hub of the later ones didn’t give the same feeling. It should be run, kill, find secrets, end, next level.
jeffwask 207 days ago [-]
I just finished RoboCop: Rogue City, and it was exactly this: a linear, level-by-level shooter that felt like a pure RoboCop power-fantasy movie. I played new game plus, it was so much fun, and I never do that.
It's like the game industry got a fake memo saying no one wanted linear story-based games anymore. I ended up buying two more Teyon games because I was so happy with their formula and they are playable in a dozen or so hours. Tight, compact, linear, fun story and game play... No MTX or always online BS and they don't waste my time with busy work.
jama211 206 days ago [-]
Ooh, I’ll have to give this a try, thank you!
Insanity 207 days ago [-]
This is exactly how I want my FPS games to be. Just linear, run & gun.
TBH, I can even do without weapon upgrades or any "RPG" style elements.
It's even worse in multiplayer games like COD and BF. As soon as I need to figure out combinations of 5x attachments to guns I lose all my interest in playing the game. That's why I'm still on CS I guess lol.
lapetitejort 207 days ago [-]
> find secrets
I'll be honest, I don't like this part. I'm a rabid collector. If the game gives a metric to an item, I must have all of the items. I end up killing the flow by scouring the level looking for secrets. This is entirely my fault of course
bombela 207 days ago [-]
The latest DOOM: Dark Ages ditched the home hub. I think it's a really great DOOM game.
Insanity 207 days ago [-]
I was quite excited for it, despite not enjoying Eternal as much. But after about two hours of playing it, I lost interest. I'm happy you're enjoying it; sadly it didn't click for me.
Especially the 'mech scale' stuff was just boring. I don't remember what they call it in-universe, but essentially the parts of the game where you're playing from a giant robot and just walking over tanks and fighting supersized demons.
jama211 206 days ago [-]
Huh, thanks! I’ll give it a try
xmonkee 207 days ago [-]
Same. And love those brutality mods.
shpongled 207 days ago [-]
2016 remains one of the greatest single-player FPS games I've played (Titanfall 2 is the other)
pizza234 207 days ago [-]
I'm under the impression that since Doom Eternal (the first after Doom 2016), the gameplay has considerably shifted to an "interconnected arenas" style, and with more sophisticated combat mechanics. Many games have started adopting this design, for example, Shadow Warrior 3.
I also dislike this trend. As a sibling comment noted, boomer shooters are generally closer to the old-school Doom gameplay, although some are adopting the newer design too.
billyp-rva 207 days ago [-]
The enemy cap all but forces the arena style gameplay. Doom 2016 tried to hide it more, but it still felt very stifling.
spjt 207 days ago [-]
Just be glad you knew what the bug was before you started. After 2.5 years... "Shit, I forgot to enable debug logging"
Sadly it appears that archive.org didn't capture all of the site formatting, but at least the text is there.
shultays 207 days ago [-]
Does that hardware trap overflows or something?
I had read an article about how DOOM's engine works and noticed how a variable for tracking the demo kept being incremented even after the next demo started. This variable was compared with a second one storing its previous value.
Doesn't sound like something that would crash; I wonder what the actual crash was.
Sharlin 207 days ago [-]
Signed overflow is undefined behavior in C, so pretty much anything could happen. Though this crash seems to be deterministic between platforms and compilers, so probably not about that. TFA says the variable is being compared to its previous value, and that comparison presumably assumes new < old cannot happen. And when it does, it could easily lead to eg. stack corruption. C after all happily goes to UB land if, for example, some execution path doesn’t return a value in a function that’s supposed to return a value.
account42 207 days ago [-]
Just because the language standard allows for anything to happen doesn't mean that actually anything can happen with real compilers. It's still a good question to think about how it could actually lead to a crash.
Sharlin 207 days ago [-]
That’s what I said? It’s easy to come up with scenarios where signed overflow breaks a program in a crashy way if the optimizer, for example, optimizes out a check for said overflow because it’s allowed to assume that `++i < 0` can never happen if i is initialized to >= 0. That’s something that very real optimizers take advantage of in the very real world, not just on paper. For example, GCC needs -fwrapv to give you guaranteed wrapping behavior (there’s actually -ftrapv which raises a SIGFPE on overflow – that’s likely the easiest way to cause this crash!)
But I specifically said that it doesn’t look like SOUB in this particular case, and proposed an alternative mechanism for crashing. What’s almost certain is that some type of UB is involved because "crashing" is not any behavior defined by the standard, except if it was something like an assertion failing, leading to an intentional `abort`.
phkahler 207 days ago [-]
That doesn't make sense. If new < old can't happen, there is no need to make a comparison. Stack corruption? Nah, it's a counter, not an index or pointer, or it would fail sooner. But then what is the failure? IDK
jraph 207 days ago [-]
Assuming new > old doesn't mean you actually make the comparison, but rather that the code is written with the belief that new > old. This code behaves correctly under that assumption, but might be doing something very bad that leads to a crash if new < old.
An actual analysis would be needed to understand the actual cause of the crash.
Sharlin 207 days ago [-]
Um, there are the cases new == old and new > old. And all the more specific cases new == old + n. I haven’t seen the code so this is just speculation, but there are plenty of ways how an unexpected, "can never happen" comparison result causes immediate UB because there’s no execution path to handle it, causing garbage to be returned from a function (and if that garbage was supposed to be a pointer, well…) or even execution never hitting a `ret` and just proceeding to execute whatever is next in memory.
Another super easy way to enter UB land by assuming an integer is nonnegative is array indexing.
int foo[5] = { … };
foo[i % 5] = bar;
Everything is fine as long as i isn’t negative. But if it is… (note that negative % positive == negative in C)
account42 207 days ago [-]
Dividing by a difference that is suddenly zero is another possibility.
ogurechny 207 days ago [-]
The error states that the window can't be created. It might be a problem with the parameters passed to the window creation function (which should not depend on game state), or maybe the system is out of memory. Resources allocated in memory are never cleaned up because the cleanup time overflows?
Doom4CE (this port) was based on WinDoom, which only creates the program window once at startup, then switches the graphical mode, and proceeds to draw on screen independently, processing the keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch at the load of each level? Then why do we see the old window?
There are various 32 bit integer counters in Doom code. I find it quite strange that the author neither names the specific one, nor what it does, nor tries to debug what happens by simply initialising it with some big value.
Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives us a little less than 2.5 years. However, Doom gameplay tick (or “tic”), on which everything else is based, famously happens only 35 times a second, and is detached from frame rendering rate on both systems that are too slow (many computers at the time of release), or too fast (most systems that appeared afterwards). 2^32 divided by 35, 60 seconds, etc. gives us about 4 years until overflow.
Would be hilarious if it really is such an easy mistake.
BearOso 207 days ago [-]
The VGA 320x200 mode, either 13h or "Mode Y", ran at 70.086 Hz, so that adding up to ~2.5 years is just coincidental.
It's a shame the source code for doom isn't available, and that the author couldn't just link directly to a specific line in a gitweb repository. /s
jraph 207 days ago [-]
Notably, DOOM crashed before Windows CE.
chatmasta 207 days ago [-]
Seriously… I’m most impressed that this PDA kept an application running for 2.5 years. I’d be shocked if any modern hardware could do this, even while disconnected from the Internet.
jraph 207 days ago [-]
I'd be more impressed by current software not crashing for 2.5 years than hardware, but that might be I'm a software developer, not a hardware developer :-)
wingi 207 days ago [-]
Yes, great achievement!
JoshGlazebrook 208 days ago [-]
2038 is going to be a fun year.
kevin_thibedeau 207 days ago [-]
Everybody is sleeping on 2036 for NTP. That's when the fun begins.
wiredpancake 207 days ago [-]
Assuming correct implementation of the NTP spec and adherence to the "eras" functions, NTP should be resistant to this failure in 2036.
The problem is that many microcontrollers and non-interfaceable or cheaply designed computers/devices/machines might not follow the standards and would therefore be susceptible, although your iPhone, laptop, and fridge should all be fine.
jonhohle 207 days ago [-]
That seems much closer than it did in y2k.
aaronbrethorst 207 days ago [-]
[ 25 ] Now [ 13 ]
yep
cestith 207 days ago [-]
You have 13 years to upgrade to 64-bit ints or switch to a long long for time_t. Lots of embedded stuff or unsupported closed-source stuff is going to need special attention or to be replaced.
I know the OpenFirmware in my old SunServer 600MP had the issue. Unfortunately I don’t have to worry about that.
account42 207 days ago [-]
Most 32-bit games won't be updated, we'll have to resort to faking the time to play many of them.
cestith 207 days ago [-]
Most 32-bit games written for some form of Unix will use the system time_t if they care about time. The ones written properly, anyway. Modern Unix systems have a 64-bit time_t, even on 32-bit hardware and OS. If it’s on some other OS and uses the Unix epoch on a signed 32-bit integer that’s another design flaw.
chatmasta 207 days ago [-]
You’ve got 13 years to update unless any of your code includes dates in the future. Just stay away from anything related to mortgages, insurance policies, eight year PhD programs, retirement accounts…
cestith 206 days ago [-]
If you’re managing mortgages or retirement accounts on systems that weren’t ready for 2038 by 2008 you were already missing the biggest bucket of the market.
pjc50 207 days ago [-]
Fixing that is my retirement plan.
Zobat 207 days ago [-]
This is a level of testing that exceeds what the testers I know commit to. I myself was annoyed the five or so times yesterday we had to sit and wait to check the error handling after a 30 second timeout in the system I work on.
jeffrallen 207 days ago [-]
This headline gave me a heart attack... I misread the site's name as Lenovo, and as I'm responsible for a whole lot of their servers running for years in a critical role... heart attack.
Maybe I need my morning coffee. :)
minki_the_avali 207 days ago [-]
I mean I wouldn't mind getting a subdomain there but I do like lenowo more :3
ranger_danger 208 days ago [-]
Seems to be a PocketPC port of Doom, with no source given or even a snippet of the relevant code/variable name/etc. shown at all.
unixhero 208 days ago [-]
Yes. I think it seems like it was the OS that overflowed, and not Doom in this case.
nomel 208 days ago [-]
It's also running on very old hardware, potentially with some electrolytic capacitors that have dried up. And, there's always the possibility that it's a gamma ray [1]!
To me, that error message was caused by some panic, and then the OS began gracefully shutting down the application, in this case Doom, which would not have been done by the program itself. Therefore I conclude it was the OS.
I am not an OS developer, so I take my own conclusion with a grain of salt.
jama211 206 days ago [-]
Did you read the article? They specifically said it was a variable in the game engine code that causes the overflow. A program crashing causes the OS to show the error, but the bug that caused the crash was clearly in the game code itself.
Any OS this game engine ran on would experience this crash.
208 days ago [-]
0cf8612b2e1e 208 days ago [-]
I am going to need to see this replicated before I can believe.
piker 207 days ago [-]
Props again to the id team. No doubt something like that engineered by most folks today would have died long before the 2 year mark due to memory fragmentation if not outright leaks.
ustad 207 days ago [-]
Was this specific to the PDA port or the core doom code?
@ID_AA_Carmack Are you going to write a patch to fix this?
bombcar 206 days ago [-]
Looks like top security high damage CVE to me!
cestith 207 days ago [-]
Once upon a time, Windows NT 4 had a similar bug. Their counter was high precision, though, and was for uptime of the system. Back before Service Pack 3 (or was it SP2?) we had a scheduled task reboot the system on the first of the month. Otherwise it would crash after about 42 days of uptime, because apparently nobody at Microsoft tested their own server OS to run for that long.
This was definitely NT. It was the IIS server at an ISP. It might have been the same timer, and it might've been 49 days instead of 42. It was in the forties, and 42 sticks in my mind pretty easily. It may have been basically the same bug.
That, or the Reddit poster and I have the same wrong memory of the bug. I do know my boss at the time made us make the scheduled task to reboot because he understood it at the time to happen on NT 4.
minki_the_avali 205 days ago [-]
> about 42 days
About 42 sounds a bit too low; if this really was an overflow of a 32-bit millisecond timer, it would have to be around 49.7 days
glitchc 207 days ago [-]
I love the post, but your blurry text is hurting my eyes. Looks like it's intentionally blurry but I can't figure out why. This can't be a holdover from older systems, they had razor-sharp text rendering on CRTs.
minki_the_avali 205 days ago [-]
It's not supposed to be blurry. You may have a configuration error in your browser's font renderer's pixel order (like using BGR on an RGB screen); I'd recommend testing with font smoothing turned off and seeing if it persists.
jraph 207 days ago [-]
Looks crisp on my setup, but I block fonts and scripts. Reader mode is your friend :-)
qiine 207 days ago [-]
In games I worked on, I use time to pan textures for animated FX.
After a few hours, precision errors accumulate and the texture becomes stretched and noisy, but since explosions are generally short-lived it's never a problem.
Yet this keeps bothering me...
serf 208 days ago [-]
The easy way to e-Nostradamus predictions:
"See this crash?
I predicted it years ago.
Don't ask me how, I couldn't tell you."
p.s. I had an old iPaq that I wouldn't have trusted to run for longer than a day and stay stable, kudos for that at the very minimum.
prmoustache 207 days ago [-]
I had an iPaq for a while and I don't remember seeing OS/hardware crashes.
otikik 207 days ago [-]
Quick! John Carmack needs to be brought into this immediately.
patchtopic 207 days ago [-]
I haven't opened my DOOM software box, it's still in the shrinkwrap. I guess I can take it back and ask for a refund now?
207 days ago [-]
DeathArrow 207 days ago [-]
It's good it didn't take a billion years to overflow. That would have been quite a long wait.
208 days ago [-]
johnjames87 207 days ago [-]
Literally unplayable
casey2 207 days ago [-]
Has this ever come up in a TAS of custom levels?
EbNar 207 days ago [-]
Love the look of that board :-)
kwertyoowiyop 207 days ago [-]
CNR. Please attach video.
moomin 207 days ago [-]
Literally unplayable.
ZsoltT 207 days ago [-]
glitchless?
sunrunner 208 days ago [-]
Not a comment on the post, but I sure wish Jira would load even half as quickly as this site.
antsar 208 days ago [-]
It takes serious hardware investment [0] to pull that off.
After the recent Hacker News "invasion", I have now determined that the page can handle up to 1536 users before running out of RAM, meaning that the IP camera is surprisingly fully sufficient for its purpose. In other words, I will not be moving the forum in the near future, as 32 MB of RAM seems to be enough to run it
commenting from dillo running on a disposable vape which boots desktop linux using a ram expansion
stevage 208 days ago [-]
It's not loading for me at all.
9dev 207 days ago [-]
We recently moved to Linear and couldn’t be happier, can recommend!
hughes 208 days ago [-]
Is this a joke because the site isn't loading at all?
sunrunner 207 days ago [-]
At the time of writing the comment it was practically instantaneous for me and the comment was genuine. Now it seems to be having trouble and I'm choosing to retroactively make the comment a joke about Jira ;)
SpicyUme 208 days ago [-]
Came back to check this since the tab never loaded. I'm guessing traffic caused some issues?
minki_the_avali 207 days ago [-]
You folks overflowed the 32 MB of RAM that my forum is running on and caused it to restart a few times due to the high amount of simultaneous connections. It has recovered now though
Insanity 207 days ago [-]
I’m guessing HN hug of death. Probably smarter than just auto scaling to handle any surge traffic and then get swamped by crawlers & higher bills.
https://finalfantasy.fandom.com/wiki/Excalibur_II_(Final_Fan...
Weren't F1 teams basically doing this by replacing their engines and transmissions until the rules introduced penalties for component swaps in 2014?
The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it.
They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap.
After which the engine was basically unusable, and so they'd put in a new one for the race.
It still does. New Zealand has a crop of tobacco funded politicians.
when they leave politics do they just rapidly age and dissolve like that guy in the Indiana Jones film?
You can see this really on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.
https://www.justice.gov/archives/opa/pr/aluminum-extrusion-m...
Also a correction to GP: They were payload deployment failures, they didn't blow up on the pad. More here: https://arstechnica.com/science/2019/05/nasa-finally-conclud...
I'm chuckling at the thought of barely building something. (All in good fun, thank you.)
After two weeks, the Infrastructure department changed the sign allowing up to 45t.
I sous vide now and then, about twice a week for 6 hours each, so around 12 hours a week. That works out to roughly 15 years of usable machine time for the average person.
Not bad at all.
And yes, 15 years is bad. I don't want to replace my entire household every 15 years FFS.
Personally, most of my headphones I look for metal mechanical connections instead of plastic and I buy refurbished when I can. I think I pay about as much as he does or less, but we haven’t really hashed out the numbers together. I’m typing this while wearing a HyperX gaming headset I bought refurbished that’s old enough that I’ve replaced the earpads while everything else continues to work.
Computers and computer parts often have, in my experience, a better reliability record competently refurbished than when they first leave the factory too. I wonder if sous vide cookers would.
This is a fine example of what I meant about people complaining when they use products beyond their design parameters.
I'm on a mostly carnivore, mostly ruminant meat diet and for costs tend to do a lot of ground beef... I sous vide a bunch of burgers in 1/2lb ring molds, refrigerate and sear off when hungry. This lets me have safer burgers that aren't overcooked. I do 133F for 2.5+ hours.
I also do steaks about once or twice a week. I have to say it's probably the best kitchen investment I could have made in terms of impact on the output quality.
- the product doesn't break and you don't buy a replacement from them because you still have a working product
- the product breaks and there is a greater than 0% chance that you will buy a replacement product from them
Of course in practice it's more complicated but I wouldn't be so quick to declare that the math doesn't work out.
Or if you want something even beefier: https://sammic.com/en/smartvide-xl
Sous vide is generally not a bacterial growth risk above 140F. At 150F throughout, you get decent pasteurization in under two minutes. Two days of that is such extreme overkill that I'm concerned about the nutritional effect of overcooking.
The Food Saver style vacuum sealers fail fast for me, so I bought a $400 chamber sealer, and I'm on year 5 with it.
Short rib is shocking where I am. Even chuck is pushing past $15 a pound.
What are you doing for sides/sauce? Generally when I think braise/sous-vide I think some rich, flavourful sauce, but that seems unpractical for daily consumption.
I crisp it up in an air fryer before serving. Here's the full ingredient list: meat, butter, salt. After five years I still look forward to every repeat.
I just replaced an air fryer that lasted two years of daily use, a personal record. I was ready to replace it anyway, because they accumulate grease where you can't clean, and the smell gets interesting.
"Verily, verily, I say unto you, Except a corn of wheat fall into the ground and die, it abideth alone: but if it die, it bringeth forth much fruit.”
(John 12:24)
(I also never managed to get it)
I'm guessing the game probably streams FMV cutscenes off the disc as they play, and the fallback behaviour if it can't find them is to skip rather than crash.
AFAIK the source for FF9 PSX (and all the PSX ff games) has been lost as Square just used short term archives
Also, FF9 does not run at a constant framerate. Like all the PSX FF games it runs at various rates, sometimes multiple at a time (example: model animations are 15fps vs 30 for the UI)
In terms of timers, the bios does grant you access to root timers, but these are largely modulated by a hardware oscillator
(Incidentally, the hardware timing component is the reason a chipped PAL console cannot produce good NTSC video. Only a Yaroze can support full multiregion play)
There are a number of timers and things used. But the claim that it runs slower is absolutely false. It’s just perceived that way because it’s “drawn” slower.
Secondly, it absolutely will run slower. Animations will take longer to complete; FMVs will play at a different rate; controller sampling will be reduced.
My scepticism isn't coming from hearsay or ignorance: I have written PlayStation software, and PSX software is not parallelised, even though it can support threading and cooperative concurrency. The control flow of the title is very locked into the VSync loop, from your first ResetGraph(0) right to your final DrawOTable(*p).
In addition, I have done a bunch of reversing work on the other two PSX games, and they are not monolithic programs. They can't be because there simply isn't enough RAM to store the .TEXT of the entire thing at once. So when you say "the source code", I'm inclined to ask - for which module? The kernel or one of the overlays?
It’s not lost except to maybe Square Enix’s corporate but they don’t know where anything is.
FFIX for PSX would have been written in C (or possibly C++) with PSY-Q. It will not be one program - those games were composed of multiple overlays that are banked in / out over the PlayStation's limited memory.
From what I know the PC release was a port to a new framework, which supports the same script engines, but otherwise is fresh code. This is how it can support mobile, widescreen, Steam achievements etc.
Regardless I can still complain about how intrusive the ads are.
Edit: ah only works on safari
Edit: ah forgot my vpn was off, usually clears all that up for me. Much better now
These types of comments are always very unhelpful.
My choice of device is irrelevant when assessing their crappy site.
Anyway, in answer to the question, I would guess the reason was because of signed / unsigned type promotion.
Who knew at the time they were creating games that would be disassembled, deconstructed, reverse engineered. Do any of us think about that regarding any program we write?
I wonder if any sense this is criticism (or actual criticism) is based on implementers of SaaS who have it so deeply ingrained that “haha what if the users of this software did this really extreme thing” is more like “oh shit what if the users of this software did this really extreme thing”.
When I worked on Google cloud storage, I once shipped a feature that briefly broke single-shot uploads of more than 2gb. I didn’t consider this use case because it was so absurd - anything larger than 2mb is recommended to go through a resumable/retryable flow, not a one-shot that either sends it all correctly the first time or fails. Client libraries enforced this, but not the APIs! It was an easy fix with that knowledge, but the lesson remained to me that whatever extreme behaviors you allow in your API will be found, so you have to be very paranoid about what you allow if you don’t want to support it indefinitely (which we tried to do, it was hard).
Anyway, in this case that level of paranoia would make no sense. The programmers of that age made amazing, highly choreographed programs that ran exactly as intended on the right hardware and timing.
Error: game running for two years, rebooting so you can't cheese a timer.
Does this make the bug any better handled? Bugs like this annoy me because they aren't easily answered.
Although for old games released before internet was widespread in the general population, it might have not been this obvious.
Looking at the various comments, there might even be some kind of weird appeal to leave such things in your game :D for people to find and chuckle about. It doesn't really disrupt the game normally, does it?
Pretty much doable even without resorting to VM migrations or ksplice. My last one had uptime in 1700s (days). Basically I leased it, put a debian on it and that was that until I didn't need it anymore.
Doom is actually such a good game, I always go back to it every few years. The 2016 reboot is also pretty fun, but the later two in the series didn’t do it for me.
So valve next?
They've been working on Linux support since at least around the time that Microsoft introduced the Windows Store... so for the last twelve years or so.
=> https://monado.freedesktop.org/
Not everything, but they do invest in it.
https://www.reddit.com/r/boomershooters/
It's like the game industry got a fake memo saying no one wanted linear story-based games anymore. I ended up buying two more Teyon games because I was so happy with their formula and they are playable in a dozen or so hours. Tight, compact, linear, fun story and game play... No MTX or always online BS and they don't waste my time with busy work.
It's even worse in multiplayer games like COD and BF. As soon as I need to figure out combinations of 5x attachments to guns I lose all my interest in playing the game. That's why I'm still on CS I guess lol.
I'll be honest, I don't like this part. I'm a rabid collector: if the game attaches a metric to an item, I must have all of the items. I end up killing the flow by scouring the level looking for secrets. This is entirely my fault, of course.
The 'mech scale' stuff especially was just boring. I don't remember what they call it in-universe, but essentially the parts of the game where you're playing as a giant robot, just walking over tanks and fighting supersized demons.
I also dislike this trend. As a sibling comment noted, boomer shooters are generally closer to the old-school Doom gameplay, although some are adopting the newer design too.
Sadly it appears that archive.org didn't capture all of the site formatting, but at least the text is there.
But I specifically said that it doesn’t look like SOUB in this particular case, and proposed an alternative mechanism for crashing. What’s almost certain is that some type of UB is involved because "crashing" is not any behavior defined by the standard, except if it was something like an assertion failing, leading to an intentional `abort`.
An actual analysis would be needed to understand the actual cause of the crash.
Another super easy way to enter UB land by assuming an integer is nonnegative is array indexing.
Everything is fine as long as i isn't negative. But if it is… (note that negative % positive == negative in C)

Doom4CE (this port) was based on WinDoom, which only creates the program window once at startup, then switches the graphical mode and proceeds to draw on screen directly, processing the keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch at the load of each level? Then why do we see the old window?
There are various 32 bit integer counters in Doom code. I find it quite strange that the author neither names the specific one, nor what it does, nor tries to debug what happens by simply initialising it with some big value.
Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives a little less than 2.5 years. However, the Doom gameplay tick (or “tic”), on which everything else is based, famously happens only 35 times a second and is decoupled from the frame rendering rate, on systems that are either too slow (many computers at the time of release) or too fast (most systems that appeared afterwards). 2^32 divided by 35, then by 60 seconds, etc., gives about 4 years until overflow.
Would be hilarious if it really is such an easy mistake.
It's a shame the source code for doom isn't available, and that the author couldn't just link directly to a specific line in a gitweb repository. /s
The problem is that many microcontrollers and non-interfaceable or cheaply designed computers/devices/machines might not follow the standards and would therefore be susceptible, although your iPhone, laptop, and fridge should all be fine.
I know the OpenFirmware in my old SunServer 600MP had the issue. Unfortunately I don’t have to worry about that.
Maybe I need my morning coffee. :)
[1] https://www.bbc.com/future/article/20221011-how-space-weathe...
I am not an OS developer, so I take my own conclusion with a grain of salt.
Any OS this game engine ran on would experience this crash.
@ID_AA_Carmack Are you going to write a patch to fix this?
UPDATE: Apparently it was 49.7 days in NT, the same timer bug as 9x. I only remember that this was a server OS. https://www.reddit.com/r/sysadmin/comments/86jxva/anyone_rem...
That, or the Reddit poster and I have the same wrong memory of the bug. I do know my boss at the time made us make the scheduled task to reboot because he understood it at the time to happen on NT 4.
After a few hours precision errors accumulate and the textures become stretched and noisy, but since explosions are generally short-lived it's never a problem.

Yet this keeps bothering me…
"See this crash?
I predicted it years ago.
Don't ask me how, I couldn't tell you."
p.s. I had an old iPaq that I wouldn't have trusted to run for longer than a day and stay stable, kudos for that at the very minimum.
[0] https://lenowo.org/viewtopic.php?t=28
Update:
After the recent hacker news "invasion", I have now determined that the page can handle up to 1536 users before running out of RAM, meaning that the IP camera surprisingly is fully sufficient for its purpose. In other words, I will not be moving the forum in the near future as 32 MB of RAM seem to be enough to run it
Source: https://lenowo.org/viewtopic.php?t=28
It's a router.. oh my god that made me laugh
Source: https://lenowo.org/viewtopic.php?t=28
badass
Which is fine unless you get to HN frontpage.
[0] https://lenowo.org/viewtopic.php?t=28