Y2K was definitely not only fear-mongering. Windows systems did not use Unix timestamps, many embedded systems didn't either, and neither did COBOL. So your explanation isn't really relevant to this particular problem, and those systems were absolutely affected by Y2K because they stored dates differently (often keeping only the last two digits of the year). The reason we didn't have a catastrophic event was the preventative action taken.
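For anyone curious what "stored dates differently" meant in practice, here's a toy Python sketch of the classic failure mode. The variable names are made up, but the two-digit-year arithmetic is the real culprit:

    # Classic Y2K failure mode: only the last two digits of the year are stored,
    # and the century is assumed to be 1900 (hypothetical field names).
    stored_year = 99                      # record written in 1999
    next_year = (stored_year + 1) % 100   # rolls over to 00
    print(1900 + next_year)               # -> 1900 instead of 2000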
Nowadays you're right: there will be no Y10K problem, mainly because storage is not the constraint it was in the 60s and 70s when the affected systems were designed. Back then every bit of storage was precious, so anything unnecessary was omitted. Today there's no issue even for embedded systems to set aside 64 bits for timekeeping, which pushes the problem out to 292277026596-12-04 15:30:08 UTC (at one-second precision), and by then we either add another bit to double the range or we're dead because the sun exploded.
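Back-of-the-envelope check of that overflow year (a rough Python estimate using the average Gregorian year, not an exact calendar conversion):

    # Largest signed 64-bit value, interpreted as seconds since 1970.
    max_seconds = 2**63 - 1
    seconds_per_year = 365.2425 * 24 * 60 * 60   # average Gregorian year, ~31,556,952 s
    years_after_epoch = max_seconds / seconds_per_year
    print(int(1970 + years_after_epoch))          # -> 292277026596, matching the date above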
Not a storage problem, but still a possible problem in UIs and niche software that assume years have 4 digits or 4 characters. Realistically, though, if our civilization is even still around by then, AI will be doing all of that for us and it won't be an issue humans even notice.
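A toy illustration of that 4-digit assumption (hypothetical parser, not any real library):

    # Fixed-width parsing that silently assumes a 4-digit year.
    def parse_year(date_str):
        return int(date_str[:4])          # fine for YYYY-MM-DD...

    print(parse_year("2025-06-01"))       # 2025
    print(parse_year("10000-01-01"))      # 1000 -- quietly wrong once years need 5 digits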
This would be a great short story: for some reason the AI didn't realize there was going to be a date issue and didn't properly update itself, causing it to crash. The problem then is that it has been self-sufficient for so long that no humans know how to restart it or fix the issue, causing society's first technology blackout in centuries.
Yes, it’s kind of a familiar sci-fi trope - a supercomputer that has no built-in recovery mechanism in spite of being vitally important. Like the Star Trek episode where they made smoke come out of a robot’s head by saying illogical things.
The Microsoft Zune had a Y2K9 bug: a lingering leap-year issue in its clock handling (2008 had an extra day in February, so 366 days) caused them to crash HARD on Jan 1, 2009. I remember it being a pretty big PITA getting it back up and running.
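For reference, the widely reported cause was the day-count-to-year loop in the clock driver. Here's a Python rendition (the original was C, and the names here are mine) showing how the 366th day of a leap year leaves the loop with nothing to do:

    def is_leap_year(y):
        return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

    def days_to_year(days, year=1980):
        while days > 365:
            if is_leap_year(year):
                if days > 366:
                    days -= 366
                    year += 1
                # days == 366: neither branch runs, so the loop spins forever
            else:
                days -= 365
                year += 1
        return year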
Y2k9 sounds like a problem that only affects calculations done in dog years.