Some of the examples in the Wikipedia article refer to such things as small businesses and schools, which were unlikely to have been running software designed or written in the 1960s, though that was certainly the case for some larger establishments...
Yeah. I was in IT until 1993, when many commercial applications were written in COBOL (some still are even now), and historically, to save space in date fields on files, years were stored as 2 rather than 4 digits. Hence for 1964, 64 was stored; for 1983, 83; and so on. The problem would come when the year reached 2000 or beyond: the stored year would be 00, then 01, 02, and so forth, which would be treated as 1900, 1901, 1902, etc.
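Just to illustrate the mechanism (in plain Python rather than the COBOL actually involved; the names and values are my own invention), any duration worked out by simple subtraction of 2-digit years goes wrong the moment the current year rolls over to "00":

```python
# Minimal sketch of the 2-digit year problem: durations computed by naive
# subtraction break when the current year wraps from 99 to 00.

def years_elapsed(stored_yy: int, current_yy: int) -> int:
    """Naive duration from 2-digit years, as the old file layouts forced."""
    return current_yy - stored_yy

# Record created in 1964, stored simply as 64.
print(years_elapsed(64, 99))  # run in 1999: 35 years -- correct
print(years_elapsed(64, 0))   # run in 2000: -64 years -- the 00 is read as 1900
```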
As has been mentioned, not all programs relying on years (some didn't rely on them at all) would have stored years on file as two digits: some would already have used the full four digits, particularly where forward calculations were required. For instance, someone might have taken out a 25-year mortgage in 1990, with a pay-off date of 2015. Programs like that would probably already have expected 4 digits for the year.
That said, it would certainly be important that the issue was dealt with in anticipation of the year 2000 and beyond, especially for sizeable commercial organisations. It would require changing file descriptions so that 4 digits could be stored, and changing all the programs that accessed relevant files so that they would process 4 rather than 2 digits.
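For what it's worth, here's a rough sketch of the kind of conversion being described (again in Python rather than COBOL, and the 50/50 cut-off year is purely my own assumption for illustration): old records with 2-digit years being expanded to 4 digits. Real conversions would have chosen a cut-off appropriate to what the data actually held, or re-derived the dates another way.

```python
# Illustrative expansion of 2-digit years in old records to 4 digits.
# PIVOT is an assumed cut-off: yy >= 50 -> 19yy, yy < 50 -> 20yy.

PIVOT = 50

def expand_year(yy: int) -> int:
    """Map a 2-digit year from an old file layout onto a 4-digit year."""
    return 1900 + yy if yy >= PIVOT else 2000 + yy

old_record = {"account": "12345", "opened_yy": 64, "review_yy": 1}
new_record = {
    "account": old_record["account"],
    "opened_yyyy": expand_year(old_record["opened_yy"]),   # 1964
    "review_yyyy": expand_year(old_record["review_yy"]),   # 2001
}
print(new_record)
```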
Typically, sizeable organisations would have so-called "suites" of programs. For instance, in local government, there might be a suite or set of programs to calculate and bill monthly rents or utility rates or whatever. My guess is that before 2000, altered test files and program suites would have been constructed and thoroughly tested. I can imagine that taking up to one man-month per suite for at least one programmer. Since programmers worked in teams led by project leaders, there would be 4-6 programmers per team, and maybe a number of suites per team. As a rough guess, allow up to maybe 12 man-months per team. They'd be working in parallel, so the elapsed time wouldn't necessarily be cumulative: it could be as short as a few months. The teams would be doing this on top of ongoing maintenance of the old suites, so it probably involved extra payments for overtime, or some organisations might have hired contract programmers to supplement in-house staff.
Some suites or systems would likely have been bought in and used by many organisations, so it would be up to the developer to amend those systems in preparation for Y2K; there wouldn't be duplication of effort across many organisations for those.
No doubt, it was extra expense for organisations, but I always thought it was over-hyped. We were certainly talking about the Y2K issue back in the early '80s, but it wasn't seen as the end of the world: maintenance and development teams often have to implement new systems or amendments to existing ones, sometimes involving considerably more complexity. It would be a bit of a pain, but nothing they hadn't seen before.
The problem came when the MSM got hold of the idea and blew it out of all proportion. Certainly, had no one done anything, there would have been a lot of issues, but that was never going to happen: no sizeable organisation would have countenanced it. They wouldn't have left it to the last minute, either: I'm sure that there would have been target dates for testing of amended suites that came appreciably before the turn of the millennium, so that it was just a matter of implementing the tested suites and files at the turn.
The main commonality with the CO2 issue is that the MSM always focusses on disaster scenarios: that's what sells papers. What's different is that, in practice, the climate system isn't at all like commercial computer systems, which are known in complete detail. No one actually knows all the factors that operate in the climate system, and even where factors are known, it isn't understood how they interact or how to model them properly. The models, as even the IPCC has had to admit recently (albeit sotto voce), have consistently been overestimating the climate feedback effects of anthropogenic CO2: the projected rises in average global temperatures for a doubling of CO2 (the climate sensitivity to CO2) are too large.
An important issue is that the models don't deal with clouds properly, assuming that they constitute a positive rather than a negative feedback. It matters because a change in cloud cover of only a couple of percent can have an appreciable effect on the earth's albedo, i.e. the fraction of incoming solar radiation reflected back into space before it ever reaches the ground to be re-radiated.
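To put rough numbers on that (round figures of my own choosing, just to show the scale): global mean incoming solar radiation at the top of the atmosphere is about 340 W/m², so even a small shift in albedo is comparable to the roughly 3.7 W/m² forcing usually quoted for a doubling of CO2.

```python
# Back-of-envelope illustration: a small albedo change vs. the forcing
# commonly quoted for doubled CO2. All figures are approximate.

solar_in = 340.0        # W/m^2, global mean incoming solar radiation (approx.)
albedo_change = 0.005   # assumed shift from a couple of percent change in cloud cover

extra_absorbed = solar_in * albedo_change
print(f"Change in absorbed sunlight: {extra_absorbed:.1f} W/m^2")  # ~1.7 W/m^2
print("Compare: ~3.7 W/m^2 for a doubling of CO2")
```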
It's entirely possible that a proportion of the observed rise in average global temperatures since 1860 is due to a small percentage change in cloud cover, and that could in part be related to variations in the amount of cosmic rays hitting the atmosphere: there's increasing evidence that cosmic rays can play a part in cloud formation. This is the idea of Henrik Svensmark, whose hypothesis has been tested at CERN, where experiments have verified that cosmic rays can have some effect on cloud formation. El Niño events (natural phenomena) may also have an effect on cloud formation. There is much debate in this area at the moment, but the key point is that clouds are poorly understood and the models make assumptions about their effects that don't jibe with the empirical observations.
It's impossible to have rational discussions about these sorts of things because it's been pre-decided that we already KNOW that CO2 is a bad thing. It MUST be so, and anyone begging to differ must be a swivel-eyed holocaust denier who eats children for breakfast, cashes monthly paychecks from big oil, is a right-wing nutjob, and so on.