I’m not defending AI here, but “people have been wrong about other things in the past” is a completely worthless argument in any circumstance. See: Heuristics that Almost Always Work.
Interesting article, but you have to be aware of the flipside: “people said flight was impossible”, “people said the earth didn’t revolve around the sun”, “people said the internet was a fad, and now people think AI is a fad”.
It’s cherry-picking. They’re taking the relatively rare examples of transformative technology and projecting that level of impact and prestige onto their new favoured fad.
And here’s the thing, the “information superhighway” was a fad that also happened to be an important technology.
Also the rock argument vanishes the moment anyone arrives with actual reasoning that goes beyond the heuristic. So here’s some actual reasoning:
GenAI is interesting, but it has zero fidelity. Information without fidelity is just noise, and information work requires fidelity, so a system that can’t solve the fidelity problem can’t do information work.
And “fidelity” is just a fancy way of saying “truth”, or maybe “meaning”. Even as conscious beings we haven’t really cracked that issue, and I don’t think you can make a machine that understands meaning without creating AGI.
Saying we can solve the fidelity problem is like Jules Verne in 1867 saying we could get to the moon with a cannon because of “what progress artillery science has made during the last few years”. We’re just not there yet, and until we are, the cannon might have some uses, but it’s not space technology.
Interestingly, artillery science had its role in getting us to the moon, but that was because it gave us the rotating workpiece lathe for making smooth bore holes, which gave us efficient steam engines, which gave us the industrial revolution. Verne didn’t know it, but that critical development had already happened nearly a century prior. Cannons weren’t really a factor in space beyond that.
Edit: actually metallurgy and solid fuel propellants were crucial for space too, and cannons had a lot to do with that as well. This is all beside the point.
> Saying we can solve the fidelity problem is like Jules Verne in 1867 saying we could get to the moon with a cannon because of “what progress artillery science has made during the last few years”.
Do rockets count as artillery science? The first rockets basically served the same purpose as artillery, and were operated by the same army groups. The innovation was to attach the propellant to the explosive charge and have it explode gradually rather than suddenly. Even the shape of a rocket is a refinement of the shape of an artillery shell.
Verne wasn’t able to imagine artillery without the cannon barrel, but I’d argue he was right: it was basically “artillery science” that got humankind to the moon. The first long-range “rocket artillery” were the V1 and V2. You could probably argue that the V1 wasn’t really artillery, and that’s fair, but it also wasn’t what the moon missions were based on. The moon missions were a refinement of the V2, which delivered a warhead by launching it on a ballistic path.
As for generative AI, it doesn’t have zero fidelity, it just has relatively low fidelity. What makes that worse is that it’s trained to sound extremely confident, so people trust it when they shouldn’t.
Personally, I think it will take a very long time, if ever, before we get to the stage where “vibe coding” actually works well. OTOH, a more reasonable goal is a GenAI tool that you basically treat as an intern. You don’t trust it, you expect it to do bone-headed things frequently, but sometimes it can do grunt work for you. As long as you carefully check over its work, it might save you some time/effort. But, I’m not sure if that can be done at a price that makes sense. So far the GenAI companies are setting fire to money in the hope that there will eventually be a workable business model.
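To make the “intern” framing concrete, here’s a very rough sketch of what I mean by carefully checking over its work: nothing the model produces gets accepted until it passes tests you wrote yourself. The `ask_llm` function is just a placeholder for whatever model/API you actually use, and the edit-distance task is only a toy example.

```python
# Rough sketch of the "untrusted intern" workflow: ask a model for a draft,
# then refuse to accept it unless it passes tests you wrote yourself.

def ask_llm(prompt: str) -> str:
    """Placeholder: call whatever GenAI provider you use and return its reply as text."""
    raise NotImplementedError("wire up a real model client here")

# Hand-written test cases: this is the part you never delegate.
TESTS = [
    (("kitten", "sitting"), 3),
    (("flaw", "lawn"), 2),
    (("", "abc"), 3),
]

def check_draft(source: str) -> bool:
    """Run the model's draft in an isolated namespace and check it against our tests.
    (In real use you'd want an actual sandbox, not a bare exec.)"""
    namespace: dict = {}
    try:
        exec(source, namespace)
        fn = namespace["edit_distance"]
        return all(fn(*args) == want for args, want in TESTS)
    except Exception:
        return False

def draft_edit_distance(max_attempts: int = 3) -> str | None:
    prompt = (
        "Write a Python function edit_distance(a, b) returning the "
        "Levenshtein distance between two strings. Return only code."
    )
    for _ in range(max_attempts):
        draft = ask_llm(prompt)
        if check_draft(draft):
            return draft  # accepted: it passed our tests
    return None  # didn't pass review; do it yourself
```

The point isn’t the specific task, it’s that the acceptance criteria live outside the model, which is also where most of the cost of this workflow hides.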