Can you point out and explain each strawman in detail? It sounds more like someone made good analogies that counter your point and you buzzword vomited in response.
Dissecting his wall of text would take longer than I’d like, but I would be happy to provide a few examples:
He claims I have “…corporate-apologist principles”.
— Though wolfram claims to have read my post history, he seems to have completely missed my many posts hating on TSLA, robber barons, Reddit execs, etc. I completely agree with him that AI will be used for evil by corporate assholes, but I also believe it will be used for good (just like any other technology).
“…tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb”
“HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose”
— Tools are neutral. They have more than one purpose. A nuclear bomb could be used to warm the atmosphere of another planet to make it habitable. Not to mention that any weapon can be used to defend humanity or to attack it. Tools might be designed with a specific purpose in mind, but they can always be used for multiple purposes.
There are a ton of invalid assumptions about machine learning as well, but I’m not interested in wasting time on someone who believes they know everything.
I understand that you disagree with their points, but I’m more interested in where the strawman arguments are. I don’t see any, and I’d like to understand if I’m missing a clear fallacy due to my own biases or not.
Many of their points are factually incorrect. The first point I refuted is a strawman argument. They created a position I do not hold to make it easier to attack.