Er... this feels like every major criticism going back even to the pre-'90s expert systems era.
Could you please provide some support for your argument? This claim was repeated a lot in the early days of the modern wave (2012–2016) but has been pretty thoroughly debunked as we've explored generalization and how these models disentangle intrinsic concepts and compute with them. Heck, even modern transformers are Turing complete under restricted memory and use that to their advantage.
Also, I don't mean to be rude, but saying "colonize the parts of the problem space where AI doesn't have the imagination to go" is frankly rather weak, as it leans on the imagination argument against AI. At this point, being adaptable enough to co-integrate will be the good outcome; otherwise, I could see people who follow that advice stuck inside some kind of Sisyphean pseudo-Luddite escapist nightmare.
Source for opinions: I've been involved in ML in some form for most of the modern wave, and am appropriately (quite) skeptical of the AI takeover/revolution/eventual-singularity beliefs.