I'm not a huge fan of AI stuff, but the output quality is (usually) above what BSers were putting out.
While I still need to double-check my BS team members, the code they're pushing has fewer problems than it did pre-AI, across the board. To me, that's a win.
I guess what I'm saying is I'd rather have mediocre AI code than the low-quality code I saw before LLMs became as popular as they are.
Yes. However, frankly, BSers weren't maintaining a good architecture anyway (in my experience). Code simply landed wherever it could rather than addressing the overarching problem.
A BSer pushes for things that don't make sense but sound like they solve more constraints than anything actually implementable could. It's very unlikely they're giving you code that even compiles. (And if they are, it contains a little bug or TODO that just happens to be Turing Award material.)
On a code level I'm inclined to agree that it will do better line-by-line.
On more abstract things, I think it needs intentional filters to avoid following you down a rathole like flat-earth doctrine whenever you happen to match the bulk of opinion among verbose authors on a subject. I don't see the priority of adding those filters being recognized for apolitical, STEM-oriented topics.