I expect the reason it gave the correct answer to your first question is simply that it had already seen the problem and memorized the solution - there is empirical evidence that deep neural networks can memorize much of their training data.
Possibly. The problem itself is not on LeetCode, but it has almost certainly been leaked somewhere. However, the model was able to make a few changes to the code with some prompting, which hints at a bit more smarts than just regurgitating an answer.
Nevertheless, the point is moot. I've invented completely novel questions (promise!), only to see them leaked online after asking them just twice. The process is fundamentally flawed, and large language models are just making that glaringly obvious.