As one gets deeper into learning another language, it’s also important to be aware that the meanings of words in different languages rarely map to each other in clean bijections. Common words especially tend to be semantic clouds, not fixed points of meaning.
I don’t know French, but I am sure there are many cases where an English phrase or sentence that includes ‘cat’ should not be translated into French with ‘chat’, and vice versa. (When asked, GPT-4 offers two such examples: “let the cat out of the bag” and “avoir un chat dans la gorge” — literally “to have a cat in the throat,” the French counterpart of having a frog in one’s throat.)
I don’t mean to suggest, though, that memorizing word pairs is not a good way to learn vocabulary in another language. For me, it was an essential step in acquiring the second language that I am now fluent in (Japanese).
Words are always semantic clouds. Failing to grasp that has sent philosophy down many a dead end, and it’s the source of plenty of arguments regular people get into all the time. One doesn’t even need to try mapping between two languages — it’s problem enough trying to map within a single language.
I think this has as much to do with abstract vs. concrete words as common vs. uncommon. If you had to pick a word corresponding to “chat” it’s quite clearly “cat”, despite a few different expressions. But it’s rather difficult to translate “justement” to English, or “random” to French, without further context.