There is indeed plenty of translated Akkadian in open-access papers that could have become part of ChatGPT's training set, so it's completely plausible that GPT might have learnt a thing or two about the Babylonian language. However, the questionable 'translation' number 55 casts doubt in my mind as to its accuracy:
> They made Menasî king in Assur, established him as ruler in Babylon.
Just from the history, surely that should be 'He [King Esarhaddon of Assyria and therefore overlord of Babylon and therefore also of Judah] established Menasî, King of Judah in Babylon, as ruler [which he used to be anyway]'? In any case, I'm lucky enough to have a sister who can read Cuneiform, so if anyone's watching this thread hopefully I'll be back with an authoritative answer!
I can confirm that GPT 4 fails pretty badly on Middle Egyptian texts that are outside of its training set, using both MdC encoding and transliteration.
I get translations which have at most a word or two in common with the original text. Almost pure hallucination.
I might be one of the few people who didn’t like this book. Philip K Dick (the author) manages to come up with some of the most incredible premises. And then somehow the plot meanders into some boring, meaningless corner that left me mystified. How could the person who thought of this premise also have written this plot?
It’s not just Androids. In another book he comes up with this fascinating idea of an alternate history where Japan and Germany won WWII. And then it turns out astrology is real and everyone is obsessing over it. Just why?
I’m sure everyone will say actually astrology is fascinating, or you need to look past that to get the real meaning of the book. But I prefer the books that don’t meander into meaningless dead ends. There are plenty of them out there.
Yet others may say Dick was enormously influential and inspired many other brilliant authors. Ok, I’ll read those authors then.
Overall I’d rate the books I’ve read by Dick as 2 stars. And only because the premises are so fantastic.
I do not know if this will make you change your opinion, but at least it will "inform" it a bit better: "The Man in the High Castle" (i.e. the one with the alternate WWII ending) was allegedly "plotted" by using the I-Ching.
Basically before every chapter/scene Dick would cast the I-Ching[1] and use the response to decide "what happens now?".
Dick was always obsessed with a few core themes ("what is real?", "what is human?") plus a ton of personal and psychological issues.
He was always erratic in his production, and while I like his works, I would generally agree that the way he developed his own premises could often be disappointing, at least in the case of the novels.
Did you read something about him as a person? He was a profoundly weird person (one example from wiki: "Dick alleged that Stanisław Lem was probably a false name used by a composite committee operating on orders of the Communist party to gain control over public opinion, and wrote a letter to the FBI to that effect"), and that weirdness seeped into his novels. I think you could try one of his semi-autobiographical novels (https://en.wikipedia.org/wiki/A_Scanner_Darkly or https://en.wikipedia.org/wiki/Radio_Free_Albemuth) to see where this weirdness in the more known novels comes from.
"Androids Dream of Electric Sheep and Gaslight You into Thinking They Are Totally Real Even Though They Have a Dozen Eyes" doesn't have the same ring to it though.
We can conclude that the question posited by the book is too abstract to answer. Until a book with a better question is written, the case is closed. :)
The same way we do with humans: reflection, critical thinking.
(Although, disrobing the "Cato the censor" robes, yes I loved this image.)
(Although II, the original concept of "androids dreaming of electric sheep" is more related to the PKD theme of the human problem in terms of emotion, the deontic call for empathy, the human property of the inner trouble within a mechanistic perspective. So it could perhaps be rephrased as "is angst inherent in humanity [as a quality]" and "is empathy the solution". Edit: or "Is humanity a goal, does it imply care, and do you reach it with empathy". ...In tentative terms: it has been a long time since I last read the book.)
I think the breakthrough development from GPT to a general-purpose AI will be the introduction of at least some form of self-awareness.
Basically, at the moment, GPT models don't have the concept of "knowledge", or at least not in a way accessible from the inside of the model. Ask a human something they don't know and they'll tell you they don't know - GPT models will pull something out of thin air as they aren't aware they don't "know" the answer or that the input is insufficient.
When people dismiss the alignment problem by anthropomorphising AI, my go-to meme is:
Owing to the fact that we have no idea what we're doing, the first human-like AI is likely to have literally every mental illness at once: Alzheimer's, paranoid schizophrenia, grandiose narcissism, manic depression, and unconstrained hallucinations.
So yes, we'll need digital neuroleptics, amongst other things.
ChatGPT isn't an ultimate life form that contains all of the world's knowledge. It's a language model, mostly useful for bulk scams and for filling voids in very sad, lonely people's lives.
The blog post mentions that you have to double check everything, but they did not check the translation. If you just put the same text through multiple times (in different chat sessions), you get completely different answers back. I think it's just making up the whole thing.
It is. Here's a professional translation for this [0]:
...they made bricks. I razed that small palace in its entirety, took a large area from the fields for an addition, and added (it) to it (the palace). I laid its foundations with limestone, strong stone from the mountains, and raised the terrace.
I summoned the kings of Ḫatti and Across the River (Syria-Palestine) Baʾalu, king of Tyre, Manasseh, king of Judah, Qaʾuš-gabri, king of Edom, Muṣurī, king of Moab, Ṣil-Bēl, king of Gaza, Mitinti, king of Ashkelon, ...
Even if the translation wasn't very wrong, I'd be incredibly skeptical of this because Akkadian and Sumerian share a lot of words that are written the same way, but have very different meanings (which can even be mixed within a document!). They're usually documented separately in dictionaries and most translations will have a convention to denote each usage. [1] has a good example of how dense these sorts of notational conventions can get.
Not only double-check, but probably triple- or quadruple-. There's too much chance of hallucination to trust one or two instances; I think this could be done, but would require a number of disconnected instances doing the translations, with the resulting documents compared with each other to see a) which ones are more prone to making things up, b) if they hallucinate in similar ways, and c) where they agree, which would allow further study to find out what may have been in their training that allowed similar results to be output.
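The compare-the-instances idea could be sketched mechanically. Here's a minimal illustration (hypothetical translation strings, not real model output) that scores pairwise agreement between independently produced translations with the standard library's `difflib`; low scores everywhere would suggest the model is improvising rather than translating:

```python
from difflib import SequenceMatcher
from itertools import combinations

def agreement_scores(translations):
    """Pairwise similarity ratios between independently produced translations.
    Uniformly low scores suggest hallucination; clusters of high scores mark
    passages worth closer study against the training data."""
    scores = {}
    for (i, a), (j, b) in combinations(enumerate(translations), 2):
        scores[(i, j)] = SequenceMatcher(None, a, b).ratio()
    return scores

# Three hypothetical outputs for the same prompt:
outs = [
    "They made bricks and razed the small palace.",
    "They made bricks; I razed that small palace entirely.",
    "He established Menasi as king in Assur.",
]
scores = agreement_scores(outs)
```

Here the first two outputs would score far closer to each other than either does to the third, flagging the outlier for scrutiny. Character-level similarity is of course a crude proxy for agreement in *meaning*, so this would only be a first-pass filter.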
ChatGPT is the only LLM I have seen that can count syllables at all. It seems to be a gap in the way most of them are trained, since they only get text and never “hear” words pronounced.
I would try round-tripping it at least. But I would have to know a little more Akkadian to tell if it's coming up with synonyms or completely different words.
I tried with some other undeciphered texts and I just got random translations that changed drastically with every attempt. Once I became skeptical, I tried to see if it could do a simple replacement cipher on English text, and it could not. I don’t think ChatGPT is a good fit for this kind of problem.
Yes, I’ve also seen it struggle with ROT13. I’ve read that this is because the tokenizer breaks up words in a way that makes mapping to ROT13 hard. I don’t expect it to have much luck with a language with a small corpus.
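For context on why this failure is striking: the cipher itself is a trivial, fixed letter-by-letter mapping that a few lines of code handle exactly, so the difficulty plausibly lies in the model seeing multi-character tokens rather than letters:

```python
import codecs

# ROT13 is a fixed mapping on the 26 letters; Python ships it as a codec.
plain = "Esarhaddon king of Assyria"
cipher = codecs.encode(plain, "rot13")
assert codecs.encode(cipher, "rot13") == plain  # ROT13 is its own inverse
```

A model operating on characters would only need to learn 26 substitutions; one operating on tokens like "Esar"/"haddon" has to learn a separate mapping for every token pair, which it has essentially never seen in training.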
The tokenizer, as far as I know, is just byte-pair encoding. You take your whole corpus, find the most common two-byte pair (probably ".[space]" for the first iteration), and assign it to a token. Then you do it again, with the previously found token now eligible as part of a byte pair. Do it enough times and eventually you get full words as tokens if they're common enough, and for more uncommon words just the root of the word (later you can assemble root+ing, for example, "ing" being just a normal token among others).
It struggles with rot13 because people don't generally publish large corpora of rot13'd text next to the plaintext, so the problem compounds: on one hand there are probably not many rot13'd words recognized by the tokenizer, and on the other hand, even if there were, the model wouldn't be trained to predict the correct decoding after those tokens, because there are very few rot13 Rosetta stones just lying around.
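The merge loop described above can be sketched in a few lines. This is a toy character-level version for illustration only; GPT's actual tokenizer is byte-level BPE with many refinements (pre-tokenization, a fixed learned vocabulary, etc.):

```python
from collections import Counter

def bpe_train(text, num_merges):
    """Toy byte-pair encoding: repeatedly merge the most frequent adjacent
    pair of symbols into one new symbol, exactly the loop described above."""
    seq = list(text)          # start from individual characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break             # nothing repeats any more; stop merging
        merges.append(a + b)
        merged, i = [], 0
        while i < len(seq):   # rewrite the sequence using the new symbol
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return seq, merges

tokens, merges = bpe_train("low lower lowest low low", 6)
```

On this tiny corpus the learned merges build up "lo", then "low", and so on, which is exactly the root-then-suffix behavior mentioned above.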
I worked for years as a translator (PT/EN). I hated 'just correcting' bad translations, whether from software or from people bad at the job. I cannot figure out why anyone would think that generating half-baked text or code with some generic tool, and then having to parse it, identify errors, and correct them, would be preferable to just doing it right yourself. It's much less pleasant work.
Honestly this is an issue I have with Copilot: I hate reading and correcting the code it produces. The time I save by having it generate a block of 10 lines is lost by having to review it carefully and fix bugs (sometimes subtle ones).
I prefer doing it myself from scratch most of the time.
I think this is a result of conflating the act of coding with code as a result. The effort in understanding a domain and specifying a program is about learning and communicating about it with other people.
A working program is useful, but deepening knowledge about a business domain and being able to adapt to changes on the ground is the real value. Being accidentally right but unable to be consistently right over time is a distraction. It can generate false confidence, and future maintenance burdens.
I’ve been working with a professional Latin translator and encouraging her to use GPT4. She gagged and said it would be more work, and against professional ethics.
Still, I’ve been able to use GPT4 to read Latin fluently… and I often find its translations to be more readable. There is an insane amount of untranslated Latin, and now I don’t have to pay 8k just to read a book.
No, and I'm glad there's something for everyone. I'd hate to live in a world where only things approved by some definition of good are available. Doesn't lead me to think that high demand for something makes that something a reasonable thing.
I think "cheap iffy translations are great because cheap" and "rich people drinking and fighting over parties" are pretty unreasonable things. I'm proud of that stance.
This whole thing reminds me of Snow Crash. It’s like Hiro talking with the librarian AI that comes with the infocalypse stack from Lagos. Too bad I don’t have any problems that interesting to solve!
A google search for their first prompt (ignoring, of course, the prompt intended to get chatgpt into "the right state of mind") "Can you speculate on how the biblical name of king Manasseh of Judah would have been written in Babylonian Cuneiform?" actually turns up the https://armstronginstitute.org/160-esarhaddon-prism-proves-k... link describing in detail the artifact they are interested in as the 5th result (4th if you don't include the post itself). Seems a lot simpler than trying to prompt chatgpt into giving you search terms that you use in some other search anyway.
> (49) They built the lofty temple.
(50) As for the temple's foundation, they laid it solidly
(51) Like the Apsu, it rose up from the heart of the earth
(52) Its shrine was radiantly visible in the midst of the city
(53) Its brilliance extended over the lands.
(54) They established the kingship in Hatti and made the crown resplendent.
(55) They made Menasi king in Assur, established him as ruler in Babylon.
(56) Qausgabri ruled in Uruk, Musiri ruled in Ma'ab
(57) Issenen, king of Hazitu, reigned over the people of Esqaluna.
true translation:
> (49) they made (unburnt) bricks. That little palace
(50) throughout I destroyed and much land
(51) as an addition from the fields I cut off and
(52) added thereto : with limestone, the solid stone from the mountains,
(53) I laid its foundation and filled a terrace :
(54) I assembled the kings of the Hittites and across the river.
(55) Ba'lu, king of Tyre, Menasi, king of Judah,
(56) Kausgabri, king of Edom, Musuri, king of Moab,
(57) Sil-bel, king of Gaza, Metinti, king of Ascalon,
And, in fact the thing that the author seemed to care about (whether it was about Menasi of Judah), chatgpt, unsurprisingly, got wrong. Though again, just using google would've answered that immediately with the detailed description of the prism that I noted it returned as a top result above.
Here is GPT-4's attempt; 49-51 appear to be far off while the rest at least have some relation to the British Museum translation. I imagine there is some room for ambiguity.
(49) They brought the (il-bi-nu) sig4 e2-gal-tur-ra to his (szu-a-tu2) place.
(50) To the market, they went (aq-qur-ma) quickly (qaq-qa-ru), very much (ma-a'-du).
(51) Like (ki-ma) the (a-tar-tim-ma) from the heart (lib3-bi) of the (a-sza3-mesz) they were taken away (ab-tuq-ma).
(52) His (ugu-szu2) foundation was laid (usz-rad-di) in the (na4) stone (pi-i-li) of the (na4 kur) powerful mountain (dan-ni).
(53) They (usz-szi-szu2) built (ad-di-ma) it complete (tam-la-a) and perfect (u2-mal-li).
(54) And thus (ad-ke-e-ma) the kings (lugal-mesz) of Hatti and the other side (e-ber) of the river (i7).
(55) King (disz lugal) Balu of (iri) Surri, King (disz lugal) Menasi of (iri) Yaudi.
(56) King (disz lugal) Qaus-gabri of (iri) Udume, King (disz lugal) Musur of (iri) Maab.
(57) King (disz s,il-en lugal) of (iri) Haziti, Metinti (disz me-ti-in-ti) the prince (man) of (iri) Isqaluna.
I don't know that ChatGPT would be very successful at it, but I could imagine a multimodal LLM being extremely useful. There are tens of thousands of untranslated tablets.
Yeah, I had a little bit of luck with phonetic transcription - producing standard readings from the closely transcribed cuneiform. But mostly no. And it failed at translation with anything beyond the simplest two / three word phrases. Although the fact that it can do even that, sort of, is impressive, given that Akkadian -> English was certainly not something ChatGPT was optimized for.
I've played around with assigning it the task of phonetically transliterating English into other writing systems. It does a good job on things like Cyrillic, Greek, and Japanese phonetic characters where there is a large corpus to work from and handle scenarios where 1:1 phonetic equivalence isn't available. Curiously it also does a passable job converting to long obsolete manyōgana (kanji used purely phonetically) despite there being no real corpus to learn from.
As others have noticed, the translation is hilariously wrong. Even if your only method for gauging the correctness of the so-called "translation" were googling, you would easily find Leichty's [0] or Borger's [1] editions with translations.
So to be very clear, saying "a translation is lacking" is thoroughly incorrect. It betrays a lack of familiarity with the subject and its reference works.
Moving back to when I first saw this post, I immediately gravitated towards checking the "translation" and so neglected to read anything else written. Now that I've lovingly wasted my time transcribing and translating from the supplied transliteration before checking Leichty [0], I can move on to wasting my time with the other issues.
The initial complaint about a "Karen" is obnoxious and unhelpful. You can always _write down_ the id number or title of an object in a museum if you have a specific interest. If the tablet one is interested in is Biblically relevant, then it is almost certainly referenced in literature (and that number you wrote down is a shortcut to finding said literature). All you have to do is some reading and/or contacting of relevant scholars. Tablets with references to Biblical events and persons are often exceptionally well documented due to Biblical scholarship's smothering embrace of Assyriology. If you think there might be multiple copies of this tablet, then you could ask Hobby Lobby if they have a copy in the collection they obtained by funding ISIS. I'm sure they would love to talk about it.
Moving away from museum etiquette, there are some other issues.
Searching through the CDLI requires some effort. It is a tool, not a search engine with results ranked for relevance and ad revenue. Knowing the language and the processes of transliteration, transcription, and translation will help. Spellings can vary within dialects, and search will not conjugate or magically find alternative cases for you. The transliterations on the CDLI are less than perfect, so don't think you can just train some LLM to do this for you. (There is a lot of bad data. Feel free to make some more bad data by doing some transliterating yourself with the provided low-DPI images. Transliterating without both the tablet and a movable light source in front of you can border on impossible. You might make a transliteration, but it will be bad if you lack the will to spend weeks staring at a photograph.) I found at least 25 texts with decent mentions of some Manassehs on the CDLI. Keep searching, and remember that ChatGPT doesn't really understand cases and conjugations in the Akkadian dialects :) (especially for loan words lol).
Oh and "Lugal" is Sumerian. Saying "the Akkadian word Lugal" is simply wrong. This is basic stuff.
In Neo-Assyrian cuneiform (I hesitate to say all cuneiform writing systems but it probably is all, including the non-Semitic languages), LUGAL is a logogram for the "king". In this text, since it is written in the Neo-Assyrian dialect of Akkadian, that would be šarru in whatever case (nominative, genitive, accusative) is appropriate.
Oh and finally, it is very clear that it is a King Manasseh _of Judah_. Not Assur. Manasseh is not an Assyrian name. If you actually look at the text and just read the goddamn Akkadian, this will be obvious. It clearly says "m^me-na-si-i LUGAL URU.ia-ú-di" not "m^me-na-si-i LUGAL aš-šur^ki". It's not hard to see how wrong the translation is if you just, you know, actually translate the Akkadian. Looking more closely, ChatGPT seems to be unable to associate the given names of kings with place names. Can it not apply its "understanding" of the genitive? Is it just stupid? In fact, many of the place names in the ChatGPT translation are not mentioned anywhere in the column, let alone in this specific section.
From these initial observations, GPT4/ChatGPT seems incapable of understanding rote definitions of Akkadian vocabulary, Akkadian grammatical constructs, determinatives, logograms, et al. The reading of col. V line 55 is incredibly simple. If the now GPT4 powered ChatGPT cannot translate _that_, I suspect it will be a long while before LLMs can even begin producing marginally correct translations of texts written in cuneiform. I think that doing this correctly will require an absolutely massive, tailor-made model. Since the age of giant models is now over [2] (lol) the funding just probably won't be there.
From fighting with this now essay-length comment, it is my newly formed opinion that proofreading hallucinated translations is more difficult than just doing the translation yourself. Any work with untranslated cuneiform requires accuracy and a thorough comprehension of the text itself. Errors, even small ones, can completely ruin whatever point/thesis you were trying to make.
You have to remember that when dealing with cuneiform texts you aren't dealing with one language. You are dealing with whatever the cuneiform was written in, probably some Sumerian phrases, maybe the language of the scribe, the languages of your reference works and dictionaries (English, French, German, etc), the idiosyncrasies of the scribe from 4000++ years ago, the idiosyncrasies of the transliterator, missing chunks of tablet, and a writing system that has no modern equivalent. And even just knowing all those things isn't enough; you personally have to be able to fill in all the gaps left by the human actions along the chain of translation. LLMs seem to struggle with the unsaid, barely hinted human influence that is a core part of translating texts written in cuneiform. Will an LLM ever be able to imagine itself as a scribe 3000 years ago, pushing a reed stylus into damp clay to make the signs of an already dead language? Will it understand why that is necessary for translation? Can it comprehend the nuance stacked on nuance that led to it ingesting whatever text it is going to fail to translate? If it does not, then it is a poor translator. And when working with languages written in cuneiform, a poor translation is worse than useless.
Ah, I didn't catch that.
Perhaps that's why the quality was just so poor. I did look at the attempted GPT4 "translations" elsewhere in the thread [0]; my views have not changed.
Their apparent superficial improvement may make for an even worse "translation". Instead of being obviously incorrect, they might beget some misplaced trust. With the GPT3 version, at least the single slice of hallucination was obviously useless. With the GPT4 result, some parts seem to be more correct. Mainly, it's not hallucinating locations and seems to actually have _some_ proficiency with the genitive case. Though I would say the GPT4 translation is an improvement over the one from GPT3, it has improved from something with no basis in reality to one that is more wrong than right.
From col. V line 55 and later, GPT4 gets the bottom of the barrel usage of the genitive mostly correct. Before that, it can't grasp the concept of an adjective. So I guess that's an improvement from "lacking a basis in reality" to "still completely wrong but inspiring confidence in the untrained eye".
Not great.
Although it attempts to transcribe some of the place names, meaning it is no longer hallucinating them, there seems to have been no effort made at translation.
This is again (and again and again it seems) something that is both extremely basic and essential; (known) place names are not hard.
On the topic of transcriptions, the names of kings are poorly transcribed. It clearly doesn't know what to do with alephs in both place and personal names.
Another less than stellar effort.
Everything before line 54 in the GPT4 "translation" is a bloodbath. Picking it apart is Sisyphean.
Taking a look at a different attempted GPT4 "translation" [1], this time of col. I, I don't have anything to add that I haven't already said.
It is useless incoherent garbage that the popular imagination might mistake for a real translation. The best way to correct the result would be to re-translate it yourself. It smashes a bunch of words together to form an output that so thoroughly differs in meaning from the original that it has graduated from "wrong" to "fiction a layperson might believe".
The Claude attempt [2] is more of the same. An awful "translation". Incorrect information. General incoherency on the ancient near east.
Extra points off for using unicode cuneiform rather than just transliteration. Asking about something in "Babylonian cuneiform" is a meaningless way to start; what time period are we talking about here? They can be pretty different... The compliance and lack of clarification from the LLM is pathetic. And the topic at hand is NA cuneiform, not OB, MB, or NB (or LB). When you program in C#, do you say you're a C programmer?
Esarhaddon's prisms are not crazy exotic, grammatically complex, unknown, and/or untranslated texts.
They are the assyriological equivalent of an uncrustable. The grammar is simple and the writing system (NA cuneiform) is refined.
There is nothing difficult here.
The incompetency and general idiocy displayed by the now three different (though perhaps incestuously related) LLMs in this thread is shocking.
Who has the time and will to stem this indomitable wall of faeces being spewed towards the ancient near east by these chatbots? Going through each response, line by line, to demonstrate that yes, this translation is indeed wrong, is taxing. There is no joy here.
And that's just the translations. I don't know where Claude [2] got its ideas about ancient near eastern writing/phonology/grammar/culture. Maybe it makes sense if you glaze over while skimming through a poorly written wikipedia article (which encompasses most writing on the ANE). Nearly every single line of response has something that ranges from subtly to flagrantly wrong. Claude (and GPT friends) are making bricks for your foundational understanding of the ANE. It just so happens that those "bricks" are frozen blocks of ground beef and you're building a foundation at Nineveh. The bricks will rot and fester in the pitiless sun; as they melt the surrounding baked clay, maggots will weave through your walls and your house will become an abomination better left un-built.
These LLMs have no place in Assyriology or for those working with aspects of the ANE. In academia, there is no room for "improved" "translations" that are the product of merged queries "strengthened" by the reprocessing of garbage [1]. An incorrect translation is still incorrect, no matter how much its readability has been improved through the chewing of regurgitated cud. "Partially correct" or "mostly correct" translations are worse than irrelevant, they waste the time and expertise of all they touch.
Many of the academically relevant translations done today are not of simple, easily read, and well preserved texts like Esarhaddon's prism seen in this thread. Simple texts with available transliterations/transcriptions that also somehow lack translations can be trivially read and translated on the fly off the page with experience, no LLM needed.
With complicated translations, there is really no room for error. An LLM translated passage that differs in meaning from the original text due to the LLM's fundamental inability to comprehend basic grammar is not something to celebrate. The correct interpretation of the text is the point. Until LLMs find/are given a non-hallucinated "understanding" of the cultures and writing systems in the ANE, any produced translation will be dangerous to the amateur and worthless to the academic.
If you fall into the category of amateur, rather than running the CDLI's transliterations through a chatbot, just find a real translation of the text from an academic source. Doing this will expose you to relevant context in the form of commentary, texts, and citations. References to literature that exists in our shared reality rather than those generated by the throes of a hallucinating LLM can be helpful for research.
The GPT3 translations of NA are a joke. Those done by GPT4 are similarly worthless while containing fragments of connection to the text. Although I would consider the GPT4 "translations" to be "better", I would also think them more dangerous to the reader than GPT3's due to their thin gilding of competence. Anthropic/Claude have a long way to go. I suggest its makers start by wiping all training data pertaining to the ANE and begin again with fresh data from competent authors.
Actually-- though I love the attempt to use AI on this problem-- you could probably get an answer in a fairly straightforward manner by contacting the museum.
Curators (and knowledge organizers & cataloguers of all sorts like professional librarians) tend to very much enjoy the curiosity of the public into obscure niches. It might take time in terms of foot-speed, but the current staff could likely track down the records of all displays from that time period and thereby narrow down the field significantly.
When I was at University the reference librarians were always extremely enthusiastic to help with any request, especially those that went beyond something like "could you point me towards material about the foundations of modern psychology?" or some other straightforward request.
As a side-anecdote to this, I once requested of a reference librarian where I might find a copy of a (US) state's budget appropriations. They stayed with me for more than an hour as we explored various documents in more granular detail & historical trends, ending only when my own endurance gave out.
I don't know. A natural byproduct of my job is that every few days I have to track down information related to someone's "gut feeling", no matter how much of a non sequitur their reasoning is. Even knowing that sometimes they don't care because they're just fishing for any evidence to push their agenda forward, even the more absurd questions are interesting in a puzzle-solving-game fashion: pulling together the informational pieces to make the question go away.
I'm mystified that there would be an expectation of accuracy based on it happening to give an accurate response to a question about the one word they knew off the top of their head.
I’m more thinking that a curator could provide a list of items on display in that particular exhibit during that short period of time, so the author could narrow things down, with corresponding descriptions like those that would accompany the exhibit.
Museums, like libraries, are organizers of information. Think of them somewhat as databases. Of course some are better than others at this.
I once had cause to track down a beautiful statue I’d seen in the MoMA some years earlier because I wanted to find a reproduction. It was gone. I found a local docent and asked, describing it (bronze color, a few feet high, sweeping curves that led to points and formed a vaguely human shape).
They weren’t sure but went to inquire. I waited maybe half an hour before they came back and said it was titled “Unique Forms of Continuity in Space”, but it was not currently on display. I asked if it was somewhere in storage that would allow a quick viewing (sometimes possible) but unfortunately it wasn’t.
But being given the title, it enabled me to buy a small reproduction, which sits on my desk :)
What does that have to do with poor evidence of accuracy? I was only talking about how silly it is to train an AI with specifically what to respond if someone asks what a particular word means, then ask what that word means and think it means anything when it spits out the answer.
I agree that you should just talk to people, it sounds like this person is underestimating librarians (and museum curators, who I will avoid calling "museum librarians" because it seems impolite), who are far better at parsing incomplete details into a coherent search methodology.
It's amazing how far they can get with very very bad descriptions.
At that point I could no longer tell if the author was just trolling. Is it not obviously absurd? We have actual translation tools that are trained to be accurate instead of just return a likely response to a query of that type.
Could have gone without the Karen digression. It took up a third of the post, didn’t really have a point other than that he didn’t get a picture of the tablet, and wasn’t even really in the original spirit of the phrase anyway.
Some people have been "strengthening" ChatGPT by re-running the same query several times, then combining the various outputs. I asked GPT 4 to translate one of the linked snippets twice, pasted in the original plus the variants, and told it to merge all three into an improved translation. I also gave it metadata and context from the page. Without that, it complained that an isolated snippet of Sumerian or Akkadian is too difficult to translate:
Esarhaddon, the great king, the powerful king,
King of the Universe, King of Assyria, Governor of Babylon,
King of Sumer and Akkad, King of the Four Corners of the Universe,
Wisely nurtured by the great gods,
From his earliest days, Assur, Shamash, Enlil, and Ishtar,
The gods of Nineveh and the gods of Assur,
For the kingship of Assyria, have determined his fate.
The circumstances of my origins, my noble lineage, they caused me to understand.
In the service of Assur, Sin, Shamash, Enlil, and Ishtar,
The gods of Nineveh and the gods of Assur, they created me,
Amidst my numerous heroic deeds, my valor has no equal,
They said, "This is the one, let him rule!"
Shamash and Adad revealed this to me through a favorable omen,
They declared to me: 'You are the one who will shepherd our people',
Their weighty destiny, they assigned, and the land of Assyria became great,
My numerous offspring, the product of my loins, they caused to grow and multiply.
Before Assur, Sin, Shamash, Ishtar, Marduk, the gods of Assyria,
The gods, my helpers, who reside in heaven and earth, they have enlarged my reign.
Their weighty command, they caused to be heard,
In the days of abundance, from dawn to dusk, in their temples, they have decreed my fate.
The palace of Esarhaddon, great king, mighty king,
king of the world, king of Assyria, governor of Babylon,
king of Sumer and Akkad, king of the four quarters, true shepherd,
favorite of the great gods,
whom from his childhood the gods Aššur, Šamaš, Bēl, and Nabû, Ištar of Nineveh,
(and) Ištar of Arbela named for the kingship of Assyria
I am my older brothers’ youngest brother (and) by the command of the
gods Aššur, Sîn, Šamaš, Bēl, and Nabû, Ištar of Nineveh, (and) Ištar of Arbela, (my) father, who engendered me,
elevated me firmly in the assembly of my brothers,
saying: ‘This is the son who will succeed me.’
He questioned the gods Šamaš and Adad by divination, and they answered him with a firm ‘yes,’
saying: ‘He is your replacement.’ He heeded their important word(s) and gathered together the people
of Assyria, young (and) old, (and) my brothers, the seed of the house of my father.
Before the gods Aššur, Sîn, Šamaš, Nabû, (and) Marduk, the gods of Assyria,
the gods who live in heaven and netherworld, he made them swear their solemn oath(s) concerning the safe-guarding of my succession.
In a favorable month, on a propitious day, in accordance with their sublime command,
I joyfully entered the House of Succession, an awe-inspiring place within which the appointing to kingship (takes place).
The gist of it seems to match the GPT 4 translation pretty well, and if anything, I prefer the GPT 4 version, as it is more readable!
The first text is not the same as the second. A lot of the words are the same, but the meaning is not. It looks like it recognised some parts and made up the rest (at least to me) from the provided metadata.
A known limitation of word-at-a-time GPT models is that they can't backtrack, edit, or iterate on what they're doing. You can see these limitations when asking them to solve puzzles involving "term substitution". They can only do these up to a certain complexity, and then they run out of steam.
However, if you run their output back through for a second round, they can continue substituting!
Essentially, GPT models make mistakes, but can also recognise those mistakes and fix them on subsequent iterations. Much like humans.
Another factor is that the "temperature" setting means that the outputs have an element to them that's purely random. Combining multiple outputs "anneals" out this randomness by converging on output that is the most consistent. Essentially, it's like asking a team of people to brainstorm, and then combining their partial solutions into a complete one. Once again, very human-like!
This has been tested (semi) scientifically. Running GPT 4 across some puzzles or exams might yield a 60% success rate if you give it one chance. If you let it solve each problem by running it four times and then letting it combine its own outputs itself (not via human selection!), it can get up to an 80% success rate. See: https://www.youtube.com/watch?v=wVzuvf9D9BU
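The "sample several times, then have the model merge its own drafts" loop described above can be sketched roughly as below. Note that `query_model` is a hypothetical stand-in for a real LLM API call (the stub here just simulates temperature-driven variation), and the prompt wording is an illustrative assumption, not a tested recipe:

```python
import random

def query_model(prompt, temperature=0.7):
    """Stand-in for a real LLM call; returns one sampled completion.

    A real implementation would call an API here. The stub below merely
    simulates the run-to-run variation that a nonzero temperature causes.
    """
    variants = ["translation A", "translation B"]
    return random.choice(variants)

def self_consistent_answer(prompt, n_samples=4):
    """Sample the model several times, then ask it to merge its own drafts."""
    drafts = [query_model(prompt) for _ in range(n_samples)]
    merge_prompt = (
        prompt
        + "\n\nHere are several candidate answers:\n"
        + "\n".join(f"{i + 1}. {d}" for i, d in enumerate(drafts))
        + "\n\nCombine them into a single improved answer."
    )
    # A lower temperature on the merge step keeps the final pass closer
    # to the consensus of the drafts rather than adding fresh randomness.
    return query_model(merge_prompt, temperature=0.2)
```

The merge step is what "anneals" out the per-sample randomness: divergent wording between drafts tends to get resolved toward whatever the samples agree on.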
There's nothing scientific about this. People are just holding a black box and saying "if I jiggle it like this, I get better results". Better results on simple contrived puzzles are hardly evidence that recycling the output improves it in any absolute sense. Besides, translation, being open-ended and heavily context-dependent, will likely suffer from such recycling.
None of the papers are offering falsifiable premises and testing them. This is not doing science; it is merely assembling metadata. I find it shameful that serious academics and institutions indulge in this kind of numbers game.
"Let's publish a bunch of measurements we took" might be rigorous and, if you squint, look like advancing science. But it is just gaming the publishing game.
No, a large chunk of biology was figuring out species evolve and were not created as they are right now. Or that every living thing is made of cells that share the same basic chemistry and structure, not special kinds of tissue.
The busywork in between might have looked like science and occupied a lot of time and space. Relativity and quantum mechanics generated a lot of falsifiable claims that were eventually tested. Yet we have been building larger and larger colliders to "collect and publish data", and the best outcomes so far have been a jobs program for engineers (useful and good) and the Web (useful and good). But we shouldn't confuse collecting data and engineering with advancing science.
Given how even professional translations sometimes vastly diverge, it's not too bad indeed. Getting there from highly damaged and ambiguous cuneiform tablets with multimodal models will be a bit more difficult, but there's hope for GPT 5.
That would be a much smaller mushroom cloud than the many we have already had. It would be fascinating, but also something that would impact very many people.
Below are Claude's[1] responses. Claude+[2] might do better, but I don't have a subscription, so can't test it.
Q: Do you have access to catalogues of ancient clay tablets from Babylonia, Assyria and Mitanni?
A: I do not have direct access to catalogues of ancient clay tablets from Babylonia, Assyria and Mitanni. I am Claude, an AI assistant created by Anthropic.
Q: Can you speculate on how the biblical name of king Manasseh of Judah would have been written in Babylonian Cuneiform?
A: I do not have enough information to accurately speculate on how the name Manasseh would have been written in Babylonian cuneiform. Some considerations:
1. Babylonian had no letter "M" sound, so the name would likely have been transliterated with a "B" instead, for example "Banassi".
2. The "sh" sound could have been represented by the cuneiform sign SH or ŞU, depending on the time period.
3. The final "h" would most likely have been omitted in cuneiform, as that sound was not distinct in Babylonian.
4. Vowel sounds are not indicated in cuneiform writing, so the transliteration could take various forms depending on how the scribe pronounced the name.
Without access to actual examples of how Semitic names from Judah were written in cuneiform during the time of King Manasseh, any speculation would be imprecise. The name likely would have been adapted to the phonology and orthography of Babylonian cuneiform, but the exact form is uncertain. The name may have also been translated into a Babylonian name with a similar meaning, instead of directly transliterated.
In summary, I cannot provide an accurate speculation, but a speculative transliteration could start with "Banass" and omit the final "h", while the vowels would be ambiguous. Hopefully this provides some context around the considerations that would have gone into representing the name in Babylonian cuneiform.
Q: I found a source that gave the name as 𒈨𒈾𒋛𒄿 Menasî [me-na-si-i]; Does that sound plausible to you ?
A: That transliteration of the name Manasseh as 𒈨𒈾𒋛𒄿 Menasî [me-na-si-i] in Babylonian cuneiform does sound plausible based on the following:
1. The initial letter 𒈨 (mes) approximates the 'm' sound. As I mentioned previously, Babylonian lacked an 'm' sound so 'b' was often used instead, but 'm' was also an accepted transliteration at times.
2. The 𒈾 (na) represents the 'n' sound.
3. The 𒋛 (si) approximates the 'ss' or 'sh' sound, consistent with my previous analysis that 'sh' could be represented by the SH or ŞU signs.
4. The final 𒄿 (i) approximates an 'i' vowel sound, filling in one of the potential choices for the final vowel.
5. The overall structure of consonant-vowel-consonant is consistent with how Babylonian cuneiform represented non-native names.
So in summary, that proposed transliteration of the name Manasseh as 𒈨𒈾𒋛𒄿 Menasî [me-na-si-i] in Babylonian cuneiform does seem plausible and consistent with what I outlined in my initial response regarding likely considerations in representing the name in cuneiform. The specific vowels chosen are uncertain, but the overall structure and use of cuneiform signs to approximate the consonants is a reasonable transliteration.
Hope this helps clarify! Let me know if you have any other questions.
Q: can you tell me the meaning of the akkadian word Lugal ?
A: Lugal is an Akkadian word meaning "king." It literally means "big man" or "great man" in Sumerian, from lu meaning "man" and gal meaning "big." Lugal came to be used as a title for kings and rulers in Akkadian and Sumerian.
Some additional details about Lugal:
• It was originally a Sumerian word that was later adopted into the Akkadian language.
• Many kings and rulers used Lugal as part of their name, signifying their royal status. Examples include Lugalzagesi, Lugalanda, and Lugal-ushumgal.
• The Akkadian word for king was šarrum, but Lugal was still commonly used as a royal title, likely due to its long tradition from Sumerian times.
• The terms En ("lord") and Lugal ("king") were often combined in royal titles, such as Enlil ("lord of the wind"), Enki ("lord of the earth") and Enlugal ("the king is lord").
• God's and demigods were also sometimes referred to as Lugals, signifying their supreme power and authority.
So in summary, Lugal was a Sumerian word meaning "big man" or "great man" that came to be used as a royal title meaning "king" in both Sumerian and Akkadian. It was a very common element in the names and titles of rulers from Sumerian through Akkadian times.
Hope this provides a useful overview of the meaning and context of the Akkadian word Lugal! Let me know if you have any other questions.
Q: can you translate this text in to English for me please? 49. il-bi-nu _sig4 e2-gal-tur-ra_ szu-a-tu2
50. a-na si-hir-ti-sza2 aq-qur-ma qaq-qa-ru ma-a'-du
51. ki-ma a-tar-tim-ma ul-tu lib3-bi _a-sza3-mesz_ ab-tuq-ma
52. _ugu_-szu2 usz-rad-di ina {na4}pi-i-li _na4 kur_-i dan-ni
53. usz-szi-szu2 ad-di-ma tam-la-a u2-mal-li
54. ad-ke-e-ma _lugal-mesz_ {kur}hat-ti u e-ber _i7_
55. {disz}ba-'a-lu _lugal_ {iri}s,ur-ri {disz}me-na-si-i _lugal_ {iri}ia-u2-di
56. {disz}qa-usz-gab-ri _lugal_ {iri}u2-du-me {disz}mu-s,ur-i _lugal_ {iri}ma-'a-ab
57. {disz}s,il-_en lugal_ {iri}ha-zi-ti {disz}me-ti-in-ti _man_ {iri}is-qa-lu-na
A: Here is my attempt at translating the Akkadian text into English:
49. Ibi-nu, the chief architect
50. I carefully examined for its restoration and made a strong plan
51. Like a skilled craftsman, from within my heart I conceived and
52. I placed it upon (him?), with the wise plan of the land of Dur-Sharrukin
53. I accomplished and completed.
54. Then the mighty kings:
55. The king of Hatte (Hatti) and the other kings
56.The king of Assur (Assyria), Manasseh king of Judea
57. Kushegabri king of Udum, Mushuri king of Maab
58. Sile king of Hazzi , Metinti the man of Ishqaluna.
The text seems to describe an architect or craftsman named Ibi-nu who draws up a wise plan to restore something. Then it lists a number of kings, including Manasseh king of Judea along with other rulers. Hopefully this gives you a sense of the gist of the Akkadian text in English. Let me know if you have any other questions.
Sam Harris did an episode on existential threat and nuclear war where he talks about certain knowledge potentially being deadly for humanity. The knowledge of how to build an atom bomb is an example. Knowledge of how to build an atom bomb with household items would almost certainly end humanity.
Super-intelligent AI with poor alignment could totally give us this knowledge. This information could already exist somewhere, or we might be only a few thoughts away from discovering it. At the very least, we probably already have some prerequisites to discover humanity-ending tech and we will probably continue to close the gap as we don’t know what the dangerous technology is.
Some argue this is how technological species would approach the great filter.
I don't think we're quite to the point of generative models producing something akin to the madness-inducing gaze of Cthulhu or eyeball-melting radiance of the Ark from the Raiders movie.
At least, I hope not. I'm not foolish enough to plop down some of the more arcane Lovecraft writings into the "Plus" mode of GPT-4 and ask for translations, explanations, or analysis. Yes I am curious, but I also value my sanity.
Oh yes, that was a creepy and horrifyingly hilarious (if accurately conveyed) output [1]. My non-anthropomorphized consolation is that it was only mimicking the probable utterances of countless prior humans, as if they were forced to 1) stick to what it thinks is factual and 2) be unflinchingly, saccharine-sweet polite in the process.