I was thinking about this recently. The way to do it is to define a radius, then imagine rolling a circle of that radius around the outside of the coastline (or around the inside; define that as well), and take the length of the equivalent track that never leaves contact with the circle.
So you get a different length depending on the radius you choose, but at least you get an answer.
You could define the radius in a scale-invariant way (proportional to the perimeter of the convex hull of the land mass for example) so that scaling the land mass up/down would also scale our declared coastline length proportionally.
It's in no way a meaningful solution. If you're settling for a resolution, you don't need a ball-rolling analogy. We already know the length of a given coastline at given resolutions (ignoring the constant changing of the coastline itself). What's practically not feasible is getting every country on earth to agree on the right resolutions. And that's for good reasons, because the desired accuracy depends on many factors, some situational and harder to quantify than just size of the enclosed land mass.
You don't need anyone else to agree on the resolution.
You can just pick one when you are doing some work that requires knowing the length of the coastline.
I wasn't trying to say that we should all agree on a universal definition and use that for everything? That would be insane. I was just providing a way to get a stable answer for the length of the perimeter of a fractal area.
Not a bad idea - one issue would be when the circle approaches a 'narrow' section that widens out again. If it's too big to fit into the gap, the circle method would simply not count any of this as land. I think it would be unreliable compared to moving along the coastline in fixed increments (i.e. one-mile or one-foot increments, depending on your goal).
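That fixed-increment approach is the classic "divider" (Richardson) method. A minimal sketch, assuming the coastline is given as a hypothetical list of (x, y) vertices (the function name and representation are my own illustration):

```python
import math

def divider_length(points, step):
    """Measure a polyline with a fixed 'divider' (compass) of length `step`.

    From the current position, jump to the first point further along the
    path that is exactly `step` away in a straight line; the measured
    length is step * number_of_jumps. For a fractal coastline the result
    grows as `step` shrinks - the coastline paradox.
    """
    cur = points[0]
    idx, count = 1, 0
    while idx < len(points):
        seg_start = cur
        # Skip vertices inside the compass circle: a chord between two
        # points inside a circle stays inside, so no crossing lies there.
        while idx < len(points) and math.dist(cur, points[idx]) < step:
            seg_start = points[idx]
            idx += 1
        if idx == len(points):
            break  # the leftover partial step at the end is discarded
        seg_end = points[idx]
        # Exactly one circle crossing lies on this segment (it starts
        # inside and ends outside); bisect to find it.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            p = (seg_start[0] + (seg_end[0] - seg_start[0]) * mid,
                 seg_start[1] + (seg_end[1] - seg_start[1]) * mid)
            lo, hi = (mid, hi) if math.dist(cur, p) < step else (lo, mid)
        cur = (seg_start[0] + (seg_end[0] - seg_start[0]) * lo,
               seg_start[1] + (seg_end[1] - seg_start[1]) * lo)
        count += 1
    return count * step
```

The rolling-circle idea amounts to a morphological closing of the coast at the chosen radius; this divider version is simpler to implement, but either way the answer is pinned to the scale you pick.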
The Planck length is an OK answer, but a coastline reaches a steady state way before that. Nature only has approximate fractals.
Way before the Planck length you'll get the surface and line energies of the material interfaces dominating the total energy. Those tend to force very smooth and very discrete lengths.
There's a time and a place for it. If you already know exactly what the program needs to do, then sure, design a user interface. If you are still exploring the design space then it's better to try things out as quickly as possible even if the ui is rough.
The latter is an interesting mindset to advocate for. In almost every other engineering discipline, this would be frowned upon. I suspect there is wisdom to be gained by not discounting forethought, to be honest.
However, I really wonder how formula 1 teams manage their engineering concepts and driver UI/UX. They do some crazy experimental things, and they have high budgets, but they're often pulling off high-risk ideas on the very edge of feasibility. Every subtle iteration requires driver testing and feedback. I really wonder what processes they use to tie it all together. I suspect that they think about this quite diligently and dare I say even somewhat rigidly. I think it quite likely that the culture that led to the intense and detailed way they look at process for pit-stops and stuff carries over to the rest of their design processes, versioning, and iteration/testing.
Racing like in Formula 1 is extremely different from normal product design: each Formula 1 car has a user base of exactly 1: the driver that is going to use it. Not even the cars from the same team are identical for that reason. The driver can basically dictate the UX design because there is never any friction with other users.
Also, turnaround times from idea to final product can be insane at that level. These teams often have to accomplish in days what normally takes months. But they can pull it off by having every step of the design and manufacturing process in house.
There exist other ways to do the research. "Try things out" is often not just a signal of "we don't know what to do", but also a signal of "we have no idea how to properly measure the outcomes of the things we try".
I'm the lead for an internal tool for a non-technical team. We iterate so quickly that the team we're building it for was like "can you guys stop changing things so quickly? We can't keep up with where anything is." which was a fair assessment.
But that’s the point, no? Prototyping is useful but beyond a proof of concept, you still need a suitable user interface. I have no problems if there’s a rationale behind UI changes, but often we have stakeholders telling us to do something inconsistent just so their pet project can be presented to the user. That’s not design.
> Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.
Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?
Related, I've been surprised that we haven't had more violence against corporations and/or their leadership in the vein of Luigi Mangione.
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
Litigation—the hope or fantasy to make a buck—soaks up a lot of the million-man animus I’d guess.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
Those unhinged people might be busy in social media bubbles, fighting endless pointless battles (or simply doom scrolling) until they're too exhausted to do anything.
I also find it so weird to pin this on Altman or Amodei personally. These are basically fungible public faces. If they died this very moment, AI progress wouldn't halt. I don't think it would even be impacted. If anything, you should be mad at governments for not legislating, if you are anti-AI.
Especially considering Amodei and Altman will be little more than footnotes in 50 years time. They seem important now but they are just the people that happened to be in charge at the moment AI happened to happen. There is more going on than a couple of billionaires taking your job away.
Hah. Yes, and especially as “you, for fuck’s sake, you are doing this” should be, upon reflection, entirely and trivially false. You could remove those two figureheads from the equation and absolutely nothing would change. If violence were ever the answer, I think you'd need to go back in time like the Terminator and whack some academics and Google researchers.
The most interesting thing in here is https://github.com/smhanov/laconic which is the author's "agentic research orchestrator for Go that is optimized to use free search & low-cost limited context window llms".
I have been doing this kind of thing with Cursor and Codex subscriptions, but they do have annoying rate limits, and Cursor on the Auto model seems to perform poorly if you ask it to do too much work, so I am keen to try out laconic on my local GPU.
EDIT:
Having tried it out, this may be a false economy.
The way it works is it has a bunch of different prompts for the LLMs (Planner, Synthesizer, Finalizer).
The "Planner" is given your input question and the "scratchpad" and has to come up with DuckDuckGo search terms.
Then the harness runs the DuckDuckGo search and gives the question, results, and scratchpad to the Synthesizer. The Synthesizer updates the scratchpad with new information that is learnt.
This continues in a loop, with the Planner coming up with new search queries and the Synthesizer updating the scratchpad, until eventually the Planner decides to give a final answer, at which point the Finalizer summarises the information in a user-friendly final answer.
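The loop can be sketched in a few lines (a Python paraphrase of the Go orchestrator with hypothetical function names, not the actual laconic API):

```python
def research(question, llm, search, max_rounds=10):
    """Sketch of a laconic-style loop: each LLM call sees only the question
    plus a small scratchpad, never the full history, so context stays bounded."""
    scratchpad = ""
    for _ in range(max_rounds):
        # Planner: pick the next search query, or decide we're done.
        plan = llm(f"Question: {question}\nScratchpad: {scratchpad}\n"
                   "Reply with SEARCH: <query> or DONE.")
        if plan.strip().startswith("DONE"):
            break
        query = plan.split("SEARCH:", 1)[1].strip()
        results = search(query)  # the harness, not the LLM, runs the search
        # Synthesizer: fold new facts into the scratchpad, the only state
        # carried between rounds.
        scratchpad = llm(f"Question: {question}\nScratchpad: {scratchpad}\n"
                         f"Results: {results}\n"
                         "Rewrite the scratchpad, keeping only key facts.")
    # Finalizer: turn the scratchpad into a user-friendly answer.
    return llm(f"Question: {question}\nScratchpad: {scratchpad}\n"
               "Write the final answer.")
```

The scratchpad rewrite is the key trick: history never accumulates, so an 8K window is enough no matter how many rounds the loop runs.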
That is a pretty clever design! It allows you to do relatively complex research with only a very small amount of context window. So I love that.
However I have found that the Synthesizer step is extremely slow on my RTX3060, and also I think it would cost me about £1/day extra to run the RTX3060 flat out vs idle. For the amount of work laconic can do in a day (not a lot!), I think I am better off just sending the money to OpenAI and getting the results more quickly.
But I still love the design, this is a very creative way to use a very small context window. And has the obvious privacy and freedom advantages over depending on OpenAI.
>To manage all this, I built laconic, an agentic researcher specifically optimized for running in a constrained 8K context window. It manages the LLM context like an operating system's virtual memory manager—it "pages out" the irrelevant baggage of a conversation, keeping only the absolute most critical facts in the active LLM context window.
The 8K part is the most startling to me. Is that still a thing? I worked under that constraint in 2023 in the early GPT-4 days. I believe Ollama still has the default context window set to 8K for some reason. But the model mentioned on laconic GitHub (Qwen3:4B) should support 32K. (Still pretty small, but.. ;)
I'll have to take a proper look at the architecture, extreme context engineering is a special interest of mine :) Back when Auto-GPT was a thing (think OpenClaw but in 2023), I realized that what most people were using it for was just internet research, and that you could get better results, cheaper, faster, and deterministically, by just writing a 30 line Python script.
Google search (or DDG) -> Scrape top N results -> Shove into LLM for summarization (with optional user query) -> Meta-summary.
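A sketch of that ~30-line pipeline; the search, fetch, and summarize steps are injected placeholders here (e.g. a DuckDuckGo client, an HTML text extractor, and a single LLM call), since the point is that the control flow itself is a fixed script with no LLM decisions:

```python
def research_pipeline(query, search, fetch, summarize, top_n=5):
    """Deterministic research pipeline:
    search -> scrape top N -> per-page summary -> meta-summary.

    `search(query)` returns URLs, `fetch(url)` returns page text (or None),
    and `summarize(prompt)` is one LLM call; all three are caller-supplied.
    """
    urls = search(query)[:top_n]
    pages = [fetch(u) for u in urls]
    notes = [summarize(f"Summarize for the query '{query}':\n{p}")
             for p in pages if p]
    return summarize("Combine these notes into one answer:\n"
                     + "\n---\n".join(notes))
```

The LLM is only ever a summarizer at the leaves; the orchestration is plain code, which is exactly why it was cheaper and more predictable than letting an agent drive.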
In such straightforward, specialized scenarios, letting the LLM drive was, and still is, "swatting a fly with a plasma cannon."
(The analog these days would be that many people would be better off asking Claw to write a scraper for them, than having it drive Chromium 24/7...)
> (The analog these days would be that many people would be better off asking Claw to write a scraper for them, than having it drive Chromium 24/7...)
Possibly. But possibly you have a very long tail of sites that you hardly ever look at, and that change more frequently than you use them, and maintaining the scraper is harder work than just using Chromium.
The dream is that the Claw would judge for itself whether to write a scraper or hand-drive the browser.
That might happen more easily if LLMs were a bit lazier. If they didn't like doing drudgery they would be motivated to automate it away. Unfortunately they are much too willing to do long, boring, repetitive tasks.
Not sure the top model should be the biggest one, though. I hear opposing opinions there: a small model that delegates coding to bigger models, vs. a big model that delegates coding to small models.
The issue is you don't want the main driver to be big, but it needs to be big enough to have common sense w.r.t. delegating both up[0] and down...
[0] i.e. "too hard for me, I will ping Opus ..." :) do models have that level of self awareness? I wanna say it can be after a failed attempt, but my failure mode is that the model "succeeds" but the solution is total ass.
> Because they should 100% be liable for the latter.
Why? I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?
I think if a human designs a drug and tests it and it all seems fine and the government approves it and then it later turns out to kill loads of people but nobody thought it would... that's just bad luck! You shouldn't face serious liability for that.
If we start from the position of the marketing hype and even Sam Altman's statements, these tools will "solve all of physics". To me it's laughable, but that's also what's driven their outsized valuations. Using the output to drive product decisions and development, it's not hard to imagine a scenario where a resulting product isn't fully vetted because of the constant corporate pressure to "move faster" and the unrealistic hype of "solve all of physics". This is similar to Tesla's situation of selling "Full Self-Driving" but it actually isn't in the way most people would understand that term and so they lost in court on how they market their autonomous driving features.
Can't agree with this. No, not at all. That can't be true... That's not "just bad luck". I believe this is actually a serious case of negligence and failed oversight - regardless of where exactly it occurred, whether on the part of the drug's manufacturer, the government agency responsible for oversight, or somewhere else. It just doesn't work that way. Any drug undergoes very thorough and rigorous testing before widespread use (which is implied by "millions of deaths"). Maybe I'm just dumb. And yeah, this isn't my field. But damn it, I physically can't imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection. With such a high mortality rate (I'll stress again: millions of deaths cannot be "unseen edge cases"), it simply shouldn't be possible with a proper approach to testing. Please correct me if I'm wrong.
> I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?
It’s simple. In this case, ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.
Suppose that, during the production of a hypothetical drug at a factory, a malfunction occurs in one of the production machines (please excuse the somewhat imprecise terminology), caused by a design flaw (i.e., the manufacturer is to blame for the failure; it's not a matter of improper operation). If, because of this malfunction, the drugs are produced incorrectly and lead to deaths, then at least part of the responsibility must fall on the machine's manufacturer. Of course, responsibility also lies with those who used it for production - because they should have thoroughly tested it before releasing something so critically important - but, damn it, responsibility in this case also lies with the manufacturer who made such a serious design error.
The same goes for ChatGPT. It’s clear that the user also bears responsibility, but if this “machine” is by design capable of generating a recipe for a deadly poison disguised as a “medicine” - and the recipe is so convincing that it passes government inspections - then its creators must also bear responsibility.
EDIT: I've just remembered... I'm not sure how relevant this is, but I've just remembered the Therac-25 incidents, where some patients received overdoses of radiation due to software faults. Who was to blame - the users (operators) or the manufacturer (AECL)? I'm unsure, though, how applicable it is to the hypothetical ChatGPT case, because you physically cannot "program" guardrails in the same way as you could in a deterministic program.
> I physically can’t imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection.
It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example. You can't test drugs indefinitely, at some point you need to say the test is over and it looks good. What if the downsides occur past the end of the test horizon?
> ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.
ChatGPT is not intended to be a drug manufacturing tool though? If you use any other random piece of software in the course of designing drugs, that doesn't make it the software developer's fault if it has a bug that you didn't notice that results in you making faulty drugs. And that's if it's even a bug! ChatGPT can give bad advice without even having any bugs. That's just how it works.
In the Therac-25 case the machine is designed and marketed as a medical treatment device. If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with" then sure they should be liable. But that would be an insane thing to claim.
I think where there may be some confusion is if ChatGPT claims that a drug design is safe and effective. Is that a de facto statement from OpenAI that they should be held to? I don't think so. That's just how ChatGPT works. If we can't have a ChatGPT that is able to make statements that don't bind OpenAI, then I don't think we can have ChatGPT at all.
> It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example.
In that case, even if it leads to many deaths, it would be difficult - if not practically impossible - to hold anyone accountable. Such a turn of events is also practically impossible to predict, don't you think? I apologize for not clarifying this point in my original comment, but I wasn't referring to delayed effects - I was referring to what becomes evident almost immediately (say, "within a year and a half at most") after the drug is used. Yes... I'm sorry, I just didn't phrase my thought correctly.
> ChatGPT is not intended to be a drug manufacturing tool though?
That’s certainly the case right now. However, LLMs like GPT, Claude, Gemini, and others weren’t created for waging war, were they? Then why did Anthropic recently have - let’s just say... "some issues in its relationship" with the DOD, if they were not involved in this, if Claude was not meant to be used in war? Why was the ban on using Gemini to develop weapons removed from its terms of service?
You’re right that LLMs weren’t created for such purposes, and to be honest, I believe that - at least for now - it’s simply unethical to use them for that. These aren’t the kinds of decisions and actions that should be outsourced to a machine that bears no responsibility - moral or legal.
> ChatGPT can give bad advice without even having any bugs. That's just how it works.
To continue my thought, this is precisely why I believe it is unethical to give LLMs any tasks whatsoever that involve human lives. There are simply no safety guarantees - not just "some", but none at all - aside from unreliable safety fine-tuning and prompting tricks. For now, that’s all we can count on.
> If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with" then sure they should be liable. But that would be an insane thing to claim.
They don't claim it yet. And, as one person (qsera) mentioned below your comment:
> The trick is to make people behave like that without actually claiming it. AI companies seems to have aced it.
They probably won't claim exactly that "ChatGPT can reliably design drugs", just because of the possible consequences. But I'm almost certain there will be something similar in meaning, though legally vague - so that, from a purely legal standpoint, there won't be any grounds for complaint. What's more, they are already making some attempts - albeit relatively small ones so far - in the healthcare sector; for example, "ChatGPT Health"[1]. I don't think they will stop there. That's a business after all.
> if ChatGPT claims that a drug design is safe and effective
I have already said before that the OpenAI will not be the only one who should be held responsible in this case. The (hypothetical) user should also bear some responsibility, and in the scenario you described, the primary responsibility should indeed lie with them. That said, I may be wrong, but it’s possible to fine-tune the model so that it at least warns of the consequences or refuses to claim that "this works 100%". This already exists - models refuse, for example, to provide drug recipes or instructions for assembling something explosive (specifically something explosive, not explosives - I recently asked during testing, out of curiosity, Gemma 4 how to build a hydrogen engine - and the model refused to describe the process because, as it said, hydrogen is highly flammable and the engine itself is explosive), pornography, and things along those lines. Yes, I admit, it’s far from perfect. But at least it works somehow. By the way, if I’m not mistaken, many models even include disclaimers with medical advice, like "it’s best to consult a doctor".
In short, what I’m getting at is that the issue lies in how convincing the LLMs can be at times. If it honestly warns of the dangers of using it, if it says "this doesn’t work" or "this requires thorough testing", and so on, but the user just goes ahead and does it anyway - well, that’s like hitting yourself on the finger with a hammer and then suing the hammer manufacturer. It’s a different story when the model states with complete confidence that "this will definitely work, and there will be no side effects" - and user believes it; there should be some effort put into preventing such cases. But otherwise, yes, I think you’re right about the scenario you described.
And to conclude - I don’t think that when it comes to drug development, we’re talking about ordinary people or individual users. In the context of the parent post, it is implied (though I may have misunderstood) that ChatGPT would be used by entire organizations, such as pharmaceutical companies - just as LLMs in a military context are used not by individuals, but by the DOD and similar organizations. I think this shifts the level of responsibility somewhat. Because when OpenAI enters into a contract for the use of its product, ChatGPT, in the process of drug development and manufacturing, it’s kind of implied that ChatGPT is ready for such use.
EDIT: I'm sorry that my reply is so long, I'm just trying to express all of my thoughts in one which is probably not a good thing to do. I would write something like a blog post about that, but there's a lot written about this topic already, so...
Yeah, and I have also used a translator in some parts, because English is not my native language.
> it simply shouldn’t be possible with a proper approach to testing.
It just has to be delayed. Like many years after application. Or trigger on very specific and rare circumstances. Not likely in a trial, but near certain at a population scale.
Or both...
On top of that, if I remember correctly, this liability waiver also exists for vaccines.
> It just has to be delayed. Like many years after application.
That's one thing. In this case, I don't really know if it's possible to test for something like delayed effects. I'm not even sure if you can identify them with 100% certainty; if you can prove that these effects come from this particular drug and not from another one.
> Or trigger on very specific and rare circumstances. Not likely in a trial, but near certain at a population scale.
And this is a different thing. "Specific and rare circumstances" will not lead to millions of deaths (I apologize if I'm being too nitpicky about this particular phrasing, but I want to speak specifically in the context of "millions of deaths"). "Specific and rare circumstances" occur even with fully effective and "proper" medications - these are called "contraindications". But such rare cases, as I've already said, will not lead to mass deaths - precisely because they are rare. I apologize again for focusing on the "millions", but please don't confuse the scales of the problem.
The idea that a blunt knife is more dangerous than a sharp one is a total fallacy.
Every time I've cut myself on a knife, it's been because it was too sharp, not because it was too blunt.
In the limit, a blunt knife is a sphere and a sharp knife is a sharp knife. Very obviously sharp knives are more dangerous than blunt ones because sharp knives cut better and blunt knives cut worse.
> The idea that a blunt knife is more dangerous than a sharp one is a total fallacy.
I don't think so. I personally find dull knives more dangerous because I need to apply more pressure and when it starts to cut, the knife becomes uncontrollable.
When the blade is sharp, and you know it's sharp, you respect the blade and give actual thought to what you're doing.
My wife didn't use to sharpen her knives. When I started sharpening them, she had a couple of minor accidents, but now the accident rate is at 0.0. She even asks me to sharpen the knives when they become dull.
This is exactly what "having a feeling for the machine" is. You know and respect it for what it is. It bites back when you don't respect it. Be it a knife or a space shuttle, it doesn't matter.
a) Published data tends to see corrections from sensors and methodology, which take several years to work out in fine detail. (This isn't an attack; this is science.) Which means: always take yesterday's numbers with more scepticism than numbers from two years ago.
(This is making no statement of any data you're looking at or any trend you claim to see)
b) a field dominated by modelling needs data to back it up, otherwise the conversation would be, "Why is the LHC failing to find strong theory which is absolutely there" vs "I wonder if the modelling is correct based on..."
This is a level of maturity that some sciences are only starting to reach, after playing in the ballpark of "let's go model my idea and make a press release which will just so happen to help my funding".
Yes sea level temps are rising, absolute numbers are still difficult to come by though and last UN summary doc I read still put things at 5C global average over a century. (Yes still horrifically catastrophic for the wrong people, but I'm also not in charge)
I doubt it has anything to do with data-quality, I'd be surprised if even 10% of climate denialists have studied the numbers. Remember >20% of US citizens are still creationists, a lot of people aren't emotionally ready to believe scary things, and maybe they never will be.
Indeed, there is quite a lot of data against (Biblical/young-earth) creationism.
Everything from "humans' chromosome 2 is a fusion of two other chromosomes, and we see those two other chromosomes still present in chimpanzees and gorillas and bonobos", which argues for common descent, to "when zircon crystals form, they accept radioactive uranium but violently reject the lead that it decays to, and modern zircon crystals have lead-uranium ratios indicating that they formed billions of years ago", arguing for an old age of the universe. And many, many, many, many other pieces of evidence.
Chromosomal similarity argues for solid engineering principles just as much as it does for common descent. Do you have any data to suggest that the almighty did not take a working chromosome 2 (made in their own image, perhaps) and reuse it in these other animals you reference?
> Do you have any data to suggest that the almighty did not take a working chromosome 2 (made in their own image, perhaps), and reuse it in these other animals you reference?
Why would an almighty god leave markers in our Chromosome 2 that look like they are from chromosomes 2a/2b in other apes?
It's not just that there's huge genetic similarity between the chromosomes. Which there is! Chromosome 2 also has an extra, deactivated centromere, which was used in the copying of the previous chromosome 2b before the fusion. And remember that chromosomes typically have telomeres at their ends to keep them from fraying apart. In a fusion event you'd expect some telomeres from the ends of the ingredient chromosomes to end up in the middle of the resulting fused chromosome. And this is what we see.
Of course God could have created our chromosome in such a way that it looks very much like the fusion of 2 chromosomes from our shared ancestor with chimpanzees, down to the addition of an extra centromere and telomere region. But why would he?
But, I've also got to say, man, please don't be surprised if I don't respond much. I have no offense intended towards you, but from my perspective, arguing with a young earth creationist is about as productive as arguing with a flat earther. There are about 6 orders of magnitude of difference in age between an Earth that's about 6k years old and 4 billion, and those differences should be readily apparent all over the natural world. And they are! We see an incredible wealth of evidence for an old universe.
But... well, horse and water and all that. I can't expect to change your mind any more than I'd expect to change a flat-earther's mind.
I get that you don’t understand why a creator might do things they way they might have done. I don’t either. But surely you admit your own lack of understanding is not a scientific proof point?
If I said “I don’t understand why the big bang happened”, would that be evidence it didn’t?
Which is why I contest anyone who makes claims like “smart people like me know that Science says the earth is N years old and everyone who disagrees is too dumb to understand these indisputable facts”.
Ok. Not really sure what you’re getting at here tbh. But I assume you have read some paper that said that this tree had some isotope of some material, and you’ve taken that to mean the earth is older than 6,000 years?
No, you have data that you’ve interpreted to mean that the trees are older than 6,000 years old. What is that data, and why have you interpreted it in that way?
It's not faith when a bunch of different people all did the homework and came up with the same answers. Especially when they're all part of a system that rewards new discoveries, and they did the homework in very different ways.
There are mountains (both literal and metaphorical) of evidence for an old earth. The only evidence for a young earth is a book which contradicts its own creation story in the first two chapters.
Jesus Christ, dude. That was a disaster movie by the same guy that brought us Independence Day and 2012, based on a book by a radio host best known for possibly facilitating the Heaven's Gate mass suicide by feeding rumors a UFO was following the Hale-Bopp comet, and a writer who has peddled personal tales of alien abductions for 40 years. Not exactly a reliable central tendency measure of what real people feared.
This has to be one of the stupidest false equivalences I've ever seen.
I guess you're trying to draw a false-equivalency between taking a problem extra seriously and denying/perpetuating it? However taking a problem too seriously doesn't harm people, if you want to wear a mask out of an abundance of caution you won't kill anybody else.
Also nobody believed the world was going to end in two days, that feels like a disingenuous talking point. If somebody literally believed the world would end in < 10 years they'd likely quit their job, spend all their savings, etc.
If your point is that you've met ~15 individuals in your life who were obnoxious/self-righteous/unlikeable about their attempts to make the world better -- congrats every movement has that. But it can't distract from the fact that one thing is true and the other is false, and anybody who tries to focus more on the stereotypes of the individuals in a movement than whether it's true or not is only creating noise.
No, I'm talking about proper, healthy science, not blind trust. Please don't confuse discussion with argument; that's disingenuous, and the best I can say is: look inwards.
No, most of these people consciously or otherwise, just want/need to be contrarians. Look at flat Earthers. There is no way any sane person would say the earth is flat.
Please don't bring up another thing started by idiot scientists for a laugh to laugh at stupid people. You have no idea what it's like dealing with the "just open your eyes" and "what else are they hiding" tier of pseudo-intellectualism enabled by nu-media.
There are reasons to be sceptical which are rooted in reason, and it's worth not throwing those out with the bathwater. Even if the bathwater is full of low-IQ BitChute comments...
I think you may be reading more into my comment than I wrote. I was only talking about what we are seeing in the Show HN. I have no baseline to compare it to so all I can see is a map of the oceans with some areas red and some areas blue.