> However on android the sampling rate of the acceleration sensor is limited to 50/s. At least if you install through the official app store.
My understanding is that it’s the same even on iOS (or at least on my iPhone SE 2020). More specifically, the output only goes up to 50 Hz, but the sensor sampling rate is actually 100 Hz - Nyquist, you need a sampling frequency at least double the highest frequency you want to measure, yada yada.
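As an aside, the Nyquist folding effect is easy to demonstrate numerically. A minimal NumPy sketch (the 60 Hz vibration and 50 Hz rate are illustrative values, not anything from a real phone): a tone above fs/2 aliases down into the measurable band.

```python
import numpy as np

fs = 50.0          # assumed API-limited sampling rate (Hz)
f_true = 60.0      # true vibration frequency, above Nyquist (fs/2 = 25 Hz)

t = np.arange(0, 2, 1 / fs)            # 2 s of samples at 50 Hz
x = np.sin(2 * np.pi * f_true * t)     # the sampled 60 Hz tone

# The 60 Hz tone folds down to |f_true - fs| = 10 Hz:
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_alias = freqs[np.argmax(spectrum)]
print(f_alias)  # ≈ 10.0 Hz, not 60 Hz
```

So a 50 Hz output stream genuinely can't distinguish a 60 Hz vibration from a 10 Hz one, which is why the analog filtering mentioned below matters.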
I get 100/s on an iPhone SE2. 50/s on a Samsung Galaxy A16 which was released in 2024 or 2025, but that is due to an API restriction. You can export from phyphox (.xlsx or .csv). You get timestamps in the first column. Phyphox refers to the raw data rate, not the Nyquist frequency.
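For what it's worth, those exported timestamps let you verify the effective rate yourself. A small sketch (the 0.02 s spacing is just an illustrative stand-in for a real export's first column):

```python
def sample_rate(timestamps):
    """Estimate samples/s from a monotonically increasing
    timestamp column, in seconds (e.g. a phyphox CSV export)."""
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span

# Illustrative: a 50 Hz export has timestamps spaced 0.02 s apart.
ts = [i * 0.02 for i in range(500)]
print(round(sample_rate(ts)))  # 50
```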
The sensors have analog lowpass filters that can be adjusted in order to avoid aliasing.
In general, with more bandwidth you can do more intrusive things. But if you want to tell whether two people ride in the same car, 50 Hz should be sufficient anyway.
By the way, it’s important to note that measuring vibrating things can permanently damage the OIS voice coils in the camera. (See: Apple’s warning against motorcycle mounts.) My iPhone already had a broken OIS so I didn’t mind as much.
…I'm a bit afraid to ask, but are folks from Greenpeace supposed to be rich or something? (I'm not from the US so idk if it's a cultural thing I'm missing.)
Unless you come from a privileged background, you don't exactly have the free time to go and protest against the destruction of toad habitat. And even if you do have the time, you probably don't care.
> I love that we're still learning the emergent properties of LLMs!
TBH, this is (very much my opinion btw) the least surprising thing. LLMs (and especially their emergent properties) are still black boxes. Humans have been studying the human brain for millennia, and we are barely better at predicting how humans work (or, e.g., to what extent free will is a thing). Hell, the emergent properties of traffic were not understood or given proper attention, even though a researcher, as a driver, knows what a driver does. Right now, on the front page, is this post:
> 14. Claude Code Found a Linux Vulnerability Hidden for 23 Years (mtlynch.io)
So it's pretty cool we're learning new things about LLMs, sure, but it's barely surprising that we're still learning it.
(Sorry, mini grumpy man rant over. I just wish we knew more of the world but I know that's not realistic.)
I'm a psychiatry resident who finds LLM research fascinating because of how strongly it reminds me of our efforts to understand the human brain/mind.
I dare say that in some ways, we understand LLMs better than humans, or at least the interpretability tools are now superior. Awkward place to be, but an interesting one.
LLMs are orders of magnitude simpler than brains, and we literally designed them from scratch. Also, we have full control over their operation and we can trace every signal.
Are you surprised we understand them better than brains?
We've been studying brains a lot longer. LLMs are grown, not built. The part that is designed are the low-level architecture - but what it builds from that is incomprehensible and unplanned.
LLMs draw origins from both n-gram language models (ca. 1990s) and neural networks and deep learning (ca. 2000). So we've only had really good ones maybe 6-8 years or so, but the roots of the study go back 30 years at least.
Psychiatry, psychology, and neurology on the other hand, are really only roughly 150 years old. Before that, there wasn't enough information about the human body to be able to study it, let alone the resources or biochemical knowledge necessary to be able to understand it or do much of anything with it.
So, sure, we've studied it longer. But only 5 times longer. And, I mean, we've studied language, geometry, and reasoning for literally thousands of years. Markov chains are like 120 years old, so older than computer science, and you need those to make an LLM.
And if you think we went down some dead-end directions with language models in the last 30 years, boy, have I got some bad news for you about how badly we botched psychiatry, psychology, and neurology!
Embedding "meaning" in vector spaces goes back to 1950s structuralist linguistics and early information retrieval research; there is a nice overview in the draft of the 3rd edition of Speech and Language Processing: https://web.stanford.edu/~jurafsky/slp3/5.pdf
You are still talking about low level infrastructure. This is like studying neurons only from a cellular biology perspective and then trying to understand language acquisition in children. It is very clear from recent literature that the emergent structure and behavior of LLMs is absolutely a new research field.
"Designed" is a bit strong. We "literally" couldn't design programs to do the interesting things LLMs can do. So we gave a giant for loop a bunch of data and a bunch of parameterized math functions and just kept updating the parameters until we got something we liked. Even on the architecture (i.e., which math functions), people are just trying stuff and seeing if it works.
> We "literally" couldn't design programs to do the interesting things LLMs can do.
That's a bit of an overstatement.
The entire field of ML is aimed at problems where deterministic code would work just fine, but the amount of cases it would need to cover is too large to be practical (note, this has nothing to do with the impossibility of its design) AND there's a sufficient corpus of data that allows plausible enough models to be trained. So we accept the occasionally questionable precision of ML models over the huge time and money costs of engineering these kinds of systems the traditional way. LLMs are no different.
Saying ML is a field where deterministic code would work just fine conveniently leaves out the difficult part - writing the actual code.... Which we haven't been able to do for most of the tasks at hand.
It is impossible to design even in a theoretical sense if functional requirements consider matters such as performance and energy consumption. If you have to write petabytes of code you also have to store and execute it.
I'm a psychiatry resident who has been into ML since... at least 2017. I even contemplated leaving medicine for it in 2022 and studied for that, before realizing that I'd never become employable (because I could already tell the models were getting faster than I am).
You would be sorely mistaken to think I'm utterly uninformed about LLM-research, even if I would never dare to claim to be a domain expert.
I'm still impressed by the progress in interpretability. I remember being quite pessimistic that we'd achieve even what we have today (and I recall that being the consensus among ML researchers at the time). In other words, while capabilities have advanced at about the pace I expected from the GPT-2/3 days, mechanistic interpretability has advanced even faster than I'd hoped for (though in some ways we are still very far from completely understanding how LLMs work).
Learning about the emergent properties of these black boxes is not surprising, but it's also not daily. I think every new insight is worth celebrating.
Oh I very much agree that it's great to see more research and findings and improvements in this field. I'm just a little puzzled by GP's tone (which suggested that it isn't completely expected to find new things about LLMs, a few years in).
Sorry lol, to me it felt like you were (pleasantly) surprised by this research. IMO I'd hardly be surprised to see breakthroughs in LLM understanding years or even decades from now. I guess I misunderstood your tone.
Indeed. For me, it's also a good reminder that AI is here to stay as technology, that the hype and investment bubble don't actually matter (well, except to those that care about AI as investment vehicle, of which I'm not one). Even if all funding dried out today, even if all AI companies shut down tomorrow, and there are no more models being trained - we've barely begun exploring how to properly use the ones we have.
We have tons of low-hanging fruits across all fields of science and engineering to be picked, in form of different ways to apply and chain the models we have, different ways to interact with them, etc. - enough to fuel a good decade of continued progress in everything.
I mean... You could? AI comes in all kinds of forms. It's been around practically since Eliza. What is (not) here to stay are the techbros who think every problem can be solved with LLMs. I imagine that once the bubble bursts and the LLM hype is gone, AI will go back to exactly what it was before ChatGPT came along. After all, IMO it's quite true that the AIs nobody talks about are the AIs that are actually doing good or interesting things. All of those AIs have been pushed to the backseat because LLMs have taken the driver and passenger seats, but the AIs working on cures for cancer (assuming we don't already have said cure and it just isn't profitable enough to talk about/market) for example are still being advanced.
I agree on that part as well, but saying that AI will go back to what it was before ChatGPT came along is false. LLMs will still be a standalone product and will be taken for granted. People will (maybe? hopefully?) eventually learn to use them properly and not generate tons of slop for the sake of using AI. Many "AI companies" will disappear from the face of the Earth. But our reality has changed.
LLMs will not be just a standalone product. The models will continue to get embedded deep into software stacks, as they're already being today. For example, if you're using a relatively modern smartphone, you have a bunch of transformer models powering local inference for things like image recognition and classification, segmentation, autocomplete, typing suggestions, search suggestions, etc. If you're using Firefox and opted into it, you have local models used to e.g. summarize contents of a page when you long-click on a link. Etc.
LLMs are "little people on a chip", a new kind of component, capable of general problem-solving. They can be tuned and trimmed to specialize in specific classes of problems, at great reduction of size and compute requirements. The big models will be around as part of user interface, but small models are going to be increasingly showing up everywhere in computational paths, as we test out and try new use cases. There's so many low-hanging fruits to pick, we're still going to be seeing massive transformations in our computing experience, even if new model R&D stalled today.
I hate to "umm, akshually", but apparently we have been studying the brain for thousands of years. I wasn't talking about purely modern neuroscience (which, ironically for our topic of emergence, often until recently/still in most places treats the brain as the sum of its parts - be they neurons or neurotransmitters).
> The earliest reference to the brain occurs in the Edwin Smith Surgical Papyrus, written in the 17th century BC.
I was actually thinking of ancient greeks when writing my comment, but I suppose Egyptians have even older records than them.
None of that counts as studying the brain. It's like saying rubbing sticks together to make fire counts as studying atomic energy. Those early "researchers" were hopelessly far away from even the most tangential understanding of the workings of the brain.
But fundamentally speaking, they were trying to understand the brain, right? IMO that counts as science/study in my books. They understood parts/basics of intracranial pressure so long back.
And if we say it's not science if it's not correct, well, (modern) physics isn't a science then, right? ;) As we haven't unified relativity with quantum mechanics?
Interestingly enough, for a while physics used to be studied by philosophers (and used to be put in the natural philosophy basket, together with biology and most other hard sciences).
The intersection with physics isn't psychology, it is philosophy, and the same is true (at present) of LLMs.
Much as Diogenes mocked Plato's definition of man with a plucked chicken, LLMs revealed what "real" AI would require: continuous learning. That isn't to diminish the power of LLMs (they are useful), but that limitation is a fairly hard one to overcome if true AGI is your goal.
Sir Roger Penrose, on quantum consciousness (and there is some regret on his part here) -- OR -- Jacob Barandes for a much more current thinking on this sort of intersectional exploratory thinking.
I thought it was determined (slight pun) that free will is not a thing. I'm referring to Sapolsky's book "Determined: A Science of Life Without Free Will" as an example.
Hate to say it and sound so "conspiracy-like", but I no longer can trust what the current US administration is saying. Ever since the path of a hurricane was redrawn with a sharpie, it's been... unusual.
I think the problem is that in previous administrations at least they had some skill in lying in ways that were not so constantly contradicting one another.
Regardless of whether it's a "perfect setup" or not, the facts speak for themselves.
Most competent governments don't say things that are outright wrong. They may use doublespeak, or not comment on a topic. But this government (and unfortunately it's this specific administration/president) has acted time and again in a way that both of us know very well.
Not really. Just that trust ain't binary and the govt is made of people. I don't like this admin but this too shall pass. Cultivate your garden. Electing bad people has consequences.
None of what's happening today could have happened without everything that came before it.
The blue team carries plenty of blame for not fielding better candidates. If nobody is buying your bullshit, it's a little weak to blame the customer.
And all of the US electorate carries plenty of blame for letting our government get so massive and out of control over time. We've let this beast metastasize and grow, and now we're stuck with it.
The American people are ultimately to blame for it, they've got the government they deserve, which is actively dismantling the US empire day by day. The American people voted for Trump instead of Kamala, and that is rather damning of the state of the American people, far more so than however damning it may also be for the Democratic party.
As we all know, in this day and age, you need to REALLY sell your story, and have the media behind you. Competence is tertiary.
> Approval of Trump among Republicans has slipped to a second-term low of 84%, down from 92% last March. At the same time, an all-time high 16% of Republicans disapprove. This shift can be attributed, at least in part, to declining support among non-MAGA Republicans, as approval dropped 11 points in the last year among this group (70% in March 2025 to 59% today). Virtually all MAGA Republicans continue to approve of Trump, with 98% approving a year ago and 97% now.
Or the bootlicker olympics for those who want everyone else to ignore the constant lies because they think bigger, more powerful government is utopian.
I wouldn't be so pleased with myself over such "You will get wet in a rainstorm." style predictions.
Truths from different angles that are at odds with one another produce mistrust and thoughts of conspiracy. We have more of that now than we have ever had, ever. It doesn't take Nostradamus to point to the trend.
tl;dr : Gee, where did this mistrust in the current government come from? I'd point but I don't have that many hands.
I desperately hope so too. It will be absolutely terrible if there were to be an issue, and moreso if people can say “We knew about it beforehand but still went ahead”.
So he'd have a better idea of what the govt would want to do?
Keep in mind that a govt that feels (admittedly reasonably) that it has been backstabbed and has had its head assassinated would not hesitate to call bluffs instead of acting cool. Have you ever seen how a cornered wild animal behaves?
> I wouldn't want them to get a notification if I suddenly revised my profile because maybe I'm shopping around for a new job, for example.
If I'm not mistaken, LinkedIn has options for all of this. You can edit your profile with or without a notification post. You can select "show open to being hired only to people outside your company".
Not that I have great (or any) love for the platform, but if I understood you right, these things aren't really issues.
Fully agree. I wonder in the long term how this will show up. Will every business/CEO do more of what he/they anyway want to do, but now supported by AI/LLMs?
The possibilities in "dangerous" fields are a bit more frightening. A general is much more likely to ask ChatGPT "Do you think this war is a good idea / should I drop a bomb?" rather than treat it as an actually helpful tool, where you might ask "What are 5 hidden points in favor of / against bombing that one has likely missed?"
The more you use AI as a strict tool that can be wrong, the safer. Unfortunately I'm not sure if that helps if the guy bombing your city (or even your president) is using AI poorly, and their decisions affect you.
> Will every business/CEO do more of what he/they anyway want to do, but now supported by AI/LLMs?
Arguably, it already worked that way. The best way to climb the ranks of a 'dictatorial' organization (a repressive government or an average large business) is to always say yes. Adopt what the people from up above want you to use, say and think. Don't question anything. Find silver linings in their most deranged ideas to show your loyalty. The rich and powerful that occupy the top ranks of these structures often hate being challenged, even if it's irrational for their well-being. Whenever you see a country or a company making a massive mistake, you can often trace it to a consequence of this. Humans hate being challenged and the rich can insulate themselves even further from the real world.
What's worrying me is the opposite - that this power is more available now. Instead of requiring a team of people and an asset cushion that lets you act irrationally, now you just need to have a phone in your pocket. People get addicted to LLMs because they can provide endless, varied validation for just about anything. Even if someone is aware of their own biases, it's not a given that they'll always counteract the validation.