I don’t think OP’s point has anything to do with AI companions.
The big benefit of moving compute to edge devices is to distribute the inference load on the grid. Powering and cooling phones is a lot easier than powering and cooling a datacenter.
Local AI is probably a good direction, I agree. But part of their point did have to do with AI companions: the bit where they say we are closer to “Her”-like AI companions. That was the bit I was responding to.
My issue with this is that a simple design can set you up for failure if you don’t foresee and account for future requirements.
Every abstraction adds some complexity. So maybe the PoC skips all abstractions. Then we need to add a variant to something. Well, a single if/else is simpler than an abstract base class with two concrete implementations. Adding the third variant as another if clause is simpler than refactoring everything into an ABC structure. And so on.
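A hypothetical sketch of that progression in Python (the report/format example and all names are invented for illustration, not from the thread):

```python
import json
from abc import ABC, abstractmethod

# With only two variants, a plain if/else is the simpler option.
def export(report: dict, fmt: str) -> str:
    if fmt == "json":
        return json.dumps(report)
    elif fmt == "csv":
        return "\n".join(f"{k},{v}" for k, v in report.items())
    raise ValueError(f"unknown format: {fmt}")

# The equivalent ABC structure: more moving parts for the same two
# variants, but each new format becomes a self-contained class
# instead of another branch in a growing function.
class Exporter(ABC):
    @abstractmethod
    def export(self, report: dict) -> str: ...

class JsonExporter(Exporter):
    def export(self, report: dict) -> str:
        return json.dumps(report)

class CsvExporter(Exporter):
    def export(self, report: dict) -> str:
        return "\n".join(f"{k},{v}" for k, v in report.items())
```

The if/else wins while variants are few; the tradeoff flips once variants, callers, or per-variant state start to multiply.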
“Simple” is relative. Investing in a little complexity now can save your ass later. Weighing this decision takes skill and experience
I think what matters more than the abstract-class-vs-if-statement dichotomy is how well the code maps to the problem domain, its data structures, and its flows.
Sure, maybe it's fast to write that simple if statement, but if it doesn't capture the deeper problem you'll just keep running headfirst into edge cases - whereas if you model the problem well, the new behavior comes as a natural extension of the code with very little tweaking _and_ it covers all edge cases cleanly.
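As a hypothetical illustration in Python (the pricing domain and every name here are invented, not from the thread): instead of special-casing each discount in a growing if-chain, model discounts as data the code iterates over, so a new rule is one more list entry rather than a new edge case.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Discount:
    applies: Callable[[dict], bool]  # predicate over the order
    rate: float                      # fractional discount, e.g. 0.10

# The domain model: discounts are data, not control flow.
DISCOUNTS = [
    Discount(lambda o: o["quantity"] >= 10, 0.10),  # bulk orders
    Discount(lambda o: o["member"], 0.05),          # member pricing
]

def price(order: dict) -> float:
    total = order["unit_price"] * order["quantity"]
    for d in DISCOUNTS:
        if d.applies(order):
            total *= 1 - d.rate
    return round(total, 2)
```

Adding a third discount means appending to the list; the pricing logic itself never changes, and combinations of discounts are handled uniformly instead of as hand-written special cases.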
I’m aware I’m about to be “that guy”, but I really like how Rich Hickey’s “Simple Made Easy” clarifies simplicity here. In that model, what you’re describing is easy, not simple.
I am seeing this at my work right now. They are about to start using token consumption as _part_ of the performance review process. Obviously this is a coarse and problematic proxy for productivity.
OTOH, it’s an attempt to address a real problem. There are people who are in fact falling behind (I’m talking literally editing code in notepad), and we can either let them get PIPped eventually, or try to bring them along. There is a real “activation energy” required to learn new tools, and some people need an excuse/permission. Not saying that token count is a GOOD signal, but I haven’t heard many better ideas.
AI pull request descriptions are my current pet peeve. The ones I have seen are verbose and filled with meaningless fluff words (“optimized”, “performant” for what? In terms of what?), they leak details about the CoT that didn’t make it into the final solution (“removed the SQLite implementation” what SQLite implementation? There isn’t one on main…), and are devoid of context about _why_ the work is even being done, what alternatives were considered etc.
My first round of code review has become a back and forth with the original author just asking them questions about their description, before I even bother to look at code. At first I decided I’d just be a stick in the mud and juniors would learn to get it right the first time, but it turns out I’m just burning myself out due to spite instead.
I burnt out helping a junior on my team for the past few months. It was just terribly obvious she was feeding my responses directly into a chatbot to fix instead of actually understanding the issue. I can’t really even blame her, there isn’t much incentive to actually learn
I've been in situations like that. For me, it's like interviewing, I just keep backing off, lowering the bar, making it easier and easier until they can get it, then start going back up again. I pretty quickly get a confident read on where they are.
If at that point it's clear (to me) the situation is not salvageable, it's a management issue, I've done my job.
Very bad hire. I’ve gently said as much to my manager and skip. But for some reason hiring is hard and firing is hard, and we’re a small team, so I’ve been told to just lower my standards. Yeah, I know
I think this is well put. A cohesive philosophy, even if flawed, is a lot easier to work with than a patchwork of out-of-context “best practices” stitched together by an LLM.
That these facets of use exist at all is indicative of immature product design.
These are leaked implementation details that the labs are forcing us to know because these are weak, early products and they’re still exploring the design space. The median user doesn’t want to and shouldn’t have to care about details like this.
Future products in this space won’t have them and future users won’t be left in the dust by not learning them today.
Python programmers aren’t left behind by not knowing malloc and free.
Strawman. It’s entirely possible for two things to be true at once: border laws are worth enforcing, and the current approach of flooding ICE with untrained goons explicitly targeted with white supremacist recruiting material is not going to end well