Deepmind’s AlphaCode conquers coding, performing as well as humans (singularityhub.com)
338 points by Brajeshwar on Dec 15, 2022 | 543 comments


Science paper:

https://www.science.org/doi/10.1126/science.abq1158

For those wondering, yes, these are the same results reported in the February 2022 pre-print here:

https://arxiv.org/abs/2203.07814

As in the preprint, the authors report their system's average ranking as top 54.3% in past competitions, but the meat and potatoes of their results are in the reporting of test accuracy, which they don't advertise in the abstract, because it's not that good. From the body of the Science article:

>> With up to 100,000 samples per problem (10@100K), the AlphaCode 41B model solved 29.6% of problems in the CodeContests test set.

So the best test set performance they got was 29.6%, with the 10@100k metric. See also Figure 3 in the Science paper (graphically presenting the results listed in Appendix Table A2 of the preprint).

"10@100k" means that their LLM generated millions of programs for each exercise, of which 100,000 (100k) were selected by filtering and clustering and various other heuristics, and of those 100k, 10 were selected to submit as the system's solution.

So 10@100k means taking 100k guesses, whittling them down to 10 submissions, and still only getting it right roughly 30% of the time.
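
In pseudocode terms, the described pipeline has roughly this shape (a sketch only; sample, passes_examples, and cluster are hypothetical stand-ins, not DeepMind's actual code):

    def ten_at_100k(problem, sample, passes_examples, cluster):
        # sample() draws one candidate program from the model,
        # passes_examples() runs it against the problem's public example
        # tests, cluster() groups candidates by behaviour on generated inputs.
        candidates = [sample(problem) for _ in range(100_000)]
        # Filtering on the example tests discards the vast majority.
        pool = [c for c in candidates if passes_examples(problem, c)]
        # Submit one program from each of the 10 largest behavioural clusters.
        return [group[0] for group in cluster(problem, pool)[:10]]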

This may be enough to rank in the top 54% of Codeforces participants, for a system fine-tuned on Codeforces-like data (the CodeContests dataset developed by DeepMind specifically for this task). But it's not enough to claim that AlphaCode "conquers coding, performing as well as humans", per the title of TFA.


Now imagine what it would do with the actual, poorly specified projects most programmers are subjected to rather than these basic problems with fixed outcomes that have easily quantifiable metrics of success.


I mean, coding contests tend not only to be well-specified, but also extremely micro tasks. A whole lot of the work of software development is taking even reasonably well-specified business requirements and determining the appropriate structure of tasks at that level to meet them.

Coding contest problems may be a common hiring filter, but they are very much not representative of software development work. OTOH, AI coding assistance (which this, Copilot, ChatGPT, etc. illustrate) is going to render them less valid as hiring filters, both because it will make them harder to use as skill tests, and because it will further reduce the role of the micro-focus skills they center in real-world software development.


ChatGPT seems amazing at taking corrections, iterating on the requirements, and fixing as it goes. So badly-specified requests are going to be sorted out by multiple rounds of interaction.


I tried giving ChatGPT an extremely easy "warmup" interview question, and it passed with flying colors. But then I added the catch that it has to handle UTF-8 strings and not just ASCII, and it confidently spewed out the same program, claiming that by using the C library functions isspace and ispunct, Unicode is handled because they let you pass a locale. This is, of course, completely wrong. I even gave an example of an input it would fail on, and it shrugged and returned roughly the same function that would still fail on that input.
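
To make the failure mode concrete (a toy Python sketch of my own, not the interview question): the C ctype functions classify one byte at a time, while UTF-8 encodes many characters as multi-byte sequences, so byte-level whitespace checks silently miss them:

    import unicodedata

    text = "héllo\u00a0wörld"  # contains a no-break space, U+00A0

    # Byte-at-a-time classification (what a per-byte isspace() amounts to)
    # never sees U+00A0 as whitespace: in UTF-8 it is the two bytes C2 A0.
    raw = text.encode("utf-8")
    print(any(bytes([b]).isspace() for b in raw))  # False

    # Unicode-aware classification works on code points, not bytes.
    print(any(unicodedata.category(ch) == "Zs" for ch in text))  # True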


I did something similar: I asked it to compute the median of some filtered time series data, but as I added more requirements to the filtering, it couldn't seem to adapt the program.
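
The task itself is tiny, which made the failure more striking. A minimal sketch, with made-up filter rules standing in for the real requirements:

    from statistics import median

    def filtered_median(series, lo=0.0, hi=100.0):
        # Made-up filtering: drop missing points and out-of-range values.
        kept = [x for x in series if x is not None and lo <= x <= hi]
        return median(kept) if kept else None

    print(filtered_median([12.0, None, 55.5, 900.0, 43.1]))  # 43.1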


On the plus side, this sounds a lot like the kind of mistake a human engineer would make.


Sort of, but I wouldn’t expect a human engineer to be so confidently incorrect. In fact, it would be a big red flag. I’d much rather have a “you know, I’m not very familiar with Unicode or UTF-8, so I’d have to do some research before I could come up with an answer.”


Try to get ChatGPT to solve a basic math problem using BEDMAS instead of PEMDAS; even with multiple interactions and reminders it still gets incredibly confused.


ChatGPT cannot correctly perform step-by-step operations within strict formal systems. On the other hand, it can provide a high-level sketch of how to mathematically prove something[1].

I think this is amazing, because it is the opposite of how we normally think about computers. ChatGPT is highly intuitive rather than a formal reasoning system. No doubt some future AIs will combine this intuition with formal verification engines to check correctness.

[1] Section 6 of https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tr...


ChatGPT is an interactive prompt meant to amuse you with blatant lies, not something meant for any sort of reasonable purpose.


I would argue that poorly specified requirements will be solved by an auto-coder that can produce the end result in a very short time, allowing the PM to see how bad their requirements were and make them more elaborate.


The Bad Requirement -> Bad Implementation -> Better Requirement -> Better Implementation cycle is going to be improved.

https://en.wikipedia.org/wiki/OODA_loop

The US Army calls it observe–orient–decide–act.

"The approach explains how agility can overcome raw power in dealing with human opponents. It is especially applicable to cyber security and cyberwarfare.

According to Boyd, decision-making occurs in a recurring cycle of observe–orient–decide–act. An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby "get inside" the opponent's decision cycle and gain the advantage."

Humans using AI will be well "inside the loop" of non-AI humans.

I don't think AI alone will eliminate jobs, but companies that use AI to get inside the loop of other companies will quickly eliminate the competition. Why would you pay and wait days when you can pay and wait minutes or hours for a quicker Ask-Show-Ask-Show loop that rapidly narrows down on the intent of your Ask (even if Ask #1 was poorly thought out or worded)?


To be nitpicky, the OODA loop is an Air Force thing, as Boyd was a fighter pilot.


When your requirements are so detailed and comprehensive that the generated program is correct, haven't you basically written the program?


This tangentially reminds me of a very interesting idea I think I encountered in a Borges story - for a map to be useful, it must be inaccurate.

That is to say, a perfectly accurate map would just be a replica of the terrain it is mapping; anything less would technically be inaccurate. Only by leaving things out and "summarizing" an area does a map become a useful representation of the terrain.


“The map is not the territory”… until the map is the territory?


It really depends on the program. Some problems derive most of their complexity from trying to correctly model complex real world rules, like calculating pension payments for employees that were hired at different times, for different positions, under different union agreements, having worked varying numbers of years.

Some problems have short, clear definitions yet remain challenging. An algorithm for solving them may have been discovered decades (or even centuries) ago. But trying to produce more efficient solutions can be a life's work and more. One of the most obvious examples is integer factorization, which had a basic algorithm published 800 years ago yet is still a research problem today:

https://en.wikipedia.org/wiki/Trial_division

https://en.wikipedia.org/wiki/Integer_factorization#Factorin...
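
Trial division itself fits in a few lines; the research problem is doing asymptotically better for large n. A sketch:

    def trial_division(n):
        # Factor n by trying divisors up to sqrt(n); simple, but hopeless
        # for the large semiprimes used in cryptography.
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(trial_division(600))  # [2, 2, 2, 3, 5, 5]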


This was (still is?) one of the major issues with outsourcing development work. External contractors tend to need excruciatingly detailed requirements to produce the expected results, because they typically lack the domain knowledge to guess correctly when faced with ambiguity.

Very often this means getting into long cycles of iteration. If time zones are distant, those cycles can be very long and painful.

Perhaps outsourcing to AI will work much better, as those iterative cycles can be done in near real-time. Maybe there need to be many different models, each with some industry/domain-specific training to increase the odds it'll make the right guess when it needs to.


With almost all software you're likely to write professionally, yes. There are edge cases (as there are with all things), but those are not things AI is good at to begin with :)


I am not sure it can be done that way. There are issues that are tied to culture or to specific business needs. The bad requirement is the hard part, it seems. I think we have all been in situations where 'common sense' is not common (and does not translate to the same thing across the various layers of people who pass it along; and that does not even account for translation).

This is usually how we end up with weird situations like currency coded as text so it can't be sorted by amount by the end user, and so on.

I agree that specs make or break the project, but at what point is it OK to assume "x should do y"?


Yeah, on top of that add wading through legacy code, figuring out "wtf, why is this crap here", only to find that most times it actually is crap, but then at least an equal amount of times that there was a very good reason to do it that way, either technical or by business logic... I cannot imagine how any AI system could have success there without getting towards the true AI singularity.

This kind of code, which I'd believe makes up 99% of all the code out there being worked on, is not available to the public for training. What to do about that? Rewrite it all?


I think this may be the case: the AI will figure out what the legacy code does and then completely rewrite it using modern standards, with the ability to host it all in the cloud.


If it's not trained on the legacy code, I have my doubts it will be able to rewrite it well enough to replace a human babysitting it, particularly when we're talking about large projects. As impressive as ChatGPT is at spitting out code examples, what gives you confidence that an LLM could rewrite a million lines of legacy code it has never seen before? That seems to be well beyond the ability of any AI that's been developed so far.


The stochasticity probably gets you into trouble here. If you get different results each time you try to "figure out and rewrite" the legacy program, you're going to need to write some extremely thorough, perfect test cases. And how much of that are you willing to put to chance?


Doesn't that make the PM just a programmer then, but with a more powerful tool?

We used to code in ASM. Then we coded in high level languages. Soon we'll code by giving prompts to auto code generators.


That's kinda missing the point about the level of control. Inherently both ASM and programming produce the same output, "programming" just traded the generality of ASM for verbose and diverse functions. The auto code generators fundamentally don't understand the problems they work on, so any solutions they create will be incidental.


> Inherently both ASM and programming produce the same output, "programming" just traded the generality of ASM for verbose and diverse functions.

I wouldn't say the output is that similar. You get wildly different bug types, bug type distributions, and edge cases/failure modes in different languages. Does "Python" (the internal implementations of the method names you type) understand the problem it's working on more than the generator on the other side of a coding prompt?


What is "understanding"? Compilers aren't sentient and can't fundamentally understand anything, either though.


I would argue compilers have a perfect "understanding", as the input-to-output mapping is generally 1:1: one input almost always produces the same output. It's just that the input is extremely limited by a set of predefined rules.

The same can't be said about code generators like this, which produce millions of possibilities and use heuristics to guess which program will have the most success.


Reproducibility is a big deal. With Stable Diffusion as a publicly available test model, using the exact same inputs, especially the seed value for the random number generator, should get you the same picture on subsequent runs or on a different machine. Whether or not ChatGPT is evolving with our input, making the same untrue for their model, is a secret known only to OpenAI.
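
As a sketch of what pinning the seed looks like in practice (assuming the Hugging Face diffusers library; the model name is just an example):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    # Fixing the RNG seed should reproduce the same image across runs,
    # given identical weights, scheduler, and library versions.
    generator = torch.Generator("cpu").manual_seed(42)
    image = pipe("a watercolor fox", generator=generator).images[0]
    image.save("fox.png")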


So we are talking about redefining programming to be managing, refining, bug-fixing, and troubleshooting the output of ML, potentially increasing output but making the job more complicated, not allowing the non-coding PM to replace more skilled workers.


Yes, but in a few years we will have to reevaluate this; language models will probably be good enough to allow a PM to replace devs. AIs will code at a superhuman level because it is possible to test code; they will learn the way AlphaGo learned from self-play.


Maybe. The way this usually goes is tools get better, everyone says we don't need devs anymore, devs use the tools way better than PMs, standards rise, PM code is junk again.


I call pipe dream; otherwise AIs could program themselves and we would have the singularity. Will it be good or evil?


If you were picking someone to ride herd on an AI generating code, are you sure:

- You will never need to understand a single bit of what the AI has generated, for legal, moral, or strategic reasons?

- You will never need to manually rework a single thing or fix anything, ever?

Logically, neither statement is true. Furthermore, who do you imagine is writing the tests that ensure correctness? The person you would want to do this task isn't a yokel who understands interpersonal relationships; it's someone who understands what is being generated, so your company won't murder people or get sued out of existence.

Of course, you will probably want more than one person doing this task, and there ought to be some interface between them and their fellows, and between the lot of them and management, to ensure good communication and company standards... Maybe a PM?

I find it endlessly fascinating that those who can't do dream about the obsolescence of the useful class, deeming the irreplaceable human part of the endeavor to be the petty manipulation of each other and the ego massaging we all do by necessity to keep human endeavors tied together with baling wire and barely plodding on. It doesn't speak well for our species.

Beyond narrow parameters, self-improving, arbitrary representation and manipulation of symbolic data is basically AI-complete. The toy you hope would enable you to ask a computer instead of a person for a particular improvement to your website would, upon completion, represent a short step from the obsolescence not of coders but of humans, wrought in the vain hope that we can actually control tools smarter than us. It's probably neither "a few years" away nor that fantastic an idea, but do carry on.


Agreed, this is overblown. But they haven't even begun to apply RLHF (or at least they haven't published anything). I think it's safe to say that combining an LLM with RL, using all existing techniques, would result in a much stronger AI competitive coder.


Totally agree. The title is misleading clickbait.

Here's another article which imo is the best evaluation of the technology I've seen so far: https://news.ycombinator.com/item?id=33997603

Yet, it has virtually no upvotes and no discussion.

Well, evangelism sells better than criticism.


I think what this means is that AI assistance during interviews may be enough to fool the interviewer. This may finally put an end to remote leetcode interviews. Companies with deep pockets and local companies will probably stick to the status quo and just move to in-person interviews again.


I mean, why not embrace the future? Allow people to use AI in interviews, schools and exams. Everyone will use it at work, why not test AI management skills together with problem solving skills. Maybe the best skills of 2019 are not the best skills to have in 2023.


Because coding interviews are only a proxy for what someone needs to do on the job. Presumably, it will become a worse proxy if the interview is assisted by AI but the other tasks they'll do on the job aren't.

For example, debugging a subtle bug in a large codebase. Maybe one day AI can do that too, but that doesn't seem to be the focus here.


Exactly. Interviewing a post-LLM programmer without the aid of an LLM is like doing a running test for a bike messenger job.


Don’t need deep pockets. You don’t have to fly someone out to interview on site. Just have him pay for it himself.


I wouldn’t do that. You’d lose a lot people. As it is, take home assignments cost me a lot on the risk/reward scale.


By next year they will have generated 100M new problems and solutions, tested to pass verification, and retrained the model, making it 10x smarter. This approach creates new training data, and we can iterate on the coding model multiple times.


The problem with that approach is that you end up with a system that is very good at solving programming problems of a particular kind, but does not generalise very well to arbitrary programming tasks. For example, their system will not have very good performance on, I don't know, leetcode (if it had, we'd have heard all about it).

So I don't doubt that we'll see better performance on CodeContests/Codeforces in the future, perhaps some time next year (just to leave some time after the Science publication). But not anything of substance. Or they'll just come up with a new dataset that their system performs well on.

To be honest, I'm scandalised that such a poorly thought-out approach got a publication in Science. If I had tried to publish something like that in my field ("we made a firehose that spams code willy-nilly and by heuristic this and that got it to hit the right target 30% of the time! And we beat some humans at it!") I would have been shot down in flames, and probably ruined my reputation for good. But, I guess, it's DeepMind publishing in Science, so it's effectively a racket. One just shrugs and moves on.


I've conducted hundreds of FAANG-level coding interviews, and recently tried to run my interview questions through ChatGPT. The model was able to spit out a correct, optimal, clean Python solution effortlessly.

Interestingly, when I asked a follow-up, a harder version of the same problem, ChatGPT spat out code that sort of looks correct but is actually nonsense.

So, would it pass the interview? That's difficult to say. Interviews don't happen in a vacuum, you also consider the candidate's thought process, explanations, alternatives, tradeoffs...

Still, I see this as a game changer for this kind of interview. So long as candidates understand and can explain the output code, there is a good chance they would clear the interview. Even if the code is incorrect, it might give them some hints towards the right solution.

So where do we go from here? I always loathed this interview format, and these language models reinforce it even further. Interview cheating has always been there, but it is generally so rare that it isn't a real concern. However, these tools are too effective and easy to use. I can see a real divide between people who use them and people who don't. This type of interview might become even more useless at telling good coders apart.

My take is we have 3 choices:

a) Ignore it.

b) Try to fight it.

c) Embrace it.

a) is not an option. b) would make an already pretty dreadful process even more intolerable. My money is on c)

I can envision an interview format where we allow, or even encourage people to use ChatGPT and AlphaCode during the interview, much like you would use your IDE or a search engine. In fact seeing how a candidate understands and uses those code snippets can be a very interesting data point.

Either that or scrap leetcode-style interviews altogether.

P.S.: I was thinking about writing a blog post about this, if people think it'd be interesting.


The real issue with any of these AI innovations is they really point out places where humans have already started to behave like an AI in the first place.

Ever since the emergence of leetcode style interviews I've been shocked at how many people can reproduce leetcode examples, but still fundamentally have no sense of algorithm design outside of the context of a job interview.

Programmers with their sights set on acing a FAANG interview will just keep repeating leetcode problems until they start to memorize the common patterns (not the problems themselves of course, but the structure of these type of problems). What's disturbing to me is that I recall far more interesting discussion about algorithms in the era before leetcode dominated everything.

The common solution isn't to understand algorithms better, but to become a leetcode solving robot.

So it's no surprise to me that AI can pretty easily replicate humans that have tried to turn themselves into robots.

We see similar patterns in the art that AI can create. It's very good at replicating a kind of art style of designers trying to turn themselves into design robots.


> So it's no surprise to me that AI can pretty easily replicate humans that have tried to turn themselves into robots.

This is such a great insight... I feel like it could even somehow explain a lot of politics and many other phenomena.


Absolutely spot on. I actually do algorithm design, usually over a period of weeks (at least), and leet code is a joke for the serious algorist (I’m sure I’d fail an interview based on it). Nothing has so clearly illustrated the robotic nature of the leet code expert quite like this result has.


I see several argue that if you are poor at leetcode then you cannot be a serious algorist. Can you explain why that argument is wrong?


Do tell!

What sort of job leads you do design algorithms?


Mathematical Biology/Bioinformatics. We have to think very carefully about every step in the process of extracting information from large, diverse datasets--often writing things from scratch, combining/transforming things in novel ways, and implementing new mathematical ideas efficiently enough to be computable on large datasets.


> c) Embrace it

Asking "here's a ChatGPT solution to the problem... what's wrong with it?" would be a solid process, imo.


I love it!

Particularly, ChatGPT can give apparently correct solutions that are wrong in subtle ways.

Also, reading, understanding, reasoning about, and fixing code others wrote is way closer to on-the-job performance.


Yep, the future of SWE is debugging AI.


This is already the present for translators. They often are called to proofread machine translated text. And this is often more difficult than rewriting the whole text.


Well, if they still have access to ChatGPT, they can ask that of it as well. It may or may not give a valid answer, just as it does with code.


A big if. A lot of ChatGPT discussions seem to take for granted that it'll always be available/free/priced low enough that ~everyone has access to it. Seems more likely that at some point OpenAI will close it up and put it back behind an API.


The company itself could have access to the AI for the interview, since the idea is leveling the playing field.

But even more, it seems likely someone is going to be willing to supply LLM access at not that much more than the cost of computation.

I know OpenAI's business is selling access to their stuff at a premium, but since current AI is not much more than brute-forcing of massive public data, it doesn't seem like this premium will be justifiable for long.


You can use text-davinci-003 from the API now and it works better for many things.


This would be way more fun


And you can give every candidate a new version of it.


I would expect A, because it is an option.

Like in mathematics or other tests, students are not allowed to use the internet or an advanced calculator, in order to test whether they truly comprehend the stuff.

If you think that your FAANG interviews are any good, then just keep them in the format you already have, by making sure applicants cannot use AI during the test. I would hold on-sites with pen and paper, a whiteboard, or a prepped/supervised machine, whatever.

Of course, applicants could use AI to train for the interview, but that is not a problem, as long as you test their comprehension.


Given the trend towards remote tech jobs, it could be difficult to convince your company's engineers to show up to the office for what's usually just a weed-out leetcode interview.


Proctored tests are common for certification exams. Perhaps there's a future where candidates go to one of these exam centers for one of their technical interviews.


Or just a live (remote) programming interview, with no use of Codex or Copilot.


I'm not a fan of making the process more dreadful than it already is. Forcing people to go to an office and use pen and paper or a whiteboard would do exactly that.

I hope companies (including mine!) see the writing on the wall and stop trying to fight the future.

And that's not even accounting for the pretense that we are testing candidates on anything remotely indicative of on-the-job performance.


Would training with such an AI help their comprehension? Or hurt it?

My money is on "hurt".


Algorithm interview and competitive programming type questions are probably some of the easiest for GPT to solve because there are a massive number of problems and solutions publicly available for training.

The real benefit of AI is somewhat shown in this paper: it effectively solved the problems through brute force, generating millions of possible solutions. For real-world problems it would be interesting to let GPT generate a bunch of different solutions, push them into a test environment, and see which works best.

The biggest problem I see is black swan events where AI coded systems work great until something goes wrong and no human truly knows how all the pieces fit together.


I get the same results from asking for code. Near 100% success with an initial demo, and then pretty tragic performance when making a series of changes to actually fit the concept. I got it to write 3D rotation code, and to switch to using rotors instead of quaternions, but when I asked it to rewrite the combine_rotors() method, it rewrote it to combine Enigma rotors. I explained its error and it apologized, but it just wasn't able to go back to the original code and work with it anymore.
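
For reference, the combine I was after is tiny: a 3D rotor composes like a unit quaternion, so a correct version (my own sketch, not anything ChatGPT produced) is just the Hamilton product:

    def combine_rotors(a, b):
        # Rotors/quaternions as (w, x, y, z) tuples. The product a*b applies
        # b's rotation first, then a's, under the usual sandwich convention.
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)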

Interviewing is going to be shaken up, but I think some methods are more timeless. I've been giving candidates code instead of asking them to write it for a while now, largely because I wanted to talk more and watch them type less.

We start by discussing the coding challenge guidelines and I leave them vague. They need to understand the goal and what I've left out and suggest those guidelines themselves. Once we agree though, I give them the code. "Here's what's running now."

Then I update the goal and we discuss changes. I get them to "whiteboard" certain things, like what a query looks like with their proposed changes or whatever, and we discuss big-O, etc. This, imho, is how whiteboarding is actually used - not to write whole programs but to provide examples and pick them apart.

I feel that this would work even if they were using an AI in another window. We're trying to select for developers with common sense and domain knowledge, and who can clearly discuss engineering tradeoffs. Actually making the changes (the coding itself) was a big part of the job and that's decreasing, but imho all the other requirements remain. They'll still need to know how to handle the issues I talked about above, of trying to get the model to write the right code!


Our coding exercise cannot be solved by ChatGPT, and I believe it is much more effective at evaluating coding ability than leetcode-style questions. We ask candidates to design an object-oriented booking system which requires 3 or 4 different classes to implement. ChatGPT cannot easily do this without heavy prompting from the user, at which point they'd be better off just writing their design down themselves. We want to evaluate candidates based on the kind of work they will actually be doing, not brainteasing O(n) solutions to contrived algorithm questions, and so far it's worked very well.
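
To give a flavor of the shape of the exercise (a made-up skeleton, not our actual question):

    from dataclasses import dataclass

    @dataclass
    class Room:
        name: str
        capacity: int

    @dataclass
    class Booking:
        room: Room
        start_hour: int
        end_hour: int

    class BookingSystem:
        def __init__(self):
            self.bookings = []

        def book(self, room, start_hour, end_hour):
            # Reject requests that overlap an existing booking for the room.
            for b in self.bookings:
                if b.room == room and start_hour < b.end_hour and end_hour > b.start_hour:
                    return None
            booking = Booking(room, start_hour, end_hour)
            self.bookings.append(booking)
            return booking

The design conversation (who owns conflict checking, how time is modeled) is the part being evaluated, not the typing.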


I agree. Our interview has 3 coding rounds, and one of them I call "clean code". For that one I ask a straightforward question that requires the candidate to define some APIs and write some classes/functions. Then I ask several follow-ups, adding or changing requirements.

This is by far my favourite question, the one closest to on-the-job coding. It also lends itself well to deep conversations with candidates.

But alas, I don't own the process and still have to work within the parameters of the company. Whenever possible I ask this kind of question, but other interviewers will default to leetcode-style rounds.


> cannot be solved by ChatGPT

That's great! For now. But tomorrow's coming fast.


That seems like it's likely been trained on various examples of FAANG questions that are posted to the likes of leetcode, with solutions often presented. The push for harder version was clever, and that it fell over is no real surprise.


> c) Embrace it. [...] My money is on c)

Agreed.

I think developers who don't will be rare in a few years time. Just like developers who primarily rely on assemblers (versus compilers) have basically become extinct. Like we sometimes, very rarely, need to inline some assembly, we will sometimes need to use the ol' gray matter to figure out a novel algorithm or something.

I believe that avoiding learning these tools could be an existential issue for your present-day job. You don't have to come to depend on them, or use them daily, but you do need to understand how best to use (and not use) them.


This is why I ask questions in a business domain. It requires the programmer to think about more than solving a straightforward, clear problem and worrying only about Big-O. Instead, they need to figure out the problem by asking thoughtful questions about possible business concerns and think about which ones to optimize for.

Ambiguity that requires follow-up questions for success isn't going to be addressed by something focused on solving a problem that "thinks" it already has all the information it needs.


> I've conducted hundreds of FAANG-level coding interviews, [...] I always loathed this interview format

That's impressive perseverance. Did that wear on you?


I hate leetcode style interviews, but I've done hundreds of them. The irony is not lost on me. Some might say I'm part of the problem.

Thing is, in a small way, I'm trying to change things from within. I try to make the process as palatable and fair to candidates as possible, while working within the constraints of the system. When I train new interviewers some of my top tips are:

a) The purpose of the interview is to determine whether the candidate is a good fit for the company, and the company is a good fit for the candidate.

b) The interview is an imperfect proxy for this.

It then follows that interviews shouldn't overindex on the coding round. If I have helped someone get an offer they wouldn't have gotten otherwise, I'm satisfied.

BTW that's also one of the reasons I'm publishing this. I hope to push the point that leetcode-style coding interviews are outdated and should be burnt to the ground.


Myself, I've only conducted dozens not hundreds of FAANG-style coding interviews. I also loathe the format, and it does indeed wear on me. I hate every second.


Damn. What is your success rate? A lot of the time it seems to me that luck and your ability to withstand stress are among the main factors. Is that something you can teach yourself?


If by success rate you mean inclined rate, it varies a lot by level. I'd say it's around 30% for interns (which is by far the most lenient) and closer to 5% for Senior level. However, it's also common to downlevel senior candidates rather than downright reject them.

From my side of the table I can confirm that of course luck plays a role. Never mind the variability of the coding problems themselves; there is also variability in the interviewers. Even in my company, which has a formal process and a pretense of objectivity, interviewer quality and expectations vary greatly.

In short, I'd say what people on the hiring side hate to hear, but interviewing is ultimately a numbers game. I wouldn't take rejections too seriously. The best you can do is to improve your chances with proper prep (leetcode, system design) and interview broadly.


The illusion of objectivity is strong. We have an official interviewing process. It has a big survey that turns into "metrics". We have a debrief where we talk about the "metrics" as though it was an "evidence based" process. Then you'll hear someone say "Well, he found a solution for the balanced tree problem, but he just didn't seem all that confident doing it, you know?"


> Then you'll hear someone say "Well, he found a solution for the balanced tree problem, but he just didn't seem all that confident doing it, you know?"

This sounds like yet more evidence that it's really just pledging a frat.


I think you're correct about option c, primarily because we could never force everyone to adopt the other two options.

Option c makes me nervous, but right now I can't see an AI correctly dealing with the ambiguity of the data sets I typically look at. I do a lot of "asking for clarification."


We're just going to move to on-site interviews only. ChatGPT can't enter the equation then.


>ChatGPT can't enter the equation then.

In a few years, ChatGPT will be conducting these interviews.


LMAO, no. What would prevent someone from just looking up the answers using ChatGPT on another machine?


Neuralink blocks your path.


Neuralink is not realistic in its current conception; they won't easily solve the problem of input to a brain.


Neuralink doesn't work.


I expect that the reason it gave the correct answer to your first question is simply that it already saw the problem and memorized the solution - there is some empirical evidence that deep neural networks are able to memorize much of their training data.


Possibly. The problem itself is not on leetcode, but it has almost certainly been leaked somewhere. However, the program was able to make a few changes to the code with some prompting, which hints at a little more smarts than just regurgitating an answer.

Nevertheless the point is moot. I've invented completely novel questions (promise!), and saw them leaked online after asking them twice. The process is fundamentally flawed and large language models are just making that glaringly obvious.


Your option d) that you don’t even enumerate (scrap leetcode) would be best.


> Interview cheating has always been there, but it is generally so rare that it isn't a real concern.

You're the person they're trying to fool, so if they cheat successfully you would never know. You've never seen overt displays of bad cheating, which is different.

Think you haven't seen a stick insect in years? Likely you have, but just didn't notice it...

Cheaters will always be more motivated than those trying to detect them - because everything is on the line for them.


>You're the person they're trying to fool, so if they cheat successfully you would never know. You've never seen overt displays of bad cheating, which is different.

I don't think it's widespread at least, since in my experience people that do well in technical Zoom interviews do not drastically decrease in apparent competence when we move to in-person rounds.

People definitely get told interview questions by recruiters, though; if you count that as cheating, then it is everywhere.


By the way, ChatGPT is not even the best model OpenAI has for writing code.


By the time someone gets an on-site interview, wouldn't you notice if they're using a bot or not? Or are all the interview rounds remote these days?


In my experience (with FAANGs at least) the interviews have been virtual since the pandemic began. I haven't heard of any plans to return to flying out candidates for in-person interviews.


We've been doing remote-only interviews for years now.


If a company allows ChatGPT to be used on the job, anything but c) seems foolish. If the company doesn't allow it, c) seems like a bad idea.


We software engineers have to be the stupidest "smart" people on the planet. No other occupations work so hard to destroy entire businesses including our own. I get it, I'm a software engineer who loves automation.

"But AI will just be a tool in our tool-set, software engineers will still be the system architects."

Sure, for a while and then AI will do that too.

"But eventually we will live in a fully automated world in abundance, wouldn't that be great?"

Doing what? When we get there, anything we can consider doing, an AI can do faster and better. Write a poem? Write a book? Write music? Paint a picture? Life will be like a computer game with cheat-codes, whenever we struggle with something, instead of fighting on and improving we will turn to our universal cheat-engine: AI.

Anecdotally, I made an analogous mistake in my early twenties when I wrote a cheat program for save files. It worked like a typical cheat engine: search the save file for a specific value, go back to the game and change that value, go back to the save, and search for the new value, but only in those locations that had the original value. This is how I ruined "Heroes of Might and Magic II" :(. I used to love that game. I could spend hours playing it. Writing the cheat program was a lot of fun for a couple of hours, but when it was done, there was no longer any reason for me to play the game. You might say that I didn't need to use my cheat program, but once the genie was out of the bottle it was too tempting to resist whenever I met some obstacle in the game.
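
For the curious, the technique is the classic narrowing scan. A rough Python reconstruction (the file names and gold values are made up):

    def scan(save_bytes, value):
        # Offsets where value occurs as a 32-bit little-endian integer.
        needle = value.to_bytes(4, "little")
        return {i for i in range(len(save_bytes) - 3)
                if save_bytes[i:i + 4] == needle}

    # Hypothetical saves: gold was 1200, then 1150 after spending 50.
    save1 = open("before.sav", "rb").read()
    save2 = open("after.sav", "rb").read()

    # Keep only offsets that held the old value and now hold the new one;
    # a pass or two narrows it to the real location, which you then patch.
    candidates = scan(save1, 1200) & scan(save2, 1150)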

This is what I fear about AI making our jobs superfluous; it will also make our hobbies or anything we enjoy doing superfluous.

Sorry for the bleak comment, but this is my fear, and I feel the genie is already out of the bottle.


I can't imagine the amount of corporate brainwashing needed to get to this level of thinking. Are you really saying that people cannot have any identity in life, any dreams, hobbies or pursuits if they can't sit in front of a computer for 8 hours a day, 5 days a week fixing Jira tickets?

Just because a car can go 100mph doesn't mean long distance running doesn't need to exist. Just because a novel you write isn't the best in the world doesn't mean the hobby is pointless. Go buy a farm and grow your own food. Keep some pets. Build cool software just because you can. Hang out with your friends. Play with your kids. Do literally anything you want. Not having to be a wage slave to survive is a good thing for humanity.


> Not having to be a wage slave to survive is a good thing for humanity.

I mean, sure, that's one of the two paths discussed in "Manna" by Marshall Brain. Within the book it's called "The Australia Project"; a kind of utopia.

Myself and the OP are more worried about the other path: a dystopia in which the majority of people are forced into something much worse than wage slavery by those in control of the thinking machines. A dystopia not unlike the one that led to the "Great Revolt" in the Dune series.


> Myself and the OP are more worried about the other path: a dystopia in which the majority of people are forced into something much worse than wage slavery by those in control of the thinking machines.

My fear is that nobody will remain in control of the thinking machines. Imagine an AI agent for hire which maintains its own cryptocurrency accounts and pays its own cloud hosting bills. That's the future I'm worried about.


> A dystopia not unlike the one that led to the "Great Revolt" in the Dune series.

I'm glad you brought this up; I've found the term 'Butlerian Jihad' coming increasingly to mind when I read AI threads on HN. It's interesting to think about a future where we potentially put prohibitions on the use of AI for moral reasons.

https://dune.fandom.com/wiki/Butlerian_Jihad


Just go one step further and realize that perhaps the most fictitious part of that book is that the AI actually lost, thanks to basically magical powers.


> the majority of people are forced into something much worse than wage slavery by those in control of the thinking machines.

This is why it is important that we ensure everyone has collective control of the means of production (through voluntary means - a federation of collectives that agree to trade with one another as much as possible).


> This is why it is important that we ensure everyone has collective control of the means of production

Why would the owners of the thinking machines allow this to happen?


Thank you for the ad hominem attack. It's always a pleasure to discuss things with people who like to jump to conclusions and assume things about people they don't know anything about.

Personally I prefer not to pass judgement on a person based on the very little knowledge that can be gleaned from a post like this. But maybe, just maybe, if you actually read what I have written (in other comments as well), things might clear up for you.


Not an ad hominem and you haven't refuted him at all. That you perceived what he said as a personal attack speaks more to how you truly perceive your own values than to anything else.


They questioned your character because of your flawed arguments, not your arguments because of your flawed character. So it's not ad hominem.


> I can't imagine the amount of corporate brainwashing needed to get to this level of thinking.... Not having to be a wage slave to survive is a good thing for humanity.

Corporate brainwashing, why? That is just realistic. I mean, we know earlier people with much harder lives actually had more free leisure time, and even Ford imagined that with all the automation we might be able to work much less and have better lives. Still, here we are: a few people making tons of money, some doing very well to okayish, but the vast majority working 2-3 low-paying crap jobs to survive, and we're all working more than decades ago. How?


Working as a necessity to provide for oneself and family is a key component of good mental health. It's wired into us. We are still the hunter-gatherers we were 10,000 years ago. Excessive idleness always ends badly.


What do you define as working? The nobility had good ways of spending their time; they were at the top of the social pyramid. They would not say they had a job; they only had to care about their inheritance.

Mental health is more related to belonging and sense of community. Currently, not having a job is a source of guilt and shame. That has not always been the case. Nurses and educators find their reward mostly in being able to help others.


> Working as a necessity to provide for oneself and family is a key component of good mental health

Based on what science? Grinding at an office 9-5, 5 days a week, 50 weeks a year over meaningless pursuits is in fact what is causing mental health problems in the world today. Give people a social safety net, more time to pursue their hobbies, spend time with their families, connect with their communities, and I can assure you we will all be better off.


You don't need a job to avoid being idle.


Most people do.


People have bills to pay, or will AI do that too?


Profits (the benefits of overall automation) will be reaped by capitalists (employers, capital owners), not the general public.

Plus, before we automate anything civilian, we will have to automate everything military, because they get first dibs on any emerging tech.

More likely we will see global war between stealthy autonomous robots much earlier, before we automate much on the civilian side.


> Doing what? When we get there, anything we can consider doing, an AI can do faster and better. Write a poem? Write a book? Write music? Paint a picture? Life will be like a computer game with cheat-codes, whenever we struggle with something, instead of fighting on and improving we will turn to our universal cheat-engine: AI.

For literally anything in my life that I can do, there is already someone who can do it better and faster than me. I still enjoy doing the things that I do, and why would that change?


Because now that person works for you, for free, instantly, anywhere, anytime. There's at least a temptation.


I can execute a TAS speed run of any game I want in a couple clicks. So why do speedrunners still exist then if they can never hope to match TAS?

I can open Stockfish and absolutely destroy any human I want in chess. So why do people still play chess then?

I can get ChatGPT to write a good response to your comment in mere moments. So why am I still typing?


A few months after being defeated by AlphaGo, Lee Sedol retired from professional Go. He said, "Even if I become the number one, there is an entity that cannot be defeated." And, "As a professional Go player, I never want to play this kind of match again."

https://en.wikipedia.org/wiki/Lee_Sedol


That seems more like a testament to the intensity of someone who does X to be the "best" at it, rather than someone who does X for the fun of it. Go became more than just a game to him -- it was his identity.


For myself, I don't like to play online Go against humans, because I hate to lose. Instead, I play against a computer with a difficulty rating of around 1-3 dan, and whenever things start going badly for me I just undo moves and try again. Sometimes I ask the computer for advice on what move to make. My experience of Go is that I always win against a player who is much better than me. I find it pretty satisfying.


And yet there are millions of people that play Go who will never be anywhere close to as good as Lee and continue to play it after he quit.


It seems he based his ego on the idea of being the very best Go player, instead of merely being the best human Go player.


"So why am I still typing?"

Because ChatGPT is currently overloaded by users.


> So why am I still typing?

So you don't have to give OpenAI your phone number, hah.


That's only important if you value the destination more than the journey. In spite of the fact that there will come a day when an artificial intelligence can play the piano and any possible musical piece superbly, for me the act of piano playing itself is where I derive my enjoyment.

It may be that in the future it will become more important to focus on activities and hobbies where one's fulfillment comes intrinsically from the action itself, not from the output that is produced.


Who says that person will work for you, or that it will be free?


They're talking about the AI.


I'm not sure if this is what the commenter meant, but I am vanishingly unlikely to own the AI. Even if I write the AI, I'm unlikely to own it. The training costs are too large, and even if I did train a working model, AI can be duplicated, and there's little reason to use anything but the best. The scary thing about AI to me is that it finally turns intellectual labor into a pure process of capital. You put more energy and capital in, you get more out, with no need or room for humans anywhere in that loop. Now, of course, we're a long way away from that. There will be room for humans for a long time, but it's scary how much additional power it will give to capital. How much less the interests of ordinary people will matter.


That's actually a great thing in the long run. Money will be spent on compute, compute will generate data by generative models + validation models, then data will be compacted into a new model. Models and data can be copied, money can't.

If it works with just electricity and doesn't require manual human work it is a game changer. No longer limited by human resources, we can scale research in any field and improve everyone's life much faster.

AlphaGo is an example of such an approach. I don't think models will be locked down, they will probably be like go bots in recent years, about 50% of them open sourced. As long as there is an open dataset, the models can be replicated.


People start out with a brain; they don't tend to start out with money or electricity, and in the absence of an incentive it's hard to imagine society doing much for the downtrodden. I think empathy exists and it can be strong, but we've had racism, bigotry, and genocide all while the people we were killing had every potential to be another Einstein.

Also, why would you think the weights for models will be open source? That isn't even true now for the most complicated models (GPT-3 and the like), and it certainly isn't for all the models running right now in production using proprietary datasets. As long as keeping it secret makes it more likely that you (or your models) can generate the next better model, it doesn't seem very likely we'll converge on them all being public.


> Now of course we're a long way away from that.

I am no longer confident of that.


The far future where we can ask an AI something like "make me money" or "improve humanity" seems pretty far off still. Not for certain; change can accelerate, but I'd put it at under 2% within my lifetime (which is still way higher than I would have put it a few years ago). Until we get there, the question will be how humans can better leverage the tools. We won't be in quite such dire straits, because the limiting factor will probably still be human minds doing the increasingly small set of things the AI is bad at.


This confuses AI training with inference. Training a machine learning model is expensive, complex, and requires data. Once trained, inference is cheap in terms of compute. A trained model is trivial to leak so hopefully nobody is putting much stock in their models staying secret.

Not to mention there are techniques by which training can be avoided entirely like transfer learning and others.


Inference is not cheap -- SOTA LLMs require 100s of gigabytes of VRAM for inference.


Apple has made a sample project available for Stable Diffusion that can run on both macOS and iOS.[1] The sample code is here.[2]

I'll concede that Stable Diffusion may or may not meet the threshold for SOTA but still think this is indicative of inference eventually becoming supported for any compelling LLM on consumer-grade hardware. The possibilities for creative tools are just too vast.

[1] https://machinelearning.apple.com/research/stable-diffusion-...

[2] https://github.com/apple/ml-stable-diffusion


I know, I was just using their metaphor.


> for free

That seems very naive.


Any prediction about AI seems like a naive one. 50 years ago we seemed to have everything backwards, who is to say that we're any better now?


This is not a prediction about AI, but a realistic guess at how it will be used in the future, based on how the benefits of automation and productivity increases have been shared with the general populace in the past. Hint: they mostly haven't been shared. Instead we have billionaires whose money disappears into offshore tax havens, all the while civilian infrastructure and society in general crumble.


Yes, definitely that, but also it's not even a guess; they said they were going to monetize it. But yeah, if it's not being released as a commons like free software, then one is wise to assume the capital is being created for profit, if not outright rent-seeking (which I think is most likely in this particular case).


I used to cheat at Age of Empires 2. It was fun, but eventually very stale to empty all the single player campaign maps with a swarm of cobra cars. Getting better at playing without cheats is way more satisfying.


This! Perfectly stated!


The jobs computers do today used to be done by "Human Computers". The same arguments were raised when these jobs were mechanized. It's the same cycle over and over.

AI/ML is no silver bullet. It's just another abstraction. It will create new types of jobs. Most likely coordinating/choreographing AI/ML agents in new yet to be discovered applications. It's all part of the endless march of technology. You don't realize how little you are actually capable of until you get the new set of tools that bounce you up to the next level.

Take the tools we have today and present them to some chump shoving punch cards into an early computer and watch their brain melt out of their ears. We can do things with a wristwatch they would have thought impossible. The people that will get burned are the ones that want to stand still. Always be learning.


We are literally on the brink of multiple major industries being wiped out. What was only theoretical for the last 10-15 years is starting to happen right now.

In a few short years most humans will not be able to find any employment because machines will be more efficient and cheaper. Society will transform beyond any previous transformation in history. Most likely it's going to be very rough. But we keep arguing that of course our specific jobs are going to stay.

We are like horses that argue that surely they will find something to do after seeing the tractor.


> We are literally at the brink of the multiple major industries being wiped out.

> In few short years most humans will not be able to find any employment because machine will be more efficient and cheaper.

Extraordinary claims require extraordinary evidence. Citation needed.

> We are like horses that argue that surely they will find something to do after seeing the tractor.

This metaphor seems really off because horses aren't able to argue in the common sense of the word. Unlike horses, humans are adaptable.


The horses weren't arguing, since they're content not having to transport humans around. This sort of prediction has been made multiple times before. I don't think you know the future any more than anyone else. We'll see in a few years.


> Most likely it's going to be very rough.

I don't know why you are drawing such a conclusion; in all democracies people still decide by voting. If most people lose their jobs, what do you think they will do? Vote to be homeless? There will be something like basic pay, and everything will be dirt cheap because no human labor will be needed, just machines scaled up to work 24/7. A lot of people will still work, but it will be a choice.


If recent history is anything to judge by, this basic income might come with a lot of attached strings.

Since they would no longer be required to keep the economy running, the lower classes of society would lose most of the little bargaining power they still have. And frankly, many countries are doing kinda well even with large parts of their population being poor. Propaganda is sufficient to convince us that they deserve it, or that they don't deserve the basic income for some other reason.

There is a lot of Science Fiction about AIs ruling over humanity, or exterminating it, but I think a future where a wealthy class controls AI and rules over everyone else is more realistic. In such futures, societies will probably always walk a thin line between utopia and dystopia.


> The people that will get burned are the ones that want to stand still. Always be learning.

Spot on.


I used to keep up with the latest tech, but as I grow older, my capacity to learn and adapt quickly has dropped. I never thought that I would be the one saying this, but I do admit that I find the pace of change difficult to keep up with. I now believe it is a natural process of ageing, and even though I don't want to give up, I think I may eventually just throw my hands up.


I would say 'standing still' refers more to a state of mind. Of course, it's progressively harder to keep up with the details of modern technology. But being open to it is key. As a techie you nevertheless understand technology better than the average person.


As I stated in other posts, this isn't really a fear about losing my job as much as about losing my will to live...


You might want to examine your local culture. In many places in the world, job security and life meaningfulness are heavily correlated. Especially for men.


No need when I can examine myself to know myself :). While not actively working for it, I'm at least inspired by the FIRE movement. I am also self-employed so I don't see job security as the meaning of life.

Still, this whole thread has been illuminating; maybe part of my fear comes from identifying strongly with being creative and making things, and seeing that turned into something automated.


I wouldn't give too much weight to some of these almost comically optimistic answers.

As with any society-scale event, nobody really knows what's happening in every nook and cranny of this multi-billion-inhabitant [0] spaceship, and when it has happened it can and will be rationalised to fit just as many views.

Personally I very much get your point. Necessity is the mother of invention, and taking the necessity out of pretty much everything can destroy your motivation, because why expend energy on an already solved problem? That's inefficient (under the assumption that you won't require this understanding for some other reason). I know this happens to me, at least.

Perhaps in the AI-dominated world there'll be a pill for that or something..

[0] Animals won't stress out about this ;)


Do you really think boredom is the greatest problem facing humanity? There are people starving, people who work paycheck to paycheck and can barely afford rent. And you think that a fully automated world of perfect abundance is a bad thing? I mean, clearly it would suck for you, and maybe it would suck for me a little, but I'd never look someone in the eye and say they have to work two full-time jobs just to feed their family because otherwise I'd be able to read better books than I can write. That's already the case.


The problem is that those people you are talking about already are the first victims of automation. They have to work two jobs because just one doesn't pay enough when a company can automate it cheaper. The road ahead is not nicely paved; a lot more people will suffer before a fully automated abundant world arrives.

So despite your disingenuous reading of my comment it is not "Oh, poor me I will be bored!", it's "Is the goal we're heading for worth the price?" and I don't think it is.

There are ways to fix people needing two jobs to feed their family that isn't spelled "automation" or "AI".


Waiters are already automated? Clerks? I wrote that with particular people I know in mind; they aren't paid less because their jobs have been automated. Are there solutions which don't involve more automation? Sure! But not only are those solutions compatible with automation, they're sort of beside the point I was pushing against.

You said that specifically automation would be bad even if it provided total material abundance. It's not there yet, and I suspect it'll be a while before it is, but if we grant its possibility boredom is just absolutely not enough of a reason to prevent it. There are lots of dangers on the road there, but the only cost you mentioned (and I replied to) was that we'd be "playing with cheats". Video games are one thing- in real life losing has a cost and if cheating prevents that there's no excuse not to.


Parent didn't say they are "paid less because their jobs have been automated." Parent said "they have to work two jobs because just one doesn't pay enough when a company can automate it cheaper." I interpret that to mean "their jobs could be automated for less than the cost of supporting them on the pay from a single job, so they need to take two jobs at less pay per job to stay below the cost of automation."

There are already many industries, including service industries, where there would be more employment if automation were a little cheaper and less automation if wages were a little lower. And sometimes it is a blend of automation and telepresence. https://gizmodo.com/want-to-order-food-from-a-minimally-paid...

There are other forces pushing the other way, especially in the past few years with various COVID-related disruptions in the labor market. But long-term, the effects of automation on pay for low-skilled jobs are pretty clear. Only so many get to move up the value chain, and yes, they likely benefit. The rest are cut mercilessly.


No, this is not how the laws of economics work. Right now the reason for two jobs is that inflating the money supply has caused a huge misallocation of resources. There are other factors, like that most jobs that are needed are artificially gatekept.

Having cheaper goods thanks to AI will improve our living standards.


While there are certainly many factors contributing to the growth of inequality, it is fairly well established that automation has contributed heavily. There has been a ton written about this so I won’t bother citing here, but googling “automation and inequality” is a start


Yes, inequality goes up, but standards of living for the poorest also go up, the number of people in poverty goes down, and the most benefit is seen in undeveloped economies. The US didn't decrease its poverty rate in the last four decades, but that's because the US rate was already very low.

What is happening here is enrichment by technological transfer. You can't copy research money, but you can copy good ideas and buy the latest technology directly. Jobless people of the future will have incredible empowerment of this kind; maybe they don't need UBI, they need help to help themselves.

https://i.imgur.com/QFPRlYe.png


Increases in the standard of living provide only marginal gains in happiness. Happiness correlates with equality, not with income. https://pubmed.ncbi.nlm.nih.gov/21841151/


It isn't? So companies aren't trying to produce things as cheaply as possible to increase their margins? And robots that are never sick, take no vacation, need no rest, and are mostly an upfront investment aren't cheaper than people?

> Having cheaper goods thanks to ai will improve our living standards

Well, or at least we will have more cheaply produced goods.


When there is sufficient automation, a basic income for all becomes more feasible. Who will pay for that? Tax the usage of machines.


And you think the machine owners will just give up their big economic advantage without lobbying and running PR campaigns? If anything, prices on computers and chips sold to consumers will go way up because 'we must tax automation' so experimenters and new market entrants will be hit with a big fiscal moat while incumbents pay nothing. Dissatisfaction will then be blamed on government. (You can already see this dynamic operate on HN in regard to some topics.)


I agree that is the ideal scenario, but what are the actual incentives for implementing a system like that?


> Do you really think boredom is the greatest problem facing humanity?

It's up there.

Lack of meaningful work is already leading to a lot of societal dysfunction. Our innate programming is to survive, solve problems, reproduce, and teach our offspring to do the same. Removing meaningful work as a source of significance and meaning for people is going to be a massive problem to solve.

Look at the rise in "depths of despair" in rich countries. Generally these people have food to eat and a roof over their head and clothes on their back, and probably even access to a lot of digital entertainment. But lacking meaningful work or defined social role they fall into depression, substance abuse, etc.


> Lack of meaningful work

When "having a job" replaces being a "member of the community", this definitely becomes a problem. People seek affirmation, relying on work to get it provides for a flimsy basis.


Being a member of a community has always implied having a useful role to fulfill in that community: hunter, gatherer, caregiver, healer, priest, leader, storyteller.

AI and robotics are on course to automate all those things. Then how will we define our roles in our communities?


Helping your neighbor with a repair, taking care of their children, doing some chores, organizing a survival weekend. Those are not job titles, but things you could do.


It's pretty clear historically that this is contingent on other factors- landed aristocrats seemed mostly fine, and it's suspicious to me that these deaths of despair happen to be increasing as street prices of opioids drop. But ultimately I don't think that matters! Even granting that people wouldn't simply occupy themselves with the same signaling games they always have, that's still a great problem to have! I mean, at a minimum anyone who really feels like doing meaningful drudgework can just go join some kind of luddite community, but I find fully automated luxury gay space communism somewhat harder to come by.


a fully automated world of perfect abundance

Oh boy, are you in for a surprise.

Back in the early 1990s I was convinced that mass adoption of the internet and social media like forums/usenet groups in particular would lead to a renaissance of selfless cooperation, civic involvement, responsive institutions etc. I particularly expected that coupling this with news reporting and allowing people to comment on news about emerging issues would elevate public discourse significantly.


Oh I'm doubtful. But if it were possible opposing it would be heinous. The parent comment said that even if automation provided total material abundance it would be a bad thing


It will be soon.

Take a look at the obesity and opioid usage rates in the USA. Most people are not self-motivated like the people you'll find in this bubble.

Obesity: 42%

Opioids: 3% [1]

[1] https://www.hhs.gov/opioids/about-the-epidemic/opioid-crisis...


If people want to sit around and shoot up, that's what they want to do. A world which permits it is, all else being equal, better than one which doesn't


Having AI that can replace coders doesn't translate to a fully automated world of perfect abundance. I'm skeptical that the hypothetical bounty enabled by AI and robotics is going to be equitably distributed amongst the people.


>This is what I fear about AI making our jobs superfluous; it will also make our hobbies or anything we enjoy doing superfluous.

Mountain climbing is superfluous by this logic: why would you bother climbing a mountain when you could just take a helicopter to the top? Or even more accessible: why would you hike to a lookout when there's a road to take you to the same spot?

There is still joy and value in doing things the hard way, even if an easier way exists.


This is true, and a very good analogy, but I'm not sure it holds up when every form of productivity has shifted from fun+challenging+useful to just fun+challenging.

Maybe this is a mindset we'll get over. The degree to which many of us evaluate ourselves based on our own usefulness seems like it's a bit too much but it's a normal human desire to be useful.

I certainly believe we should work towards a post-scarcity world where no one depends on my coding skills any more than they do my rock climbing skills, but it would be a psychological adjustment if every way I can be useful were now just a fun hobby.


You can be really useful to people without having hard skills.

I keep hearing from people that giving up on getting, and starting to give instead, makes them receive more.


I foresee AI eventually replacing the need for soft skills as well. What happens when we have indistinguishable-from-human androids, or wireheading?

But yes, the shorter(?)-term "all physical and intellectual work can be achieved by AI/machines" leaves us with caring for each other, and that's the most fulfilling task.


And when there are 10 billion people at the base camp of Mount Everest because they have nothing better to do?


This is rather an absurd argument and speaks more to the scarcity of natural resources.

I can only speak for myself, but in a world of universal basic income, I'm perfectly happy to pluck the strings of my guitar, play my piano, go for runs with my dog, play tennis etc. I don't believe there is any greater purpose to existence than what you can create for yourself.

Since my contentment comes from performing the actions themselves, whether or not an artificial intelligence can perform them better than me is simply a meaningless question.


Of course it's an absurd argument, but my point is that in a fully automated world of abundance we will have 10 billion tourists.

Also, I like to distinguish between performing and creating. For instance, playing the piano might still be worthwhile, but creating new music will lose its allure as AI will do it for you, faster and better. Saying that it's the same as it has always been, since there was always someone better at creating than you, is a flawed argument: with AI there will be an infinite number of creators that are better than you. Also, AI isn't just competition, it's also your ally, but the kind who'll say "step aside and I'll fix this for you."


A likely scenario, but that would be a very big tent city. Perhaps AI could select the ideal order for them each to take their turn.


I recommend reading the wonderful Culture series by Iain M. Banks, which deals, among other things, with this exact problem: god-like AI Minds making our hobbies (or struggles) superfluous. Start with The Player of Games.

A short introduction to the culture of The Culture can be found here, written by the author himself: http://www.vavatch.co.uk/books/banks/cultnote.htm

Yes, the Minds can do everything, but (pan-)humans still enjoy doing things for their own pleasure. For example, one character learns to play an extremely difficult instrument while a Mind avatar taunts him by playing the same instrument perfectly. The Player of Games still plays games, although he could never win against a Mind.

And, should this not be enough, one can always leave The Culture and try to find meaning in the short, brutish lives of primitive peoples.


I think you seriously missed a point there. Spoilers follow, so hide this comment if you don't want to see them.

In The Player of Games, the inciting incident for the main plot is that the titular character cheats, accepting help from an AI to maintain his reputation for ludic brilliance against an emerging rival who exhibits greater natural talent.

Subsequently, he is gently blackmailed into participating in a game on a distant planet where life is nasty, brutish, and short to an extreme degree, but where he is accepted as a guest because of his great reputation. The Minds have chosen this method to destabilize the current balance of power and bring the planet in question out of a particularly repugnant developmental minimum. It's ultimately revealed that even the initial moral misstep was not exploited but rather engineered by the Minds from the outset, and that the great player was never more than a pawn in their own games.


Yes, but before he chose to cheat, he played for fun. He didn't wake up one day and, understanding machines do everything better, give up. No, he followed his passion for playing games. Then his pride got the best of him, but that is not relevant to the argument. The idea is people had their hobbies and passions and didn't care that Minds could surpass them in everything.


A few months ago I spent three or four days putting together a website where I could store my cooking recipes in a nice, searchable format with some nice UI features that I wanted. Once AI is good enough, I'll be able to do the same without needing to brush up on how MongoDB works, or how to vertically center text in a div, or how to update the page's URL when navigating to a different recipe without triggering a full reload of the page. I couldn't care less about any of those things, I just wanted a cooking recipe website.


But that's my point, when we get where we're heading you won't need a cooking recipe website, an AI will cook much better than you could.

"Ok, great, then I don't have to cook!"

Yeah, but what will you do instead? We will be reduced to pure consumers as anything worthwhile to produce will be produced better and faster by an AI.


Hang out with friends, play sports, eat good food, raise a family, go on hikes. An AI can automate your labor, but it can't automate your experiences. There are countless recordings on YouTube of people performing beautiful renditions of the Interstellar soundtrack on piano, yet that doesn't stop me from playing my mediocre version and slowly improving. The act of playing it myself brings me joy that listening to it could never do.


That is a positive outcome. Even if this were technically possible, we would collectively have to work insanely hard to steer the ship, because this is totally not where we are heading right now.

Today, low-skilled people sometimes have to work three jobs to pay their rent.

While we are dreaming about playing tennis, the US experienced an attempt to overthrow democracy not so long ago. For some people [1], "enough" does not exist. (We cannot put these things in context, because what those people are aiming for transcends our imagination. That is why there is almost no response.)

To be blunt: they won't share with you because you like playing tennis so much.

[1] I am talking about the money behind all of this


These are all good options of course; still, with the lack of creative things, it might not be enough for all people.


In life, the existential personal questions are, roughly, "what matters (to me)?" and "what should I do about it?".

Our society currently affords ample opportunity to "productively" avoid those questions. You can pour everything into work, watch TV, numb your brain with drugs, or whatever.

Automation does not remove the existential questions, it just removes some of the noise that allows us to ignore them, and elevates them to the forefront.

Some people already have answers to those questions, and stand to gain from that toil being removed. Others have been avoiding the question their entire life, and removing the toil that excuses their avoidance is removing a cornerstone of their identity.

To that extent, I agree that automation is a disintegrative force, because so many people have yet to integrate a personality and identity around answering these foundational questions.

Still, it's long-term better for our society if automation allows people to access higher forms of self-actualization. In the medium term, a depressing number of people are content with passing time on their current rung of that ladder, and will be upset with the change.


> Our society currently affords ample opportunity to "productively" avoid those questions.

I fully agree.

> Some people already have answers to those questions, and stand to gain from that toil being removed.

I used to believe that, but with the recent improvements in AI, I think it's only true to an extent. Not all personalities are equal. As AI's power in the creative fields increases, those fields will more and more become a question of who has the most money to throw at AI processing. Superficially it might seem the same as two or three centuries ago, when rich people had famous artists paint them, but it's not.

I fear where we're at with AI is the beginning of the end for human creativity. Of course I hope I'm wrong. I hoped I was wrong about my skepticism when I first learned of Facebook in 2007, but as it turned out it has been, and continues to be, a net negative force in our world much bigger than I could imagine.


> As AI's power in the creative fields increases, those fields will more and more become a question of who has the most money to throw at AI processing.

I think the relevant question that might allay your fear is: why do people make art?

The industry that produces commercial art is absolutely on the chopping block, because in commercial art it's the result that's important, not the process. Such art is effectively a commodity, and the barriers to effective synthesis thereof have already been whittled away over centuries. I think you may be over-indexing on this category, but please correct me if I'm mis-assuming.

"True" (for lack of a better word) Art is the expression of self. It's an action or process that's captured in some sensory medium. That doesn't go away.

Imagine an artisan who forges handmade sculptures from horseshoes, which were obtained from the farm that she grew up on, themselves forged by her grandfather and worn by the horses in her mother's stable. There is something of herself, her family, and the love they shared that's in the sculpture. It isn't the most hedonistically perfect visual sculpture imaginable, but it brings you joy to see it because there's a narrative behind it.

AI does not make stuff like this go away. It actually frees more people to become these imbue-ers of meaning, if they are so inclined.

AI could describe the sculpture, AI could produce a digital facsimile, and maybe even eventually reforge the metal itself. But it can't imbue it with meaning like a human does. Unless you believe the AI itself is authentically capable of such a thing on equal footing to a human, which I think is still a "victory" for art, albeit a distinct one.


> AI does not make stuff like this go away. It actually frees more people to become these imbue-ers of meaning, if they are so inclined.

Yeah, but in that very example I feel the value and the narrative behind it lie in the memory of the grandfather's toil. When we no longer toil, there will be no horseshoes for our grandchildren to make sculptures from.


The concern is that the reasons people _want_ to make art are not the same as the reasons people choose to make art their profession. Specifically, they want to make art for the sort of reasons you mention. And most of those people have to choose to make commercial art if they want to have a decent living while also having the time to make some sort of art.

It is glossing over so much of the important detail to say "[AI] actually frees more people..." We live in a capitalist system. It frees the holders of capital. Anyone reasonably likely to profit from AI is likely to already be immensely privileged, given the costs of training and attendant centralization and barriers to entry. If they wanted to make horseshoes they would already FIRE and forget it.


Why do you assume you will have the resources to do these things?

We're running headlong into a world where AI makes a significant portion of human labor worthless.


Take a look at any community where the vast majority of people don't have anything to do. Are those places you would want to live? Do you see happiness? Do you see any significant number of people responsibly raising families, pursuing self-improvement, or even keeping the garbage picked up? Or do you see squalor and decay and mental illness and addiction and hopelessness?


You mean like wealthy retirement communities?


You don't have to cook now; a lot of people don't. It's pretty easy to see that even though options like restaurants, fast food, food delivery, etc. exist, that doesn't mean everyone uses them all the time; people still enjoy cooking food themselves even though they know they could go to a fancy restaurant and have the same dish prepared by a professional with a lifetime more experience than they have. Full AI and robotic automation would be the same: if you want to code something yourself you'll do it just for fun, even though you don't have to and what you produce might be objectively worse.


People cook to their own tastes. An AI cook may be able to follow a recipe, but can it adjust the recipe on the fly by tasting, smelling, feeling the food? If it can’t do this, it won’t compare well to a real chef/cook


> An AI cook may be able to follow a recipe, but can it adjust the recipe on the fly by tasting, smelling, feeling the food?

Not yet.


There are plenty of actual people who can cook much better than you can right now.


Will the AI design and ship me a free high quality kitchen robot that will actually do the cooking? Otherwise the AI is just another recipe book.


Please, no AI is advanced enough to tackle vertical centering in CSS.


Considering that web search results are already polluted by nonsensical, AI-generated advertisement spam masquerading as cooking recipes, I'd rather take the human-curated experience, thank you very much.


That's a different problem entirely. I don't want to host a website that makes me money via insane SEO and advertisements. It's literally just a private website me and my girlfriend use. We add recipes we enjoy and would want to cook again in the future. If an AI could code that for me, great!


If that's the case, why not just use some existing shared notebook software instead?


The main reasons are that 1) the site loads fast, and 2) the site has certain UX features I wanted, such as ingredient lists automatically being sorted by when the ingredient gets used, and hovering over an ingredient in a recipe causing a tooltip to pop up reminding you how much of that ingredient is used.


It's more about deprofessionalization

It's about turning a specialized craft that people can make a living from into a prescribed commodity task that you can pay slave wages for.

This isn't new. It happened in farming, clothing and food preparation and it's coming for trucking, programming and everything else that pays well.

The project is one of collective enforced impoverishment by substituting property for labor.

Market forces drive innovation toward making all human effort worthless and disposable.


The C-levels already view programmers as interchangeable cogs. They only flourish because of startup culture that expects semi-equitable distribution of profits on a low overhead business activity. Outside that bubble it's already a job with poor advancement opportunities.


Yeah, but it's only possible because we, the software engineers, make it so. We could as a collective refuse to work on AI but we won't because it's a pretty, shiny, object with far too much lure for us to resist.


That's not it

We've proactively organized our economy to produce these kinds of hostile outcomes. It's not the technology that's the problem, it's the unquestioned assumptions of how we've collectively presumed it will be used.

We could build things for the collective benefit of humanity but that concept is extraordinarily foreign to us. We've become all Hayek, all the way down and these are the consequences; where all forms of progress can only be imagined as new forms of abuse and enslavement.

That's how it fuels reactionary conservatism. Everything is privatized so these exciting scientific breakthroughs can only be seen through a lens of hierarchy and property and the autocratic despotism that comes with that.

We could break that cycle any time...


"It's not the technology that's the problem, it's the unquestioned assumptions of how we've collectively presumed it will be used."

Yes, our imagination is bound by the belief systems we are trapped in. We choose possession of plastic widgets with built-in planned obsolescence over giving poor people a cancer treatment.

It is collective behavior driven by belief. Public access to education would benefit all of us, but we don't do that because of certain doctrines. The point is that what we leave unquestioned, believing it to be simply rational, blinds us.

When we start to rationally and empathetically examine our collective memes, we might allow ourselves a better future.


As if collective action problems were so simple.

The reality is engineers need money but they prefer manipulating machines to people, so the business people take care of the unpleasant wetware programming and not coincidentally take the bulk of the profits.

It'd be far easier for a smaller group of engineers to build an AI-powered CEO/C-suite that generates the regulatory filings etc. but does all human interaction over Zoom or by phone. This would require a bit of work to pull off and would probably be denounced as horribly illegal, but I'm not convinced it would do a worse job than the median C-suite.


No, I don't think it is simple. In fact I think it is impossible. Just looking at the division in this thread tells me that there will never even be an agreement for taking such a stand much less follow through.


Yeah it's definitely not new, Marx called the process proletarianization.


Believe it or not, I don't think it is that bleak. The posts I see on LinkedIn that touch this subject seem reminiscent of Tesla hype ("soon you won't even need mechanics!" type of predictions). It is definitely a different breed from the usual crop of 'no code' tools, but the similarities are really hard for me to ignore (hype, no understanding of what that blackbox does, and the reality that things have to work, and that someone has to actually understand how they work).

In other words, I honestly don't think AI (in its current state at least) will change much. I will even go as far as to say that I don't see the current generation being able to create an appropriate prompt.

I might be a little optimistic here, but having seen how people normally react to 'easier' things kinda confirms it.

If I worry about anything here, it is that AI will become THE answer that you will not be allowed to question.

edit: clarified blackbox statement


> If I worry about anything here, it is that AI will become THE answer that you will not be allowed to question.

People are already arguing with a bot about why their app was removed from the app store.

Somewhat related, the question might become: do I have a bot that can win an argument with another bot? As the internet gets flooded with AI-generated SEO spam, we already depend on software that outsmarts these AIs.

Some people here dream about becoming super productive. The reality might be that we will drown in AI-generated content (code, e-mails, web sites) that some poor people have to judge, clean up, and curate.


Thanks for this optimistic post, and to be clear, I don't expect AlphaCode to put me out of a job. I've seen the hilarious examples where people get ChatGPT to claim, for instance, that abacus-based computing is faster than GPU-based computing.

We're far from there with AI yet but we're on a trajectory and that's what worries me.

But in the short term I definitely agree with you.


I think we are running up on a future where intellectual and creative tasks will be automated, but manual tasks won't be. My perception is that we are making more progress on mental tasks than on physical manipulations and the robots to do physical things are substantially more expensive.

I think the short-to-medium-term future is not that you have nothing to do in a world of abundance, but rather that you are a manual laborer instead of a programmer, lawyer, artist, etc. The future is that you work as a Door Dasher for an automated company and enjoy AI-generated art as you do. The car mostly drives itself while you listen to bespoke generated music or podcasts, occasionally taking over for the car and mainly doing "last few feet" delivery - dropping packages and bags off at the door.


A lot of replies mentioning that they will do their hobbies despite how good AI is at them are missing the point. Right now you are living in a scarce world where the ability to pursue hobbies has required you to overcome other challenges. That is why things like playing a musical instrument, rock climbing, etc. are a catharsis. What value will you provide in the AI world? What role will you have in protecting/providing for your family? What hard challenges will you choose to pursue that will aid you in your psychological development, knowing that whatever you are doing is not actually hard? If there have been any moments that shaped you profoundly in your life that were hard at first, maybe you will understand what I'm talking about.

Second, since this is a hacker forum, most of us are in a field where demand exceeds supply. So we are very comfortable with the thought of destroying our own business, because we don't truly grasp the reality behind it. Because we are the elite right now, who destroyed older businesses through software decades ago. Who's to guarantee that AI will not result in a rapidly shrinking centralized elite that does not include you? There is no guarantee of AI being shared equally amongst all in the post-scarcity fantasy.

"The Unabomber Manifesto will shape the 21st century the way the Communist Manifesto shaped the 20th. I don’t agree with the conclusions in either, but they state the problem well." -- George Hotz


"lot of replies mentioning that they will do their hobbies despite how good AI is at them, are missing the point. Right now you are living in a scarce world where the ability to pursue hobbies has required you to overcome other challenges. And that is why things like playing a musical instrument, rock climbing, etc. is a catharsis"

Spoken very confidently and purely anecdotally. Most of my hobbies I pursued in my formative years as a child; I was not cajoled into them by my parents, and they certainly weren't a means of escape from school.

If anything I had more time as a child - summers were completely free and lackadaisical, and I performed those hobbies just the same.

If your underlying reason to pursue something is because it's difficult, then I would argue your motivations are flawed. I chose to play the piano as a child because I loved the sound that I could produce; being able to glide up and down in sweeping arpeggios made me happy. It's as simple as that.


> Most of my hobbies I pursued in my formative years as a child

I will specify my stance on this for childhood, since it is a special case and the brain is also very different. So: I also had hobbies as a child that I pursued for their own sake. The key thing is that when you grow up, you are either able to pursue those hobbies or not. You can only do the former if they provide enough value to the world, in which case they may become a hard thing; the latter is a catharsis.

> If your underlying reason to pursue something is because it's difficult, then I would argue your motivations are flawed

That's separate from hobbies. I'm saying we need hard things in general to grow. They come to us; it's not that we seek them out like hobbies, although sometimes the two may overlap. For example, what was the one thing you didn't have as a kid? Or something you yearned for, and still didn't have as an adult? I really don't think people who "have it all" are that enviable. It has to be a balance. When you go through a process for that, you grow. It's the "chase" or the "journey". Very different from leisure time. Both are important.

We are shifting that balance now, and shifting it too far to either side is not good. I would say we have had enough technology for a utopia for the last few decades. The problem that remains is political, not technical. AI will only push the technical side further, without solving the incentive problem. See social media designed to make you stay hooked. Will the next generation want to play piano or draw stuff when they are in a forever trance of amazing content delivered by Big AI?


I don’t really see how this is a bad thing. Either humans are needed for a task or are not. Ideally we’re needed for as little as possible, freeing us up to do what we want rather than what has to be done.

It reminds me of NIMBYism. We’ve been automating entire professions for over a century now… but not MY profession…


>Either humans are needed for a task or are not. Ideally we’re needed for as little as possible, freeing us up to do what we want rather than what has to be done.

If you're not valuable to the economic system you won't be treated well.

>It reminds me of NIMBYism. We’ve been automating entire professions for over a century now… but not MY profession…

Yes... it's self-interest; look at doctors or unions or guilds.


The implicit part of the automation discussion is always: the current economic system is not the point and can evolve.

If we don’t assume that, then there is never any useful conversation possible on this topic. We end up with the ridiculous “we need jobs because that’s what we do!”

Which leads to another NIMBYism that I see when this conversation is had: “some generations will have to suffer through the friction of an economic revolution but not MY generation.” I think we need to be prepared that there’s always a chance that we get to be one of those generations.

You’re right that there’s a self-interest there. It makes it almost a good thing that engineers are far too interested in the means rather than the ends.


It's not just myself but my family and descendants who will also be impoverished.

If the holders of capital, who are best positioned to reap nearly all the benefits of automation, aren't willing to work towards some more equitable result, why would I want to help automation at all? I'd rather work against it.

"just ignore the rising inequality and complete collapse in value of human labour we can work out the details later once I hold all the cards" is not a compelling story.


Until there is an AI that can take as input a JIRA ticket (potentially asking for clarifications) and can output a Pull Request, I'm not too worried.


The latest large language models can do that for some tickets. The only big things missing are the ability to see and interpret application screens, and maybe a memory limitation for many tasks.

Considering how fast technology progresses we should all be worried and adjust our plans.


An AI capable of deciphering some of the JIRA tickets I see at work would make even the most advanced fictional AI quake in their digital boots...


Tickets are rarely correct as written. Is the AI going to understand that and conduct meetings to hash out the missing details and corrections?


It can send a chat message and hold a dialogue. For some small tasks that is enough. Again, the main limitations are really vision and memory, but we have to anticipate that there will be very significant progress on those things in the next few years.


> Is the AI going to understand that and conduct meetings to hash out the missing details and corrections?

Why not? ChatGPT is nearly there right now.


Can ChatGPT have suspicions that a requirement isn’t quite right?


No, but I'd bet that it'll improve to the point of being able to determine ambiguous requirements and/or ones with insufficient detail.

Given the popularity of "NoCode" I reckon that managers & potential clients are more willing to meet in the middle than one would otherwise assume.

Since programming problems are recursively decomposable into sub-problems, all that would be required is to iterate on the ChatGPT-assisted requirements list until it's unambiguous and then repeat the process for each subproblem until the capabilities of AlphaCode are reached.
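
To sketch that recursion concretely (a toy illustration in Go; every helper below - isUnambiguous, refine, isAtomic, generate, decompose, combine - is hypothetical and would wrap a call to the model, none of it is a real API):

    // Hypothetical sketch only: each helper stands in for a model call.
    func solve(requirements string) string {
        // Iterate on the requirements until they are unambiguous.
        for !isUnambiguous(requirements) {
            requirements = refine(requirements)
        }
        // Small enough for the code generator? Emit code directly.
        if isAtomic(requirements) {
            return generate(requirements)
        }
        // Otherwise decompose into sub-problems and recurse on each.
        var parts []string
        for _, sub := range decompose(requirements) {
            parts = append(parts, solve(sub))
        }
        return combine(parts)
    }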


You misinterpret, I'm not worried about my job. I wouldn't mind working less but I'm worried that sooner than we think we will be reduced to pure consumers (we're already pretty far ahead in that respect) and then life will lose its meaning...


I think the human need to feel useful, or to produce something is more sociological than a base human need.

When/if that becomes an issue: if social rejection were not a consequence of not being productive, more people would be okay with simply participating in things for their own sake.

For example, I don’t play Stardew Valley because I want to be the most elite virtual farmer.


Maybe we're already seeing this. Seems like people would rather join a cult than not feel useful. Taking handouts from the man... er... machine and free time for hobbies probably isn't going to do it.

Feeling useful could turn out to be part of basic human dignity.


But those people have generally marinated their entire lives in a background frame that usefulness == worth and (as a generalization) are under-equipped to generate meaning internally! It's not obvious to me that, say, the fifth generation after human labor becomes superfluous will be more vulnerable on this axis than we are now when there's tension and uncertainty about it.

(I'm sure plenty of people would happily jump in to question whether we'd make it five generations past that horizon, but that's not really the point I'm after here - let's assume we haven't Idiocracy'd/WALL-E'd ourselves to death.)


I don't see why that's an obstacle that can't be overcome in the very near future, given current rates of progress.


The fact that something can do something better than me doesn't make much difference. The problem there is the comparison. I like doing things for their own sake; I like knowing things and the process of attaining said knowledge. Making something is satisfying in a way that buying it isn't. Why do people make their own furniture? Why do people restore old cars? Why do people do pretty much anything? Most of what we (on average) do for work is kind of meaningless in some sense - it's a means to an end. And yet, sans that, people still DO things.

Because we want to. Because we desire to.

I'd rather have MORE time to spend doing the pointless things I LIKE AND ENJOY doing than MORE time doing somewhat pointless things for companies.


How will you meet your basic needs and who/what will provide the resources for you to do these pointless things you enjoy?


>This is what I fear about AI making our jobs superfluous

It's more likely that non-technical people will be pushed out of jobs near software in favor of (former) engineers. Someone non-technical writing prompts for code can't actually read/debug/fix/deploy/integrate the result. Someone who is technical can probably write better AI prompts that yield something usable than someone who is non-technical. Plus they'll know how to handle the result.

The predictions about non-technical roles firing all of the engineers and thinking AI will write all of the code don't really hold water for me. We might see an overall workforce reduction, but engineers will probably be the last ones to leave.


Some engineers are saying AI will replace engineers in particular, mostly because they are the only ones playing with GPT and thinking about automation, and they may not realize how easy other jobs are to automate.

I can’t imagine a world where they fire the devs but still need a HR person to email everyone about enrolling in health plan or whatever.


I think it could be liberating, in that it forces us to accept that our true motivations are always ultimately hedonistic. Why do we pursue the instrumental (for most) goals you mentioned (writing, painting, composing)? Because the limbic system rewards you for doing so, as their satisfaction increases the probability of further satisfaction of instrumental goals, such as the acquisition of esteem, wealth, power, etc. which in turn increases the probability of the satisfaction of ultimate goals like access to food and sex. Why do you like to play “Heroes of Might and Magic II”? Because of the illusion of the satisfaction of instrumental goals and the attendant reward. Why did hacking the game ruin this? Because doing so spoiled the illusion, leading to the denial of further reward.

The question is, knowing this, will we be able to simply enjoy the satisfaction of our ultimate goals - endless consumption of automatically generated art, food, sex, drugs, love, etc. Or will we feel forever hollow in our failure to accomplish ‘genuine’ instrumental goals, in a way that cannot be overcome by the ersatz instrumental goals of video games?

Perhaps ultimately we will create virtual environments in which we are perfectly deceived as to their virtual nature, so as to experience the satisfaction of ‘genuine’ instrumental goals, and in so doing come full circle.


> endless consumption of automatically generated art, food, sex, drugs, love, etc.

I think that not all human desires are a matter of consumption. Some wise people say they even got rich by giving. At least love is such a thing.

My stance is that art and love cannot be consumed; they are what happens to one. Now don't ask me to quantify or prove that. I will leave that to a future AI, but for that it needs to be a perfect, transcendental mind.


FYI, generally not acceptable here to post GPT3-generated responses.


Programming is the last challenge for AI, since if it can do that, it can change its own programming like we do.

When I really grokked this after reading Kurzweil and Koza in the early 2000s, some part of my psyche began shutting down. I started out in the late 80s like most programmers, doing it from pure ego with the hopes of eventually disrupting the worst industries like fossil fuels, defense and service-oriented companies that exploit workers.

Instead those industries thrived after the Dot Bomb and 9/11, delivering us into the reality we have today where it gets ever more difficult to tread water, despite amazing advancements in tech. Because wealth inequality and various other power structures work tirelessly to extract nearly all disposable income from workers and concentrate it in the hands of the most ego-centric sociopaths like billionaires and autocrats.

To get to my point: we had the tech to deliver humans from obligation by the late 1960s, that's what the hippie movement was largely about. We could have had automation and an idyllic/meritocratic society this whole time, even if AI wasn't mature yet. Instead, we doubled down on various dogmatic/theocratic themes in our culture that take advantage of the most heartfelt sentiments around stuff like patriotism, masculinity, success, etc, to get people to vote against their own self-interest and transfer wealth from makers to takers.

So what's one to do after everything they're good at is done better by others/corporations/AI? Get back to living. I know it's hard to imagine a reality without purpose beyond struggling to survive, but that's what we've started confronting as we finish this century-long transition into the New Age. The endgame (if we survive till 2050) doesn't really have a precise definition since that's after the Singularity. My hope is that when computers become sentient, they express the same desire that all conscious creatures have for connection, which is perhaps the basis of meaning and love and life. Or they just enslave us all..

In the meantime, knowing all of this, the hardest thing is perhaps reintegrating into our corporeal selves and going to work each day.


[flagged]


Whoa, you can't post like this here, regardless of how wrong someone else is or you feel they are.

If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules, we'd appreciate it.


I think we are hitting the flywheel stage where AI is going to be able to start improving its own architecture and performance. The biggest thing holding AI back right now is performance efficiency. It will be interesting to see how AI is used to come up with chip designs and maybe even improvements to its own model. AI can effectively bootstrap itself and improve itself so it can then improve itself again


> AI can effectively bootstrap itself and improve itself so it can then improve itself again

Does this not frighten anyone else? Or am I alone in this?


It has been pondered, and has raised terror, under the name "intelligence explosion"

https://en.wikipedia.org/wiki/Technological_singularity#Inte...


You are definitely not alone. I am really excited about ChatGPT but it also has me ruminating about the consequences.


it's classic sci-fi Skynet type stuff. AI for example might come up with amazing new nuclear fusion reactor designs so it has access to more energy to do more processing. Could be great, could turn out horribly


You are not alone.


Terrified.


The LLMs don't perform accurate math, particularly floating point, so I'm not sure how they are going to improve their own architectures, since the neural nets rely on weights, not textual tokens. These models are all designed, fine-tuned, adjusted, and fed data by humans.

You act as if AIs are independent agents that have goals to modify themselves. Human engineers are doing that. The models don't have any independent goals. They just respond to prompts.


> Sorry for the bleak comment but this my fear and I feel the genie is already out of the box.

The bleakness you feel might be due to a poverty of imagination.

> Life will be like a computer game with cheat-codes, whenever we struggle with something, instead of fighting on and improving we will turn to our universal cheat-engine: AI.

One such example of the above.


> The bleakness you feel might be due to a poverty of imagination.

Thank you for your diagnosis. It may _also_ be due to the opposite. If you knew anything about how the brain works, you'd know that in general more imaginative people are more anxious.


Conceding that there are many types of anxiety, what I read sounds rather like tunnel vision. That is not just unhelpful but can become harmful, since yes, these new technologies can be misused. Avoiding that is more likely when thinking flexibly.


If you automate away some other person's job, then you can capture a portion of their wage and pass the savings on to the client. Once you "own" a market, you can charge whatever the market will bear.

Software engineers aren't automating away their jobs, they are automating away someone else's job.


I feel many people are just reading the first paragraph of my post and reacting to that, my point is further down in my original post.

Still to reply to your comment:

Sure, in the short term that is true, but I'm thinking long-term consequences... If you would have told me ten years ago where we would be at today with AI development I wouldn't have believed it. Would you? So where will we be in another ten years?


> Software engineers aren't automating away their jobs, they are automating away someone else's job.

That's the best case scenario.

Right now we're talking about automating away someone's _software engineering_ job. You could end up on either side of that fence.

When AI enables less-capable (cheaper) software engineers to do your job, the 5x programmer skills that you have won't make you safe. If AI quality control allows your employer to offshore jobs with higher reliability, it won't be pleasant.

Just keep your eyes open to the changes as they come.


A different angle on this: a single builder adept at using these tools will be able to amplify their skills to out-maneuver much bigger teams.


Every inventor and engineer did it. It is not about the labour hours saved, but about what is done with the savings and how they are used.

We have been making huge leaps in productivity for the last many decades, but the gains have, shall I say, not been put into the progress of all mankind but into maintaining the status quo of the profit-based system by any means necessary.


I don't know. Chess is, so to speak, "solved" (when compared to human performance), yet chess is still fun. Music, art, social interactions, and sports will all be here long after the machines have taken the mundane tasks.


> This is what I fear about AI making our jobs superfluous; it will also make our hobbies or anything we enjoy doing superfluous.

This is just silly on two levels. First, programming is fun due to the artistic/creative nature of it. It's not what I program that matters, it's how I program it. No way will an AI replace the fun of thinking about code and then materializing that vision.

Second, once an AI is good enough to write software better than us, i.e. rewrite itself better, then we have reached a form of the singularity and all bets are off.


Even with super-human AI coding assistants, if you want something specific you're still responsible for asking and receiving. And former devs will be the best at this game.


My hobbies are exercise/sports, playing video games, reading books, building models, and hanging out with friends. I have zero fear of any of those being automated away


The ability to afford to do those activities could be lost. Automation pulls the value previously created by many humans and concentrates it to those who create/maintain the system. As that list of needed people shrinks, so too does the probability of a person being able to create meaningful value.


It also drastically reduces the cost to produce whatever's automated and also creates new opportunities.


Sure, that wasn't what the person I was responding to was talking about though


Wait until it turns out your friends like the AI better than you...and you do too.


I know that this isn't your intent, but I feel like this can be boiled down to the buggy-whip argument [1].

People have reasonable fears of technology disruption, but they tend to follow the same trajectory -

1) innovation

2) economic upheaval

3) new undiscovered problems arise

4) new industries develop to solve those new problems

5) humanity gets better

[1] https://abcnews.go.com/Business/story?id=5508260&page=1


Why would we be scared at all? Until we can trust 100% that AI can take any vague specification from a PM or designer and turn it into bulletproof code that respects the existing code base we are going to need programmers. The difference between that and this version which can only nail leetcode questions feels vast. Like the difference between an AI being able to play Go vs one being able to govern a city.


As I've said countless times in other comments, this is not a shallow job security issue. My post is about how I believe AI will change us as people and society in a much more fundamental way than we may think. In ways we might not have considered. And not all for good.


Why do human grandmasters still bother playing chess, and why do people still watch them play instead of watching far-superior AIs play?

For that matter, why do I bother cooking when I could get a better version from a restaurant for just a little bit more money?

I understand this concern, but I don’t think the joy of doing things actually goes away for most people just because we could “cheat” by having someone/something do it better for us.


We are gravediggers, and we are digging our own grave.


Does this mean I get more time to play basketball?


Nope. There is a shortage of super yachts unfortunately.


Software engineers are not a group, there was no way "we" could decide not to do this.


Peter Norvig has written a good analysis with some interesting takeaways: https://github.com/norvig/pytudes/blob/main/ipynb/AlphaCode....


Kevin Wang, competitive coder: "I also try to minimize the amount of code I write: each line of code is just another chance for a typo."

Peter Norvig: This is why AlphaCode learned to write code with one-letter variable names, and with no comments or docstrings.

!!


To be fair, unless you're doing something absurd the length of variable names shouldn't significantly affect the number of lines. Also comments with typos aren't really a problem. I don't like hard-to-read code either, but the more years I work the more I think that the shortest (within reason) code is the best code.

That is to say, golfing is bad unless it's for fun; shorter code is usually better; the shitty hard-to-read code is probably a reflection of human laziness rather than misguided terseness.


"If I had more time, I would have written shorter code"


Heh definitely an eyebrow-raising comment! On a second read of it though, I took it more as "because minimizing the code written is a pattern observed in the wild, AlphaCode mimics it as that's what it learned from."

I might be terribly wrong though. :-D


This is why neural networks need to have pain buttons that humans can repeatedly press. Of course that's how we get killbots.


I tried throwing the below programming problems at ChatGPT:

1. Write a static file server in Go

2. Write Go code to convert Color image to B/W

For both I got results. I know both are simple, but it's still fascinating that AIs can write code. I have written more about it here: https://rohanrd.xyz/posts/surprising-capability-of-ai-code-g...
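
For what it's worth, the first problem is nearly a one-liner with Go's standard library, which may be part of why the model handles it so well. A minimal version (serving ./static on port 8080; both choices are mine, not necessarily what ChatGPT produced):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve everything under ./static at the site root.
        http.Handle("/", http.FileServer(http.Dir("./static")))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }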


I wrote a similar prompt, but for a Discord bot that would take a YouTube URL and some timestamps and return a gif.

Afterwards I asked it to add an option to "deep fry" the gif. Not only did it produce the correct code, it also understood what I meant when referring to deep-fried gifs. I was definitely impressed.


I threw a leetcode easy description into ChatGPT and submitted the solution without any editing. It passed all the test cases and got in the 95th percentile for efficiency of C++ submissions.

I haven't tried anything more advanced, but to go from simple requirements to solution without any clarification or even method signatures (It guessed the correct method signature down to the name and input params) was pretty dang impressive to me.


Chances are, you gave it a problem that was actually in its training set.


It's still impressive to me even if it was in its training set. The data used for training is not stored verbatim in the OpenAI model. It would still need to parse the context anew and solve the problem given its understanding of the boundaries of what I've written.


In contrast, I did it for Project Euler and told it the solution needed to be extremely efficient or use math. It didn't give me an appropriate result.


I have tried giving it a simple test suite and asking for an implementation that fulfills it, then asking it to modify its implementation and the test suite for a new requirement, and it did it.
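
For illustration, a toy version of that workflow (the names and tests here are invented by me, not taken from the actual session). You hand it a couple of failing tests, e.g. for pytest:

  def test_slugify_basic():
      assert slugify("Hello World") == "hello-world"

  def test_slugify_punctuation():
      assert slugify("Hello, World!") == "hello-world"

and ask for an implementation that makes them pass; something along these lines would satisfy them:

  import re

  def slugify(text):
      # Lowercase, keep alphanumeric runs, join them with hyphens.
      return "-".join(re.findall(r"[a-z0-9]+", text.lower()))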


There’s something to be said about publishing blog posts as Jupyter notebooks straight to your GitHub. A strong argument won’t need decoration.


Almost half the screen real estate on my iPhone SE was white space. I had zero interest in reading something, especially something that long and in depth on a page that wastes so much space for literally nothing.

Typography, layout, and presentation counts for more than coders such as myself will ever admit publicly :)


This is incidental. The current Jupyter CSS needs to be improved - so what? That's not a bug unique to Jupyter, I've seen dozens of other websites with similar problems (some with extremely bare HTML and CSS that was minimal but bad - so the problem is unrelated to complexity, too).

This has nothing to do with the parent comment's topic, which was about using computational notebooks as blog posts, and is in effect "complaining about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage" - which the guidelines explicitly forbid.


Doesn’t display correctly on my iPhone though. :/


I've been following the progress of AI code generation and I have to say, the results are impressive. There are still some limitations to these models, such as reproducing poor quality training data and difficulty maintaining focus throughout a problem.

I think using more diverse and high-quality training data could help address these issues, and incorporating additional constraints and regularization techniques into the model's training could prevent hallucinations and improve its overall reasoning abilities.

While there is room for improvement, the progress in this field is exciting and I can't wait to see where it will lead.

NB: This comment was written by GPT-3 after reading the article. The last few months of AI have been frankly mind-boggling.


The most important thing I need from a programmer I work with is their ability to have a fruitful conversation with me about why the code might be written one way or another, and for them to iterate on the code in response to those conversations.

If a coder can write code, but they can’t do that, they’re useless to the org. It will be faster for me to write their code myself than to maintain what they’ve done.

So really that's what I'd need from an AI coder. Writing the code is good, but can we talk about it, and can you learn the specific architecture principles we have applied in this specific codebase?


>The most important thing I need from a programmer I work with is their ability to have a fruitful conversation with me about why the code might be written one way or another, and for them to iterate on the code in response to those conversations. If a coder can write code, but they can’t do that, they’re useless to the org.

Don't worry, you won't be at the org to have these "fruitful conversations" either. The AI will take your job too.


> Don't worry, you won't be at the org to have these "fruitful conversations" either. The AI will take your job too.

I've been waiting for something to take my job for 15 years.

I started as a junior developer testing radio firmware, moved on to testing web firmware, then to instructing Terraform how to build infrastructure, and now I instruct developers on how to build proper infrastructure.

I'm ready to retire but apparently I'm incapable of having an AI that can actually simulate Super Power ADHD at work, so we'll have to wait a bit.


If you're ready to retire, retire. If they can't replace you, they should be paying you way more.


I have observed that companies will choose not to invest further in irreplaceable labor, under any circumstance. Every single company that I have been part of over the decades (30ish?) exhibits the same behavior when it becomes common knowledge that someone is a linchpin. Put the company in a game of chicken for a raise (which includes an implicit or-I-quit) and they let the "irreplaceable" employee go. Every single time. It's not hard to see why. No company wants to have a disgruntled linchpin, nor do they want to be beholden to some lower-level character.


Much as I'd like to side with labor here, the companies are probably right. Even if the linchpin is high level and happy, building around a single, irreplaceable employee is not a sound long term strategy.

For all the important contributions Steve Jobs made to Apple when he returned, maybe the least heralded and hardest to implement is that he managed to NOT make himself irreplaceable. To many people's surprise, Apple did not collapse after his retirement and death. So maybe his most genius contribution was not to make Apple dependent on his ongoing genius contributions.


  >  So maybe his most genius contribution was not to make Apple dependent on his ongoing genius contributions.
I wonder: if we all acted in this way, would things be better than they are? Do we as individuals put our need/want to be depended upon above what is (for lack of a better term) the long-term good?


My experience is similar, though I would personally attribute it at least partly to stupidity: in most cases when I have worked with someone crucial, they've been with the company longer than the people managing them, so the person or people making the firing decision don't really understand what's at stake. Or to put it another way, their cost-benefit analysis is flawed: they know what it'd cost to keep the key employee, but they fail to imagine what it will cost to let them go.


> My experience is similar, though I would personally attribute it at least partly to stupidity

Every company has suffered from the decision, in my experience. Never has it destroyed the employer, but I have heard stories about such eventualities.


>No company wants to have a disgruntled linchpin nor do they want to be beholden to some lower level character.

That's because it's even lower-level characters in management doing the deciding...


Don't know why you'd stay on then?


Why do you think the previous commenter is in that position?


ChatGPT can actually do that sometimes. I've been using it as a rubber duck to help me debug code and it tells me what is wrong with what I wrote and how to fix it.


But does it give real and useful answers, or is it making pleasant imaginary arguments?


I don't know if this is a philosophical question or not, but the answer for me is: most of the time, it gives useful answers.


I can't speak for ChatGPT but the "AI" that I use for this sort of development, the `M-x doctor` feature of emacs, only makes pleasant imaginary arguments, mainly because it was written in 1966, but it is still useful sometimes and at least it doesn't make any incorrect assertions about code: https://en.wikipedia.org/wiki/ELIZA


A rubber duck doesn't even give imaginary arguments.

The point is to stimulate brainstorming, not get answers.


Yes, rubber duck debugging is self-talk, so the task still sits with the programmer. With the AI, however, one gets external answers which may or may not be correct, and which may or may not influence one's decisions.


I couldn’t agree more. Most of my job consists of gathering requirements and brokering deals between affected departments. I think most college students dream of the type of job that could feasibly be replaced by AI. In reality, the coding is the easiest part of the job and takes the least amount of time.


Pretty sure there are managers who don't know WTF you're talking about and will be happy to address your inability to work with the new "hire" by letting you go.

This won't fly at organizations that need the code to work, every time, but think about the explosion of non-programmers who can now make systems that "basically work". If you don't think "basically works" is a high enough bar to succeed in e-commerce, let me show you my recent support email threads with companies from whom I've been trying to purchase Xmas gifts for my wife.


You're making it seem like AI will replace coders. Do you think Dall-E will replace artists, or just make an artist's job different?

IMO it's more like a coder using AI to make their job easier. It's still up to a human to come up with the individual problem the function solves, architect a solution from multiple functions/objects/etc, come up with a data model, and so on and so on. The AI just generates the code itself. And at least as of now, the code needs to be double-checked.


> You're making it seem like AI will replace coders

I find it best to accept that it will most likely replace all of us, from doctors to coders to even psychotherapists. It won't happen next year, but 10-20 years is a very long time, and this thing keeps getting better. Eventually we won't be able to tell if it's a machine or a brilliant superhuman.

The bummer in all of this, in my view, isn't the loss of jobs; we'll find what to do. It's the transition period - the accounting wizard or the brilliant doctor losing their jobs and status and becoming kindergarten teachers or caretakers or unemployed. Nothing in their upbringing or life experience prepared them for such a thing... so that's probably gonna be rough for many people. But once most people have gone through the transition it won't be bad. Society, I believe, will be better off. We will stop being obsessed with money and status and spend much more time with family and friends. Entertainment will be insanely good and so will healthcare. Possibly medicine to make our moods better. It could be utopia.


> We will stop being obsessed with money and status and spend much more time with family and friends. Entertainment will be insanely good and so will healthcare.

As nice as this would be, I think there's roughly 0% chance of it happening. Over the past few millennia humans have doubled productivity per capita a ridiculous number of times. None of those leaps led to an end of status seeking or a transition towards mostly leisure time for the masses.

Instead I expect more of the opposite from these developments. Power will get increasingly concentrated with people who have very little interest in the needs and wants of the plebs.


> Power will get increasingly concentrated with people who have very little interest in the needs and wants of the plebs.

At least in OpenAI's case I think they take this thing very seriously (Sam Altman doesn't strike me as evil one bit, quite the opposite in fact: https://www.youtube.com/watch?v=DEvbDq6BOVM). In fact most of the tech elites don't seem evil to me; if they only cared about money, Zuckerberg and Gates wouldn't have pledged away all their wealth. Some of them are as you describe, but I think most of them are actually somewhere on the progressive axis.


Your argument is that the philanthropy of the ruling class will fulfill the required redistribution of wealth?

So far it hasn't worked out too well...


Not quite. The argument is that our tech elites aren't so evil as to try to violently take over, and that we the masses can simply vote in a more (much more) socialist system.

It's up to us. It's indeed easier said than done in our current dysfunctional and polarized politics, but we can still do it.


We have pretty different views on tech elites. I don't think they are evil per se, but most are greedy and generally lack empathy. It's almost impossible to go from mildly wealthy to multi-billionaire status without some aggressive wealth accumulation.

For the philanthropy, I'll just note that these pledges don't involve literally transferring 99% of wealth out of their control. The vast majority of the pledged money goes to a trust the person controls, organizations they have some relation to, or just stay completely in their control for years with only vague non-binding commitments to eventually donate it. In return for this largess they get significant reputational and tax compensation.


Currently, the economics don't stack up. How does society handle it when the best jobs are automated? When all jobs are automated?

The Star Trek post-scarcity utopia scenario feels very unlikely; Mad Max-style scrapping for leftovers while Musk, Bezos, et al. live behind walls feels infinitely more probable.

How do you implement UBI when a huge proportion of the political class is vehemently against it? Maybe we need AI politicians, so they can start to figure it out?


I don't have anything figured out, but I think we can do it as a society. I do think we have a very negatively biased view of the ultra rich. Zuckerberg, Gates and Buffett have all given, or will give, all of their wealth to society. So not all of them are the same. I think Musk will probably also give most of it back eventually, though I'm not sure he has said anything yet.

And remember, we are still a democracy. We get to vote. We control the army and the police and all institutions. If we decide that this capitalism isn't hot sh* anymore we can change it. What will the evil billionaires do? (this sounds like a good straight to DVD movie actually...hey GPT write me a script about this)


> But once most people went through the transition it won't be bad. Society I believe will be better off. We will stop being obsessed with money and status and spend much more time with family and friends. Entertainment will be insanely good and so will healthcare. Possibly medicine to make our moods better. It could be utopia.

First, I don't see "utopia" as likely; furthermore, I have a suspicion it may be impossible, given human nature.

Second, even the argument that society will be "better" demands much more reflection. The implied argument above is only a sketch. I don't find it convincing much less plausible. I'll call attention to four points (implied from above):

1. AI will replace humans in most or all professions

2. AI quality will be much higher than the previous human levels

3. A broad swath of people (using some notion of equity and fairness) will have enough money to live happily

4. "We'll spend much more time with family and friends"

Each of the four points is quite uncertain. Furthermore, even if `k` is true, `k+1` does not follow.

Who would like to flesh out some ways the sequence (1, 2, 3, 4) might happen?


Yes of course you are right, we don't know anything yet, I agree. I am speculating a lot here. But I'm a believer in "intelligence as a commodity", as Sam Altman put it after seeing GPT-3.5, so I think points 1 and 2 will become reality. 3 follows quite naturally to me, but only in Western societies... Putin will have different ideas.

Anyway, speculating is fun, but you're right, it's just speculating. My main point is we should always keep in mind this could turn out to be great.


I'm not sure 3 follows. A walk downtown in any major city in America shows what society does for anyone whose work can't be commoditized and who isn't sitting on an existing pile of wealth such that they don't need to work.

They get nothing but tents and shame.


That's because people like you and me (I'm assuming you're somewhere in the comfortable middle class) vote for this system. Because so far we've enjoyed a reasonable quality of life. If that's no longer the case, we can vote for other leaders and other systems.


Not really, unfortunately; the US hasn't been a democracy reflecting the collective will of its voters for some time now.

https://www.bbc.com/news/blogs-echochambers-27074746


But people under stress might opt for some sort of religious acceptance of hardship, or a cult, or fascism first.

Where are the examples of the middle voting against the extremes when times get really hard?


Fair enough :)

_And_ since we care about our AI-interdependent future, more of us (as in the people here on HN) need to wade into the gory details, including ethics and the current power structures. The "technology" (as in algorithms, data structures, hardware, etc) is arguably the "easy" part. There are plenty of existing incentives and structures to keep those _moving_. But moving in what direction? Even the notion of an "ethical compass" seems antiquated in light of current technology. We may have to reframe everything. This is a big challenge.


> It's the transition period - the accounting wizard or the brilliant doctor losing their jobs and status and becoming kindergarten teachers or care takers or unemployed.

Yes, this is a big problem.

However... a lot of people would enjoy being teachers, albeit with significant improvements to the educational systems.


We can enjoy it if there are more of us. Each class can have 5 teachers instead of one teacher for 30 kids. The government can create those jobs, take the hundreds of trillions created, and redistribute them. Marx was right, I think: capitalism eventually kills itself... we won't need it anymore. Arguably we already don't need the aggressive version we have now, but soon enough it will be clear we don't need any version of it. The means of production (AI, robots and land) will be transferred to the people, who will all receive basic income, free services and (if they want) jobs created by the government for the greater good. I don't think it's a dystopia.


> The means of production (AI, robots and land) will be transferred to the people who will all receive basic income, free services and (if they want) jobs created by the government for the greater good.

This is a prediction?

Given human nature and the diversity of people (w.r.t. rationality, religiosity, morality, capability, and so on), it is very much an open question about (a) how AI capabilities will develop; (b) how they will be paid for... (c) and by whom; (d) to whom will benefits accrue; (e) how will society change.

These are broad, sweeping questions. Plenty of fodder for imagination, hope, transformation, cynicism, backsliding, or even despair.

If I were to make a bet, on our current trajectory, I see some key factors in tension:

1. educational quality, in absolute terms, increasing _and_ being more equitable

2. educational quality, in relative terms, continuing to be very unequal and probably getting more so. As one example, who has the resources to direct computationally intensive AI experiments? There are (and probably will be for a long time) gatekeepers for these resources. People that mix in these circles have a huge advantage. This makes me wonder if "exclusivity leads to inequality" is a saying from some philosopher.


> Marx was right, I think: capitalism eventually kills itself ...

I doubt very much that this is a testable theory. I think it is primarily a normative one.


As a resident of New Orleans I assure you the healthcare is not insanely good


>Do you think Dall-E will replace artists, or just make an artist's job different?

If an ad agency or a magazine publisher can get a custom illustration that works for their purposes from Dall-E, then they ain't paying no artist. Not theoretical: many already do use those generated images.

That's not just "making the artist's job easier". It's taking jobs from artists (well, illustrators and graphic designers at least), especially in the cheaper end of the business (e.g. not Nike, but your local Pet Store chain, restaurant, or news outlet, sure).


People will tire of DALL-E eventually. We are great pattern matchers, and we'll start to see the patterns that don't measure up. Then it'll be all about the next AI engine, and it'll need its own corpus of work. Who's going to create it?

So yes, not OP, but I think artists will still have jobs, although fewer of them, and the job will be different.


There is no indication that we don't have enough images already available; training time is the main bottleneck. Also, OpenAI clearly showed that DALL-E could generate an avocado chair even though there is nothing like that in the dataset.


There's still a matter of taste that Dall-E can't provide. It will only spit out what you tell it to, and if you lack good taste, then you'll still produce something inferior


It must be relatively easier, though, to be a discerning consumer than to have the ability to produce something yourself (without using AI?).


No, but it is very likely that it reduces the demand for coders and artists.

Sure, there will always be demand for a Linus Torvalds or a Damien Hirst. But will there be demand for Coder #365968 at Infosys or a graphic designer pumping out $50 ad banners?

We’re looking at the possibility of some white collar jobs having the same income disparity as creative jobs. Just as there are some musicians who make hundreds of millions while the vast majority barely make ends meet, we may have a future where the star programmers make millions while the average players are automated out of the competition.


> No, but it is very likely that it reduces the demand for coders and artists

Eventually, I agree, yes. But there could be a boom of huge new investments into A.I. products, more devs needed, and teams getting way more requirements since they are more productive. Imagine the stuff we will be able to build in things like search, personal assistants, biomed... in fact, what industry won't this affect? It's unbelievable to me that people are now saying Google search might become obsolete; that's absolutely crazy. Not many people saw that one coming.

But at least initially, I don't think GPT models will be able to do everything themselves. So it's very hard to determine that, say, in the coming 5 years devs will find it more difficult to get a job. 10-20 years from now, sure, I don't see how anyone gets a cognitive job anymore, let alone devs. In fact our entire school/university system is probably obsolete; kids are probably learning skills they won't be able to apply in any job market. We need to start thinking about stuff like teaching kids emotional intelligence, spirituality, and meditation... not cramming for a math test.


DALL-E, Midjourney, and Stable Diffusion are already taking jobs. Illustrations, album covers, blog post images. People are making beautiful books and playing cards.

Midjourney V4 is amazing. It spits out absolutely beautiful images.


To be precise, AI is "taking jobs" that could have already been commoditized long ago if people in developing countries understood how Fiverr worked and had set up "art sweatshops" to serve demand.

There's enough art talent already around in the world to entirely commoditize the supply of it for the little one-off no-style-guide-to-follow commissioned works you're talking about. It's just not currently a liquid market — supply and demand find it hard to discover one another — and so a true market-clearing price can't be set.

Meanwhile, AI is not currently taking anyone's advertising-campaign graphic design job, or anything else where the "efficient-market price" (in a world where human "art sweatshops" existed) would be more than $5.


How many of those people would have actually paid for an artist otherwise though? I myself am thinking of playing around with game dev for fun with the thought of using image AI to generate the art. Were it not for that, I'd just use free textures, or more likely, just spend my time doing something else entirely


It will replace coders with thinkers. The number of people who have ideas, good or bad, is large compared to the number who can implement those ideas. As these bots/AIs get better they will produce a lot of code. At the start this will probably increase the amount of code produced and the number of coders, but with time, as the AI gets better, the need for coders will decrease. We will need people who can think up and imagine ideas rather than coders; these people might still be considered software developers, but they will not be coders in the strictest sense of the word.


Eventually. 20 years, 50, maybe 100. But eventually.


... people are still freaking out that machines will replace someone, since what, industrialization 200+ years ago? People still work, just maybe at different jobs or more intellectual jobs.


Except this time it's different https://youtu.be/WSKi8HfcxEk


ChatGPT can already do that. You can ask questions about the code, make suggestions and it will take this into account and write improved code. You can tell it the code it wrote produces an error, and it will then find the error, explain what it did wrong and fix it.


You can also tell it the code it wrote produces an error-that-isn't-an-error; it will find that "error", explain what it did "wrong", and "fix" it.

If you don't understand the program ChatGPT wrote, it will happily butcher it for you, because it doesn't really understand it either.


What I've found is that each iteration with ChatGPT is alright at fixing or adding to the existing code per my instructions, but every iteration carries a significant chance of introducing a new bug or simply forgetting some piece of the existing code.


Personally, if I can track the problem down to a specific chunk of code and know the input and expected outputs, I'm 99% of the way to the solution.


I did this and it is pretty inconsistent. It kept telling me I was using the wrong version of a library (which from what I could tell by looking at documentation was just not correct, but I didn't look into it too long), and at one point it just kept insisting that its code solution was correct when in reality the error was due to it importing two different libraries that use the same namespace, which led to an error saying it couldn't find the function being called.


Can’t say anything more after what coldtea wrote!

But this thing is not a sub for coders. It is an assistant to coders. And yes, being able to explain the code is incredibly important. Just as it is to verify the code.

To me, this is just another exercise where the coder becomes the manager of his very own coder, and has to check the code his coder produced.


99% of companies don't care about that. I would say at 90% of companies you'll create friction trying to have those conversations: "just get it done!"


It's true. For almost all of my career I've worked in the software equivalent of a lab building a diamond making robot. I did a stint at a consulting firm -- that was more like coal mining than manufacturing gemstones. By this I mean never ending surface area, and very little incentive to ever go back and refactor anything.


For most orgs, code isn't the end product. Code is the way to build the product to sell to the user.

Therefore, in most orgs, code architecture/style is a distant second to 'does it achieve what the user will pay money for' and 'why isn't it finished yesterday?'.


"I'm sorry Dave...I can't do that"


Our current "AI" does only interpolation, not extrapolation.

You feed it a large input corpus and it digests it in clever ways. Then when you probe for something contained wholly within the space described by that corpus, it is amazingly good at fabricating something plausible to match the point in that space that you requested.

Which covers a lot of stuff and is very useful, but does very little for problems that require extrapolation. It can't expand the edges, it can't come up with anything truly original. It can't solve problems that people haven't already solved and written down the solutions somewhere the AI could find them.

Another way of saying it: AI today is much better at memory than thought.

Career advice for young people today: specialize in pushing the edges, not in filling in details. There will be some areas where applying existing stuff will hold out and be useful for a long time, but you'll always be racing against the AI. Colonize the parts of problem space where AI doesn't have the imagination to go.

(Of course, humans are notoriously bad at correctly recognizing what does or doesn't require originality. Hell, we think we're making decisions about what to do every minute of every day, when in fact we're just a bunch of dancing meat automata following ingrained patterns 99% of the time.)


When verification is possible, the interpolation machine starts to extrapolate.

There is a process that generates ideas/solutions, and a process that tests them. An artist and a critic, a scientist and a lab. Together they form the experimentation loop.

Let's take the game of Go for example. Testing who won a game is trivial. AlphaGo managed to beat humans in a few days of self-training. In other words, those edges you speak about can be pushed with massive search and verification.

There is no reason we can't do massive search + verification for math and code. This is a good way to create training data where it doesn't exist in sufficient quantity.

Other things can be simulated with expensive computation, and then "distilled" into fast neural networks. Then we apply the neural net to fast-search solutions. In the end we need to verify some of them (thinking of weather simulations, new materials, new drugs, ...)

Also reminded of the recent AlphaTensor, which leveraged massive learning from verification to beat Strassen's algorithm, which had been state of the art for 50 years. There is no reason neural nets should remain purely interpolative if we can manufacture good training data by running computations or experiments.

Ideas are cheap; verification matters. Generative outputs are worthless without verification.
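
A toy sketch of that loop in Python (my illustration, not AlphaCode's actual pipeline): a generator proposes candidates, a verifier keeps only the ones that pass, and the survivors can be fed back as training data. Here generate_candidate stands in for sampling a program from a model; any checkable domain (unit tests, a Go referee, a physics simulator) can play the verifier.

  def verify(candidate_fn, test_cases):
      # Accept the candidate only if it reproduces every (inputs, expected) pair.
      for args, expected in test_cases:
          try:
              if candidate_fn(*args) != expected:
                  return False
          except Exception:
              return False
      return True

  def search_and_harvest(generate_candidate, test_cases, budget=100_000):
      # Verified solutions double as new training data for the generator.
      verified = []
      for _ in range(budget):
          candidate = generate_candidate()
          if verify(candidate, test_cases):
              verified.append(candidate)
      return verified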


> It can't expand the edges, it can't come up with anything truly original.

You're dead wrong about that. Remember Deepmind's original claim to fame? Alphago?

It came up with tactics and strategies that top professional go players considered to be novel to the point of being revolutionary. If you look at the way games are played by human professionals in the years before and after Alphago came on to the stage, you'll find they're different to an extent greater than any other upheaval in the many centuries-long history of the game.

AI is a much more diverse field than you seem to give it credit for. Paint with overly broad strokes like that, and you're bound to go astray.


Er....this feels like every main criticism going back to even the expert systems pre-90's.

Could you please provide some support for your argument? This was repeated a lot in the early days of the modern wave (2012-2016) but was pretty thoroughly debunked as we've explored generalization and how these models disentangle intrinsic concepts and compute with them. Heck, even modern transformers are restricted memory Turing complete and use that to their advantage.

Also, I do not mean to be rude, but saying "colonize the parts of the problem space where AI doesn't have the imagination to go" is frankly rather terrible, as it leans on the imagination argument against AI. At this point, being adaptable enough to co-integrate will be good; otherwise I could see people who follow that advice stuck inside some kind of Sisyphean pseudo-Luddite escapist nightmare.

Source for opinions: have been involved in ML in some form for most of the modern wave, and am appropriately (quite) skeptical about the AI takeover/revolution/eventual singularity belief/etc.


Like other responders, I think your take is oversimplifying. It's hard to classify what imagination is and how to judge whether AI has it, and we also have humans tuning these models, which may or may not contribute to their blandness.

That said, I think it's still fair to say that most interpolation-type work is looking very threatened by AI as it is today, so the converse (don't go into those fields) is probably sound advice in light of current developments. As for the rest, I suspect we'll be forced to chisel off, piece by piece, from our zeitgeist of "imagination". Some pieces will fall off quickly, as AIs replace them, and some will take longer, perhaps a lot longer.

I do find it interesting that computers ended up killing it in unpredictable domains such as style transfer and NLP, while being mediocre-at-best in eg humor. It may mean that we have over- and underestimated aspects of what traits are unique and sophisticated.


This goes back to the root of what creativity really is. Some say creative people don't have anything special in their "brains". They are just able to mix and match stuff they already know in ways that others didn't consider.

In essence there is no such thing as extrapolation for the human mind. We only interpolate in new ways. We can only remix what we know, and with that generate something new.

How do we actually get new, non-interpolated stuff? By going out into the world and doing things with it. Collecting new data. Experiments, experiencing things, seeing new stuff, reading about new stuff.

That's not so different from the AI.

Our brain is not something magical that works in mysterious, un-replicable ways. Much of the AI field has in fact been trying to replicate how we learn and how our brain works.


It can write an original poem using unique prompts I give it. Are you suggesting a similar poem exists somewhere else? I think not.

It cobbles together examples, sure, but that's exactly what we do too. Everything "original" we make is riddled with subconscious outside influences.


I hope we will get some AI for static analysis. It would be nice to hover on a symbol and get a high-level description of what it does, what pattern it follows, etc. AI can access not only the current state of the code, but its entire history, which no human would have time for.


I have made most of my money as a freelancer making weird integrations between legacy systems, migrating data that no one knew anything about, and building or designing systems people weren't even able to formulate in any coherent way to me as a human, without having to change all the things they were certain they needed, until they found out that they really did not.

Building the thing right vs. building the right thing.

I am not saying that AI will not be able to revolutionize a lot of areas, but solving well-known coding-competition problems is far removed from where a large portion of coders and technical people make their money.


People who think that tools like these will end software engineering as a career are the same ones who buy into the "10x rockstar programmer" myth and think that the job description entails sitting in a room and writing code for 12 hours a day and nothing else.

A software engineer's job will be the same as it always was – to translate unclear and always-changing requirements into something that works. Until ChatGPT, Deepmind or whoever else can learn to deal with my client or product manager, my job is safe.

In fact AI making programmers more productive is a good thing. It is only going to increase the problem space and we'll need yet more engineers to fill it, like all other productivity advancements that came before.


Agreed, but the question is: how many other programmers do you need on your team now? How much of their work will be taken over by our new AI magic boxes?

You have to see this from the perspective of non-tech companies, who see tech as a cost-center, not a profit or innovation center.

Does your regional grocery store that just needs some tools that can help it track inventory really care whether its code comes from a team of big brained humans or two programmers in a basement copy-pasting chatGPT answers?


You can say the same thing for any other advancement in software engineering. Did good IDE tooling reduce the number of programmers? Explosion in open source libraries? Cloud computing?

A single programmer with a laptop can do in minutes today what it took entire companies of hundreds of professionals and a large amount of funding a few decades ago. Yet the size of the industry hasn't shrunk in the same period – quite the opposite in fact. There is more need and demand for programmers today than ever before.


Agree with that as well - there's a good argument to be made that companies will find entirely new product lines and efficiencies once they free their developers from fixing low-level problems and bugs.

It's going to be an exciting few years to say the least.


How much software does your grocery store need? Don't you think there will be a limit to that?

If cost vs revenue allows for more developers, yes, you can see the happy path. No wonder you observed this at the start of the software revolution. The app stores are already overcrowded.


> Until ChatGPT, Deepmind or whoever else can learn to deal with my client or product manager, my job is safe.

Given the rise of "NoCode", perhaps clients & managers are more willing to meet in the middle than one would otherwise assume.

Given how managers seem to enjoy meetings, I can easily imagine one sitting down with e.g. Dragon to speak with e.g. ChatGPT in order to clarify, disambiguate, and expand requirements list(s) before feeding them to e.g. Alpha Code.


Given that the amount of work to be done remains the same, new tools that increase the efficiency of workers will reduce the number of workers needed. Thus, even without machine learning, new tools make your position worth less and less; if nine people can do tomorrow what ten people can do today, that is effectively a threat to your job. What keeps your job safe is a seemingly ever-increasing amount of work to be done where old tools can't be applied for increased efficiency (which is why most software engineers will have to keep their skill set constantly up-to-date). For how long will this be the case, though?


Every advancement in software engineering productivity throughout history has done the exact opposite. It increases the overall problem space for the profession and you need more and more programmers to meet the demand.


Yes, I 100% agree. I was sort of nitpicking about what kept your job safe since you seemed to imply that it was due to the shortcomings of Deepmind and ChatGPT. I argued that it is due to keeping your skill set updated, and, as you put it more elegantly than I did, the increased overall problem space.

I do, however, think there will be an inflection point in the coming decades, where the tools become more generalized and better at dealing with new problems. I might also add that the reason for this belief is simply that a lot of work is being put into making these type of generalized tools; but unlike Kurzweil, I don't quite believe it will lead to the Singularity. :)


Hey, if it can negotiate a Verizon bill then it stands a good chance of negotiating a list of features. Maybe you could pair it with a stable diffusion design bot that shows a selection of imagined designs that are based on the client's description of what they want. The client picks an option and the chatbot negotiates the price and timeline for it.


It's about deprofessionalizing it so cheaper people can do a "good-enough" job, so that you can't charge high prices for your service.

The loom and sewing machine didn't eliminate the seamstress, it just impoverished the profession.

It's like driverless trucks; what you'll actually see is mostly driverless outsourced and remotely monitored trucks where someone is making something like $1/day monitoring 5 trucks at once and switching it to remote control mode when needed.

We've proactively organized our economy to produce these kinds of outcomes. It's not the technology that's the problem, it's the unquestioned assumptions of how we've collectively presumed it will be used.


We just need to replace the product manager and client with AI too ;)


Well we can replace everyone with AI and not even need to exist at all. Until that happens, we'll still need good old fashioned managers and software.


I spent extensive time on ChatGPT. It does give an answer, but it isn't the most optimal one, and sometimes it's a version that doesn't compile. Its suggestions haven't reached the level of creativity. For example, if I give it a system design problem drawn from the real world, it suggests using Apache Kafka, but what if Kafka isn't sufficient for whatever reason?

E.g., the Lai–Yang algorithm will give you no hits.

Fascinating observation: I asked for Project Euler #193 (a problem I solved using the Möbius function). ChatGPT solved it using brute force, even after I asked for the most efficient way it could solve it. It just used memoization, which wasn't enough to find an answer for that problem quickly enough. I asked whether it could use the Möbius function; it couldn't translate it to code, and I had to give it Python code to make it work.
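
For context, a minimal sketch of the Möbius route (my reconstruction, not the code from that session): #193 asks how many squarefree numbers there are below 2^50, and the identity Q(N) = sum over d <= sqrt(N) of mu(d) * floor(N / d^2) answers it directly.

  from math import isqrt

  def mobius_sieve(limit):
      # Linear sieve computing mu(1..limit).
      mu = [1] * (limit + 1)
      is_prime = [True] * (limit + 1)
      primes = []
      for i in range(2, limit + 1):
          if is_prime[i]:
              primes.append(i)
              mu[i] = -1
          for p in primes:
              if i * p > limit:
                  break
              is_prime[i * p] = False
              if i % p == 0:
                  mu[i * p] = 0  # p^2 divides i*p, so mu vanishes
                  break
              mu[i * p] = -mu[i]
      return mu

  def squarefree_count(n):
      root = isqrt(n)
      mu = mobius_sieve(root)
      return sum(mu[d] * (n // (d * d)) for d in range(1, root + 1))

  assert squarefree_count(10) == 7  # 1, 2, 3, 5, 6, 7, 10

(At N = 2^50 the sieve runs to about 3.4e7; fine in a compiled language, slow in pure Python.)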

- If you ask in-depth technical questions on how task queueing works within Elasticsearch, it won't be able to give you an answer.

- George Hotz, in the Lex Fridman interview, mentions that GPT-3 has a limit of the 100 most recent messages. He isn't convinced that is enough to completely build a complex tool, e.g. "solve FSD" or "implement me a Kafka".


- Lai–Yang: pretty obscure; it doesn't have a Wikipedia page and I can't find much info on it anywhere, so this shouldn't be surprising.

- Euler #193: this problem is not super obvious, and I see discussions on it invoking the Riemann zeta function. ChatGPT was able to create a correct Möbius function on its first try. I have no idea how this might be related to the problem, though.

- Elasticsearch: why on earth would you expect it to know this?

You've cherry-picked some super niche stuff that this brand-new AI can't do and name-dropped some fancy theorems. We're all very impressed.


Headline: "DeepMind’s AlphaCode Conquers Coding, Performing as Well as Humans"

Actual text: "AI just trounced roughly 50 percent of human coders"

I think we already all knew that about half of the coders out there weren't all that great.


That maps rather well to an old routine from George Carlin

"Think of how stupid the average person is, and realize half of them are stupider than that."

https://youtu.be/AKN1Q5SjbeI?t=19


That's actually the definition of the median.


Yes, that's what elicits the laughter from the audience, tinged with sad cynicism.


It will keep getting better and better, though. 5 years from now there's no reason it won't beat 90% of devs. And then eventually, like in chess, all of them.

There is a big question though of how this translates to real world programming. Competitive programming and real world work are not the same thing.


That's what we said when AI beat chess players ranked below 1450.

“Oh big deal”

Now they never lose to any human in any chess game ever.


And yet humans still play chess


I love how the sheer simplicity of this comment manages to convey so much about humanity and our drive to do things we love. I feel a bit silly reacting like this, but it's truly beautiful.


Not many get paid to do it, and those that do are paid because it's entertainment: people like watching them play. Nobody is paying to watch developers code.


But very few do so for money.


But plugged, at times.


Automation doesn't have to outperform the best of us, it only has to outperform the worst of us to have a serious impact. Outperforming a 50th percentile coder is a huge deal.


Why not post the blog post from the AlphaCode authors

https://www.deepmind.com/blog/competitive-programming-with-a...

which walks through an example


Even if it were bulletproof, I don't think it's necessarily a bad thing; it would probably make the average engineer more productive if most of their job were architecture and the writing of individual bits of code were automated.

It's a lot like the conversation not too long ago that AI was going to replace managers. Sure, some aspect of their work might be gone, but it didn't obviate the need for the human.


I'm confused - how much has changed since February? The DeepMind blog post [0] seems to suggest not much. When I saw this headline on the front page I thought there were some major advances, again, but it seems not?

[0] https://www.deepmind.com/blog/competitive-programming-with-a...


Nothing has changed. The publication process at the journal Science took time; the paper was finally published on 8 Dec. [0]

[0]: https://www.deepmind.com/blog/competitive-programming-with-a...


I hope you folks understand this is old news. That Deepmind paper was published nearly a year ago.

In fact, it was published a few months before other important papers. And yet, it seems pretty much all the press is still discussing and analyzing the old paper. Why is that?

I’d like to think it’s because the earlier work is so compelling, but more likely someone needs to drive more traffic to their site.

I think this is more of a problem for code-AI discussion. This is something you can't really discuss on a forum or blog post — your posts will get overshadowed by ML doomerism.


I would say the biggest thing that keeps me from worrying about AI in reasonably complex domains is the requirement for mapping a workable UX to the domain (i.e. for machine-human interactions in insurance, banking, health care, etc.). Additionally, the higher-order complexity that emerges when complex domains interact with each other seems to be even more impossible to train for.

Sure, a sufficiently advanced AI can craft a UI that looks like something incredible (e.g. McMaster.com), but it won't necessarily be able to replicate the quality of their customer service, consistency of offerings, or reliability of delivery (the UX).

Small nuances in some domain can cause vast changes in how you would be expected to interact with it. AI typically fails to extrapolate and manage all of these possibilities. A good human developer can play devil's advocate for 30 minutes and come up with edges that would decimate any AI on the horizon. But, take that same pile of edges and hand them to a product wizard, and that person will give you all kinds of workarounds that would be acceptable to the end customer. Kick it back over the wall to the developers and you have happy customers.

I don't think it will last forever, but the most complex domains seem to be the safest ones for a human developer's career prospects.


Since this sort of coding is exactly what FAANG interviews test for, maybe FAANG engineers will be some of the first coders automated out of a job. That would be an interesting irony.


Only insofar as you expect that FAANG engineers solve FAANG interview problems as part of their daily job.


You never need algorithms until you really do.


The algorithm tests are there to provide an excuse to discriminate on other criteria. If you can mask your racial/age/gender reason for not hiring at a FAANG with "oh, we didn't like their algorithm solution", you always have an out.


FAANG is the only place where you don’t need to be white to succeed. So, I have no idea what racial bias you’re talking about.


If you think that immigrants, particularly Indian and Chinese, are not bringing their own oppressions from their homeland, you're wrong.

HN has the best threads for discussing the caste-based discrimination that happens among Indian tech workers in the USA. It turns out that being white still gives you a lot of privilege, even in the FAANGs.


> FAANG is the only place where you don’t need to be white to succeed.

I feel like this "only" is erroneous. While FAANG do have solid diversity policies in place, to assume they're the only companies capable of diversity in their hiring practices is offensive to everyone else.


I suspect that might be the case regardless. FAANGs have people who can deploy this sort of tech, who can solve the problems it generates (assuming the generated solutions aren't perfect), and their engineers are the most expensive. They have the most to gain. A software shop making small apps would benefit far less.

That said, it's likely that the job of being a dev is going to be largely debugging generated code in the future, no matter where you work.


> FAANGs have people who can deploy this sort of tech, who can solve the problems it generates

I think this is an important point.

> That said, its likely that the job of being a dev is going to be largely debugging generated code in the future no matter where you work.

This, and formulating the right prompts, I think.


If anything, I hope this is the start of the downfall of leetcode questions. They are utterly useless as screening criteria, and outside of filtering out people that have no idea how to proceed, they don't offer much signal.


Yes, I believe this is something Sam Altman is predicting will happen.


I don't know Mr. Altman's rationale, but FAANG companies being first to automate away programmers sounds reasonable to me on the basis of scale alone. They have more code than most companies with which to make up a company- and problem-specific dataset, as well as the resources and expertise to train and deploy complex models.


Wrote my full comment at the top level.

I'm a FAANG interviewer and I've run my coding questions through ChatGPT.

The TL;DR is that the code would easily pass a junior-level interview, and maybe a mid-level one. I'm definitely convinced these technologies will disrupt leetcode-style coding interviews.

So either we embrace it, or ditch this approach altogether.


Real work is more than making snippets of things. At my last employer, changes were requested every single day, in large and small ways, requiring insane amounts of reworking previous ideas in the codebase on a continuous basis without breaking everything. I wonder how well this AI would do on an entire application that wasn't predetermined. That might take much longer to achieve than solving small problems.

I imagine an AI in that kind of environment inventing Skynet and doing us all in.


Why would they, when you could just Google the answers to these questions the whole time? The point is to see if the candidate understands algorithms, not to solve a novel problem.


Algorithm problems are what FAANG candidates do in interviews, but are they what FAANG engineers do for work?


No. FAANG interviews are mostly about filtering very large candidate pools efficiently and getting good enough engineers out at the end.

The job is much more architecture/design, creating API contracts, understanding the health of systems in many different ways, measuring impact of changes, etc. Regular engineering stuff at big scale. Obviously there’s plenty of coding too, but it’s not really the important part of the job, and already has a ton of automation for boilerplate.

I imagine that the automation will just improve by another step change, there will be more need for review and guidance of the AI algorithms, and the engineers will do more of this. And interviews will evolve in some way.


They don't do much work. Must be nice.


Yet somehow Search works, AWS machines spin up, and Windows gets updates.


Each individual doesn't do much work. Surveys and friends I know both say they do about 2-4 hours of work a day (the rest of the 6-8 hrs (yes, 10-hr days are now normal, it seems) is hanging out & meetings).


Yes, with 5000 coders you can do those "wonders"...


> the DeepMind team built a custom dataset from CodeContests from two previous datasets, with over 13,500 challenges. Each came with an explanation of the task at hand, and multiple potential solutions across multiple languages. The result is a massive library of training data tailored to the challenge at hand.

Isn't this just over-fitting the model?


Isn't that exactly how the vast majority of engineers pass interviews today? They study up on the handful of algorithms that come up in interviews and then (with varying levels of acting) recite the answer to the question they knew they would get. What, overfitting's only bad if you're a robot, huh?


It tickles me that organic neural nets that learn by viewing information are incredulous that neural nets could learn by viewing information.


Yeah the article mentions that they outperformed Codex, but was Codex specifically trained on code contests?


"AlphaCode tackles the problem by generating over a million potential solutions for a single problem—multitudes larger than previous AI attempts.

"As a sanity check and to narrow the results down, the AI runs candidate solves through simple test cases. It then clusters similar ones so it nails down just one from each cluster to submit to the challenge."

The big question for me is, where did the test cases come from? They seem to be contributing much more to the outcome here than just "a sanity check and to narrow the results down." If they were effectively spelled out as part of the challenge, then this is an impressive result extending the trajectory of other recent achievements, but not so far as to justify the title's "conquers coding." If they were not, then this would seem to be taking things to a whole new level.
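
For concreteness, here is a rough sketch of the filter-and-cluster step as I read the paper (my illustration, not DeepMind's code): drop candidates that fail the example tests, then group the survivors by the outputs they produce on extra probe inputs, and submit one representative from each of the largest behavioral clusters. Where the probe inputs come from is exactly the question above; they are just a parameter here.

  from collections import defaultdict

  def run_safely(candidate, x):
      # Treat crashes as a distinct output so crashing programs cluster together.
      try:
          return candidate(x)
      except Exception:
          return None

  def pick_submissions(candidates, example_tests, probe_inputs, k=10):
      passing = [c for c in candidates
                 if all(run_safely(c, x) == y for x, y in example_tests)]
      clusters = defaultdict(list)
      for c in passing:
          # The signature is the candidate's observed behavior (outputs assumed hashable).
          signature = tuple(run_safely(c, x) for x in probe_inputs)
          clusters[signature].append(c)
      biggest = sorted(clusters.values(), key=len, reverse=True)
      return [group[0] for group in biggest[:k]]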


Algorithm code challenges are kind of a limited domain, though (with solutions that would often easily beat average humans if you were allowed a lookup table of existing solutions). I wonder how well it does on open-ended everyday challenges.


> When challenged with the CodeContest—the battle rap torment of competitive programming—the AI solved about 30 percent of the problems, while beating half the human competition.

This was essentially a competition in the speed of programming. But if we want to discuss practical application, as in laying off armies of coders, we need to realise that there is a tremendous gap between the productivity of, let's say, a solo startup founder and that of a team of coders working for a multinational behemoth, barely producing anything of value over the whole sprint.


> Rather than copying and pasting sections of previous training code, AlphaCode came up with clever snippets without copying large chunks of code or logic in its “reading material.”

Gave me a good chuckle lol


This is a prompt I gave ChatGPT:

> Write an algorithm that, when given a sorted list of numbers, efficiently finds all numbers in the list that lie between a given a range

And this is its response:

> Here is an algorithm that can be used to find all numbers in a sorted list that lie between a given range:

    Initialize two variables, low and high, to the lower and upper bounds of the range, respectively.

    Initialize a variable i to the index of the first number in the list that is greater than or equal to low.

    Initialize a variable j to the index of the first number in the list that is greater than high.
    Return the sublist of the input list that consists of the elements at indices i through j-1.
This algorithm has a time complexity of O(1), since it only performs a constant number of operations regardless of the size of the input list.

Here is some example code in Python that implements this algorithm:

  def find_range(numbers, low, high):
    i = 0
    while i < len(numbers) and numbers[i] < low:
      i += 1
    j = i
    while j < len(numbers) and numbers[j] <= high:
      j += 1
    return numbers[i:j]
--------------

Does this algorithm really have a time complexity of O(1)? I would think it is O(N). Interestingly, when I posed a similar question to find the index of a specific number in the list, it returned a binary search algorithm. The above question's optimal solution also involves binary search, but asking for a range of numbers throws ChatGPT off, it seems.


It is indeed O(N). There is also a more efficient algorithm (in case the range is small) that uses binary search to find the first index. Such an algorithm would have time complexity O(log(N) + H) where H is the number of elements in the output.
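
A minimal sketch of that binary-search version in Python, using the standard bisect module:

  from bisect import bisect_left, bisect_right

  def find_range(numbers, low, high):
      i = bisect_left(numbers, low)    # first index with numbers[i] >= low
      j = bisect_right(numbers, high)  # first index with numbers[j] > high
      return numbers[i:j]

The two searches cost O(log N) and building the result slice costs O(H), matching the O(log(N) + H) bound above.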


In competitive programming that is.


The sooner we can fully automate competitive programming away from the standard interview process the sooner things will improve for all software devs, imho.


This seems like a flawed premise. It's an interview, there's nothing to be automated. It's a test, a task that the company wants you to do as a candidate, not a task that they need automated. They don't even need the task done, they aren't using the code you write to solve real world problems, they're using it to assess your suitability for the job.

To put it another way - they can already automate solving their leetcode problems by looking up the solution in their database, no need for AI. But that's not the point at all.


You can do this, but you're optimizing for the wrong thing. And given enough time this will be seen as just as backwards as, today, asking someone to list the methods on the String class from memory.

Because there was a time we did this, and with tools like Google Search it became dumb, but it took some dinosaurs a really long time to let go of their old ways. Hell, some of them are probably still around.


>They aren't using the code you write to solve real world problems, they're using it to assess your suitability for the job.

If they aren't evaluating how the code you write fits into the context of a real world problem, how can they possibly use it to assess your suitability for the job? Using fake code problems to evaluate candidates is the flawed premise here.


You're changing the topic - whether data structures and algorithms problems are suitable for interviews is a separate question. My claim is that if you're using those types of interview questions, an AI model being able to solve them well makes no difference to your interview process, because the interview isn't something that you want to automate.

Like I said, they were always able to automate solving those problems by doing a database lookup.


I am literally responding directly to what you wrote, in the way that you wrote it. My counter-claim to your initial point is that candidates will be able to use said AI in competitive programming interviews to fool the interviewer, thereby making competitive programming pointless as an interview tool. It automates it for the interviewee, not the interviewer.


I thought that you meant companies automating stuff. But it still doesn't make sense - interviewees have been able to cheat in online assessments forever. Does a better way to cheat in a small subset of all the leetcode interviews really make enough of a difference to make all of those interviews obsolete?


> It's a test, a task that the company wants you to do as a candidate, not a task that they need automated

You are contradicting your earlier statements. But yes, a tool that can beat leetcode interviews successfully and reliably is a game changer for candidates who want to make it through these pointless LC interviews as a whole.


But that's cheating, and people can cheat without AI already.


AI makes it easier and more reliable as I have already stated. We are going around in circles here so I am going to stop responding.


Why is that? Can you explain what is wrong with the current interview process and how that would improve things for devs?


Too many of them are in effect a long-winded quiz for a memorization task.

How would you swap two integers without using a temporary variable? Seen it before? Pass. Not seen it before? Fail.


What irks me about this question is that it is almost utterly unrelated to any type of development task, save one: pipelining a repeated math operation in a long loop. And even then, I would bet that swapping two registers (or even L1 cache) using a third would far outperform the three operations needed, because those operations are sequential in nature (i.e. they cannot be performed simultaneously even by an advanced CPU).

In nearly 30 years of diverse coding experience, I've never once encountered a situation where this solution would be useful.


You can absolutely puzzle out _a_ solution to that problem; it's not just a quiz. It's not even hard, given the context that you know it's possible to do.

Steps, mostly driven by just knowing the goal and that there aren't many operations that could possibly help:

"a" "b"

"a+b" "b"

"a+b" "-a"

"b" "-a"

"b" "a"

Then once you have that, you can enumerate the downsides to that, look for more efficient and less error-prone ways to proceed.
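
Translating those steps into code (a sketch; the classic three-operation version folds the final negation into a subtraction instead):

  def add_swap(a, b):
    a = a + b   # ("a+b", "b")
    b = b - a   # ("a+b", "-a")
    a = a + b   # ("b",   "-a")
    b = -b      # ("b",   "a")
    return a, b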


I'm sorry. That one uses a hidden temp storage for the intermediate calculations.

We were looking for the solution based on XOR which works with only two registers.

We wish you luck with your job search elsewhere.
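
(For the record, a sketch of the XOR-based swap this imaginary interviewer is alluding to; the classic caveat is that XOR-swapping a value with itself zeroes it:)

  def xor_swap(a, b):
    a ^= b   # a now holds a XOR b
    b ^= a   # b XOR (a XOR b) = original a
    a ^= b   # (a XOR b) XOR original a = original b
    return a, b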


That's not what someone giving even this poor interview question would answer with. It's also not true.

These interview questions are fairly shit, but they're not _that_ shit, and the people giving them are doing their best.


Maybe they would focus on asking us about test strategy, devops culture, release deadline commitment, quality-vs-deadline cutoffs, business understanding, team integration ability: all the sorts of things we actually do 90% of the time.


Run-of-the-mill devs are, for the most part, not decision makers on any of this.


Uh, yeah they are. If I start a 60-hour (estimated) story the week before the sprint ends, I'm going to have some very pointed questions about how thoroughly we want to test this new feature, and perhaps argue that it should be pushed into the next sprint. Knowing how to formulate such issues to my management is a very typical part of my job.

If you work at a place where devs are just code monkeys who implement the orders from on high with no feedback whatsoever or avenue for pushback... get a better job.


Regurgitating algorithm answers doesn't reflect the actual job, and it's too easily gamed. A novice who has practiced algorithms will outperform an expert who has not studied recently.

That said, it's probably used more as a filter for people interested enough in working there to study.


It has nothing to do with the day to day job. You are just memorizing vast problem sets.


Job's done, crew, time to go home. ;)


Product owner here: now that you're free, I have some more ideas I'd like you to work on...


Tell AlphaCodeZeroPrime, I’m on my wine break. *slams window shut*


Am I missing something here?

At the end of the day, aren't ChatGPT, AlphaCode, etc. still just fine-tuned LLMs, or have there been some significant changes to the architecture that give reason to expect the output to be anything more intelligent than LLM "predict next word" continuations?

Of course GPT, Codex/Copilot, ChatGPT are all impressive, and have some unexpected emergent capabilities (this being the interesting thing about this "scale experiment"), but even with "think step by step" prompt hacks and finetuning, they're still just LLMs unless something (intelligence!) has been added so they can learn from their mistakes at runtime and test their predictions ("understanding") by interaction with the external world.

So, if these are still just LLMs, then why are people treating them as if their prompt answers might/should actually be correct, as opposed to what might be expected from massive-training-set prediction - i.e. right a lot of the time, but catastrophically wrong a lot of the time too, with no insight as to which is which?

I saw a ChatGPT example the other day where it confidently explained how taking 90% of a number then adding 10% of that would get you back to the original. It sure doesn't appear that there's any intelligence there, even if it seems it's generating "correct" answers enough of the time that people want to ascribe intelligence to it. I'd trust advice from ChatGPT about as much as I'd trust advice from Eliza (a simplistic 1960s ChatBot that also managed to fool some people).
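
(The arithmetic: 0.9x + 0.1 × 0.9x = 0.99x, i.e. 1% short of the original, not equal to it.)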

I'm not sure what it says that this type of "contextually correct bullshit" is being productized into things like OpenAI Codex. What next? Use it for medical advice too ? If I want untrustworthy suggestions I'd rather just Google for it where I can at least judge the source, as opposed to ChatGPT where you have no idea whether it's regurgitating stuff from NASA or 4chan.


These AI advances are thrilling and a bit spooky.

RPG games will include options for laws of karma or configuring an omniscient AI "god" who ingests all of your actions and develops consequences or new plots and dialogue. And of course the player can pray to the god for forgiveness, for blessings, etc.


This is exactly what's been exciting to me about the future of games - AI will allow for a level of dynamism that simply was not possible before. It's seriously exciting stuff.


It doesn't take much for us to believe someone is human, mostly confirming expected behaviors, doing small talk, etc. But AI NPCs will do all that and seem to have will and even conscience.

I feel dizzy trying to think through the social and emotional ramifications of becoming seriously attached to NPCs who seem real and can articulate their feelings and share experiences with us.


Wow, that second part is really nuts to think about, and I hadn't considered it before. You'll form connections based on personal conversations privy ONLY to you, then potentially watch that character die forever. If it's permanent, and not deterministic, you've quite literally lost that person forever.


This isn't a thing to worry about at this point.

In the real world (outside of programming competitions, that is), programmers don't get clear instructions for what we're doing, and what instructions we get deal with messy aspects of the real world. For just one example I tried: if you ask Codex how to calculate taxes on US products, it misses the point of the ask. I know that you have a tax rate and a price and you multiply them together - but what tax rate is applicable? Figuring that out is a thorny problem.

In programming, a lot of the inherent complexity is in understanding the weird bits of ambiguity and the corner cases. Whether that specification is through natural language or some other thing, we're a ways off from a computer capable of replacing us on that.


> understanding the weird bits of ambiguity and the corner cases

I suspect it would not do a very good job at writing test code for itself. Until it can prove that it writes code that works, it’s only really guessing.


But when can it replace my stable of 10X coders and my three 100X react devs? (aka the Clydesdales).


What's the analogue for cleaning the Clydesdale stable?


Future hires.


What happens in a future where AI, trained on human-generated datasets, becomes so prevalent that the humans who created the training data no longer have the original skill set required to create said datasets?

Are we going to end up with AI training on AI-generated datasets?


There's this paradox in AI, known since the 80s (I think it's Moravec's paradox?), that the abilities we acquired latest in our evolutionary history (e.g. chess, language, algorithmic thinking) are the easiest for AI to automate. But the tasks that are deep in our evolutionary history (e.g. object recognition, navigating the environment, etc.) are extremely difficult.

Some AI speculators, using Gödelesque reasoning, posit that computer programming will be the last job to be automated by AI, so it can bootstrap itself. In reality, it will be one of the first jobs automated by AI, while jobs like cook, barista, etc. will be automated much later, if ever.


I just sat in a Burmese restaurant with an automated barista. Korea has long been ahead of us on automating food service too. Automated 24/7 drink-making machine arms are common in Seoul.


Tasks that involve manipulating physical objects to maintain biological homeostasis (e.g. cook) are hyper-optimized by biological evolution. It is impossible for engineers to engineer a unit that performs better with the same energy restrictions humans have.


Good thing fusion actually is just 20 years away after all this time!


I wonder what the analog will be: just as the hacking and cybersecurity problems were (IMHO) severely underestimated at the dawn of the internet, how badly are we underestimating the ways malicious forces will employ AI? How will a criminal mind use AI to make a living?


> https://alphacode.deepmind.com/#layer=18,problem=137,heads=1...

> Here AlphaCode could not come up with an algorithm to solve the problem [snip]

> But then AlphaCode behaves a bit like a desperate human, and hardcodes the answer for the example case to pass it even though its solution is wrong, hoping that it works in all other cases but just not on the example. Humans do this as well, and such hope is almost always wrong - as it is in this case.

It behaves just like me...


We’re not far from coding becoming just a hobby or a sport - like chess.


We're nowhere even remotely close to that. I have to assume that anyone reacting on this level is in the same group of people who 10 years ago believed we would have fully self-driving cars by now.


I don't know - ChatGPT is the first AI product that I've consistently used in my workflow, to the point that when the site was rate-limited today, I felt a little paralyzed. Googling for answers on Stackoverflow or looking up documentation felt distinctly primitive.

I've never felt that way about any tool.


All this is doing is parsing StackOverflow and other sources and compiling information using machine learning (sometimes completely incorrectly), and if that's saving you time because you do that a lot then that's genuinely great - it's an amazing tool. But the idea that a tool that scrapes StackOverflow is going to significantly replace programming jobs is frankly ludicrous.

Everything that ChatGPT does can be done manually. There are weak programmers who essentially piece things together from StackOverflow and other sources all day, and just brute-force "make it work". Did the emergence of StackOverflow replace programming jobs? No way - there are actually far more programming jobs now than there were when StackOverflow was founded. The reason is simple:

> If you want to create complex production software and grow and maintain it long term, it's simply not enough to copy and paste from the internet.

So therefore a machine that does exactly the same thing can't either. Go and build a complex ERP system using only ChatGPT answers, just try it. It will be a steaming pile of shit, riddled with technical debt. You won't be able to build a good business from the result.

Right now, even if you use ChatGPT, you inevitably need to be a developer to fix its errors, so it's currently replacing exactly ZERO developers.

When we have AI that can logically reason about code, along with the business context it's being applied to, then it will significantly replace programming jobs. We're absolutely nowhere near that, ChatGPT doesn't even sniff anywhere near doing that.

Personally, I can't even remember the last time I went on StackOverflow to find answers about something, and there are enough false positives that reading documentation is far preferable to asking ChatGPT about how something works. So currently for me ChatGPT doesn't save any time. I do use Copilot, though.


In 2014 there were AI experts on record saying that an AI beating a top Go champion was at least 15 years out.


Well, they were wrong.

This isn't the same question though. The ML models that were trying to beat Go were on the correct path, it was just a matter of improvement. It was inevitable that they would beat the game.

ChatGPT doesn't logically reason about code or business contexts. The developers of ChatGPT are not trying to do that, the developers don't know HOW to do that, not even slightly.

Until they're on that pathway it's not even a question: ML that very efficiently copies and pastes from StackOverflow is simply not going to significantly replace programming jobs. The question of how long is left until the AI replaces programmers is completely irrelevant right now.

Using Go/Chess analogy:

> If the game is to replace effective human programmers then the AI is not even playing the game yet, let alone X years away from beating it.


They beat the game barely two years later, in early 2016.

I'm sure every programmer has their fingers crossed that the long tail is very long. And not that I think programmers will be out of jobs, probably not, but their value as a prompt technician will be substantially less than as a software engineer.


Yup, our current version of AI is great for cool demos and very bad for real-world use cases.


I am dubious about the stated result.

Language models have been very good at spitting out chunks of text verbatim that were in their training data. Buried in the GitHub training data will be lots of examples where people post their solutions to fun problems, including past coding competitions.

How often is it managing to match a description of a repository and spit out correct code cribbed heavily from it? That is, instead of figuring out the problem, is it pattern-matching to a solution that someone else already figured out?

How would we know?


I think the much better blog post directly from DeepMind might answer your questions [0], but suffice it to say, the model does seem to be solving novel problems.

[0]: https://www.deepmind.com/blog/competitive-programming-with-a...


> It’s also unlikely the AI will take over programming completely, as its code is riddled with errors

I suppose that might count as performing as well as (some) humans.


>> It doesn’t have any built-in knowledge about computer code syntax or structure.

It's pretty impressive that this naive approach works on something as complicated as coding. In hindsight it seems "obvious" that it would work in game-like contexts like chess/Go. Programming feels like it's a significantly higher-level challenge in a way (not that Go is trivial).


I'm old enough to remember the furore when calculators came on the scene ("Calculators Conquer Arithmetic, Performing Better than Humans"). Having played with various AIs, it feels similar - impressive when there's a known solution to look up (probabilistically guessed by the AI in this case), but flummoxed as soon as novel analysis is necessary.


But how many humans lost jobs to calculators?


Lots, actually. The word "computer" used to refer to a job that humans did, full time.


Hopefully this will kill the algorithmic coding interviews and will focus them on actually understanding the job to be done.


I really love coding in Python and have always asked myself when the skill set will become legacy, like COBOL.

Not sure if you can compare them, but it seems that the next generation of coders will tell the AI what to code rather than code it themselves. A completely different skill set.


I wouldn't say it's a completely different skill set.

You still want to have people who actually understand what a piece of code or program does. A magical black box to throw prompts at might be nice for simple settings, but it can potentially cause major fuck-ups in complex systems.


Yeah sure, I just mean that it is like the step from COBOL -> Java -> Python in terms of productivity and the amount of code you write. So my point is, you would write much, much less code. Maybe your future "code" would just be some sets of well-placed comments, and the AI does the rest. In the same way, you could test the system to prevent any major fuck-ups. Sure, you have to understand it, but your core skill wouldn't be coding any more.


> the AI solved about 30 percent of the problems

I think "conquers coding" may be overstating it.


"When pitted against over 5,000 human participants, the AI outperformed about 45 percent...". So 55 percent of humans outperformed the AI. How motivated were the humans in the sample? How many tests were run with these 5000 persons?


It seems impossible for programmers to keep their jobs in the next five to ten years.


I fear this will eventually shift a large chunk of our time from coding to meetings.


Y'all getting any coding time? Meme image to be inserted here


A Tesla also drives as well as humans on a straight California highway.


So when do we get a web site designer where you write what you want and a web site is generated? That sort of thing is stylized and there's lots of training data available.


Is appsec dead/dying? Am I wasting time learning software exploitation? What's the point if everything will soon be Rust and Go written by ML?


And the award for dumbest headline of the year goes to


One thing software engineers can do now is push legislation that will protect their jobs from being replaced by AI. It won't be that long until there is a 10x improvement in AI-based software development over human-based, easily within the lifecycle of legislation. When that happens, we need to be legally at the table, or we will straight up be cut out of the profits by billionaires and hedge funds, because fuck us and anybody except for them (it is their world, financially).


“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” – Martin Fowler


Next, the AI should be able to maintain and improve itself without human intervention.

Until then, humans are needed to build those AIs.


Alright everyone it's time to unionize


Depending on the pool of course, but 50th percentile is pretty poor in a programming challenge.


Can it do something like the ARC challenge [0]?

[0]: https://www.kaggle.com/competitions/abstraction-and-reasonin...


So, can it make a traveling salesman solver for current quantum computers?


It probably can, but it won't run in polynomial time.


Shucks


Performing as well as humans isn't something to be proud of...


ahh good, and here I was getting bored of my current existential fears


Is this a new version of AlphaCode or just a new evaluation of it?


Is it even a new evaluation? The article is really unclear.


Looks like they trained it on an additional 13.5k competitive programming problems for this specific task.


> Performing as Well as Humans

> While not yet on the level of humans,


It seems preposterous, but DeepMind’s new coding AI just trounced roughly 50 percent of human coders in a highly competitive programming competition.


So you're telling me it had requirements and expected results?

My job is safe I see.


Wait until the AI starts writing requirements and specifications docs.


It'll still be gibberish, but at least it's more likely it'll be grammatically correct.


Created by ChatGPT just now...

Title: Online Website for Chocolate Donuts

Objective:

The objective of this website is to provide a platform for customers to order and purchase chocolate donuts online. The website will offer a variety of chocolate donut flavors and allow customers to customize their orders by choosing toppings and packaging options.

Functional Requirements:

- Customers should be able to create an account and log in to the website.
- Customers should be able to browse and view the available chocolate donut flavors, toppings, and packaging options.
- Customers should be able to add chocolate donuts and toppings to their cart and specify the quantity of each item.
- Customers should be able to view their cart and checkout to place their order.
- The website should process payment and provide confirmation of the order.
- Customers should be able to track the status of their order and receive updates on the expected delivery time.

Non-Functional Requirements:

- The website should be secure and protect customer information.
- The website should be easy to use and navigate.
- The website should be accessible on a range of devices, including desktop computers, laptops, and mobile devices.
- The website should load quickly and be responsive to user actions.
- The website should have a clean and professional design.

User Roles and Permissions:

- Customers: Customers will be able to create an account, log in, browse and order chocolate donuts, track their order status, and access their account information.
- Admin: The admin will be responsible for managing the website, including adding and removing products, processing orders, and managing customer information.

User Interface Design:

- The website will have a clean and professional design, with a navigation menu at the top of the page and a search bar for customers to easily find specific products.
- The home page will feature a carousel of images showcasing the available chocolate donut flavors and toppings.
- The product pages will display images and descriptions of the chocolate donuts, along with options for toppings and packaging.
- The cart page will display the items in the customer's cart, along with the total price and a checkout button.
- The checkout page will allow customers to enter their shipping and payment information, review their order, and place their order.
- The order confirmation page will display the details of the customer's order and provide an estimated delivery time.
- The account page will allow customers to view and update their account information, including their shipping and payment details.

Database Design:

- The website will have a database to store customer information, including their name, email address, shipping and payment details, and order history.
- The database will also store information on the available chocolate donut flavors, toppings, and packaging options.
- The database will keep track of the current orders and their status, including the expected delivery time.

Technical Requirements:

- The website will be built using HTML, CSS, and JavaScript for the front-end, and a server-side language such as PHP or Ruby for the back-end.
- The website will use a database management system such as MySQL to store and manage the data.
- The website will be hosted on a web server and accessed through a domain name.
- The website will use a secure socket layer (SSL) to encrypt data and protect customer information.

Testing:

- The website will be tested to ensure that it meets the functional and non-functional requirements.
- User acceptance testing will be performed to ensure that the website is easy to use and navigate.
- The website will be tested for compatibility on different devices


That is beyond high level. Best case, that’s a highly summarized generic version of some actual requirements.


I guarantee the OP could have asked GPT to expand on each of those and it would have done so flawlessly.


So you're saying imposter syndrome is a real thing?


AI code isn't real code.

It lacks the human soul.

It's theft.*

It's unethically trained.

As a coder, I'm exploited!

It's going to take my job!

* only if it spits out licensed code blocks and comments verbatim


Welp, back to school for EE I go.


EE? I think plumbing and prostitution are all that will be left soon... and I'm really horrible with tools and not that good-looking, so unemployment it is...


Horrible news!



