Having seen the no-code movement come around over and over again, I do not believe this time is any different. From the MS Access days through Adobe's Dreamweaver and many other forms of no-code solutions, the environment that supports no-code requires a tremendous amount of code. Knowing how to code can help you optimize, automate, and configure no-code solutions. The end result always comes back to code.
Throughout my career in computing I have heard people claim that the solution to the software problem is automatic programming. All that one has to do is write the specifications for the software, and the computer will find a program [...]
The oldest paper known to me that discusses automatic programming was written in the 1940s by Saul Gorn when he was working at the Aberdeen Proving Ground. This paper, entitled “Is Automatic Programming Feasible?” was classified for a while. It answered the question positively.
At that time, programs were fed into computers on paper tapes. The programmer worked the punch directly and actually looked at the holes in the tape. I have seen programmers “patch” programs by literally patching the paper tape.
The automatic programming system considered by Gorn in that paper was an assembler in today’s terminology. All that one would have to do with his automatic programming system would be to write a code such as CLA, and the computer would automatically punch the proper holes in the tape. In this way, the programmer’s task would be performed automatically by the computer.
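To make Gorn's sense of "automatic programming" concrete, here is a toy sketch of what an assembler automates: translating a mnemonic like CLA into the machine encoding instead of punching the holes by hand. The opcode values here are made up purely for illustration, not taken from any real machine:

```python
# Toy assembler sketch: mnemonics in, machine words out.
# The encodings below are hypothetical, chosen only to illustrate.
OPCODES = {
    "CLA": 0b0001,  # clear accumulator
    "ADD": 0b0010,  # add operand to accumulator
    "STO": 0b0011,  # store accumulator at operand address
}

def assemble(lines):
    """Translate mnemonic lines into 16-bit machine words."""
    words = []
    for line in lines:
        mnemonic, _, operand = line.partition(" ")
        # opcode in the high bits, operand (if any) in the low bits
        word = (OPCODES[mnemonic] << 12) | (int(operand) if operand else 0)
        words.append(word)
    return words

print(assemble(["CLA", "ADD 5", "STO 9"]))  # [4096, 8197, 12297]
```

The point is only that the "automation" is mechanical translation to a lower level, which is exactly what every later compiler does at a larger scale.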
In later years the phrase was used to refer to program generation from languages such as IT, FORTRAN, and ALGOL. In each case, the programmer entered a specification of what he wanted, and the computer produced the program in the language of the machine. In short, automatic programming always has been a euphemism for programming with a higher-level language than was then available to the programmer. Research in automatic programming is simply research in the implementation of higher-level programming languages.
> automatic programming always has been a euphemism for programming with a higher-level language than was then available to the programmer
And it seems to me that progress is going in the opposite direction than "they" want. Every time you move up the abstraction stack, you're surrendering some decision-making to the lower levels. If the underlying technologies guess right every time, you have no need to understand what they're doing. The first time they guess wrong, you have to spend a lot of time understanding not only how the lower layers work, and not only why they did the "wrong" thing in this one instance, but how to fiddle correctly with the layer you're operating at to get the lower layers to behave properly. You can work quickly with the high-level abstractions only as long as you understand the lower levels reasonably well.
Optimal machine learning requires a good understanding of memory cache hierarchies, parallel instructions and complexity theory - not to mention the statistics and calculus that it's formed on. And "optimal" isn't some trivial "save a few seconds" but often "return an answer within the lifetime of the universe".
Security is also something to be mindful of. A lot of my work as a professional vulnerability researcher is just using my low-level knowledge to circumvent higher-level abstractions people usually ignore. The abstractions show no sign of slowing down, and I fear soon only a few will be able to peek into all the levels needed to provide reasonable security. Whenever I see a system built with "automated" technologies, I usually start there to find flaws. In order to truly utilize high-level abstractions it is useful to actually understand what provides them.
I feel like I was lucky to have started learning computers when I did in the late 80's. There weren't nearly so many "time saving" abstractions back then, so if you wanted to see anything happen, you had to have a good understanding of what was going on under the hood. Although it was at times frustrating back then to put so much effort into something as simple as drawing a circle on a screen, I'm fortunate that I was forced to spend so much time internalizing the details - I don't know if I would have the patience to learn it all if I was starting right now if I could see that shortcuts existed.
It really depends on the specific person. I started programming in 2006 at 21yo and I don't feel like I missed anything at all. Spent a lot of time reading and researching things anyway. Higher abstractions don't necessarily mean lower complexity.
Hmm - I don't see many people tinkering with machine code, even compiler bugs are rare, and programmers don't normally need to understand what their compilers do.
If you're working with HPC applications, your choice of compilers and sometimes higher level specifications you may write in C, C++, or (gasp) Fortran often do demand you think about what your compilers are doing or choose compilers to think better for you.
If you're fine with lower performance (which is reasonable for a lot of application cases, so I almost entirely agree with you), you certainly don't have to deal with this.
These folks are working on bootstrapping a full Linux distribution solely from a small amount of machine code plus all the source code of the distribution:
I find a lot of useful abstractions end up getting implemented twice: once in a "magical" way where you rely on the runtime to manage it according to a bunch of cobbled-together ad-hoc rules, then again in a "principled" way where it's under the programmer's control and can be reasoned about, but still (almost) as usable as if it were working by magic.
e.g. ad-hoc exceptions -> errors as plain values, but with "railway-oriented programming" techniques that make them as easy to work with as exceptions
e.g. runtime-managed garbage collection -> rust-style borrow checker ad-hoc in the compiler -> haskell/idris-style linearity in the type system
e.g. "magic" green-threading -> explicit-but-easy async/await
e.g. behind-the-scenes MVCC in databases -> explicit event sourcing
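As a concrete illustration of the errors-as-values bullet, here is a minimal sketch of railway-oriented chaining. The `Ok`/`Err` types and the `and_then` name are my own for illustration, not from any particular library:

```python
# Minimal "errors as plain values" with railway-oriented chaining:
# Ok values flow down the success track, Err short-circuits.
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

T = TypeVar("T")
U = TypeVar("U")

@dataclass
class Ok(Generic[T]):
    value: T
    def and_then(self, f: Callable[[T], "Result"]) -> "Result":
        return f(self.value)  # stay on the success track

@dataclass
class Err:
    reason: str
    def and_then(self, f) -> "Result":
        return self  # failure track: skip every later step

Result = Union[Ok, Err]

def parse(s: str) -> Result:
    return Ok(int(s)) if s.isdigit() else Err(f"not a number: {s!r}")

def halve(n: int) -> Result:
    return Ok(n // 2) if n % 2 == 0 else Err(f"{n} is odd")

print(parse("42").and_then(halve))  # Ok(value=21)
print(parse("x").and_then(halve))   # Err(reason="not a number: 'x'")
```

The chain reads as linearly as exception-based code, but the failure path is an ordinary value you can inspect, log, or pattern-match on.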
> The first time they guess wrong, you have to spend a lot of time understanding not only how the lower layers work, and not only why they did the "wrong" thing in this one instance, but how to fiddle correctly with the layer you're operating at to get the lower layers to behave properly.
A sister team at my company is quite proud of its rules engine that allows non-programmers to quickly implement business policies using its DSL in a web UI.
With the years and postmortems gone by it has grown half-assed attempts at version control, code review, unit testing, deployment pipelines, etc. Now it’s very obviously just a shitty, hobbled software development environment. It’s used primarily because the team that owns it is aggressive about blocking design reviews for “duplication of effort” if you propose to use normal software development tools anywhere near its domain.
>version control, code review, unit testing, deployment pipelines, etc.
This is absolutely one of my biggest pain-points with "no-code" solutions. Even trying to track revisions to something relatively simple like a Word document over time is a big pain compared to tracking revisions to source code or a configuration file. Trying to get a grip on how people are fiddling with a no-code product from the audit logs is incredibly difficult, never mind trying to track down a change from a year ago.

Often changes won't appear on audit logs at all, or they won't be explained in enough detail, and the format of the logs will have little resemblance to how things are actually configured. You can use some no-code solution to modify a SQL query under the hood and the audit log will just say "THIS USER CHANGED THIS QUERY" and that's all the detail you get!

It's frequently difficult to explain to your peers how you're going to change a system without showing them a bunch of screenshots and going "Well, I'm going to tick this box and move the green rectangle over here and link it to the orange oval". Rolling back changes can often be impossible without rolling back EVERY change between now and when the first incorrect change was made.
I use code, and these problems just don't happen! It's only when people are using some wonderful "user-friendly" solution that things get so jacked up.
Whenever someone is championing a "no code" or "configuration driven" solution for business processes all of these things you just described are alarm bells ringing in my head.
Especially when it is anything involving finances, even tangentially.
Making it easy for non-engineers to change business rules on the fly without a code deploy sounds nice in theory, until you think about it for a few more minutes.
What if a no-code system was built atop a text-based language that lived in a regular file-system workspace? The "no-code" part would just be a fancy IDE, but the code would still exist to be edited directly, tracked with git, whatever else.
You'd need to make sure that edits in that fancy IDE can be sensibly diffed/merged at the level of that text-based language. I've never seen good diff/merge for a graphical format, so I think this kind of "no-code" ends up being just code.
I'm doing something like this for a system that's configured via web-interface. It has a stable, readable export format which I'm tracking with Git. So we can actually have a diff and reviews of the changes.
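A minimal sketch of why a stable export format makes Git tracking work: if the exporter canonicalizes its output (sorted keys and fixed indentation, assuming a JSON export here for illustration), semantically identical configs are byte-identical, so diffs show only real changes:

```python
# Canonicalize a config export so Git diffs are meaningful:
# unchanged settings never show up as noise caused by key ordering.
import json

def canonical_export(config: dict) -> str:
    return json.dumps(config, indent=2, sort_keys=True) + "\n"

# Two exports of the same settings are byte-identical regardless of
# the order the tool happened to emit them in:
a = canonical_export({"timeout": 30, "retries": 3})
b = canonical_export({"retries": 3, "timeout": 30})
assert a == b
```

Commit the exported file after each change and you get real diffs, blame, and reviews on top of a system that otherwise only offers screenshots.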
They store so much, why not a changelog? It baffles me.
The other thing is there may be no programmatic way for me to, say, get a list of all the forms associated with a certain table and the fields they contain, short of browser automation (something I use not infrequently with a no-code system), plus APIs that are almost complete but not quite.
I'm having flashbacks to attempting keystroke automation which works 90% reliably until a field is added via an update and breaks EVERYTHING until you work around it.
I was almost bitten by a project like this, despite ample protests the whole way long that "configuration" was just becoming a "crappy programming language". All too often I still have to deal with someone who thinks because logic hinges on a value in a JSON file, they've magically abstracted away the concern of what it is that the code should be doing.
Man, I'm not from the Java world, but recently I had to deal with it for a project and had to mess with ant builds, which I have never dealt with before. I was totally baffled. I just do not understand why anyone would subject themselves to such levels of pain.
The above describes what is today called program synthesis, i.e. generating a program from a complete specification. There is a parallel discipline, of inductive programming, or program induction, which is the generation of programs from incomplete specifications, which usually means examples (specifically, positive and negative examples of the inputs and outputs of the target program). Together, program synthesis and program induction comprise the automatic programming field, which has advanced a little bit since the 1980's, I dare say.
Inductive programming is very much a branch of machine learning and that's the reason most haven't heard of it (i.e. it's machine learning that is not deep learning). The main approaches are Inductive Functional Programming (IFP) and Inductive Logic Programming (ILP, which I study for my PhD). IFP systems learn programs in functional programming languages, like Haskell, and ILP systems learn programs in logic programming languages like Prolog. And that is why neither is used in industry.
That is to say, both approaches work just fine - but they're not going to be adopted anytime soon (if I may be a bit of a pessimist) because most programmers lack the background to understand them and they can't be replaced by a large dataset.
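To make the "learning programs from positive and negative examples" framing concrete, here is a toy brute-force sketch. Real IFP/ILP systems like MagicHaskeller and Aleph use far more sophisticated search plus background knowledge; this only illustrates the problem statement, with primitives I picked arbitrarily:

```python
# Toy program induction: search compositions of fixed primitives
# for a pipeline consistent with positive and negative I/O examples.
from itertools import product

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "sq":  lambda x: x * x,
}

def induce(positives, negatives, max_len=3):
    """Return the first primitive pipeline matching all examples, or None."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def run(x, names=names):
                for n in names:
                    x = PRIMITIVES[n](x)
                return x
            if all(run(i) == o for i, o in positives) and \
               all(run(i) != o for i, o in negatives):
                return names
    return None

# Target behaviour: f(x) = (x + 1) * 2
print(induce(positives=[(1, 4), (3, 8)], negatives=[(2, 5)]))  # ('inc', 'dbl')
```

The incomplete specification (a handful of examples) underdetermines the program; the learner's job is to pick a consistent hypothesis from the space it can express.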
A quick introduction to Inductive Programming is in the Wikipedia articles:
I suggest following the links to the article's sources and searching for IFP and ILP systems separately. Two prominent representatives are Magic Haskeller (IFP) and Aleph (ILP):
Neat! Thanks for posting this. I bumped into Genetic Programming a few times when I was studying evolutionary algorithms. Program induction seems bigger, more modern, and more sophisticated.
Some of the IFP systems actually use evolutionary algorithms, I believe - ADATE in particular, though I can't find an actual instance of it online, or any other information but a mistitled paper:
I see where you're going here. It might help if we construct a specific syntax and grammar for writing specifications for software. It'll be helpful for all of us if we can make our specifications as clear and efficient as possible. We could even call this combination of a syntax and grammar a programming language.
Just imagine a world when we have these programming languages to help us encode the specifications for our software. Wild! :)
That’s a specification that gives you no knowledge how to solve the problem. You can definitely have precise specifications that do not directly correlate to machine instructions.
I like this answer because it says it will require someone with specific knowledge and specific training doing a specific job anyway...
Also "anyway" a specification is always incomplete. It's part of the programmers' job to fill the gaps with something sensible, and when they have no idea what to do, to point out that some corner case was overlooked.
This is where programmers sometimes start to feel like Lieutenant Columbo: at first everyone is nice, but people become more and more irritated as the pesky cop asks more and more embarrassing questions.
A key distinction here is whether you could write your no-code system using your no-code system. If your no-code system is self-hosting, then you've made a higher-level programming language. Otherwise, it's a tool that lives on some dimension other than "programming language".
This is a good insight. So, clearly, higher-level languages have been an enormous success. But "no-code", as we mean the term today, has still (mostly) been a failure.
So then what do we actually mean by "no-code" these days? I think "no-code", today, has the unspoken implication of "graphical". Okay, so why are "graphical coding" systems (languages?) mostly unsuccessful? There are some clear exceptions like Unreal Engine's Blueprints and certain WYSIWYG web editors like Squarespace, but the great majority of programming is still done in what comes down to, at the end of the day, text files. There may be more and more elaborate editors built atop these text files, but the "bones" of the code is always available, and never too far out of reach.
My pet theory is that this last bit is the differentiator. In a totally graphical programming environment, the programmer never has to be exposed to the underlying format directly. This perhaps encourages proprietary stacks, where the GUI is all that's known and the substance of the "code" itself may never even be made available to those who seek it out.
Maybe this arrangement keeps these "languages" siloed, and therefore keeps them from gaining real traction. It's hard for a thriving ecosystem to evolve around a closed format. Competing tools, editors, compilers, open transfer via regular files, translation, etc are all stifled. You end up totally dependent on one company for all of your needs. For some projects the value-proposition still works out; for most it doesn't.
If so, here's my proposal: instead of focusing on all-in-one no-code environments, focus on creating graphical tooling for existing languages. Or even, creating new (text-based!) programming languages that are designed from the get-go with advanced (graphical?) tooling in mind, while still having that real, open, core source-of-truth underneath it.
We've seen echoes of this already: Rust's features would make it nearly unusable without its (excellent) user-friendly tooling. Design apps like Sketch can output React source code. Pure-functional languages like Clojure really thrive with an editor that has inline-eval. I think if "no-code" is ever going to catch on for real, it needs to be less afraid of code.
On the other hand: is the value-add really the graphical interface, or is it the "height" of the language?
In the latter case, maybe it's more important that we explore even-higher-level languages, and set aside the graphical part as a distraction. Or, maybe we combine the two goals: create higher-level-languages that also lend themselves to graphical UIs, but are still grounded in formalized text underneath.
The common denominator in textual coding is text itself. There were historically some differences by locale, encoding, and line ending, but we have managed to converge on a sufficiently encompassing standard with UTF-8 text that the rest can reasonably-if-imperfectly be dealt with by the text processing code. And this is the source of friction: our text editors and terminal emulators are great - so great that it's hard to get out of the path dependency involved in utilizing them. It isn't just the interactive cursor, or copy-paste, or search-and-replace, or even regular expressions... it's decades of accumulated solutions that can handle every conceivable text problem.
Every time we go to some other way of describing a document we lose all that text editing infrastructure, which is such a huge setback that no alternate solution has yet hit a widespread critical mass, only specific niches in specific domains.
Which doesn't make text good, it makes it worse-is-better!
As I see it, the probable way forward would not be in language design, but in formats and editors. Breaking through the dependency will be a slow process.
I think there's real value add from a language that's designed to work in an IDE-first way, that uses the GUI not to replace text but to enhance it. The best example I know of this is Scala's implicits: they're not visible in the code itself, but in an IDE you can see where they're coming from (e.g. ctrl-shift-P in IntelliJ), so they're a great way to express concerns that are secondary but not completely irrelevant (e.g. audit logging - basically anything you'd consider using an "aspect" or "decorator" for). Another example is type inference (which a lot of languages have): your types aren't written in the program text, but they're available reliably when editing, so you can use them to express secondary information.
People on HN seem to think programming languages should be designed for a text editor as the primary way of editing, and I think that's a mistake that holds programming culture back (and is why we see these "graphical" languages go too far in the other direction, because the only way to get people to take a proper language editor seriously is to make a language that's impossible to edit in a text editor). Having a canonical plain text representation is important. Having good diff/merge is important. Editor customizability is important. But if you embrace the IDE and build an IDE-first language, without abandoning those parts, you can get a much better editing experience.
> This perhaps encourages proprietary stacks, where the GUI is all that's known and the substance of the "code" itself may never even be made available to those who seek it out.
These kinds of full-featured IDEs tend to lead to knowledge gaps, in my experience. I'm not really a Java programmer, but I've helped more than one Java developer (usually junior ones, to be fair) with build issues. This is because when I use Java, I write it using vim and either `javac` and a Makefile, or else the command line interface of a Java build tool (ant, maven, gradle, etc). That has given me a good understanding of the Java build process, which I've found many users of Eclipse or IntelliJ do not have.
As such, I think your proposal may have legs, because it could allow less technical users access to the programming environment, while retaining a text-based interface for more expert users.
I can't agree more. When I started working on Slashdiv, I had looked at the existing NoCode platforms and came to the conclusion that code is the most succinct representation of logic and graphical tools will have to get users to jump through hoops to create basic logic (programmer here, no surprises!).
Where visual tools help is in UI building and layouts. That's exactly what I built Slashdiv for: create a UI and output React code.
I agree. This goes back even further than that. In the 80's various "construction set" programs were all the rage. They were essentially no-code programs, many disguised as games. Later there were 16-bit solutions, like a program called "Can Do" for the Amiga. (I'm not entirely sure of the title, but I remember the splash screen had a voice shouting "I Can Do; you can, too!")
Macromedia Dreamweaver was the big inflection point because the results could be deployed on the web, for anyone to see, regardless of what box they were running.
The problem I have is that it devalues the worth of actual programmers by making everyone think what we do is easy. It isn't. I see this all the time with do-gooder charities on TV: "We're teaching kids to code, and next week they'll all be billionaires!" They present themselves as if a week in a makeshift classroom is all it takes to compile and deploy a Swift app.
I've thought about this since the early part of this year when a middle manager in my company's Communications department dismissively told me that she could do my job because, "I know code." What does that even mean?
> I've thought about this since the early part of this year when a middle manager in my company's Communications department dismissively told me that she could do my job because, "I know code." What does that even mean?
It's a signal to look for a better position somewhere else.
Sounds like you work at a company that considers tech to be "the easy part" or a cost center...
> I've thought about this since the early part of this year when a middle manager in my company's Communications department dismissively told me that she could do my job because, "I know code." What does that even mean?
Frustrating. And I know how to read and write, but it doesn’t mean I could write a decent novel.
What a lot of people don’t understand about software development is that for any non-trivial project, “coding” is the easy part, the real challenge is in the managing of complexity.
> We're teaching kids to code, and next week they'll all be billionaires!
Sure, that's not true. But they will gain a better understanding of how to organize and process information. This understanding will help them even if they don't end up in a software engineering career.
While No Code (nee Visual Programming, CASE Tools, whatever) may handle 90% of the use cases, the remaining 10% become dramatically harder. Because now you're fighting the framework. Which is an angry 800lb gorilla sitting between you and your work.
Faced with the same challenge, I went in the opposite direction.
Typical strategy of BizTalk, Talend, SeeBeyond, many many others is some kind of patch cord flow chart style programming, with Access VBA style event hooks for script extensions.
I created a stupid simple framework and API optimized for our domain (medical records). Think serverless computing and awesome DOMs for HL7 and adjacent data formats.
Onboarding for our SeeBeyond-based projects was 3 to 6 months. Using my stack was 1 to 2 weeks. (One of the weeks was teaching healthcare domain experts some Java, Eclipse IDE, and version control.)
Further, in my experience, none of these No Code solutions have useful exception handling, logging, fault/error recovery, and other misc devops type stuff. So are an absolute nightmare to support in production.
> The remaining 10% become dramatically harder. Because now you're fighting the framework.
That is a great point. Whenever I've tried a no-code solution I end up dropping it and going back to code. Some insurmountable roadblock appears, and if I had started with code I wouldn't be in this mess.
I've also made the mistake of recommending no-code solutions to my business - trying to reach for the golden ring of self-sufficiency. Seems great at first, and then you end up trying to "debug" a no-code framework that you have little or no insight into.
My experience with SSIS 2012, Microsoft's low-code ETL tool, suggests that you're trading code for a giant config file.
Unless carefully planned out to be modular, you end up with a giant pile of technical debt that is
(a) pretty opaque in its error messages
(b) has poor error handling systems
(c) is brittle in its ability to handle changing file formats or business needs
(d) can only be tested end-to-end, with no space for small unit tests to act as guide rails
(e) makes it very easy to hide business logic all throughout the ETL process
Upside to this tool: It seems to be handling a lot of simple parallelization speedups behind the scenes for you, and catches most type-conversion errors.
A third of the websites in the world use WordPress, which can certainly be configured using "no-code", and yet it's being sold as something brand new. Excel is everywhere and allows for "no-code" automation. No-code has been around forever, and for some reason incremental improvements to something utterly pervasive are a revolution that will be embraced by the next generation.
All I can think of is how Windows used to be administered almost entirely through the GUI but eventually PowerShell was invented and its popularity grows year over year. How is that even possible when the "no-code" way to administer Windows must clearly be better and more accessible and user friendly and coding is old fashioned and borderline obsolete? Every company with their heads on straight would fire anybody who uses PowerShell and replace all their admins with unskilled high school graduates. There is not enough room in town for both no-code and code solutions, so I expect Microsoft to remove PowerShell in its next update.
I get this hunch that no-code is attractive because it hides the true costs. You can apparently hire cheaper labor, but you don't see the maintenance costs. You can also apparently delegate coding tasks to non-coders, which might help you squeeze the budget bubble around the business. But there's costs to that as well in terms of making people less effective at their normal work.
the reality is that even the no-code advocates are fully aware of this. I was surprised that webflow let me publish "No code is a lie" on their blog. https://webflow.com/blog/no-code-is-a-lie
if you stop there though, you're missing out on the business opportunity. you can dunk on things on HN all you want but if these businesses are right and the demand for software has massively outstripped the availability of engineers then you're gonna make money in no code tools.
Software engineers spend every day trying to automate away their jobs, and it's hardly a conservative field.
If it were easier to use a visual interface than code, then devs would figure it out and we'd all start doing it.
The "no code" revolution is probably more accurately the Microsoft Access replacement for our generation. Sleeker, cooler, more professional. But like Access, you have to replace it with real software when it starts getting complex.
I think this is just it: text has a higher information density than (say) visual programming tools. At some point you quickly reach a threshold of complexity where you either need to:
1. Hide parts of the program (e.g. hidden cells on a spreadsheet; attributes of nodes in a visual programming environment)
2. Switch to text for the higher information density.
I think the burden of #1 quickly surpasses the burden of learning to work with #2.
No code can be great if you want to spin up something simple quickly.
Once it grows to any reasonable level of complexity it becomes more of a hindrance than a help.
Access was actually a great tool for allowing semi-technical people make quick apps, but almost every shop I've been in has needed to convert Access to grown-up software.
Granted, it's a good way to prototype ideas. Less analysis is needed when one converts from Access to the new thing because the ideas are road tested. But one problem is that modern web stacks are a bloated incoherent mess that take an army of specialists to manage. That may be part of the reason for the resurgence of "no code" tools.
But the real fix is to simplify web stacks for smaller projects. We don't need Rube Goldberg routing engines/URL-prettifiers, ORM organics, Bootcrap organics (UI), Async/Await bloat, etc. We want our app to track some local e-paper-work, not be Netflix. A Learjet will do, don't gear it for Jumbo Jets. Time for a KISS web stack to become a de-facto standard.
Oh yeah, Access was great for prototyping and I've argued as much here several times. I've built Access prototypes in earlier jobs.
> But the real fix is to simplify web stacks for smaller projects
We've already done that. RoR, Django, ASP.NET MVC, etc. etc. It's super easy to get something working out of the box. But at some point, you still need to understand what your database is doing and optimize for it. Some level of complexity is inescapable.
And you could probably argue that as back-end has gotten simpler, complexity has just expanded to fill the void. Now we expect fancy JS interfaces on top of them, too. And phone apps.
I have no doubt all of the things I mentioned will get simpler, but I can't foresee anything that will make software devs irrelevant on the immediate horizon. I suppose at some point BAs/PMs (people who gather requirements and draw wireframes) will be able to write code, and we'll start to compete more with skilled BAs. But that's still not the end of the entire profession. Gathering requirements is still hard.
A lot never will though. I don't see no-code replacing many custom developed applications. I see it making apps accessible for processes that used to be managed by spreadsheet and email.
I think the core of the issue is that real problems contain a lot of complexity, and code is still by far the most efficient way to encode complex logic.
For instance, I've done some work with node-based editors like Node-RED and Max/MSP, and while they are nice for simple cases, invariably they reach a tipping point where you end up with a giant graph which is far less navigable and comprehensible than a set of source files.
I've been using Mendix for a while now (for a client), which is more low-code than no-code. I enjoy it quite a bit. It's very easy to add a new entity (table) and some screens for CRUD stuff. Even without code you can customise quite a bit, but there's always Java (server) and JavaScript (client) actions. Nevertheless, as with many frameworks, it gets awkward when you move too far away from the 'intended use cases'.
However, I do think that you still need a dev mindset, with dev experience, to use it well. Principles like DRY, loose coupling, code organisation, consistent and informative naming, concise logic, etc apply just as well. Those are hard-to-transfer skills you build up over time. Although a junior may get things working very quickly, keeping things maintainable and extendable is just as hard, or even harder.
Mendix & Outsystems are pretty popular in the enterprise space and they work quite well. Like you say, don't move too far away from the intended space, but the LoB (line of business) space is vast for big enterprises and that's what these tools are for. There is no need to do 100%, as you can get to 80-90% and the rest you don't do with those packages; you do it with code. Why is that a problem? You use different frameworks / languages / dbs / etc with code too; these systems are no different; use them for what they are meant for and do the rest with other tools. Of course there is more than enough work to be had just creating departmental LoB apps to last you 1000s of lifetimes, but that's another story.
So, like you said, I believe you still need to be a dev to deliver with these systems; 'end-user programming' is not close, outside Excel. People who know nothing about programming will generally make complete monsters with these systems, even after training (I saw it a lot with OutSystems whenever I clicked open a business flow).
We won't see a real end-user development revolution from this low-code wave; maybe from the next one, somewhere around 2030?
> Nevertheless, as with many frameworks, it gets awkward when you move too far away from the 'intended use cases'
Right. And that probably applies to many, if not all, frameworks, by virtue (vice?) of their design. I don't know Mendix, but I've worked with frameworks like Rails, Flask and others, including some proprietary in-house ones. You sometimes have to work around their limitations with custom code, awkwardly, even Rube Goldberg style at times.
And there can be instances or even categories of apps that can't be done using the framework at all.
The framework giveth and the framework taketh away.
People would rather learn [high-level abstraction] than learn [low-level abstraction]. But guess what: [high-level abstraction] IS [a kind of abstraction].
This is true for all of us - it's just that the level of abstraction each of us is comfortable with is different.
Not sure why people don't understand the value of raising the level of abstraction for more people to participate in software creation. It seems like an obviously valuable and worthwhile objective, even if it does look different to the way I build software.
Because they're pretending that this next [high-level abstraction] will be replaced by another style of [high-level abstraction], as if [high-level abstraction] is nothing but a scaffold to let us ascend to the true productive glory of [high-level abstraction]. That ignores the fact that it's not unheard of for platforms which started out no-code to later REGRESS and introduce coding features such as APIs, because, as it turns out, no-code is intrinsically limited in a lot of very significant ways.
When I use a "no-code" product, it's usually not more than a few months before I start using its API and adding small bits of code here and there to do things not natively supported by the product. You would have to have a puritanical fervor, and a lack of sense, to want to eliminate all semblance of code from a product made out of code.
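For what it's worth, those "small bits of code" tend to look like this. A rough sketch, not any particular product: the endpoint, token, and record fields are all invented stand-ins for whatever REST API the no-code tool exposes; the rollup at the end is the kind of logic the product often can't do natively.

```javascript
// Hedged sketch: "api.example-nocode.com" and the order fields are
// hypothetical placeholders for a real no-code product's REST API.
async function fetchOrders(apiToken, fetchImpl = fetch) {
  const res = await fetchImpl("https://api.example-nocode.com/v1/orders", {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json(); // assumed shape: [{ id, region, total }, ...]
}

// The "bit of code" the product doesn't support: a per-region rollup.
function totalsByRegion(orders) {
  const totals = {};
  for (const { region, total } of orders) {
    totals[region] = (totals[region] ?? 0) + total;
  }
  return totals;
}
```

The point is not the ten lines themselves but that they live outside the product, glued on through its API, which is exactly the regression to code the parent describes.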
It's almost as if after seeing you could add images to Microsoft word documents you saw an article coming out about how Written English was going to be replaced by Hieroglyphics and soon we could communicate with nothing but Emoji and talking about how the next generation won't be literate. The sentiment in this thread is mostly a reaction to irritating and implausible clickbait.
> It's almost as if after seeing you could add images to Microsoft word documents you saw an article coming out about how Written English was going to be replaced by Hieroglyphics and soon we could communicate with nothing but Emoji and talking about how the next generation won't be literate.
Perhaps because many believe that, at minimum, existing solutions don't successfully present a useful abstraction, while people have been over-promising for decades.
At maximum, that a GUI represents a bad abstraction that can't be used effectively by non-coders to produce code, save for when actual code exists that sufficiently specifies the program. For example, a GUI builder can help you build your web-based store but is insufficient to build the tool that builds the store.
> knowing how to code can help you optimize, automate, configure no-code solutions. The end result is always back to code.
There's some value in having abstractions be more visible. I don't think that, strictly speaking, something like Node-RED qualifies as 'no code'. But it made me understand that, given that so much of development nowadays consists of 'gluing' code together, there's potential for such tools, even if some of the boxes happen to allow you to write code.
At some point, all you care about is inputs and outputs. We are missing a better way to do these things.
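Node-RED's escape hatch is exactly one of those boxes: a Function node is just a JavaScript body that receives a `msg` object and returns it to the next wired node. A sketch of the glue that ends up inside one (wrapped as a plain function so it runs outside the editor; the sensor-payload shape is invented):

```javascript
// Sketch of a Node-RED Function node body. Inside Node-RED you'd paste only
// the body and the runtime would supply `msg`; here it's wrapped as a plain
// function. The payload shape (now, readings, tempC) is a made-up example.
function functionNode(msg) {
  // Glue the built-in nodes don't cover: drop stale sensor readings,
  // then convert the remaining temperatures from Celsius to Fahrenheit.
  const maxAgeMs = 60_000;
  const now = msg.payload.now;
  msg.payload = msg.payload.readings
    .filter((r) => now - r.timestamp <= maxAgeMs)
    .map((r) => ({ ...r, tempF: (r.tempC * 9) / 5 + 32 }));
  return msg; // Node-RED forwards the returned msg downstream
}
```

Once a flow accumulates enough of these, the "inputs and outputs" picture breaks down and you're reading code through a graph editor, which is the tipping point described above.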
I share your view that the limits of no-code have been proven time and time again. Just ask the countless devs who have added custom code to WordPress or Squarespace sites.
That said, maybe this time is (a bit) different. Machine learning is markedly more mature today and could take no-code farther than it has in the past.
However, I can personally vouch for two champions in no-code.
The big behemoth is Salesforce.com.
The incipient champion is Bubble.io.
Cloud infrastructure, API consumption and mobile usage are the paradigm-shifting variables that were not present 10 years ago.
The fact that cloud is getting (traditional) corporate green lights, and that every employee is using software to do regular 9-5 work, indicates that software creation is in demand at exponential scale.
> The end result is always back to code.
Sure. The same way the end result is always back to electrons flowing around. The abstraction layer that no-code provides, NOT TO YOU but to Michel the accountant, is very relevant and will (10 years from now) produce a new paradigm shift, as Salesforce pioneered the "no software" [1] SaaS 20 years ago.
And the paradigm shift is not "0 code". It's "0 code to get things rolling, then developing programming skills visually to later have a more productive conversation with coders".
My experience with Salesforce.com is that it's only a no-code champion if your needs are so simple that you might as well not be using Salesforce.com - i.e., a simple CRM with a few plugins. The real strength of Salesforce.com is that you can use code to customize it and get the real utility you're paying for out of it.
It's "no software" in the sense that all SaaS systems are "no software". Realistically, Salesforce.com is very much enterprise software of the sort that requires real customization in the form of real code to implement the important business requirements.
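To be concrete about what "real code" tends to mean here: customization beyond point-and-click usually goes through Apex or the Salesforce REST API. A hedged sketch of the read side (the query endpoint shape is real Salesforce REST; the instance URL is a placeholder, the API version is just one recent choice, and you'd obtain the bearer token via OAuth in practice):

```javascript
// Build the URL for Salesforce's REST query resource:
//   {instance}/services/data/{version}/query?q={SOQL}
// The instance domain below is a placeholder; v57.0 is an assumed version.
function soqlQueryUrl(instanceUrl, soql, apiVersion = "v57.0") {
  return `${instanceUrl}/services/data/${apiVersion}/query?q=${encodeURIComponent(soql)}`;
}

// Sketch of issuing the query; token acquisition (OAuth) is out of scope.
async function queryAccounts(instanceUrl, accessToken, fetchImpl = fetch) {
  const url = soqlQueryUrl(instanceUrl, "SELECT Id, Name FROM Account LIMIT 10");
  const res = await fetchImpl(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Salesforce query failed: ${res.status}`);
  return (await res.json()).records;
}
```

None of this is "no code"; it's ordinary integration programming against an enterprise API, which is the parent's point.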
My experience has been different. When I worked for a travel company, it adopted Salesforce. Shortly thereafter it had to hire a full-time Apex programmer to make Salesforce do all the things the Salesforce salespeople had promised it would do without a programmer.
Exactly this. I've never seen anyone set up Salesforce themselves. I'd even go so far as to suggest that Salesforce is wilfully opaque, purely to enable a commercial ecosystem of "certified" chancers and ne'er-do-wells whom you have to call every time anything goes wrong.
I'm trying to think of a historical analog: something that was always on the cusp of discovery and often hyped up, but either never realized or only realized after many, many delays. My partially educated mind thinks of two things: alchemy and the philosopher's stone, and flight.
My historical analogue is the telephone. Adoption and the number of switchboard operators were growing so fast that it was said that, at that rate, everyone would have to become an operator. Then direct dialing came along and replaced switchboards. If software continues to eat the world, no-code/low-code is a matter of when, not if.
These companies only need to move the needle a little bit to be wildly successful and usher in a new set of internal applications that end users can put together. Will it be children programming our internal enterprise apps? Maybe not. But there are a whole lot of folks who have the need and the desire to get some internal things built, and with these new tools (Retool especially), it's not that hard. Calling it no-code does these tools a disservice. It's really block-oriented programming with a variety of complex blocks.
"No code" as a term sounds like "serverless": There's a blatant contradiction in the terminology, but if you point it out, people begin to snicker up their sleeves at the person who reads the words as if they were English.
There's definitely a place for a la carte pricing for server-based software ("serverless" which of course includes servers, you silly person) and there's a place for very high-level code in domain-specific languages to solve immediate business problems ("no code" which of course includes writing code, you silly person) so just learn the shibboleth and try to avoid sounding silly.
It is different this time. I am a really good coder but I am using no code tools as much as I can. I only use code for new projects if I absolutely must.