Hacker News

So, compiled Javascript then? "We meet again, at last. The circle is now complete."

The more I see interpreted languages being compiled for speed purposes, and compiled languages being interpreted for ease-of-use purposes, desktop applications becoming subscription web applications (remember mainframe programs?), and then web applications becoming desktop applications (Electron), the more I realize that computing is closer to clothing fads than anything else. Can't wait to pick up some bellbottoms at my local Target.



You're not the only one to observe that computing tends to be fad-driven. I enjoy Alan Kay's take on it:

In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

...

I don’t spend time complaining about this stuff, because what happened in the last 20 years is quite normal, even though it was unfortunate. Once you have something that grows faster than education grows, you’re always going to get a pop culture.

...

But pop culture holds a disdain for history. Pop culture is all about identity and feeling like you're participating. It has nothing to do with cooperation, the past or the future — it's living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from] — and the Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.

sources:

http://queue.acm.org/detail.cfm?id=1039523

http://www.drdobbs.com/architecture-and-design/interview-wit...


I wish I'd heard/seen some of Alan Kay's talks/articles earlier in my career. The more I work in IT, the more I see the wisdom in how he sees the industry. (And I don't just mean these pop culture comments.)


I feel fortunate to have seen his talks and interviews early in my career.

The downside is that exposure to both of those led me to write some software in both Smalltalk and Common Lisp...which is a downside because having worked with those makes some currently popular tools seem like dirty hacks.

Although I use and enjoy things like React, Webpack, and npm, and I'm very happy with the things I'm able to create with them, when I stop and think about it, the tooling and building/debugging workflow feels pretty disjointed compared to what was available in Smalltalk and CL environments 30 years ago.

I understand why those tools didn't win over the hearts and minds of developers at the time. They were expensive and not that easily accessible to the average developer. I just wish that as an industry, we'd at least have taken the time to understand what they did well and incorporate more of those lessons into our modern tools.


You seem to assume that there is One True Way of doing things and we're just circling round trying to find it. In fact, as you point out, there are benefits and tradeoffs to every approach. People compile interpreted languages to make them run faster, but are using them in the first place because they're easier to use. People who have come to compiled languages for speed are now naturally looking to make them easier to develop with as well.

Compiled C is not going to go away any sooner than interpreted Javascript - but a widening of the options available is great as it allows us to focus on developing things quickly and correctly, rather than making decisions based on which particular annoyance you're happy to put up with.


People try to move CS both towards and away from engineering: We want the dependability engineering seems to provide in, say, Civil Engineering, but we don't want to acknowledge that engineering is about trade-offs, which are compromises, and that the nature of those trade-offs changes as the world surrounding a specific tool or program changes.

Maybe they think there is a One True Way. Maybe they think every building needs to be reinforced concrete and structural steel, now and forever, in every possible context.


This isn't driven by a "fad", rather by the simple fact that any trace of JS engine performance will show a sizable chunk at the beginning dedicated to just parsing scripts. Heck, we even have "lazy parsing", where we put off doing as much parsing work as possible until a piece of code is needed. Replacing that with a quick binary deserialization pass is a straightforward win.

I wouldn't call this "compiled JavaScript". In broad strokes what's happening is you take the result of parsing JavaScript source to an AST and serialize that; then next time you deserialize the AST and can skip the parsing stage.

(Source: I spent a summer working on JS engine and JIT performance at Mozilla.)
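By way of analogy only (this is a Python sketch, not the actual BinAST format), the parse-once/serialize/skip-parsing idea can be demonstrated with Python's `ast` module: parse the text a single time, serialize the tree to a binary blob, and later compile the deserialized tree without ever re-reading the source text.

```python
import ast
import pickle

source = "sum(i * i for i in range(10))"

# One-time cost: lexing + parsing the text into an AST.
tree = ast.parse(source, mode="eval")

# Serialize the AST to a binary blob (a stand-in for a binary AST format).
blob = pickle.dumps(tree)

# Later: deserialize and compile, skipping the text-parsing stage entirely.
restored = pickle.loads(blob)
code = compile(restored, "<ast>", "eval")
print(eval(code))  # → 285
```

The real proposal uses a purpose-built binary encoding rather than pickle, but the shape of the win is the same: the expensive text-to-tree step happens once, ahead of time.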


I don't think the point was that this project is a fad.

Instead, I think the point was that Javascript is a fad (we'll see whether this is true by watching how popular compile-to-WASM languages become compared to JS, once WASM becomes widespread and stable).

Alternatively, we might say that JS was created (in 10 days, yadda yadda) at a time when the fads were dynamic typing over static typing; interpreters over compilers; GC over manual or static memory management; OOP over, say, module systems; imperative over logic/relational/rewriting/etc.; and so on. JS's role as the Web language has tied developers' hands w.r.t. these tradeoffs for 20 years; long enough that a significant chunk of the developer population hasn't experienced anything else, and may not really be aware of some of these tradeoffs; some devs may have more varied experience, but have developed Stockholm Syndrome to combat their nostalgia ;)

As an example, one of the benefits of an interpreter is that it can run human-readable code; this is a sensible choice for JS since it fits nicely with the "View Source" nature of the Web, but it comes at a cost of code size and startup latency. The invention of JS obfuscation and minifiers shows us that many devs would prefer to pick a different balance between readability and code size than Eich made in the 90s. This project brings the same option w.r.t. startup latency. WASM opens up a whole lot more of these tradeoffs to developers.


Actually, I think it was created at a time when it was used very sparingly and had very limited scope. The joke used to be that the average JavaScript program was one line. I don't know if that was ever exactly true, but a lot of early JavaScript lived in inline event attributes (think "onclick" and the like).

For that use, a forgiving scripting language is a good fit. What changed is what JavaScript, and the web, is used for. Embedded Java and Flash showed that there was an appetite for the web to do more, and the security problems involved with those technologies showed they weren't a good fit.

Javascript was adapted to fill that void by default as it was the only language that every browser could agree on using.


parsed (+compressed), not compiled.

Parsing is unavoidable regardless of whether the code is interpreted or compiled. This proposal seems to further compress the AST somehow.

Basically, the idea is to preprocess the text source ahead of time, so that the client can skip two steps at execution: lexical analysis, and parsing the text into the AST. The post explains why the authors believe this is a good idea.
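As a rough illustration in Python terms (the proposal targets JavaScript, of course), these are the two stages a client would get to skip:

```python
import ast
import io
import tokenize

source = "x = 1 + 2\n"

# Step 1: lexical analysis. The raw text becomes a token stream.
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
print([tok.string for tok in tokens if tok.string.strip()])
# → ['x', '=', '1', '+', '2']

# Step 2: parsing. The token stream becomes an AST.
tree = ast.parse(source)
print(ast.dump(tree.body[0]))
```

Shipping a pre-built, compactly encoded tree means the engine deserializes instead of doing either step on the client.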


Software development is fad-driven; I think most people would agree with this. I myself find it lamentable. But over the years I've asked myself: why is this the case? It has been faddish for years; this isn't new in the current era.

I think there are two main drivers for fad-ism in software. One is that we as developers are required to chase the popular languages and frameworks to stay current, marketable, and employable. So there is always a sense of "what do I have to learn next, so I can switch jobs successfully when I need to". Another driver is that developers seek to gain experience by re-doing things from the past. For example, creating a new language, or a compiler for an existing language, provides inexperienced developers with fantastic new experience and knowledge. This is desirable.

So yeah, we got fads, and us old timers, or not so old timers with a historical perspective, watch our lawns, but everyone has to learn somehow. Ideally a lot of this would come from an apprenticeship/journeyman system, or a University degree of some sort. But for now it's all we have.


To me, software development seems as tribal as politics. People tend to like and agree with those in their own tribe, and tend to see criticism of their tribe's preferred ideology as an attack on their identity.

There's a lot of pressure to be part of a popular tribe, and so you see situations where even a mild criticism of, for example, a piece of the JavaScript tribe's technology (like, say, npm or Webpack) tends to elicit vitriolic responses and counter-arguments.

You can sometimes get away with this kind of criticism if you choose your words carefully and make it very clear (or at least pretend) that you're actually a member of the tribe you're criticizing, and that your criticism is meant to be constructive. So you say something like 'I use npm, and I like it, but here's where I struggle with it and here's how I'd improve it'. But if you just write about why you dislike it, you're likely to be flamed and criticized, even if everything you wrote is verifiably correct.

So I find that watching programmers argue in blogs and on HN and Reddit feels a lot like reading political arguments on Reddit, or Facebook. And maybe that's why programming tends to be faddish. Each tribe is mostly only aware of its own history; there's no real shared sense of culture, or much awareness of the rich shared history that got us to where we are today. And so you see lots of wheel reinvention, as people poorly re-solve problems that were already solved long ago by other programming tribes.

These are just my personal observations, though. I could very well be completely wrong. :)


Well, it's more "compressed JavaScript" than "compiled JavaScript".


Something we badly need; I greatly appreciate the effort!


It's not. It's a binary representation of JS. Basically directly competing with WebAssembly.

What happens a year from now when WebAssembly gets support for GC and modules?


Please don't tell me what the project is, I'm the tech lead :)


So why do you dismiss any and all arguments against it? Repeatedly.

You and your supporters consistently spread false information.

- WebAssembly does not support JS hence we need binary AST

WebAssembly is on track to support dynamic and GC-ed languages

- WebAssembly will be slower than binary AST

You've provided zero support for this statement. Meanwhile experiments with real apps (like Figma) show significant improvements with WebAssembly

- No one wants to ship bytecode

False

- it's hard to align browsers on bytecode

See WebAssembly

- you can't change bytecode once it's shipped

You can, eventually. Same argument can be applied to binary AST

- WebAssembly not supported by tools

Neither is binary AST. Meanwhile one of the goals of WebAssembly is tool support, readability etc.

There were other claims just as easily falsified, and yet dismissed out of hand.

So. What are you going to do with JS AST when WebAssembly gets support for JS?


>WebAssembly is on track to support dynamic and GC-ed languages

It is, but there are several roadmap items that would need to be ticked off for it to run well. Aside from GC, you would probably need:

- Direct DOM access. The current way of interacting with the DOM from WASM is, at best, 10x slower.

- JIT underpinnings, like a polymorphic inline cache[1]

[1]https://github.com/WebAssembly/design/blob/master/FutureFeat...

It's not clear how far off those things might be.


WebAssembly is pretty much a greenfield approach: you need to learn C, redo your whole codebase, and still write JS code to load your wasm, interact with the DOM and browser APIs, and leverage existing, good-enough JS libraries. This is the state of things as of 2017. It took 4 years to reach that point. If you want to bet everything on the state of wasm 2-3 years from now, please be my guest. Meanwhile, I have a business to run.

The binary AST is a quick win for brownfield technology, with a working version already in an experimental Firefox build. It can be rolled out quite fast. It improves, right now, the experience of many users on existing codebases, the same way minification did. And it improves the experience of current WebAssembly code, which is still largely dependent on the speed of JS. I'll take this feature any day.


Let me clarify things one last time.

You are obviously very passionate and I believe that nothing I can write will convince you, so I will not pursue this conversation with you after this post.

For the same reason, my answers here are for people who have not followed the original HN thread in which we have already had most of this conversation.

I also believe that the best way for you to demonstrate that another approach is better than the JavaScript Binary AST would be for you to work on demonstrating your approach, rather than criticizing this ongoing work.

> - WebAssembly does not support JS hence we need binary AST

> WebAssembly is on track to support dynamic and GC-ed languages

If/when WebAssembly gains the ability to interact with non-trivial JS objects, JS libraries, etc. we will be able to compare the WebAssembly approach with the Binary AST approach. GC support and dynamic dispatch are prerequisites but are not nearly sufficient to allow interaction with non-trivial JS objects and libraries.

So let's meet again and rediscuss this if/when this happens.

> - WebAssembly will be slower than binary AST

> You've provided zero support for this statement. Meanwhile experiments with real apps (like Figma) show significant improvements with WebAssembly

If you recall, we are only talking about loading speed. I believe that there is no disagreement on execution speed: once WebAssembly implementations are sufficiently optimized, it is very likely that well-tuned WebAssembly code will almost always beat well-tuned JS code on execution.

The argument exposed on the blog is that:

- attempting to compile JavaScript to existing WebAssembly is very hard (hard enough that nobody does it to the best of my knowledge);

- the specifications of JavaScript are complex enough that every single object access, every single array access, every single operator, etc. is actually a very complex operation, which often translates to hundreds of low-level opcodes. Consequently, should such a compilation take place, I suspect that this would considerably increase the size of the file and the parsing duration;

- by opposition, we have actual (preliminary) numbers showing that compressing to JavaScript Binary AST improves both file size and parse time.
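To illustrate the opcode-expansion point by analogy (a Python sketch, not from the post): even a trivial high-level expression lowers to a sequence of stack-machine opcodes, and JavaScript's semantics for property access (prototype chains, getters, proxies) involve considerably more machinery than this.

```python
import dis

def f(obj):
    # A single attribute access plus an addition...
    return obj.x + 1

# ...already compiles to a sequence of stack-machine opcodes.
dis.dis(f)
```

Compiling full JS semantics down to WebAssembly means every such operation must be spelled out in low-level instructions, which is where the file-size and parse-time concern comes from.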

While I may of course be wrong, the only way to be sure would be to actually develop such a compiler. I have no intention of doing so, as I am already working on what I feel is a more realistic solution, but if you wish to do so, or if you wish to point me to a project already doing so, I would be interested.

You refer to the experiments by Figma. While Figma has very encouraging numbers, I seem to recall that Figma measured the speedup of switching from asm.js to WebAssembly. In other words, they were measuring speedups for starting native code, rather than JS code. Also a valid experiment, but definitely not the same target.

> - No one wants to ship bytecode

> False

Are we talking about the same blog entry?

What I claimed is that if we ended up with one-bytecode-per-browser or worse, one-bytecode-per-browser-version, the web would be much worse than it is. I stand by that claim.

> - it's hard to align browsers on bytecode

> See WebAssembly

I wrote the following things:

- as far as I know, browser vendors are not working together on standardizing a bytecode for the JavaScript language;

- coming up with a bytecode for a language that keeps changing is really hard;

- keeping the language-as-bytecode and the language-as-interpreted in sync is really hard;

- because of the last two points, I do not think that anybody will start working on standardizing a bytecode for the JavaScript language.

In case of ambiguity, let me mention that I am part of these "browser vendors". I can, of course, be wrong, but once again, let's reconvene if/when this happens.

> - you can't change bytecode once it's shipped

> You can, eventually. Same argument can be applied to binary AST

I claimed that no browser vendor seriously contemplates exposing their internal bytecode format because this would make the maintenance of their VM a disaster.

Indeed, there is an order of magnitude of difference between the difficulty of maintaining several bytecode-level interpreters that need to interact together in the same VM (hard) and maintaining several parsers for different syntaxes of the same language (much easier). If you know of examples of the former, I'd be interested in hearing about them. The latter, on the other hand, is pretty common.

Additionally, the JavaScript Binary AST is designed in such a manner that evolutions of the JavaScript language should not break existing parsers. I will post more about this in a future entry, so please bear with me until I have time to write it down.

> - WebAssembly not supported by tools

> Neither is binary AST. Meanwhile one of the goals of WebAssembly is tool support, readability etc.

I'm pretty sure I never said that.

> So. What are you going to do with JS AST when WebAssembly gets support for JS?

If/when that day arrives, I'll compare both approaches.


I may be mistaken, but I believe Adobe had to ship an Actionscript 2 runtime forever because they completely replaced it when they introduced AS3. They needed to keep the flash player backwards compatible, so the player had to have both runtimes. While it worked, it was a kludge.


> in which we have already had most of this conversation.

Interestingly enough, nowhere do you even mention these conversations in your attempts to push binary AST as hard as possible.

> Consequently, should such a compilation take place, I suspect that this would considerably increase the size of the file and the parsing duration;

Emphasis above is mine. However, it's presented (or was presented) by you as a fact.

> What I claimed is that if we ended up with one-bytecode-per-browser or worse, one-bytecode-per-browser-version, the web would be much worse than it is. I stand by that claim.

We are ending up with one wasm bytecode for every browser, aren't we?

> as far as I know, browser vendors are not working together on standardizing a bytecode for the JavaScript language;

Because they are working together on standardizing bytecode for the web in general, aren't they?

> coming up with a bytecode for a language that keeps changing is really hard

So you're trying to come up with a binary AST for a language that keeps changing :-\

> keeping the language-as-bytecode and the language-as-interpreted in sync is really hard

What's the point of WebAssembly then?

> I do not think that anybody will start working on standardizing a bytecode for the JavaScript language.

Because that's not really required, is it? This is a non-goal, and never was the goal. The goal is to create an interoperable standardized bytecode (akin to JVM's bytecode or .Net's MSIL), not a "standardized bytecode for Javascript". For some reason you don't even want to mention this.

> I claimed that no browser vendor seriously contemplates exposing their internal bytecode format

They don't need to.

> Additionally, the JavaScript Binary AST is designed in such a manner that evolutions of the JavaScript language should not break existing parsers.

I shudder to think how rapidly changing languages like Scala, ClojureScript etc. can ever survive on the JVM. The horror!

> If/when that day arrives, I'll compare both approaches.

Basically you will end up with two representations of JS: a binary AST and a version that compiles to WASM. Oh joy. Wasn't this something you wanted to avoid?


Regardless of all your points, the fact is that WASM isn't ready for this now, and doesn't appear that it will be for some time.

Combined with the fact, as mentioned above, that DOM access is significantly slower means that WASM isn't a suitable candidate. This is something you forgot to mention or take into account in your comment, for some reason.

Yes, this does seem to overlap a bit with wasm, as you have noted, but saying "well, we could get this optimization, but we need to wait several years until this highly complex other spec is completely finished" doesn't seem as good.

Why not do this, and use wasm when it's available? Why can't you have both?


> Regardless of all your points, the fact is that WASM isn't ready for this now

As opposed to Binary AST, which is available now? :)

> Combined with the fact, as mentioned above, that DOM access is significantly slower

Given that DOM access for WASM currently happens via weird interop through JavaScript (if I'm not mistaken) how is this a fact?

> "well we could get this optimization but we need to wait several years until this highy complex other spec is completely finished" doesn't seem as good.

No. My point is: "Let's for once make a thing that is properly implemented, and not do a right here right now short-sighted short-range solution"

That's basically how TC39 committee operates these days: if something is too difficult to spec/design properly, all work is stopped and half-baked solutions are thrown in "because we need something right now".

> Why not do this, and use wasm when it's available? Why can't you have both?

Because this means:

- spreading the resources too thin. There is a limited amount of people who can do this work

- doing much of the work twice

- JS VM implementors will have to support text-based parsing (for older browsers), new binary AST and WASM


> how is this a fact?

... because DOM access is significantly (10x) slower? That alone rules out your approach right now, regardless of the other points raised.

> Because this means: ...

When you understand how weak these arguments are you will understand the negative reaction to your comments on this issue.

You're clearly very passionate about this issue, but you don't seem to be assessing the trade-offs. Having something right now and iterating on it is better than waiting an indeterminate amount of time for a possible future solution involving an overlapping but otherwise unrelated specification that has not been fully implemented by anyone to a satisfactory point, and one with very complex technical issues blocking its suitability.

Sure, it would be nice to use WASM for this, but it is in no way needed at all. Given the status of WASM and the technical issues present with using it in this way it is odd to champion it to such a degree.

It seems your entire arguments boil down to "WASM may be OK to use at some point in the future, stop working on this and wait!". I, and I'm assuming others, don't see this as a very convincing point.

If I may, I'd offer some advice: stop taking this issue to heart. Time will tell if you're right, and making borderline insulting comments to the technical lead of the project in an attempt to push your position doesn't help anyone.

The world is heating up and species are dying, there are much better causes to take to heart.


How can you say it's "compressed" over "compiled" when you are actually parsing it into an AST and then (iiuc) converting that to binary? That's exactly what compilers do. You are in fact going to a new source format (whatever syntax/semantics your binary AST is encoded with) so you really are compiling.

To be fair, these two concepts are similar and I may be totally misunderstanding what this project is about. In the spirit of fairness, let me test my understanding. You are saying wasm bytecode is one step too early and a true "machine code" format would be better able to improve performance (especially startup time). I'm not following wasm development, but from comments here I am gathering that wasm is too low-level and you want something that works on V8. Is that what this project is about?

On a side note, it's truly a testament to human nature that the minute we get close to standardizing on something (wasm), someone's gotta step up with another approach.


> How can you say it's "compressed" over "compiled" when you are actually parsing it into an AST and then (iiuc) converting that to binary? That's exactly what compilers do. You are in fact going to a new source format (whatever syntax/semantics your binary AST is encoded with) so you really are compiling.

I am not sure but there may be a misunderstanding on the word "binary". While the word "binary" is often used to mean "native", this is not the case here. Here, "binary" simply means "not text", just as for instance images or zipped files are binary.

A compiler typically goes from a high-level language to a lower-level language, losing data. I prefer calling this a compression mechanism, insofar as you can decompress without loss (well, minus layout and possibly comments). Think of it as the PNG of JS: yes, you need to read the source code/image before you can compress it, but the output is still the same source code/image, just in a different format.
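The "compression, not compilation" distinction can be sketched with any lossless codec (a Python/zlib analogy, not the actual BinAST codec): the round trip returns the exact source, which typical compiler output cannot do.

```python
import zlib

source = "function add(a, b) { return a + b; }"

# Compress the source to a binary blob...
compressed = zlib.compress(source.encode("utf-8"))

# ...and recover it byte-for-byte: nothing is lost.
assert zlib.decompress(compressed).decode("utf-8") == source
```

BinAST additionally normalizes layout and comments, as noted above, but the principle is the same: a different encoding of the same program, not a lowering to another language.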

> You are saying wasm bytecode is one step too early and a true "machine code" format would be better able to improve performance (especially startup time). I'm not following wasm development, but from comments here I am gathering that wasm is too level and you want something that works on V8. Is that what this project is about?

No native code involved in this proposal. Wasm is about native code. JS BinAST is about compressing your everyday JS code. As someone pointed out in a comment, this could happen transparently, as a module of your HTTP server.

> On a side note, it's truly a testament to human nature that the minute we get close to standardizing on something (wasm), someone's gotta step up with another approach.

Well, we're trying to solve a different problem :)


Posting this in the hope that it might help some people grok what they are actually doing:

When I first discovered what Yoric and syg were doing, the first thing that I thought of was old-school Visual Basic. IIRC when you saved your source code from the VB IDE, the saved file was not text: it was a binary AST.

When you reopened the file in the VB6 IDE, the code was restored to text exactly the way that you had originally written it.


Interesting. Do you know of any technical documentation on the topic?


The parent might have been thinking of QuickBasic, which saved programs as byte-code, along with formatting information to turn it back into text: http://www.qb64.net/wiki/index.php/Tokenized_Code

VB6 projects were actually a textual format - even the widget layout: https://msdn.microsoft.com/en-us/library/aa241723(v=vs.60).a...

As someone who runs a WYSIWYG app builder startup (https://anvil.works), I can attest that this is a Really Good Idea for debugging your IDE.


BBC BASIC did this too: keywords were stored as single bytes (using the 128-255 values unused by ASCII). Apart from that, the program code wasn't compiled; it was just interpreted directly. Very smart design when everything had to fit in 32KB of RAM.
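A toy version of that keyword-byte scheme (with hypothetical token values, not BBC BASIC's actual table) might look like:

```python
# Map keywords to single bytes in the 128-255 range unused by ASCII.
KEYWORDS = {"PRINT": 0xF1, "FOR": 0xE3, "NEXT": 0xED}
REVERSE = {v: k for k, v in KEYWORDS.items()}

def tokenize_line(line: str) -> bytes:
    """Replace each keyword with its one-byte token; keep the rest as ASCII."""
    out = bytearray()
    for word in line.split(" "):
        if word in KEYWORDS:
            out.append(KEYWORDS[word])
        else:
            out.extend(word.encode("ascii"))
        out.append(ord(" "))
    return bytes(out[:-1])  # drop the trailing space

def detokenize_line(data: bytes) -> str:
    """Expand token bytes back into keyword text."""
    return "".join(REVERSE.get(b, chr(b)) for b in data)

line = "PRINT X"
tok = tokenize_line(line)
assert len(tok) < len(line)          # the keyword shrank to one byte
assert detokenize_line(tok) == line  # the round trip restores the text
```

Like the binary AST, the stored form is smaller and cheaper to process, yet the original text can be reconstructed on demand.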


> Here, "binary" simply means "not text", just as for instance images or zipped files are binary.

If it's not text, then what is it? I'm not sure "not text" is a good definition of the word "binary".

> A compiler typically goes from a high-level language to a lower-level language, losing data.

I don't agree, I don't think there is any loss in data, the compiled-to representation should cover everything you wanted to do (I suppose not counting tree-shaking or comment removal).

> I prefer calling this a compression mechanism, insofar as you can decompress without loss (well, minus layout and possibly comments).

Ahh, so you mean without losing the original textual representation of the source file.

> Wasm is about native code

Here you are making claims about their project that are just not the whole picture. Here's the one-line vision from their homepage[1]:

> WebAssembly or wasm is a new portable, size- and load-time-efficient format suitable for compilation to the web.

With that description in mind, how do you see BinAST as different?

> Well, we're trying to solve a different problem :)

I think you might be misunderstanding what wasm is intended for. Here's a blurb from the wasm docs that is pertinent:

> A JavaScript API is provided which allows JavaScript to compile WebAssembly modules, perform limited reflection on compiled modules, store and retrieve compiled modules from offline storage, instantiate compiled modules with JavaScript imports, call the exported functions of instantiated modules, alias the exported memory of instantiated modules, etc.

The main difference I can gather is that you are intending BinAST to allow better reflection on compiled modules than wasm intends to support.

Here's another excerpt from their docs (and others have mentioned this elsewhere):

> Once GC is supported, WebAssembly code would be able to reference and access JavaScript, DOM, and general WebIDL-defined objects.

[1]: http://webassembly.org/

[META: Wow, I thought downvotes were for negative or offtopic comments]


Ok I am understanding the distinction now.

I ran a google search for "js to wasm" and found a ticket on the webassembly github that explained it all: https://github.com/WebAssembly/design/issues/219


Thanks for the link, it will certainly prove useful in the future.


Text means a sequence of characters conforming to some character encoding. Yoric's binary AST is not a sequence of characters conforming to a character encoding.

Compilation maps a program in a higher level language to a program in a lower level language. The map is not required to be one-to-one: the colloquial term for this is "lossy."


To quote the original post:

> If you prefer, this Binary AST representation is a form of source compression, designed specifically for JavaScript, and optimized to improve parsing speed.


Fads, or steadfast pursuit of balancing tradeoffs and finding clever ways to improve tools?

Compilation and interpretation both have distinct advantages. It's hard to do both (read: it took a while) and each has tradeoffs.

I'm impressed and grateful that I can get compile time errors for interpreted languages, and I'd feel the same about being able to use a compiled lang in a repl.

Regarding the actual topic at hand, this isn't compiled JavaScript, just parsed JavaScript. They're shipping an AST, not bytecode/machine code.


Haha, I did enjoy this comment. It's kinda fair, but also kinda not.

At each step developers are trying to create the best thing they can with the tools at their disposal. There will always be a tradeoff between flexibility and ease of use. There's a reason Electron exists — the browser has limitations that are easier to work around in Electron. There's a reason we moved to putting stuff online, we could write it once and distribute it everywhere.


> At each step developers are trying to create the best thing they can with the tools at their disposal.

Well, yeah. The point is that we don't have a consistent definition of "best" that's constant over time. Only by changing your direction all the time can you end up walking in circles.


I think everything will eventually settle in the middle where Java and C# currently live.

They have an "intermediate code" that they compile down to. But the IL is really a more compact version of the source than a true binary. You can see this easily by running a decompiler: often the decompiled output is almost identical to the original source. And before the JIT kicks in, the IL is typically interpreted.

You get the best of both worlds: easy to compile, easy to decompile, platform agnostic, compact code, and fast if it needs to be.


The Design of Everyday Things is instructive here. You don't design for the middle or the average; you design for the extreme edges. If you have a programming language that can be used both by absolute beginners who just want to learn enough programming to script their Excel spreadsheets AND by HFT traders who need to wring out every cycle of performance they can on every core they have at their disposal, then the middle will take care of itself.


In .NET's case the IL is never interpreted; it is only a portable executable format.

It is either AOT-compiled to native code via NGEN, Mono AOT, MDIL, or .NET Native.

Or JITed on load, before the code actually executes.

The only .NET toolchain from Microsoft that actually interprets IL is the .NET Micro Framework.

The idea of using bytecode as a portable execution format goes back to mainframes, of which IBM AS/400's TIMI is the best-known example.


Didn't they use to have an IL interpreter, back in the day? Or was it already a compiler masquerading as an interpreter?


No, .NET never had an interpreter phase like the JVM.

Which actually can be configured to always JIT as well.

The majority of JVMs have a non-standard flag that lets you configure how the interpreter, the JIT, and (if supported) the AOT compiler work together.


This is interesting to know, thanks.


It's not optimal for things like embedded, device drivers, high performance computing, etc. So I doubt everything will settle there. There will always be situations where C/Rust/Fortran are better choices.


Well, the complete circle has one major benefit: flexibility.

You develop one app, and it can run as both a web and a desktop application, or even on mobile with some extra effort.

Say you want to quickly edit something like a document: you don't need to install Word, just open a browser and load Google Docs or Office 365.

And if you find yourself constantly using one web app or website (Travis CI, Trello), you can wrap it with nativefier [1] and use it as a desktop app.

[1] https://github.com/jiahaog/nativefier


Part of this is fad driven, but part of this is also driven by other human concerns. For example, had the early web been binary (like we're pushing for now) instead of plain text it would have died in its crib. Executing binary code sent blindly from a remote server without a sandbox is a security nightmare. Now that we have robust sandboxes, remote binary execution becomes a viable option again. But, it took a decade's worth of JVM R&D to get to this point.


There is a serious problem when every new generation of a technology fixates on some problem of its choice and ignores all the others. The issue of binary blobs didn't go away. What changed is that a lot of developers today don't care about the open nature of the web and are perfectly fine with sacrificing it for faster load times of their JS-saturated websites.

I think a much better approach to this problem would be a compression format designed specifically for JavaScript. It could still be negotiable, like GZIP, so anyone wanting to see the source would still be able to. It could also be designed in a way that allows direct parsing without (full?) decompression.
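The "negotiable" property is the key point: the wire format can change without the source becoming inaccessible. A minimal sketch, using stdlib gzip as a stand-in for a hypothetical JS-specific format:

```python
import gzip

# JavaScript source as it would leave the server.
source = b"function add(a, b) { return a + b; }"

# A negotiated encoding (plain gzip here, standing in for a
# hypothetical JS-aware format) shrinks the payload for transport...
wire = gzip.compress(source)

# ...but anyone who wants the source can still recover it exactly,
# unlike an opaque compiled blob.
print(gzip.decompress(wire) == source)  # True
```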


So, just to clarify: this is what the JavaScript Binary AST is aiming for.

We are not working on the negotiation part for the moment, because that's not part of the JS VM, but this will happen eventually.


That's good to hear. Apparently I skipped all the important parts when skimming the article. Appreciate the clarification here.


> What changed is that a lot of developers today don't care about the open nature of the web and are perfectly fine with sacrificing it for faster load times of their JS-saturated websites.

I'm one of those developers who couldn't care less about the 'open nature of the web'. I understood the necessity of it to the early, emerging internet, but times have changed. Now that we have secure sandboxes and the web is choking on its own bloat, it's time to shift back to binary blobs.


How does adding more code to the browser stack prevent the web from "choking on its own bloat"? You need to take things away to reduce bloat, not add them.


But there have been Java applets, ActiveX, Flash, and Silverlight.


And they all had crippling security problems[1]. I only mentioned the JVM, but you're right: all of those technologies contributed to the development of secure sandboxes.

[1]: Except maybe Silverlight. From what I remember, it didn't see enough success to be a tempting target for hackers.


The JIT/AOT approach was already there in Xerox PARC systems in the 70's, oh well.


Imagine if Xerox PARC had happened in the early 90s and we had ended up with a Smalltalk web browser environment.


That is what, in my ideal world, ChromeOS should have been, but with Dart instead. Unfortunately, the ChromeOS team had other plans in mind and made it into a Chrome-juggling OS.

I was using Smalltalk on some university projects, before Sun decided to rename Oak and announce Java to the world.

The development experience was quite good.

Similarly with Native Oberon, which captured many of the Mesa/Cedar workflows.


> That is what on my ideal world ChromeOS should have been, but with Dart instead,

That would have been very interesting. I liked Dart from the brief time I looked at it. The web still feels like a somewhat crippled platform to develop for.


> The web still feels like a somewhat crippled platform to develop for.

Which is why nowadays I always favour native development when given the option, in spite of having been an enthusiastic Web 1.0 developer.

