Hacker News | axelriet's comments

The timeline and facts were quite different. Debating an org-wide quality issue on a 100+ member team's alias is not whistleblowing.

The thing with cockroaches is that if even a single one is seen in the dining room and someone calls environmental health, they close the restaurant with immediate effect, regardless of its prestige, until it gets its act together and a food sanitation inspection clears it.

In the end, everyone feels better, in particular the customers.


Maybe they didn’t have sufficient visibility at the ground level to make proper decisions.

You have all the right words but some are in the wrong order.

Life works in mysterious ways. Whoever you are, bring it on and prove any of my points wrong.

It came from the top of Azure, and for Azure only. Specifically, the mandate applied to all new code that cannot use a GC: no more new C or C++.

I think the CTO was very public about that at RustConf and other places where he spoke.

The examples he gave were contrived, though, mostly tiny bits of old GDI code rewritten in Rust as success stories to justify his mandate. Not convincing at all.

Azure node software can be written in Rust, C, or C++; it really does not matter.

What matters is who writes it. It should be seen as “OS-level” code requiring the same focus as actual OS code, given its criticality, and therefore should probably be written by the Core OS folks themselves.


May I ask, what kind of training do new joiners of the kernel team (or any team that effectively writes kernel-level code) get? Especially if they haven't written kernel code professionally -- or do they ONLY hire people who have written a non-trivial amount of kernel code?

I have followed it from the outside, including talks at Rust Nation.

However, the reality on the ground you describe is quite different from e.g. the Rust Nation UK 2025 talks, or those given by Victor Ciura.

It seems more in line with the pushback against previous efforts like Singularity, Midori, the Phoenix compiler toolchain, Longhorn, ... only to be redone with WinRT and COM, in C++ naturally.


It’s org-dependent. On Windows, SAL and OACR are kings, plus whatever contraption MSR comes up with, which they run on checked-in code and which files bugs on you out of the blue :) Different standards.

Integration tests (I think we call them scenario tests in our circles) also tend to test only the happy paths. There are no guarantees that your edge cases, or anything unusual such as errors from other tiers, are covered. In fact, the scenario tests may mostly be testing the same things as the unit tests, just from a different angle. The only way to be sure everything is covered is through fault injection and/or single-stepping, but that's a lost art. Relying only on automated tests gives a false sense of security.


Search "VMS stability"; I think the consensus is clear.

Then Google "VMS longest uptime": the record is 28 years. VMS often achieved five nines (99.999%) over 10 years, so no irony.

He took a bunch of folks with him from DEC to Microsoft to make NT, and of course his principles.

Nowadays NT is bomb-proof, believe it or not.

Most of the crashes are in device drivers, and on rare occasions in the UI code (Win32k) that arguably should not be in the kernel, but the kernel itself is solid.

(Yes I am a big fan)


I remember reading "Showstoppers", where David was quoted as saying "If you break the build I'm the lawn mower and your ass is grass". Do you think such an attitude is mandatory for good kernel-level code?

(I actually think it is, and I have argued with people on HN about it, although I never wrote any professional kernel code myself)


VMS, yes. No doubts.

NT, no. Again, no doubts.

