Hacker News | belgesel's comments

If you try to translate "gelato" into English literally, you could say it means "frozen", and "dondurma" can also be translated as "frozen". I don't know why Americans call it "ice cream".

When I see the term "ice cream", I think of the cold, white, creamy thing on a cone that you can buy from fast food chains. That is different from what we call "dondurma" here in Turkey: the fast food kind is much softer and creamier than dondurma.

Turkish people probably learned of the dessert from Europeans. At least that's what Nisanyan says.

https://www.nisanyansozluk.com/kelime/dondurma


> When I see the term "ice cream", I think of the cold, white, creamy thing on a cone that you can buy from fast food chains. That is different from what we call "dondurma" here in Turkey: the fast food kind is much softer and creamier than dondurma.

What you get from fast food chains would be called "soft-serve ice cream" in the US, if you want to be explicit about it. (If you want a shorter term, then "ice cream" if you don't care about the distinction, but "soft-serve" if you do.) It is not the standard form of ice cream - ice cream stores don't sell it - but it is included within the term "ice cream".

https://en.wiktionary.org/wiki/soft_serve

Here's some promotional imagery showing the kind of thing an American would think of when prompted with "ice cream": https://www.baskinrobbins.com/content/dam/br/img/w72024_rele...

It's hard enough that people make birthday cakes out of it.


IIRC the main reason to use FreeBSD instead of Linux was that BSD was a lot simpler and had a lot more room to optimize for their specific use case.


Can you elaborate on that? I couldn't find the proposal and I want to learn more about it.


I was expecting some form of benchmark to see if the claims about better performance are true.


Expecting jump tables to have higher performance than the alternatives definitely sounds iffy; this also reads as if the author doesn't know that switch-case statements are compiled down to a jump table under the hood anyway if the case blocks are contiguous.

Regarding performance, if the CPU branch predictor works well the jump indirection overhead might disappear completely, but that's still not as good as if the compiler can inline the destination function, and jump tables usually prevent that.

(a switch-case dispatcher might actually be better than a traditional function pointer jump table, because the switch-case eliminates some function entry/exit "ceremony", also see "computed goto")


Function pointer jumps definitely break most chances for a compiler to optimize a function call. But then, so does a switch statement, most of the time. (Not to mention switch statements are often transformed into jump tables anyways.)

Humans tend to write much better jump tables in assembler, because better choices about register usage etc. can be made, with less (or no) need to spill to the stack. One of the few remaining compiler weaknesses.


I'm also dubious about the claim that the switch statement is O(n). It might be in a pathological worst case, but you can pretty much bet the compiler is going to transform it into a jump table or other optimized execution (maybe a computed jump). Especially when the cases are contiguous like this...

I agree that a benchmark is warranted, or at least a comparison of the generated assembly (at different optimization levels).


Also, O(n) doesn’t mean much when the CPU can execute hundreds of checks in a few tens of cycles.


Not for jumps - modern CPUs still (usually) have a limit of one taken branch per cycle, or 1 to 2 untaken branches. And usually branch prediction will be the limiting factor anyway, for which a single branch is going to be faster than many.


Even echo has it: $ echo -e "hel\x70"


Just out of curiosity, how much time do you have before taking the shot? (seconds or minutes)


It of course depends on the focal length of the lens, but with a tele lens in the 200-400mm region (also depending on your sensor size), I would estimate that the moon crosses the frame on the order of a minute. There is plenty of time to frame and focus the shot, but you have to readjust your camera every couple of shots. This motion is just the earth's rotation and is the same for the moon, the sun and all the stars. The moon's own orbital motion around the earth becomes noticeable only from day to day, as the moon moves roughly 13 degrees/day. Tracking mounts have a separate speed setting though, which takes the moon's movement into account for greater precision.


Exposure time was 1/640 of a second[1]. Must say I’m surprised it was that fast, but you fight both blurriness from the motion and the quite intense light reflected by a full moon.

So I first focused manually, then aimed a bit ahead along its path and waited a few seconds. A remote trigger and tripod are necessary to avoid any camera movement.

[1] f/9, ISO 400, -3 stops


Ignoring the (rather slow) orbital motion of the Moon, consider: the sky (and the Moon) appears to rotate 360°/24 h. This amounts to 360/(24*3600) = 0.004167 °/s. The apparent size of the lunar disc is about 0.5°, so it takes about 120 s for the Moon to move by its own diameter in the sky. Plenty of time.


I believe it is their new rule. But it shows up only if you have been using Twitter for a while logged out. Clear your browser data and it goes away, but it will eventually show up again.


Open it in an incognito tab to receive the preferential treatment new users get. The way to train data-greedy product managers is to starve them of data.


Isn't that the other way around? I thought clang was trying to support all gcc extensions in order to be able to build the Linux kernel.

I would rather see both of these compilers stay competitive and push each other higher.


Kind of both. They've added some gcc-specific extensions to clang, but also removed some gcc-isms from the Linux kernel.


clang already builds the Linux kernel in the Mountain View dungeons, and it has been doing that for a couple of years now.

Those changes just don't make it upstream.


That's...not true?

https://www.kernel.org/doc/html/latest/kbuild/llvm.html

I could be missing something, but I don't see any suggestion that you need a specific forked tree with patches to build with LLVM. I've also seen people filing bugs about using LLVM sanitizers to build the vanilla tree, so I don't think the expectation is that you need to apply a huge out-of-tree patchset for this to work any more.


There's a semi-official GitHub[0] for this.

AFAICT from the issues page, Clang and the binutils/LLVM tools work fine with no patches for the mainstream archs, as long as you're not trying to be super-fancy with custom flags. The more non-mainstream one goes with arch or flags, the more likely one is to run into something.

[0] https://github.com/ClangBuiltLinux/linux/issues (Note they use github for issues/wiki, not code, so no surprise the 'linux version' in code is oldish).


I follow Android, not Linux itself.

My latest information was that not all patches were accepted upstream, or Google didn't care about upstreaming them; whatever.

There are some Linux Plumbers and Linaro talks about this from a few years back.


For those who don't know, this is a follow-up to A Teenager's Guide to Avoiding Actual Work, which got a lot of attention on HN recently.

https://news.ycombinator.com/item?id=27206552


I do use 1000^n, but I agree that most people tend to use 1024^n. 1000^n kind of makes more sense, since "kilo", "mega", etc. are the actual SI prefixes for multiples of 1000. I don't know who or what caused this chaos, but 1000^n is definitely more human-friendly.


I feel the problem may be that, unlike just about every other unit in SI, bits are discrete, not continuous. Except in a few very specific subfields of theoretical CS, there's no concept of a fractional bit. You can have kilometers and millimeters, and you can have kilobits and kilobytes, but not millibits and millibytes.

The nature of bits is that of a base-2 system, so using power of 10s for counting them is only superficially human friendly - in practice it's human-unfriendly, because it flies in the face of how bits are used. All hardware and all software groups them by powers of 2, that's inherent to what bits are.


> 1000^n is definitely more human friendly.

It may be more human friendly, but 1024^n is more programmer friendly, especially at the low level.


Yes, powers of two match physical reality of binary computer architectures, while powers of ten in computing are a marketing concept.


1000^n might be more human-friendly, but computers aren't decimal machines and a byte isn't 10 bits. 1024^n technically makes sense as a unit for binary machines that have 8 bits to a byte.

Everyone was happy with the 1024^n convention in the 80s. The problem was that HDD manufacturers got greedy and switched to 1000^n to make their drives sound like they had more storage. That's what started the confusion.


RFC 1059 (NTP) was published in 1988 and refers to 56k modems. Does a 56k modem operate at 57344 bits per second or 56000 bits per second? Your claim implies the former, but I'm pretty sure it was always the latter.

> a byte isn’t 10bits

It could be. Historically, the number of bits per byte varied somewhat from machine to machine. Many standards used the term 'octets' to avoid ambiguity.


Historically, yes. But even as early as the 60s, 8 bits was the norm. IIRC C then "standardised" 8 bits (though ASCII went some way toward doing that prior to C).

