If you try to translate "gelato" into English literally, you could say it means "frozen". "Dondurma" can likewise be translated into English as "frozen". I don't know why Americans call it "ice-cream".
When I see the term "ice-cream", I think of the cold, white, creamy stuff on a cone that you can buy from fast-food chains. That is different from what we call "dondurma" here in Turkey; it is much softer and creamier than "dondurma".
Turkish people probably learned of the dessert from Europeans. At least that's what Nisanyan says: https://www.nisanyansozluk.com/kelime/dondurma
> When I see the term "ice-cream", I think of the cold, white, creamy stuff on a cone that you can buy from fast-food chains. That is different from what we call "dondurma" here in Turkey; it is much softer and creamier than "dondurma".
What you get from fast food chains would be called "soft-serve ice cream" in the US, if you want to be explicit about it. (If you want a shorter term, then "ice cream" if you don't care about the distinction, but "soft-serve" if you do.) It is not the standard form of ice cream - ice cream stores don't sell it - but it is included within the term "ice cream".
Expecting jump tables to outperform the alternatives sounds definitely iffy. This also reads as if the author doesn't know that switch-case statements are compiled down to a jump table under the hood anyway when the case blocks are contiguous.
Regarding performance: if the CPU branch predictor works well, the jump-indirection overhead might disappear completely, but that's still not as good as when the compiler can inline the destination function, and jump tables usually prevent that.
(A switch-case dispatcher might actually be better than a traditional function-pointer jump table, because the switch-case eliminates some function entry/exit "ceremony"; see also "computed goto".)
Function pointer jumps definitely defeat most chances for a compiler to optimize a function call. But then, so does a switch statement, most of the time. (Not to mention that switch statements are often transformed into jump tables anyway.)
Humans tend to produce much better jump tables in assembler, because better choices about register usage etc. can be made, with less (or no) need to spill to the stack. One of the few remaining compiler weaknesses.
I'm also dubious about the claim that the switch statement is O(n). It might be in a pathological worst case, but you can pretty much bet the compiler is going to transform it into a jump table or some other optimized form (maybe a computed jump), especially when the cases are contiguous like this...
I agree that a benchmark is warranted, or at least a comparison of the generated assembly (at different optimization levels).
Not for jumps - modern CPUs still (usually) have a limit of one taken branch per cycle, or one to two untaken branches. And usually branch prediction will be the limiting factor anyway, for which a single branch is going to be faster than many.
It of course depends on the focal length of the lens, but with a tele lens in the 200-400mm region (and also depending on your sensor size), I would estimate that the moon crosses the image on the order of a minute. There is plenty of time to frame and focus the shot, but you have to readjust your camera every couple of shots.
This motion is just the Earth's rotation and is the same for the Moon, the Sun, and all the stars. The Moon's orbit around the Earth becomes noticeable only from day to day, as the Moon moves roughly 13 degrees/day. Tracking mounts have a separate speed setting, though, which takes the Moon's movement into account for greater precision.
The exposure time was 1/640 of a second[1]. I must say I'm surprised it was that fast, but you are fighting both blurriness from the motion and the quite intense light reflected by a full moon.
So I first focused manually, then aimed a bit ahead along its path and waited a few seconds. A remote trigger and a tripod are necessary to avoid any camera movement.
Ignoring the (rather slow) orbital motion of the Moon, consider: the sky (and the Moon) appears to rotate 360°/24 h. This amounts to 360/(24*3600) = 0.004167 °/s. The apparent size of the lunar disc is about 0.5°, so it takes about 120 s for the Moon to move by its own diameter in the sky. Plenty of time.
I believe it is their new rule, but it shows up only if you have been using Twitter for a while while logged out. Clear the data from your browser and it goes away, but it will eventually show up again.
I could be missing something, but I don't see any suggestion that you need a specific forked tree with patches to build with LLVM. I've also seen people filing bugs about using LLVM sanitizers to build the vanilla tree, so I don't think the expectation is that you need to apply a huge out-of-tree patchset for this to work any more.
AFAICT from the issues page, Clang and the binutils/LLVM tools work fine with no patches for the mainstream archs, as long as you're not trying to be super-fancy with custom flags. The more non-mainstream one goes with arch or flags, the more likely one is to run into something.
I do use 1000^n, but I agree that most people tend to use 1024^n. 1000^n kind of makes more sense, since "kilo", "mega", etc. are the actual SI prefixes for multiples of 1000. I don't know who or what caused this chaos, but 1000^n is definitely more human-friendly.
I feel the problem may be that, unlike just about every other unit in SI, bits are discrete, not continuous. Except in a few very specific subfields of theoretical CS, there's no concept of a fractional bit. You can have kilometers and millimeters; you can have kilobits and kilobytes but not millibits and millibytes.
The nature of bits is that of a base-2 system, so using powers of 10 to count them is only superficially human-friendly - in practice it's human-unfriendly, because it flies in the face of how bits are used. All hardware and all software group them by powers of 2; that's inherent to what bits are.
1000^n might be more human-friendly, but computers aren't decimal machines and a byte isn't 10 bits. 1024^n technically makes sense as a unit for binary machines that have 8 bits to a byte.
Everyone was happy with the 1024^n convention in the 80s. The problem was that HDD manufacturers got greedy and switched to 1000^n to make their drives sound like they had more storage. That's what started the confusion.
Counterpoint: does a 56k modem operate at 57344 bits per second or 56000 bits per second? Your claim implies the former, but I'm pretty sure it was always the latter - data-rate prefixes were decimal even back then.
> a byte isn't 10 bits
It could be. Historically, the number of bits per byte varied somewhat from machine to machine. Many standards used the term 'octets' to avoid ambiguity.
Historically, yes. But even as early as the 60s, 8-bit bytes were the norm. IIRC C then "standardised" 8 bits as a minimum (CHAR_BIT must be at least 8), though ASCII went some way to doing that prior to C.