Games of tag where you are “out” when hit, optionally with a mechanism for being revived, are a staple for young kids around here. Shooting video games seem like a logical extension of that into the virtual domain: ranged “tag”, essentially.
Besides shooters there are many puzzle games as well.
I can run my N100 NUC at 4W wall-socket power draw at idle. With turbo boost off it stays near that under normal load, reaching 6W at full load, but then it is also terribly slow. With turbo boost enabled, power draw can go to 8-10W at full load.
Not sure how this compares to the OrangePi in terms of performance per watt, but it is already pretty far into marginal-gains territory for me, given the cost of dealing with ARM, a custom housing, adapters to keep the wall-socket draw efficient, etc. An efficient pico PSU to power a Pi or OrangePi is not cheap either.
Boost enabled.
WiFi disabled.
No changes to P-states or anything else in the BIOS.
Fedora.
Applied all suggestions from powertop.
I don’t recall changing anything else.
Not the poster you're replying to, but I run an Acer laptop with an N305 CPU as a Plex server. Idle power draw with the lid closed is 4-5W and I keep the battery capped at 80% charge.
The N100/150/200/etc. can be clocked to use less power at idle (and capped for better thermals, especially in smaller or power-constrained devices).
A lot of the cheaper mini PCs seem to let the chip go wild and don't implement sleep/low-power states correctly, which is why the range is so wide. I've seen N100 boards idle at 6W, and others idle at 10-12W.
Git LFS is a gross hack that results in pain and suffering. Effectively all games use Perforce because Git and Git LFS suck too much. It’s a necessary evil.
Yes. And the license only allows you to run macOS guests on macOS hosts. So using ESXi means you don’t have any license for whatever macOS guests you run.
You are confusing macOS guests on KVM (Linux) with macOS guests on ESXi, which is a real enterprise product and officially lets you run as many macOS VMs as your hardware supports.
I’m really flummoxed at why MacBooks continue to be spicy. When using the laptop with a charger with the grounded cable on the socket side, there used to be no spice. Now that the adapters are mostly used with two-prong connectors, the spiciness is ubiquitous.
I recall audio equipment also not being grounded because the industry prefers floating over being accidentally connected to two different grounds, causing voltage transients. Maybe the same reasoning now also applies to MacBooks? Or does someone know another reason why the outer shell of a MacBook is still spicy?
Also the two corner points next to the air slits underneath the screen when folded open. When I wipe away dust there it can feel slightly uncomfortable.
The PCI expansion slot edges or the I/O shield in most PC cases are way sharper. I regularly cut myself on those as a little kid tweaking cases.
Request hedging or backup requests are indeed the terms.
I knew about it for requests where you give the first request a bit of a head start. I didn’t know the term “happy eyeballs” for firing all requests at the same time.
> I didn’t know the term “happy eyeballs” for firing all requests at the same time.
It's not quite the same. Usually with Happy Eyeballs, you want to try multiple protocols (e.g. QUIC vs TCP, or IPv6 vs IPv4), and you have a preference for one over the other. As such, you try to establish your connection via IPv6, wait something like 30ms, then try to establish via IPv4. Whichever mechanism completes channel setup first wins, and you can cancel the other one.
It's a mechanism used to drive adoption of newer protocols while limiting the impact on end users.
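That start-preferred, wait-a-beat, then-race pattern is small enough to sketch. A minimal, non-authoritative version with Python's asyncio, where `slow_v6` and `fast_v4` are simulated stand-ins for real connection setup (the same shape works for hedged backup requests, just with two identical attempts):

```python
import asyncio

async def happy_eyeballs(preferred, fallback, delay=0.03):
    """Start the preferred attempt; if it hasn't finished within
    `delay` seconds, race the fallback against it. The first attempt
    to complete wins and the loser is cancelled."""
    tasks = [asyncio.ensure_future(preferred())]
    done, pending = await asyncio.wait(tasks, timeout=delay)
    if not done:  # preferred is still connecting: start the fallback
        tasks.append(asyncio.ensure_future(fallback()))
        done, pending = await asyncio.wait(
            tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:  # cancel whichever attempt lost the race
        task.cancel()
    return done.pop().result()

async def slow_v6():
    await asyncio.sleep(0.2)   # simulated slow IPv6 setup
    return "ipv6"

async def fast_v4():
    await asyncio.sleep(0.01)  # simulated fast IPv4 setup
    return "ipv4"

print(asyncio.run(happy_eyeballs(slow_v6, fast_v4)))  # → ipv4
```

If the preferred attempt finishes inside the head-start window, the fallback is never even started, which is how the preference for the newer protocol is expressed.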
I’ve always wondered where the inflection point lies between, on the one hand, training the model on all kinds of data such as Wikipedia/encyclopedias, versus pointing the system prompt at your local versions of those data sources, perhaps even through a search-like API/tool.
Is there already some research or experimentation done into this area?
The training gives you a very lossy version of the original data (the smaller the model, the lossier it is; very small models will ultimately output gibberish and word salad that only loosely makes some sort of sense) but it's the right format for generalization. So you actually want both, they're highly complementary.
Here you are comparing a decent Bluetooth speaker to a pretty good wireless active speaker to a hifi setup. I think the original comment about audiophiles is about them wasting money upgrading the hifi setup with all kinds of audio cabling, bi-wiring, etc.
That would be similar to upgrading to that one tiny bit sharper lens which otherwise has the same aperture etc.
Yes that's more accurate. And it's about measurability. Even with that tiny bit sharper lens, you can probably point to an actual measurable difference in the photos. Whether that makes them "better" remains subjective.
Audio is a weird world where everyone lives in their own experience and the externally measurable things often don't really translate to the visceral experience. So everyone kinda comes up with their own tribal knowledge that's often more superstition than science, and a lot of people just tend to assume they need "the best" in lossless files and analog whatever and gold-plated this and that.