Welcome to vibe coding. If you ever lurk around the various AI subreddits, you'll soon realize just how poor the average user's prompts and communication skills are. Ironically, models are now being trained on these 5th-grade-level prompts and getting better at succeeding with them.
Thanks! The last link is broken, though, or maybe you didn't mean to include it? Also, if you've never actually resumed a session, do you use these docs at some other time? Do you reference them when working on a related feature, or just keep them as keepsakes to track what you've done and why?
Thank you. It was just a screenshot of my handoff directory. I originally tried to upload it to imgur but got attacked by ads, then uploaded it to GitHub by pasting into a "new issue". I thought such screenshots were stable, but it looks like GitHub prunes those now.
It wasn’t anything important. I appreciate you pointing that out though.
I just keep old sessions as keepsakes. No reason really. I thought maybe I'd want them for something but never did.
The docs are the important part. They help me (and future sessions) understand old decisions.
That's not what tool use permissions are. The LLM doesn't just magically spawn processes or run code. The Claude Code program itself does those things when the LLM indicates that it wants to. The program has checks and permissions that determine whether those things actually get done.
Claude Code has a sandboxing functionality that works the way you're describing when you opt into it, but my understanding is that the Claude Code program in the default configuration does not second-guess the LLM's decisions on what it'd like to run. Has Anthropic said something to the contrary?
While I agree, the government just gave all of its and its citizens' data to the owner of xAI/Grok. I think the US is way past any security concerns about sharing plain-text chat logs with OpenAI/Anthropic.
The README on the GitHub has a section on this[0]:
>Indexing 1 hour of footage costs ~$2.84 with Gemini's embedding API (default settings: 30s chunks, 5s overlap):
>1 hour = 3,600 seconds of video = 3,600 frames processed by the model. 3,600 frames × $0.00079 = ~$2.84/hr
>The Gemini API natively extracts and tokenizes exactly 1 frame per second from uploaded video, regardless of the file's actual frame rate. The preprocessing step (which downscales chunks to 480p at 5fps via ffmpeg) is a local/bandwidth optimization — it keeps payload sizes small so API requests are fast and don't timeout — but does not change the number of frames the API processes.
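The quoted arithmetic can be sanity-checked in a few lines. This is just a sketch of the math from the README quote; the $0.00079 per-frame price and the 1 frame/sec sampling rate are taken from the quote, not independently verified.

```python
# Reproduce the README's cost estimate for indexing 1 hour of footage.
FRAMES_PER_SECOND = 1       # Gemini samples 1 frame/sec of video, per the quote
SECONDS_PER_HOUR = 3_600
COST_PER_FRAME = 0.00079    # USD per frame, per the quote (unverified)

frames = SECONDS_PER_HOUR * FRAMES_PER_SECOND
cost_per_hour = frames * COST_PER_FRAME
print(f"{frames} frames -> ${cost_per_hour:.2f}/hr")  # 3600 frames -> $2.84/hr
```

Note that 3,600 × $0.00079 is exactly $2.844, which rounds to the ~$2.84/hr figure in the README.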
I don’t know if you’ll see this after a week, but my thought experiment was something like “what if HN took my username and gave it to someone else”? It wouldn’t just be handing off my username, it would be handing off an implied identity, even if they deleted comment history and such.
I’m not sure what the appropriate outcome is but I think it’s fair to say that it’s not straightforward.
Microsoft already has all their business data by virtue of handling document storage and email. Trusting another of their services not to use that data for Microsoft's own purposes is reasonable.