Antonio Cascais
@terminalnotes

Building with AI, DevOps & a terminal. Raw notes, side projects & weird hacks: https://blog.acascais.com
GitHub: https://github.com/antoniocascais

Pinned

Booting up @terminalnotes. ⚡️ This is where I'll share the raw feed from my experiments with #AI, #DevOps, and weird side projects. Full write-ups → blog.acascais.com Let's build cool stuff.


100% agree. Everything will be a possibility. Need a tool to do xyz? Just a couple of minutes or hours away.

Codex & other future tools that reach its level are going to totally change how we use computers.



Good post, worth reading.

AI coding made planning feel unnecessary. Why map out the work when the system can build it? @kieranklaassen found the answer: Planning teaches the system how you think. Prompting teaches it what to build. One compounds, the other starts from zero: x.every.to/3WKvvB0



I still have this problem. Really annoying. Is it just me?

Is anyone else having problems with the ctrl+g vim editing feature? Since I switched to the native installer, vim becomes super slow (unusably slow) when I press ctrl+g to edit the prompt.



Great idea, hope they manage to pull it off

Now in private beta: Aardvark, an agent that finds and fixes security bugs using GPT-5. openai.com/index/introduc…



We've all read the wrong file at some point in our lives, even the digital ones. Claude Code knows the pain.


- Extrapolating from things like METR data, next year, the models will be able to work on their own on a whole range of tasks. Task length is important, because it unlocks the ability for a human to supervise a team of models, each of which works autonomously for hours at a time…

Julian Schrittwieser (Anthropic): - Discussion of AI bubble on X is "very divorced" from what is happening in the frontier labs. "In the frontier labs, we are not seeing any slowdown of progress." - AI will have a "massive economic impact". Revenue projections for OpenAI,…



Claude Code worked very well last week :) And it was the first time I (almost) hit the weekly limits. For my flow, they are well balanced.


I think I finally got to a flow I like when working with both Claude Code and Codex: I talk through the features/changes I want with Claude Code (using plan mode) and tell it to implement once I'm happy with the plan. At the end I ask Codex to review the changes. In…


Assuming the assumptions are more or less correct (they explain them on the website), these are impressive numbers on R&D. It also fits with the expected ~50% of budget spent on R&D for 2035 (if I'm not mistaken, those numbers were circulating in the news a couple of weeks ago). No doubt they're aiming for AGI.

New data insight: How does OpenAI allocate its compute? OpenAI spent ~$7 billion on compute last year. Most of this went to R&D, meaning all research, experiments, and training. Only a minority of this R&D compute went to the final training runs of released models.



Antonio Cascais reposted

I grabbed a full copy of the folder and shared it on GitHub here: github.com/simonw/claude-… - here are my notes so far: simonwillison.net/2025/Oct/10/cl…


Uhh, looks promising 👀

Today we're announcing Claude Code plugins! anthropic.com/news/claude-co… It's the first major feature in Claude Code that I've gotten the opportunity to lead, and I'm really excited to see how everyone uses it! 🧵1/4



Just realized that posts in communities don't show to everyone. Re-posting for reach :)

Today I was working on an MVP app (something that will call the Claude Agents SDK and write info down in files) and decided to use Codex for the code. It's a really good model for coding. Like, really good. Token usage: total=649,252 input=562,323 (+ 9,164,416 cached) output=86,929
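A quick sanity check on the token math above (assuming, as the CLI output suggests, that the cached input tokens are reported separately and excluded from the total):

```python
# Token counts as reported by Codex for the session above.
input_tokens = 562_323
cached_input_tokens = 9_164_416  # served from the prompt cache, listed separately
output_tokens = 86_929

# The reported total should cover fresh input + output only.
total = input_tokens + output_tokens
print(total)  # 649252 — matches the reported total=649,252
```

So the 9.1M cached tokens dwarf the fresh tokens, which is why the session stayed cheap despite all that context.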



DeepMind keeps doing pretty awesome stuff

Software vulnerabilities can be notoriously time-consuming for developers to find and fix. Today, we’re sharing details about CodeMender: our new AI agent that uses Gemini Deep Think to automatically patch critical software vulnerabilities. 🧵



Very cool approach!

Today's common voice search technologies are focused on the question, "What words were said?" What if we could answer a more powerful question: "What information is being sought?" Introducing the new Speech-to-Retrieval (S2R) model in today’s blog →goo.gle/4nIQMXJ



