
KernZ

@Kernzgo

Engineer / AI / Advanced Materials / Investing in what’s next, roasting what’s dumb🤗

Pinned

All Creatures Great and Small


As a founder, you’re the one carrying all the risk. You don’t build a ship just to keep it sitting safely in the harbor.


Opus 4.5 is here > Antigravity


I always love roasting thoughtless products like this. A $2,000 robotic arm should be in a lab doing dangerous chemical and biological experiments, not making your coffee, even for an elaborate cappuccino. Hopefully we won’t see products like this out there.


KernZ reposted

The Robotaxi endgame won’t be won by speed — but by who cracks three hard locks first. (2026-2027) will be the KEY
1️⃣ Safety vs Regulatory Trust
Waymo already has the regulator’s stamp.
If Tesla can’t solve pure-vision edge cases by 2026, it gets forced to add sensors or accept…


KernZ reposted

“If you want the future to be good, you must make it so.” (Elon Musk) Starship is the vehicle for us to become multiplanetary.


KernZ reposted

Alex Rampell on why AI is still underhyped and how it will diffuse inside companies. “Not to oversimplify human behavior, but it’s, ‘I want to be lazy and I want to be rich.’” “There’s somebody at every big company who has figured out, ‘I can do something in one minute that…


A classic story about “value”

Banksy’s famous painting, which shredded itself right after selling for $1.4 million, later rose in value to $25.4 million.



Nicely done — but don’t underestimate the competition.

Nano Banana Pro keeps getting more SOTA (support for 2K and 4K is available in the API!) 🍌

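For context, calling a Gemini image model through the API looks roughly like the sketch below, using the google-genai Python SDK. This is a minimal sketch under assumptions, not the official recipe: the model ID is a placeholder, and the exact option for requesting 2K or 4K output should be taken from the current API docs.

```python
# Minimal sketch, assuming the google-genai Python SDK. The model ID is a
# placeholder; check the current docs for the real Nano Banana Pro model name
# and the option that selects 2K/4K output.
from google import genai

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # placeholder image-capable model ID
    contents="A banana-yellow sports car, studio product shot",
)

# Image bytes come back as inline-data parts on the first candidate.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("out.png", "wb") as f:
            f.write(part.inline_data.data)
```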


"Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it " -Linus Torvalds


Just curious — how many people actually used AI to shop during Black Friday last week? I’d love to hear your experiences. Did it genuinely help you make better buying decisions?


In the near future, the biggest opportunities and investments will be in the realm between Earth orbit and escaping Earth’s gravity entirely.🚀 Right now, you can count on one hand the companies really working in this space; they’re extremely rare! #invest


Breaking🔥: By introducing its DeepSeek Sparse Attention (DSA) technique, the DeepSeek-V3.2 model keeps all the low-memory advantages of the MLA architecture while using sparse computation to dramatically cut the cost of training and inference on long-context workloads. The…

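For intuition only, here is a tiny numpy sketch of the generic top-k sparse-attention idea behind that claim: each query softmaxes and sums over only its k best-matching keys instead of all L keys, so the expensive part of attention scales with k rather than the full context length. This is not DeepSeek's actual DSA (which uses a cheap indexer so the full score matrix never has to be materialized); the function name and parameters here are made up for illustration.

```python
# Toy top-k sparse attention: each query keeps only its k highest-scoring keys,
# so the softmax and weighted sum touch k << L positions per query.
# Illustrative only; real implementations avoid building the full (L, L) scores.
import numpy as np

def topk_sparse_attention(Q, K, V, k=8):
    """Q, K, V: (L, d) arrays; attend to only the top-k keys per query."""
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                              # (L, L) raw scores
    topk_idx = np.argpartition(scores, -k, axis=-1)[:, -k:]    # (L, k) kept keys
    out = np.zeros_like(Q)
    for i in range(L):
        idx = topk_idx[i]
        w = np.exp(scores[i, idx] - scores[i, idx].max())
        w /= w.sum()                 # softmax over the k kept keys only
        out[i] = w @ V[idx]          # weighted sum over the k kept values
    return out

# Example: 1,024-token toy context, 64-dim head, each query keeps 8 keys.
rng = np.random.default_rng(0)
L, d = 1024, 64
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
print(topk_sparse_attention(Q, K, V, k=8).shape)   # (1024, 64)
```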
