
Mohamed ZERGAOUI

@MoZ_Data

Mohamed ZERGAOUI reposted

We probably shouldn't tell you how to build your own document parsing agents, but we will 😮.

AI agents are transforming how we handle messy, real-world documents that break traditional OCR systems.

Join our live webinar on December 4th at 9 AM PST where the LlamaParse team…

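For readers who don't want to wait for the webinar, here is a minimal sketch of the kind of parsing call this builds on, using the llama_parse Python package. The result type and the sample file name are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: parse a messy PDF with LlamaParse instead of plain OCR.
# Assumes the `llama-parse` package is installed and LLAMA_CLOUD_API_KEY is set.
from llama_parse import LlamaParse

parser = LlamaParse(
    result_type="markdown",  # ask for structured markdown rather than raw text
)

# load_data returns a list of parsed Document objects, one per input file
documents = parser.load_data("scanned_invoice.pdf")  # hypothetical file

for doc in documents:
    print(doc.text[:500])  # preview the extracted, structured text
```

An agent layer would then sit on top of output like this, deciding per-document how to handle tables, figures, or low-confidence regions.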

Mohamed ZERGAOUI reposted

We’re opening offices in Paris and Munich. EMEA has become our fastest-growing region, with a run-rate revenue that has grown more than ninefold in the past year. We’ll be hiring local teams to support this expansion. Read more here: anthropic.com/news/new-offic…


Mohamed ZERGAOUI reposted

We need more papers like this one, which examines how AI agents & humans work together.

Current agents were fast, but not strong enough to do tasks on their own & approached problems from too much of a programming mindset. But combining human & AI resulted in gains in performance.

Mohamed ZERGAOUI reposted

Recently, there was a clash between the popular @FFmpeg project, a low-level multimedia library found everywhere… and Google. A Google AI agent found a bug in FFmpeg.

FFmpeg is a far-ranging library, supporting niche multimedia files, often through reverse-engineering. It is…

Mohamed ZERGAOUI reposted

- Test-time Adaptation of Tiny Recursive Models -

New Paper, and the Trelis Submission Approach for the 2025 @arcprize Competition!

In brief:
- @jm_alexia's excellent TRM approach does not quite fit in the compute constraints of the ARC Prize competition
- BUT, if you take a…

Mohamed ZERGAOUI reposted

Today we are releasing Brumby-14B-Base, the strongest attention-free base model around. manifestai.com/articles/relea…

Mohamed ZERGAOUI reposted

HRM-Agent: Using the Hierarchical Reasoning Model in Reinforcement Learning
Paper: arxiv.org/abs/2510.22832

The Hierarchical Reasoning Model (HRM) has impressive reasoning abilities given its small size, but has only been applied to supervised, static, fully-observable problems.

Mohamed ZERGAOUI reposted

The write-up of my new graph layout algorithm for SpiderMonkey is now live. We built a custom layout algorithm for JS and WASM that follows the structure of the source code. No more spaghetti nightmares from Graphviz, and thousands of times faster.


Mohamed ZERGAOUI reposted

🔎Did someone steal your language model? We can tell you, as long as you shuffled your training data🔀. All we need is some text from their model! Concretely, suppose Alice trains an open-weight model and Bob uses it to produce text. Can Alice prove Bob used her model?🚨


Mohamed ZERGAOUI reposted

I am looking for a job starting May 2026. I am an expert in SIMD programming, in particular for non-numeric applications such as text processing or database programming. Please have a look at my website for the sort of work I do. I am located in Berlin, Germany.


Mohamed ZERGAOUI reposted

Introducing NotebookLM for arXiv papers 🚀

Transform dense AI research into an engaging conversation. With context across thousands of related papers, it captures motivations, draws connections to SOTA, and explains key insights like a professor who's read the entire field.


Mohamed ZERGAOUI reposted

Context is the new RAM

Mohamed ZERGAOUI reposted

Explicitly spawning subagents with Claude Code is extremely freaking cool

My workflow is:

- Enter plan mode
- Explicitly say how many subagents I want and which tasks they should perform
- Let it rip

Super nice way to do research across a large repo.

Mohamed ZERGAOUI reposted

Introducing Qwen3-VL Cookbooks! 🧑‍🍳

A curated collection of notebooks showcasing the power of Qwen3-VL—via both local deployment and API—across diverse multimodal use cases:

✅ Thinking with Images
✅ Computer-Use Agent
✅ Multimodal Coding
✅ Omni Recognition
✅ Advanced…

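As a rough illustration of the API route mentioned above, here is a hedged sketch that sends an image plus a question to a Qwen3-VL model through an OpenAI-compatible endpoint. The base URL, model id, and image URL are assumptions; the official cookbooks are the reference for exact values.

```python
# Sketch: query a Qwen3-VL model through an OpenAI-compatible API.
# The endpoint URL and model name below are assumptions; check the cookbooks
# for the exact values used by your deployment (DashScope, vLLM, etc.).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen3-vl-plus",  # assumed model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},  # hypothetical image
            {"type": "text", "text": "Describe what this chart shows."},
        ],
    }],
)

print(response.choices[0].message.content)
```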

Mohamed ZERGAOUI reposted

The TRM paper feels like a significant AI breakthrough.

It destroys the Pareto frontier on the ARC-AGI 1 and 2 benchmarks (and Sudoku and maze solving) with an estimated < $0.01 cost per task and < $500 to train the 7M model on 2 H100s for 2 days.

[Training and test specifics]…

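A quick sanity check on the training-cost figure, assuming a cloud rate of roughly $2–5 per H100-hour (the hourly rate is my assumption, not from the thread):

```latex
% Back-of-the-envelope check of the "< $500" training cost claim.
% Assumes an H100 rents for roughly $2--$5 per GPU-hour (assumption).
\[
  2\ \text{GPUs} \times 48\ \text{h} \times \$2\text{--}\$5/\text{GPU-h}
  \;\approx\; \$192\text{--}\$480 \;<\; \$500 .
\]
```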

Mohamed ZERGAOUI reposted

Today we’re introducing Claude Code Plugins in public beta. Plugins allow you to install and share curated collections of slash commands, agents, MCP servers, and hooks directly within Claude Code.

Mohamed ZERGAOUI reposted

You can teach a Transformer to execute a simple algorithm if you provide the exact step by step algorithm during training via CoT tokens. This is interesting, but the point of machine learning should be to *find* the algorithm during training, from input/output pairs only -- not…

A beautiful paper from MIT+Harvard+ @GoogleDeepMind 👏

Explains why Transformers miss multi-digit multiplication and shows a simple bias that fixes it.

The researchers trained two small Transformer models on 4-digit-by-4-digit multiplication. One used a special training method…

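To make the contrast in the thread concrete, here is a small sketch of the two training-data formats being compared: direct input/output pairs versus pairs augmented with explicit step-by-step (chain-of-thought) tokens. The serialization below is an illustrative assumption, not the paper's exact scheme.

```python
# Sketch: two ways to pose 4-digit-by-4-digit multiplication as a training example.
# Format details are illustrative assumptions, not the paper's exact scheme.
import random

def direct_example(a: int, b: int) -> str:
    """Input/output pair only: the model must *find* the algorithm itself."""
    return f"{a} * {b} = {a * b}"

def cot_example(a: int, b: int) -> str:
    """Same problem, but with the algorithm spelled out step by step as
    partial products (one per digit of b), then summed."""
    steps = []
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * (10 ** place)
        steps.append(f"{a} * {digit}e{place} = {partial}")
        total += partial
    assert total == a * b  # the spelled-out steps reproduce the answer
    return f"{a} * {b}: " + " ; ".join(steps) + f" ; sum = {total}"

a, b = random.randint(1000, 9999), random.randint(1000, 9999)
print(direct_example(a, b))
print(cot_example(a, b))
```

Training on the first format asks the model to discover long multiplication from outcomes alone; training on the second hands it the procedure token by token, which is the distinction the quoted tweet is drawing.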

Mohamed ZERGAOUI reposted

As a researcher at a frontier lab I’m often surprised by how unaware of current AI progress public discussions are. I wrote a post to summarize studies of recent progress, and what we should expect in the next 1-2 years: julian.ac/blog/2025/09/2…


Mohamed ZERGAOUI reposted

🚨 Meta just exposed a massive inefficiency in AI reasoning.

Current models burn through tokens re-deriving the same basic procedures over and over. Every geometric series problem triggers a full derivation of the formula. Every probability question reconstructs…

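The geometric-series example in the tweet is a good illustration of a reusable "procedure": once the closed form below is cached, it can be applied directly instead of being re-derived for every problem.

```latex
% The closed form a model keeps re-deriving, per the tweet's example:
% for common ratio r \neq 1, and the infinite-sum case for |r| < 1.
\[
  \sum_{k=0}^{n-1} a r^{k} \;=\; a\,\frac{1 - r^{n}}{1 - r},
  \qquad
  \sum_{k=0}^{\infty} a r^{k} \;=\; \frac{a}{1 - r} \quad (|r| < 1).
\]
```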

Mohamed ZERGAOUI reposted

Yet more evidence that a pretty major shift is happening, this time by Scott Aaronson: scottaaronson.blog/?p=9183&fbclid…
