Giorgio Robino

@solyarisoftware

Conversational LLM-based Applications Specialist @almawave | Former ITD-CNR Researcher | Soundscapes (Orchestral) Composer.

Pinned

My preprint "Conversation Routines: A Prompt Engineering Framework for Task-Oriented Dialog Systems" now has a revised version on @arXiv with updated experimental results. Here’s a thread with the changes! 🧵 ➡️ Paper: arxiv.org/abs/2501.11613 1/ What’s CR?


Giorgio Robino reposted

The 12 Architectural Innovations That Prompt-Oriented Programming Adds to Context Engineering Prompt-Oriented Programming goes beyond good context engineering. After mapping every architectural decision in Claude's skills system, I found 12 patterns that don't exist in…

Giorgio Robino reposted

We are hosting a series of events for the community of Python builders in AI: meet the #PyAI event series. Next events will be hosted SF, NY, and London. Followed by a conference in early 2026. Register and join us! pydantic.dev/articles/pydan…


Giorgio Robino reposted

Ollama v0.12.8 is here! This release improves Qwen 3 VL performance and includes improvements and bug fixes in Ollama's engine. ❤️ Happy Halloween! 👻👻👻

Giorgio Robino reposted

yesterday, Hugging Face dropped a 214-page MASTERCLASS on how to train LLMs > it’s called The Smol Training Playbook > and if you want to learn how to train LLMs, > this GIFT is for you > this training bible walks you through the ENTIRE pipeline > covers every concept that matters…

Giorgio Robino reposted

🚨 OpenAI introduces Aardvark, an agentic security researcher powered by GPT 5. Aardvark acts like an autonomous AI security expert that scans codebases, finds vulnerabilities, validates them in a sandbox, and suggests fixes using Codex. Key features - Continuously…

Giorgio Robino reposted

Google literally just made the best way to create AI Agents

Giorgio Robino reposted

🚨 BREAKING: Anthropic has developed a method to test whether AI models can understand their own thoughts. Using a new technique called concept injection, researchers inserted specific neural patterns into Claude models, and asked if they could detect them. Claude Opus 4.1 was…

New Anthropic research: Signs of introspection in LLMs. Can language models recognize their own internal thoughts? Or do they just make up plausible answers when asked about them? We found evidence for genuine—though limited—introspective capabilities in Claude.

Giorgio Robino reposted

CLI tool for drawing AWS infrastructure diagrams from YAML

Giorgio Robino reposted

Introducing Prompt-Oriented Programming 🤯

Giorgio Robino reposted

your AI agent only forgets because you let it. there is a simple technique that everybody needs, but few actually use, and it can improve the agent by 51.1%. here's how you can use workflow memory: you ask your agent to train a simple ML model on your custom CSV data. — it…

Giorgio Robino reposted

Kimi Linear Tech Report is dropped! 🚀 huggingface.co/moonshotai/Kim… Kimi Linear: A novel architecture that outperforms full attention with faster speeds and better performance—ready to serve as a drop-in replacement for full attention, featuring our open-sourced KDA kernels! Kimi…


Giorgio Robino reposted

The paper shows how Human-Computer Interaction (HCI) talks about LLM reasoning without looking at what builds it. The authors read 258 CHI papers from 2020 to 2025 to see how reasoning is used. They find many papers use reasoning as a selling point to try an idea with LLMs.…

Giorgio Robino reposted

Not only have I built hundreds of AI agents myself, I've seen other people build thousands of AI agents for every use case under the sun. The people who are the most successful are the ones who don't overcomplicate it - and I want that to be you too. It's easy to think building…


Giorgio Robino reposted

What is "good" reasoning and how to evaluate it? 🚀We explore a new pipeline to model step-level reasoning, a “Goldilocks principle” that balances free-form CoT and LEAN! Led by my student @yuanhezhang6, in collaboration with Ilja from DeepMind, @jasondeanlee, @CL_Theory

Giorgio Robino reposted

Full prompt in text format /////////////////////////////////////////////////////// You are an expert game theorist who uses first principles thinking. I want to learn game theory from the ground up. Teaching approach: 1.Start with fundamental questions and build concepts from…

learn game theory through this prompt

Giorgio Robino reposted

Our first official vLLM Meetup is coming to Europe on Nov 6! 🇨🇭 Meet vLLM committers @mgoin_, @tms_jr, Thomas Parnell, + speakers from @RedHat_AI, @IBM, @MistralAI. Topics: vLLM updates, quantization, Mistral+vLLM, hybrid models, distributed inference luma.com/0gls27kb


Giorgio Robino reposted

Open-source Agentic browser; privacy-first alternative to ChatGPT Atlas, Perplexity Comet, Dia.

Giorgio Robino reposted

Now you can fine-tune your local LLMs on your iPhone. Apple presents MeBP, Memory-efficient BackPropagation, a new method that makes on-device fine-tuning practical. Unlike zeroth-order optimization (ZO), which needs 10–100× more steps, MeBP achieves faster convergence and…

Giorgio Robino reposted

Glyph: Scaling Context Windows via Visual-Text Compression Paper: arxiv.org/abs/2510.17800 Weights: huggingface.co/zai-org/Glyph Repo: github.com/thu-coai/Glyph Glyph is a framework for scaling the context length through visual-text compression. It renders long textual sequences into…

Giorgio Robino reposted

Can AI teach itself to reason better and without any human labels? Enter Socratic-Zero, a fully autonomous framework where three agents—Teacher, Solver, and Generator—co-evolve to create high-quality reasoning data from just 100 seed questions. - Teacher crafts harder problems…
