
Manfred Diaz

@linuxpotter

Ph.D. Candidate @Mila_Quebec interested in AI/ML connections with economics, game theory, and social choice theory.

Manfred Diaz reposted

Here's our deeply Rorty-influenced paper on the topic: arxiv.org/abs/2510.26396


Manfred Diaz reposted

THANK YOU for your football, @5sergiob. 🫶✨


Manfred Diaz reposted

We're running back our AAMAS 2025 tutorial at SAGT in Bath, UK! If you're interested in general evaluation of AI agents, you should check out our tutorial today (Sep 2nd)! The website is here sites.google.com/view/sagt2025e…, including some draft notes!

If you're attending @AAMASconf 2025 and are interested in general evaluation of AI agents, you should check our tutorial on May 19th! The website is here sites.google.com/view/aamas2025…, including some draft notes! Co-organized with Marc Lanctot, @drimgemp, and @kateslarson 1/2



Manfred Diaz reposted

Introducing Concordia 2.0, an update to our library for building multi-actor LLM simulations!! 🚀 We view multi-actor generative AI as a game engine. The new version is built on a flexible Entity-Component architecture, inspired by modern game development.


Manfred Diaz reposted

How should we rank generalist agents on a wide set of benchmarks and tasks? Honored to get the AAMAS best paper award for SCO, a scheme based on voting theory which minimizes the mistakes in predicting agent comparisons based on the evaluation data. arxiv.org/abs/2411.00119


Manfred Diaz reposted

It may be time to develop AI programming languages. Code generation must be optimized for guiding models in exploring the solution space and ensuring correctness, not for human comprehension. Code specification must optimize synchronization between human intention and AI


Manfred Diaz reposted

Announcing our latest arxiv paper: Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt arxiv.org/abs/2505.05197 We argue for a view of AI safety centered on preventing disagreement from spiraling into conflict.


Manfred Diaz reposted

You should be so lucky to have people throughout your research career with whom you can openly bounce ideas back and forth - especially if they complement your strengths in your areas of weakness - it is a rare and precious gift.


Manfred Diaz reposted

We should let people design minds and personalities appropriate to their needs, just like it's good to let social media users have more control over their feeds. Polycentric design/governance is more durable than rigid "How Do You Do Fellow Kids" personalities imposed by LLCs. 😏


Oh no. Please, please stop doing this.



Manfred Diaz reposted

This post is a rare articulation of an important outside perspective on AI Safety, which I think better accounts for a future that is open-ended and massively multi-agent. It effectively questions foundational philosophical assumptions which should be reconsidered.

First LessWrong post! Inspired by Richard Rorty, we argue for a different view of AI alignment, where the goal is "more like sewing together a very large, elaborate, polychrome quilt", than it is "like getting a clearer vision of something true and deep" lesswrong.com/posts/S8KYwtg5…



Manfred Diaz reposted

🐙 Very excited about this post. We reject the Axiom of Rational Convergence and reframe alignment as the art of coexisting amid deep, enduring disagreement: a patchwork quilt, not a mirror of the true and the deep, stitched from pluralism and pragmatism. lesswrong.com/posts/S8KYwtg5…


Manfred Diaz reposted

First LessWrong post! Inspired by Richard Rorty, we argue for a different view of AI alignment, where the goal is "more like sewing together a very large, elaborate, polychrome quilt", than it is "like getting a clearer vision of something true and deep" lesswrong.com/posts/S8KYwtg5…


Manfred Diaz reposted

In case folks are interested, here's a video of a talk I gave at MIT a couple weeks ago: youtu.be/FmN6fRyfcsY?si…



Manfred Diaz reposted

[video] "A Theory of Appropriateness with Applications to Generative Artificial Intelligence" Joel Leibo, senior staff research scientist at Google DeepMind and professor at King's College London cbmm.mit.edu/video/theory-a…

