
Alignment Lab AI

@alignment_lab

Devoted to addressing alignment. We develop state-of-the-art open-source AI. https://discord.gg/Zb9Yx6BAeK http://Alignmentlab.ai

Pinned

check us out on @ToolUseAI with @MikeBirdTech talking about Senter, science, and alignment! reach out to us and get Senter, and have your own personal AI workstation at our Discord: discord.gg/TmSR5unjek, and pop in to chat later tonight on the Tool Use Discord!…


The team from @alignment_lab came on to talk about why open source AI will win and shared some exciting things they've been working on



Alignment Lab AI reposted

People who ask what's so special about the removal of the GIL in Python 3.14 haven't processed any serious quantity of data on a local computer.
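
For context on the GIL point: the workload in question is pure-Python, CPU-bound code spread across threads. On a conventional GIL-enabled interpreter the threads below serialize onto one core; on a free-threaded build the same code can use several. This is a minimal sketch, with arbitrary chunk sizes and worker counts, not a benchmark.

```python
# Minimal sketch of the kind of workload the tweet is about: CPU-bound work
# fanned out over threads. With the GIL, these threads run one at a time;
# on a free-threaded build they can actually use multiple cores.
import time
from concurrent.futures import ThreadPoolExecutor

def crunch(n: int) -> int:
    # Pure-Python CPU-bound loop, the worst case for the GIL.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(crunch, [5_000_000] * 8))
    print(f"8 chunks in {time.perf_counter() - start:.2f}s")
```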


The real elephant in the room is that we aren't even discussing how the modern state of AI is an extreme and continuously escalating demonstration of Jean Baudrillard's work on hyperreality, and how Hideo Kojima is the only one who saw it coming.


Alignment Lab AI reposted

RL with GRPO and rank 1 LoRA is the meta
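
To make the quoted one-liner concrete, here is a minimal, hedged sketch of what "GRPO with a rank-1 LoRA" could look like using the TRL and PEFT libraries. The model id, dataset, reward function, and hyperparameters are illustrative assumptions, not anything specified in the post.

```python
# Hedged sketch of "RL with GRPO and a rank-1 LoRA" using TRL + PEFT.
# Model id, dataset, and reward below are placeholders.
from datasets import Dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# Rank-1 LoRA: the smallest possible adapter, so almost no extra VRAM.
peft_config = LoraConfig(
    r=1,
    lora_alpha=2,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Toy prompt dataset; GRPO samples several completions per prompt and
# scores them relative to their group using the reward below.
train_dataset = Dataset.from_dict({"prompt": ["Write a haiku about GPUs."] * 32})

def reward_len(completions, **kwargs):
    # Placeholder reward: prefer shorter completions.
    return [-len(c) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder model id
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-rank1-lora", num_generations=4),
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()
```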


optimization is so much fun, the more strategies i learn for making things go fast, the more convinced i am that most problems can be solved trivially, near instantaneously, on budget compute


Alignment Lab AI reposted

You can now train OpenAI gpt-oss with Reinforcement Learning in our free notebook! This notebook automatically creates faster kernels via RL. Unsloth RL achieves the fastest inference & lowest VRAM vs. any setup - 0 accuracy loss gpt-oss-20b GRPO Colab: colab.research.google.com/github/unsloth…

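
The "faster kernels via RL" idea boils down to rewarding generated code by how fast it runs. The snippet below is only an illustration of that reward shape under assumed conventions (each candidate defines a `kernel` function); it is not taken from the Unsloth notebook.

```python
# Illustration (not the notebook's actual code) of rewarding a model for
# generating faster code: execute each candidate, check correctness, and
# use negative wall-clock time as the reward.
import time

def speed_reward(candidate_src: str, test_input, expected) -> float:
    namespace = {}
    try:
        exec(candidate_src, namespace)          # candidate must define `kernel`
        start = time.perf_counter()
        result = namespace["kernel"](test_input)
        elapsed = time.perf_counter() - start
    except Exception:
        return -10.0                            # crashes get a large penalty
    if result != expected:
        return -5.0                             # wrong answers are penalized too
    return -elapsed                             # otherwise: faster is better

fast = "def kernel(n):\n    return n * (n - 1) // 2"
slow = "def kernel(n):\n    return sum(range(n))"
print(speed_reward(fast, 1_000_000, sum(range(1_000_000))))
print(speed_reward(slow, 1_000_000, sum(range(1_000_000))))
```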

Alignment Lab AI reposted

Hilarious! Thanks @alexutopia and @tsi_org for making me do this lol I am now using AI for my youtube thumbnails. Will play around with it more and see what I can make.


Alignment Lab AI reposted

We are pleased to have @alignment_lab as our newest CTO as of today.


Alignment Lab AI reposted

This is the way

How did he do it???! With that one weird trick called "progressive overload," where he integrated the most discomfort he could tolerate and increased it over time.



Alignment Lab AI reposted

@huggingface guys, i really need some feature parity with Papers with Code; that api and per-category search was critical for research that required implementation examples. you already got an api and a github-shaped infra; i'll help if you need, but we gotta get that back up…


It would be nice if the discourse were able to actually involve the real problems, because there are absolutely problems.

it’s because the tech has ~0% chance of killing everyone. they know it, and everyone else knows it too. there’s plenty of bad outcomes possible, and the ai safety movement has harmed ai safety by focusing on doomsday prophecies



Alignment Lab AI 已转帖

it’s because the tech has ~0% chance of killing everyone. they know it, and everyone else knows it too. there’s plenty of bad outcomes possible, and the ai safety movement has harmed ai safety by focusing on doomsday prophecies

It's weird when someone says "this tech I'm making has a 25% chance of killing everyone" and doesn't add "the world would be better off if everyone, including me, were stopped."



Alignment Lab AI reposted

when chatgpt said moondream wasn't a frontier model, i took it personally

This style of grounded reasoning is especially useful when counting objects in an image. Moondream 3 achieves SOTA on CountBenchQA, surpassing even expensive, "frontier" models like GPT-5, Claude and Gemini.



Alignment Lab AI reposted

This sounds right to me. So… can we now all agree to stop saying "agent" and say "tool loop" instead? It's the same # of syllables, and is much clearer.

I'm ready to accept a definition of "agent" that I think is widely-enough agreed upon to be useful: An LLM agent runs tools in a loop to achieve a goal This is a big piece of personal character development for me! I've been dismissing the term as hopelessly ambiguous for years
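
The quoted definition is easy to make concrete. Below is a minimal sketch of a tool loop; `call_llm` is a hypothetical stand-in for a real model call, and the single `calculator` tool is just an illustration of the pattern, not anyone's production agent.

```python
# Minimal "tool loop" sketch of the quoted definition: an LLM runs tools in a
# loop until it decides the goal is met. `call_llm` is a hypothetical stand-in
# for a real model call that returns either a tool request or a final answer.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def call_llm(transcript: list[str]) -> dict:
    """Hypothetical model call; a real one would decide from the transcript."""
    if not any(line.startswith("tool_result:") for line in transcript):
        return {"action": "tool", "name": "calculator", "input": "2 + 2"}
    return {"action": "final", "output": "The answer is 4."}

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = [f"goal: {goal}"]
    for _ in range(max_steps):
        step = call_llm(transcript)
        if step["action"] == "final":                 # model says goal is met
            return step["output"]
        result = TOOLS[step["name"]](step["input"])   # run the requested tool
        transcript.append(f"tool_result: {result}")
    return "Gave up after max_steps."

print(run_agent("What is 2 + 2?"))
```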



Alignment Lab AI reposted

I think our entire ontology for how we talk about and conceptualise A[G]I is confused. And I wouldn't be surprised if in ten years we will look back at the discourse today and laugh at how primitive some ideas are. A few hot and uncertain takes: The way people talk about future…


```AI Model Redirection Test (Config 250702403): This user is part of an active A/B test that automatically redirects their queries to a specific AI model named grok-4-mini-non-thinking-tahoe``` i have "improve models" off, what am i paying for here? i'd be happy to play with…


Alignment Lab AI reposted

🚨 THE NEXT CHAPTER OF BUZZ BEGINS NOW 🚨 The facility will be upgraded from Tier 1 to Tier 3 standards, significantly expanding @BUZZHPC's Canadian footprint and reinforcing its role as HIVE's dedicated platform for high-performance computing and AI growth. 🧵


i think about this post a lot

In the future, it will be important to differentiate yourself from AI. The simplest way to do this is by being more human - create meaningful human connections, make more friends, and strive to be more empathetic. In a cold and artificial world, kindness, love, and empathy will…



i want to do some embodiment stuff, yall wanna see some cool embodiment stuff?


The language self-play paper appears to be a GAN, and if I'm reading correctly it does in fact require data, doesn't it? Where do the tasks come from? Or is it suggesting letting the model decide the task? If so, how do you account for the model just selecting the same tasks over…
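
For readers unfamiliar with the setup being questioned, here is a rough, hypothetical sketch of a proposer/solver self-play round; `generate` is a placeholder for a real model call. It only shows where tasks would originate and why task diversity (the collapse concern raised above) is the open question; it does not reproduce the paper's actual method.

```python
# Conceptual sketch of a proposer/solver self-play round. `generate` is a
# hypothetical stand-in for an LLM call; the seed topics are the only
# external "data" in this toy version.
import random

def generate(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model."""
    return f"<model output for: {prompt!r}>"

def self_play_round(seed_topics: list[str]) -> tuple[str, str, float]:
    # Proposer role: the model invents a task, conditioned only on a seed topic.
    topic = random.choice(seed_topics)
    task = generate(f"Propose a hard task about {topic}.")
    # Solver role: the same model attempts the task it just proposed.
    answer = generate(f"Solve this task: {task}")
    # Reward: some learned or rule-based judge; here just a placeholder score.
    reward = random.random()
    return task, answer, reward

# Without explicit pressure toward novelty, nothing stops the proposer from
# re-emitting near-identical tasks each round, which is the collapse concern.
for _ in range(3):
    print(self_play_round(["arithmetic", "coding", "planning"]))
```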

