gelisam
@haskell_cat
AI Safety ∩ Programming Language Theory. Part-time technical alignment researcher, full-time Haskell software engineer at http://well.co, opinions are my own.
I am doing technical alignment research in my free time. Here is a project in which I use static analysis to verify whether a neural network satisfies its safety property under _all_ inputs or if it needs more training. gelisam.com/ai-safety-via-…
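(A minimal sketch of the verification idea, assuming interval bound propagation; the weights, network shape, and "output never exceeds 3" property below are made up for illustration and are not the linked project's code.)

```haskell
-- Propagate an input interval through a tiny ReLU network using interval
-- arithmetic. If the output interval stays inside the safe region, the
-- property holds for *all* inputs in that range, not just the test set.

data Interval = Interval { lo :: Double, hi :: Double }
  deriving Show

-- Interval-valued affine layer: w * x + b, for a single weight and bias.
affine :: Double -> Double -> Interval -> Interval
affine w b (Interval l h)
  | w >= 0    = Interval (w * l + b) (w * h + b)
  | otherwise = Interval (w * h + b) (w * l + b)

-- ReLU is monotone, so it maps interval endpoints to interval endpoints.
relu :: Interval -> Interval
relu (Interval l h) = Interval (max 0 l) (max 0 h)

-- A hypothetical two-layer, one-neuron-per-layer network.
network :: Interval -> Interval
network = affine 0.5 (-1) . relu . affine 2 1

-- The (made-up) safety property: the output never exceeds 3.
safeOn :: Interval -> Bool
safeOn input = hi (network input) <= 3

main :: IO ()
main = do
  print (safeOn (Interval (-1) 1))  -- True: verified for every input in [-1, 1]
  print (safeOn (Interval (-1) 5))  -- False: needs more training (or a tighter analysis)
```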
Tell your model _why_ you ask! "Would you like me to explain how to downgrade your compiler, port the code to a newer version... or, since you want to use this code as a starting point, would you like me to generate some starting code which works with your version of the compiler?"
I know I'm a little late to the unwrap discourse, but: The value proposition of Rust is not "your program will not crash." It's "your program will not have memory-unsafe behavior." The widespread conflation of the latter with the former is due to Haskell being too successful :)
We should not develop superintelligence until we know how to do it safely. Sign the statement today! #KeepTheFutureHuman superintelligence-statement.org
🚨BREAKING: A tremendous coalition of leaders just called for a ban on the development of superintelligence. The two most-cited living scientists, Nobel laureates, faith leaders, politicians, founders, artists, a former head of state, and over 1000 others have joined. Thread 🧵
Somebody on my timeline recommends a book and it is _not_ If Anyone Builds It Everyone Dies??
This is a really good book. I like it because it covers both ends of the spectrum: 1. How LLMs work 2. How to build using LLMs It's a really nice one-two punch: start with the theory and use that right away to implement something useful. The second half of the book is what I…
> my prediction is that auto-regressive LLMs are doomed
Yann LeCun, notorious AI Doomer 🤣
1. "Nobody in their right mind will use autoregressive LLMs a few years from now." The technology powering ChatGPT and GPT-4? Dead within years. The problem isn't fixable with more data or compute. It's architectural. Here's where it gets interesting...
One concept I wish more people were aware of is the Tocqueville Effect. Named for Alexis de Tocqueville, this concept describes the curious phenomenon by which people become more frustrated as problems are resolved: As life gets better, people think it's getting worse!🧵
Technology is generally really good. Why should AI be any different? A new video: (youtube link in the reply)
When I was a teen, my government held a public consultation about switching from 1st-past-the-post to a proportional system. I did my research, went, presented approval voting, and was laughed off the stage. I never did politics again. Today, we're still using 1st-past-the-post.
If you liked this, follow @ElectionScience for more information on a better way! It's called "approval voting", and it's so simple: you can vote for multiple candidates, and the candidate with the most votes wins. While not perfect, I think it's better than ranked-choice voting.
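(For the curious, a minimal sketch of how approval ballots are tallied; the candidate names are made up.)

```haskell
import Data.List (sortOn)
import Data.Ord (Down (..))
import qualified Data.Map.Strict as Map

-- Each ballot approves any subset of the candidates.
type Candidate = String
type Ballot = [Candidate]

-- Count one approval per candidate per ballot; most approvals wins.
tally :: [Ballot] -> [(Candidate, Int)]
tally ballots =
  sortOn (Down . snd) $
    Map.toList $
      Map.fromListWith (+) [ (c, 1) | ballot <- ballots, c <- ballot ]

main :: IO ()
main = print (tally [ ["Alice", "Bob"]    -- this voter approves of two candidates
                    , ["Alice"]
                    , ["Bob", "Carol"]
                    ])
-- [("Alice",2),("Bob",2),("Carol",1)]
```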
I like to learn about neural networks by working on tiny problems for which there exists a 100% correct solution. Here is an interactive experiment showing how in practice, backprop doesn't find this solution: gelisam.com/local-minima/
gelisam.com
Tensorflow — Neural Network Playground
Tinker with a real neural network right here in your browser.
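(A toy illustration of the phenomenon described above, not the linked experiment: plain gradient descent in one dimension settles into whichever basin it starts in, so it can miss the known-better solution. The loss function and learning rate below are arbitrary.)

```haskell
-- A 1-D loss with both a local and a global minimum. Gradient descent
-- converges to the nearer basin and never finds the better one.

loss :: Double -> Double
loss x = x^4 - x^2 + 0.2 * x   -- global minimum near x ≈ -0.75, local near x ≈ +0.65

loss' :: Double -> Double
loss' x = 4 * x^3 - 2 * x + 0.2

gradientDescent :: Double -> Double -> Int -> Double
gradientDescent lr x0 steps = iterate step x0 !! steps
  where
    step x = x - lr * loss' x

main :: IO ()
main = do
  print (gradientDescent 0.05 (-0.5) 1000)  -- ≈ -0.75, the global minimum
  print (gradientDescent 0.05   0.5  1000)  -- ≈ +0.65, stuck in the local minimum
```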
My first MCP server, which allows the agent to pick from a selection of shell commands. GitHub Copilot can natively run shell commands, but VS Code asks you to confirm each command. With mcp-cli, you only have to authorize the use of the tool once! github.com/gelisam/mcp-cli
github.com
GitHub - gelisam/mcp-cli: a simple MCP server allowing VS Code to run a specific shell command...
a simple MCP server allowing VS Code to run a specific shell command without asking - gelisam/mcp-cli
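(A sketch of the allow-list idea from the tweet above, assuming the agent picks a command by index; the command list is made up and this is not mcp-cli's actual code.)

```haskell
import System.Process (callCommand)

-- The agent may only pick an index into this fixed allow-list,
-- never supply an arbitrary command string.
allowedCommands :: [String]
allowedCommands =
  [ "cabal build"
  , "cabal test"
  , "git status"
  ]

runAllowed :: Int -> IO ()
runAllowed i =
  case lookup i (zip [0 ..] allowedCommands) of
    Just cmd -> callCommand cmd
    Nothing  -> putStrLn ("no such command index: " ++ show i)

main :: IO ()
main = runAllowed 2   -- runs "git status"
```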
Some people say they liked this one better than previous podcasts. youtube.com/watch?v=0QmDcQ…
youtube.com
YouTube
Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity
New video, about how to work in technical AI Safety research! (link in reply)