IR, RAG, LLMs & more!
- https://github.com/Intrinsical-AI
- https://github.com/MrCabss69 
- https://medium.com/@IntrinsicalAI
- https://python-lair.space

Intrinsical AI

@IntrinsicalAI


Pinned

Public data from #IODA and the timeline of energy-grid records reveal anomalies HOURS BEFORE, and a coordinated PT/ES drop that doesn't add up 🤔. I analyze the sequence here: medium.com/@IntrinsicalAI… #RedElectrica #blackout #osint #apagon #apagonelectrico #blackoutspain #DataScience


I came searching for silver but got the gold

Because the Aztecs and Maya practiced mass human sacrifice (estimated at tens of thousands per year), yet the modern narrative prioritizes blaming the European colonizer while ignoring pre-Columbian barbarities. Claudia Sheinbaum celebrates Spain's "steps" but avoids confronting the…



I've been thinking: if God exists, I might beg permission for a quick code refactor. Solid architect, sure, great work on v1. But a multi-agent swarm with GPT-5-pro as orchestrator? Time to inject some agnosticity for 2.0.


🦧🦧🦧🦧🦧🥴 🫂

LLMs are injective and invertible. In our new paper, we show that different prompts always map to different embeddings, and this property can be used to recover input tokens from individual embeddings in latent space. (1/6)

[GladiaLab's tweet image]


Intrinsical AI reposted this post

New paper! We reverse engineered the mechanisms underlying Claude Haiku’s ability to perform a simple “perceptual” task. We discover beautiful feature families and manifolds, clean geometric transformations, and distributed attention algorithms!

[wesg52's tweet image]

Let's not anthropomorphize AI, but let's also conduct all possible anthropological tests on LLMs and obtain interesting insights.

On one hand: don't anthropomorphize AI. On the other: LLMs exhibit signs of gambling addiction. The more autonomy they were given, the more risks the LLMs took. They exhibit gambler's fallacy, loss-chasing, illusion of control... A cautionary note for using LLMs for investing.

[emollick's tweet image]


The EU AI Act is no joke 😂 Up to €35M (or 7% of revenue) in fines for non-compliance. aligne.ai/blog-posts/eu-… #EUAIAct #AICompliance #Legislation #Europe #Laws #News #Compliance


How likely is this behavior to appear in more realistic scenarios? We told Claude Opus 4 to consider whether this was real or an evaluation. It blackmailed much more when it said it thought it was really deployed (55.1% of the time) versus when it said it was in an eval (6.5%).


