Linqing Liu

@likicode

Applied AI @Databricks | PhD @ucl_nlp | ex-Research Scientist intern @GoogleDeepMind @SFResearch

Linqing Liu reposted

Good post from @balajis on the "verification gap". You could see it as there being two modes in creation. Borrowing GAN terminology: 1) generation and 2) discrimination. e.g. painting - you make a brush stroke (1) and then you look for a while to see if you improved the…

AI PROMPTING → AI VERIFYING AI prompting scales, because prompting is just typing. But AI verifying doesn’t scale, because verifying AI output involves much more than just typing. Sometimes you can verify by eye, which is why AI is great for frontend, images, and video. But…



Linqing Liu reposted

Thrilled to kick off the Gemini 2.0 era with Gemini 2.0 Flash, an update to our workhorse model that outperforms even 1.5 Pro at twice the speed. It has really great multilingual skills, and can natively call tools, like Google Search. It’s the first release in the Gemini 2.0…


Linqing Liu reposted

The world model is taking shape... 🌐

Introducing Genie 2: our AI model that can create an endless variety of playable 3D worlds - all from a single image. 🖼️ These types of large-scale foundation world models could enable future agents to be trained and evaluated in an endless number of virtual environments. →…



Evaluating LLMs in enterprise domains can be challenging. In this post, we share how our applied AI team synthesized high-quality code tests for specific libraries to enhance system performance. Joint work with MatthewHayes @matei_zaharia @ritendra!

#LLMs are revolutionizing code generation, but ensuring accuracy with domain-specific tools like Spark SQL is vital. Discover how to synthesize tailored code tests for LLMs, offering a precise way to evaluate performance across any coding library. dbricks.co/3TOZery



Linqing Liu reposted

We are thrilled to announce the milestone release of SGLang Runtime v0.2, featuring significant inference optimizations after months of hard work. It achieves up to 2.1x higher throughput compared to TRT-LLM and up to 3.8x higher throughput compared to vLLM. It consistently…

Linqing Liu reposted

Character AI is serving 20,000 QPS. Here are the technologies we use to serve hyper-efficiently. [research.character.ai/optimizing-inf… ]


Excited to work on this code autocompletion model that supercharges your coding experience!

We just launched Databricks Assistant Autocomplete, another context-aware AI feature using our data intelligence engine. Now your autocomplete in notebooks and SQL is aware of all the data in your catalog and how it is used! sprou.tt/1bL8Hneo5iZ



Linqing Liu reposted

Official now, very proud of the team! Apache 2.0 and instructed versions for your pleasure, available today on la Plateforme mistral.ai/news/mixtral-8…


Linqing Liu reposted

magnet:?xt=urn:btih:9238b09245d0d8cd915be09927769d5f7584c1c9&dn=mixtral-8x22b&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=http%3A%2F%https://t.co/OdtBUsbeV5%3A1337%2Fannounce


Linqing Liu reposted

Today we released an open source model, DBRX, that beats all previous open source models on the standard benchmarks. The model itself is a Mixture of Experts (MoE), that's roughly twice the brains (132B) but half the cost (36B) of Llama2-70B. Making it both smart and cheap. Since…


Linqing Liu reposted

At Databricks, we've built an awesome model training and tuning stack. We've now used it to release DBRX, the best open source LLM on standard benchmarks to date, exceeding GPT-3.5 while running 2x faster than Llama-70B. databricks.com/blog/introduci…


Linqing Liu reposted

Meet DBRX, a new sota open llm from @databricks. It's a 132B MoE with 36B active params trained from scratch on 12T tokens. It sets a new bar on all the standard benchmarks, and - as an MoE - inference is blazingly fast. Simply put, it's the model your data has been waiting for.

Linqing Liu reposted

Announcing Mixtral 8x7B mistral.ai/news/mixtral-o… and our early developer platform mistral.ai/news/la-platef…. Very proud of the team!


Linqing Liu reposted

Very excited to share our latest work: 🤔 Which Prompts Make The Difference? Data Prioritization For Efficient Human LLM Evaluation w/ @eddotman, @beyzaermis, @mziizm, @sarahookr 🔗: arxiv.org/abs/2310.14424

Linqing Liu reposted

📢The costs for training (L)LMs skyrocketed 🚀 in recent years, motivating efficient training algorithms. However, when pre-training BERT and T5 models with a fixed compute budget, we find their gains vanish compared to a baseline with a fully-decayed learning rate! 1/5
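The "fully-decayed learning rate" baseline in this thread refers to a schedule that reaches (near-)zero exactly at the end of the fixed compute budget. A generic sketch of such a schedule (cosine decay with linear warmup; the function and parameter names are illustrative, not the paper's actual configuration):

```python
import math

def cosine_decay_lr(step, total_steps, peak_lr=1e-3, warmup=100):
    """Fully-decayed schedule: linear warmup to peak_lr, then cosine
    decay so the learning rate hits ~0 at the final step of the budget."""
    if step < warmup:
        return peak_lr * step / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))
```

The key design point is that `total_steps` is fixed by the compute budget up front, so the decay completes exactly when training stops — the baseline the thread's comparisons are made against.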

Linqing Liu reposted

If you want to move past the AI hype and learn some real fundamental basics behind today's learning algorithms there's no better choice than MacKay's "Information Theory, Inference and Learning Algorithms". You can read the book for free on the official website:…


Linqing Liu reposted

For anyone who’s interested, here is the code github.com/bazingagin/npc…. btw, I’m the author of the paper and thanks @LukeGessler for digging my paper out of that many ACL papers lol 😂

this paper's nuts. for sentence classification on out-of-domain datasets, all neural (Transformer or not) approaches lose to good old kNN on representations generated by.... gzip aclanthology.org/2023.findings-…
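The gzip approach in the quoted paper is essentially k-nearest-neighbors over normalized compression distance (NCD): similar texts compress better together than dissimilar ones. A minimal sketch with toy data (`ncd` and `knn_classify` are illustrative names, not the paper's code):

```python
import gzip

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two strings via gzip:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def knn_classify(query: str, train: list, k: int = 3) -> str:
    """Label a query by majority vote among its k nearest training texts."""
    nearest = sorted(train, key=lambda item: ncd(query, item[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

train = [
    ("the team won the match in overtime", "sports"),
    ("the striker scored two goals", "sports"),
    ("the stock market fell sharply today", "finance"),
    ("investors sold shares amid inflation fears", "finance"),
]
label = knn_classify("the goalkeeper saved a penalty in the match", train)
```

No training, no parameters: the compressor does all the representation work, which is what makes the result surprising against neural baselines.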

Linqing Liu reposted

That's a wrap! The Waterloo (@UWCheritonCS) team had fun attending the ACL 2023 Conference in Toronto, Canada! #ACL2023NLP 🇨🇦 We would like to congratulate @ralph_tang @likicode @ZhiyingJ @lintool et al. for winning the Best Paper Award at ACL 2023!!🏆 Next stop is SIGIR 2023.

Linqing Liu reposted

Finally launched x.ai! The mathematics of deep learning is profound, beautiful, and unreasonably effective. Developing the "theory of everything" for large neural networks will be central to taking AI to the next level. Conversely, this AI will enable everyone…

