Subhabrata Mukherjee
@subho_mpi
Co-Founder & Chief Scientific Officer, @HippocraticAI. PhD. Head of AI. Former Principal Researcher @MicrosoftResearch.
When we started building a safety-focused LLM for healthcare a year back, a result like this was beyond imagination. We are excited to share some of the technical and a lot of the clinical considerations that went into building #Polaris in our 53-page technical report available…
We hosted our annual AI Pioneers Summit this week to celebrate the technical leaders at the forefront of deploying AI and LLMs in production today! 🚀 Over 170 notable product and engineering execs from 136 companies gathered to celebrate this year’s award winners: ***Expanding…
We are super excited to have @subho_mpi, Chief Scientific Officer & Co-founder at @hippocraticai, join us next Tuesday! He will share how to build a safety-focused healthcare AI agent for non-diagnostic, patient-facing use cases!
Hippocratic is Augmenting Repetitive Healthcare Tasks with AI linkedin.com/pulse/hippocra…
We are truly excited to find @EricTopol summarizing #Polaris in his report. Read about our LLM constellation work for real-time patient-AI voice conversations in erictopol.substack.com/p/a-big-week-i… Preprint: arxiv.org/abs/2403.13313 #GenerativeAI #healthcare @hippocraticai
This was a big week in healthcare #AI, summarized in the new Ground Truths (link in profile) Important new reports by @pranavrajpurkar @AI4Pathology @hippocraticai @PierreEliasMD @ItsJonStokes @james_y_zou @KyleWSwanson @ogevaert and their colleagues
SkipDecode: Autoregressive Skip Decoding with Batching and Caching for Efficient LLM Inference Obtains 2-5x inference speedups with negligible regression across a variety of tasks arxiv.org/abs/2307.02628
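To make the decoding idea concrete: below is a minimal, hypothetical sketch of a monotonically decreasing exit schedule, where tokens generated later in the sequence pass through fewer transformer layers. The ToyDecoder, the linear schedule, and all names are illustrative assumptions rather than the paper's implementation, and KV caching is omitted for brevity (the full prefix is re-run each step).

```python
import torch
import torch.nn as nn

def exit_layer(pos: int, max_pos: int, min_layers: int, num_layers: int) -> int:
    """Toy schedule: the layer budget decays linearly from num_layers down to
    min_layers as decoding proceeds, and never increases, so any hidden state
    a later token would need from an earlier token was already computed."""
    frac = min(pos, max_pos) / max(1, max_pos)
    return max(min_layers, round(num_layers - frac * (num_layers - min_layers)))

class ToyDecoder(nn.Module):
    """Illustrative stand-in for a decoder-only LM (not the paper's model)."""
    def __init__(self, d_model=64, n_layers=12, vocab=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
             for _ in range(n_layers)])
        self.head = nn.Linear(d_model, vocab)

    def forward_truncated(self, ids, n_layers):
        # Run only the first n_layers blocks, then project to logits from there.
        h = self.embed(ids)
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        for layer in self.layers[:n_layers]:
            h = layer(h, src_mask=mask)
        return self.head(h)

model, ids = ToyDecoder().eval(), torch.tensor([[1]])
with torch.no_grad():
    for pos in range(16):                          # toy greedy decoding loop
        n = exit_layer(pos, max_pos=16, min_layers=3, num_layers=len(model.layers))
        logits = model.forward_truncated(ids, n)   # later tokens use fewer layers
        next_id = logits[:, -1].argmax(-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
```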
Source: Conversation with Bing Chat about Orca - 6/23/2023 We asked Bing about our recent language model Orca, what makes its learning process unique and the promises it holds for the future. This is its response (references in…lnkd.in/g6tXh8kc lnkd.in/gTPRXtjy
Bothered by the expensive runs on Auto-GPT and LangChain agents? Check out our recent work, ReWOO, which eliminates token redundancy in the prevailing 'thought-action-observation' paradigm, achieving better task completion with 5x less token usage at inference. #LLMs #AIGA #NLP
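The gist of the decoupled "plan first, then execute" pattern can be sketched in a few lines. Everything below is a hypothetical stand-in (call_llm, the tool registry, and the plan format are assumptions, not the ReWOO codebase), but it shows where the token savings come from: one planner call lays out every tool step with evidence placeholders, plain workers fill them in, and one solver call produces the answer, instead of re-prompting the model after each observation.

```python
import re

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to whatever LLM API you use.
    raise NotImplementedError

TOOLS = {                                          # hypothetical tool registry
    "Search": lambda q: f"(search results for {q!r})",
    "Calculator": lambda expr: str(eval(expr)),    # toy only; eval is unsafe
}

def plan_then_execute(question: str) -> str:
    # 1) Planner: a single LLM call emits the whole plan up front,
    #    e.g. lines of the form "#E1 = Search[population of France]".
    plan = call_llm(
        "Write a plan as lines '#E<n> = Tool[input]'. Inputs may reference "
        f"earlier evidence as #E<k>.\nQuestion: {question}")

    # 2) Workers: run each tool call, substituting earlier evidence.
    evidence = {}
    for tag, tool, arg in re.findall(r"(#E\d+) = (\w+)\[(.*?)\]", plan):
        for k, v in evidence.items():
            arg = arg.replace(k, v)
        evidence[tag] = TOOLS[tool](arg)

    # 3) Solver: one final LLM call over the collected evidence.
    return call_llm(f"Question: {question}\nEvidence: {evidence}\nAnswer:")
```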
Do you want to make your transformers more efficient? Check out @subho_mpi's talk on ‘AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers’: youtu.be/sSusEYtL-YM Don’t forget to subscribe!
We kick off the new year with a talk by @subho_mpi about ‘AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers’ on Thursday. Note that this time the AutoML seminar will be at 6pm CET / 9am PDT. More details at automl-seminars.github.io
We are hiring research interns to work in Efficient Large-scale AI at Microsoft Research for summer 2023! Looking for candidates interested in building adaptive, modular and efficient NLU/NLG models at scale. The internship will be…lnkd.in/gn3ddFsm lnkd.in/gDBreieR
AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers abs: arxiv.org/abs/2210.07535
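As background for what "sparsely activated" means here, the sketch below shows a top-1 routed mixture-of-experts feed-forward block plus a toy per-layer search space of the kind a NAS method could explore (number of experts, expert width). The class, names, and search space are illustrative assumptions, not the AutoMoE code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top1MoEFFN(nn.Module):
    """Top-1 routed MoE feed-forward block: each token activates one expert."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)])

    def forward(self, x):                          # x: [tokens, d_model]
        scores = F.softmax(self.router(x), dim=-1)
        top_p, top_idx = scores.max(dim=-1)        # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e                    # only tokens routed to expert e
            if mask.any():
                out[mask] = top_p[mask].unsqueeze(-1) * expert(x[mask])
        return out

# Toy heterogeneous search space: each layer may get a different number and
# size of experts, the kind of choice NAS can optimize for quality vs. FLOPs.
search_space = {"num_experts": [1, 2, 4, 8], "d_ff": [512, 1024, 2048]}

layer = Top1MoEFFN(d_model=256, d_ff=1024, num_experts=4)
tokens = torch.randn(10, 256)
print(layer(tokens).shape)   # torch.Size([10, 256])
```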
Starting a new position as Principal Researcher at Microsoft Research (MSR) at Microsoft! lnkd.in/gUQwmKYs
We (@BeEngelhardt, @NailaMurray and I) are proud to announce the creation of a Journal-to-Conference track, in collaboration with JMLR and conferences NeurIPS 2022, ICLR 2023 and ICML 2023! neurips.cc/public/Journal… iclr.cc/public/Journal… icml.cc/public/Journal…
To put the "scale" narrative into perspective... The brain runs on 15 watts, at 8-35 hertz. And while we have ~90B neurons, usually only ~1B are active at any given time. The brain is very slow and does a lot with very little.