
onnxruntime

@onnxruntime

Cross-platform training and inferencing accelerator for machine learning models.

ONNX Runtime & DirectML now support Phi-3 mini models across platforms & devices! Plus, the new ONNX Runtime Generate() API simplifies LLM integration into your apps. Try Phi-3 on your favorite hardware! Read more: onnxruntime.ai/blogs/accelera… #ONNX #DirectML #Phi3


Run PyTorch models in the browser, on mobile and desktop, with #onnxruntime, in your language and development environment of choice 🚀 onnxruntime.ai/blogs/pytorch-…


onnxruntime reposted

Developers, don't overlook the power of Swift Package Manager! It simplifies dependency management and promotes modularity. Plus, exciting news: ONNXRuntime just added support for SPM! #iOSdev #SwiftPM #ONNXRuntime


#ONNX Runtime saved the day with our interoperability and ability to run locally on-client and/or in the cloud! Our lightweight solution gave them the performance they needed, with quantization & configuration tooling. Learn how they achieved this in this blog! cloudblogs.microsoft.com/opensource/202…


📢 This new blog by @tryolabs is awesome! Learn how to fine-tune an NLP model and accelerate it with #ONNXRuntime!

Maximize the power of LLMs! 💬 Our step-by-step guide covers fine-tuning for specific NLP tasks w/ GPT-3, OPT, & T5. We shared everything from building custom datasets to optimizing inference time with @huggingface 🤗Optimum and @onnxai. 🚀 bit.ly/3DqLXxb #LargeLanguageModels



Join us live TODAY! We will be talking to Akhila Vidiyala and Devang Aggarwal on the AI Show with Cassie! We will show how developers can use #huggingface #optimum #Intel to quantize models and then use the #OpenVINO execution provider for #ONNXRuntime to accelerate performance. 👇 aka.ms/aishowlive

YouTube: AI Show LIVE | Foundry Local Integration & New Agent Framework


👀

🚀 Want easier and faster training for your models on GPUs? Thanks to the @onnxruntime backend, 🤗 Optimum can help you achieve 39%–130% acceleration with just a few lines of code changed. Check out our benchmark results NOW! 👀 huggingface.co/blog/optimum-o…



onnxruntime reposted

We are seeking your input to shape the ONNX roadmap! Proposals are being collected until January 24, 2023 and will be discussed in February. Submit your ideas at forms.microsoft.com/pages/response…



onnxruntime reposted

Imagine the frustration of, after applying optimization tricks, finding that the data copying to GPU slows down your "MUST-BE-FAST" inference...🥵 🤗 Optimum v1.5.0 added @onnxruntime IOBinding support to reduce your memory footprint. 👀 github.com/huggingface/op… More ⬇️


onnxruntime reposted

Want to use TensorRT as your inference engine for its speedups on GPU but don't want to go into the compilation hassle? We've got you covered with 🤗 Optimum! With one line, leverage TensorRT through @onnxruntime! Check out more at hf.co/docs/optimum/o…


📣 The new version of #ONNXRuntime, v1.13.0, was just released! Check out the release notes and video from the engineering team to learn more about what's in this release! 📝 github.com/microsoft/onnx… 📽️ youtu.be/vo9vlR-TRK4

YouTube: v1.13 ONNX Runtime - Release Review


👀

Next up from #ONNXCommunityDay: Accelerating Machine Learning w/ @ONNXRuntime & @HuggingFace! In this session, @jeffboudier will show the latest solutions from #HuggingFace to deploy models at scale w/ great performance leveraging #ONNX & #ONNXRuntime. youtu.be/9H7biU4eLZY

YouTube: Accelerating Machine Learning with ONNX Runtime and Hugging Face



onnxruntime reposted

Finally, tokenization with SentencePiece BPE now works as expected in #NodeJS #JavaScript with the tokenizers library 🚀! Now getting "invalid expand shape" errors when passing the text tokens' encoded ids to the @onnxruntime-converted MiniLM @MSFTResearch model huggingface.co/microsoft/Mult…

Sentence piece vocabulary and merge files generated, some minor issues occurring, hopefully @huggingface can help 🙏github.com/huggingface/to…



onnxruntime reposted

🏭 The hardware optimization floodgates are open!🔥 Diffusers 0.3.0 supports an experimental ONNX exporter and pipeline for Stable Diffusion 🎨 To find out how to export your own checkpoint and run it with @onnxruntime, check the release notes: github.com/huggingface/di…


onnxruntime reposted

💡 Senior Research & Development Engineer at @deltatre, @tinux80 is also a #MicrosoftMVP and Intel Software Innovator. 📊 Don't miss his talk on #AzureML and #Onnx Runtime at #WPC2022! 👉 Buy your ticket: wpc2022.eventbrite.it @microsofitalia


onnxruntime reposted

The natural language processing library Apache OpenNLP is now integrated with ONNX Runtime! Get the details and a tutorial explaining its use on the blog: msft.it/6013jfemt #OpenSource


In this article, a community member used #ONNXRuntime to try out a GPT-2 model that generates English sentences, from the Ruby language: dev.to/kojix2/text-ge…

