#TensorRT search results

I broke my record 🎉 by using #TensorRT 🔥 with @diffuserslib on a T4 GPU on 🀗 @huggingface: 512x512, 50 steps, from 6.6 seconds (xformers) down to 4.57 seconds 🎉 I will run more tests, clean up the code, and make it open-source 🐣
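
The exact pipeline behind this result isn't published yet, but a common route to TensorRT acceleration for diffusers models is to export the UNet (the per-step bottleneck) to ONNX and then build an engine from it. A minimal sketch, assuming Stable Diffusion 1.5 and illustrative shapes and file names:

```python
# Hedged sketch: export the Stable Diffusion UNet to ONNX so it can later be
# built into a TensorRT engine (e.g. `trtexec --onnx=unet.onnx --fp16`).
# The model ID, shapes, and file names here are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
unet = pipe.unet.eval()

# Dummy inputs for a 512x512 generation (SD 1.x latents are 64x64, 4 channels;
# batch of 2 covers classifier-free guidance).
sample = torch.randn(2, 4, 64, 64, dtype=torch.float16, device="cuda")
timestep = torch.tensor([999], dtype=torch.float16, device="cuda")
text_embeddings = torch.randn(2, 77, 768, dtype=torch.float16, device="cuda")

torch.onnx.export(
    unet,
    (sample, timestep, text_embeddings),
    "unet.onnx",
    input_names=["sample", "timestep", "encoder_hidden_states"],
    output_names=["latent"],
    opset_version=17,
)
```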

👀 Learn how the #Microsoft Bing Visual Search team leveraged #TensorRT, CV-CUDA and nvImageCodec from #NVIDIA to optimize their TuringMM visual embeddings pipeline, achieving 5.13x throughput speedup and significant TCO reduction. ➡ nvda.ws/4dHj9Qd #visualai

🖌 Ready for next-level image generation? @bfl_ml's FLUX.1 image generation model suite -- built on the Diffusion Transformer (DiT) architecture with 12 billion parameters -- is now accelerated by #TensorRT and runs fastest ⚡ on NVIDIA RTX AI PCs. 🙌 Learn more


I've finally gotten TensorRT working. Image generation itself is fast, but Hires.fix and MultiDiffusion for I2I are only so-so 💊 Also, for some reason converted LoRAs have no effect, and MultiDiffusion silently switches the sampler to Euler a, so there still seem to be a few issues 😰 #TensorRT #StableDiffusion #AIArt

About the model that wouldn't convert with TensorRT: it seems to fail if you keep your models organized in subfolders. Once I put the file in the root of the model folder, it converted 😇 Having tried it, the post-Hires.fix resolution also has to be covered, so as in Burupen-san's article you apparently need resolutions from 256 to (512) up to 1536. #TensorRT #StableDiffusion

An online article says an extension that speeds up Stable Diffusion image generation has been released! ...So, following Burupen-san's article and others, I installed TensorRT and then failed at converting the model... 😰 Errors also appear at startup, so I gave up, disabled the extension, and accepted defeat for now 😥 #TensorRT #StableDiffusion

Tune #TensorRT-LLM performance ⚡ 🛠 C++ benchmarking tools for 🏎 Deep dive ➡ nvda.ws/3Xj0hC5 🏁

Good morning. #stablediffusion #TensorRT #AIart #AIグラビア

Our latest benchmarks show the unmatched efficiency of @NVIDIAAI #TensorRT LLM, setting a new standard in AI performance 🚀 Discover the future of real-time #AI Apps with reduced latency & enhanced speed by reading our latest technical deep dive 👇🔍 fetch.ai/blog/unleashin



Speed up Ultralytics YOLO11 inference with TensorRT export!⚡ Export YOLO11 models to TensorRT for faster performance and greater efficiency. It's ideal for running computer vision projects on edge devices while saving resources. Learn more ➡ ow.ly/Xu1f50UFUCm #TensorRT
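
The Ultralytics export path is essentially a one-liner. A minimal sketch, assuming the `ultralytics` and `tensorrt` packages are installed, using the nano checkpoint and a placeholder image path:

```python
# Hedged sketch based on the Ultralytics export API; file names are placeholders.
from ultralytics import YOLO

# Export the PyTorch checkpoint to a TensorRT engine (requires a local GPU + TensorRT).
model = YOLO("yolo11n.pt")
model.export(format="engine", half=True)  # writes yolo11n.engine

# Reload the engine and run inference through the same predict API.
trt_model = YOLO("yolo11n.engine")
results = trt_model.predict("sample.jpg")
print(results[0].boxes)
```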

🙌 We hope to see you at the next #TensorRT #LLM night in San Francisco. ICYMI last week, here is our presentation ➡ nvda.ws/4dj18Y0

We leverage @NVIDIAAI's #TensorRT to optimize LLMs, boosting efficiency & performance for real-time AI applications 🀖 Discover the breakthroughs making our AI platforms smarter and faster by diving into our technical deep dive blog below!👇🔍 fetch.ai/blog/advancing


Super cool news 🥳 Thanks to @ddPn08 ❀ #TensorRT 🔥 working pretty good on 🀗 @huggingface 🀯 T4 GPU 512x512 20 steps 2 seconds 🚀 I will make more tests clean up the code, and make it public space 🐣 please give star ⭐ to @ddPn08 🥰 github.com/ddPn08/Lsmith


Tried TensorRT: blazing fast, but way too many constraints. For now it might be good for cranking out batches with Wildcards. By the way, it complains that "cudnn_adv_infer64_8.dll" cannot be found, and installing the latest version doesn't change that, but it runs anyway so whatever. #stablediffusion #TensorRT #AIart #AIグラビア

🌟 @AIatMeta #PyTorch + #TensorRT v2.4 🌟 ⚡ #TensorRT 10.1 ⚡ #PyTorch 2.4 ⚡ #CUDA 12.4 ⚡ #Python 3.12 ➡ github.com/pytorch/Tensor ✹
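
For reference, Torch-TensorRT compiles a standard PyTorch module into TensorRT engines from Python. A minimal sketch, assuming torchvision is installed and using an illustrative ResNet-50 in FP16 (not tied to the release notes above):

```python
# Hedged sketch of the Torch-TensorRT workflow; model and input shape are
# illustrative assumptions.
import torch
import torch_tensorrt
import torchvision

model = torchvision.models.resnet50(weights="DEFAULT").eval().cuda()

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.half)],
    enabled_precisions={torch.half},  # allow FP16 kernels
)

x = torch.randn(1, 3, 224, 224, dtype=torch.half, device="cuda")
with torch.no_grad():
    out = trt_model(x)
print(out.shape)
```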

Boost @AIatMeta Llama 3.1 405B's performance by 44% with NVIDIA #TensorRT Model Optimizer on H200 GPUs. ⚡ Discover the power of optimized AI processing. ➡ nvda.ws/3T1kqKb ✹

Did you know that TensorRT-LLM gets a new update every Tuesday? Build your products from the latest source and don't miss any news! #TensorRT #LLM github.com/NVIDIA/TensorR


Running LLMs at scale? This TensorRT-LLM benchmarking guide shows how to turn profiling into real latency + throughput gains: glcnd.io/optimizing-llm #AI #LLM #TensorRT #Developers


💌 Senior AI Systems Engineer at RemoteHunter 📍 United States 💰 $170,000-$210,000 🛠 #pytorch #tensorrt #nvidiatriton #langchain #langgraph #vllm #openai #gemini #anthropic #kubernetes #prefect #ray #aws 🔗 applyfirst.app/jobs/070e4677-



🚀 New on the blog: Using ONNX + TensorRT for Faster Inspection AI Models! Learn how we supercharge crack, corrosion & oil-spill detection with high-speed inference. 🔗 Read more: manyatechnologies.com/onnx-tensorrt- #ONNX #TensorRT #AI #ManyaTechnologies
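
The blog itself isn't reproduced here, but the usual ONNX-to-TensorRT path uses TensorRT's ONNX parser from Python. A minimal sketch, assuming a detector already exported to a placeholder `model.onnx` (API details vary slightly across TensorRT versions):

```python
# Hedged sketch: build a serialized TensorRT engine from an ONNX file.
# Paths are placeholders; exact flags/APIs differ a bit between TensorRT 8.x and 10.x.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 kernels where supported

serialized_engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(serialized_engine)
```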

NVIDIA's TensorRT-LLM unlocks impressive LLM inference speeds on their GPUs with an easy-to-use Python API. Performance gains are substantial with optimizations like quantization and custom attention kernels. #LLM #Python #TensorRT Link to the repo in the next tweet!
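
For context, TensorRT-LLM's high-level Python `LLM` API keeps engine building and batching behind a vLLM-style interface. A minimal sketch, assuming a recent tensorrt_llm release and an illustrative small Hugging Face checkpoint:

```python
# Hedged sketch of the TensorRT-LLM LLM API; the model ID and sampling values
# are illustrative assumptions.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompts = ["TensorRT-LLM speeds up inference by"]
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```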


Accelerated by NVIDIA #cuEquivariance and #TensorRT for faster inference and ready for enterprise-grade deployment in software platforms.


Gain real speed in training and inference with GPU performance optimization. Choose the right stack, profile, tune, and scale. #GPU #CUDA #TensorRT 👉 chatrobot.com.tr/?s=GPU%20perfo



Just realized NVIDIA's TensorRT-LLM now supports OpenAI's GPT-OSS-120B on day zero. Huge leap for open-weight LLM inference performance. Makes cutting-edge models far more accessible. #LLM #TensorRT #OpenAI Repo Link in the next tweet.


9/10 🧠 Edge deployment tips: convert to GGUF for llama.cpp (runs even at 2-3 GB), target Jetson with ONNX/TensorRT, expose it as an API with Triton Server, and use LoRA for a feature-separated adapter setup. #llamacpp #TensorRT #LoRA #AI最適化 #Gemma3n


Deepfake scams don’t stand a chance with the new #3DiVi Face SDK 3.27! Now with a #deepfake detection module, turbo inference with #TensorRT & #OpenVINO and Python no-GIL support for parallel pipelines. Check out the full updates here: 3divi.ai/news/tpost/pze


#PyTorch → #TensorRT Converter ⚙ Optimize for real-time speed on #NVIDIA GPUs

🏆 The workflow-track winner is "Video-to-Video acceleration": an optimized workflow using TensorRT and other tools that cuts a process that used to take hours down to about 10 minutes. Real-time style transfer is no longer a pipe dream. (4/7) #TensorRT #動画生成AI

16/22 Learn from production systems: Study Triton (from OpenAI), FasterTransformer, and TensorRT. See how real systems solve scaling, batching, and optimization challenges. #Triton #TensorRT #Production


✹ #TensorRT and GeForce #RTX unlock ComfyUI SD superhero powers 🊞⚡ 🎥 Demo: nvda.ws/4bQ14iH 📗 DIY notebook: nvda.ws/3Kv1G1d ✹

👀 @AIatMeta Llama 3.1 405B trained on 16K NVIDIA H100s - inference is #TensorRT #LLM optimized⚡ 🊙 400 tok/s - per node 🊙 37 tok/s - per user 🊙 1 node inference ➡ nvda.ws/3LB1iyQ✹

I got an error while trying to install TensorRT. What on earth is this? #TensorRT #cudnn

👀 @Meta #Llama3 + #TensorRT LLM multilanguage checklist: ✅ LoRA tuned adaptors ✅ Multilingual ✅ NIM ➡ nvda.ws/3Li6o2L

Let the @MistralAI MoE tokens fly 📈 🚀 #Mixtral 8x7B with NVIDIA #TensorRT #LLM on #H100. ➡ Tech blog: nvda.ws/3xPRMnZ ✹

The #NVIDIA Ada Lovelace GPU architecture features 4th-generation Tensor Cores and an FP8 Transformer Engine, delivering 1.45 petaflops of Tensor processing power. On top of that, it supports the recently released #TensorRT-LLM library from NVIDIA, which substantially improves LLM inference performance.

#TensorRT #AIむラスト Tried 512×512 with every setting turned off lol
