#tensorrt search results
I broke my record by using #TensorRT with @diffuserslib on a T4 GPU on @huggingface: 512x512, 50 steps, from 6.6 seconds (xformers) to 4.57 seconds. I will run more tests, clean up the code, and make it open-source.
Learn how the #Microsoft Bing Visual Search team leveraged #TensorRT, CV-CUDA and nvImageCodec from #NVIDIA to optimize their TuringMM visual embeddings pipeline, achieving a 5.13x throughput speedup and significant TCO reduction. nvda.ws/4dHj9Qd #visualai
Ready for next-level image generation? @bfl_ml's FLUX.1 image generation model suite -- built on the Diffusion Transformer (DiT) architecture with 12 billion parameters -- is now accelerated by #TensorRT and runs fastest on NVIDIA RTX AI PCs. Learn more…
I've been learning a lot about TensorRT. Image generation itself is fast, but Hires.fix and MultiDiffusion via I2I feel underwhelming. Also, for some reason converted LoRAs have no effect, and MultiDiffusion switches itself to Euler a, so there seem to be a few issues. #TensorRT #StableDiffusion #AIArt
On the model that wouldn't convert with TensorRT: it apparently fails when models are organized into subfolders; placing the file in the root of the model folder let it convert. On testing, the post-Hires.fix resolution turns out to be required, so, as in the article I was following, resolutions from 256 to (512) to 1536 seem to be needed. #TensorRT #StableDiffusion
Word is that a community extension that speeds up Stable Diffusion image generation is out! So, following a reference article, I installed TensorRT and the model conversion failed... and errors started appearing at startup too. I hastily disabled the extension and recovered for now. #TensorRT #StableDiffusion
Tune #TensorRT-LLM performance with C++ benchmarking tools. Deep dive: nvda.ws/3Xj0hC5
Our latest benchmarks show the unmatched efficiency of @NVIDIAAI #TensorRT-LLM, setting a new standard in AI performance. Discover the future of real-time #AI apps with reduced latency & enhanced speed by reading our latest technical deep dive: fetch.ai/blog/unleashin…
Speed up Ultralytics YOLO11 inference with TensorRT export! Export YOLO11 models to TensorRT for faster performance and greater efficiency. It's ideal for running computer vision projects on edge devices while saving resources. Learn more: ow.ly/Xu1f50UFUCm #TensorRT
We hope to see you at the next #TensorRT #LLM night in San Francisco. ICYMI last week, here is our presentation: nvda.ws/4dj18Y0
We leverage @NVIDIAAI's #TensorRT to optimize LLMs, boosting efficiency & performance for real-time AI applications. Discover the breakthroughs making our AI platforms smarter and faster by diving into our technical deep dive blog below! fetch.ai/blog/advancing…
Super cool news! Thanks to @ddPn08, #TensorRT is working pretty well on @huggingface: T4 GPU, 512x512, 20 steps, 2 seconds. I will run more tests, clean up the code, and make it a public Space. Please give a star to @ddPn08: github.com/ddPn08/Lsmith
Tried TensorRT. Blazing fast, but it has a lot of constraints, so it might be best for churning through Wildcards!? Incidentally, it complains that cudnn_adv_infer64_8.dll can't be found, and installing the latest version doesn't fix it. It runs anyway, so fine. #stablediffusion #TensorRT #AIart #AIGravure
#TensorFlow + #TensorRT Integration! #BigData #Analytics #DataScience #AI #MachineLearning #IoT #IIoT #PyTorch #Python #RStats #TensorFlow #Java #ReactJS #GoLang #CloudComputing #Serverless #DataScientist #Linux #Programming #Coding #100DaysofCode geni.us/TF-TensorRT
@AIatMeta #PyTorch + #TensorRT v2.4: #TensorRT 10.1, #PyTorch 2.4, #CUDA 12.4, #Python 3.12. github.com/pytorch/Tensor…
Boost @AIatMeta Llama 3.1 405B's performance by 44% with the NVIDIA #TensorRT Model Optimizer on H200 GPUs. Discover the power of optimized AI processing: nvda.ws/3T1kqKb
Did you know that TensorRT-LLM gets a new update every Tuesday? Develop your products at the state of the art and don't miss the latest news! #TensorRT #LLM github.com/NVIDIA/TensorR…
Running LLMs at scale? This TensorRT-LLM benchmarking guide shows how to turn profiling into real latency + throughput gains: glcnd.io/optimizing-llm⊠#AI #LLM #TensorRT #Developers
Senior AI Systems Engineer at RemoteHunter. United States. $170,000-$210,000. #pytorch #tensorrt #nvidiatriton #langchain #langgraph #vllm #openai #gemini #anthropic #kubernetes #prefect #ray #aws applyfirst.app/jobs/070e4677-…
New on the blog: Using ONNX + TensorRT for faster inspection AI models! Learn how we supercharge crack, corrosion & oil-spill detection with high-speed inference. Read more: manyatechnologies.com/onnx-tensorrt-… #ONNX #TensorRT #AI #ManyaTechnologies
TensorRT-LLM is NVIDIA's own technology, built specifically to accelerate running LLMs on GPUs. @nvidia @NVIDIAAI #tensor #tensorrt #ai #AINews youtu.be/BQTVoT5O2Zk
NVIDIA's TensorRT-LLM unlocks impressive LLM inference speeds on their GPUs with an easy-to-use Python API. Performance gains are substantial with optimizations like quantization and custom attention kernels. #LLM #Python #TensorRT Link to the repo in the next tweet!
Accelerated by NVIDIA #cuEquivariance and #TensorRT for faster inference and ready for enterprise-grade deployment in software platforms.
Gain real speed in training and inference with GPU performance optimization. Pick the right stack, then profile, tune, and scale. #GPU #CUDA #TensorRT chatrobot.com.tr/?s=GPU%20perfo…
Just realized NVIDIA's TensorRT-LLM now supports OpenAI's GPT-OSS-120B on day zero. Huge leap for open-weight LLM inference performance. Makes cutting-edge models far more accessible. #LLM #TensorRT #OpenAI Repo Link in the next tweet.
9/10 Edge deployment tips: convert to GGUF for llama.cpp (runs even at 2-3 GB); target Jetson via ONNX/TensorRT; serve as an API with Triton Server; split features into separate adapters with LoRA. #llamacpp #TensorRT #LoRA #AIOptimization #Gemma3n
Deepfake scams don't stand a chance with the new #3DiVi Face SDK 3.27! Now with a #deepfake detection module, turbo inference with #TensorRT & #OpenVINO and Python no-GIL support for parallel pipelines. Check out the full updates here: 3divi.ai/news/tpost/pze…
The workflow-track winner: "Video-to-Video acceleration"! A job that used to take hours now finishes in about 10 minutes. With an optimized workflow built on TensorRT and friends, real-time style transfer is no longer a dream! (4/7) #TensorRT #VideoGenAI
16/22 Learn from production systems: Study Triton (from OpenAI), FasterTransformer, and TensorRT. See how real systems solve scaling, batching, and optimization challenges. #Triton #TensorRT #Production
#TensorRT and GeForce #RTX unlock ComfyUI SD superhero powers. Demo: nvda.ws/4bQ14iH DIY notebook: nvda.ws/3Kv1G1d
@AIatMeta Llama 3.1 405B trained on 16K NVIDIA H100s; inference is #TensorRT #LLM optimized. 400 tok/s per node, 37 tok/s per user, 1-node inference. nvda.ws/3LB1iyQ
I tried to install TensorRT and got an error. What is this? #TensorRT #cudnn
@Meta #Llama3 + #TensorRT LLM multilanguage checklist: ✓ LoRA-tuned adaptors ✓ Multilingual ✓ NIM. nvda.ws/3Li6o2L
Let the @MistralAI MoE tokens fly! #Mixtral 8x7B with NVIDIA #TensorRT #LLM on #H100. Tech blog: nvda.ws/3xPRMnZ
The #NVIDIA Ada Lovelace GPU architecture features fourth-generation Tensor Cores and an FP8 Transformer Engine, delivering 1.45 petaflops of Tensor processing power. It also supports #TensorRT-LLM, the library NVIDIA recently released, which dramatically improves LLM inference performance…