#pytorch search results

With its v5 release, Transformers is going all in on #PyTorch. Transformers acts as a source of truth and foundation for modeling across the field; we've been working with the team to ensure good performance across the stack. We're excited to continue pushing for this in the…


Transformers v5's first release candidate is out 🔥 The biggest release of my life. It's been five years since the last major (v4). From 20 architectures to 400, 20k daily downloads to 3 million. The release is huge, w/ tokenization (no slow tokenizers!), modeling & processing.
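For context, a minimal sketch of the workflow this release refers to: loading a fast (Rust-backed) tokenizer and a plain PyTorch model through the Transformers Auto classes. The "gpt2" checkpoint is just an example, and v5-specific behavior may differ from the stable-API usage shown here.

```python
# Minimal Transformers + PyTorch sketch; "gpt2" is an example checkpoint and
# v5-specific details may differ from this stable-API usage.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # fast, Rust-backed tokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2")   # a plain torch.nn.Module

print(tokenizer.is_fast)  # True: fast tokenizers are the default

inputs = tokenizer("Transformers v5 runs on PyTorch", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (batch, sequence_length, vocab_size)
```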



Explaining PyTorch in 1 minute amzn.to/47woES6 #PyTorch #Python
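As a rough illustration of the "PyTorch in one minute" idea (the editor's sketch, not material from the linked book), the core of PyTorch is tensors plus automatic differentiation:

```python
# Tensors + autograd in a few lines; a toy example, not taken from the book.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x1^2 + x2^2 + x3^2
y.backward()         # autograd computes dy/dx = 2x
print(x.grad)        # tensor([2., 4., 6.])
```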


LMCache joins the #PyTorch Ecosystem, advancing scalable #LLM inference through integration with @vllm_project. Developed at the University of Chicago, LMCache reuses and shares KV caches across queries and engines, achieving up to 15× faster throughput. 🔗…
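A toy sketch of the KV-cache-reuse idea behind LMCache: cache the expensive prefill result for a shared prompt prefix so later queries can skip it. This is not the LMCache or vLLM API, just a dictionary-based illustration of the concept.

```python
# Conceptual illustration of KV-cache reuse; NOT the LMCache/vLLM API.
import torch

kv_store: dict[str, tuple[torch.Tensor, torch.Tensor]] = {}

def get_or_compute_kv(prefix: str, prefill):
    """Return cached (keys, values) for a prefix, computing them only once."""
    if prefix not in kv_store:
        kv_store[prefix] = prefill(prefix)   # expensive prefill happens once
    return kv_store[prefix]                  # later queries reuse the cache

def fake_prefill(prefix: str):
    # Stand-in for a model forward pass that produces per-token K/V tensors.
    n = len(prefix.split())
    return torch.randn(n, 64), torch.randn(n, 64)

kv1 = get_or_compute_kv("You are a helpful assistant.", fake_prefill)
kv2 = get_or_compute_kv("You are a helpful assistant.", fake_prefill)
assert kv1 is kv2   # the second query skips prefill for the shared prefix
```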


🚀 PyLO v0.2.0 is out! (Dec 2025) Introducing VeLO_CUDA 🔥 - a CUDA-accelerated implementation of the VeLO learned optimizer in PyLO. Check out the fastest available version of this SOTA learned optimizer now in PyTorch. github.com/belilovsky-lab… #PyTorch #DeepLearning #huggingface
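A hypothetical sketch of how a learned optimizer like VeLO could slot into a standard PyTorch training loop. The `pylo` import path and `VeLO` class name are assumptions rather than verified PyLO API, so a stock optimizer stands in to keep the snippet runnable.

```python
# Learned-optimizer usage sketch; the pylo/VeLO names below are hypothetical.
import torch
import torch.nn as nn
# from pylo import VeLO                             # hypothetical import path

model = nn.Linear(10, 1)
# optimizer = VeLO(model.parameters())              # hypothetical learned optimizer
optimizer = torch.optim.AdamW(model.parameters())   # stand-in so this runs as-is

x, y = torch.randn(32, 10), torch.randn(32, 1)
for _ in range(5):
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # a learned optimizer would apply its learned update here
print(loss.item())
```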


🎉 Congrats to the #PyTorch Contributor Award 2025 winners! 🎉 Honoring community members who power innovation, collaboration, and the PyTorch spirit🔥 🏅 Zesheng Zong — Outstanding PyTorch Ambassador 🏅 Zingo Andersen — PyTorch Review Powerhouse 🏅 Xuehai Pan — PyTorch Problem…


Day 10/25 — Building My First Neural Network in PyTorch I continued my learning journey on WorldQuant University today. The lesson focused on using PyTorch to build my very first neural network model. #PyTorch #DeepLearning #AI #WQU #MachineLearning
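For readers following along, a minimal "first neural network" in PyTorch in the spirit of that lesson: a small fully connected classifier and a single training step on dummy data.

```python
# A first neural network in PyTorch: define a module, run one training step.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(28 * 28, 128),  # flattened 28x28 input
            nn.ReLU(),
            nn.Linear(128, 10),       # 10 output classes
        )

    def forward(self, x):
        return self.layers(x)

model = TinyNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 28 * 28)        # dummy batch of "images"
y = torch.randint(0, 10, (64,))     # dummy labels
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```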


2nd Edition, 746 pages, massive! ⬇️ Modern #ComputerVision with #PyTorch #DeepLearning — from practical fundamentals to advanced applications and #GenerativeAI: amzn.to/3xAkB7X v/ @PacktDataML —— #DataScience #MachineLearning #AI #ML #GenAI #DataScientist —— 𝓚𝓮𝔂…


🌟Massive 774-page book by @rasbt 🌟 "#MachineLearning with #PyTorch and Scikit-Learn: Develop #ML and #DeepLearning models with #Python” at amzn.to/4oGqtBP ————— #DataScience #AI #NeuralNetworks #ComputerVision #DataScientist


Great demo at @PyTorch Conference today. The @AMD stand showcased #logfire spans in their multi-agent-nutrition system. Always happy to see the community using our #observability tool. #PyTorch #pydantic #MLOps #ai
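For reference, a minimal sketch of recording spans with Pydantic's Logfire, the kind of instrumentation the demo showed. The span names are made up for illustration, and `logfire.configure()` normally expects a project/token to have been set up beforehand.

```python
# Minimal Logfire span sketch; span names are illustrative only.
import logfire

logfire.configure()  # assumes Logfire credentials are already set up

with logfire.span("plan-meal"):              # hypothetical agent step
    with logfire.span("lookup-nutrition"):   # nested span for a sub-task
        logfire.info("queried nutrition database", items=3)
```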





Looking to dive into PyTorch 🚀 Any recommendations for hands-on, project-based resources or courses to get started? Thanks in advance 🙌🏼 #PyTorch #MachineLearning #DeepLearning


This is Project 2 of my pre-Transformer series. Next → moving towards Transfer Learning + more advanced CV models. Follow along if you're into DL projects 👇 #PyTorch #DeepLearning #FastAPI #AI #100DaysOfCode
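As a preview of the transfer-learning step mentioned above, a short PyTorch/torchvision sketch: load a pretrained ResNet, freeze the backbone, and train only a new classification head (the 5-class head is just an example).

```python
# Transfer-learning sketch with torchvision; the 5-class head is an example.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze the pretrained backbone

model.fc = nn.Linear(model.fc.in_features, 5)   # new head for 5 classes

# Only the new head's parameters are optimized during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```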



Akash Network is thrilled to be a Platinum Sponsor of this year’s #PyTorch Conference, happening October 22–23 in San Francisco! This event is an incredible opportunity to dive deep into hands-on sessions exploring the intricacies of open-source #AI and #ML. Register now!…



#vLLM V1 now runs on AMD GPUs. Teams from IBM Research, Red Hat & AMD collaborated to build an optimized attention backend using Triton kernels, achieving state-of-the-art performance. 🔗 Read: hubs.la/Q03PC50p0 #PyTorch #OpenSourceAI
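For orientation, a minimal vLLM usage sketch: the attention backend described above is selected inside vLLM, so user-facing code stays the same. The model name is only an example.

```python
# Minimal vLLM generation example; backend selection happens inside vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                    # example model
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["PyTorch is"], params)
print(outputs[0].outputs[0].text)
```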

