#pytorch search results

Got a #PyTorch breakthrough to share? ๐ŸŽค The #CallForProposals for #PyTorchCon North America (Oct 20-21 | San Jose) is in full swing. Submit your technical session idea by June 7! Apply: bit.ly/4bIgqbs


From leading the #PyTorch transition to the @linuxfoundation to spearheading the Llama #OpenSource strategy, Joe Spisak has defined the #AI landscape. ๐Ÿš€ Join him at #PyTorchCon Europe in Paris to discuss the future of collaborative AI! ๐Ÿ“ Paris | 7-8 April Details:


Join us in San Jose for #PyTorchCon North America! ๐Ÿš€ Secure your spot for Oct 20-21 & save $400 with our early bird rates. Donโ€™t miss the premier event for the #PyTorch community. Register now: bit.ly/4sh3DSw


Bring the power of #GoogleColossus to #PyTorch with Rapid Bucket + fsspec (GCSFS). ๐Ÿ”น 4.8x faster reads ๐Ÿ”น 2.8x faster writes ๐Ÿ”น 23% faster total training time with PyTorch Lightning Keep your GPUs fed and your workloads moving. Learn more: goo.gle/4945HpX


Heading to #NVIDIAGTC next week? Letโ€™s talk @PyTorch. ๐Ÿš€ Weโ€™re bringing the community to San Jose. Drop by Booth #338 to meet expert developers and core maintainers in person. Scaling, inference, foundation models, and OSS contributions. Full schedule below ๐Ÿ‘‡ #PyTorch


MXFP8 training for MoEs on GB200s enables a 1.3x speedup with equivalent convergence versus BF16. This #PyTorch update via TorchAO and TorchTitan on Crusoe Cloud details gains from dynamically quantized grouped GEMMs. #AI #OpenSource ๐Ÿ”— pytorch.org/blog/mxfp8-traโ€ฆ โœ๏ธ @vega_myhre,


๐Ÿš€ It all starts tomorrow! #PyTorchCon Europe 7โ€“8 April | Paris Two days of #ML innovation, community & #PyTorch breakthroughs. ๐ŸŽŸ It's not too late to join us: bit.ly/4bUWj91


Big update to #Monarch, our distributed programming framework for #PyTorch! Since its launch at the #PyTorchCon NA in October, the team has shipped Kubernetes support, RDMA on AWS EFA and AMD ROCm, distributed SQL-based telemetry, a terminal UI, and dashboards for live job


#ExecuTorch addresses fragmented native deployment for #AI agents as a #PyTorch native platform. It enables voice models across CPU, GPU, and NPU on Android, iOS, Linux, macOS & Windows ๐Ÿ”— pytorch.org/blog/building-โ€ฆ


๐Ÿš€ Put your brand in front of the #PyTorch community. Sponsor #PyTorchCon Europe, 7โ€“8 April in Paris, and connect with the researchers, engineers & #ML leaders building the next generation of #AI. Showcase your tech. Meet top talent. Build real partnerships. ๐Ÿค Explore


1๏ธโƒฃ week until #PyTorchCon Europe! ๐Ÿ‡ซ๐Ÿ‡ท Paris becomes the home of #PyTorch for two days of #ML breakthroughs & community from 7-8 April. Check out schedule: bit.ly/3PpSktm. ๐ŸŽŸ Join us! bit.ly/4bUWj91


Training large-scale MoE models just got easier -- with NVIDIA NeMo Automodel, you can train billion-parameter MoE models directly in #PyTorch using built-in GPU optimizations. โœ… Open source โœ… 200+TFLOPs/GPU Read more in our technical blog: developer.nvidia.com/blog/acceleratโ€ฆ


Itโ€™s here! ๐Ÿ’ฅ #PyTorchCon Europe starts TODAY in Paris. Letโ€™s build, learn, and celebrate the #PyTorch community together. ๐ŸŽŸ Join us: bit.ly/4bUWj91


The #GenAI & Multimodal track at #PyTorchCon Europe explores building & scaling generative & multimodal models using #PyTorch. Learn more: hubs.la/Q046Lxy40 ๐ŸŽŸ Register โ†’ hubs.la/Q046LfC90


A new PyTorch-native backend is coming to unlock the power of Google TPUs: โœจ Run existing PyTorch with minimal code changes. โœจ Get a 50-100%+ performance boost with Fused Eager mode. Read the engineering deep dive here: goo.gle/4vbTQQl #TorchTPU #PyTorch #MLOps #AI


Just shipped a full end-to-end edge deployment of ResNet-50 PyTorch โ†’ ONNX โ†’ ExecuTorch Real-time vision running efficiently on the edge. No cloud. Low latency. Production-ready pipeline. Read the complete guide here: anubhutiailabs.substack.com/p/end-to-end-eโ€ฆ #EdgeAI #ResNet #PyTorch


Day 1 of my Deep Learning journey โœ”๏ธ Started with PyTorch ๐Ÿ”ฅ โ€ข What is PyTorch โ€ข Who created it โ€ข Timeline & evolution โ€ข Features โ€ข PyTorch vs TensorFlow Beginning my DL journey ๐Ÿš€ #DeepLearning #PyTorch #LearningInPublic


#Security #PyTorch #Malware If you use PyTorch for medical AI development, take note: if your training environment is compromised, patient data could leak. Supply chain contamination is genuinely dangerous โ€ผ๏ธ


Once you build it from scratch, the PyTorch version finally makes sense. Still a long way to go, but this tutorial removed a lot of the confusion. Next up: Transfer Learning for Computer Vision. #PyTorch #MachineLearning #BuildInPublic


๐ŸŸก PyTorch AutoSP: compiler in DeepSpeed auto-converts transformer code to sequence-parallel for 100k+ tokens. Tested 8ร— A100. 24-ai.news/en/news/2026-0โ€ฆ #PyTorch #LLM


MarketSonarใฏWindowsไธŠใงๅ‹•ไฝœใ™ใ‚‹็„กๆ–™ใฎใƒใƒฃใƒผใƒˆๅˆ†ๆžใ‚ฝใƒ•ใƒˆใงใ™ใ€‚ ใƒป้Š˜ๆŸ„ใ‚’ๆฌกใ€…ใƒ‘ใƒƒใƒ‘ใ€ใƒ‘ใƒƒใƒ‘ใจๅˆ‡ๆ›ฟใˆใชใŒใ‚‰้–ฒ่ฆงๅฏ่ƒฝ ใƒป่ฑŠๅฏŒใชใ‚คใƒณใ‚ธใ‚ฑใƒผใ‚ฟ ใƒปๅผทๅŠ›ใชๆคœ็ดขๆฉŸ่ƒฝ ใƒปPythonใจใฎ้€ฃๆบๆฉŸ่ƒฝ marketsonar2.wixsite.com/marketsonar/ #ใƒใƒฃใƒผใƒˆ #ใƒ†ใ‚ฏใƒ‹ใ‚ซใƒซๅˆ†ๆž #pytorch #keras


Fix PyTorch bottlenecks with profiling, not just more compute. ๐Ÿ“Š High DataLoader time = I/O limits. ๐Ÿง  High MemCpy = inefficient tensor creation. Get the guide: na2.hubs.ly/H05b2P50 #MLOps #PyTorch

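
The rules of thumb in the post (high DataLoader time โ‡’ I/O-bound, high MemCpy โ‡’ inefficient tensor creation) come from reading a profiler trace. A minimal CPU-only sketch with `torch.profiler` follows; the toy step, the deliberately slow list-to-tensor conversion, and the sort key are illustrative choices, not from the post:

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Toy "training step" that deliberately builds a tensor from a Python list
# each iteration, so the profiler has an inefficiency to surface.
def step():
    batch = torch.tensor([float(i) for i in range(1024)])  # slow creation
    w = torch.randn(1024)
    return (batch * w).sum()

with profile(activities=[ProfilerActivity.CPU]) as prof:
    for _ in range(10):
        step()

# Sort by total CPU time to see where the steps actually go. On a GPU run
# you would add ProfilerActivity.CUDA and look for Memcpy entries.
table = prof.key_averages().table(sort_by="cpu_time_total", row_limit=5)
print(table)
```

In the printed table, a dominant `aten::tensor` row is the list-conversion cost; preallocating or using `torch.arange` would make it disappear.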

Starting my ML/DL journey โ€” learning in public. Built my first Linear Regression model using PyTorch (nn module) to understand how models learn. ๐Ÿ”— Project: github.com/NeuroDeepDev/Dโ€ฆ Open to feedback on my code and approach. #MachineLearning #PyTorch #LearnInPublic
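
For reference, a minimal `nn`-module linear regression like the one described might look like this. The synthetic data, learning rate, and step count are made up for illustration, not taken from the linked project:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: y = 3x + 2 plus a little noise.
X = torch.linspace(0, 1, 100).unsqueeze(1)
y = 3 * X + 2 + 0.01 * torch.randn_like(X)

model = nn.Linear(1, 1)  # one learned weight, one learned bias
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Standard training loop: zero grads, forward, loss, backward, step.
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

w = model.weight.item()
b = model.bias.item()
```

After training, `w` and `b` should sit close to the generating values 3 and 2, which is a handy sanity check when learning how models fit.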


#Meta #PyTorch dominates #AI #research via #NVIDIA #CUDA, while #Google #TPUs lagged due to #XLA friction. #TorchTPU removes these barriers with #MPMD support and #Eager Modes, enabling smoother PyTorch execution on #TPUs. #GoogleCloudNext2026


#Meta #PyTorch became widely adopted for its easyโ€‘toโ€‘debug Eager Mode, strong Python integration, and tight coupling with #NVIDIA #CUDA, while #Googleโ€™s #TPU struggled due to low PyTorch compatibility requiring XLA and code rewrites.


Next up? Before letting torch.nn do the heavy lifting, my goal is to build a neural network completely from scratch. Time to code the raw math and truly understand the mechanics under the hood. #PyTorch #MachineLearning #100xDevs #LearningInPublic


Built at the @Scaler_SST AI Hackathon Finals (@Meta ร— @PyTorch ร— @huggingface) GPU Budget Negotiation Arena โ€” an OpenEnv-style multi-agent RL environment for compute-market negotiation... Space: huggingface.co/spaces/abhinavโ€ฆ #OpenEnv #Meta #PyTorch #HuggingFace #AI #RL #LLM


๐Ÿ”„ GitHub Trending (Refresh) TorchCode: LeetCode for PyTorch. Practice implementing softmax, attention, GPT-2 and more from scratch with instant auto-grading. Jupyter-based, 40 problems, no GPU required. Self-host or try online. 3,580 stars #PyTorch #MachineLearning #Education


#PyTorchCon Europe 2026 #CallForProposals is open! ๐ŸŽค Submit to speak by 8 February for sessions at the Paris event, 7-8 April. Share your #PyTorch insights on research, applications, tooling, best practices, performance, community building, or innovative use cases. โžก๏ธ Submit


January PyTorch newsletter is out. Get updates on #PyTorchConferenceEurope CFPs, #PyTorchDayIndia registration, a 2025 year-in-review, and new technical blogs on scalable RL, FlexAttention, and more. Read: hubs.ly/Q03__YVW0 Subscribe: hubs.ly/Q03__ZV30 #PyTorch


PyTorch at the micro-edge? Yes. See how ExecuTorch brings PyTorch models to Arm microcontrollersโ€”quantized, compiled, and running on a Corstone-320 + Ethos-U NPU (via FVP). ๐Ÿ”— hubs.la/Q045FHpT0 From training to deployment, end to end. #PyTorch #ExecuTorch #EdgeAI #TinyML


๐Ÿ“ฃ ICYMI: #PyTorchDayIndia is coming to Bengaluru on 7 February! Register NOW & reserve your seat for a full day of #PyTorch technical talks, discussions & networking with #AI & #ML leaders โžก๏ธ hubs.la/Q03-s0qv0



๐ŸŽฏ Secure your spot at early bird rates! #PyTorchDayIndia is going live 7 February in Bengaluru. Dive deep into #PyTorch with industry experts, cutting-edge demos & real-world #AI applications. Limited early bird pricing available until 31 January โžก๏ธ hubs.la/Q03_bJhM0


We're excited to announce that @nota_ai has joined PyTorch Foundation as a Silver Member to advance open source AI ๐ŸŽ‰ย  Nota AI will support the PyTorch community with model compression, quantization, and hardware-aware optimization, from edge to cloud. #PyTorch #OpenSourceAI


The Triton team at Meta shares a working implementation of warp specialization in the #Triton compiler, along with the current design and upcoming roadmap, and invites community feedback from the #PyTorch and open source AI community. ๐Ÿ”—hubs.la/Q03-cD2J0 #AIInfrastructure


New to the PyTorch Ecosystem Landscape: Kubetorch. Kubetorch enables ML research and development on Kubernetes across training, inference, RL, evals, data processing, and more, in a simple and unopinionated package. Learn more: hubs.la/Q0453qFd0 #PyTorch #Kubernetes


#PyTorch 2.10 includes updates focused on performance and numerical debugging. Next week, Andrey Talaman, Nikita Shulga, and Shangdi Yu (Meta) will provide a brief update on the release and answer questions in a live Q&A. Topics include TorchScript deprecation, torch.compile

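
Since the release Q&A above touches on torch.compile, here is a minimal hedged sketch of what compiling a function looks like. The function is an illustrative GELU-style approximation, and `backend="eager"` is chosen deliberately: it captures the graph but runs it without codegen, so no compiler toolchain is needed (the default `"inductor"` backend is what actually generates fused kernels):

```python
import torch

def gelu_like(x):
    # A small pointwise function torch.compile can capture as one graph
    # (tanh-based GELU approximation; 0.79788456 ~= sqrt(2/pi)).
    return 0.5 * x * (1 + torch.tanh(0.79788456 * (x + 0.044715 * x ** 3)))

# backend="eager" is useful for checking that a function is
# compile-friendly before paying the cost of real kernel generation.
compiled = torch.compile(gelu_like, backend="eager")

x = torch.randn(8)
out_eager = gelu_like(x)
out_compiled = compiled(x)
```

With the eager backend the compiled output matches the plain call exactly, which makes it a convenient correctness baseline before switching to inductor.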

Explaining PyTorch in 1 minute amzn.to/47woES6 #PyTorch #Python



โณ 3 days left to save! Spend only โ‚ฌ449 on your #PyTorchCon Europe ticket if you regsiter by 27 February. Join researchers, developers, and #AI engineers from 7-8 April in Paris and help shape #PyTorch in production. Schedule: hubs.la/Q044yF_z0 Register:

PyTorch's tweet image. โณ 3 days left to save! Spend only โ‚ฌ449 on your #PyTorchCon Europe ticket if you regsiter by 27 February. Join researchers, developers, and #AI engineers from 7-8 April in Paris and help shape #PyTorch in production.
Schedule: hubs.la/Q044yF_z0
Register:

Scaling RL for LLMs is hard. The #PyTorch team at Meta open sourced torchforge and shares #ReinforcementLearning lessons from evaluating it with Weaver on a 512 GPU cluster. ๐Ÿ”— hubs.la/Q03-fZs_0




FlexAttention has been adopted across popular #LLM ecosystem projects, including Hugging Face, vLLM, and SGLang, reducing the effort required to adapt and experiment with newer attention variants in modern LLMs. ๐Ÿ”— Read our latest blog from @Intel #PyTorch & Triton Teams:


#DeepLearning with #PyTorch Step-by-Step Beginner's Guides (3 volumes) Vol.1 (Fundamentals) and 2 (Computer Vision): amzn.to/3TWr8SL Vol.3 (Sequences and #NLProc): amzn.to/49bh5h2 โ€”โ€”โ€”โ€” #MachineLearning #ML #AI #DataScience #DataScientist #Python #ComputerVision

