Search results for #gpuprogramming

"Need better CUDA textbooks. 'Programming Massively Parallel Processors' is a good intro. I've created C/CUDA C implementations for first 3 chapters. Check book & my GitHub repo for details. #CUDA #GPUprogramming"

taras_y_sereda's tweet image. "Need better CUDA textbooks. 'Programming Massively Parallel Processors' is a good intro. I've created C/CUDA C implementations for first 3 chapters. Check book & my GitHub repo for details. #CUDA #GPUprogramming"
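
Editor's note: the running example in those opening PMPP chapters is vector addition. A minimal sketch of such an implementation, for reference only (generic code, not the linked repo's):

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Canonical PMPP-style vector addition: one thread per element.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);   // ceil(n/256) blocks
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", h_c[0]);   // expect 3.0

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

Later sketches in this feed omit this host boilerplate, which keeps the same shape (allocate, copy, launch, copy back) for every kernel.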

@yasunrik: Each common operation is implemented as its own .cu file. Modular. Intriguing. #CUDA #NVIDIA #GPUProgramming #libcudf
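
Editor's note: the one-operation-per-.cu-file layout this tweet admires can be sketched generically like this (file and function names are hypothetical, not libcudf's actual API):

    // ---- clamp.cuh (hypothetical header): the operation's one public entry point
    #pragma once
    void launch_clamp(float *d_data, int n, float lo, float hi);

    // ---- clamp.cu (hypothetical source): kernel and launcher, self-contained
    __global__ void clamp_kernel(float *d, int n, float lo, float hi) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] = fminf(fmaxf(d[i], lo), hi);
    }

    void launch_clamp(float *d_data, int n, float lo, float hi) {
        clamp_kernel<<<(n + 255) / 256, 256>>>(d_data, n, lo, hi);
    }

Each operation then compiles independently, so a build can pull in exactly the kernels it needs.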

For maximum performance, firms often develop custom CUDA kernels. This involves writing low-level code to directly program the GPU's parallel cores, squeezing out every drop of efficiency for critical tasks. #CUDA #GPUProgramming
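
Editor's note: as a taste of such hand-written kernels, here is a generic SAXPY using a grid-stride loop, a common pattern in tuned CUDA code because it decouples problem size from launch configuration (an illustrative sketch, not any particular firm's code):

    // y = a*x + y. The grid-stride loop lets one launch cover any n.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < n;
             i += blockDim.x * gridDim.x)   // stride = total threads in the grid
            y[i] = a * x[i] + y[i];
    }

With this shape fixed, "squeezing out efficiency" becomes a matter of launch parameters, occupancy, and memory coalescing rather than algorithm changes.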


@LGcommaI: #NewBooks
Gerassimos Barlas, "#Multicore and #GPUprogramming: An Integrated Approach", 2nd Edition, Morgan Kaufmann (August 2022).
Blurb: "offers broad coverage of key parallel computing tools, essential for multi-core CPU programming and many-core 'massively parallel' computing. >

@Modular: 🔥 New Series! Learning GPU programming through Mojo puzzles - on an Apple M4! No expensive data center GPUs needed. No CUDA C++ complexity. Just Python-like syntax with systems performance. First video just dropped: youtube.com/watch?v=-VsP4k… #Mojo #GPUProgramming #AppleSilicon
(Linked video: "Learn GPU Programming with Mojo 🔥 GPU Puzzles Tutorial - Introduction", youtube.com)



@cuda_programmer: Day 3 of GPU Programming
- implemented ReLU activation for a 1D array
- did the LeetCode problem of the day
- explored how GPUs handle element-wise operations in parallel
#100DaysOfGPU #CUDA #GPUProgramming #ParallelComputing #AI #DeepLearning #100DaysOfCode #MachineLearning #NVIDIA #CodingJourney
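
Editor's note: an element-wise ReLU over a 1D array maps to one thread per element. A kernel-only sketch of what the poster describes (not their actual code; host setup mirrors the vector-add sketch near the top of this feed):

    // ReLU, element-wise: each thread clamps one value at zero.
    __global__ void relu(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = fmaxf(in[i], 0.0f);
    }

    // Launch example: relu<<<(n + 255) / 256, 256>>>(d_in, d_out, n);

Because each output depends only on the matching input, no synchronization or inter-thread communication is needed, which is exactly why GPUs handle element-wise operations so well.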

@mario21ic: Finally, it has arrived!! I have got my #Nvidia Jetson Xavier NX :D #cuda #gpuProgramming

@MGaimann: And this was day 3 of the DL & #GPUprogramming course @LRZ_DE @HLRS_HPC @Uni_Stuttgart @NVIDIAAI DLI, with the grand finale: we learnt how to employ distributed stochastic gradient descent with multi-GPU, @TensorFlow & #Horovod @UberAILabs @LFAIDataFdn 🎉😍 Highly recommended course!

"Need better CUDA textbooks. 'Programming Massively Parallel Processors' is a good intro. I've created C/CUDA C implementations for first 3 chapters. Check book & my GitHub repo for details. #CUDA #GPUprogramming"

taras_y_sereda's tweet image. "Need better CUDA textbooks. 'Programming Massively Parallel Processors' is a good intro. I've created C/CUDA C implementations for first 3 chapters. Check book & my GitHub repo for details. #CUDA #GPUprogramming"

@hridoy_bashir: #GPUProgramming - Day 07:
🔧 #CPU Hazards 101 🚧: Ever heard of #Register Renaming & Out-of-Order Execution? They eliminate false dependencies, keeping instructions flowing smoothly. Watch out for Data Hazards (#RAW, #WAR, #WAW) in #MIPS, but fear not!
#COA #LearnInPublic
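
Editor's note: the three data-hazard flavours are easiest to see as instruction pairs. A small C program with MIPS-style snippets in comments (register names arbitrary, purely illustrative):

    /*
     * RAW (read after write, a true dependence):
     *   add $t0, $t1, $t2    # writes $t0
     *   sub $t3, $t0, $t4    # reads  $t0, must see the add's result
     *
     * WAR (write after read, a false dependence, removed by register renaming):
     *   sub $t3, $t0, $t4    # reads  $t0
     *   add $t0, $t1, $t2    # writes $t0, must not clobber it early
     *
     * WAW (write after write, also false, removed by renaming):
     *   add $t0, $t1, $t2    # writes $t0
     *   lw  $t0, 0($sp)      # writes $t0, the load's value must win
     */
    #include <stdio.h>

    int main(void) {
        int t1 = 2, t2 = 3, t4 = 1;
        int t0 = t1 + t2;    /* write t0 */
        int t3 = t0 - t4;    /* RAW: this read must follow the write above */
        printf("t0=%d t3=%d\n", t0, t3);   /* prints t0=5 t3=4 */
        return 0;
    }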

@tatavishnurao: Day 2 of #GPUProgramming:
> read an article about shared memory
> learnt about registers … global memory
> almost blacked out from the elaboration of L1 & L2, iykyk
> "repetition" to digest what I just learnt, for about 900k milliseconds

@hridoy_bashir: #GPUProgramming - Day 02:
🔄 Exploring CPU architectures! #RISC, like #ARM & #Power, opts for efficiency with simple instructions and many registers. #CISC, exemplified by the #Intel 8086, instead offers diverse, complex instructions. RISC excels in energy efficiency.
#COA #LearnInPublic

@hridoy_bashir: #GPUProgramming - Day 03:
🧠 CPUs: processors adapt with DISA.
#CPU's core duo: Control Unit & Datapath.
Datapath: Registers, ALU, Buses, Multiplexers, a data symphony!
🔄 Follow the Instruction Execution Cycle:
Fetch ➡️ Decode ➡️ Execute ➡️ Store ➡️ Update PC. 🕹️
#LearnInPublic
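
Editor's note: the cycle in this tweet is easy to make concrete with a toy interpreter. A minimal C sketch of fetch, decode, execute/store, and PC update over a hypothetical three-instruction ISA:

    #include <stdio.h>

    enum { HALT, LOADI, ADD };   /* hypothetical toy opcodes */

    int main(void) {
        /* Program: LOADI r0,2; LOADI r1,3; ADD r0,r0,r1; HALT */
        int mem[] = {LOADI, 0, 2, LOADI, 1, 3, ADD, 0, 0, 1, HALT};
        int reg[4] = {0}, pc = 0, running = 1;
        while (running) {
            int op = mem[pc];                /* fetch */
            switch (op) {                    /* decode */
            case LOADI:                      /* execute + store */
                reg[mem[pc + 1]] = mem[pc + 2];
                pc += 3;                     /* update PC */
                break;
            case ADD:
                reg[mem[pc + 1]] = reg[mem[pc + 2]] + reg[mem[pc + 3]];
                pc += 4;
                break;
            case HALT:
                running = 0;
                break;
            }
        }
        printf("r0 = %d\n", reg[0]);         /* prints 5 */
        return 0;
    }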

@LearnInShadows: Day 3 of GPU programming
At this rate I'll be writing custom inference kernels for AI by next month. The gap between PyTorch abstractions and bare metal isn't as wide as it seemed.
#CUDA #GPUProgramming #MachineLearning

@LearnInShadows: Day 2 of GPU programming
Never knew addition needs so much code 😂
Starting to get the hang of program_id. Used Gemini 3.0 to generate pseudocode since I'm new to GPU programming and didn't want full code.
Let's hope this momentum continues.

