Search results for #gpuprogramming

⚡ Built my own graphics engine: Asthrarisine. Sounds fun? Reality = invisible meshes, memory bugs & shader headaches. But here's what made it work: #OpenGL #GraphicsEngine #GPUProgramming #GLTF #GameDev #ShaderProgramming


Each common operation is implemented as its own .cu file—modular. Intriguing. #CUDA #NVIDIA #GPUProgramming #libcudf

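That one-operation-per-.cu-file layout is easy to mimic in a toy project. Below is a minimal sketch, not libcudf's actual code: the file name (scale.cu), kernel, and wrapper are all invented to show a translation unit that owns exactly one elementwise operation.

```cuda
// scale.cu: hypothetical library-style file that owns a single operation,
// an elementwise scale. Compiled on its own and linked into the project.
#include <cuda_runtime.h>

__global__ void scale_kernel(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;          // one thread per element
}

// Host-side entry point the rest of the project would call.
void scale(float* d_data, float factor, int n, cudaStream_t stream = 0) {
    int block = 256;
    int grid  = (n + block - 1) / block;   // enough blocks to cover n elements
    scale_kernel<<<grid, block, 0, stream>>>(d_data, factor, n);
}
```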

"Need better CUDA textbooks. 'Programming Massively Parallel Processors' is a good intro. I've created C/CUDA C implementations for first 3 chapters. Check book & my GitHub repo for details. #CUDA #GPUprogramming"

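The canonical exercise from those early chapters is vector addition. A minimal sketch of it in CUDA C, not taken from the linked repo (array size, names, and launch parameters are illustrative):

```cuda
// vecadd.cu: the classic "hello world" of CUDA C, C[i] = A[i] + B[i].
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void vecAdd(const float* A, const float* B, float* C, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) C[i] = A[i] + B[i];         // one thread per element
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *hA = (float*)malloc(bytes), *hB = (float*)malloc(bytes), *hC = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);

    printf("C[0] = %f\n", hC[0]);          // expect 3.0
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```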

"10 days into CUDA, and I’ve earned my first badge of honor! 🚀 From simple kernels to profiling, every day is a step closer to mastering GPU computing. Onward to 100! #CUDA #GPUProgramming #100DaysOfCUDA"


#GPUProgramming - Day 07: 🔧 #CPU Hazards 101 🚧: Ever heard of #Register Renaming & Out-of-Order Execution? They eliminate the false data hazards (#WAR, #WAW), ensuring smooth sailing for instructions. True data hazards (#RAW) in #MIPS still need forwarding or stalls, but fear not! #COA #LearnInPublic

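A toy way to see the three dependence types without any MIPS assembly: plain statements carry the same RAW/WAR/WAW relationships that forwarding, stalls, and register renaming manage between machine instructions. A hedged C++ sketch (variable names invented):

```cuda
// hazards.cu (plain host C++, compiles with nvcc or any C++ compiler).
// The dependences between these statements mirror the three data-hazard types.
#include <cstdio>

int main() {
    int a = 1, b = 2;
    int c = a + b;   // I1: writes c
    int d = c;       // I2 reads c  -> RAW on c (true dependence, needs I1's result)
    c = b * 2;       // I3 writes c -> WAR with I2 (must not clobber c before I2 reads it)
    c = a - b;       // I4 writes c -> WAW with I3 (writes must land in program order)
    printf("%d %d\n", c, d);   // prints: -1 3
    return 0;
}
```

Register renaming removes the WAR and WAW conflicts by giving each write its own physical register; only the RAW dependence fundamentally limits how early I2 can run.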

#GPUProgramming - Day 02: 🔄 Exploring CPU architectures! #RISC, like #ARM & #Power, opts for simple instructions and many registers. #CISC, exemplified by the #Intel 8086, offers diverse, complex instructions that do more per opcode. RISC excels in energy efficiency. #COA #LearnInPublic


#GPUProgramming - Day 03: 🧠 CPUs: Processors are shaped by their ISA. #CPU's core duo - Control Unit & Datapath. Datapath: Registers, ALU, Buses, Multiplexers – a data symphony! 🔄 Follow the Instruction Execution Cycle: Fetch ➡️ Decode ➡️ Execute ➡️ Store ➡️ Update PC. 🕹️ #LearnInPublic

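The fetch ➡️ decode ➡️ execute ➡️ store ➡️ update-PC loop is easiest to see in a toy interpreter. A hedged sketch with a made-up three-instruction ISA (nothing here is MIPS- or GPU-specific):

```cuda
// toy_cpu.cu (plain host C++, builds with nvcc or any C++ compiler).
// A toy machine that makes the instruction execution cycle explicit.
#include <cstdio>
#include <vector>

enum Op { LOADI, ADD, HALT };
struct Instr { Op op; int dst, a, b; };   // dst = destination register index

int main() {
    std::vector<Instr> program = {
        {LOADI, 0, 5, 0},   // r0 = 5
        {LOADI, 1, 7, 0},   // r1 = 7
        {ADD,   2, 0, 1},   // r2 = r0 + r1
        {HALT,  0, 0, 0},
    };
    int reg[4] = {0};
    int pc = 0;                                       // program counter

    while (true) {
        Instr ins = program[pc];                      // Fetch
        switch (ins.op) {                             // Decode + Execute
            case LOADI: reg[ins.dst] = ins.a;                   break;  // Store result
            case ADD:   reg[ins.dst] = reg[ins.a] + reg[ins.b]; break;  // Store result
            case HALT:  printf("r2 = %d\n", reg[2]);  return 0; // prints: r2 = 12
        }
        pc += 1;                                      // Update PC
    }
}
```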

#GPUProgramming - Day 01: 🚀 Exploring RISC architecture: Simplified, optimized instructions in one clock cycle. 🔄 Bye, CISC complexity! 🏎️ Registers rule, boosting speed. 🤖💡 Compiler-friendly design, slick pipelining for simultaneous processing! 🕵️‍♂️ #COA #RISC #LearnInPublic


#GPUProgramming - Day 08: 🚀 Explored #computerarchitecture today! 🖥️ Branch prediction tackles control hazards; the #Pentium FDIV bug is a classic reminder that even the hardware can ship with bugs. 💡 Memory #Hierarchy is key—#RAM, #cache levels (L1, L2, L3), and storage devices play crucial roles. 🔄🌐 #Memory #LearnInPublic


For maximum performance, firms often develop custom CUDA kernels. This involves writing low-level code to directly program the GPU's parallel cores, squeezing out every drop of efficiency for critical tasks. #CUDA #GPUProgramming
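As a hedged illustration of what a "custom kernel" often means in practice, here is a sketch that fuses a scale and a bias-add into one kernel so each element makes a single trip through GPU memory; the names, launch configuration, and grid-stride pattern below are illustrative, not any firm's production code.

```cuda
// fused_scale_add.cu: hypothetical hand-written kernel. Instead of launching
// "scale" and "add bias" separately (two reads + two writes of y), fuse them
// so each element is read once and written once.
#include <cuda_runtime.h>

__global__ void fused_scale_add(float* __restrict__ y,
                                const float* __restrict__ x,
                                float scale, float bias, int n) {
    // Grid-stride loop: handles any n with any launch configuration.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x) {
        y[i] = x[i] * scale + bias;
    }
}

void launch_fused_scale_add(float* d_y, const float* d_x,
                            float scale, float bias, int n,
                            cudaStream_t stream = 0) {
    int block = 256;
    int grid  = (n + block - 1) / block;
    fused_scale_add<<<grid, block, 0, stream>>>(d_y, d_x, scale, bias, n);
}
```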


$AMD will train 100,000 STEM graduates in open-source GPU programming and grant 100,000 hours of free cloud access to Indian researchers and startups: ibn.fm/o5GJF #OpenSource #GPUProgramming #AIIndia


#GPUProgramming - Day 06: 🔍 Diving into computer architecture! 🖥️ Structural hazards arise when hardware resources are in high demand, causing contention among instructions. Data hazards? RAW, WAR, WAW – the battle for data paths and registers! 💡 #COA #LearnInPublic


#GPUProgramming - Day 04: 🕰️ Dive into processor architectures! 🧠 Single-cycle execution completes each instruction in one clock cycle, so the datapath must be versatile and the clock must fit the slowest instruction. 🔄 Multi-cycle designs break execution into shorter steps, spending several faster cycles per instruction. ⏳ #ComputerArchitecture #LearnInPublic 🚀


#GPUProgramming - Day 05: 🚀 Pipelining in computer architecture boosts performance by dividing instruction execution into stages. Techniques like forwarding, branch prediction, and superscalar processors enhance parallelism.💻🌐 #ComputerArchitecture #Pipelining #LearnInPublic


📣 Save the date. Join Dr. Wen-mei Hwu & Dr. Izzat El Hajj on May 28, 2025, at 10:00 AM PDT for a 1-hour webinar on teaching and accelerating #CUDA. Get insider tips, book updates, and expert advice. Register now ➡️ #CUDA #GPUProgramming #TechWebinar bit.ly/3YRAhOI


#NewBooks Gerassimos Barlas #Multicore and #GPUprogramming: An Integrated Approach 2nd Edition Morgan Kaufmann (August 2022) Blurb: "offers broad coverage of key parallel computing tools, essential for multi-core CPU programming and many-core "massively parallel" computing. >


"#gpuprogramming"에 대한 결과가 없습니다

This has to be one of the best GPU programming resources I've found - the GPU Glossary from Modal breaks down complex concepts with clear visuals and explanations, from CUDA architecture to Tensor Cores to CTAs. modal.com/gpu-glossary


Understanding GPU Architecture from Cornell cvw.cac.cornell.edu/gpu-architectu… During a low-level discussion at a casual meetup, many folks were interested in understanding GPUs more closely. While CPUs optimize for complex control flow (see those big cores + caches), GPUs maximize…

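A small sketch of that design contrast, assuming the usual SAXPY example (the code below is illustrative, not from the Cornell material): the CPU version is one loop on one big core, while the GPU version spreads the same arithmetic across thousands of lightweight threads to hide memory latency.

```cuda
// saxpy.cu: the same y = a*x + y computed both ways.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// CPU: one core, one loop; latency hidden by caches and out-of-order execution.
void saxpy_cpu(float a, const float* x, float* y, int n) {
    for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
}

// GPU: one cheap thread per element; throughput over single-thread speed.
__global__ void saxpy_gpu(float a, const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y_cpu(n, 2.0f), y_gpu(n, 2.0f);
    saxpy_cpu(3.0f, x.data(), y_cpu.data(), n);

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y_gpu.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    saxpy_gpu<<<(n + 255) / 256, 256>>>(3.0f, dx, dy, n);   // ~4096 blocks x 256 threads
    cudaMemcpy(y_gpu.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("cpu %.1f  gpu %.1f\n", y_cpu[0], y_gpu[0]);     // both 5.0
    cudaFree(dx); cudaFree(dy);
    return 0;
}
```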


📝 New Blog Post 📝 I spent the summer diving into TSL and WebGPU, porting over my favorite shader work along the way. This article compiles all my learnings, alongside many Compute Shader applications and practical TSL shader patterns: blog.maximeheckel.com/posts/field-gu…


Stunning UI visuals in seconds. Cut design exploration time by 80% using new GPT-4o image gen. My secret process revealed: 1 – Choose 2 image references


gm all and happy thursday! twas a clear night last night, got a few pictures in, including some experiments with the bahtinov filter


One thing that can really improve your gposes is to take them in 4k. It makes small details a lot cleaner, and improves ADOF & MXAO/RTGI. You don't need a 4k monitor to do it, just a good enough GPU to handle it! 🧵on how to set this up for Nvidia graphics cards


NEWS: Introducing NVIDIA Rubin CPX — a new class of GPU purpose-built to handle million-token coding and generative video applications with groundbreaking speed and efficiency. Read more ➡️ nvda.ws/4pcjYau #AIInfraSummit #NVIDIARubin


core.load_4k(wallpaper("Viewing Party"));


playground for CUDA code using simulated GPUs!


From sketches to sleek UI widgets. Cut design exploration time by 80% using new GPT-4o image gen. Here's my exact process:


I literally gained 21% 1% Lows after setting:

🔧 Texture Filtering from Bilinear to 16x
🔧 Ambient Occlusion from Disabled to Medium

at a cost of 8% loss in Avg. FPS. IDK how it works, but it does for me. Give it a try and let me know. 😅


This video's graphic of how to think about data being passed from the CPU to the GPU is excellent. youtube.com/watch?v=YNFaOn…

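Without the video's graphics, the basic flow it describes can still be sketched in code: allocate device memory, copy host data across the bus, compute where the data now lives, and copy the result back. A minimal, hedged example (the size and the kernel are made up):

```cuda
// copy_roundtrip.cu: the CPU -> GPU -> CPU data flow made explicit.
#include <cuda_runtime.h>
#include <vector>

__global__ void square(float* d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= d[i];
}

int main() {
    const int n = 1024;
    std::vector<float> host(n, 3.0f);          // data starts in CPU memory

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));       // separate allocation in GPU memory
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    square<<<(n + 255) / 256, 256>>>(dev, n);  // compute where the data now lives

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    return host[0] == 9.0f ? 0 : 1;            // result made the round trip
}
```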

I know, I'm a disgrace to all Indians, but where exactly do I put in the graphics card? It's my old PC that I want to get running again


GPU programming: DAY 0

inspired by @sadernoheart and @elliotarledge I'm joining the gang of sharing daily progress and things that may be helpful

GOAL: get extremely good at GPU programming

REASON: want + can

so weekend + today (let's include head start):


GPU computing before CUDA was *weird*. Memory primitives were graphics shaped, not computer science shaped. Want to do math on an array? Store it as an RGBA texture. Fragment Shader for processing. *Paint* the result in a big rectangle.


for anyone trying to get their hands on gpu programming with CUDA


BREAKING: NVIDIA just announced Rubin CPX, a new class of GPU purpose-built for massive-context processing. Rubin CPX works hand in hand with NVIDIA Vera CPUs and Rubin GPUs inside the new NVIDIA Vera Rubin NVL144 CPX platform. This enables AI systems to handle million-token…


Turn images into dreamy iridescent visuals with GPT-4o Prompt 👇


Graphics Processing Clusters! Gorgeous and beautiful! #Processors

