#parallelcomputing search results

milan_milanovic: 𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝘁 𝗶𝘀 𝗻𝗼𝘁 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲 𝗮𝘀 𝗽𝗮𝗿𝗮𝗹𝗹𝗲𝗹

Most developers use these terms interchangeably. That's a mistake.

𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆 is about structure. You design your program so that multiple tasks can progress simultaneously without waiting for…
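A minimal Python sketch of the distinction (the fetch/crunch tasks and the timings are illustrative, not from the tweet): asyncio gives you concurrency, a structure in which one thread interleaves many waiting tasks, while multiprocessing gives you parallelism, with tasks literally executing at the same time on separate cores.

```python
import asyncio
from multiprocessing import Pool

async def fetch(i: int) -> int:
    # Concurrency: while this task awaits "I/O", the event loop runs the others.
    await asyncio.sleep(1)                  # stand-in for a network call
    return i

def crunch(n: int) -> int:
    # Parallelism: CPU-bound work that only goes faster with more cores.
    return sum(i * i for i in range(n))

async def concurrent_demo() -> list[int]:
    # Ten 1-second "calls" overlap on a single thread: roughly 1 s total.
    return await asyncio.gather(*(fetch(i) for i in range(10)))

def parallel_demo() -> list[int]:
    # Four worker processes execute crunch() simultaneously on separate cores.
    with Pool(processes=4) as pool:
        return pool.map(crunch, [2_000_000] * 4)

if __name__ == "__main__":
    print(asyncio.run(concurrent_demo()))
    print(parallel_demo())
```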

BrianRoemmele: As I test my circuit here I got a chill.

This is the most important AI / Robotic discovery of the century.

In theory a complete GPU data center the size of a home refrigerator.

We may very well live in a moment over the next five years where this happens.

Analog AI…

BrianRoemmele: BOOM! MAJOR AI SPEEDUP!

Hot Rod AI: 100 times faster inference, 100,000 times less power!

Reviving Analog Circuits: A Leap Toward Ultra-Efficient AI with In-Memory Attention

I got my start in analog electronics when I was a kid and always thought analog computers would make a…


exolabs: Clustering NVIDIA DGX Spark + M3 Ultra Mac Studio for 4x faster LLM inference.

DGX Spark: 128GB @ 273GB/s, 100 TFLOPS (fp16), $3,999
M3 Ultra: 256GB @ 819GB/s, 26 TFLOPS (fp16), $5,599

The DGX Spark has about one-third the memory bandwidth of the M3 Ultra but roughly 4x the FLOPS.

By running…
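A quick sanity check of the ratios quoted above, using only the spec figures from the tweet (nothing here is implied about how the clustering itself works):

```python
# Spec figures quoted in the tweet.
dgx_spark = {"bandwidth_gb_s": 273, "tflops_fp16": 100}   # $3,999
m3_ultra  = {"bandwidth_gb_s": 819, "tflops_fp16": 26}    # $5,599

# The M3 Ultra moves data ~3x faster ...
bandwidth_ratio = m3_ultra["bandwidth_gb_s"] / dgx_spark["bandwidth_gb_s"]  # 3.0
# ... while the DGX Spark has ~3.8x the raw fp16 compute.
compute_ratio = dgx_spark["tflops_fp16"] / m3_ultra["tflops_fp16"]          # ~3.85

print(f"M3 Ultra bandwidth advantage: {bandwidth_ratio:.2f}x")
print(f"DGX Spark fp16 compute advantage: {compute_ratio:.2f}x")
```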

We’re talking about domestic high-throughput compute, low-latency data routing, and on-shore model-training capability all built at national scale. This is how India moves from being a consumer of cloud capacity to a producer of global compute. The…


vllm_project: Announcing the completely reimagined vLLM TPU! In collaboration with @Google, we've launched a new high-performance TPU backend unifying @PyTorch and JAX under a single lowering path for amazing performance and flexibility.

🚀 What's New?
- JAX + PyTorch: Run PyTorch models on…

ryanchenkie: If you're using async await, you might be running async calls sequentially that can actually be done in parallel. Here's one way you can run them in parallel to speed things up 👌
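The screenshot isn't reproduced here; in JavaScript the usual fix is Promise.all. To keep the examples on this page in one language, here is the same pattern in Python with asyncio.gather (fetch_user and fetch_orders are made-up stand-ins for independent async calls):

```python
import asyncio

async def fetch_user(uid: int) -> dict:
    await asyncio.sleep(1)                 # stand-in for an HTTP call
    return {"id": uid}

async def fetch_orders(uid: int) -> list:
    await asyncio.sleep(1)                 # another, independent call
    return [{"order": 1, "user": uid}]

async def fetch_sequentially(uid: int):
    # Each await finishes before the next starts: ~2 s total.
    user = await fetch_user(uid)
    orders = await fetch_orders(uid)
    return user, orders

async def fetch_in_parallel(uid: int):
    # Both coroutines are in flight at once: ~1 s total.
    return await asyncio.gather(fetch_user(uid), fetch_orders(uid))

if __name__ == "__main__":
    print(asyncio.run(fetch_in_parallel(42)))
```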

SuperFastPython: Tip: "Concurrency" refers to the ability to manage multiple tasks at once, interleaving them without necessarily executing them simultaneously. #Python #Concurrency

Gradient_HQ: Intelligence has been locked in walled gardens. Today, we’re opening the gates.

Parallax now runs in Hybrid mode, with Macs and GPUs serving large models together in a truly distributed framework.

pharos_network: Understanding Pharos’ Parallel Execution Framework 🧵

Sadegh_48: Sub-Second Data Co-Processor isn’t just a feature, it’s the moment when latency stops being the excuse and becomes the enemy we defeat.

In Q2 2025, @Covalent_HQ processed 471 million API calls across 100+ blockchains, pushing total usage past 17 billion calls, and over 95% of…

godofprompt: Holy shit... Meta just cracked the art of scaling RL for LLMs.

For the first time ever, they showed that "reinforcement learning follows predictable scaling laws" just like pretraining.

Their new framework, 'ScaleRL', fits a sigmoid compute-performance curve that can forecast…

Prediction markets need fairness, speed & scale — and that’s exactly what @linera_io microchains deliver. Each market runs on its own parallel chain → instant trades, no congestion, real-time outcomes. The future of prediction is micro-parallel. 🔮⛓️ #Linera


abhi9u: Exploiting the power of multi-core and many-core processors requires writing parallel algorithms and data structures. Moreover, the implementation also needs to be cache-friendly. But these aspects are typically not covered in introductory courses.

I came across this…

agarwl_: Sneak peek from a paper about scaling RL compute for LLMs: probably the most compute-expensive paper I've worked on, but hoping that others can run experiments cheaply for the science of scaling RL.

Coincidentally, this is similar motivation to what we had for the NeurIPS best…

badamczewski01: Instruction Level Parallelism, when it works, feels like magic. Here's a simple example in C#. #dotnet

Say hello to the Parallel Task MCP Server— the first async MCP server for complex research tasks. Start deep research, move on to other work, retrieve results when done. No blocking, no waiting around.


marktenenholtz: Common Pandas problem: you have a big dataframe and a function that can't be easily vectorized. So, you want to run it in parallel.

Surprisingly, most answers on StackOverflow just point you to a different library. So here's a little recipe I use:
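The recipe in the screenshot isn't reproduced here, so the sketch below is a generic standard-library version of the idea rather than the author's exact code: split the dataframe into chunks, apply the function in worker processes, and concatenate the results (slow_feature is a made-up placeholder for the non-vectorizable function):

```python
import numpy as np
import pandas as pd
from multiprocessing import Pool

def slow_feature(row: pd.Series) -> float:
    # Placeholder for a per-row function that can't be vectorized.
    return sum(v * v for v in row)

def apply_chunk(chunk: pd.DataFrame) -> pd.Series:
    # Each worker process applies the function to its own slice of rows.
    return chunk.apply(slow_feature, axis=1)

def parallel_apply(df: pd.DataFrame, func, n_workers: int = 4) -> pd.Series:
    chunks = np.array_split(df, n_workers)   # split the rows into n_workers pieces
    with Pool(n_workers) as pool:
        results = pool.map(func, chunks)     # one chunk per process
    return pd.concat(results)                # stitch the partial results back together

if __name__ == "__main__":
    df = pd.DataFrame(np.random.rand(100_000, 8))
    out = parallel_apply(df, apply_chunk)
    print(out.head())
```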

Beginning a daily habit of reading research papers on GPU systems and distributed communication. Sharing brief insights here to track progress and connect with others in the field. #ParallelComputing #HighPerformanceComputing #ResearchCommunity #GPU #SystemsResearch #HPC #MLSys


Turbit: Advanced Node.js library for high-speed parallel computing on multicore CPUs. Boosts intensive tasks. #Nodejs #ParallelComputing #HighPerformance


📄 Benchmarking and Parallelization of Electrostatic Particle-In-Cell for low-temperature Plasma Simulation by particle-thread Binding by Libn Varghese et al. #HPC #HighPerformanceComputing #ParallelComputing arxiv.org/abs/2506.21524…


📄 PEVLM: Parallel Encoding for Vision-Language Models by Letian Kang et al. #HPC #HighPerformanceComputing #ParallelComputing arxiv.org/abs/2506.19651…


📄 Fully-Dynamic Parallel Algorithms for Single-Linkage Clustering by Quinten De Man et al. #HPC #HighPerformanceComputing #ParallelComputing arxiv.org/abs/2506.18384…


tomconder: There is a theoretical maximum speedup in concurrent programming. 💻 Amdahl's Law says that if 50% of the code is parallelizable then there is a 2x max speedup. 📈 95% parallelizable code gives 20x. #TechTalk #ParallelComputing #SoftwareEngineering
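The quoted figures are the infinite-processor limit of Amdahl's Law, speedup(p, n) = 1 / ((1 - p) + p / n), which tends to 1 / (1 - p) as n grows. A quick check in Python:

```python
def amdahl_speedup(p: float, n: float = float("inf")) -> float:
    """Amdahl's Law: p = parallelizable fraction, n = number of processors."""
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.50))        # 2.0   -> at most 2x, no matter how many cores
print(amdahl_speedup(0.95))        # ~20.0 -> at most 20x
print(amdahl_speedup(0.95, n=32))  # ~12.5 -> with a realistic 32 cores
```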

ecomputerbooks: The Practice of Parallel Programming: freecomputerbooks.com/The-Practice-o… An advanced guide to parallel and multithreaded programming, beyond the high-level design of applications. #ParallelProgramming #ConcurrentProgramming #ParallelComputing #DistributedProgramming #CloudComputing

ecomputerbooks: BIG CPU, BIG DATA: Solving the World's Toughest Problems with Parallel Computing - freecomputerbooks.com/Big-CPU-Big-Da… Look for the "Read and Download Links" section to download. #BigData #cpu #ParallelComputing #ParallelProgramming #DistributedComputing #ConcurrentProgramming

ecomputerbooks: Parallel Programming with CUDA: Architecture, Analysis, Application - freecomputerbooks.com/Parallel-Progr… Look for the "Read and Download Links" section to download. Follow/Connect me if you like this post. #ParallelProgramming #CUDA #ParallelComputing #ConcurrentProgramming #programming

rprodigest: Me: runs detectCores() from {parallel}
detectCores(): Ready to unleash the parallel processing power? Let's get core-geous! 😎🚀 #ParallelComputing #rstats #rbloggers

ecomputerbooks: (Open Access) Introduction to Parallel Computing: freecomputerbooks.com/Introduction-t… Look for the "Read and Download Links" section to download. Follow/Connect me if you like this post. #ParallelComputing #ParallelProgramming #ConcurrentProgramming #Algorithms #ParallelAlgorithms

ecomputerbooks: (Open Access) Is Parallel Programming Hard? If So, What Can You Do About It? freecomputerbooks.com/Is-Parallel-Pr… Look for the "Read and Download Links" section to download. Follow me if you like this post. #ParallelComputing #ParallelProgramming #ConcurrentProgramming #ParallelAlgorithms

CGarageCoder: I warmly recommend those two books to all #gamedev wanting to dive deep into #parallelcomputing. I read them years ago, and they were of great help. I'm currently "rethinking" some #procedural algorithms using the parallelism paradigm.

ComputerSociety: David A. Padua from the @UofIllinois has been named the 2024 Ken Kennedy Award recipient for his contributions to the theory and practice of parallel compilation and tools, as well as outstanding mentorship and community service. #IEEE #ParallelComputing bit.ly/3Batg2i

PatentPulse: Intel's patent application #US20250110741A1 reveals support for an 8-bit floating point format in parallel computing. It enables 32-way dot-product ops using interconnected multipliers, shifters & adders in graphics architecture. #ParallelComputing #GraphicsArchitecture $INTC #Intel

DrMattCrowson: RT Use Python to Download Multiple Files (or URLs) in Parallel #datascience #python #parallelcomputing #programming dlvr.it/SvqKFN
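The linked article isn't reproduced here; a common way to do this is a thread pool, since downloads are network-bound and threads release the GIL while waiting (the URLs below are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
from urllib.request import urlretrieve

# Placeholder URLs; substitute the real files you want to fetch.
URLS = [
    "https://example.com/file1.csv",
    "https://example.com/file2.csv",
    "https://example.com/file3.csv",
]

def download(url: str, dest_dir: Path = Path("downloads")) -> Path:
    dest_dir.mkdir(exist_ok=True)
    target = dest_dir / url.rsplit("/", 1)[-1]
    urlretrieve(url, target)   # blocking network I/O; the GIL is released while waiting
    return target

if __name__ == "__main__":
    # Threads are enough here: the work is network-bound, not CPU-bound.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(download, u): u for u in URLS}
        for fut in as_completed(futures):
            print("finished", futures[fut], "->", fut.result())
```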

EuroCC_Turkey: 🚀 New Training Opportunity!
📅 December 12, 2024
⏰ 3 Hours | 🖥️ 100% Online
🎙️ Mustafa Onur ÖZKAN
🔗 indico.truba.gov.tr/event/194/
📧 ncc@ulakbim.gov.tr
#HPC #MATLAB #ParallelComputing @EuroCC_project #EuroCC4SEE @UHeM_ITU

DrMattCrowson: RT Vectorize and Parallelize RL Environments with JAX: Q-learning at the Speed of Light ⚡ dlvr.it/SxTHL8 #parallelcomputing #jax #python #reinforcementlearning
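The linked article isn't reproduced here either; the core trick the title refers to is jax.vmap, which turns a single-environment step function into a batched one. A toy sketch with a made-up 1-D environment (not the article's code):

```python
import jax
import jax.numpy as jnp

def step(state: jnp.ndarray, action: jnp.ndarray):
    # Toy 1-D environment: action 1 moves the agent right, action 0 moves it left;
    # reward is 1 once the agent reaches position 5.
    new_state = state + jnp.where(action == 1, 1, -1)
    reward = jnp.where(new_state >= 5, 1.0, 0.0)
    return new_state, reward

# vmap maps step over a leading batch axis, so thousands of environments advance
# in a single call; jit compiles the whole batched step for the accelerator.
batched_step = jax.jit(jax.vmap(step))

states = jnp.zeros(4096, dtype=jnp.int32)
key = jax.random.PRNGKey(0)
actions = jax.random.bernoulli(key, 0.5, shape=(4096,)).astype(jnp.int32)

states, rewards = batched_step(states, actions)
print(states.shape, rewards.sum())
```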

DrMattCrowson: RT Supercharge Your Python Asyncio With Aiomultiprocess: A Comprehensive Guide #parallelcomputing #pythontoolbox #concurrency #python #datascience dlvr.it/SrjnYS
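aiomultiprocess combines asyncio with multiprocessing so CPU-bound work doesn't block the event loop. Rather than assume that library's exact API, here is a standard-library sketch of the same combination, driving a ProcessPoolExecutor from an asyncio program:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    # Heavy, picklable work that belongs in a separate process.
    return sum(i * i for i in range(n))

async def main() -> None:
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=4) as pool:
        # Offload CPU-bound jobs to worker processes while the event loop
        # stays free for other coroutines (e.g. network I/O).
        jobs = [loop.run_in_executor(pool, cpu_bound, 2_000_000) for _ in range(8)]
        results = await asyncio.gather(*jobs)
    print(results[:2])

if __name__ == "__main__":
    asyncio.run(main())
```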
