#randomlinearnetworkcoding search results

Matrix👨‍💻 while :;do echo $LINES $COLUMNS $(( $RANDOM % $COLUMNS)) $(printf "\U$(($RANDOM % 500))");sleep 0.05;done|gawk '{a[$3]=0;for (x in a){o=a[x];a[x]=a[x]+1;printf "\033[%s;%sH\033[2;32m%s",o,x,$4;printf "\033[%s;%sH\033[1;37m%s\033[0;0H",a[x],x,$4;if (a[x]>=$1){a[x]=0;}}}'


Introducing RND1, the most powerful base diffusion language model (DLM) to date. RND1 (Radical Numerics Diffusion) is an experimental DLM with 30B parameters (3B active) and a sparse MoE architecture. We are making it open source, releasing weights, training details, and code to…


Hello everyone, let’s break down how Random Linear Network Coding (RLNC) compares to traditional error correction methods and why it’s a game changer for Web3 & modern networks. 🔹What is RLNC? Random Linear Network Coding (RLNC) is a network coding technique that transmits…
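The "mixing" that RLNC threads like this one describe can be sketched in a few lines. This is a toy illustration over GF(2) (coefficients are single bits and mixing is XOR), not production RLNC, which typically works over a larger field such as GF(2^8); all function names here are my own.

```python
import random

def rlnc_encode(packets, n_coded, rng):
    """Emit n_coded random linear combinations (XOR over GF(2)) of the packets."""
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in packets]
        if not any(coeffs):                       # avoid the useless all-zero combination
            coeffs[rng.randrange(len(packets))] = 1
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2); needs any k linearly independent pieces."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            raise ValueError("not enough linearly independent pieces")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(k)]
```

The property the thread is pointing at: any k linearly independent coded pieces reconstruct the original k packets, regardless of which particular pieces arrive.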


landscapes of arbitrary detail and expanse can be generated by layering random noise of increasing frequency and decreasing amplitude
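A minimal sketch of that layering in Python, using 1-D value noise (seeded random values at integer points, smoothstep-interpolated between them). The helper names and seeding scheme are mine; 2-D terrain works the same way with one more axis.

```python
import math
import random

def value_noise_1d(x, seed):
    """One octave: random values anchored at integer points, smoothly interpolated."""
    x0 = math.floor(x)
    a = random.Random(seed * 1_000_003 + x0).random()
    b = random.Random(seed * 1_000_003 + x0 + 1).random()
    t = x - x0
    t = t * t * (3 - 2 * t)              # smoothstep easing for continuous slopes
    return a + (b - a) * t

def fractal_noise_1d(x, octaves=5, seed=0):
    """Layer octaves of increasing frequency and decreasing amplitude."""
    total = norm = 0.0
    amplitude, frequency = 1.0, 1.0
    for i in range(octaves):
        total += amplitude * value_noise_1d(x * frequency, seed + i)
        norm += amplitude
        amplitude *= 0.5                 # decreasing amplitude
        frequency *= 2.0                 # increasing frequency
    return total / norm                  # normalised back to [0, 1]
```

More octaves add finer detail without changing the large-scale shape, which is why the technique scales to "arbitrary detail and expanse".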


🧵 Everyone is chasing new diffusion models—but what about the representations they model from? We introduce Discrete Latent Codes (DLCs): - Discrete representation for diffusion models - Uncond. gen. SOTA FID (1.59 on ImageNet) - Compositional generation - Integrates with LLM 🧱




Select one random number: 1-40 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 Check your DM in 24 hours #RubiNetwork


🚨Our new paper: Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards We challenge the RL status quo. We find you don't need complex policy optimization for top-tier math reasoning. The key? Evaluating the Q function of a simple uniformly random policy. 🤯


Apple and Meta have published a monstrously elegant compression method that encodes model weights using pseudo-random seeds. The trick is to approximate model weights as the linear combination of a randomly generated matrix with a fixed seed and a smaller vector t.
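A sketch of the idea with NumPy, assuming nothing about the actual paper beyond the tweet's description: regenerate a random matrix U from a stored seed and keep only a small coefficient vector t. The least-squares fit and the function names are my own choices, not the paper's.

```python
import numpy as np

def compress(w, k, seed):
    """Approximate weight vector w as U @ t, where U is regenerated from `seed`."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((w.size, k))
    t, *_ = np.linalg.lstsq(U, w, rcond=None)   # best t in the least-squares sense
    return seed, t                               # store only the seed and k numbers

def decompress(seed, t, n):
    """Rebuild the same U from the seed and reconstruct the approximation."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n, t.size))
    return U @ t
```

The payoff: n weights shrink to k coefficients plus one integer seed, since the random matrix itself never needs to be stored or transmitted.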


I don’t know if it’s real, but I did something very similar back in 2003 — I needed random numbers and just hardcoded them.


RECIPE: (1) Random points on a plane (2) Voronoi diagram from the points (3) Scale & turn the Voronoi cells (4) Orbit the random points. Code & article: community.wolfram.com/groups/-/m/t/3…
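Steps (1) and (2) of the recipe can be sketched outside Wolfram too. Below is a plain-Python brute-force labelling (each grid cell gets the index of its nearest seed point, which is the definition of a Voronoi cell); the names are hypothetical, not from the linked article.

```python
import random

def voronoi_cells(width, height, seeds):
    """Step (2): label every grid cell with the index of its nearest seed point."""
    labels = []
    for y in range(height):
        row = []
        for x in range(width):
            row.append(min(range(len(seeds)),
                           key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2))
        labels.append(row)
    return labels

rng = random.Random(42)
seeds = [(rng.uniform(0, 20), rng.uniform(0, 20)) for _ in range(5)]  # step (1)
grid = voronoi_cells(20, 20, seeds)
```

Steps (3) and (4) are then just per-frame transforms of the cells and seed positions.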


Can we go smaller than micro-interactions? How about nano-interactions just because it sounds cool? Interactive random loader made with @rive_app


I’m continuing on little things that GPT-5(-Codex) actually can do. This time, the following setup in GPT-Codex-CLI (high mode). Challenge: have the model explain a very popular random generator, the Mersenne Twister, with code, formulas, examples, etc. (1/n)
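For reference while reading the model's answer: MT19937 fits in about thirty lines. This sketch follows the classic 32-bit reference implementation (Python's own `random` module uses the same generator, though with a different seeding routine); the test value below is the one the C++ standard specifies for `std::mt19937`.

```python
class MT19937:
    """Minimal 32-bit Mersenne Twister (period 2^19937 - 1)."""

    def __init__(self, seed=5489):
        self.mt = [0] * 624
        self.index = 624                     # force a twist on first use
        self.mt[0] = seed
        for i in range(1, 624):
            prev = self.mt[i - 1]
            self.mt[i] = (1812433253 * (prev ^ (prev >> 30)) + i) & 0xFFFFFFFF

    def _twist(self):
        for i in range(624):
            y = (self.mt[i] & 0x80000000) | (self.mt[(i + 1) % 624] & 0x7FFFFFFF)
            self.mt[i] = self.mt[(i + 397) % 624] ^ (y >> 1)
            if y & 1:
                self.mt[i] ^= 0x9908B0DF
        self.index = 0

    def next(self):
        if self.index >= 624:
            self._twist()
        y = self.mt[self.index]
        self.index += 1
        y ^= y >> 11                         # tempering improves equidistribution
        y ^= (y << 7) & 0x9D2C5680
        y ^= (y << 15) & 0xEFC60000
        y ^= y >> 18
        return y
```

Note the structure the tweet's challenge is after: a 624-word state, a "twist" that refreshes it, and a tempering step applied to each output.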


BULK Net uses Reed-Solomon erasure coding for data protection and resilience to packet loss. A simple analogy: a parabola is defined by three points. If only those three are sent, the loss of any one point makes it impossible to restore the curve. @bulktrade 🧠
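The parabola analogy can be made concrete: send five points of a degree-2 polynomial instead of three, and any three survivors reconstruct it by Lagrange interpolation. This toy uses exact rationals; real Reed-Solomon performs the same interpolation over a finite field.

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique degree-(k-1) polynomial through k points at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for xj, _ in points[:i] + points[i + 1:]:
            term *= Fraction(x - xj, xi - xj)
        total += term
    return total

def p(x):                                      # the "message": a parabola
    return 2 * x * x - 3 * x + 5

shares = [(x, p(x)) for x in range(5)]         # send 5 points for a 3-point message
received = [shares[0], shares[2], shares[4]]   # two packets lost in transit
recovered = [lagrange_eval(received, x) for x in range(5)]
```

With two extra points, the transmission tolerates any two losses, which is exactly the resilience property the tweet is describing.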


We’ve just built the first SLM on a Fortytwo-generated dataset Announcing Fortytwo’s Strand-Rust-Coder-14B Now the best coding model for Rust Made possible by the high-quality, synthetic dataset, generated through Swarm Inference ✷ Ranked #1 on the RustEvo^2 and Hold-Out…


This is why you shouldn’t use Math.random() in Java for cryptographically sensitive functions (like generating a token), talk by @b4stet4 #hacklu #cryptography
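To see why the talk's warning holds: `java.util.Random` (which backs `Math.random()`) is a linear congruential generator with only 48 bits of state. One observed 32-bit output leaves just 2^16 candidate states, so an attacker can brute-force the rest and predict every future token. A Python re-implementation of the core (the class and helper names are mine):

```python
MULT, INC, MASK = 0x5DEECE66D, 0xB, (1 << 48) - 1

class JavaRandom:
    """Minimal re-implementation of java.util.Random's 48-bit LCG core."""

    def __init__(self, seed):
        self.state = (seed ^ MULT) & MASK       # Java scrambles the seed this way

    def next(self, bits):
        self.state = (self.state * MULT + INC) & MASK
        return self.state >> (48 - bits)        # outputs expose the high bits of state

def clone_from_output(high32, low16):
    """Rebuild a generator from one next(32) output plus 16 guessed low bits."""
    rng = JavaRandom(0)
    rng.state = ((high32 << 16) | low16) & MASK
    return rng
```

For tokens, use a cryptographic source instead (`java.security.SecureRandom` in Java, `secrets` in Python), whose outputs don't reveal the internal state.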


RLNC (Random Linear Network Coding) makes data transfer faster and more efficient. For example, when you want to send a file, the system cuts it into many small pieces, mixes them, and sends them in different combinations across the whole network. How does @get_optimum use it? Optimum uses it…


#pico8 #tweetcart ::_:: cls(1) srand(34) for y=0,141,3 do x=rnd(144)-8 h=14+rnd(24) for i=0,h do circfill(x-i*2,y+2,(h-i)/6,0) end line(x,y,x,y+2,5) for i=-15,15 do line(x+i*h/80,y-abs(i)/6,x+sin(t()/5+x/150+y/200)/5*h,y-h,i>0 and 3 or 0) end end flip() goto _


LLMs have a repetition problem. ask for a joke → same joke every time ask to roll dice → always returns 4 ask for creative ideas → predictable garbage Try this instead: Generate 5 responses with their corresponding probabilities, sampled at random from the tails of the…


Faster blocks, everything else equal = less MEV. The question is HOW. Coding. #RLNC #RandomLinearNetworkCoding

Who’s got the best answer to how blocktime speed impacts MEV size and extraction? I.e. faster blocks reduce MEV, or don’t, etc.



Coded data propagation beats naked data gossiping ✅ Web3 deserves better math & algorithms for its data. Adopt coding! #RLNC #RandomLinearNetworkCoding

I cannot imagine how brainfucked you have to be to think that faster blocks != less MEV. Eth maximalists are rapidly becoming the flat earthers of crypto. Just because your legacy chain has slow blocks doesn't make slow blocks good. EBOLA



Read #NewPaper: "Efficient Communications in V2V Networks with Two-Way Lanes Based on Random Linear #NetworkCoding" by Yiqian Zhang, Tiantian Zhu and Congduan Li. See more details at: mdpi.com/1099-4300/25/1… #randomlinearnetworkcoding #vehicletovehiclecommunication


