#autoencoder search results

Amazing paper: "Arousal as a universal embedding for spatiotemporal brain dynamics"🧠🐁 𒅒𒈔 nature.com/articles/s4158… you can even find the #autoencoder based code: github.com/ryraut/arousal… but dear #Brain people🧠, why do you find brain dynamics, state trajectories, and embeddings cool?


YUSS new trained #sparse #autoencoder has FOUND THE TEXT OBSESSION in #CLIP #AI!🥳🤩 Only 1 smol problem..🤣 It's not just *ONE* typographic cluster.🤯 Left: 3293 encodes CLIP neurons for English, probably EN text signs. Right: 2052 encodes East Asian + German + Mirrored. 👇🧵


Thanks for the en-/discouragement, #GPT4o 😂 Now #sparse #autoencoder #2 learns to be a #babelfish, translating #logits to #token sequences.🤯 It could help decode a sparse #CLIP embedding, it could help decode a gradient ascent #CLIP #opinion! God luck & good speed, #SAE ✊😬


Fun with #CLIP's #sparse #autoencoder: First glimpse, I thought [Act idx 20] was encoding "sports / tennis". But that's not the shared feature. It's a "people wearing a thing around their head that makes them look stupid" feature. 🤣😂 #lmao #AI #AIweirdness


w00t best #sparse #autoencoder for the #AI so far \o/ I kinda maxed out the reconstruction quality @ 97% cossim. I can stop optimizing this ball of mathematmadness now. 😅 Tied encoder / decoder weights + extra training on #CLIP's "hallucinwacky ooooowords" ('opinions'). 😂
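The tied-weights idea mentioned above can be sketched in a few lines: the decoder reuses the transposed encoder matrix, and reconstruction quality is scored by cosine similarity, as in the tweet. This is a minimal illustrative sketch, assuming made-up dimensions and ReLU codes, not the author's actual SAE configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hidden = 16, 64                        # illustrative sizes, not the real config
W = rng.normal(0, 0.1, (d_in, d_hidden))       # single weight matrix, shared ("tied")
b_enc = np.zeros(d_hidden)
b_dec = np.zeros(d_in)

def encode(x):
    # ReLU keeps the code non-negative and encourages sparsity
    return np.maximum(0.0, x @ W + b_enc)

def decode(h):
    # decoder is the transpose of the encoder: tied weights
    return h @ W.T + b_dec

def cossim(a, b):
    # per-sample cosine similarity, the reconstruction metric cited in the tweet
    return np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

x = rng.normal(size=(8, d_in))
x_hat = decode(encode(x))
print(cossim(x, x_hat))  # one similarity score per sample
```

Tying halves the parameter count and acts as a regularizer, which is one reason it is a common choice for sparse autoencoders.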


'Autoencoders in Function Space', by Justin Bunker, Mark Girolami, Hefin Lambley, Andrew M. Stuart, T. J. Sullivan. jmlr.org/papers/v26/25-… #autoencoders #autoencoder #generative


A preprint presenting a Stacked Autoencoder trained on high-average, high-SNR MRS data, which generates spectra robust enough for evaluation even with a very small number of MRS averages. On low-average human brain images, SNR increased by 43.8% and MSE decreased by 68.8%, while quantitative accuracy was preserved. #MRS #autoencoder #papers arxiv.org/abs/2303.16503…


RT A Deep Dive into Autoencoders and Their Relationship to PCA and SVD #pcaanalysis #autoencoder #dimensionalityreduction #python dlvr.it/Sqd13H
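The core of the PCA/SVD connection in that article is that the optimal rank-k linear autoencoder reconstructs exactly the rank-k SVD truncation of the (centered) data. A quick numerical check of that identity, with made-up data for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
X -= X.mean(axis=0)                      # center the data, as PCA assumes

k = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# PCA reconstruction: project onto the top-k principal directions and map back.
# This is what an optimal linear autoencoder with a k-unit bottleneck computes.
pca_proj = X @ Vt[:k].T @ Vt[:k]

# Rank-k SVD truncation of X
trunc = (U[:, :k] * S[:k]) @ Vt[:k]

print(np.allclose(pca_proj, trunc))      # True: the two reconstructions coincide
```

The identity holds because X = U S Vᵀ, so projecting X onto the top-k right singular vectors reproduces the truncated SVD exactly; a trained linear autoencoder converges to the same subspace (though not necessarily the same basis).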


RT Image-to-Image Generation Using depth2img Pre-Trained Models #Advanced #Autoencoder #DiffusionModels #Github #Image dlvr.it/SpqwQ9


RT Unleashing the Power of Autoencoders: Applications and Use Cases #Autoencoder #Classification #DataVisualization #DeepLearning dlvr.it/SpdlsR


RT Variational Transformers for Music Composition: Can AI replace Musician ? #Autoencoder #Excel #GenerativeAI #Recommendation #AI dlvr.it/SvzWQb


RT Training a Variational Autoencoder For Anomaly Detection Using TensorFlow #Autoencoder #Beginner #MachineLearning #Probability #Unsupervised dlvr.it/Sw7v71
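The anomaly-detection recipe behind that tutorial is: train an autoencoder on normal data, then flag inputs whose reconstruction error exceeds a threshold set from the training distribution. A minimal NumPy sketch of that idea (using a PCA projection as a stand-in for the trained autoencoder, with invented data — not the tutorial's TensorFlow code):

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal" training data lies near a 1-D line in 3-D space; anomalies do not.
t = rng.normal(size=(300, 1))
X_train = t @ np.array([[1.0, 2.0, -1.0]]) + 0.05 * rng.normal(size=(300, 3))

# Stand-in for a trained autoencoder: project onto the top principal component.
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
V1 = Vt[:1]

def recon_error(x):
    # distance between the input and its reconstruction from the 1-D code
    xc = x - mean
    return np.linalg.norm(xc - xc @ V1.T @ V1, axis=1)

# Threshold at a high percentile of the training reconstruction error.
thresh = np.percentile(recon_error(X_train), 99)

normal_point = np.array([[2.0, 4.0, -2.0]])   # lies on the line
anomaly = np.array([[5.0, -5.0, 5.0]])        # far off the line
print(recon_error(normal_point)[0] < thresh)  # not flagged
print(recon_error(anomaly)[0] > thresh)       # flagged as anomalous
```

A VAE replaces the projection with a learned probabilistic encoder/decoder, but the detection logic — score by reconstruction error, threshold on the normal-data distribution — is the same.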


Here is one of the rare papers arxiv.org/abs/2504.12418 we did where a supervised event classifier is compared with an unsupervised #autoencoder using exactly the same input and a similar neural network architecture for the hidden layers. The example uses double-#Higgs


We continue at the @SwanseaPPCTh @SwanseaUni #machinelearning & #lattice workshop with a talk by Simran Singh (@unibielefeld) on application of #autoencoder|s to exploration of the phase structure of the strong interactions governing quarks & gluons. @dfg_public #WomeninPhysics


PredLDM: Spatiotemporal Sequence Prediction with Latent Diffusion Models openreview.net/forum?id=TWmnO… #autoencoder #spatiotemporal #predicting


Automated Attention Pattern Discovery at Scale in Large Language Models openreview.net/forum?id=KpsUN… #attention #predicts #autoencoder




Emergence of Quantised Representations Isolated to Anisotropic Functions openreview.net/forum?id=aokVp… #representations #representational #autoencoder


STLDM: Spatio-Temporal Latent Diffusion Model for Precipitation Nowcasting openreview.net/forum?id=f4oJw… #autoencoder #precipitation #prediction


Semi-Symmetrical, Fully Convolutional Masked #Autoencoder for TBM Muck #ImageSegmentation ✏️ Ke Lei et al. 🔗 brnw.ch/21wVdRA Viewed: 2021; Cited: 10 #mdpisymmetry #selfsupervised #instancesegmentation


TimeAutoDiff: A Unified Framework for Generation, Imputation, Forecasting, and Time-Varying Metadata Conditioning of Heterogeneous Time Series Tabular Data openreview.net/forum?id=bkUd1… #autoencoder #timeautodiff #temporal


Revisiting Discover-then-Name Concept Bottleneck Models: A Reproducibility Study Freek Byrman, Emma Kasteleyn, Bart Kuipers, Daniel Uyterlinde. Action editor: Sungsoo Ahn. openreview.net/forum?id=946cT… #autoencoder #deep #bottleneck


🔥 Read our Paper 📚 Anomaly Detection through Grouping of SMD Machine Sounds Using Hierarchical Clustering 🔗 mdpi.com/2076-3417/13/1… 👨‍🔬 by Young Jong Song et al. #anomalydetection #autoencoder


Autoencoder ensembles compress high-dimensional climate data into latent states, enabling faster scenario sampling for extreme-event risk analysis. #Autoencoder #Risk


🔬Excited to share the publication "Using Fused Data from Perimetry and Optical Coherence Tomography to Improve the Detection of Visual Field Progression in Glaucoma"👉mdpi.com/2306-5354/11/3… #autoencoder #data_fusion #glaucoma #progression #OCT #perimetry #visual_field


Day 16 of my summer fundamentals series: Built an Autoencoder from scratch in NumPy. Learns compressed representations by reconstructing inputs. Encoder reduces, decoder rebuilds. Unsupervised and powerful for denoising, compression, and more. #MLfromScratch #Autoencoder #DL
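The "encoder reduces, decoder rebuilds" loop described above can be written from scratch in a few dozen lines of NumPy. A hedged sketch with toy 2-D data and a linear 2→1→2 autoencoder trained by gradient descent (the data, sizes, and learning rate are all illustrative, not the poster's actual code):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 2-D points that actually lie near a 1-D subspace.
z = rng.normal(size=(256, 1))
X = z @ np.array([[2.0, 1.0]]) + 0.1 * rng.normal(size=(256, 2))

# Linear encoder (2 -> 1) and decoder (1 -> 2), trained by plain gradient descent.
W_enc = rng.normal(0, 0.1, (2, 1))
W_dec = rng.normal(0, 0.1, (1, 2))
lr = 0.05

for _ in range(500):
    H = X @ W_enc                        # encoder reduces
    X_hat = H @ W_dec                    # decoder rebuilds
    err = X_hat - X                      # reconstruction error drives learning
    grad_dec = H.T @ err / len(X)        # dL/dW_dec for MSE loss
    grad_enc = X.T @ (err @ W_dec.T) / len(X)  # dL/dW_enc via the chain rule
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean((X - (X @ W_enc) @ W_dec) ** 2)
print(mse)  # should fall to roughly the noise floor of the data
```

Swapping in a nonlinearity on `H` (and its derivative in `grad_enc`) turns this into the standard nonlinear autoencoder used for denoising and compression.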


New #ReproducibilityCertification: Revisiting Discover-then-Name Concept Bottleneck Models: A Reproducibility Study Freek Byrman, Emma Kasteleyn, Bart Kuipers, Daniel Uyterlinde openreview.net/forum?id=946cT… #autoencoder #deep #bottleneck


7/ Takeaway: SPARSITY SCALES📈. Keep the quality🏆, slash the cost💸, choose your latency-accuracy point⚖️. ⭐️Paper: arxiv.org/abs/2505.11388 ⭐️Code (MIT License): github.com/recombee/Compr… #sparse #autoencoder #embeddings #compression


Comparison of #AutoEncoder Models: Simple vs. Variational original size: chizari.me/comparison-of-…


High level abstraction of an inverted #autoencoder. Instead of compressing reality into meaning, crystallizes meaning into reality. #AI typically compresses the world into numbers. This #model starts with meaning and expands outward. It crystallizes structure from pure thought.



Excited to have presented my poster at CVIP 2024! It was a valuable experience to share my work and connect with the research community. #cvip2024 #ArtificialIntelligence #autoencoder #isro


Postdoc @magnussonrasmu1 is presenting #autoencoder for identifying disease modules




#CLIP 'looking at' (gradient ascent) a fake image (#sparse #autoencoder idx 3293 one-hot vision transformer (!) embedding). Has vibes similar to #AI's adverb neuron.🤓😂 🤖: pls aha ... 🤖: go aha ... hey lis carley ... 🤖: go morro ... thanks morro dealt ... go thub ... ... .



Testing #sparse #autoencoder trained on #CLIP with #COCO 40k (normal (human) labels, e.g. "a cat sitting on the couch"). Yes, #SAE can generalize to CLIP's self-made #AI-opinion gradient ascent embeds.🤩 Cat getting teabagged may be legit "nearby concept" in context.😘😂 #AIart


Reconstructed #sparse #autoencoder embeddings vs. #CLIP's original text embedding #AI self-made 'opinion'. For simple emoji black-on-white input image. Model inversion thereof: #SAE wins. Plus, CLIP was also 'thinking' of A TEXT (symbols, letters) when 'seeing' this image.🤗🙃


These = guidance with text embeddings #CLIP made (gradient ascent) while looking at an image of one of its own neurons, which it found to be "hallucinhorrifying trippy machinelearning" -> passed through trained-on-CLIP #sparse #autoencoder (nuke T5) -> guidance. #AIart #Flux1



Time to train a good #sparse #autoencoder config on the real stuff (residual stream). I guess the current #SAE was too sparse for this level of complexity. And now it takes a 'non-insignificant' amount of time to train one, too, ouch!🙃 Sparsity: 0.96887 Dead Neurons Count: 0



🖼️🖼️ #Hyperspectral Data #Compression Using Fully Convolutional #Autoencoder ✍️ Riccardo La Grassa et al. 🔗 brnw.ch/21wPykC


🖐️🖐️ A Combination of Deep #Autoencoder and Multi-Scale Residual #Network for #Landslide Susceptibility Evaluation ✍️ Zhuolu Wang et al. 🔗 mdpi.com/2072-4292/15/3…


HQ-VAE: Hierarchical Discrete Representation Learning with Variational Bayes openreview.net/forum?id=1rowo… #autoencoder #quantization #autoencoding


👋👋 Unsupervised #Transformer Boundary #Autoencoder Network for #Hyperspectral Image #Change Detection ✍️ Song Liu et al. 🔗 brnw.ch/21wOvyQ

