#autoencoder search results
Amazing paper:"Arousal as a universal embedding for spatiotemporal brain dynamics"🧠🐁 𒅒𒈔 nature.com/articles/s4158… you can even find the #autoencoder based code: github.com/ryraut/arousal… but dear #Brain people🧠, why do you find cool brain dynamics, state trajectories, embedding?

YUSS new trained #sparse #autoencoder has FOUND THE TEXT OBSESSION in #CLIP #AI!🥳🤩 Only 1 smol problem..🤣 It's not just *ONE* typographic cluster.🤯 Left: 3293 encodes CLIP neurons for English, probably EN text signs. Right: 2052 encodes East Asian + German + Mirrored. 👇🧵


Thanks for the en-/discouragement, #GPT4o 😂 Now #sparse #autoencoder #2 learns to be a #babelfish, translating #logits to #token sequences.🤯 It could help decode a sparse #CLIP embedding, it could help decode a gradient ascent #CLIP #opinion! God luck & good speed, #SAE ✊😬

Fun with #CLIP's #sparse #autoencoder: First glimpse, I thought [Act idx 20] was encoding "sports / tennis". But that's not the shared feature. It's a "people wearing a thing around their head that makes them look stupid" feature. 🤣😂 #lmao #AI #AIweirdness
![zer0int1's tweet image: [Act idx 20] feature visualization](https://pbs.twimg.com/media/GdvKrMHagAAKhWe.jpg)
w00t best #sparse #autoencoder for the #AI so far \o/ I kinda maxed out the reconstruction quality @ 97% cossim. I can stop optimizing this ball of mathematmadness now. 😅 Tied encoder / decoder weights + extra training on #CLIP's "hallucinwacky ooooowords" ('opinions'). 😂
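The tied encoder/decoder weights mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration of one gradient step for a tied-weight sparse autoencoder; the toy data, dimensions, and hyperparameters are illustrative choices, not the configuration from the tweet:

```python
import numpy as np

rng = np.random.default_rng(0)

def tied_sae_step(x, W, b_enc, b_dec, lr=1e-2, l1=1e-3):
    """One full-batch gradient step of a tied-weight sparse autoencoder.

    Encoder: h = relu(x @ W.T + b_enc); the decoder reuses W: x_hat = h @ W + b_dec.
    Loss: mean squared reconstruction error + L1 penalty on the code h.
    """
    h_pre = x @ W.T + b_enc
    h = np.maximum(h_pre, 0.0)            # sparse code (post-ReLU)
    x_hat = h @ W + b_dec
    err = x_hat - x                       # gradient of squared error w.r.t. x_hat
    dh = (err @ W.T + l1 * np.sign(h)) * (h_pre > 0)
    dW = dh.T @ x + h.T @ err             # encoder + decoder contributions (tied)
    W -= lr * dW / len(x)
    b_enc -= lr * dh.mean(axis=0)
    b_dec -= lr * err.mean(axis=0)
    return float((err ** 2).mean())

d_in, d_hid = 16, 64
W = rng.normal(scale=0.1, size=(d_hid, d_in))
b_enc, b_dec = np.zeros(d_hid), np.zeros(d_in)
X = rng.normal(size=(256, d_in))
losses = [tied_sae_step(X, W, b_enc, b_dec) for _ in range(200)]
```

Because the decoder reuses W, the weight gradient sums the encoder-path and decoder-path terms, which roughly halves the parameter count compared to an untied SAE.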

'Autoencoders in Function Space', by Justin Bunker, Mark Girolami, Hefin Lambley, Andrew M. Stuart, T. J. Sullivan. jmlr.org/papers/v26/25-… #autoencoders #autoencoder #generative
RT Unveiling Denoising Autoencoders #Autoencoder #Beginner #ComputerVision #GenerativeAI dlvr.it/Srh4R6

RT Detection of Credit Card Fraud with an Autoencoder #autoencoder #creditcardfraud #anomalydetection #python #datascience dlvr.it/SpyN45

Preprint: a Stacked Autoencoder trained on high-average, high-S/N MRS data is used to generate spectra that hold up to evaluation even with a very small number of MRS averages. On low-average human brain data, SNR increased by 43.8% and MSE decreased by 68.8%, while quantification was preserved. #MRS #autoencoder #papers arxiv.org/abs/2303.16503…

RT Guide to Image-to-Image Diffusion: A Hugging Face Pipeline #ArtificialIntelligence #Autoencoder #Datasets #DeepLearning dlvr.it/SpZspX

RT Image-to-Image Generation Using depth2img Pre-Trained Models #Advanced #Autoencoder #DiffusionModels #Github #Image dlvr.it/SpqwQ9

RT A Deep Dive into Autoencoders and Their Relationship to PCA and SVD #pcaanalysis #autoencoder #dimensionalityreduction #python dlvr.it/Sqd13H
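The autoencoder/PCA/SVD relationship mentioned in the article above can be demonstrated directly: by the Eckart–Young theorem, the best rank-k linear reconstruction is projection onto the top-k principal axes, so an optimal linear autoencoder matches PCA. A small sketch (the data and k are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))  # correlated features
Xc = X - X.mean(axis=0)                                     # center, as PCA assumes

# PCA via SVD: the top-k right singular vectors are the principal axes.
k = 3
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Vk = Vt[:k]                  # shape (k, 10)

# A linear "autoencoder" whose encoder is Vk and decoder is Vk.T:
code = Xc @ Vk.T             # encode: project onto k dimensions
X_hat = code @ Vk            # decode: reconstruct in the original space

# Eckart-Young: this is the best rank-k reconstruction in squared error.
pca_err = ((Xc - X_hat) ** 2).sum()
svd_err = (S[k:] ** 2).sum() # energy in the discarded singular values
```

The reconstruction error equals exactly the squared singular values that were thrown away, which is why a nonlinear autoencoder is only worth the trouble when the data leaves a linear subspace.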

RT Unleashing the Power of Autoencoders: Applications and Use Cases #Autoencoder #Classification #DataVisualization #DeepLearning dlvr.it/SpdlsR

RT Variational Transformers for Music Composition: Can AI replace Musician ? #Autoencoder #Excel #GenerativeAI #Recommendation #AI dlvr.it/SvzWQb

RT Training a Variational Autoencoder For Anomaly Detection Using TensorFlow #Autoencoder #Beginner #MachineLearning #Probability #Unsupervised dlvr.it/Sw7v71

Here is one of the rare papers arxiv.org/abs/2504.12418 where we compared a supervised event classifier with an unsupervised #autoencoder using exactly the same input and a similar neural network architecture for the hidden layers. The example uses double-#Higgs…
We continue at the @SwanseaPPCTh @SwanseaUni #machinelearning & #lattice workshop with a talk by Simran Singh (@unibielefeld) on the application of #autoencoders to exploring the phase structure of the strong interactions governing quarks & gluons. @dfg_public #WomeninPhysics


PredLDM: Spatiotemporal Sequence Prediction with Latent Diffusion Models openreview.net/forum?id=TWmnO… #autoencoder #spatiotemporal #predicting
Automated Attention Pattern Discovery at Scale in Large Language Models openreview.net/forum?id=KpsUN… #attention #predicts #autoencoder

Emergence of Quantised Representations Isolated to Anisotropic Functions openreview.net/forum?id=aokVp… #representations #representational #autoencoder
STLDM: Spatio-Temporal Latent Diffusion Model for Precipitation Nowcasting openreview.net/forum?id=f4oJw… #autoencoder #precipitation #prediction
Semi-Symmetrical, Fully Convolutional Masked #Autoencoder for TBM Muck #ImageSegmentation ✏️ Ke Lei et al. 🔗 brnw.ch/21wVdRA Viewed: 2021; Cited: 10 #mdpisymmetry #selfsupervised #instancesegmentation

TimeAutoDiff: A Unified Framework for Generation, Imputation, Forecasting, and Time-Varying Metadata Conditioning of Heterogeneous Time Series Tabular Data openreview.net/forum?id=bkUd1… #autoencoder #timeautodiff #temporal
Revisiting Discover-then-Name Concept Bottleneck Models: A Reproducibility Study Freek Byrman, Emma Kasteleyn, Bart Kuipers, Daniel Uyterlinde. Action editor: Sungsoo Ahn. openreview.net/forum?id=946cT… #autoencoder #deep #bottleneck
🔥 Read our Paper 📚 Anomaly Detection through Grouping of SMD Machine Sounds Using Hierarchical Clustering 🔗 mdpi.com/2076-3417/13/1… 👨🔬 by Young Jong Song et al. #anomalydetection #autoencoder

Autoencoder ensembles compress high-dimensional climate data into latent states, enabling faster scenario sampling for extreme-event risk analysis. #Autoencoder #Risk
🔬Excited to share the publication "Using Fused Data from Perimetry and Optical Coherence Tomography to Improve the Detection of Visual Field Progression in Glaucoma"👉mdpi.com/2306-5354/11/3… #autoencoder #data_fusion #glaucoma #progression #OCT #perimetry #visual_field

Day 16 of my summer fundamentals series: Built an Autoencoder from scratch in NumPy. Learns compressed representations by reconstructing inputs. Encoder reduces, decoder rebuilds. Unsupervised and powerful for denoising, compression, and more. #MLfromScratch #Autoencoder #DL
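A from-scratch NumPy autoencoder of the kind described above ("encoder reduces, decoder rebuilds") can be sketched as follows; the layer sizes, toy data, and hyperparameters are my own illustrative choices, not the poster's code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data generated from 2 latent factors, embedded in 8 dimensions.
Z = rng.normal(size=(512, 2))
X = np.tanh(Z @ rng.normal(size=(2, 8)))

# Encoder reduces 8 -> 2, decoder rebuilds 2 -> 8.
W1 = rng.normal(scale=0.3, size=(8, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.3, size=(2, 8)); b2 = np.zeros(8)

lr, losses = 0.05, []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)              # encode
    X_hat = H @ W2 + b2                   # decode (linear output layer)
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    # Manual backprop: decoder first, then encoder through tanh.
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)      # tanh' = 1 - tanh^2
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

Training is unsupervised: the reconstruction target is the input itself, so the 2-d bottleneck is forced to keep only the information needed to rebuild X.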

New #ReproducibilityCertification: Revisiting Discover-then-Name Concept Bottleneck Models: A Reproducibility Study Freek Byrman, Emma Kasteleyn, Bart Kuipers, Daniel Uyterlinde openreview.net/forum?id=946cT… #autoencoder #deep #bottleneck
7/ Takeaway: SPARSITY SCALES📈. Keep the quality🏆, slash the cost💸, choose your latency-accuracy point⚖️. ⭐️Paper: arxiv.org/abs/2505.11388 ⭐️Code (MIT License): github.com/recombee/Compr… #sparse #autoencoder #embeddings #compression
High level abstraction of an inverted #autoencoder. Instead of compressing reality into meaning, crystallizes meaning into reality. #AI typically compresses the world into numbers. This #model starts with meaning and expands outward. It crystallizes structure from pure thought.

Excited to have presented my poster at CVIP 2024! It was a valuable experience to share my work and connect with the research community. #cvip2024 #ArtificialIntelligence #autoencoder #isro







#CLIP 'looking at' (gradient ascent) a fake image (#sparse #autoencoder idx 3293 one-hot vision transformer (!) embedding). Has vibes similar to #AI's adverb neuron.🤓😂 🤖: pls aha ... 🤖: go aha ... hey lis carley ... 🤖: go morro ... thanks morro dealt ... go thub ... ... .

Testing #sparse #autoencoder trained on #CLIP with #COCO 40k (normal (human) labels, e.g. "a cat sitting on the couch"). Yes, #SAE can generalize to CLIP's self-made #AI-opinion gradient ascent embeds.🤩 Cat getting teabagged may be legit "nearby concept" in context.😘😂 #AIart

Reconstructed #sparse #autoencoder embeddings vs. #CLIP's original text embedding #AI self-made 'opinion'. For simple emoji black-on-white input image. Model inversion thereof: #SAE wins. Plus, CLIP was also 'thinking' of A TEXT (symbols, letters) when 'seeing' this image.🤗🙃


These = guidance with text embeddings #CLIP made (gradient ascent) while looking at an image of one of its own neurons, which it found to be "hallucinhorrifying trippy machinelearning" -> passed through trained-on-CLIP #sparse #autoencoder (nuke T5) -> guidance. #AIart #Flux1


Time to train a good #sparse #autoencoder config on the real stuff (residual stream). I guess the current #SAE was too sparse for this level of complexity. And now it takes a 'non-insignificant' amount of time to train one, too, ouch!🙃 Sparsity: 0.96887 Dead Neurons Count: 0
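The "Sparsity" and "Dead Neurons Count" figures quoted above can be computed from a batch of hidden activations. The metric definitions here are one plausible convention (sparsity = fraction of zero activation entries; a neuron is "dead" if it never fires on the batch) and are my assumption, not taken from the tweet:

```python
import numpy as np

def sae_diagnostics(acts, eps=0.0):
    """Report sparsity and dead-neuron count for a batch of SAE codes.

    sparsity: fraction of activation entries at (or below) eps.
    dead neurons: hidden units that never fire anywhere in the batch.
    """
    inactive = acts <= eps
    return float(inactive.mean()), int(inactive.all(axis=0).sum())

rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=(1000, 64)) - 1.5, 0.0)  # fake post-ReLU codes
sparsity, dead = sae_diagnostics(acts)
```

High sparsity with zero dead neurons, as in the tweet, is the healthy regime: each unit fires rarely, but every unit still fires on some inputs.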

🖼️🖼️ #Hyperspectral Data #Compression Using Fully Convolutional #Autoencoder ✍️ Riccardo La Grassa et al. 🔗 brnw.ch/21wPykC

🖐️🖐️ A Combination of Deep #Autoencoder and Multi-Scale Residual #Network for #Landslide Susceptibility Evaluation ✍️ Zhuolu Wang et al. 🔗 mdpi.com/2072-4292/15/3…

A Latent Diffusion Model for Protein Structure Generation openreview.net/forum?id=8zzje… #autoencoder #proteins #biomolecules

Catch the ‘Using AI/ML to Drive Multi-Omics Data Analysis to New Heights’ webinar tomorrow afternoon. Speaking second is Ibrahim Al-Hurani from @mylakehead, presenting #autoencoder and #GAN approaches for #multiomics. Join us tomorrow: hubs.la/Q02H55cS0

HQ-VAE: Hierarchical Discrete Representation Learning with Variational Bayes openreview.net/forum?id=1rowo… #autoencoder #quantization #autoencoding
