#autoencoder search results
w00t best #sparse #autoencoder for the #AI so far \o/ I kinda maxed out the reconstruction quality @ 97% cossim. I can stop optimizing this ball of mathematmadness now. 😅 Tied encoder / decoder weights + extra training on #CLIP's "hallucinwacky ooooowords" ('opinions'). 😂
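The tied-weights trick mentioned above can be sketched in a few lines of NumPy. Everything here — the dimensions, the single shared matrix `W`, the ReLU code — is an illustrative assumption, not the actual model:

```python
import numpy as np

# Sketch of a tied-weights sparse autoencoder. Sizes and the single
# shared matrix W are illustrative assumptions, not the real model.
rng = np.random.default_rng(0)

d_in, d_hid = 512, 2048                    # hypothetical CLIP dim / SAE width
W = rng.normal(0.0, 0.02, (d_in, d_hid))   # one matrix, shared by both sides
b_enc = np.zeros(d_hid)
b_dec = np.zeros(d_in)

def encode(x):
    return np.maximum(0.0, x @ W + b_enc)  # ReLU keeps the code sparse

def decode(z):
    return z @ W.T + b_dec                 # tied: decoder reuses W transposed

x = rng.normal(size=(4, d_in))
x_hat = decode(encode(x))

# per-example cosine similarity, the metric behind the "97% cossim" figure
cos = np.sum(x * x_hat, axis=1) / (
    np.linalg.norm(x, axis=1) * np.linalg.norm(x_hat, axis=1)
)
```

Tying the decoder to `W.T` halves the parameter count and is a common SAE regularizer.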
YUSS new trained #sparse #autoencoder has FOUND THE TEXT OBSESSION in #CLIP #AI!🥳🤩 Only 1 smol problem..🤣 It's not just *ONE* typographic cluster.🤯 Left: 3293 encodes CLIP neurons for English, probably EN text signs. Right: 2052 encodes East Asian + German + Mirrored. 👇🧵
Fun with #CLIP's #sparse #autoencoder: First glimpse, I thought [Act idx 20] was encoding "sports / tennis". But that's not the shared feature. It's a "people wearing a thing around their head that makes them look stupid" feature. 🤣😂 #lmao #AI #AIweirdness
Thanks for the en-/discouragement, #GPT4o 😂 Now #sparse #autoencoder #2 learns to be a #babelfish, translating #logits to #token sequences.🤯 It could help decode a sparse #CLIP embedding, it could help decode a gradient ascent #CLIP #opinion! God luck & good speed, #SAE ✊😬
A preprint proposing a Stacked Autoencoder trained on high-average, high-S/N MRS data to generate spectra that hold up to evaluation even with a drastically reduced number of MRS averages. On low-average human-brain data, SNR increased by 43.8% and MSE decreased by 68.8%, while quantitative accuracy was preserved. #MRS #autoencoder #papers arxiv.org/abs/2303.16503…
We continue at the @SwanseaPPCTh @SwanseaUni #machinelearning & #lattice workshop with a talk by Simran Singh (@unibielefeld) on application of #autoencoder|s to exploration of the phase structure of the strong interactions governing quarks & gluons. @dfg_public #WomeninPhysics
RT Unveiling Denoising Autoencoders #Autoencoder #Beginner #ComputerVision #GenerativeAI dlvr.it/Srh4R6
RT Detection of Credit Card Fraud with an Autoencoder #autoencoder #creditcardfraud #anomalydetection #python #datascience dlvr.it/SpyN45
RT Guide to Image-to-Image Diffusion: A Hugging Face Pipeline #ArtificialIntelligence #Autoencoder #Datasets #DeepLearning dlvr.it/SpZspX
RT A Deep Dive into Autoencoders and Their Relationship to PCA and SVD #pcaanalysis #autoencoder #dimensionalityreduction #python dlvr.it/Sqd13H
RT Image-to-Image Generation Using depth2img Pre-Trained Models #Advanced #Autoencoder #DiffusionModels #Github #Image dlvr.it/SpqwQ9
Attention Schema-based Attention Control (ASAC): A Cognitive-Inspired Approach for Attention Management in Transformers openreview.net/forum?id=cxRlo… #attention #autoencoder #neural
RT Unleashing the Power of Autoencoders: Applications and Use Cases #Autoencoder #Classification #DataVisualization #DeepLearning dlvr.it/SpdlsR
Amazing paper:"Arousal as a universal embedding for spatiotemporal brain dynamics"🧠🐁 𒅒𒈔 nature.com/articles/s4158… you can even find the #autoencoder based code: github.com/ryraut/arousal… but dear #Brain people🧠, why do you find cool brain dynamics, state trajectories, embedding?
A Novel Method for Time Series Counterfactual Inference Based on Penalized Autoencoders openreview.net/forum?id=X6lrz… #autoencoders #autoencoder #counterfactual
Here is one of the rare papers arxiv.org/abs/2504.12418 we did where a supervised event classifier is compared with an unsupervised #autoencoder using exactly the same input and a similar neural network architecture for the hidden layers. The example uses double-#Higgs…
Causal Dynamic Variational Autoencoder for Counterfactual Regression in Longitudinal Data Mouad El Bouchattaoui, Myriam Tami, Benoit Lepetit, Paul-Henry Cournède. Action editor: Amit Sharma. openreview.net/forum?id=atf9q… #unobserved #autoencoder #confound
A Deep Bayesian Nonparametric Framework for Robust Mutual Information Estimation openreview.net/forum?id=mqGzG… #regularization #nonparametric #autoencoder
Cross-Layer Discrete Concept Discovery for Interpreting Language Models openreview.net/forum?id=xBVTq… #autoencoder #quantization #representations
New patent application #US20250344987A1 by #BiosenseWebster explores reducing noise in intracardiac ECGs using a denoising #autoencoder. The system refines ECGs with #DeepLearning, enhancing signal clarity by encoding and decoding raw data to remove noise. Key features include…
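The denoising recipe described above — encode a noisy input, decode it toward a clean target — can be sketched in NumPy. The toy sine "signals", sizes, and training loop below are illustrative assumptions, not the patented system:

```python
import numpy as np

# Minimal denoising-autoencoder sketch: train on (noisy input -> clean target).
rng = np.random.default_rng(1)

n, d, h = 256, 64, 16
t = np.linspace(0.0, 2.0 * np.pi, d)
clean = np.sin(np.outer(rng.uniform(1.0, 3.0, n), t))   # clean target signals
noisy = clean + 0.3 * rng.normal(size=clean.shape)      # corrupted inputs

W1 = rng.normal(0.0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.1, (h, d)); b2 = np.zeros(d)

def forward(x):
    z = np.tanh(x @ W1 + b1)     # encode the noisy signal
    return z, z @ W2 + b2        # decode back to signal space

_, out0 = forward(noisy)
mse_init = np.mean((out0 - clean) ** 2)

lr = 0.05
for _ in range(500):
    z, out = forward(noisy)
    err = out - clean            # key point: the target is the *clean* signal
    gW2 = z.T @ err / n
    dz = (err @ W2.T) * (1.0 - z ** 2)   # backprop through tanh
    gW1 = noisy.T @ dz / n
    W2 -= lr * gW2; b2 -= lr * err.mean(0)
    W1 -= lr * gW1; b1 -= lr * dz.mean(0)

_, denoised = forward(noisy)
mse_final = np.mean((denoised - clean) ** 2)
```

Because the loss compares the reconstruction against the clean signal rather than the noisy input, the bottleneck cannot simply memorize the noise and learns to strip it instead.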
#CLIP 'looking at' (gradient ascent) a fake image (#sparse #autoencoder idx 3293 one-hot vision transformer (!) embedding). Has vibes similar to #AI's adverb neuron.🤓😂 🤖: pls aha ... 🤖: go aha ... hey lis carley ... 🤖: go morro ... thanks morro dealt ... go thub ... ... .
Testing #sparse #autoencoder trained on #CLIP with #COCO 40k (normal (human) labels, e.g. "a cat sitting on the couch"). Yes, #SAE can generalize to CLIP's self-made #AI-opinion gradient ascent embeds.🤩 Cat getting teabagged may be legit "nearby concept" in context.😘😂 #AIart
Reconstructed #sparse #autoencoder embeddings vs. #CLIP's original text embedding #AI self-made 'opinion'. For simple emoji black-on-white input image. Model inversion thereof: #SAE wins. Plus, CLIP was also 'thinking' of A TEXT (symbols, letters) when 'seeing' this image.🤗🙃
Time to train a good #sparse #autoencoder config on the real stuff (residual stream). I guess the current #SAE was too sparse for this level of complexity. And now it takes a 'non-insignificant' amount of time to train one, too, ouch!🙃 Sparsity: 0.96887 Dead Neurons Count: 0
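The two health metrics quoted above — activation sparsity and dead-neuron count — can be computed like this; the activation tensor is a random stand-in for real SAE codes:

```python
import numpy as np

# Hypothetical SAE activations for a batch: rows = examples, cols = latent units.
rng = np.random.default_rng(0)
acts = np.maximum(0.0, rng.normal(-1.0, 1.0, (1000, 4096)))  # ReLU codes, mostly zero

# Sparsity as quoted in the post: fraction of activations that are exactly zero.
sparsity = np.mean(acts == 0.0)

# A latent unit is "dead" if it never fires on any example in the batch.
dead = int(np.sum(acts.max(axis=0) == 0.0))

print(f"Sparsity: {sparsity:.5f}  Dead Neurons Count: {dead}")
```

A high sparsity with zero dead units (as in the post) is the sweet spot: codes are selective, but every latent still participates.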
These = guidance with text embeddings #CLIP made (gradient ascent) while looking at an image of one of its own neurons, which it found to be "hallucinhorrifying trippy machinelearning" -> passed through trained-on-CLIP #sparse #autoencoder (nuke T5) -> guidance. #AIart #Flux1
🖐️🖐️ A Combination of Deep #Autoencoder and Multi-Scale Residual #Network for #Landslide Susceptibility Evaluation ✍️ Zhuolu Wang et al. 🔗 mdpi.com/2072-4292/15/3…
A Latent Diffusion Model for Protein Structure Generation openreview.net/forum?id=8zzje… #autoencoder #proteins #biomolecules
🖼️🖼️ #Hyperspectral Data #Compression Using Fully Convolutional #Autoencoder ✍️ Riccardo La Grassa et al. 🔗 brnw.ch/21wPykC
Excited to have presented my poster at CVIP 2024! It was a valuable experience to share my work and connect with the research community. #cvip2024 #ArtificialIntelligence #autoencoder #isro
Catch the ‘Using AI/ML to Drive Multi-Omics Data Analysis to New Heights’ webinar tomorrow afternoon. Speaking second is Ibrahim Al-Hurani from @mylakehead, presenting #autoencoder and #GAN approaches for #multiomics. Join us tomorrow: hubs.la/Q02H55cS0
HQ-VAE: Hierarchical Discrete Representation Learning with Variational Bayes openreview.net/forum?id=1rowo… #autoencoder #quantization #autoencoding
Day 16 of my summer fundamentals series: Built an Autoencoder from scratch in NumPy. Learns compressed representations by reconstructing inputs. Encoder reduces, decoder rebuilds. Unsupervised and powerful for denoising, compression, and more. #MLfromScratch #Autoencoder #DL
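The "encoder reduces, decoder rebuilds" recipe can be shown in a from-scratch linear autoencoder; the dimensions and toy data below are illustrative assumptions, not the poster's code:

```python
import numpy as np

# From-scratch linear autoencoder: compress 8-D data to a 3-D code and back.
rng = np.random.default_rng(0)

d, k, n = 8, 3, 500
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # correlated toy data

We = rng.normal(0.0, 0.1, (d, k))   # encoder: reduces d -> k
Wd = rng.normal(0.0, 0.1, (k, d))   # decoder: rebuilds k -> d

mse_before = np.mean((X @ We @ Wd - X) ** 2)

lr = 0.01
for _ in range(2000):
    Z = X @ We                           # compressed code at the bottleneck
    E = Z @ Wd - X                       # reconstruction error
    Wd -= lr * (Z.T @ E) / n             # gradient step on the decoder
    We -= lr * (X.T @ (E @ Wd.T)) / n    # gradient step on the encoder

mse_after = np.mean((X @ We @ Wd - X) ** 2)
```

With a linear encoder/decoder this converges toward the top-k principal subspace of the data, which is the PCA connection the earlier "Deep Dive into Autoencoders and PCA" post alludes to.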