#deepmath search results

I keep trying to get a quest, but the NPCs only want to talk about deep learning. #deepmath

[danintheory's tweet image]

How can we get interpretable models out of deep networks? It's possible! On a model of the retina, you can use it to recover models that have been handcrafted elsewhere... just do in silico experiments on your deep networks. #deepmath openreview.net/forum?id=ByMLE…

[neuroecology's tweet image]

Talking about the hyperbolic perceptron #deepmath

[zamakany's tweet image]

The loss landscape of deep networks is like New Zealand: full of sinkholes #deepmath

[neuroecology's tweet image]

"We've replaced hand-crafted features with hand-crafted architectures" Rene Vidal @ #deepmath on the Mathematics of Deep Learning

[gmishne's tweet image]

"Denoising as a tool to study prior distribution of images " Eero just beginning his talk . Join us to know more: youtube.com/watch?v=pTKW4J… #deepmath @deepmath1

[zamakany's tweet image]

Looking at the dynamics of networks during training in the 'information plane' is a very useful way to understand why and how neural networks are compressing @NaftaliTishby #deepmath arxiv.org/abs/1703.00810

[neuroecology's tweet image]
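
As a rough illustration of the information-plane idea (a minimal sketch, not Tishby's exact estimator; the toy data, the tanh layer, and the binning scheme are all assumptions made for illustration), one can bin a layer's activations and compute plug-in estimates of I(X;T) and I(T;Y):

```python
import numpy as np

rng = np.random.default_rng(0)

def codes(rows):
    """Map each row (e.g. a binned activation vector) to an integer code."""
    lookup = {}
    return np.array([lookup.setdefault(tuple(r), len(lookup)) for r in rows])

def mutual_information(x_codes, y_codes):
    """Plug-in estimate of I(X;Y) in bits from paired integer codes."""
    n = len(x_codes)
    joint = {}
    for pair in zip(x_codes, y_codes):
        joint[pair] = joint.get(pair, 0) + 1
    px = np.bincount(x_codes) / n
    py = np.bincount(y_codes) / n
    return sum((c / n) * np.log2((c / n) / (px[i] * py[j]))
               for (i, j), c in joint.items())

# Toy setup: binary inputs X, labels Y, and a fixed "hidden layer" T = tanh(XW).
n, d, h = 2000, 10, 5
X = rng.integers(0, 2, size=(n, d))
Y = (X.sum(axis=1) > d / 2).astype(int)
W = rng.normal(size=(d, h))
T = np.tanh(X @ W)

# Bin the activations into a few levels, as in binning-based estimators.
T_binned = np.digitize(T, bins=np.linspace(-1, 1, 8))

x_c, t_c = codes(X), codes(T_binned)
print("I(X;T) ~", mutual_information(x_c, t_c))
print("I(T;Y) ~", mutual_information(t_c, Y))
```

Tracking these two coordinates over training epochs gives the information-plane trajectories discussed in the paper.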

For more information on the theory of optimization for deep learning, join #deepmath tomorrow for Misha Belkin's talk at 5:25pm EST

[gmishne's tweet image]

Rene delineating the key theoretical questions one faces when trying to unpack how deep neural networks work #deepmath @deepmath1

[zamakany's tweet image]

@niru_m is online talking about reverse engineering learned optimizers. #deepmath @deepmath1

[zamakany's tweet image]

For the first time since 2019, #DeepMath will be in person, in November in San Diego 😎 Abstract submissions are open; the deadline is August 15. We are looking forward to great submissions focusing on the theory of deep networks deepmath-conference.com/submissions @deepmath1

[gmishne's tweet image]

Our last invited #deepmath speaker of the day: Richard Baraniuk

[gmishne's tweet image]

Did someone say #parabola, by any chance? 🧐 If you want to study its characteristic elements and get the graph in one click, just enter the equation in the #DeepMath bar on any #YouMath page. For example: y=x^2-(1/4)x+1 ➡️ youmath.it/formulari/form…
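
For that example, the characteristic elements such a tool reports can be checked by hand (a worked computation for reference, not the tool's actual output):

\[
y = x^{2} - \tfrac{1}{4}x + 1, \qquad a = 1,\; b = -\tfrac{1}{4},\; c = 1,
\]
\[
x_V = -\frac{b}{2a} = \frac{1}{8}, \qquad
y_V = \Bigl(\tfrac{1}{8}\Bigr)^{2} - \tfrac{1}{4}\cdot\tfrac{1}{8} + 1 = \frac{63}{64}, \qquad
\Delta = b^{2} - 4ac = \tfrac{1}{16} - 4 = -\tfrac{63}{16} < 0,
\]
so the vertex is \((1/8,\,63/64)\), the axis of symmetry is \(x = 1/8\), and the parabola opens upward with no real \(x\)-intercepts.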


Click the icon next to the exercise... -> #DeepMath is filled in automatically

[YouMath's tweet image]

If you want to calculate the #MCD (greatest common divisor), see the steps, and above all understand the method, #DeepMath can help you!
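
The step-by-step method referred to here is, in essence, the Euclidean algorithm; below is a minimal sketch of it (illustrative only, not YouMath's implementation):

```python
def gcd_with_steps(a, b):
    """Greatest common divisor via the Euclidean algorithm, printing each division step."""
    while b != 0:
        q, r = divmod(a, b)
        print(f"{a} = {b} * {q} + {r}")
        a, b = b, r
    return a

# Example: 252 = 105*2 + 42, 105 = 42*2 + 21, 42 = 21*2 + 0, so GCD = 21.
print("GCD =", gcd_with_steps(252, 105))
```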


Wrapping up #DeepMath, Holden Lee talks about theoretical guarantees for learning probability distributions with diffusion models; first discussing their predominance in state-of-the-art generative AI

[thserra's tweet image]
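
For context on the object those guarantees concern (an illustrative sketch under standard variance-preserving assumptions, not Holden Lee's specific results): a diffusion model corrupts data x0 through a forward noising process with marginals q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I), and generation amounts to approximately reversing that process with a learned score. The schedule and toy data below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule for a variance-preserving forward diffusion (illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def forward_sample(x0, t):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I)."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = rng.normal(loc=3.0, scale=0.5, size=5)   # toy "data" points
for t in (0, 250, 999):
    print(t, forward_sample(x0, t))

# The reverse-time sampler needs (an estimate of) the score grad log q_t,
# and the quality of that estimate is what the theoretical guarantees are about.
```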

Vidya Muthukumar talks at #DeepMath about analyzing generalization in the overparameterized regime for the loss functions typically used for classification tasks

[thserra's tweet image]

The second day of #DeepMath starts with Yuejie Chi talking about how to ensure faster convergence and generalization in low-rank matrix factorization with overparameterized models through preconditioning

[thserra's tweet image]
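
One concrete form of preconditioning for this problem is scaled gradient descent, where each factor's gradient is right-multiplied by the inverse Gram matrix of the other factor. The sketch below is a generic illustration of that idea on a synthetic ill-conditioned low-rank matrix; the speaker's actual algorithm and analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rank-r matrix with condition number 100.
n, r = 50, 3
U, _ = np.linalg.qr(rng.normal(size=(n, r)))
V, _ = np.linalg.qr(rng.normal(size=(n, r)))
M = U @ np.diag([100.0, 10.0, 1.0]) @ V.T

# Small random initialization of the two factors.
L = 0.1 * rng.normal(size=(n, r))
R = 0.1 * rng.normal(size=(n, r))

eta = 0.5
for it in range(201):
    E = L @ R.T - M                          # residual
    gL, gR = E @ R, E.T @ L                  # gradients of 0.5*||L R^T - M||_F^2
    # Precondition each gradient by the inverse Gram matrix of the *other* factor
    # (scaled gradient descent); the tiny ridge keeps the inverse well defined.
    L, R = (L - eta * gL @ np.linalg.inv(R.T @ R + 1e-8 * np.eye(r)),
            R - eta * gR @ np.linalg.inv(L.T @ L + 1e-8 * np.eye(r)))
    if it % 50 == 0:
        print(it, np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))
```

The point of the preconditioner is that progress no longer degrades with the conditioning of M, unlike plain gradient descent on the factors.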

.@tomgoldsteincs is promising a non-mathematical talk, replacing theorems with unrealistic assumptions by “hacky hand-wavey” experiments in favor of fundamental science at #DeepMath

[thserra's tweet image]

Nina Balcan’s keynote starts with a neat overview of machine learning robustness, as well as of the more recent focus on reliability #DeepMath

[thserra's tweet image]

DeepMath 2023 @deepmath1 (or FoundationalMath these days!?) is about to start. .@SoledadVillar5 asked ChatGPT for some help with a joke to start the conference #DeepMath

[thserra's tweet image]

Abstract deadline for #deepmath is June 15th. We are offering travel grants for 1) attendees with limited institutional funds: forms.gle/ZvsRn9jcLBezBG… and 2) dependent care: forms.gle/CFnZH5WppL3yUY…

Spread the word! DeepMath is happening again, this year at Johns Hopkins University (Baltimore, MD, USA). Make sure to submit your best theoretical work focusing on deep neural networks ☺️. The call for submissions is open until June 15th: deepmath-conference.com/submissions



Multiplication is more than memorized facts, hand tricks, and an algorithm. In my class we discuss what multiplication is used for, what it really IS, and finding the strategies that best match a situation/problem. I care that they “get” what’s going on ✖️🤔 #ocsbMath #deepmath

[MrsThompsoncrew's tweet image]

Our second talk of the day is Courtney Paquette on dynamics of stochastic optimization algorithms #deepmath

[gmishne's tweet image]
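
As a reminder of the object being analyzed (a toy illustration, not the speaker's model): the "dynamics" are how quantities such as the risk evolve along the iterates of a stochastic optimization algorithm, e.g. SGD with a constant step size on least squares. All problem sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: y = A x* + noise; track the risk along SGD iterates.
n, d = 500, 50
A = rng.normal(size=(n, d)) / np.sqrt(d)
x_star = rng.normal(size=d)
y = A @ x_star + 0.1 * rng.normal(size=n)

def risk(x):
    return 0.5 * np.mean((A @ x - y) ** 2)

x = np.zeros(d)
step = 0.5
for t in range(2001):
    i = rng.integers(n)                  # sample one row uniformly
    grad = (A[i] @ x - y[i]) * A[i]      # stochastic gradient of 0.5*(a_i.x - y_i)^2
    x -= step * grad
    if t % 200 == 0:
        print(t, round(risk(x), 4))      # risk trajectory along the iterates
```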

Reserve your spot at @deepmath1 #Deepmath at ☀️UCSD🏖️ (Nov 17-18)! Space is limited. Register here: eventbrite.com/e/deepmath-202… Dependent care travel grants: forms.gle/TGdf92UhVRsLEk…

[gmishne's tweet image]

This tweet is interesting. As is the number of likes and retweets. #tweetception #deepmath #natural


Training with Syzygies too #deepmath

[zamakany's tweet image]

Swing by #deepmath poster 7 to learn about "Random features with neuronal tuning". @tweetbarrage and I will be hanging out during the session tomorrow. Work w/ @bingbrunton

[KameronDHarris's tweet image]
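
For context, a random-features model fixes random first-layer weights, applies a nonlinearity, and trains only a linear readout. The sketch below is that generic setup with ReLU features and ridge regression, not the poster's specific neuronally tuned construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth scalar target on 1D inputs.
n, n_test, width = 200, 200, 500
x = rng.uniform(-3, 3, size=(n, 1))
x_test = rng.uniform(-3, 3, size=(n_test, 1))
f = lambda z: np.sin(2 * z) + 0.5 * z
y = f(x).ravel() + 0.1 * rng.normal(size=n)

# Random (fixed, untrained) features: ReLU units with random weights and biases.
W = rng.normal(size=(1, width))
b = rng.uniform(-3, 3, size=width)
phi = lambda z: np.maximum(z @ W + b, 0.0)

# Train only the linear readout with ridge regression.
lam = 1e-2
F = phi(x)
w = np.linalg.solve(F.T @ F + lam * np.eye(width), F.T @ y)

pred = phi(x_test) @ w
print("test MSE:", np.mean((pred - f(x_test).ravel()) ** 2))
```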

Rene Vidal is on stage... a brief history of neural networks. @deepmath1 #deepmath

[zamakany's tweet image]

Generalization and regularization theory from Rene Vidal's talk #deepmath @deepmath1

[zamakany's tweet image]
