Samaya
@TechSamaya
Tweets on Machine Learning, Technology, and Startups. ML/NLP weekly newsletter: http://newsletter.samaya.tech/
Contrastive Learning has been shown to be an effective way to learn representations for various tasks. This post summarizes some of the important papers on the method. blog.samaya.tech/2022/04/Contra… #DeepLearning
It's tough to get GANs to train. Lots of problems can arise, from mode collapse to training instability. This post summarizes the problems and their solutions. blog.samaya.tech/2022/04/GAN-.h… #DeepLearning #MachineLearning
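If you want the core mechanic in code, here is a minimal sketch of an InfoNCE-style contrastive loss of the kind the post covers (batch size, embedding dim, and temperature below are illustrative, not from the post):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) embeddings of two augmented views of the same
    # examples. Row i of z1 and row i of z2 form a positive pair; every
    # other row in the batch acts as a negative.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature                       # cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)    # positives on diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random "embeddings" standing in for an encoder's output.
loss = info_nce_loss(torch.randn(32, 128), torch.randn(32, 128))
```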
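One widely used remedy for the instability is a gradient penalty in the WGAN-GP style; a minimal PyTorch sketch, assuming image-shaped inputs and a discriminator that returns one scalar score per sample:

```python
import torch

def gradient_penalty(discriminator, real, fake):
    # WGAN-GP: push the discriminator's gradient norm toward 1 on random
    # interpolations between real and fake samples.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(interp)
    grads, = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )
    grads = grads.flatten(start_dim=1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Added to the discriminator loss as: d_loss += lambda_gp * penalty
```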
We integrated Decision Transformers, an Offline Reinforcement Learning method, into the 🤗 transformers library and the @huggingface Hub 🥳 ➕ 9 pre-trained models for continuous control tasks in Gym 🔥 We wrote a tutorial if you want to try it 👉bit.ly/36SI4Ds
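To get a feel for the API before diving into the tutorial, loading one of the pre-trained checkpoints is a one-liner (the checkpoint name below is one of the Gym Hopper models on the Hub):

```python
from transformers import DecisionTransformerModel

# Downloads one of the pre-trained continuous-control checkpoints.
model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-hopper-medium"
)
# The forward pass expects trajectory tensors (states, actions,
# returns-to-go, timesteps); see the linked tutorial for a full rollout.
```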
Today, we’re sharing a roundup of Meta AI’s recent cutting-edge multimodal research, which we believe will collectively lead to more interactive, immersive, and smarter AI systems of the future: ai.facebook.com/blog/advances-…
An open-source solution is 25% more accurate than Amazon Forecast and 20% more accurate than fbprophet. It also runs 4x faster than Amazon Forecast and costs less. github.com/Nixtla/nixtla #MachineLearning
Terraform is a powerful tool for building, changing, and versioning infrastructure. With its infrastructure-as-code approach, execution plans, and resource graph, Terraform is a must-have for any #infrastructure engineer! github.com/hashicorp/terr… #OpenSource #backend
The 2022 AI Index is out, and it's full of interesting data on the industrialization of AI and mounting ethical concerns. Check it out! hai.stanford.edu/news/2022-ai-i… #MachineLearning
Check out how Yahoo, Netflix, and DoorDash designed their recommendation systems blog.samaya.tech/2022/03/recomm…
Introducing the Multimodal Bottleneck Transformer, a novel transformer-based model for multimodal fusion that restricts cross-modal attention flow to achieve state-of-the-art results on video classification tasks with less compute. Read more ↓ goo.gle/3MN3YZz
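A toy PyTorch sketch of the bottleneck idea (not the paper's implementation; all sizes are made up): the two modalities never attend to each other directly and exchange information only through a few shared tokens.

```python
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    def __init__(self, dim=256, n_bottleneck=4, n_heads=4):
        super().__init__()
        # A small set of shared tokens carries all cross-modal information.
        self.bottleneck = nn.Parameter(torch.randn(1, n_bottleneck, dim))
        self.attn_a = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, tokens_a, tokens_b):
        btl = self.bottleneck.expand(tokens_a.size(0), -1, -1)
        # Each modality self-attends over its own tokens + the bottleneck.
        seq_a = torch.cat([tokens_a, btl], dim=1)
        seq_b = torch.cat([tokens_b, btl], dim=1)
        out_a, _ = self.attn_a(seq_a, seq_a, seq_a)
        out_b, _ = self.attn_b(seq_b, seq_b, seq_b)
        n_a, n_b = tokens_a.size(1), tokens_b.size(1)
        # Merge the two modalities' updated bottleneck tokens.
        new_btl = (out_a[:, n_a:] + out_b[:, n_b:]) / 2
        return out_a[:, :n_a], out_b[:, :n_b], new_btl

layer = BottleneckFusionLayer()
a, v = torch.randn(2, 10, 256), torch.randn(2, 20, 256)
audio_out, video_out, fused = layer(a, v)
```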
See how GPT-3 explains Instance-Conditioned GAN. The paper is brand new, yet the generated explanation is surprisingly good. blog.samaya.tech/2022/03/gpt3-i… #NLP #GPT3 #AI #RPA
Fine-Tuning with Hugging Face Trainer. Check out my video tutorial to learn how to improve and simplify your ML model fine-tuning workflow with the Hugging Face Trainer. The API is very well designed. @huggingface rocks 🚀 Video: youtu.be/L6Dr8AFXMd8 Code: github.com/katanaml/sparr…
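For reference, a minimal Trainer fine-tuning loop looks roughly like this (the model and dataset choices below are illustrative, not necessarily the ones from the video):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    # Small subsets keep the demo quick; drop .select() for a real run.
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=dataset["test"].select(range(500)),
)
trainer.train()
```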
A short post on overfitting and how to detect it in your model. blog.samaya.tech/2022/03/overfi…
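The usual signal is a training loss that keeps falling while validation loss stalls or rises. A toy detector over per-epoch loss histories (the patience heuristic is an assumption, not from the post):

```python
def looks_overfit(train_losses, val_losses, patience=3):
    # True if validation loss has not improved for `patience` epochs
    # while training loss is still decreasing.
    if len(val_losses) <= patience:
        return False
    best_earlier = min(val_losses[:-patience])
    val_stalled = all(v >= best_earlier for v in val_losses[-patience:])
    train_falling = train_losses[-1] < train_losses[-patience - 1]
    return val_stalled and train_falling

print(looks_overfit([0.9, 0.6, 0.4, 0.3, 0.2],
                    [0.8, 0.6, 0.62, 0.65, 0.7]))  # True
```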
LMs can learn via inference alone through demonstrations -- but how does it work? We find that LMs do not really need correct input-output pairs. Randomly replacing labels in the demonstrations barely hurts performance, consistently over 12 models. arxiv.org/abs/2202.12837
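A sketch of what "randomly replacing labels in the demonstrations" means in practice (the review texts below are made up for illustration):

```python
import random

demos = [("the movie was wonderful", "positive"),
         ("a tedious, lifeless film", "negative"),
         ("an instant classic", "positive")]
label_space = ["positive", "negative"]

# Build an in-context prompt whose demonstration labels are random
# draws from the label space rather than the gold labels.
prompt = ""
for text, _gold in demos:
    prompt += f"Review: {text}\nSentiment: {random.choice(label_space)}\n\n"
prompt += "Review: a joy from start to finish\nSentiment:"
print(prompt)
```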
We’re pleased to announce new advances in SEER, Meta AI’s groundbreaking self-supervised #computervision model. SEER is now not only much more powerful, it also produces fairer, more robust computer vision models. Learn more: ai.facebook.com/blog/seer-10b-…
Check out this easy-to-read tutorial on how to use a pre-trained sentiment analysis model or train one on your own data #NLP
Getting started with sentiment analysis has never been easier! 🚀 In this new post, you’ll learn how to use pre-trained models, how to fine-tune your own sentiment model and how to use these models to analyze tweets in just a few lines of Python code 🔥 huggingface.co/blog/sentiment…
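The pre-trained path from the post really is just a few lines (the printed output is an example of the result format, not a guaranteed score):

```python
from transformers import pipeline

# Uses the pipeline's default sentiment checkpoint.
classifier = pipeline("sentiment-analysis")
print(classifier(["Getting started has never been easier!",
                  "Training GANs is painful."]))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...},
#       {'label': 'NEGATIVE', 'score': 0.99...}]
```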
A ready-to-use GUI to build your own prompts with PromptSource. It is also connected to @huggingface Datasets to make the process smoother. Paper: arxiv.org/abs/2202.01279 Code: github.com/bigscience-wor… #NLP #GPT3 #paper #OpenSource
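Beyond the GUI, the templates are also usable programmatically; a sketch assuming the repo's `DatasetTemplates` interface (the dataset choice is illustrative):

```python
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

templates = DatasetTemplates("ag_news")
example = load_dataset("ag_news", split="train")[0]
for name in templates.all_template_names:
    # apply() renders the template into [input_text, target_text].
    rendered = templates[name].apply(example)
    print(name, "->", rendered[0][:60])
```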
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts. The 1.2T-parameter GLaM achieves better overall zero-shot performance than GPT-3 across 29 NLP tasks while consuming only 1/3 of the energy used to train GPT-3. arxiv.org/abs/2112.06905
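A toy top-2 mixture-of-experts layer in the spirit of GLaM's sparse activation (all sizes made up; real MoE layers use batched expert dispatch, not this dense loop):

```python
import torch
import torch.nn as nn

class Top2MoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, hidden=256):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                          nn.Linear(hidden, dim))
            for _ in range(n_experts))

    def forward(self, x):                       # x: (tokens, dim)
        weights = self.router(x).softmax(dim=-1)
        top_w, top_idx = weights.topk(2, dim=-1)
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        # Each token is processed by only its 2 highest-scoring experts,
        # so most parameters stay inactive per token.
        for k in range(2):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, k] == e
                if mask.any():
                    out[mask] += top_w[mask, k, None] * expert(x[mask])
        return out

print(Top2MoE()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```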