Interesting article about the "future" of data centers and key trends. The "future" for power consumption is already the present when it comes to GPU compute. ~8-10 kW per server is just around the corner⚡ and very few data centers are ready (rough math below) #datacenter #gpu datacenterdynamics.com/en/opinions/th…
datacenterdynamics.com
The future of data centers: Seven key trends
Examining the next ten years. A look at what the future holds for the data center industry
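To put the ~8-10 kW per-server figure in context, here is a minimal back-of-envelope sketch in Python. Every number in it (GPU count, TDP, host overhead, PSU efficiency) is an illustrative assumption, not a figure from the article or the tweet.

```python
# Rough estimate of wall power draw for a dense GPU server.
# All numbers below are illustrative assumptions, not measurements.

GPUS_PER_SERVER = 8          # assumption: typical dense GPU node
GPU_TDP_W = 700              # assumption: high-end accelerator TDP, watts
HOST_OVERHEAD_W = 1500       # assumption: CPUs, RAM, NICs, fans, storage
PSU_EFFICIENCY = 0.92        # wall draw is higher than component draw

it_load_w = GPUS_PER_SERVER * GPU_TDP_W + HOST_OVERHEAD_W
wall_draw_kw = it_load_w / PSU_EFFICIENCY / 1000
print(f"Estimated wall draw per server: ~{wall_draw_kw:.1f} kW")
# -> ~7.7 kW with these assumptions; bump the GPU count or TDP slightly
#    and the ~8-10 kW per-server figure follows quickly.
```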
We are excited to announce the release of Stable Diffusion Version 2! Stable Diffusion V1 changed the nature of open source AI & spawned hundreds of other innovations all over the world. We hope V2 also provides many new possibilities! Link → stability.ai/blog/stable-di…
ML continues to expand into every industry. Mental health is another one of them. 🧠 bhbusiness.com/2022/08/22/dig…
DALL-E 2 was recently released behind a paywall by an extremely well-funded company. Just yesterday, a group of independent researchers released their own model (Stable Diffusion) that you can use in a few lines of code for FREE. The speed of the ML research community is insane 🤯
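For a sense of what "a few lines of code" looks like in practice, here is a minimal sketch using the Hugging Face diffusers library (a toolchain assumption on my part; the post does not name one). It presumes `torch` and `diffusers` are installed, a CUDA GPU is available, and the model license has been accepted on the Hugging Face Hub.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion v1.4 weights in half precision and move to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Generate one image from a text prompt and save it to disk.
image = pipe("an astronaut riding a horse on Mars, photorealistic").images[0]
image.save("astronaut.png")
```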
Ran into another startup using their own kit to cut down on cloud costs. Their savings are ~40x (!!)
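The ~40x claim can't be checked from a tweet, but a toy comparison shows how amortized owned hardware versus on-demand cloud pricing can produce large multiples. Every price and lifetime below is a hypothetical placeholder, not the startup's actual figures.

```python
# Toy comparison: on-demand cloud GPU cost vs. amortized owned hardware.
# All numbers here are hypothetical placeholders for illustration only.

CLOUD_RATE_PER_GPU_HR = 4.00        # assumed on-demand price, $/GPU-hour
HOURS_PER_YEAR = 24 * 365

OWNED_GPU_COST = 15_000             # assumed purchase price per GPU, $
AMORTIZATION_YEARS = 4
POWER_AND_HOSTING_PER_YEAR = 1_500  # assumed electricity + colo per GPU, $

cloud_cost_per_year = CLOUD_RATE_PER_GPU_HR * HOURS_PER_YEAR
owned_cost_per_year = OWNED_GPU_COST / AMORTIZATION_YEARS + POWER_AND_HOSTING_PER_YEAR

print(f"Cloud: ${cloud_cost_per_year:,.0f} per GPU-year")
print(f"Owned: ${owned_cost_per_year:,.0f} per GPU-year")
print(f"Ratio: ~{cloud_cost_per_year / owned_cost_per_year:.0f}x")
# With these made-up inputs the ratio is ~7x; a ~40x gap implies pricier
# cloud instances, cheaper hardware, or a longer utilization horizon.
```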
Pleased to announce that my Machine Learning textbook is now a free download -- enjoy! tinyurl.com/mtzuckhy
Earth as a dynamical system is a really bad computer. A lot of information processing is concentrated in a few tiny compute nodes (brains, chips) with terrible interconnects, some as crude as physically moving things around and air pressure waves. And powered primitively by combustion.
Great page comparing many cloud GPU providers! cloud-gpus.com by @ml_contests
For people wondering why, as a "vision person", I am interested in language models: 1) the distinctions between different areas of AI are blurring very fast (see my earlier tweet thread); 2) language models are engines of generalization: evjang.com/2021/12/17/lan…
The ongoing consolidation in AI is incredible. Thread: ➡️ When I started ~a decade ago, vision, speech, natural language, reinforcement learning, etc. were completely separate; you couldn't read papers across areas, and the approaches were completely different, often not even ML-based.
DALL-E (L) vs Midjourney (R) 🧵 MJ has a certain "je ne sais quoi": the imperfections are more beautiful, a bit like an analog synth. It's often more contextually creative, and amazing with textures and vibe. DALL-E deals better with very clearly instructed scenes. Same prompt:
Most crucial machine learning lesson I've learned: A better algorithm may improve your model's performance, but a better dataset will double it. Data matters most.