#30daysofdatascience search results
Day 2 of #30daysofdatascience Today I learnt how to work with CSV, JSON and Excel data in pandas. I imported data from a JSON file, performed a few operations on it (such as getting the first 3 rows of the DataFrame) and calculated the mean and the max of a given column
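A minimal pandas sketch of that kind of workflow; the data.json file name and the price column are hypothetical placeholders, not from the original post:

```python
import pandas as pd

# read_csv / read_excel follow the same pattern for CSV and Excel sources.
df = pd.read_json("data.json")   # hypothetical JSON file

print(df.head(3))            # first 3 rows of the DataFrame
print(df["price"].mean())    # mean of a given (hypothetical) column
print(df["price"].max())     # max of that column
```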
Starting today, I’m taking on a 30 Days of Data Science challenge. For the next 30 days, I’ll 📍Learn and solidify my DS knowledge. 📍Build projects. 📍Transition into learning ML. I’ll be sharing my journey and what I learn here. This should be fun😄 #30DaysOfDataScience
Day 2 of #30DaysOfDataScience I’m still on module 4 of WQU’s Data Science Lab. Today’s lesson was all about decision trees and building decision tree models to classify earthquake damage.
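A hedged sketch of fitting a decision tree classifier with scikit-learn; the synthetic data below stands in for the building features and is not the WQU earthquake dataset:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for building features / damage labels.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

tree = DecisionTreeClassifier(max_depth=6, random_state=42)  # max_depth limits overfitting
tree.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```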
Day 1 of #30DaysOfDataScience I’m currently taking the Data Science Lab course by @worldquantu Today, I: 📌Learned to load SQL data into pandas 📌Understood the different SQL joins (explanation below) 📌Built a logistic regression model to predict earthquake damage
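A sketch of loading SQL data into pandas with a join; the database file, table and column names are hypothetical, chosen only to show the pattern:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("earthquakes.sqlite")  # hypothetical SQLite file
df = pd.read_sql(
    """
    SELECT b.building_id, b.age, d.damage_grade
    FROM buildings AS b
    INNER JOIN damage AS d ON b.building_id = d.building_id
    -- INNER keeps only rows matched in both tables;
    -- LEFT/RIGHT/FULL also keep unmatched rows from one or both sides.
    """,
    conn,
)
conn.close()
print(df.head())
```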
excited about that. Till day 7!👀 #30daysofdatascience #DataScience
Day 15&16 of #30DaysOfDataScience I noticed I was always struggling with my visualizations and knowing which one to use when. So I went over to YouTube to make them stick better (see below)
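One rough rule of thumb, as a matplotlib sketch on synthetic data: histograms for one variable's distribution, scatter plots for the relationship between two numeric variables, bar charts for comparing categories.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x, y = rng.normal(size=200), rng.normal(size=200)

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(x, bins=20)                  # histogram: distribution of one variable
axes[1].scatter(x, y, s=10)               # scatter: relationship between two variables
axes[2].bar(["A", "B", "C"], [3, 7, 5])   # bar: comparing categories
plt.tight_layout()
plt.show()
```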
Day 13&14 of #30DaysOfDataScience 📍Started module 7 on A/B testing. 📍Revisited null hypothesis and alternative hypothesis. I can’t believe the number of statistics concepts that get thrown at you while learning ML😅 One more module to go.🥹🎉
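An illustrative two-sample t-test in the A/B-testing spirit; the groups and numbers below are synthetic and not from the course:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.10, scale=0.05, size=500)    # group A metric (made up)
treatment = rng.normal(loc=0.12, scale=0.05, size=500)  # group B metric (made up)

# H0 (null): the two group means are equal; H1 (alternative): they differ.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Reject H0 at the 5% level only if p_value < 0.05.
```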
Day 12 of #30DaysOfDataScience. 📍Completed module 6; the project was an introduction to unsupervised learning. 📍Understood concepts like inertia and silhouette score and the mathematics behind them. 📍Learned the use of PCA in dimensionality reduction.
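A small scikit-learn sketch of those three ideas on synthetic blobs (not the course project): k-means inertia, silhouette score, and PCA for dimensionality reduction.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, n_features=8, random_state=0)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print("inertia:", km.inertia_)                          # within-cluster sum of squared distances
print("silhouette:", silhouette_score(X, km.labels_))   # cohesion vs. separation, in [-1, 1]

# PCA for dimensionality reduction, e.g. down to 2 components for plotting.
X_2d = PCA(n_components=2).fit_transform(X)
print(X_2d.shape)
```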
I’m back 🙈 I had a backlog of tasks to complete in June, so I had to slow down my learning and even pause the challenge. I’ll be resuming fully from today. Happy new month🎉
Day 10&11 of #30DaysOfDataScience I’m still on the same module/project about predicting bankruptcy 📍Over the weekend, I read more on random forests and hyperparameter tuning using grid search. 📍Learned about gradient boosting, precision and recall.
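A sketch of a random forest tuned with grid search and scored on precision and recall; the synthetic imbalanced data and the parameter grid are placeholders, not the bankruptcy project:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import precision_score, recall_score

X, y = make_classification(n_samples=2_000, weights=[0.9, 0.1], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

grid = GridSearchCV(
    RandomForestClassifier(random_state=1),
    param_grid={"n_estimators": [100, 300], "max_depth": [5, 10, None]},  # example grid
    cv=5,
)
grid.fit(X_train, y_train)

pred = grid.predict(X_test)
print("precision:", precision_score(y_test, pred))  # of predicted positives, how many are correct
print("recall:", recall_score(y_test, pred))        # of actual positives, how many were found
```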
Day 9 of #30DaysOfDataScience 📍Continued the course, but I had a hard time understanding cross-validation & GridSearchCV, so I referenced external resources for clarity. I’d appreciate recommendations for resources that explain the theory behind ML concepts.
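The core idea of k-fold cross-validation, as a small scikit-learn sketch on a built-in dataset rather than the course project:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# 5-fold CV: split the data into 5 parts, train on 4 and score on the held-out
# part, rotating so every part is used for validation exactly once.
scores = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=5)
print(scores, scores.mean())
```

GridSearchCV wraps the same idea: it runs this kind of cross-validation once per hyperparameter combination and keeps the best one.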
Day 8 of #30DaysOfDataScience 📍Learned to use a context manager to load JSON files (both compressed and uncompressed) into a DataFrame 📍Addressed an imbalanced dataset using resampling (see explanation below) 📍Used pickle to save our trained model to a file
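A sketch of those three steps; the data.json.gz file, the target column, and the pickled object are hypothetical placeholders:

```python
import gzip
import json
import pickle
import pandas as pd
from sklearn.utils import resample

# Context manager ("with" block) to read a gzip-compressed JSON file.
with gzip.open("data.json.gz", "rt", encoding="utf-8") as f:   # hypothetical file
    df = pd.DataFrame(json.load(f))

# Naive upsampling of the minority class so both classes are the same size
# (assumes a binary "target" column).
majority = df[df["target"] == 0]
minority = df[df["target"] == 1]
minority_up = resample(minority, replace=True, n_samples=len(majority), random_state=42)
df_balanced = pd.concat([majority, minority_up])

# Persist a trained model (placeholder object here) with pickle.
with open("model.pkl", "wb") as f:
    pickle.dump({"model": "placeholder"}, f)
```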
Day 6&7 of #30daysofDataScience 📍Completed the classification task previously mentioned. The goal of the task was to predict the top 3 villages (out of 50) that require solar panels the most, based on certain features. 📍Began Module 5 lectures on @worldquantu
Day 1 of #30DaysOfDataScience 🔍 What is Data Science, really? It’s not just Python, ML, or dashboards. It’s about turning raw data into real decisions. Let’s break it down. 🧵 1. Data Collection – Scraping, APIs, databases 2. Data Cleaning – Remove noise, fix gaps
Day5 of #30DaysOfDataScience I found and read this well-detailed article on linear regression: manishmazumder5.substack.com/p/mastering-li… That’s pretty much it😀
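For reference, a minimal ordinary-least-squares fit with scikit-learn on synthetic data (y ≈ 3x + 2 plus noise); this is just an illustration, not taken from the linked article:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=1.0, size=100)

reg = LinearRegression().fit(X, y)
print("slope:", reg.coef_[0], "intercept:", reg.intercept_)  # should be close to 3 and 2
```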
Day 4 of #30DaysOfDataScience I’m currently working on a classification task with a clean dataset (no missing values or outliers). I also had to revisit the differences between MinMaxScaler and StandardScaler and learned new feature engineering tips.
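A tiny sketch of the difference between the two scalers on a made-up column:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0], [2.0], [3.0], [10.0]])

# MinMaxScaler squeezes values into [0, 1]; sensitive to the min/max (outliers).
print(MinMaxScaler().fit_transform(X).ravel())

# StandardScaler centers to mean 0 and unit variance (z-scores).
print(StandardScaler().fit_transform(X).ravel())
```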
Day 35/30 of #30DaysOfDataScience I have written my first Medium article on 'Evaluating an Estimator (Bias and Variance)' and continued working on my digital notes. Also, the scraping for the new dataset continues in the background. Link: link.medium.com/XlEm5ivMRIb
Going to start with #30DaysOfDataScience today💫 I've almost completed Andrew Ng's ML course and learned NumPy, Pandas, and Matplotlib basics. I've also done 4 Kaggle notebooks and 3 datasets. Drop some resources you've used to learn data science.
Day 3 of #30DaysOfDataScience I couldn’t post yesterday due to network issues, but I’m finally done with module 4😁. The goal of this project was to predict whether a building would suffer severe earthquake damage. 👇See thread for what I learned
Day 19/30 of #30DaysOfDataScience I have implemented a decision tree on the Rain in Australia dataset and tackled 1 LC problem. Also, the scraping for the new dataset continues in the background.
Day 32/30 of #30DaysOfDataScience I have finished working on the New York City Taxi Fare Prediction dataset on Kaggle and tackled 1 LC problem. Also, the scraping for the new dataset continues in the background.
The last 6 days were lost to my college project, which is why I took a break from #30DaysOfDataScience. The grind resumes tomorrow! Wish me luck!
Day 27/30 of #30DaysOfDataScience I completed the Rossmann Store Sales dataset and tackled 1 LC problem. Also, the scraping for the new dataset continues in the background.
Day 36/30 of #30DaysOfDataScience With about 10% of the new dataset collected, I've started combining the files and performing some initial data cleaning and feature engineering. The scraping for this dataset continues in the background.
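A sketch of that combine-and-clean step, assuming the scraped chunks land as CSV files; the glob pattern and the column names are placeholders, not from the actual pipeline:

```python
import glob
import pandas as pd

# Hypothetical location/pattern for the scraped chunks.
files = sorted(glob.glob("scraped/*.csv"))
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

# Basic cleaning plus one simple engineered feature (placeholder columns).
df = df.drop_duplicates().dropna(subset=["price"])
df["price_per_unit"] = df["price"] / df["quantity"]
print(df.shape)
```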
Day 34/30 of #30DaysOfDataScience I have started reading different articles and documentation on ML and compiling digital notes, which I will share once they are complete. Also, the scraping for the new dataset continues in the background.
Day 33/30 of #30DaysOfDataScience I have started reading different articles on ML and compiling digital notes, which I will share once they are complete. I tackled 1 LC problem, and the scraping for the new dataset continues in the background.
Day 31/30 of #30DaysOfDataScience I have continued working on the New York City Taxi Fare Prediction dataset on Kaggle. Also, the scraping for the new dataset continues in the background.
I am happy to be part of this year's PyLadies Data Science Bootcamp. Day 1 assignment completed and I am prepared for the next 29 days ahead #PyladiesGhanaDS #datascience #30DaysOfDataScience #Day1
Day 2 of #30DaysOfDataScience and today I am recapping: - Creating arrays using NumPy - Exploring the created arrays - The extras: zeros, eye, etc. It seems I didn't forget a lot #Python #DataScience #DataScientist #gdsc
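A quick recap sketch of those NumPy basics:

```python
import numpy as np

a = np.array([1, 2, 3])          # array from a Python list
print(a.shape, a.dtype, a.ndim)  # exploring the created array

print(np.zeros((2, 3)))     # 2x3 array of zeros
print(np.ones(4))           # array of ones
print(np.eye(3))            # 3x3 identity matrix
print(np.arange(0, 10, 2))  # evenly spaced values
```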
The more I encounter concepts, the better I understand them🤸♀️. Day12 of #30daysofDataScience with @PyLadiesGhana as a mentor. For our second office hour session, I quizzed them on concepts they learnt from their course materials.
A quick look at the syntax difference between pandas.eval and pandas.DataFrame.eval: DataFrame.eval() treats column names as variables within the evaluated expression. #Python #30DaysOfDataScience #pandas #data #DataScience
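A small sketch of that difference on a toy DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

# Top-level pandas.eval: the DataFrame must be referenced by name.
total1 = pd.eval("df.a + df.b")

# DataFrame.eval: column names act as variables inside the expression.
total2 = df.eval("a + b")

print(total1.equals(total2))  # True
```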
Are you a #student looking to gain #DataScience skills? Introducing: #30DaysOfDataScience…we’ll go from understanding the Python language to creating Machine Learning models both on Azure and in Python. Excited? Register here: msft.it/6010dMFdM @BethanyJep @carlottacaste
2PM, WAT today, @Nasereliver will guide us through building a Fraud Detection Model with Python. Link in the thread #30DaysOfDataScience
Resuming my #30DaysOfDataScience which I stopped for some technical reasons. So, yesterday was officially my day 1, covering several topics ranging from introduction to data indexing and selection on pandas Series, DataFrame and Index objects.
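A small sketch of those indexing and selection patterns on a toy DataFrame (the data below is made up):

```python
import pandas as pd

df = pd.DataFrame(
    {"city": ["Accra", "Lagos", "Nairobi"], "pop": [2.5, 14.8, 4.4]},
    index=["gh", "ng", "ke"],
)

print(df.loc["ng"])       # label-based row selection
print(df.iloc[0])         # position-based row selection
print(df.loc[:, "pop"])   # a single column as a Series
print(df[df["pop"] > 4])  # boolean masking
print(df.index)           # the Index object itself
```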
Week one is done and dusted in #30DaysOfDataScience. This week's challenge will be about data preparation and visualization. Follow along with this roadmap: aka.ms/Data-30DS Catch up with your fellow colleagues here: github.com/microsoft/30da…
Starting a challenge for myself #30daysofDataScience #30daysofDS Today’s practice in statistics and R