#mlfromscratch search results

"I learned very early the difference between knowing the name of something and knowing something." ~ Richard Feynman This quote is the entire reason for my #MLfromScratch project. It's easy to import numpy. That's knowing the name. But what is a dot product actually doing?…

mrigesh_thakur's tweet image.
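
To make that concrete, here is a minimal sketch of what np.dot computes for 1-D arrays (illustrative, not the author's code): multiply elementwise, then accumulate.

```python
import numpy as np

def dot(a, b):
    # A dot product is just multiply-and-accumulate: sum_i a_i * b_i
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.isclose(dot(a, b), np.dot(a, b))  # both give 32.0
```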

Today was day 5 of learning ML from the basics, and it's quite evident... the heatmap is heating up! Today I explored EDA through multivariate analysis, moving on to Pandas profiling. #LearnInPublic #BuildingInPublic #MLFromScratch #DataScienceCommunity #KeepLearning

TSrivastav40140's tweet image.
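
For anyone following along, a minimal sketch of that multivariate step, with a toy DataFrame standing in for the real dataset (which isn't shown):

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Toy stand-in for the real dataset
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 4)), columns=list("abcd"))

corr = df.corr()  # pairwise correlations between all numeric columns
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Feature correlation heatmap")
plt.show()
```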

Day 19: Losses in focus: MAE, MSE, Huber, Log Loss. Different goals, different penalties. Know when to use which. #MLfromScratch #LossFunctions

dataneuron's tweet image.
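
A minimal NumPy sketch of the four losses (illustrative; the pictured code isn't shown):

```python
import numpy as np

def mae(y, p):
    return np.mean(np.abs(y - p))              # robust to outliers

def mse(y, p):
    return np.mean((y - p) ** 2)               # penalizes large errors hard

def huber(y, p, delta=1.0):
    r = np.abs(y - p)                          # quadratic near 0, linear in the tails
    return np.mean(np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta)))

def log_loss(y, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)               # for binary classification probabilities
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```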

Day 11 of my summer fundamentals series: Built a basic ANN from scratch — perceptron, forward pass, and gradient descent. Watched it learn to classify data over time. #MLfromScratch #ANN #DeepLearning #DataScience

dataneuron's tweet image.
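
A minimal single-neuron version of that pipeline, trained on an AND gate (a sketch, not the author's implementation):

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)        # AND gate
rng = np.random.default_rng(0)
w, b, lr = rng.normal(size=2), 0.0, 0.5

for _ in range(2000):
    p = sigmoid(X @ w + b)                     # forward pass
    grad = p - y                               # dLoss/dz for log loss + sigmoid
    w -= lr * (X.T @ grad) / len(y)            # gradient descent step
    b -= lr * grad.mean()

print(np.round(sigmoid(X @ w + b), 2))         # approaches [0, 0, 0, 1]
```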

Day 22 Smarter Steps Moved past plain gradient descent: → Momentum adds speed → NAG looks ahead → Adagrad adapts on the fly Different techniques, one goal — descend efficiently. #MLfromScratch #Optimizers #DeepLearning
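
Sketches of those update rules, where g is the current gradient (illustrative signatures, not the author's code):

```python
import numpy as np

def momentum_step(w, g, v, lr=0.01, beta=0.9):
    v = beta * v + g                            # velocity accumulates past gradients
    return w - lr * v, v

# NAG is the same update, but g is evaluated at the look-ahead point w - lr * beta * v.

def adagrad_step(w, g, G, lr=0.01, eps=1e-8):
    G = G + g ** 2                              # per-parameter squared-gradient history
    return w - lr * g / (np.sqrt(G) + eps), G   # step size adapts on the fly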


That’s the core of Linear Regression! 🧠 In my next posts, I’ll code LR from scratch in Python, visualize predictions, and show real results. Curious to see it in action? Follow along! 🚀 #MLfromScratch


Today I compared MSE cost function using NumPy vs from scratch on a real-world dataset 🧠📊 Learned how math powers ML behind the scenes! ✅ Improved Python + logic 📸 Code below Next up: Gradient Descent from scratch 🔥 #MLFromScratch #Python #AIJourney ⌨️🚀

AkshatGair23428's tweet image.
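
The comparison likely looks something like this (a sketch; the pictured code isn't visible):

```python
import numpy as np

def mse_scratch(y, p):                 # explicit accumulation, no NumPy math
    total = 0.0
    for yi, pi in zip(y, p):
        total += (yi - pi) ** 2
    return total / len(y)

def mse_numpy(y, p):                   # the vectorized equivalent
    return np.mean((y - p) ** 2)

y = np.array([3.0, 5.0, 7.0])
p = np.array([2.5, 5.0, 8.0])
assert np.isclose(mse_scratch(y, p), mse_numpy(y, p))  # both ~0.4167
```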

Day 17 of my summer fundamentals series: Built a GAN from scratch in NumPy. Generator creates, Discriminator critiques. They train together in a zero-sum game. Great for generating realistic data. #MLfromScratch #GAN #DL

dataneuron's tweet image.
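
A toy sketch of that zero-sum objective, with a one-parameter generator and a logistic discriminator (the real architecture and training loop are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

real = rng.normal(3.0, 1.0, size=64)            # "real" data from N(3, 1)
z = rng.normal(size=64)                         # noise fed to the generator
w_g, w_d, b_d = 1.0, 1.0, 0.0                   # toy generator and discriminator weights

fake = w_g * z                                  # Generator creates
d_real = sigmoid(w_d * real + b_d)              # Discriminator critiques both
d_fake = sigmoid(w_d * fake + b_d)

# D wants d_real -> 1 and d_fake -> 0; G wants to fool D (non-saturating form)
d_loss = -np.mean(np.log(d_real) + np.log(1 - d_fake))
g_loss = -np.mean(np.log(d_fake))
print(d_loss, g_loss)                           # training alternates updates on these
```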

Day 15 of my summer fundamentals series: Built a GRU from scratch using NumPy. Simpler than LSTM — update and reset gates handle memory without a separate cell state. Efficient, elegant, and surprisingly capable. #MLfromScratch #GRU #DeepLearning #AI #RNN

dataneuron's tweet image.
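
One GRU step might be sketched like this (biases omitted; weight names are illustrative):

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    sig = lambda a: 1 / (1 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)                 # update gate: how much old state to keep
    r = sig(Wr @ x + Ur @ h)                 # reset gate: how much history the candidate sees
    h_new = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_new           # blend; no separate cell state needed

d_in, d_h = 4, 3
rng = np.random.default_rng(0)
W = [rng.normal(size=s) * 0.1 for s in [(d_h, d_in), (d_h, d_h)] * 3]
h = gru_cell(rng.normal(size=d_in), np.zeros(d_h), *W)
```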

Day 13 of my summer fundamentals series: Implemented a simple CNN from scratch — built convolution, ReLU, and pooling layers using just NumPy. Watching it learn spatial features from images felt like giving sight to math. #MLfromScratch #CNN #DeepLearning #AI

dataneuron's tweet image.
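
The three building blocks can be sketched in a few lines each (naive loops, illustrative only):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in most DL code)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2x2(x):
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

relu = lambda x: np.maximum(0, x)
edge = np.array([[1, 0, -1]] * 3, dtype=float)   # crude vertical-edge detector
feat = maxpool2x2(relu(conv2d(np.random.rand(8, 8), edge)))
```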

Day 16 of my summer fundamentals series: Built an Autoencoder from scratch in NumPy. Learns compressed representations by reconstructing inputs. Encoder reduces, decoder rebuilds. Unsupervised and powerful for denoising, compression, and more. #MLfromScratch #Autoencoder #DL

dataneuron's tweet image.
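
The forward pass and objective in miniature (training loop omitted; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))                 # 32 samples of 8-D input
W_enc = rng.normal(size=(8, 3)) * 0.1        # encoder reduces: 8 -> 3
W_dec = rng.normal(size=(3, 8)) * 0.1        # decoder rebuilds: 3 -> 8

code = np.tanh(X @ W_enc)                    # compressed representation
X_hat = code @ W_dec                         # reconstruction
loss = np.mean((X - X_hat) ** 2)             # train by minimizing reconstruction error
```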

Day 18 of my summer fundamentals series: Midway through! Built a basic Transformer from scratch—self-attention, multi-head attention, and positional encoding all in NumPy. Incredible how these pieces scale to models like GPT. #MLfromScratch #Transformers #DL

dataneuron's tweet image.
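
The heart of it, scaled dot-product attention, fits in a few lines (a sketch; multi-head splitting and positional encoding omitted):

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V"""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))      # 5 tokens, d = 16
out = attention(x, x, x)          # self-attention: Q, K, V all come from x
```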

Day 14 of my summer fundamentals series: Implemented an LSTM from scratch — built forget, input, and output gates using just NumPy. Watching it capture dependencies across time steps felt like giving memory to math. #MLfromScratch #LSTM #DeepLearning #AI #RNN

dataneuron's tweet image.
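
One LSTM step, with the three gates the tweet mentions (biases omitted; names illustrative):

```python
import numpy as np

def lstm_cell(x, h, c, W):
    sig = lambda a: 1 / (1 + np.exp(-a))
    z = np.concatenate([x, h])
    f = sig(W["f"] @ z)                    # forget gate: what to erase from c
    i = sig(W["i"] @ z)                    # input gate: what new info to write
    o = sig(W["o"] @ z)                    # output gate: what to expose as h
    c = f * c + i * np.tanh(W["g"] @ z)    # cell state carries long-range memory
    return o * np.tanh(c), c

d_in, d_h = 4, 3
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(d_h, d_in + d_h)) * 0.1 for k in "fiog"}
h, c = lstm_cell(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h), W)
```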

Day 12 of my summer fundamentals series: Built a simple Recurrent Neural Network from scratch — implemented time steps, hidden states, and backprop through time (BPTT). Watched it learn from sequences. Feels like giving memory to math. #MLfromScratch #RNN #DeepLearning #AI

dataneuron's tweet image.
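
The forward unroll in miniature (BPTT itself omitted, but caching every hidden state is what makes it possible):

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, h0):
    h, hs = h0, []
    for x in xs:                          # one iteration per time step
        h = np.tanh(Wx @ x + Wh @ h)      # hidden state carries memory forward
        hs.append(h)
    return hs                             # cache for backprop through time

d_in, d_h, T = 4, 3, 6
rng = np.random.default_rng(0)
hs = rnn_forward(rng.normal(size=(T, d_in)),
                 rng.normal(size=(d_h, d_in)) * 0.1,
                 rng.normal(size=(d_h, d_h)) * 0.1,
                 np.zeros(d_h))
```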

Day 20: Losses in focus: -> Hinge Loss -> Categorical Cross-Entropy Margins vs. probabilities. SVMs vs. Softmax. Each loss, its own battlefield. Choose your weapon wisely. #MLfromScratch #LossFunctions
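
Both losses in NumPy (a sketch; conventions noted in the docstrings):

```python
import numpy as np

def hinge(y, s):
    """y in {-1, +1}, s = raw margin scores (the SVM setting)."""
    return np.mean(np.maximum(0, 1 - y * s))

def categorical_ce(Y, P, eps=1e-12):
    """Y one-hot, P softmax probabilities (the multiclass setting)."""
    return -np.mean(np.sum(Y * np.log(np.clip(P, eps, 1.0)), axis=1))
```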


Day 21 Descent into Optimization Explored the core of training: → Batch GD: slow but stable → Stochastic GD: fast, noisy → Mini-Batch GD: best of both worlds Different strategies, same goal — minimize the loss. Pick your path down the slope. #MLfromScratch #GradientDescent
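
All three variants fit one function; the batch size is the only thing that changes (illustrative, with least squares as the example objective):

```python
import numpy as np

def minibatch_gd(X, y, grad_fn, w, lr=0.1, batch=16, epochs=20, seed=0):
    """batch=len(X) is Batch GD, batch=1 is Stochastic GD, in between is Mini-Batch."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        idx = rng.permutation(len(X))                # reshuffle each epoch
        for s in range(0, len(X), batch):
            sl = idx[s:s + batch]
            w = w - lr * grad_fn(X[sl], y[sl], w)    # step on one mini-batch
    return w

grad = lambda Xb, yb, w: 2 * Xb.T @ (Xb @ w - yb) / len(yb)   # least squares
rng = np.random.default_rng(1)
X, w_true = rng.normal(size=(200, 3)), np.array([1.0, -2.0, 0.5])
print(minibatch_gd(X, X @ w_true, grad, np.zeros(3)))          # ~ w_true
```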


Day 30 🧠 Implemented Learning Rate Schedulers from scratch! → 🔹 Constant LR → ⏳ Time-Based Decay → 📉 Step Decay → 📊 Visualized how LR impacts weight updates Controlling LR = training smarter. #100DaysOfML #MLfromScratch #DeepLearning
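
The three schedules in a few lines (t is the epoch index; the constants are illustrative):

```python
def constant_lr(lr0, t):
    return lr0                                  # never changes

def time_based(lr0, t, decay=0.01):
    return lr0 / (1 + decay * t)                # smooth decay over time

def step_decay(lr0, t, drop=0.5, every=10):
    return lr0 * drop ** (t // every)           # halve every 10 epochs

for t in (0, 10, 20):
    print(t, constant_lr(0.1, t), round(time_based(0.1, t), 4), step_decay(0.1, t))
```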


Day 29 🧠 Implemented L1, L2, Dropout, BatchNorm & Early Stopping from scratch! → L1/L2 for regularization → Dropout to prevent overfitting → BatchNorm for stable training → Early Stopping for better generalization #MLfromScratch #DeepLearning #100DaysOfML
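
Minimal sketches of each technique (illustrative; the author's versions aren't shown):

```python
import numpy as np

def l1_penalty(w, lam):
    return lam * np.sum(np.abs(w))             # pushes weights to exactly zero

def l2_penalty(w, lam):
    return lam * np.sum(w ** 2)                # shrinks weights smoothly

def dropout(a, p, rng):
    """Inverted dropout at train time: zero with prob p, rescale to keep E[a]."""
    mask = (rng.random(a.shape) >= p) / (1 - p)
    return a * mask

def batchnorm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch, then learnably rescale and shift."""
    return gamma * (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps) + beta

def should_stop(val_losses, patience=5):
    """Early stopping: halt once the best validation loss is `patience` epochs old."""
    best = val_losses.index(min(val_losses))
    return len(val_losses) - 1 - best >= patience
```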


Day 28 🧠 Activation Functions, from scratch → ReLU: max(0, x) — simple & fast → Sigmoid: squashes to (0,1), good for probs → Tanh: like sigmoid but centered at 0 #MLfromScratch #DeepLearning #ActivationFunctions #100DaysOfML
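
All three in one screenful (straightforward NumPy):

```python
import numpy as np

relu = lambda x: np.maximum(0, x)          # simple and fast, sparse activations
sigmoid = lambda x: 1 / (1 + np.exp(-x))   # squashes to (0, 1), good for probabilities
tanh = np.tanh                             # squashes to (-1, 1), zero-centered

x = np.array([-2.0, 0.0, 2.0])
print(relu(x), sigmoid(x), tanh(x))
```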


Day 27 📈 Precision@k, Recall@k, MAP → Precision: % of top-k that are relevant → Recall: % of relevant items found in top-k → MAP: Rewards correct ranking across users All implemented from scratch 🧠 #MLfromScratch #RecommenderSystems #RankingMetrics #DeepLearning
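
Per-user versions of the three metrics (a sketch; MAP is then the mean of AP over users):

```python
def precision_at_k(ranked, relevant, k):
    return len(set(ranked[:k]) & relevant) / k               # top-k that are relevant

def recall_at_k(ranked, relevant, k):
    return len(set(ranked[:k]) & relevant) / len(relevant)   # relevant found in top-k

def average_precision(ranked, relevant):
    hits, total = 0, 0.0
    for i, item in enumerate(ranked, 1):     # rewards relevant items ranked high
        if item in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant)

ranked, relevant = ["a", "b", "c", "d"], {"a", "c"}
print(precision_at_k(ranked, relevant, 2), average_precision(ranked, relevant))
```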


Day 26 🔍 MSE, RMSE, MAE, R² Score → MSE/RMSE: Squared error focus, penalizes large mistakes → MAE: Absolute error, more robust to outliers → R² Score: Explained variance measure #MLfromScratch #ModelEvaluation #RegressionMetrics #DeepLearning
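
The four metrics side by side (a sketch):

```python
import numpy as np

def mse(y, p):
    return np.mean((y - p) ** 2)           # squared error, punishes large mistakes

def rmse(y, p):
    return np.sqrt(mse(y, p))              # same focus, back in the units of y

def mae(y, p):
    return np.mean(np.abs(y - p))          # absolute error, robust to outliers

def r2(y, p):
    ss_res = np.sum((y - p) ** 2)          # unexplained variance
    ss_tot = np.sum((y - y.mean()) ** 2)   # total variance
    return 1 - ss_res / ss_tot
```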


Day 25 ROC, AUC, Log Loss, Confusion Matrix. → ROC/AUC: Threshold-based performance → Log Loss: Penalizes wrong confidence → Confusion Matrix: Metric backbone #MLfromScratch #ModelEvaluation #DeepLearning
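
Two of those from scratch: the confusion-matrix counts everything else builds on, and AUC via its rank formulation (log loss appears in the Day 19 sketch above):

```python
import numpy as np

def confusion(y, p):
    """Binary labels in {0, 1}; returns (tp, fp, fn, tn), the metric backbone."""
    tp = np.sum((y == 1) & (p == 1)); fp = np.sum((y == 0) & (p == 1))
    fn = np.sum((y == 1) & (p == 0)); tn = np.sum((y == 0) & (p == 0))
    return tp, fp, fn, tn

def roc_auc(y, scores):
    """P(random positive outranks random negative); assumes no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = np.sum(y == 1), np.sum(y == 0)
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([0, 0, 1, 1]); scores = np.array([0.1, 0.4, 0.35, 0.8])
print(roc_auc(y, scores))   # 0.75
```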


Day 24 Metrics From Scratch Explored Accuracy, Precision, Recall & F1 Score from first principles. → Understood confusion matrix → When & why to use each metric Solid grounding in model evaluation. #MLfromScratch #DeepLearning #Metrics
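
From the confusion matrix to the four headline metrics (a sketch on a tiny example):

```python
import numpy as np

def scores(y, p):
    tp = np.sum((y == 1) & (p == 1))
    precision = tp / max(np.sum(p == 1), 1)   # of predicted positives, how many are real
    recall = tp / max(np.sum(y == 1), 1)      # of real positives, how many were found
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return np.mean(y == p), precision, recall, f1

y = np.array([1, 0, 1, 1, 0])
p = np.array([1, 1, 1, 0, 0])
print(scores(y, p))   # accuracy 0.6; precision, recall, and F1 all 0.667
```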


Day 23 Sharper Steps Explored advanced optimizers: → RMSprop stabilizes updates → Adam blends momentum + adaptivity → AdamW decouples weight decay → Nadam adds Nesterov flair to Adam Smarter math, smoother descent. #MLfromScratch #Optimizers #DeepLearning
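
One Adam step, with AdamW's decoupled weight decay as an optional argument (a sketch; Nadam additionally applies a Nesterov look-ahead to m_hat):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.0):
    m = b1 * m + (1 - b1) * g                  # momentum: running mean of gradients
    v = b2 * v + (1 - b2) * g ** 2             # adaptivity: running mean of g^2
    m_hat = m / (1 - b1 ** t)                  # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)   # wd > 0 gives AdamW
    return w, m, v
```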


Day 10 of my summer fundamentals series: Uncovering hidden patterns with PCA! Built PCA from scratch, from covariance matrices to eigenvectors and eigenvalues. Reducing dimensions while keeping the essence of data intact. #MLfromScratch #PCA #DimensionalityReduction #DataScience

dataneuron's tweet image.
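
The whole pipeline in one function (a sketch using eigh on the covariance matrix):

```python
import numpy as np

def pca(X, k):
    Xc = X - X.mean(axis=0)                   # center each feature
    cov = np.cov(Xc, rowvar=False)            # covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues ascending for symmetric cov
    top = vecs[:, ::-1][:, :k]                # top-k eigenvectors
    return Xc @ top, vals[::-1][:k]           # projected data + explained variance

rng = np.random.default_rng(0)
Z, explained = pca(rng.normal(size=(100, 5)), 2)   # 5-D -> 2-D
```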

"I learned very early the difference between knowing the name of something and knowing something." ~ Richard Feynman This quote is the entire reason for my #MLfromScratch project. It's easy to import numpy. That's knowing the name. But what is a dot product actually doing?…

mrigesh_thakur's tweet image. "I learned very early the difference between knowing the name of something and knowing something." ~ Richard Feynman 

This quote is the entire reason for my #MLfromScratch project.

It's easy to import numpy. That's knowing the name. But what is a dot product actually doing?…

Day 11 of my summer fundamentals series: Built a basic ANN from scratch — perceptron, forward pass, and gradient descent. Watched it learn to classify data over time. #MLfromScratch #ANN #DeepLearning #DataScience

dataneuron's tweet image. Day 11 of my summer fundamentals series:
Built a basic ANN from scratch — perceptron, forward pass, and gradient descent.
Watched it learn to classify data over time.
#MLfromScratch #ANN #DeepLearning #DataScience

Day 19: Losses in focus: MAE, MSE, Huber, Log Loss. Different goals, different penalties. Know when to use which. #MLfromScratch #LossFunctions

dataneuron's tweet image. Day 19:
Losses in focus:
 MAE, MSE, Huber, Log Loss.
Different goals, different penalties.
Know when to use which.
#MLfromScratch #LossFunctions

Day 3 of my summer fundamentals series: Built a Decision Tree classifier from scratch using only NumPy. No scikit-learn, no prebuilt libraries — just pure logic, entropy, and recursive splits. Learning to let the data decide. #MLfromScratch #DecisionTrees #DataScience

dataneuron's tweet image.
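
The two core pieces, entropy and the greedy split search (the recursion that grows the tree is omitted):

```python
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / len(y)
    return -np.sum(p * np.log2(p))

def best_split(X, y):
    """Scan every feature and threshold; keep the split with the most information gain."""
    best = (None, None, -1.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            gain = entropy(y) - (left.mean() * entropy(y[left])
                                 + (~left).mean() * entropy(y[~left]))
            if gain > best[2]:
                best = (j, t, gain)
    return best   # recurse on each side until the leaves are pure
```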

Day 4 of my summer fundamentals series: Built a Random Forest from scratch with NumPy. No scikit-learn — just bootstrapping, feature randomness, and a forest of logic-driven trees. Learning how "wisdom of the crowd" applies to models too. #MLfromScratch #RandomForest #DataScience

dataneuron's tweet image.
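
The two sources of randomness that turn trees into a forest (prediction is then a majority vote across trees; sketch only):

```python
import numpy as np

def bootstrap_sample(X, y, rng):
    """Sample n rows with replacement, so each tree sees a different dataset."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

def feature_subset(n_features, rng):
    """Each split considers only ~sqrt(d) features, which decorrelates the trees."""
    k = max(1, int(np.sqrt(n_features)))
    return rng.choice(n_features, size=k, replace=False)
```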

Day 9 of my summer fundamentals series: Unsupervised dive into K-Means Clustering! Rebuilt it from scratch—centroid updates, Euclidean distances, and convergence logic. It’s all about grouping similar data and minimizing variance. #MLfromScratch #KMeans #Clustering #DataScience

dataneuron's tweet image.
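
The whole loop in one function (a sketch that assumes no cluster ever goes empty):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]            # init from data points
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)   # Euclidean distances
        labels = d.argmin(axis=1)                                # nearest-centroid assignment
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):                            # convergence check
            break
        centers = new                                            # centroid update
    return labels, centers
```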

Day 5 of my summer fundamentals series: Implemented K-Nearest Neighbors from scratch using NumPy. Just distance metrics, majority voting, and the elegance of lazy learning. Turns out, the best answers really do come from your nearest neighbors. #MLfromScratch #KNN #DataScience

dataneuron's tweet image.
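
Lazy learning really is this short (a sketch with Euclidean distance and majority vote):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    d = np.linalg.norm(X_train - x, axis=1)                 # distance to every training point
    nearest = np.argsort(d)[:k]                             # k closest neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]   # majority vote

X_train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([5.5, 5.0])))  # -> 1
```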

Day 1 of my summer fundamentals series: Implemented Simple and Multiple Linear Regression from scratch using just NumPy. No libraries, no shortcuts — just core math. Let’s see how deep this rabbit hole goes. #MLfromScratch #LinearRegression

dataneuron's tweet image.
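
The closed-form core, via the normal equation (a sketch; gradient descent is the other route):

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares: w = (X^T X)^{-1} X^T y, with a bias column."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)   # solve, don't explicitly invert

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 3 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=50)
print(fit_linear(X, y))   # ~ [3, 2, -1]
```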

Day 6 of my summer fundamentals series: Implemented Naive Bayes from scratch with NumPy. Log probabilities, conditional likelihoods, and the magic of independence assumptions. Sometimes, being naive is a feature, not a bug. #MLfromScratch #NaiveBayes #DataScience

dataneuron's tweet image.
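
A Gaussian flavor of it (a sketch: per-class priors and per-feature likelihoods, combined in log space):

```python
import numpy as np

def nb_fit(X, y):
    """Per class: prior plus per-feature mean and variance (independence assumption)."""
    return {c: (np.mean(y == c), X[y == c].mean(axis=0), X[y == c].var(axis=0) + 1e-9)
            for c in np.unique(y)}

def nb_predict(model, x):
    """Pick the class maximizing log prior + summed per-feature log likelihoods."""
    def log_post(prior, mu, var):
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(model, key=lambda c: log_post(*model[c]))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(nb_predict(nb_fit(X, y), np.array([2.8, 3.1])))   # -> 1
```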

Day 7 of my summer fundamentals series: Implemented Support Vector Machine (SVM) from scratch using NumPy. Margins, hyperplanes, and hinge loss—geometry meets optimization. Turns out, maximum separation can be a beautiful thing. #MLfromScratch #SVM #DataScience

dataneuron's tweet image.
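
The primal, hinge-loss view in a few lines, trained by subgradient descent (a sketch; the kernelized version is another story):

```python
import numpy as np

def linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Maximize the margin by minimizing lam*||w||^2 + mean hinge loss; y in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                    # points inside or past the margin
        grad_w = 2 * lam * w - (y[:, None] * X * viol[:, None]).mean(axis=0)
        grad_b = -(y * viol).mean()
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = linear_svm(X, y)
print(np.mean(np.sign(X @ w + b) == y))   # ~1.0 on this separable toy set
```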
