Machine Learning with Quizzes (@ml_quiz)

Quiz on Machine Learning

Which of the following is an example of a Bayes classifier? NOTATIONS: QDA: Quadratic discriminant analysis; LDA: Linear discriminant analysis. #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience


Bayesian inference can be summarized in 4 major steps: a. Calculate the likelihood b. Calculate/collect the prior c. Calculate the posterior d. Inference. Which one represents the correct order? #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience
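The four steps above can be sketched end to end for a coin-flip example. This is a minimal illustration with a hypothetical discrete grid of candidate coin biases and an assumed uniform prior, not a general inference library:

```python
# Steps: (a) likelihood, (b) prior, (c) posterior, (d) inference (MAP here).

def binomial_likelihood(theta, heads, flips):
    """P(data | theta), up to the constant binomial coefficient,
    which cancels when we normalize."""
    return theta**heads * (1 - theta)**(flips - heads)

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]            # hypothetical candidate biases
prior = {t: 1 / len(thetas) for t in thetas}  # (b) uniform prior belief

heads, flips = 7, 10                          # observed data (assumed)
likelihood = {t: binomial_likelihood(t, heads, flips) for t in thetas}  # (a)

unnormalized = {t: likelihood[t] * prior[t] for t in thetas}
evidence = sum(unnormalized.values())         # P(data), the normalizer
posterior = {t: unnormalized[t] / evidence for t in thetas}             # (c)

best = max(posterior, key=posterior.get)      # (d) inference: MAP estimate
print(best)  # 0.7
```

With 7 heads in 10 flips and a flat prior, the bias closest to the empirical rate wins, as expected.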


Given the data D and the model (parameter) θ, which of the following is called the likelihood? #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience #bayesianlearning #quiz


Given Bayes' theorem, P(parameter|data) = [P(data|parameter) * P(parameter)] / P(data), we can reduce it to: P(parameter|data) ∝ P(data|parameter) * P(parameter). #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience

True 100%
False 0%

7 votes · Final results
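The proportionality claim above can be checked numerically: dropping P(data) and renormalizing afterwards gives the same posterior as the full formula. A minimal two-hypothesis sketch with assumed probabilities for a hypothetical diagnostic test:

```python
# Hypothetical numbers for a diagnostic test (assumptions, not real data).
p_y = 0.01               # P(Y): prior probability of the condition
p_x_given_y = 0.95       # P(X|Y): test positive given the condition
p_x_given_not_y = 0.05   # P(X|not Y): false-positive rate

# Full Bayes: divide by the evidence P(X), summed over both hypotheses.
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)
posterior_full = p_x_given_y * p_y / p_x

# Proportional form: unnormalized scores, then normalize at the end.
score_y = p_x_given_y * p_y
score_not_y = p_x_given_not_y * (1 - p_y)
posterior_prop = score_y / (score_y + score_not_y)

assert abs(posterior_full - posterior_prop) < 1e-12
```

Both routes agree because P(X) is a constant with respect to the hypothesis, so it only rescales the scores without changing their ratio.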


Given Bayes' theorem, P(Y|X) = [P(X|Y) * P(Y)] / P(X), P(Y|X) refers to: "How likely is the occurrence of Y, given that X is already observed?" #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience #data #DataMining #programming

True 93.3%
False 6.7%

15 votes · Final results


Given Bayes' theorem, P(Y|X) = [P(X|Y) * P(Y)] / P(X), P(X|Y) refers to: "How likely is it to observe X if Y were known?" #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience #data #DataMining #programming

True 85.7%
False 14.3%

7 votes · Final results


Given Bayes' theorem, P(Y|X) = [P(X|Y) * P(Y)] / P(X), what does P(Y) refer to? NOTATIONS: BP: beliefs prior; BA: beliefs after. #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience #data #DataMining #programming


Generative machine learning trains a model to learn parameters that maximize the joint probability P(X, Y) of the target variable Y and the features X. #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience #data #DataMining #programming

True 85.7%
False 14.3%

7 votes · Final results
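The joint-probability view above can be sketched with simple counting: estimate P(X, Y) from data, then classify by picking the label with the highest joint probability. The toy (feature, label) pairs below are made up for illustration:

```python
from collections import Counter

# Hypothetical toy dataset of (feature, label) pairs.
data = [("sunny", "play"), ("sunny", "play"), ("rain", "stay"),
        ("rain", "stay"), ("sunny", "stay"), ("rain", "play")]

joint = Counter(data)
n = len(data)
p_joint = {pair: c / n for pair, c in joint.items()}  # empirical P(X, Y)

def predict(x):
    """A generative classifier predicts via argmax_y P(x, y)."""
    return max(("play", "stay"), key=lambda y: p_joint.get((x, y), 0.0))

print(predict("sunny"))  # play
```

Real generative models (e.g. naive Bayes, LDA/QDA) fit parametric forms of P(X, Y) rather than raw counts, but the argmax-over-the-joint idea is the same.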


Discriminative models estimate p(y | x), the probability of a label y given an observation x. #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #AI #ML #DataScience #data #DataMining #tech #bigdata #programming

True 100%
False 0%

6 votes · Final results
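The contrast with the generative post above can be made concrete: a discriminative estimate conditions on x directly, i.e. p(y | x) = P(x, y) / P(x). A counting sketch on the same kind of hypothetical (feature, label) data:

```python
from collections import Counter

# Hypothetical toy dataset of (feature, label) pairs.
data = [("sunny", "play"), ("sunny", "play"), ("rain", "stay"),
        ("rain", "stay"), ("sunny", "stay"), ("rain", "play")]

pair_counts = Counter(data)                 # counts of (x, y)
x_counts = Counter(x for x, _ in data)      # counts of x alone

def p_label_given_x(y, x):
    """Empirical p(y | x) = count(x, y) / count(x)."""
    return pair_counts[(x, y)] / x_counts[x]

print(round(p_label_given_x("play", "sunny"), 3))  # 0.667
```

Discriminative models such as logistic regression fit this conditional directly without ever modeling P(x), which is exactly what distinguishes them from generative models.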


Hierarchical clustering can be divided into two groups: Agglomerative and Divisive. Agglomerative clustering is a top-down approach whereas Divisive clustering is a bottom-up approach. #MachineLearning #DeepLearning #NeuralNetworks #ArtificialIntelligence #DataScience #DataScientist

True 60%
False 40%

5 votes · Final results
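Agglomerative clustering starts from singleton clusters and repeatedly merges the closest pair, which makes its bottom-up nature easy to see in code. A minimal single-linkage sketch on hypothetical 1-D points, stopping at 3 clusters:

```python
# Hypothetical 1-D data with three visible groups.
points = [1.0, 1.2, 5.0, 5.1, 9.0]
clusters = [[p] for p in points]   # bottom-up: every point starts alone

def single_link(a, b):
    """Single-linkage distance: closest pair of points across two clusters."""
    return min(abs(x - y) for x in a for y in b)

while len(clusters) > 3:           # merge the closest pair until 3 clusters remain
    i, j = min(((i, j) for i in range(len(clusters))
                       for j in range(i + 1, len(clusters))),
               key=lambda ij: single_link(clusters[ij[0]], clusters[ij[1]]))
    clusters[i] += clusters.pop(j)

print(sorted(map(sorted, clusters)))  # [[1.0, 1.2], [5.0, 5.1], [9.0]]
```

Divisive clustering runs the other way: it starts with one cluster containing everything and recursively splits it, which is the top-down direction.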




To prevent k-means clustering from getting stuck in a bad local minimum, we should try multiple random initializations. #MachineLearning #DeepLearning #ArtificialIntelligence #DataScience #tech #programming #data

True 88.9%
False 11.1%

9 votes · Final results
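The restart strategy above can be sketched directly: run Lloyd's algorithm from several random starts and keep the run with the lowest inertia (sum of squared distances to the nearest center). Assumed 1-D data and a fixed seed for brevity; library implementations such as scikit-learn's `KMeans` expose the same idea via `n_init`:

```python
import random

# Hypothetical 1-D data with two obvious groups.
data = [1.0, 1.1, 0.9, 8.0, 8.2, 7.8]
k = 2

def kmeans(points, k, rng, iters=20):
    """One run of Lloyd's algorithm from a random initialization."""
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            groups[min(range(k), key=lambda c: abs(p - centers[c]))].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    inertia = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, inertia

rng = random.Random(0)
best_centers, best_inertia = min((kmeans(data, k, rng) for _ in range(10)),
                                 key=lambda r: r[1])
print(sorted(round(c, 2) for c in best_centers))  # [1.0, 8.0]
```

Each restart may land in a different local minimum of the inertia objective; keeping the best of several runs is what makes the final result robust to a bad draw.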

