Machine Learning Online Test


Correct Answer: 2 points | Wrong: -1 point
Grades: A* (100% score) | A (80%-99%) | B (60%-80%) | C (40%-60%) | D (0%-40%)

1. Mean normalization can be used to simplify gradient descent for multivariate linear regression.

2. Which of the following is expressed by the equation Y = β0 + βᵀX + ε, in which a real-valued dependent variable Y is modeled as a function of multiple independent variables X1, X2, …, Xp ≡ X plus noise?

3. Which of the following statements is not true about reducing a real-valued feature to a binary feature?

4. It is possible that, in the output, the set S contains only φ.

5. The target variable is represented along ____________

6. Which of the following statements is not true about the picture shown below?

7. Which of the following objective functions is not solved by the subgradient method?

8. Multivariate linear regression belongs to which category?

9. The spectrum kernel counts the number of substrings in common.

10. Given the training examples (x(1), y(1)) = (1, 1.5), (x(2), y(2)) = (2, 3), (x(3), y(3)) = (3, 4.5) and the hypothesis h(x) = t1·x, where t1 = 1.5, how much error is obtained?
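
For reference, the error of a one-parameter hypothesis of this kind is typically measured as a squared-error cost; below is a minimal Python sketch, assuming the common (1/2m)·Σ(h(x) − y)² convention (the test may instead intend a plain sum of squared errors):

    # Minimal sketch: squared-error cost for h(x) = t1 * x on the three points above.
    # Assumes J(t1) = (1/(2m)) * sum((h(x) - y)^2); other error conventions differ
    # only by a constant factor.
    data = [(1, 1.5), (2, 3.0), (3, 4.5)]
    t1 = 1.5

    def cost(theta, points):
        m = len(points)
        return sum((theta * x - y) ** 2 for x, y in points) / (2 * m)

    print(cost(t1, data))  # 0.0 here, since h(x) = 1.5x fits every point exactly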

11. The goal of a support vector machine is to find the optimal separating hyperplane which minimizes the margin of the training data.

12. Consider the dataset given below, where T and F represent True and False respectively. What is the entropy H(Rain)? (A computation sketch follows the table.)

Temperature  Cloud  Rain
Low          T      T
Low          T      T
Medium       T      F
Medium       T      T
High         T      F
High         F      F
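
As a reminder of how such an entropy is computed, here is a minimal Python sketch for the empirical entropy of the Rain column above, assuming base-2 logarithms (bits):

    # Minimal sketch: empirical entropy H(Rain) for the Rain column above.
    from collections import Counter
    from math import log2

    rain = ["T", "T", "F", "T", "F", "F"]  # Rain column from the table
    counts = Counter(rain)
    n = len(rain)
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    print(entropy)  # 1.0 bit, since T and F each occur 3 times out of 6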

13. What is the entropy at P = 0.5 from the given figure?

14. The optimum separation hyperplane (OSH) is the linear classifier with the minimum margin.

15. Who invented BFGS?

16. Which of the following is represented by the below figure?

17. Support vector machine is a generative classifier.

18. Ax = b, with A = [4 2; 2 3], x = [x1, x2]ᵀ and b = [2, 2]ᵀ. Let the initial guess be x0 = [1, 1]ᵀ. What is the residual vector?
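
For questions of this form, the residual of an initial guess is conventionally r0 = b − A·x0; a minimal sketch under that sign convention:

    # Minimal sketch: residual r0 = b - A @ x0 for the system above.
    # Some texts define the residual with the opposite sign (A @ x0 - b).
    import numpy as np

    A = np.array([[4, 2], [2, 3]])
    b = np.array([2, 2])
    x0 = np.array([1, 1])
    print(b - A @ x0)  # [-4 -3] under this convention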

19. Identify the parametric machine learning algorithm.

20. More sophisticated averaging schemes can improve the convergence speed in the case of strongly convex functions.

21. Ensembles tend to yield better results when there is a significant diversity among the models.

22. When does the hypothesis change in the Find-S algorithm during iteration?

23. Which of the following statements is not a step in Minimum error pruning?

24. When was logistic regression invented?

25. Consider 7 weak learners, of which 4 vote FAKE for a social media account and 3 vote REAL. What will be the final prediction for the account if we use majority voting?
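
Majority voting simply takes the most frequent label among the base-learner predictions; a minimal sketch with the votes described above:

    # Minimal sketch: majority voting over the 7 weak-learner votes described above.
    from collections import Counter

    votes = ["FAKE"] * 4 + ["REAL"] * 3
    print(Counter(votes).most_common(1)[0][0])  # FAKE, since 4 of 7 learners vote FAKE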

26. Setting large values of K in kNN is computationally inexpensive.

27. The output in a logistic regression problem is yes (equivalent to 1 or true). What is its possible value?

28. Which of the following statements is false about the base-learners?

29. What is assumed while using empirical risk minimization with inductive bias?

30. Which of the following ECOC designs uses n = (Nc − 1)·T dichotomizers, where T stands for the number of binary tree structures to be embedded?

31. How is the version space represented?

32. Which of the following statements is not true about the C parameter in SVM?

33. What is the advantage of the list-then-eliminate algorithm?

34. A learner can be deemed consistent if it produces a hypothesis that perfectly fits the __________

35. Ax = b, with A = [3 2; 2 3], x = [x1, x2]ᵀ and b = [8, 6]ᵀ. Let the initial guess be x0 = [2, 1]ᵀ. What is the residual vector?

36. Which of the following statements is not true about the subgradient method?

37. Error is defined over the _____________

38. The subgradient method is a descent method.

39. In classification trees, the value obtained by a terminal node in the training data is the mode of the observations falling in that region.

40. Error strongly depends on distribution D.

41. Stochastic gradient descent has the possibility of escaping from local minima.

42. In error-correcting output codes (ECOC), the main classification task is defined in terms of a number of subtasks that are implemented by the base-learners.

43. Assume that we are training a boosting classifier using decision stumps on the given dataset. Then which of the given examples will have their weights increased at the end of the first iteration?

44. From the table below, where the target is to predict whether to play or not (Yes or No) based on the weather conditions, what is the Gini index for Climate = Sunny? (A computation sketch follows the table.)

Day  Climate  Temperature  Wind    Decision
1    Sunny    Cool         Strong  Yes
2    Sunny    Hot          Weak    No
3    Rainy    Medium       Weak    Yes
4    Winter   Cool         Weak    Yes
5    Rainy    Cool         Strong  No
6    Winter   Cool         Strong  No
7    Sunny    Hot          Strong  No
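
As a reminder, the Gini index of a subset is 1 − Σ pk², where pk is the fraction of each class in the subset; a minimal sketch for the Climate = Sunny rows above:

    # Minimal sketch: Gini index for the Climate = Sunny subset of the table above,
    # using Gini = 1 - sum(p_k^2) over the class proportions of Decision.
    from collections import Counter

    sunny_decisions = ["Yes", "No", "No"]  # Decision for days 1, 2 and 7 (Climate = Sunny)
    counts = Counter(sunny_decisions)
    n = len(sunny_decisions)
    gini = 1 - sum((c / n) ** 2 for c in counts.values())
    print(round(gini, 3))  # 0.444 for one Yes and two No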

45. Who invented logistic regression?

46. Which of the following is an example of stacking?

47. Which of the following statements is not true about Lagrange multipliers?

48. In a linear regression problem, h(x) is the predicted value of the target variable, y is the actual value of the target variable, m is the number of training examples. What do we try to minimize?

49. Which of the following statements is not true about regression trees?

50. Given the entropy for a split, Esplit = 0.39 and the entropy before the split, Ebefore = 1. What is the Information Gain for the split?
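
Information gain is defined as the entropy before the split minus the entropy of the split; a minimal sketch with the values given:

    # Minimal sketch: information gain = entropy before the split - entropy of the split.
    E_before = 1.0
    E_split = 0.39
    print(round(E_before - E_split, 2))  # 0.61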

Machine Learning Certification Test

The Machine Learning Certification Test is a free certification exam. However, you need to score an A grade in each of the "Certification Level Tests 1 to 10" to be eligible to take this certification test. So, take all 10 tests, from Certification Level 1 up to Level 10, before taking the final Certification Test.


Level 1 to 10 Tests:
Total Questions: 25, Total Time: 30 min, Correct Answer: 2 points, Wrong Answer: -1 point

Certification Test:
Total Questions: 50, Total Time: 1 hour, Correct Answer: 2 points, Wrong Answer: -1 point

Machine Learning Internship Test

If you score either Grade A* or Grade A in our Machine Learning Internship Test, then you can apply for an internship in Machine Learning at Sanfoundry.


Total Questions: 50, Total Time: 1 hour, Correct Answer: 2 points, Wrong Answer: -1 point

Machine Learning Job Test

It is designed to test and improve your skills for a successful career, as well as to help you apply for jobs.


Total Questions: 50, Total Time: 1 hour, Correct Answer: 2 points, Wrong Answer: -1 point

Note: Before you get started on this series of online tests, you should practice our collection of 1000 MCQs on Machine Learning.

Sanfoundry Scoring & Grading System

Sanfoundry tests and quizzes are designed to provide a real-time online exam experience. Here is what you need to know about them.

  • Scoring System: You get 2 points for each correct answer but lose 1 point for every wrong answer.
  • Grading System: Your grade depends on your final score and can be one of the following:

    • Grade A* - Genius (100%)
    • Grade A - Excellent (80% to 99%)
    • Grade B - Good (60% to 80%)
    • Grade C - Average (40% to 60%)
    • Grade D - Poor (0% to 40%)