MLT Unit 2 Part 3 Regression & Bayesian Learning

Que2.23. Write a short note on hyperplane (decision surface).
Answer
1. A hyperplane in an n-dimensional Euclidean space is a flat, (n-1)-dimensional subset of that space that divides the space into two disconnected parts.
2. For example, consider a line as a one-dimensional Euclidean space.
3. Now pick a point on the line; this point divides the line into two parts.
4. The line has 1 dimension, while the point has 0 dimensions. So a point is a hyperplane of the line.
5. For two dimensions we saw that the separating line was the hyperplane.
6. Similarly, for three dimensions a plane with two dimensions divides the 3-D space into two parts and therefore acts as a hyperplane.
7. Thus, for a space of n dimensions we have a hyperplane of n-1 dimensions separating it into two parts.
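In equation form, a hyperplane is the set of points x satisfying w.x + b = 0, and the sign of w.x + b tells which of the two parts a given point falls in. The short Python sketch below illustrates this; the weight vector w and bias b are made-up values chosen only for illustration:

import numpy as np

# Hypothetical hyperplane in 2-D: all x with w . x + b = 0
w = np.array([1.0, -2.0])   # normal vector of the hyperplane
b = 0.5                     # offset

points = np.array([[3.0, 1.0], [0.0, 2.0], [-1.0, -1.0]])

# The sign of w . x + b says which of the two half-spaces each point lies in
sides = np.sign(points @ w + b)
print(sides)   # [ 1. -1.  1.]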
Que2.24. What are the advantages and disadvantages of SVM?
Answer
Advantages of SVM are:
1. Guaranteed optimality: Owing to the nature of convex optimization, the solution will always be a global minimum, not a local minimum.
2. Abundance of implementations: We can access them conveniently.
3. SVM can be used for linearly separable as well as non-linearly separable data. Linearly separable data uses a hard margin whereas non-linearly separable data requires a soft margin.
4. SVMs lend themselves to semi-supervised learning models. They can be used where the data is labelled as well as unlabelled. This only requires an additional condition on the minimization problem, which is known as the transductive SVM.
5. Feature mapping used to place quite a load on the computational complexity of the overall training of the model. However, with the help of the kernel trick, SVM can carry out the feature mapping using a simple dot product.
Disadvantages of SVM:
1. SVM does not give the best performance for handling text structures compared to other algorithms used for text data. This leads to loss of sequential information and thereby to worse performance.
2. SVM cannot return a probabilistic confidence value analogous to logistic regression. This provides little explanation, yet the confidence of a prediction is important in several applications.
3. The choice of kernel is perhaps the biggest limitation of the support vector machine. Considering how many kernels exist, it becomes difficult to choose the right one for the data.
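As a rough illustration of the kernel trick, and of the kernel, gamma and C parameters described in Que2.26 below, here is a minimal scikit-learn sketch; the toy XOR-style data and the parameter values are made up for illustration only:

from sklearn.svm import SVC

# Toy data that is not linearly separable (XOR-like pattern)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# RBF is the default kernel; gamma controls how far one training example's
# influence reaches, C controls regularization (large C = low regularization)
clf = SVC(kernel='rbf', gamma=2.0, C=1.0)
clf.fit(X, y)

print(clf.predict([[0.9, 0.1]]))   # expected to be class 1, nearest to the (1, 0) point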
Que2.25. Explain the properties of SVM.
Answer
Following are the properties of SVM:
1. Flexibility in choosing a similarity (kernel) function.
2. Sparseness of the solution when dealing with large data sets: only the support vectors are used to specify the separating hyperplane.
3. Ability to handle large feature spaces: the complexity does not depend on the dimensionality of the feature space.
4. Overfitting can be controlled by the soft margin approach.
5. It is a simple convex optimization problem which is guaranteed to converge to a single global solution.
Que2.26. What are the parameters used in a support vector classifier?
Answer
Parameters used in a support vector classifier are:
1. Kernel:
a. The kernel is selected based on the type of data and the type of transformation.
b. By default, the kernel is the Radial Basis Function (RBF) kernel.
2. Gamma:
a. This parameter decides how far the influence of a single training example reaches during transformation, which in turn affects how tightly the decision boundaries end up surrounding points in the input space.
b. If the value of gamma is small, points farther apart are considered similar. So more points are grouped together and the decision boundaries are smoother (may be less accurate). Larger values of gamma cause points to be considered close together (may cause overfitting).
3. The 'C' parameter:
a. This parameter controls the amount of regularization applied to the data. Large values of C mean low regularization, which causes the training data to be fit very well (may cause overfitting). Lower values of C mean higher regularization, which causes the model to be more tolerant of errors (may lead to lower accuracy).
Que3.1. Describe the basic terminology used in decision trees.
Answer
Basic terminology used in decision trees:
1. Root node: It represents the entire population or sample, and this further gets divided into two or more homogeneous sets.
2. Splitting: It is the process of dividing a node into two or more sub-nodes.
3. Decision node: When a sub-node splits into further sub-nodes, it is called a decision node.
4. Leaf/terminal node: Nodes that do not split are called leaf or terminal nodes.
5. Pruning: When we remove sub-nodes of a decision node, the process is called pruning. This process is the opposite of splitting.
6. Branch/sub-tree: A subsection of the entire tree is called a branch or sub-tree.
7. Parent and child node: A node which is divided into sub-nodes is called the parent node of those sub-nodes, whereas the sub-nodes are the children of the parent node.
Que3.2. Why do we use decision trees?
Answer
1. Decision trees can be visualized, and are simple to understand and interpret.
2. They require little data preparation, whereas other techniques often require data normalization, the creation of dummy variables and the removal of blank values.
3. The cost of using the tree (for predicting data) is logarithmic in the number of data points used to train the tree.
4. Decision trees can handle both categorical and numerical data, whereas other techniques are specialized for only one type of variable.
5. Decision trees can handle multi-output problems.
6. A decision tree is a white box model, i.e., the explanation for the condition can be expressed easily in Boolean logic because there are two outputs, for example yes or no.
7. Decision trees can be used even if assumptions are violated by the dataset from which the data is taken.
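As a minimal illustration of these points, here is a small scikit-learn sketch; the tiny play-tennis style dataset is invented, and the categorical attributes are encoded as integers because scikit-learn's trees expect numeric input:

from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: [Outlook, Windy]; Outlook: 0 = Sunny, 1 = Overcast, 2 = Rain; Windy: 0 = weak, 1 = strong
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1]]
y = [1, 1, 1, 1, 1, 0]          # 1 = play, 0 = do not play

tree = DecisionTreeClassifier(max_depth=2)
tree.fit(X, y)

# White box model: the learned rules can be printed and read directly
print(export_text(tree, feature_names=['Outlook', 'Windy']))
print(tree.predict([[2, 1]]))   # expected: [0]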
Que3.3. How can we express decision trees?
Answer
1. Decision trees classify instances by sorting them down the tree from the root to a leaf node, which provides the classification of the instance.
2. An instance is classified by starting at the root node of the tree, testing the attribute specified by this node, then moving down the tree branch corresponding to the value of the attribute, as shown in Fig. 3.3.1.
3. This process is then repeated for the subtree rooted at the new node.
4. The decision tree in Fig. 3.3.1 classifies a particular morning according to whether it is suitable for playing tennis, returning the classification associated with the particular leaf.
5. For example, the instance (Outlook = Rain, Temperature = Hot, Humidity = High, Wind = Strong) would be sorted down the left-most branch of this decision tree and would thus be classified as a negative instance.
6. In other words, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. For the tree in Fig. 3.3.1, the positive instances are described by:
(Outlook = Sunny AND Humidity = Normal) OR (Outlook = Overcast) OR (Outlook = Rain AND Wind = Weak)
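The tree of Fig. 3.3.1 can also be written directly as nested conditionals. The sketch below assumes the standard play-tennis tree described above, with attribute values passed as plain strings:

def play_tennis(outlook, humidity, wind):
    # Each root-to-leaf path is one conjunction; the 'Yes' leaves together
    # form the disjunction given in point 6 above
    if outlook == 'Sunny':
        return 'Yes' if humidity == 'Normal' else 'No'
    if outlook == 'Overcast':
        return 'Yes'
    if outlook == 'Rain':
        return 'Yes' if wind == 'Weak' else 'No'
    return 'No'

print(play_tennis('Rain', 'High', 'Strong'))   # 'No', matching the negative instance in point 5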
