Que3.15. What are the performance dimensions used for instance-based learning algorithms?

Answer
Performance dimensions used for instance-based learning (IBL) algorithms are:
1. Generality:
   a. This is the class of concepts that the algorithm's representation can describe. IBL algorithms can PAC-learn any concept whose boundary is a union of a finite number of closed hyper-curves of finite size.
2. Accuracy:
   a. This is the accuracy of classification.
3. Learning rate:
   a. This is the speed at which classification accuracy increases during training. It is a more useful indicator of the performance of the learning algorithm than accuracy for finite-sized training sets.
4. Incorporation costs:
   a. These are incurred while updating the concept descriptions with a single training instance.
   b. They include classification costs.
5. Storage requirement:
   a. This is the size of the concept description for IBL algorithms, which is defined as the number of saved instances used for classification decisions.

Que3.16. What are the functions of instance-based learning?

Answer
Functions of instance-based learning are:
1. Similarity function:
   a. This computes the similarity between a training instance i and the instances in the concept description. Similarities are numeric-valued.
2. Classification function:
   a. This receives the similarity function's results and the classification performance records of the instances in the concept description.
   b. It yields a classification for i.
3. Concept description updater:
   a. This maintains records on classification performance and decides which instances to include in the concept description. Inputs include i, the similarity results, the classification results, and a current concept description. It yields the modified concept description.

Que3.17. What are the advantages and disadvantages of instance-based learning?

Answer
Advantages of instance-based learning:
1. Learning is trivial.
2. Works efficiently.
3. Noise resistant.
4. Rich representation, arbitrary decision surfaces.
5. Easy to understand.
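The three functions of Que3.16 (similarity, classification, concept description updater) can be sketched in Python. This is a minimal illustration, not code from the text: the negated-Euclidean similarity, the parameter k, and the update policy of storing an instance only when it is misclassified (an IB2-style choice) are our assumptions.

```python
import math
from collections import Counter

def similarity(i, concept_description):
    """Similarity function: numeric-valued similarity between instance i
    and each stored instance (here, negated Euclidean distance)."""
    return [-math.dist(i, x) for x, _ in concept_description]

def classify(i, concept_description, k=1):
    """Classification function: combines the similarity results with the
    stored instances' labels and yields a classification for i."""
    sims = similarity(i, concept_description)
    labels = [y for _, y in concept_description]
    ranked = sorted(zip(sims, labels), key=lambda t: t[0], reverse=True)
    return Counter(y for _, y in ranked[:k]).most_common(1)[0][0]

def update(i, label, concept_description):
    """Concept description updater: here, store i only when the current
    description misclassifies it (one possible inclusion policy)."""
    if not concept_description or classify(i, concept_description) != label:
        concept_description.append((i, label))
    return concept_description
```

With this policy the concept description grows only on classification errors, which is one way the updater can limit the storage requirement discussed in Que3.15.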
Disadvantages of instance-based learning:
1. Needs lots of data.
2. Computational cost is high.
3. Restricted to x ∈ R^n.
4. Implicit weighting of attributes (needs normalization).
5. Needs a large space for storage, i.e., requires large memory.
6. Costly query (classification) time.

Que3.18. Describe the K-Nearest Neighbour algorithm with steps.

Answer
1. The KNN classification algorithm is used to decide which class a new instance should belong to.
2. When K = 1, we have the nearest neighbour algorithm.
3. KNN classification is incremental.
4. KNN classification does not have a training phase; all instances are stored. Training uses indexing to find neighbours quickly.
5. During testing, the KNN classification algorithm has to find the K nearest neighbours of a new instance. This is time consuming if we do an exhaustive comparison.
6. K nearest neighbours use the local neighbourhood to obtain a prediction.

Algorithm:
Let m be the number of training data samples. Let p be an unknown point.
1. Store the training samples in an array of data points arr. Each element of this array represents a tuple (x, y).
2. For i = 0 to m − 1: calculate the Euclidean distance d(arr[i], p).
3. Make a set S of the K lowest distances obtained. Each of these distances corresponds to an already classified data point.
4. Return the majority label among S.

Que3.19. What are the advantages and disadvantages of the K-nearest neighbour algorithm?

Answer
Advantages of the KNN algorithm:
1. No training period:
   a. KNN is called a lazy learner (instance-based learning).
   b. It does not learn anything in the training period. It does not derive any discriminative function from the training data.
   c. In other words, there is no training period for it. It stores the training dataset and learns from it only at the time of making real-time predictions.
   d. This makes the KNN algorithm much faster than other algorithms that require training, for example, SVM, linear regression, etc.
2.
Since the KNN algorithm requires no training before making predictions, new data can be added seamlessly without impacting the accuracy of the algorithm.
3. KNN is very easy to implement. There are only two parameters needed to implement KNN, i.e., the value of K and the distance function (for example, Euclidean).

Disadvantages of KNN:
1. Does not work well with large datasets: In large datasets, the cost of calculating the distance between the new point and each existing point is huge, which degrades the performance of the algorithm.
2. Does not work well with high dimensions: The KNN algorithm does not work well with high-dimensional data because, with a large number of dimensions, it becomes difficult for the algorithm to calculate the distance in each dimension.
3. Needs feature scaling: We need to do feature scaling (standardization and normalization) before applying the KNN algorithm to any dataset. If we do not do so, KNN may produce wrong predictions.
4. Sensitive to noisy data, missing values and outliers: KNN is sensitive to noise in the dataset. We need to manually impute missing values and remove outliers.

Que3.20. Explain locally weighted regression.

Answer
1. Model-based methods, such as neural networks and the mixture of Gaussians, use the data to build a parameterized model.
2. After training, the model is used for predictions and the data are generally discarded.
3. In contrast, memory-based methods are non-parametric approaches that explicitly retain the training data, and use it each time a prediction needs to be made.
4. Locally Weighted Regression (LWR) is a memory-based method that performs a regression around a point using only training data that are local to that point.
5. LWR was shown to be suitable for real-time control by constructing an LWR-based system that learned a difficult juggling task.
6.
The LOESS (Locally Estimated Scatterplot Smoothing) model performs a linear regression on points in the data set, weighted by a kernel centered at x.
7. The kernel shape is a design parameter; the original LOESS model uses a tricubic kernel, but a Gaussian kernel h_i(x) = h(x − x_i) = exp(−k(x − x_i)²) is commonly used instead, where k is a smoothing parameter.
8. For brevity, we drop the argument x in h_i(x) and define n = Σ_i h_i. The weighted estimates of the means and covariances are then:
   x̄ = (Σ_i h_i x_i)/n,  ȳ = (Σ_i h_i y_i)/n,
   σ_xx = (Σ_i h_i (x_i − x̄)²)/n,  σ_xy = (Σ_i h_i (x_i − x̄)(y_i − ȳ))/n.
9. We use the data covariances to express the conditional expectation and its estimated variance:
   ŷ(x) = ȳ + (σ_xy/σ_xx)(x − x̄).

Que3.21. Explain Radial Basis Function (RBF).

Answer
1. A Radial Basis Function (RBF) is a function that assigns a real value to each input from its domain (it is a real-valued function), and the value produced by the RBF is always an absolute value, i.e., it is a measure of distance and cannot be negative.
Machine Learning Techniques 3–19 L (CS/IT-Sem-5)
2. The Euclidean distance (the straight-line distance) between two points in Euclidean space is used.
3. Radial basis functions are used to approximate functions, similar to how neural networks act as function approximators.
4. The following sum represents a radial basis function network:
   y(x) = Σ_{i=1..N} w_i φ(||x − x_i||)
5. The radial basis functions act as activation functions.
6. The approximant y(x) is differentiable with respect to the weights w_i, which are learned using iterative update methods common among neural networks.

Que3.22. Explain the architecture of a radial basis function network.

Answer
1. Radial Basis Function (RBF) networks have three layers: an input layer, a hidden layer with a non-linear RBF activation function, and a linear output layer.
2. The input can be modeled as a vector of real numbers x ∈ R^n.
3. The output of the network is then a scalar function of the input vector, φ : R^n → R, given by
   φ(x) = Σ_{i=1..N} a_i ρ(||x − c_i||),
   where c_i is the center vector for neuron i and a_i is the weight of neuron i in the linear output neuron.
4.
Functions that depend only on the distance from a center vector are radially symmetric about that vector.
5. In the basic form, all inputs are connected to each hidden neuron.
6. The radial basis function is commonly taken to be Gaussian:
   ρ(||x − c_i||) = exp(−β||x − c_i||²)
7. The Gaussian basis functions are local to the center vector in the sense that
   lim_{||x||→∞} ρ(||x − c_i||) = 0,
   i.e., changing the parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.
8. Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of R^n.
9. This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.
10. The parameters a_i, c_i, and β are determined in a manner that optimizes the fit between φ and the data.
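The RBF network of Que3.21 and Que3.22 can be sketched as follows. This is a minimal sketch, not code from the text: fixing the centers c_i at the training points and fitting the output weights a_i by linear least squares is one common training scheme, and the function names and the value of beta are our assumptions.

```python
import numpy as np

def rbf_design(X, centers, beta):
    """Gaussian basis: rho(||x - c_i||) = exp(-beta * ||x - c_i||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-beta * d2)

def fit_rbf(X, y, beta=1.0):
    """Fix the centers at the training points and solve for the linear
    output weights a_i in phi(x) = sum_i a_i rho(||x - c_i||)."""
    Phi = rbf_design(X, X, beta)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return X, a

def predict_rbf(x, centers, a, beta=1.0):
    """Evaluate the network output phi(x) = sum_i a_i rho(||x - c_i||)."""
    return rbf_design(np.atleast_2d(x), centers, beta) @ a
```

Because the Gaussian kernel matrix built from distinct training points is invertible, this scheme interpolates the training data; other schemes choose fewer centers (e.g., by clustering) and also tune beta.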
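The LWR prediction of Que3.20 can be sketched in one dimension. A minimal sketch assuming the Gaussian kernel h_i = exp(−k(x − x_i)²) and the weighted-statistics prediction ŷ(x) = ȳ + (σ_xy/σ_xx)(x − x̄); the function name and the value of k are ours.

```python
import numpy as np

def lwr_predict(x, xs, ys, k=4.0):
    """One-dimensional locally weighted regression around query point x:
    Gaussian kernel weights, weighted means/covariances, then the
    conditional-expectation line evaluated at x."""
    h = np.exp(-k * (x - xs) ** 2)          # kernel weights h_i
    n = h.sum()                             # n = sum_i h_i
    x_bar = (h * xs).sum() / n              # weighted mean of x
    y_bar = (h * ys).sum() / n              # weighted mean of y
    sigma_xx = (h * (xs - x_bar) ** 2).sum() / n
    sigma_xy = (h * (xs - x_bar) * (ys - y_bar)).sum() / n
    return y_bar + (sigma_xy / sigma_xx) * (x - x_bar)
```

For exactly linear data the weighted fit reproduces the line for any kernel width; the smoothing parameter k matters only when the data are locally non-linear.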
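The KNN steps of Que3.18 can be sketched compactly as well. A minimal sketch; the function name `knn_classify` and the example data are ours.

```python
import math
from collections import Counter

def knn_classify(p, points, k=3):
    """points: list of (x, y) tuples, where x is a feature tuple and y a label.
    Steps: compute Euclidean distances, keep the K smallest,
    return the majority label among them."""
    dists = sorted(((math.dist(x, p), y) for x, y in points),
                   key=lambda t: t[0])      # step 2: distances d(arr[i], p)
    s = dists[:k]                           # step 3: K lowest distances
    return Counter(y for _, y in s).most_common(1)[0][0]  # step 4: majority
```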