MLT Unit 1 Part 4 Introduction

Que 1.17. What are the various clustering techniques?
Answer
1. Clustering techniques are used for combining observed examples into clusters or groups which satisfy two main criteria:
a. Each group or cluster is homogeneous, i.e., examples that belong to the same group are similar to each other.
b. Each group or cluster should be different from other clusters, i.e., examples that belong to one cluster should be different from the examples of the other clusters.
2. Depending on the clustering technique, clusters can be expressed in different ways:
a. Clusters may be exclusive, so that any example belongs to only one cluster.
b. They may be overlapping, i.e., an example may belong to several clusters.
c. They may be probabilistic, i.e., an example belongs to each cluster with a certain probability.
d. Clusters may have a hierarchical structure.
3. Once a criterion function has been selected, clustering becomes a well-defined problem in discrete optimization: we find those partitions of the set of samples that extremize the criterion function.
4. Since the sample set is finite, there are only a finite number of possible partitions, so the clustering problem can always, in principle, be solved by exhaustive enumeration.
The main clustering approaches are:
1. Hierarchical clustering:
a. This method works by grouping data objects into a tree of clusters.
b. It can be further classified depending on whether the hierarchical decomposition is formed in a bottom-up (merging) or top-down (splitting) fashion. Following are the two types of hierarchical clustering:
i. Agglomerative hierarchical clustering: this bottom-up strategy starts by placing each object in its own cluster and then merges these atomic clusters into larger and larger clusters, until all of the objects are in a single cluster.
ii. Divisive hierarchical clustering: this top-down strategy does the reverse of the agglomerative strategy by starting with all objects in one cluster. It subdivides the cluster into smaller and smaller pieces until each object forms a cluster on its own.
2. Partitional clustering:
a. This method first creates an initial set of k partitions, where each partition represents a cluster.
b. The clusters are formed to optimize an objective partitioning criterion, such as a dissimilarity function based on distance, so that the objects within a cluster are similar whereas the objects of different clusters are different. Following are the types of partitioning methods:
i. Centroid-based clustering: it takes the input parameter k and partitions a set of objects into k clusters so that the resulting intracluster similarity is high but the intercluster similarity is low. Cluster similarity is measured in terms of the mean value of the objects in the cluster, which can be viewed as the cluster's centroid or center of gravity (see the sketch after this answer).
ii. Model-based clustering: this method hypothesizes a model for each of the clusters and finds the best fit of the data to that model.
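To make centroid-based clustering concrete, here is a minimal k-means style sketch in Python. It is not from the original text; the function name, the use of NumPy, and the parameter defaults are our own assumptions.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal k-means sketch: X is an (n, d) array, k the number of clusters."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins the cluster of its nearest centroid,
        # so intracluster similarity is high and intercluster similarity is low.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean (center of gravity) of its cluster.
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # Converged: the centroids stopped moving.
        centroids = new_centroids
    return labels, centroids
```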
Que 1.18. Describe reinforcement learning.
Answer
1. Reinforcement learning is the study of how animals and artificial systems can learn to optimize their behaviour in the face of rewards and punishments.
2. Reinforcement learning algorithms are related to methods of dynamic programming, which is a general approach to optimal control.
3. Reinforcement learning phenomena have been observed in psychological studies of animal behaviour, and in neurobiological investigations of neuromodulation and addiction.
4. The task of reinforcement learning is to use observed rewards to learn an optimal policy for the environment. An optimal policy is a policy that maximizes the expected total reward.
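As an illustration of learning a policy from observed rewards, here is a minimal tabular Q-learning sketch in Python. The environment interface (env.reset(), env.step(), env.actions) and all parameter values are our own assumptions, not from the original text.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning sketch; assumes env.reset() -> state and
    env.step(action) -> (next_state, reward, done), with env.actions a list."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated long-run reward
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Temporal-difference update toward reward plus discounted future value.
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    # The learned (greedy) policy maps each state to its highest-value action.
    return lambda s: max(env.actions, key=lambda a: Q[(s, a)])
```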
Que 1.19. Explain decision tree in detail.
Answer
1. A decision tree is a flowchart-like structure in which each internal node represents a test on a feature, each leaf node represents a class label, and branches represent conjunctions of features that lead to those class labels.
2. The paths from root to leaf represent classification rules.
3. A decision tree is a predictive modelling approach used in statistics, data mining, and machine learning.
4. Decision trees are constructed via an algorithmic approach that identifies ways to split a data set based on different conditions.
5. Decision trees are a non-parametric supervised learning method used for both classification and regression tasks.
6. Classification trees are tree models where the target variable can take a discrete set of values.
7. Regression trees are decision trees where the target variable can take a continuous set of values.
Que 1.20. What are the steps used for making a decision tree?
Answer
Steps used for making a decision tree are:
1. Get the list of rows (the dataset) which are taken into consideration for making the decision tree (recursively at each node).
2. Calculate the uncertainty of our dataset, i.e., the Gini impurity, or how mixed up our data is, etc.
3. Generate the list of all questions which need to be asked at that node.
4. Partition the rows into true rows and false rows based on each question asked.
5. Calculate the information gain based on the Gini impurity and the partition of data from the previous step (a sketch of steps 2-5 appears after Que 1.21).
6. Update the highest information gain based on each question asked.
7. Update the best question based on information gain (higher information gain).
8. Divide the node on the best question. Repeat again from step 1 until we get a pure node (leaf nodes).
Que 1.21. What are the advantages and disadvantages of the decision tree method?
Answer
Advantages of the decision tree method are:
1. Decision trees are able to generate understandable rules.
2. Decision trees perform classification without requiring much computation.
3. Decision trees are able to handle both continuous and categorical variables.
4. Decision trees give a clear indication of which fields are important for prediction or classification.
Disadvantages of the decision tree method are:
1. Decision trees are less appropriate for estimation tasks where the goal is to predict the value of a continuous attribute.
2. Decision trees are prone to errors in classification problems with many classes and a relatively small number of training examples.
3. Decision trees are computationally expensive to train. At each node, each candidate splitting field must be sorted before its best split can be found.
4. In some decision tree algorithms, combinations of fields are used and a search must be made for optimal combining weights. Pruning algorithms can also be expensive, since many candidate sub-trees must be formed and compared.
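To make steps 2-5 of Que 1.20 concrete, here is a minimal Python sketch of computing Gini impurity and the information gain of a candidate split. The function names, data layout, and the toy dataset are our own assumptions.

```python
from collections import Counter

def gini(rows):
    """Gini impurity of a list of rows whose last element is the class label."""
    n = len(rows)
    if n == 0:
        return 0.0
    counts = Counter(row[-1] for row in rows)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def partition(rows, column, value):
    """Split rows into true/false branches on the question 'row[column] >= value'."""
    true_rows = [r for r in rows if r[column] >= value]
    false_rows = [r for r in rows if r[column] < value]
    return true_rows, false_rows

def info_gain(rows, true_rows, false_rows):
    """Impurity of the parent node minus the weighted impurity of the two branches."""
    p = len(true_rows) / len(rows)
    return gini(rows) - p * gini(true_rows) - (1 - p) * gini(false_rows)

# Toy dataset of (feature, label) rows.
rows = [(2.7, 'A'), (1.3, 'B'), (3.1, 'A'), (0.8, 'B')]
t, f = partition(rows, 0, 2.0)
print(info_gain(rows, t, f))  # 0.5: this split separates the classes perfectly
```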
Que 1.22. Write a short note on Bayesian belief networks.
Answer
1. Bayesian belief networks specify joint conditional probability distributions.
2. They are also known as belief networks, Bayesian networks, or probabilistic networks.
3. A belief network allows class conditional independencies to be defined between subsets of variables.
4. It provides a graphical model of causal relationships, on which learning can be performed.
5. We can use a trained Bayesian network for classification.
6. There are two components that define a Bayesian belief network:
a. Directed acyclic graph:
i. Each node in the directed acyclic graph represents a random variable.
ii. These variables may be discrete or continuous valued.
iii. These variables may correspond to actual attributes given in the data.
iv. An arc in the graph represents causal knowledge. For example, in a graph over six Boolean variables, lung cancer is influenced by a person's family history of lung cancer, as well as by whether or not the person is a smoker.
v. It is worth noting that the variable PositiveXray is independent of whether the patient has a family history of lung cancer or is a smoker, given that we know the patient has lung cancer.
b. Conditional probability table: the conditional probability table for the values of the variable LungCancer (LC) gives each possible combination of the values of its parent nodes, FamilyHistory (FH) and Smoker (S).
Que 1.23. Write a short note on support vector machines.
Answer
1. A Support Vector Machine (SVM) is a machine learning algorithm that analyzes data for classification and regression analysis.
2. SVM is a supervised learning method that looks at data and sorts it into one of two categories.
3. An SVM outputs a map of the sorted data with the margins between the two categories as far apart as possible.
4. Applications of SVM:
i. Text and hypertext classification
ii. Image classification
iii. Recognizing handwritten characters
iv. Biological sciences, including protein classification
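As a usage illustration, here is a minimal SVM classification sketch with scikit-learn, assuming that library is available; the toy data and expected outputs are our own, not from the original text.

```python
from sklearn.svm import SVC

# Toy two-class data: points in the plane with labels 0 and 1.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [0, 0, 1, 1]

# A linear-kernel SVM fits the separating boundary with the widest margin.
clf = SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[0.1, 0.0], [1.0, 0.9]]))  # expected: [0 1]
print(clf.support_vectors_)  # the margin is determined by these support vectors
```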
Que 1.24. Explain genetic algorithm with flow chart.
Answer
Genetic algorithm (GA):
1. The genetic algorithm is a method for solving both constrained and unconstrained optimization problems that is based on natural selection.
2. The genetic algorithm repeatedly modifies a population of individual solutions.
3. At each step, the genetic algorithm selects individuals at random from the current population to be parents and uses them to produce the children for the next generation.
4. Over successive generations, the population evolves toward an optimal solution.
Flow chart: the genetic algorithm uses three main types of rules at each step to create the next generation from the current population:
a. Selection rules: selection rules select the individuals, called parents, that contribute to the population at the next generation.
b. Crossover rules: crossover rules combine two parents to form children for the next generation.
c. Mutation rules: mutation rules apply random changes to individual parents to form children.
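To illustrate the selection/crossover/mutation loop, here is a minimal genetic algorithm sketch in Python that maximizes the number of 1-bits in a bit string; the fitness function, population size, and rates are our own toy choices.

```python
import random

def genetic_algorithm(bits=20, pop_size=30, generations=100, mutation_rate=0.02):
    """Toy GA: evolve bit strings toward all ones (fitness = number of 1-bits)."""
    fitness = lambda ind: sum(ind)
    population = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_population = []
        for _ in range(pop_size):
            # Selection rule: pick each parent as the fitter of two random individuals.
            p1 = max(random.sample(population, 2), key=fitness)
            p2 = max(random.sample(population, 2), key=fitness)
            # Crossover rule: splice the two parents at a random cut point.
            cut = random.randrange(1, bits)
            child = p1[:cut] + p2[cut:]
            # Mutation rule: flip each bit with a small probability.
            child = [b ^ 1 if random.random() < mutation_rate else b for b in child]
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

best = genetic_algorithm()
print(sum(best), "of", len(best), "bits set")  # typically close to 20
```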