Essentials Of Artificial Neural Networks Training Ppt

PowerPoint presentation slides

Presenting Essentials of Artificial Neural Networks. These slides are 100 percent made in PowerPoint and are compatible with all screen types and monitors. They also support Google Slides, and premium customer support is available. The deck is suitable for use by managers, employees, and organizations, and the slides are easily customizable: you can edit the color, text, icons, and font size to suit your requirements.

Content of this PowerPoint Presentation

Slide 1

This slide depicts a separator slide introducing the session Essentials of Artificial Neural Networks.

Slide 2

This slide depicts the layers in a neural network. There are three types of layers: the input layer, the hidden layer, and the output layer.

Instructor’s Notes: 

  • Input Layer: The input layer comes first. This layer accepts the data and passes it on to the remainder of the network
  • Hidden Layer: The hidden layer is the second of the three layer types. A neural network can have one or more hidden layers. The hidden layers are responsible for a neural network's high performance and complexity, and they can perform several tasks at once, such as data transformation and automatic feature extraction
  • Output Layer: The output layer is the final layer. The outcome, or output, of the problem is stored in the output layer: the input layer receives the raw input (for instance, images), and the output layer delivers the final result

Slide 3

This slide illustrates the artificial neurons present in the layers of an Artificial Neural Network (ANN). It draws a comparison between the biological neurons present in the human brain and an artificial neuron in an ANN.

Instructor’s Notes: A layer is made up of small units known as neurons. A neuron in a neural network is best understood by analogy with a biological neuron: it takes information from other neurons, processes it, and then produces an output. Here, X1 and X2 are the artificial neuron's inputs, f(X) is the processing applied to those inputs, and y is the neuron's output.
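
A minimal Python sketch of such a neuron may help; the weighted-sum combination and the example values here are illustrative assumptions, not taken from the slide:

```python
def artificial_neuron(x1, x2, w1, w2, f):
    # Combine the inputs X1 and X2 (here via an assumed weighted sum),
    # apply the processing step f(X), and return the output y
    z = w1 * x1 + w2 * x2
    y = f(z)
    return y

# Example with an identity processing function
print(artificial_neuron(1.0, 2.0, 0.4, 0.6, f=lambda z: z))  # 1.6
```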

Slide 4

This slide gives an introduction to the activation function in a neural network and discusses its importance. The activation function calculates a weighted total of a neuron's inputs, adds a bias to it, and uses the result to determine whether the neuron should be activated or not.
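
As a rough Python sketch (the helper name and example values are hypothetical), the weighted total plus bias is computed first and then handed to the activation function:

```python
def activate(inputs, weights, bias, activation):
    # Weighted total of the inputs, plus bias, decides the neuron's output
    weighted_total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(weighted_total)

# Example: the neuron fires only if the weighted total plus bias is non-negative
print(activate([1.0, 2.0], [0.4, 0.6], bias=-1.0,
               activation=lambda z: 1 if z >= 0 else 0))  # 1
```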

Slide 5

This slide illustrates the Threshold Function, a type of activation function in a neural network. This function has only two possible outputs, 0 or 1, and it is generally employed when only two classes of data are to be separated.
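
In Python, a threshold (binary step) function can be sketched as follows; the threshold value theta is an illustrative default:

```python
def threshold(z, theta=0.0):
    # Binary step: output 1 if the input reaches the threshold theta, else 0
    return 1 if z >= theta else 0
```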

Slide 6

This slide depicts the Sigmoid Function, a type of activation function. The sigmoid function outputs values in the range 0 to 1. It is primarily employed in the output layer, since that is the node from which we obtain the predicted value.
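
A one-line Python version of the standard sigmoid, for reference:

```python
import math

def sigmoid(z):
    # Squashes any real-valued input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))
```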

Slide 7

This slide illustrates the Rectifier, or ReLU, function, a type of activation function. ReLU (Rectified Linear Unit) is primarily employed in the hidden layers of a neural network due to its rectified nature. ReLU is a half-rectified function: it outputs zero for all negative inputs and increases linearly with positive inputs.
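
The standard ReLU definition in Python:

```python
def relu(z):
    # Zero for negative inputs (the "rectified" half), linear for positive ones
    return max(0.0, z)
```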

Slide 8

This slide depicts the Hyperbolic Tangent Function, a type of activation function. The Tanh function is a rescaled version of the sigmoid function, with a range of values from -1 to 1. Tanh has the sigmoid's curve shape, although with a different range of values; the benefit is that negative and positive inputs map to outputs of the corresponding sign, so the two are kept apart.
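
The standard tanh formula, written out in Python:

```python
import math

def tanh(z):
    # Sigmoid-shaped curve rescaled to the range (-1, 1):
    # tanh(z) = (e^z - e^-z) / (e^z + e^-z)
    return (math.exp(z) - math.exp(-z)) / (math.exp(z) + math.exp(-z))
```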

Slide 9

This slide describes the working of an Artificial Neural Network. Artificial Neural Networks can be considered weighted directed graphs in which neurons are compared to nodes, and connections between neurons are compared to weighted edges.

Instructor’s Notes: A neuron's processing element receives many signals, which are occasionally altered at the receiving synapse before being summed at the processing element. If the sum reaches the threshold, the neuron fires, its output becomes an input to other neurons, and the cycle begins again.
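
A compact Python sketch of this forward flow, treating the network as layers of (weights, bias) neurons; the network shape, weights, and the sigmoid activation are all illustrative assumptions:

```python
import math

def forward(x, layers):
    # Each layer is a list of neurons; each neuron is (weights, bias).
    # Incoming signals are summed at the processing element, passed
    # through the activation, and become inputs to the next layer.
    for layer in layers:
        x = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(weights, x)) + b)))
             for weights, b in layer]
    return x

# Two inputs -> a hidden layer of two neurons -> one output neuron
net = [[([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)],  # hidden layer
       [([1.0, -1.0], 0.0)]]                      # output layer
print(forward([1.0, 2.0], net))
```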

Slide 10

This slide introduces the concept of gradient descent. Gradient Descent is an optimization process used in Machine Learning algorithms to minimize the cost function (the error between the actual and predicted output). It is mainly used to update the learning model's parameters.
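
A bare-bones Python illustration of the update rule; the quadratic cost, learning rate, and step count are made up for the example:

```python
def gradient_descent(grad, params, lr=0.1, steps=100):
    # Repeatedly step the parameters against the gradient of the cost
    for _ in range(steps):
        params = [p - lr * g for p, g in zip(params, grad(params))]
    return params

# Example: minimize (p - 3)^2, whose gradient is 2 * (p - 3)
print(gradient_descent(lambda ps: [2 * (ps[0] - 3.0)], [0.0]))  # ~[3.0]
```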

Slide 11

This slide lists types of gradient descent. These include batch gradient descent, stochastic gradient descent, and mini-batch gradient descent.

Instructor’s Notes: 

  • Batch Gradient Descent: Batch gradient descent sums the errors for each point in the training set and updates the model only after all training instances have been reviewed; one such full pass is known as a training epoch. Batch gradient descent usually gives a steady error gradient and stable convergence, although settling on a local minimum rather than the global minimum isn't always the best outcome
  • Stochastic Gradient Descent: Stochastic gradient descent runs a training epoch for each example in the dataset and updates the parameters one training example at a time, sequentially. These frequent updates offer greater detail and speed, but they also produce noisy gradients, which can, however, help the search escape a local minimum and locate the global one
  • Mini-Batch Gradient Descent: Mini-batch gradient descent combines the principles of batch gradient descent and stochastic gradient descent. It divides the training dataset into small batches and updates the model on each batch separately. This method balances batch gradient descent's computational efficiency against stochastic gradient descent's speed, as the sketch after this list shows
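
The Python sketch below shows how the batch size alone separates the three variants; the gradient function and data format are illustrative assumptions:

```python
import random

def train(data, params, grad, lr=0.01, batch_size=1, epochs=10):
    # batch_size == len(data)    -> batch gradient descent (one update per epoch)
    # batch_size == 1            -> stochastic gradient descent
    # 1 < batch_size < len(data) -> mini-batch gradient descent
    for _ in range(epochs):
        random.shuffle(data)
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            g = grad(params, batch)  # gradient of the cost on this batch
            params = [p - lr * gi for p, gi in zip(params, g)]
    return params
```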

Slide 12

This slide describes the concept of backpropagation, along with its two types: static backpropagation and recurrent backpropagation. Backpropagation is useful for calculating the gradient of a loss function with respect to all of the network's weights; a minimal worked example follows the notes below.

Instructor’s Notes: 

  • Static Backpropagation: In this type of backpropagation, the mapping of a static input generates a static output. It is used to address challenges, such as optical character recognition, that require static classification
  • Recurrent Backpropagation: Recurrent backpropagation is run forward until a specific set value, or threshold, is attained. Once that value is reached, the error is evaluated and propagated backward
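
Here is a minimal backward pass for a single sigmoid neuron with squared loss; the names, loss choice, and learning rate are a hypothetical illustration of the chain rule, not the deck's notation:

```python
import math

def train_step(x, target, w, b, lr=0.5):
    # Forward pass: one sigmoid neuron
    z = w * x + b
    y = 1.0 / (1.0 + math.exp(-z))
    # Backward pass: gradient of the loss (y - target)^2 with respect
    # to w and b, assembled via the chain rule
    dloss_dy = 2.0 * (y - target)
    dy_dz = y * (1.0 - y)              # derivative of the sigmoid
    w -= lr * dloss_dy * dy_dz * x     # dz/dw = x
    b -= lr * dloss_dy * dy_dz * 1.0   # dz/db = 1
    return w, b
```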

Slide 13

This slide lists the advantages of backpropagation. These are that it is simple and straightforward, adaptable and efficient, and that it requires no unique characteristics of the network to be known in advance.

Slide 14

This slide lists the disadvantages of backpropagation. These are that the network's performance on a particular problem depends heavily on the input data, that networks are susceptible to noisy data, and that a matrix-based technique is used.

Slide 15

This slide lists the advantages of Artificial Neural Networks. These include storing information on the entire network, the ability to work with insufficient knowledge, fault tolerance, distributed memory, parallel processing ability, and gradual corruption.

Instructor’s Notes: 

  • Storing information on the entire network: Information is saved across the entire network, not in a database as in traditional programming. The network continues to function even if a few pieces of information are lost in one location
  • Ability to work with insufficient knowledge: After training, an Artificial Neural Network can yield output even with limited or insufficient information
  • Fault tolerance: The output of an ANN is unaffected by the corruption of one or more of its cells. This characteristic makes the networks fault-tolerant
  • Distributed memory: It is necessary to determine the examples and train the network according to the intended output. The network's progress is proportional to the instances chosen; if the event cannot be shown to the network in all its aspects, it may generate inaccurate results
  • Parallel processing ability: Artificial Neural Networks have the computational power to perform multiple tasks at once
  • Gradual corruption: A network does not fail all at once; its performance degrades and slows gradually over time

Slide 16

This slide lists the disadvantages of Artificial Neural Networks. These include ANNs' hardware dependence, the unexplained functioning of the network, the difficulty of showing the problem to the network, and the lack of clarity on the proper network structure.

Instructor’s Notes: 

  • Hardware dependence: The structure of Artificial Neural Networks necessitates parallel processing power. As a result, the implementation is equipment-dependent
  • Unexplained functioning of the network: When an ANN produces a solution, it does not explain why or how that solution was chosen. This reduces trust in the network
  • Difficulty of showing the problem to the network: ANNs can work only with numerical data, so a problem must be translated into numerical values before it is introduced to the network. The representation mechanism chosen here directly impacts the network's performance and depends on the user's skill
  • No clarity on the proper network structure: No explicit rule determines the structure of an artificial neural network; the ideal structure is arrived at through experience and trial and error
