iMADE Workshops

  • 04/27/2021

    Workshop #9 Reinforcement Learning: Actor-Critic The Actor-Critic Process At each time-step t, we take the current state (St) from the environment and pass it as an input through our Actor and our Critic. Our policy takes the state, outputs an action (At), and receives a new state (St+1) and a reward (Rt+1). Thanks to that, the Critic computes the value of taking that action at that…
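
The loop above can be sketched in code. This is a minimal tabular sketch, not the workshop's own implementation: the state/action counts, learning rates, and function names are illustrative assumptions. The Critic updates its value estimate from the TD error, and the Actor nudges its softmax policy in the direction weighted by that same error.

```python
import numpy as np

n_states, n_actions = 5, 2
gamma = 0.99                             # discount factor (assumed value)
alpha_actor, alpha_critic = 0.01, 0.1    # learning rates (assumed values)

theta = np.zeros((n_states, n_actions))  # Actor: policy preferences
V = np.zeros(n_states)                   # Critic: state-value estimates

def policy(s):
    """Softmax over the Actor's preferences for state s."""
    e = np.exp(theta[s] - theta[s].max())
    return e / e.sum()

def actor_critic_step(s, a, r, s_next):
    """One update after observing (St, At, Rt+1, St+1)."""
    # Critic: TD error  delta = R + gamma * V(St+1) - V(St)
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha_critic * delta
    # Actor: policy-gradient step scaled by the Critic's TD error
    p = policy(s)
    grad_log_pi = -p
    grad_log_pi[a] += 1.0          # gradient of log pi(a|s) w.r.t. theta[s]
    theta[s] += alpha_actor * delta * grad_log_pi
    return delta
```

A positive TD error raises both the value of the state and the probability of the action that produced it.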

  • 04/27/2021

    Workshop #8 Unsupervised Learning: Principal Component Analysis Motivation Principal component analysis is extremely useful for deriving an overall, linearly independent trend from a dataset with many variables. It allows you to extract important relationships out of variables that may or may not be related. Another application of principal component analysis is visualization: instead of plotting a large number of different variables, you can reduce them to just a few principal components and plot those. …
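
A short sketch of how principal components can be computed via an SVD of the centered data; the function name and shapes are illustrative, not from the workshop.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components via SVD."""
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by variance explained
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]
    explained_var = (S ** 2) / (len(X) - 1)   # variance along each direction
    scores = X_centered @ components.T        # the low-dimensional coordinates
    return scores, components, explained_var[:n_components]

# Toy data: the third variable is almost a linear copy of the first,
# so two components capture nearly all of the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = 2.0 * X[:, 0] + 0.01 * rng.normal(size=100)
scores, comps, var = pca(X, 2)
```

Plotting `scores` gives the two-variable display the motivation paragraph describes.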

  • 04/27/2021

    Workshop #7 Supervised Learning: Naive Bayesian Classifier What is it? Naive Bayes is a classification technique that uses probabilities we already know to determine how to classify input. These probabilities relate to the existing classes and the features they have. In the example above, we choose the class that most resembles our input as its classification. This technique is based on Bayes’ Theorem. If you’re unfamiliar with Bayes’ Theorem, don’t worry! We will explain it in the next secti…
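
A minimal sketch of a categorical Naive Bayes classifier with Laplace smoothing; the toy weather data and function names are made up for illustration and are not the workshop's example.

```python
import math

def fit_nb(X, y, alpha=1.0):
    """Fit class priors and per-feature likelihoods with Laplace smoothing."""
    classes = sorted(set(y))
    n_features = len(X[0])
    vocab = [sorted({row[j] for row in X}) for j in range(n_features)]
    priors = {c: y.count(c) / len(y) for c in classes}
    likelihood = {}
    for c in classes:
        rows = [row for row, label in zip(X, y) if label == c]
        for j in range(n_features):
            for v in vocab[j]:
                count = sum(1 for row in rows if row[j] == v)
                # alpha keeps unseen feature values from zeroing the product
                likelihood[(c, j, v)] = (count + alpha) / (
                    len(rows) + alpha * len(vocab[j]))
    return classes, priors, likelihood

def predict_nb(model, x):
    """Pick the class with the highest posterior (log) probability."""
    classes, priors, likelihood = model
    def log_posterior(c):
        return math.log(priors[c]) + sum(
            math.log(likelihood[(c, j, v)]) for j, v in enumerate(x))
    return max(classes, key=log_posterior)

# Hypothetical training set: (outlook, temperature) -> play?
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "yes"]
model = fit_nb(X, y)
```

The log-sum form is the usual trick to avoid underflow when many small probabilities are multiplied.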

  • 04/23/2021

    Workshop #6 Reinforcement Learning: Policy Gradient Introducing Policy Gradient Today, we’ll learn a policy-based reinforcement learning technique called Policy Gradients. In policy-based methods, instead of learning a value function that tells us the expected sum of rewards given a state and an action, we directly learn the policy function that maps states to actions (selecting actions without using a value function). This means we directly try to optimize our policy function π without worrying …
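
A one-episode REINFORCE-style update is a minimal instance of this idea. The tabular softmax policy and the names below are illustrative assumptions, not the workshop's code: each action's log-probability gradient is scaled by the discounted return that followed it.

```python
import numpy as np

def softmax(prefs):
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def reinforce_update(theta, episode, gamma=0.99, lr=0.1):
    """One policy-gradient update from a finished episode.

    theta:   (n_states, n_actions) policy preferences
    episode: list of (state, action, reward) tuples
    """
    G = 0.0
    # Walk backwards so G accumulates the discounted return from each step on
    for s, a, r in reversed(episode):
        G = r + gamma * G
        p = softmax(theta[s])
        grad_log_pi = -p
        grad_log_pi[a] += 1.0          # d log pi(a|s) / d theta[s]
        theta[s] += lr * G * grad_log_pi
    return theta

# One hypothetical single-step episode: action 1 in state 0 earned reward 1
theta = np.zeros((1, 2))
theta = reinforce_update(theta, [(0, 1, 1.0)])
```

No value function appears anywhere: the return itself weights the update, which is exactly the "optimize π directly" point above.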

  • 04/19/2021

    Workshop #5 Supervised Learning: FFBPN An introduction to backpropagation In the picture above, the input is transformed first through hidden layer 1, then the second one, and finally an output is predicted. Each transformation is controlled by a set of weights (and biases). During training, the network learns by adjusting these weights to minimize the error (also called the loss function) between the expected outputs and the ones it maps from the g…
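
A small sketch of one forward and backward pass for a two-layer sigmoid network, with the gradients checked against finite differences; the layer sizes and names are illustrative assumptions, not the workshop's network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, x):
    W1, b1, W2, b2 = params
    h = sigmoid(W1 @ x + b1)     # hidden layer activation
    y = sigmoid(W2 @ h + b2)     # output layer activation
    return h, y

def loss(params, x, t):
    """Squared error between prediction and target t."""
    _, y = forward(params, x)
    return 0.5 * np.sum((y - t) ** 2)

def backprop(params, x, t):
    """Gradients of the loss w.r.t. every weight and bias."""
    W1, b1, W2, b2 = params
    h, y = forward(params, x)
    delta2 = (y - t) * y * (1 - y)           # output-layer error signal
    delta1 = (W2.T @ delta2) * h * (1 - h)   # error propagated backwards
    return (delta1[:, None] * x[None, :], delta1,
            delta2[:, None] * h[None, :], delta2)

rng = np.random.default_rng(0)
params = [rng.normal(size=(3, 2)), rng.normal(size=3),
          rng.normal(size=(1, 3)), rng.normal(size=1)]
x, t = np.array([0.5, -0.2]), np.array([1.0])
dW1, db1, dW2, db2 = backprop(params, x, t)
```

Comparing an analytic gradient against `(loss(w+eps) - loss(w-eps)) / (2*eps)` is the standard sanity check for a backprop implementation.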

  • 04/19/2021

    Workshop #4 Unsupervised Learning: Apriori Algorithm and FP-Growth Apriori Algorithm for Association Rule Mining Different statistical algorithms have been developed to implement association rule mining, and Apriori is one such algorithm. In this lab we will study the theory behind the Apriori algorithm and later implement it in Python. Theory of the Apriori Algorithm There are three major components of the Apriori algorithm: 1. Support 2. Confidence 3. Lift We w…
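
The three components can be computed directly on toy transactions. The helper names and the grocery data below are illustrative, and the level-wise search is a bare-bones version of Apriori's candidate pruning, not the workshop's implementation.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in the itemset."""
    itemset = frozenset(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    joint = set(antecedent) | set(consequent)
    return support(transactions, joint) / support(transactions, antecedent)

def lift(transactions, antecedent, consequent):
    """Confidence relative to the consequent's baseline popularity."""
    return (confidence(transactions, antecedent, consequent)
            / support(transactions, consequent))

def frequent_itemsets(transactions, min_support):
    """Level-wise search: only frequent itemsets are extended (Apriori)."""
    items = sorted({i for t in transactions for i in t})
    frequent = {frozenset([i]) for i in items
                if support(transactions, [i]) >= min_support}
    result, k = set(frequent), 2
    while frequent:
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        frequent = {c for c in candidates
                    if support(transactions, c) >= min_support}
        result |= frequent
        k += 1
    return result

# Hypothetical basket data
T = [frozenset(t) for t in ({"milk", "bread"}, {"milk", "bread", "butter"},
                            {"bread"}, {"milk", "butter"})]
```

A lift above 1 means the antecedent makes the consequent more likely than chance.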

  • 04/16/2021

    Workshop #3 Reinforcement Learning: Q-Learning Mathematics: the Q-Learning algorithm Introducing the Q-learning algorithm process Each of the colored boxes is one step. Let’s understand each of these steps in detail. In our robot example, we have four actions (a=4) and five states (s=5). So we will build a table with four columns and five rows. Steps 4 and 5: evaluate Now we have taken an action and observed an outcome and rew…
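
The 5x4 table and the evaluate step can be sketched as follows; the learning rate and discount values are illustrative choices, not taken from the workshop.

```python
import numpy as np

n_states, n_actions = 5, 4          # the robot example: s=5, a=4
alpha, gamma = 0.1, 0.9             # learning rate and discount (assumed)
Q = np.zeros((n_states, n_actions)) # the table: five rows, four columns

def q_update(s, a, r, s_next):
    """Steps 4-5: evaluate the outcome and update the table.

    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    """
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
```

The `max` over the next row is what makes Q-learning off-policy: it evaluates the greedy action regardless of which action is actually taken next.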

  • 04/15/2021

    Workshop #2 Unsupervised Learning: K-Means Clustering Clustering is the process of grouping similar data and isolating dissimilar data. We want the data points in the clusters we come up with to share some common properties that separate them from data points in other clusters. Ultimately, we’ll end up with a number of groups that meet these requirements. This probably sounds familiar because, on the surface, it sounds a lot like classification. But be aware that clustering and classification solve two very differe…
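
A bare-bones Lloyd's-algorithm sketch of K-Means; the function name and the random initialization scheme are illustrative assumptions, not the workshop's code. It alternates between assigning points to their nearest centroid and moving each centroid to the mean of its points.

```python
import numpy as np

def kmeans(X, k, n_iters=50, seed=0):
    """Cluster the rows of X into k groups with Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its points
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break                   # assignments stopped changing
        centroids = new_centroids
    return labels, centroids

# Two well-separated toy blobs
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]])
labels, centroids = kmeans(X, 2)
```

Note the contrast with classification: no labels go in, and the cluster indices that come out carry no meaning beyond grouping.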

  • 04/15/2021

    Workshop #1 Supervised Learning: KNN Introduction K-Nearest Neighbors (KNN) is a basic classifier for machine learning. A classifier takes an already labeled data set and then tries to assign new data points to one of the categories. So, we are trying to identify what class an object is in. To do this we look at the closest points (neighbors) to the object, and the class with the majority of neighbors will be the class that we identify the object to be in. The k is the number of nearest neighbors to …
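
The neighbor vote described above fits in a few lines. This sketch uses plain Euclidean distance and a majority vote; the function name and toy data are illustrative assumptions.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by majority vote among its k nearest labeled points.

    train: list of (point, label) pairs, where point is a coordinate tuple
    """
    # Sort the labeled points by Euclidean distance to the query
    by_dist = sorted(train, key=lambda pl: math.dist(pl[0], query))
    # Vote among the k closest and return the winning label
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Two hypothetical classes, well separated in the plane
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
```

Choosing k odd (for two classes) avoids tie votes.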

  • 9 Records, Page 1/1
