Bootstrapping and Artificial Intelligence


Introduction

Bootstrapping: an expression I find extremely interesting. Bootstrapping originally means the impossible act of lifting yourself up by your own bootstraps (like the man shown in the figure below). Bootstrapping, also called booting, has found its way into a wide range of scientific fields, especially Computer Science.

A man bootstrapping himself

Examples of applying bootstrapping:

In business, bootstrapping is starting a company without external help or capital.

In statistics, bootstrapping is a resampling technique used to obtain estimates of summary statistics (see the sketch after this list).

In computing in general, bootstrapping is the process by which a simple computer system loads and activates a more complicated one.

In compilers, bootstrapping is writing a compiler for a programming language in that very language.

In networks, a bootstrapping node is a node that helps newly joining peers successfully enter a P2P network.

In linguistics, bootstrapping is a theory of language acquisition.
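To make the statistics item above a bit more concrete, here is a minimal sketch of the resampling idea, assuming NumPy is available; the data and the number of resamples are made up purely for illustration.

```python
import numpy as np

# Minimal sketch of the statistics sense of bootstrapping:
# resample the data with replacement to estimate the spread of a statistic.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)  # pretend this is our only sample

boot_means = []
for _ in range(1000):
    resample = rng.choice(data, size=len(data), replace=True)  # draw with replacement
    boot_means.append(resample.mean())

print("sample mean:", data.mean())
print("bootstrap estimate of the standard error:", np.std(boot_means))
```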

Bootstrapping And AI …

Bootstrapping in AI is using a weak learning method to provide the starting information for a stronger learning method. For example, consider a classifier that has to classify a set of unlabelled samples. It first uses clustering (a "weak" unsupervised method) to estimate a cluster for each sample, then treats each estimated cluster as if it were the sample's REAL class in a subsequent "stronger" supervised learning step, which can finally achieve high performance.

So as you see, the classifier has built estimates on top of its OWN estimates. It has predicted new things based on its OWN previous predictions, as sketched below.
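Here is a minimal sketch of that idea, assuming scikit-learn is available; the synthetic dataset, KMeans, and LogisticRegression are just illustrative choices, not the only way to do this.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Unlabelled samples (we never look at the true labels).
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# "Weak" unsupervised step: clustering guesses a group for every sample.
pseudo_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# "Stronger" supervised step: train a classifier as if those guesses were real labels.
clf = LogisticRegression(max_iter=1000).fit(X, pseudo_labels)

# The classifier now predicts classes that were bootstrapped from its own estimates.
print(clf.predict(X[:5]))
```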

Bootstrapping and Reinforcement Learning

Since reinforcement learning has its roots in dynamic programming, bootstrapping plays an important role here too.

Reinforcement learning methods are commonly divided into two practical families: Monte Carlo methods and Temporal-Difference methods.

In Monte Carlo methods, the agent updates its value estimates only once an episode is over, using the complete return it actually observed. No bootstrapping occurs here, because the update is based on a REAL, ASSURED outcome rather than on another estimate.

On the contrary, in Temporal-Difference methods the agent updates its estimates after every single action, judging whether that action has brought the goal closer by using its own current estimate of the next state's value. These estimated values then shape its future actions. Thus an agent using Temporal-Difference learning bootstraps all the way until it achieves its desired goal.
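A toy sketch of the difference, with made-up numbers: the Monte Carlo update waits for the actual return observed at the end of an episode, while the TD(0) update immediately plugs in the agent's own current estimate of the next state's value.

```python
gamma, alpha = 0.9, 0.1
V_mc = {"s": 0.0}
V_td = {"s": 0.0, "s_next": 0.5}   # current (possibly wrong) value estimates

# Monte Carlo: wait until the episode ends, then use the real return G.
G = 1.0                                    # actual return observed at the end
V_mc["s"] += alpha * (G - V_mc["s"])       # no bootstrapping: G is not an estimate

# TD(0): update right after one step, using the current estimate V(s').
r = 0.0                                    # one-step reward
V_td["s"] += alpha * (r + gamma * V_td["s_next"] - V_td["s"])  # bootstraps on V(s')

print("Monte Carlo:", V_mc["s"], " TD(0):", V_td["s"])
```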

In The End …

Bootstrapping matters to machine learning because it can speed up learning and makes machine learning resemble human learning, where new knowledge is built on top of earlier guesses.

This was just an introduction. Maybe we'll talk about it more later. But for now, we have more interesting stuff to talk about 😉
