Practical Deep Learning for Coders Course - Tabular Models (Linear Regression & Random Forests)

This blog post series captures my weekly notes while I attend the fastai v5 course conducted by the University of Queensland with fast.ai. So off to week 5, where we get started playing with tabular models.
fastai
fastaicourse
Author

Kurian Benoy

Published

July 28, 2022

What is important for an ML practitioner

Most of the time, your job as a practitioner is to connect a set of inputs to the set of outputs you want, using a machine learning algorithm within a framework. According to Jeremy, what is important is how you tweak the first layer and the last layer of the neural network. The middle layers are usually not that important.
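To make that concrete, here is a minimal sketch (my own illustration, not code from the lesson) of a small tabular network: the first and last layers are shaped by your inputs and outputs, while the middle is a fairly generic stack.

```python
import torch.nn as nn

# Assumed sizes purely for illustration: 12 tabular features, 1 target.
n_inputs, n_outputs = 12, 1

model = nn.Sequential(
    nn.Linear(n_inputs, 64),   # first layer: shaped by your input features
    nn.ReLU(),
    nn.Linear(64, 64),         # middle layers: usually left alone
    nn.ReLU(),
    nn.Linear(64, n_outputs),  # last layer: shaped by the outputs you want
)
```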

Remind yourself of these concepts

Before getting started with this lesson, let's remind ourselves: what is a matrix, and what is a vector?

(Image: illustration of a matrix and a vector)

In this chapter, you may stumble into terms like matrix-vector multiplication, matrix-matrix products, etc. So it's a good idea to remind yourself of the concepts of matrix multiplication and broadcasting.
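As a quick refresher, here is a small PyTorch sketch of those operations; the numbers are made up purely for illustration.

```python
import torch

m = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])   # a 3x2 matrix
v = torch.tensor([10., 100.])                      # a 2-element vector

# Matrix-vector multiplication: (3x2) @ (2,) -> (3,)
print(m @ v)        # tensor([210., 430., 650.])

# Matrix-matrix product: (3x2) @ (2x2) -> (3x2)
w = torch.tensor([[1., 0.], [0., 1.]])
print(m @ w)

# Broadcasting: v is "stretched" across every row of m
print(m * v)        # tensor([[ 10., 200.], [ 30., 400.], [ 50., 600.]])
```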

In this chapter three notebooks were covered, so it's a bit more hectic than the previous chapters, to be honest. The notebooks covered were Linear model and neural network from scratch, Why you should use a framework, and How random forests really work (each discussed below).

For the course, there was close to a one-month gap between the fifth and sixth lessons because of exams at the University of Queensland.

Linear model & neural network from scratch notebook

In this notebook, the first few sections cover data cleaning and feature engineering with pandas. A few notes I jotted down when I first started looking into the lesson:

  • In pandas, never delete columns.
  • You can replace missing values using the mode of a column.
  • A column can have multiple modes, so choose the first one (element 0).
  • For the first baseline model, don't do complicated things at the start.
  • For categorical variables, we can create dummy variables (e.g. for Pclass) with pd.get_dummies; a rough sketch follows below.
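Putting those notes together, a condensed pandas sketch might look like this (the file name and column names assume the Kaggle Titanic data used in the notebook; this is my own summary rather than the notebook's exact code):

```python
import pandas as pd

# Assumed: the Kaggle Titanic training file used in the notebook.
df = pd.read_csv("train.csv")

# Fill missing values with the mode instead of deleting columns.
# A column can have several modes, so take the first one (index 0).
modes = df.mode().iloc[0]
df = df.fillna(modes)

# Dummy variables for categorical columns such as Pclass.
df = pd.get_dummies(df, columns=["Pclass", "Sex", "Embarked"])
```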

Then the notebook progresses through building:

  1. Linear models
  2. Neural networks
  3. Deep Learning
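As a rough sketch of the from-scratch idea (my own minimal version with placeholder data, not the notebook's exact code): a linear model is just a set of coefficients multiplied with the independent variables and trained with gradient descent, and the neural network and deep learning versions stack more layers of coefficients with non-linearities in between.

```python
import torch

torch.manual_seed(42)
t_indep = torch.rand(100, 8)                  # placeholder independent variables in [0, 1]
t_dep = (t_indep.sum(dim=1) > 4).float()      # placeholder dependent variable

coeffs = (torch.rand(8) - 0.5).requires_grad_()

def calc_preds(coeffs, indeps):
    # Linear model: sigmoid of the weighted sum of the inputs
    return torch.sigmoid((indeps * coeffs).sum(dim=1))

def calc_loss(coeffs, indeps, deps):
    return torch.abs(calc_preds(coeffs, indeps) - deps).mean()

# Plain gradient descent on the coefficients
lr = 2.0
for epoch in range(30):
    loss = calc_loss(coeffs, t_indep, t_dep)
    loss.backward()
    with torch.no_grad():
        coeffs -= coeffs.grad * lr
        coeffs.grad.zero_()

print(calc_loss(coeffs, t_indep, t_dep).item())
```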
Note

This notebook is also a prerequisite for lesson 7, where we cover collaborative filtering.

Why you should use a framework

This notebook does some interesting feature engineering, followed by building models with the fastai framework. It also shows how to use ensembling with the fastai library to get into roughly the top 25% of the leaderboard.
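For context, here is a hedged sketch of what building and ensembling tabular models with fastai can look like; the dataset path and column names are assumptions for illustration, not the notebook's exact code.

```python
from fastai.tabular.all import *

df = pd.read_csv("train.csv")                       # assumed Titanic-style data
splits = RandomSplitter(seed=42)(range_of(df))

to = TabularPandas(
    df, procs=[Categorify, FillMissing, Normalize],
    cat_names=["Sex", "Pclass", "Embarked"],        # assumed categorical columns
    cont_names=["Age", "Fare"],                     # assumed continuous columns
    y_names="Survived", y_block=CategoryBlock(),
    splits=splits,
)
dls = to.dataloaders(bs=64)

# Ensembling: train a few learners and average their predictions.
preds = []
for _ in range(5):
    learn = tabular_learner(dls, metrics=accuracy)
    learn.fit_one_cycle(5)
    p, _ = learn.get_preds(dl=dls.valid)
    preds.append(p)

ens_preds = torch.stack(preds).mean(dim=0)
```

Because each learner starts from a different random initialisation, averaging their predictions tends to be a bit more accurate than any single model.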

I have seen the clichéd argument that to learn ML you need to go into all the details, and that using frameworks is a step down. Jeremy emphasises the opposite: always build on top of good frameworks rather than re-inventing everything from scratch. A lot of fast.ai's success comes from not asking practitioners to reimplement every detail themselves. One of the reasons I like frameworks such as blurr and IceVision is exactly that: they help users who are familiar with fastai to easily build complex computer vision and NLP models.

During a conversation with IceVision core developer Dickson Neoh, he said:

In IceVision, within 10 minutes I can train an object detection model with any dataset. It may not be the most accurate, yet I can iterate so quickly.

How random forests really work

Jeremy was known as the random forest guy before he became known as the deep learning person. One of the cool things about random forests is that, unlike logistic regression, it's very hard to get something badly wrong with them.

Random forests are really interpretable and help in getting good accuracy. He also covered gradient boosted trees during this lesson.
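As a minimal illustration (my own scikit-learn sketch, not the lesson's exact code), here is a random forest on the same kind of cleaned Titanic data, along with feature importances, which are one reason random forests are so interpretable:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed: the Kaggle Titanic train.csv, cleaned as in the pandas sketch above.
df = pd.read_csv("train.csv")
df = df.fillna(df.mode().iloc[0])
df = pd.get_dummies(df, columns=["Sex", "Pclass", "Embarked"])

cols = [c for c in df.columns
        if c.startswith(("Sex_", "Pclass_", "Embarked_"))] + ["Age", "Fare", "SibSp", "Parch"]
X, y = df[cols], df["Survived"]
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=42)

# A forest of decision trees, each trained on random subsets of rows and columns.
rf = RandomForestClassifier(n_estimators=100, min_samples_leaf=5,
                            random_state=42, n_jobs=-1)
rf.fit(X_train, y_train)
print(rf.score(X_valid, y_valid))

# Feature importances: which columns did the trees rely on most?
print(pd.Series(rf.feature_importances_, index=cols).sort_values(ascending=False))
```

Because the defaults already work well, there is very little to tune here, which is exactly why it's so hard to get a random forest badly wrong.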

Homework dataset

To practise these techniques, I feel a good place to start is participating in the Kaggle Tabular Playground Series competitions or previous tabular competitions on Kaggle.