My Journey using FastAI — Part II

Juan Cruz Alric Cortabarria · Published in LatinXinAI · 4 min read · Jul 30, 2022

In this entry, we will try to get better results than we got with the first model.

First of all, we are going to use Google Colab, so please follow the link to create a notebook.

Now we can import all the packages we will need.

%%capture
# Install timm first (see the note below), then fastbook, which pulls in fastai
!pip install timm
import timm
!pip install fastbook
from fastbook import *
from fastai.vision.all import *

Important: please make sure you install the timm library first, so that you avoid problems later when running the fastai packages.

We are going to continue with the same problem as before: trying to predict whether a forest is on fire or not. To do this, we first need images. We can get them using the “search_images_ddg” and “download_images” functions as follows:

searches = 'forest', 'forest in fire'
path = Path('fire_or_not')

if not path.exists():
    for o in searches:
        # One sub-folder per search term; the folder name becomes the label
        dest = (path/o)
        dest.mkdir(exist_ok=True, parents=True)
        # Search DuckDuckGo and download up to 500 images per term
        results = search_images_ddg(f'{o} photo')
        download_images(dest, urls=results[:500])
        try:
            # Shrink the images so training is faster
            resize_images(path/o, max_size=400, dest=path/o)
        except Exception:
            pass

We loop over each search term and download all the images we need.

(Image: example of how the downloaded images are saved on disk.)

Once we have all the images, we check for corrupted ones and delete them so that our model does not crash.

# Find images that fail to open, delete them, and count how many there were
failed = verify_images(get_image_files(path))
failed.map(Path.unlink);
len(failed)

Now that we have the images, we can create our DataBlock:

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=RandomResizedCrop(224, min_scale=0.75),
    batch_tfms=aug_transforms()
).dataloaders(path)

The inputs are going to be images (“ImageBlock”) and the outputs are going to be categories (“CategoryBlock”).

We can get the items we need by using “get_image_files”.

We can specify a splitter, which sets aside some of the data for validation purposes. In this case, we are using 20% of the data for validation.

For “get_y”, getting the label is as easy as calling the parent_label function. It reads the labels from the folder names we created when we downloaded the images: “forest” and “forest in fire”.
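Before picking a model, it is worth sanity-checking the dataloaders. This quick check is not in the original post, but it only uses standard fastai calls:

# Quick sanity check: show a few labelled images from a batch
# and print the category names the DataBlock inferred from the folders
dls.show_batch(max_n=6)
print(dls.vocab)  # expected: ['forest', 'forest in fire']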

We have our dataloaders. Now we need to look for a pre-trained model to use in our learner. I really like using “convnext” models; they are reliable and quite light.

timm.list_models('convnext*')
(Output: the list of available convnext models in timm.)

You can choose any one you like. However, I recommend the “in22k” models, which were pre-trained on roughly 22k ImageNet categories; we are going to use “convnext_small_in22k”.
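If you only want to see the 22k variants, timm.list_models accepts a wildcard filter. This is a small aside, and it assumes the model-naming scheme of the timm version used at the time of writing:

# List only the ImageNet-22k pretrained convnext variants
# (exact names depend on the installed timm version)
timm.list_models('convnext*in22k')

With the model chosen, we are ready to create our learner.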

learn = vision_learner(dls, 'convnext_small_in22k', metrics=error_rate).to_fp16()

The “.to_fp16()” call trains the model in mixed (16-bit) precision, which uses less memory and makes the model lighter.

We are going to look for a learning rate so that our model can train without problems. Fastai has the “lr_find” method for this; we are going to look at the valley and slide suggestions.

learn.lr_find(suggest_funcs=(valley, slide))
(Plot: the learning-rate finder curve with the valley and slide suggestions.)

Choosing a learning rate between the valley and the slide is a really good rule of thumb.
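If you prefer not to read the values off the plot, lr_find also returns its suggestions, so you can pick a point between them in code. A small sketch, relying on the returned tuple having one field per suggestion function:

# lr_find returns a namedtuple with one field per suggestion function
suggestions = learn.lr_find(suggest_funcs=(valley, slide))
print(suggestions.valley, suggestions.slide)
# One simple choice: the geometric mean of the two suggested values
lr = (suggestions.valley * suggestions.slide) ** 0.5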

Finally, we re-create the learner from scratch and use the learning rate we just found when training:

learn = vision_learner(dls, 'convnext_small_in22k', metrics=error_rate).to_fp16()

We are ready to train the model. Just run “fine_tune”:

learn.fine_tune(4, 0.0003669646321213804)
(Output: the fine_tune training table, showing the losses and error_rate for each epoch.)

We can see that we got a perfect score and the “valid_loss” continues to decrease. We definitely got a better result than the older model.
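If you want to confirm those numbers rather than read them off the table, you can re-compute the metrics on the validation set. A quick check, not shown in the original post:

# Returns the validation loss followed by the metrics (here, error_rate)
learn.validate()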

Older model results:

(Image: the training table of the old model, for comparison.)

We can plot the top losses to see some examples:

(Image: examples of the images with the highest loss.)
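The code behind that figure is not shown in the post; a minimal sketch using fastai’s ClassificationInterpretation would look like this:

# Build an interpretation object from the trained learner and show the
# validation images with the highest loss (wrong or least confident predictions)
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(6, nrows=2)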

As you can see, with a few modifications to the older model we can create a more powerful one.

I hope you enjoyed this simple yet powerful model. I would appreciate it if you gave it a clap. Have a nice day! Please feel free to leave a comment if you didn’t understand something; I will gladly help you out :).

I added a working URL where you can test the model. Give it a try!


