How to Use Tefla: Simple Deep Learning Wrapper for Tensorflow
I have been using this framework for about four months now, and I find it very intriguing. What I personally love about it is how easy it is to use. Just three or four commands and you're all set!
So let's cut to the chase and start coding!!
If you want more instructions, you can visit https://github.com/litan/tefla.
First, you have to install it by downloading the source code and following the instructions here.
Pre-Process the Data
This step is somewhat optional — you can decide how you want to process your data. Different problems need different preprocessing techniques. And for that, Tefla provides some very good cropping methods.
For more information, click here.
Keep these two things in mind:
For dominant object cropping, you can use:
This takes two parameters: fname, which is your full file path, and the shape of the target image that you want. Keep in mind that in neural networks it is necessary to resize your images so that they match the network's input dimensions.
For all-object cropping:
Sometimes, you might want to remove the redundant background and keep all the objects in the image — this is where this function comes in.
You can find the instruction here.
Essentially, both methods take the same input.
You can explore and play with it and then decide which function suits your data better.
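To build intuition for what cropping to a target shape means, here is a generic sketch of a center crop on a plain 2-D pixel grid. This is only an illustration of the idea; it is not Tefla's API, and Tefla's own helpers (which work on image files) are the ones you should use in practice.

```python
def center_crop(pixels, target):
    """Crop the central target x target window from a 2-D pixel grid.

    A toy illustration of dominant-object-style cropping; real
    helpers operate on image files and also handle resizing.
    """
    h, w = len(pixels), len(pixels[0])
    top = (h - target) // 2
    left = (w - target) // 2
    return [row[left:left + target] for row in pixels[top:top + target]]

# A 6x6 grid whose values encode their own (row, col) position.
grid = [[r * 10 + c for c in range(6)] for r in range(6)]
print(center_crop(grid, 2))  # the central 2x2 window: [[22, 23], [32, 33]]
```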
|-- Data_Dir
|   |-- training_<image_size>     (e.g. training_256 for 256x256 images)
|   |-- validation_<image_size>   (e.g. validation_256)
|   |-- test_<image_size>         (e.g. test_256)
|   |-- training_labels.csv       (data: filename,label; header: image,label)
|   |-- validation_labels.csv     (data: filename,label; header: image,label)
|   |-- test_labels.csv           (data: filename,label; header: image,label)
Structuring your data is important when you're using Tefla — this framework is based more on file operations, unlike Keras, which is based more on array operations.
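As a concrete example, a labels CSV matching the layout above (header image,label, one filename and label per row) can be written with a few lines of standard-library Python. The filenames and labels below are made up for illustration.

```python
import csv

def write_labels(csv_path, samples):
    """Write a labels CSV with the image,label header described above.

    samples is a list of (filename, label) pairs.
    """
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "label"])
        for fname, label in samples:
            writer.writerow([fname, label])

# Hypothetical filenames, just to show the format.
samples = [("img_001.jpg", 0), ("img_002.jpg", 1)]
write_labels("training_labels.csv", samples)
```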
To process data faster with Tefla, use an SSD; with Keras, you should have a good amount of RAM. That said, this only matters when you have a large amount of data. For small amounts of data (tens of GBs), Tefla and Keras perform quite similarly.
To lower memory usage, you can use batch generators in Keras. However, they can get tricky sometimes. If you hate writing batch generators, then Tefla is perfect for you.
Building a Model
Let's get to the fun part!
In tefla.experiments, you can find prebuilt models; that is exactly what you need for this step.
Open the file here and find a model that can give you up to 99.74% accuracy on the MNIST dataset.
Tweak this model and try to beat the record of 99.79% on the MNIST dataset.
Training a Model
Just one command and you could be done with training! Run

python -m tefla.train --help

for help, and then run:

python -m tefla.train --model path/to/your/model --training_cnf path/to/train_cnf.py/for/that/model --data_dir /path/to/data/dir
The train_cnf.py file is a config file, which Tefla requires to do magic like:
- data augmentation
- writing summaries for TensorBoard
- adding regularization
- setting the batch size
- selecting the optimizer
This is the most important file to consider while training your model. Tweaking it can boost your model's accuracy, or drop it if you get something wrong.
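To give a feel for what such a config might contain, here is a purely hypothetical sketch. The key names below (batch_size_train, l2_reg, aug_params, and so on) are assumptions made for illustration, not Tefla's actual schema; consult the example cnf files in the Tefla repository for the real keys.

```python
# Hypothetical train_cnf.py sketch. Every key name here is an
# illustrative assumption, NOT Tefla's real config schema; see the
# example cnf files in the Tefla repository for the actual keys.
cnf = {
    "batch_size_train": 32,      # batch size during training
    "batch_size_test": 32,       # batch size during evaluation
    "l2_reg": 0.0005,            # regularization strength
    "optimizer": "adam",         # optimizer selection
    "summary_every": 10,         # TensorBoard summary frequency
    "aug_params": {              # data augmentation knobs
        "rotation": 10,
        "shear": 0.1,
        "allow_stretch": True,
    },
}
```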
Retrain your model from any epoch by passing the start_epoch and resume_lr parameters in the train command. It is that simple!
Predict by Using Trained Model
python -m tefla.predict --help
training_cnf remains the same as in training.
weights_from takes the path of the epoch weights with which you want to start prediction.
Note: the weights are stored in the weights folder, which you can find in the root directory.
And that is it — you are done with prediction.
Test Your Model
python -m tefla.metrics --help
Press enter and you should see a confusion matrix along with various other metrics and scores.
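For intuition, a confusion matrix simply counts how often each true class was predicted as each class: rows are true labels, columns are predictions. A minimal standard-library sketch (not Tefla's implementation):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, n_classes):
    """Return a matrix where entry [t][p] counts samples of true
    class t that were predicted as class p."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in range(n_classes)]
            for t in range(n_classes)]

# Toy two-class example: 5 samples, 3 correct, 2 mistakes.
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
print(confusion_matrix(y_true, y_pred, 2))  # [[1, 1], [1, 2]]
```

The diagonal holds the correct predictions, so accuracy is the diagonal sum divided by the total sample count.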
Wasn't that easy and straightforward? I hope you found this post helpful!
If you have any questions, feel free to leave a comment below this post. If you have anything to add, feel free to let me know!