How and why I built an ML-based Python API hosted on Heroku
The problem I wanted to solve
For a long time I had been thinking of creating a cloud API. But I didn't want the traditional Hello World API, or a simple SQL-backed Flask API supporting the classic username-and-email GET, PUT, POST, DELETE REST requests. Since AI and ML are so pervasive now, I thought of giving ML a try, and it turned out to be easy.
What does this API do?
I made a cloud API with a simple (pre-trained) image classification ML model. For every PUT request with image data in the body, the server generates a feature vector for the image, compares it with other images, and returns the top 5 closest matches.
I used PyTorch with pre-trained models to generate the feature vectors and scikit-learn for the closest-image match. I chose Heroku as the platform to host the API since it offers a lot of integrations/add-ons and is fast to get started with. Flask was used for the Python backend logic.
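The post doesn't show the matching step itself, so here is a minimal sketch of the top-5 ranking, assuming the feature vectors for the gallery images have already been computed (for example with a pre-trained PyTorch model). For brevity it uses plain NumPy cosine similarity in place of the scikit-learn call, and all names are hypothetical:

```python
import numpy as np

def top_matches(query_vec, gallery_vecs, gallery_urls, k=5):
    """Rank gallery images by cosine similarity to the query's feature vector."""
    # normalize everything to unit length so a dot product is cosine similarity
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery_vecs / np.linalg.norm(gallery_vecs, axis=1, keepdims=True)
    sims = g @ q                        # one similarity score per gallery image
    top = np.argsort(sims)[::-1][:k]    # indices of the k highest scores
    # mirror the API's response shape: [similarity, image_url] pairs
    return [[float(sims[i]), gallery_urls[i]] for i in top]
```

The real `search()` would first run the incoming image through the network to get its feature vector, then call something like this against the precomputed gallery.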
The process of building an ML-based Python API hosted on Heroku
I forked this GitHub repo for the image classification part and referenced some Medium blogs to get the Flask server rolling. I had some prior experience deploying apps on Heroku; only the GitHub integration pipeline feature was new to me.
Here's the gist of the API
```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# endpoint to find the closest matches for an image
@app.route("/image_clustering", methods=["PUT"])
def image_clustering():
    # request body carries the base64-encoded image byte stream
    image_bytes = request.get_data()
    results = search(image_bytes)  # returns a list of image matches
    return jsonify(results)
```
Challenges I faced
- Slug Size: Heroku allows a max slug size of 500 MB, the total data an app can hold (code, executables, and media files like images and PDFs) after compression. This is an issue since Python libraries like SciPy and PyTorch are pretty hefty: PyTorch with CUDA 8 is ~600 MB alone. But there are workarounds.
- PyTorch Version: PyTorch provides a CPU-only build variant, a small ~45 MB library that provides all the features we need for deployment.
- SciPy Manual Install: An online install of SciPy on Heroku turns out to be buggy - more on it here. Instead, I downloaded the SciPy whl (or the source code) as a file and installed it manually on the server.
- JPEG vs PNG: In the demo code, I've used JPEG files (with base64-encoded data beginning with /9j/…). So if you ping the server with images in other formats like PNG (base64-encoded data starts with iVBOR…), you will get an error.
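The first two fixes above come down to what pip installs on the dyno. A hypothetical requirements.txt sketch, not the exact file from the project (at the time, people typically pointed pip at a direct CPU-only wheel URL from download.pytorch.org; pin versions for a real deploy):

```
# requirements.txt (illustrative package list)
flask
gunicorn
scikit-learn
# pull the CPU-only PyTorch wheel to keep the slug small (~45 MB vs ~600 MB with CUDA)
--extra-index-url https://download.pytorch.org/whl/cpu
torch
```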
You can try out a demo in several ways
- Install the app - get it here - on your Android device. Click an image; it'll take some time and then display the closest matching images.
- Use this Python script. Just enter the relative path of an image on your desktop and it will display the results.
- Make a PUT request online (I recommend Hurl.it for starters) to http://beard-app.herokuapp.com/image_clustering with base64-encoded image data as the body.
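The script and raw-request options can be sketched as a small stdlib-only Python client. The helper names are hypothetical, and the `/9j/` prefix check mirrors the JPEG-only caveat above:

```python
import base64
import json
from urllib import request as urlrequest

API_URL = "http://beard-app.herokuapp.com/image_clustering"

def is_jpeg_base64(encoded: str) -> bool:
    # JPEG files start with bytes FF D8 FF, which base64-encode to "/9j/"
    return encoded.startswith("/9j/")

def query_api(image_path: str):
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    if not is_jpeg_base64(encoded):
        raise ValueError("the demo server only accepts JPEG images")
    req = urlrequest.Request(API_URL, data=encoded.encode("ascii"), method="PUT")
    with urlrequest.urlopen(req) as resp:
        return json.load(resp)  # list of [similarity, image_url] pairs
```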
For the sample cat image shown, the results are:

```json
[
  [0.8155140106062471, "https://www.googleapis.com/download/storage/v1/b/python-clustering-api.appspot.com/o/images%2FFace%2F124.jpg?generation=1522585329188523&alt=media"],
  [0.8145577585207011, "https://www.googleapis.com/download/storage/v1/b/python-clustering-api.appspot.com/o/images%2FFace%2F242.jpg?generation=1522585299229997&alt=media"],
  [0.7914929138145477, "https://www.googleapis.com/download/storage/v1/b/python-clustering-api.appspot.com/o/images%2FFace%2F212.jpg?generation=1522584727478100&alt=media"],
  [0.7806927914191767, "https://www.googleapis.com/download/storage/v1/b/python-clustering-api.appspot.com/o/images%2FFace%2F099.jpg?generation=1522585251917855&alt=media"],
  [0.6948463995381056, "https://www.googleapis.com/download/storage/v1/b/python-clustering-api.appspot.com/o/images%2FFace%2F119.jpg?generation=1522584693369035&alt=media"]
]
```
What I learnt
- I learnt about Flask and other backend microframeworks.
- I learnt how to make a cloud API.
- I got to know how to develop an app and put a machine learning model into production.
Tips and advice
- Cloud ML frameworks can turn out to be buggy (or large in size), so prefer a lightweight or CPU-only variant.
- ML models may take a lot of processing time (even 2s is a lot!) and the PUT request may time out. So make sure to structure the API into small chunks that respond fast.
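On the timeout point, Heroku's router cuts off requests after 30 seconds, so it also helps to keep the gunicorn worker timeout safely under that. A Procfile sketch, assuming the Flask app object lives in app.py:

```
# Procfile (module name is an assumption)
web: gunicorn app:app --timeout 25
```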
Final thoughts and next steps
The API is a good example of how app development and ML can work together. I'm looking for more such projects, and for partners to learn with and create meaningful apps.