Series 3: Exporting the LSTM Gender Classification Model and Serving With Flask

October 12, 2020
Tensorflow Text Classification NLP LSTM

Hello again! This is the last part of our series on developing a gender classification model with a deep learning approach. In the previous post we learned how to deploy our model directly to TensorFlow Serving and run the service inside a Docker container. If you have not read the previous parts, please follow these links:

  1. Training LSTM Model for gender classification
  2. Serving model with Tensorflow Serving

For this last part, we will deploy our trained model with Flask so it can be attached to a web application built with Flask. First, we need to export the trained model.

Exporting the model

model.save("gender_lstm_model.h5")
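
The Flask app will also need the character-to-index dictionary built during dataset preparation in part 1. If you have not saved it yet, here is a minimal sketch, assuming the mapping is still in memory under the (hypothetical) variable name vocab_index:

import json

# vocab_index is the character-to-index mapping built during dataset
# preparation (hypothetical variable name for this sketch); it must be the
# exact mapping used at training time so the Flask app encodes names
# consistently with the model.
with open("char_dictionary.json", "w") as f:
    json.dump(vocab_index, f)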

Setting up the Docker environment for the Flask application

Our Flask application will be served from a Docker container, so we need to prepare several files first, as shown below:

Kinaras-MBP:FlaskServing kinara$ tree
.
├── Dockerfile
├── app
│   ├── __init__.py
│   └── model_weight
│       ├── char_dictionary.json
│       └── gender_lstm_model.h5
├── config.py
├── docker-compose.yml
├── requirements.txt
└── run.py

2 directories, 8 files

For the Docker-related files, we need a Dockerfile and a docker-compose.yml. First, the Dockerfile:

FROM amd64/python:3.7.6-slim-stretch

RUN apt update --fix-missing

RUN apt install -y htop
RUN apt install -y git
RUN apt install -y supervisor
RUN apt install -y libblas-dev liblapack-dev
RUN apt install -y libpython3-dev build-essential libpcre3-dev libatlas-dev
RUN apt install -y libhdf5-dev
RUN apt install -y libssl-dev 
RUN apt install -y libffi-dev

# set local datetimezone
ENV TZ Asia/Jakarta
RUN apt install -y locales

RUN sed -i -e 's/# id_ID.UTF-8 UTF-8/id_ID.UTF-8 UTF-8/' /etc/locale.gen && \
    locale-gen
RUN locale-gen id_ID.utf8

# create a non-root user (adduser would prompt interactively during the
# build, so use groupadd/useradd with the build args instead)
ARG user=tagger
ARG group=tagger
ARG uid=1000
ARG gid=1001
RUN groupadd -g ${gid} ${group} && useradd -m -u ${uid} -g ${group} ${user}

USER ${user}
RUN mkdir /home/${user}/src
RUN mkdir /home/${user}/log

# add requirements files 
ADD requirements.txt /home/${user}/src/requirements.txt

USER root
RUN pip install -r /home/${user}/src/requirements.txt

EXPOSE 9901

and the docker-compose.yml:

version: '2'
services:
  tagger:
    container_name: 'genderapi'
    build: .
    volumes: 
      - .:/home/tagger/src
    ports: 
      - 9901:9901
    environment:
      - FLASK_APP=/home/tagger/src/run.py
      - FLASK_DEBUG=1
      - ENV=devel
    command: flask run --host=0.0.0.0 --port=9901
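
Note that docker-compose bind-mounts the project directory into the container and sets FLASK_DEBUG=1, so the Flask reloader picks up code changes on the host without rebuilding the image. This is convenient for development, but you would typically disable debug mode in production.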

Next, create the requirements file, requirements.txt:

aniso8601
Flask
flask-restplus
Flask-WTF
h5py
Jinja2
Keras
Keras-Applications
Keras-Preprocessing
pytz
termcolor
tensorflow
numpy
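
No versions are pinned here, so pip will pull the latest releases at build time; for reproducible builds, consider pinning the versions you trained with (pip freeze can help).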

The main Flask application files are config.py, run.py, and app/__init__.py.

#config.py
#!/usr/bin/env python
import os

APP_NAME = 'GenderPredictionAPI'

BASEDIR = os.path.abspath(os.path.dirname(__file__))

# APP DEBUG
DEBUG = True
USE_RELOADER = False

# APP SECRET
SECRET_KEY = 'change this with your own secret key'

#run.py
#!/usr/bin/env python 
from app import app 

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=9901, debug=True)

And the main Flask application script, app/__init__.py:

#!/usr/bin/env python 
import os 
os.environ['TF_CPP_MIN_LOG_LEVEL']  = "2"

import json
from datetime import datetime

from flask import Flask, request, jsonify

from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.sequence import pad_sequences
import numpy as np

#!----------------------------------------
# App Config
#!----------------------------------------
app = Flask(__name__, instance_relative_config=True)

# load config
app.config.from_object('config')
app.config.from_pyfile('application.cfg', silent=True)

# default controller 
@app.route('/', methods=['GET'])
def index():
    response = jsonify({
        'msg': 'Gender Prediction Api!',
        'path': request.path, 
        'systime': int(datetime.now().timestamp())
    })
    response.status_code = 200
    return response

# load the character-to-index dictionary saved during dataset preparation
vocab_index = {}
with open('/'.join([app.config['BASEDIR'], "app/model_weight/char_dictionary.json"]), "r") as f:
    vocab_index = json.loads(f.read())

# we load model here
gender_model = load_model('/'.join([app.config['BASEDIR'], "app/model_weight/gender_lstm_model.h5"]))


# our predict endpoint with post method
@app.route('/predict', methods=['POST'])
def predict():
    predicted_gender = 'UNK'

    # read input from the user; fall back to an empty dict if the body is not JSON
    payload = request.get_json(silent=True) or {}
    if 'name' in payload:
        name = payload['name']

        if len(name) > 32:
            response = jsonify({
                'msg': 'maximum name length is 32 characters',
                'path': request.path,
            })
            response.status_code = 412
            return response

        # encode the name the same way as during training:
        # lowercase, map each character to its index, then pad to length 32
        q_name = list(name.lower())
        test_dt = [vocab_index.get(x, 0) for x in q_name]  # map unknown chars to 0
        test_dt = pad_sequences([test_dt], maxlen=32)
        pad = np.array(test_dt[0])
        # reshape to (1, 32) and predict; the output is [P(female), P(male)]
        res = gender_model.predict(pad.reshape(1, pad.shape[0]), batch_size=1, verbose=2)[0]

        conf_score = 0.0
        if np.argmax(res) == 0:
            predicted_gender = 'Female'
            conf_score = res[0] * 100
        elif np.argmax(res) == 1:
            predicted_gender = 'Male'
            conf_score = res[1] * 100
        
        response = jsonify({
            'msg': 'success predict gender',
            'name': name,
            'gender': predicted_gender,
            'confidence_score': '{:.2f}%'.format(conf_score)
        })
        response.status_code = 200

    else:
        response = jsonify({
            'msg': 'Name required',
            'path': request.path,
        })
        response.status_code = 400

    return response
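
Note that the character dictionary and the model are loaded once, at application startup, rather than per request, so every request after startup reuses the same in-memory model.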

Lastly, copy the character dictionary char_dictionary.json and the exported model gender_lstm_model.h5 into the app/model_weight/ directory.

Then build and run the container:

docker-compose up --build

Our Flask application can now be accessed at localhost:9901.

Testing time!

curl --request POST \
  --url http://localhost:9901/predict \
  --header 'content-type: application/json' \
  --data '{"name": "Anda Perdana"}'
{
  "confidence_score": "90.79%",
  "gender": "Male",
  "msg": "success predict gender",
  "name": "Anda Perdana"
}

And another one:

curl --request POST \
  --url http://localhost:9901/predict \
  --header 'content-type: application/json' \
  --data '{"name": "Charita Utami"}'
{
  "confidence_score": "93.12%",
  "gender": "Female",
  "msg": "success predict gender",
  "name": "Charita Utami"
}
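
If you prefer testing from Python, the same request can be made with the requests library; a minimal sketch, assuming the container is running locally:

import requests

# quick check against the running container
resp = requests.post(
    'http://localhost:9901/predict',
    json={'name': 'Anda Perdana'},
)
print(resp.status_code)  # 200 on success
print(resp.json())       # gender, confidence_score, msg, name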

This is the end of our series on developing a gender prediction model with an LSTM. In summary, instead of using TensorFlow Serving, we can also deploy our model with Flask, and this may be one of the easiest ways to publish your model as a production-ready application.

The complete code is available in this repository: https://github.com/yudanta/lstm-gender-classification

Thank you!
