8 years experience | 20 endorsements
* My exposure to NLP started at university, during an internship at (2015-2016). I mainly worked on insurance products, building social media analytics for them. As part of this dashboard I developed a sentiment analysis view and another for tracking active tweets containing company hashtags. We classified tweets into service tickets, celebrations, or emergencies, which helped companies work with their social media effectively.
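A minimal sketch of that triage idea, assuming a simple keyword-based router; the hashtag, keyword lists, and function name are illustrative, not the original system:

```python
# Hypothetical tweet-triage sketch: route tweets carrying the company hashtag
# into "emergency", "service ticket", or "celebration" buckets.
# Keyword lists and the hashtag below are invented for illustration.

EMERGENCY_WORDS = {"outage", "urgent", "down", "emergency"}
SERVICE_WORDS = {"claim", "refund", "help", "policy", "issue"}

def triage_tweet(text: str, company_hashtag: str = "#acmeinsure") -> str:
    """Classify a tweet mentioning the company hashtag into a workflow bucket."""
    lowered = text.lower()
    if company_hashtag not in lowered:
        return "ignore"  # no company mention, nothing to route
    words = set(lowered.split())
    if words & EMERGENCY_WORDS:
        return "emergency"
    if words & SERVICE_WORDS:
        return "service ticket"
    return "celebration"  # default bucket for positive/neutral mentions

print(triage_tweet("My car claim is stuck, please help #AcmeInsure"))  # service ticket
```

A production version would replace the keyword sets with a trained classifier, but the routing contract stays the same.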
* My next touchpoint with NLP was at SAP, a year after my initial stint. At SAP, I got to build and use deep learning models. I built models to match candidates and resumes using Word2Vec embeddings, then prioritised the resumes by WMD (Word Mover's Distance). Similarly, for candidates looking for jobs, we built job matching; this was productionized using TF-IDF, as WMD has O(n^3) complexity. I also experimented with Siamese networks and Q&A-style models during this phase.
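The production path described above can be sketched in plain Python: rank resumes against a job posting with TF-IDF vectors and cosine similarity (the cheap substitute for cubic-cost WMD). The toy documents below are made up:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {term: tf-idf weight} dict per document (smoothed idf)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: (c / len(toks)) * (math.log((1 + n) / (1 + df[t])) + 1)
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy job posting and resumes, invented for illustration.
job = "python machine learning engineer nlp"
resumes = ["senior python nlp engineer", "sales manager retail"]
vecs = tfidf_vectors([job] + resumes)
scores = [cosine(vecs[0], v) for v in vecs[1:]]
best = max(range(len(resumes)), key=lambda i: scores[i])
print(resumes[best])  # senior python nlp engineer
```

Cosine over TF-IDF vectors is O(n) per pair in the shared vocabulary, which is why it scales where exact WMD does not.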
* On the side, I worked on projects applying the latest findings in NLP: BERT, attention models, ELMo, and NER tagging models, as well as APIs from OpenAI, AWS, and Google.
* In 2021, I got an opportunity to leverage my NLP skills once more, first for text extraction from log books and daily reports on container vessels. In the second phase, I tried to build recommendations based on the vessel engine parameters and reports from the crew.
* In my current startup we are demoing the potential of NLP in the insurance industry. We build tools for claim prioritisation that take user details from policy documents and match those items against the claims. We also detect fraud in the claims process by matching hospital records against other details, such as patients' disease descriptions and their billing details.
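One way to sketch that fraud cross-check, assuming a lookup table of procedures expected per diagnosis; the table, field names, and threshold are invented for illustration, not our actual system:

```python
# Hypothetical claim cross-check: flag a claim for review when too few of the
# billed items fit the procedures expected for the recorded diagnosis.
# The diagnosis-to-procedure table and threshold are made up for this sketch.

EXPECTED_PROCEDURES = {
    "fracture": {"x-ray", "cast", "consultation"},
    "influenza": {"consultation", "lab test"},
}

def flag_claim(diagnosis: str, billed_items: set, threshold: float = 0.5) -> bool:
    """Return True (flag for manual review) when the billed items look inconsistent."""
    expected = EXPECTED_PROCEDURES.get(diagnosis.lower())
    if expected is None or not billed_items:
        return True  # unknown diagnosis or empty bill: route to manual review
    overlap = len(billed_items & expected) / len(billed_items)
    return overlap < threshold

print(flag_claim("fracture", {"x-ray", "cast"}))  # False
```

In practice the exact-match table would be replaced by embedding-based matching of free-text disease descriptions, but the flagging logic is the same shape.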
* In recent years I have been focusing increasingly on LLMs and conversational models, not only reusing pretrained models like OpenAI GPT-2/3, but also training our own models.