Dealing with Lack of Computational Resources in NLP
For a while I have been thinking about sharing my personal experience of how I, as a student, have tried to work around the lack of computational resources (no proper GPU, limited memory and storage, high latency) when doing NLP tasks. So I decided to collect everything I have learned so far into a blog post. Disclaimer: this post is not meant to be an exhaustive list of all the options available out there, but rather my own personal journey throughout my education so far....
Simple Guide to Building and Deploying Transformer Models with Hugging Face
In this post, I want to talk about how to use Hugging Face to build and deploy an NLP model as a web application. To do this, I'm going to take the distilled Turkish BERT model and fine-tune it on a Turkish review dataset. Then I'm going to use the Streamlit library and Hugging Face Spaces to showcase the final model so that other people can use it without writing any code....
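As a rough illustration of the serving side, here is a minimal Streamlit sketch that wraps a fine-tuned checkpoint in a text-classification pipeline. The model id used below is an assumption for illustration; in practice you would point it at your own fine-tuned Turkish BERT checkpoint.

```python
# Minimal sketch: serving a fine-tuned classifier with Streamlit.
import streamlit as st
from transformers import pipeline

@st.cache_resource  # load the model once and reuse it across reruns
def load_classifier():
    return pipeline(
        "text-classification",
        model="dbmdz/distilbert-base-turkish-cased",  # placeholder; use your fine-tuned checkpoint
    )

st.title("Turkish Review Sentiment")
text = st.text_area("Enter a review:")
if st.button("Classify") and text:
    result = load_classifier()(text)[0]
    st.write(f"Label: {result['label']} (score: {result['score']:.3f})")
```

Pushing a small app like this to a Hugging Face Space is then mostly a matter of adding the script and a requirements file to the Space repository.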
Transformers as Feature Extractors
Transformers have been among the most widely used models for NLP tasks since 2017. However, because of their high computational resource requirements and maintenance overhead, they are not always the most efficient choice, especially for simple tasks such as sentiment classification. In such cases, one alternative is the feature-based approach, where we use a transformer as a feature extractor for a simple model....
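To make the idea concrete, here is a minimal sketch of the feature-based approach: the transformer stays frozen, its hidden states are pooled into fixed-size vectors, and a lightweight classifier is trained on top. The model name and the toy data below are illustrative assumptions, not part of the original post.

```python
# Minimal sketch: a frozen transformer as a feature extractor for a simple classifier.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # illustrative choice
model = AutoModel.from_pretrained("distilbert-base-uncased")
model.eval()

def embed(texts):
    # Mean-pool the last hidden states into one fixed-size vector per text.
    with torch.no_grad():
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**batch).last_hidden_state        # (batch, seq_len, dim)
        mask = batch["attention_mask"].unsqueeze(-1)     # ignore padding tokens
        pooled = (hidden * mask).sum(1) / mask.sum(1)
    return pooled.numpy()

# Toy example: train a logistic regression on the extracted features.
texts = ["great movie", "terrible movie"]
labels = [1, 0]
clf = LogisticRegression().fit(embed(texts), labels)
print(clf.predict(embed(["what a wonderful film"])))
```

Because only the small classifier is trained, this setup needs far less memory and compute than fine-tuning the whole transformer.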