Creating a Secure and Smart Chatbot Using GPT-4 and Vector Database | Project Showcase

Today, businesses are looking for new ways to improve customer interactions.
Recently, I had the chance to work on a super exciting project: creating a smart chatbot using Generative AI (Gen AI) technologies like GPT-4.
This project was part of an MVP for an R&D team at a global e-commerce group.
Let me explain!

The Project

The goal was simple: create a chatbot that could answer employees’ questions using the company’s internal data. We needed to make sure all the information stayed private, so we used a secure instance on Azure to keep everything safe.

The Cool Tech We Used

GPT-4 and Friends

We based our chatbot on GPT-4, an advanced AI model known for generating awesome text. We used a special version of GPT-4 that can handle up to 8,000 tokens, which means it can process long and complex questions.
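Even 8,000 tokens fill up fast once you start stuffing retrieved documents into the prompt, so source text has to be chunked before indexing. A minimal sketch of the idea (the ~4 characters per token estimate is a rough heuristic; a real pipeline would use the model's actual tokenizer):

```python
def chunk_text(text: str, max_tokens: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks sized by a rough token estimate.

    Uses the common ~4 characters per token heuristic instead of a real
    tokenizer, so sizes are approximate.
    """
    max_chars = max_tokens * 4
    overlap_chars = overlap * 4
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back a little so neighbouring chunks share some context.
        start = end - overlap_chars
    return chunks
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from both sides.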

We also tested other language models, like Mistral and LLaMA (run locally through Ollama), to make sure the chatbot was as smart as possible.
We fine-tuned these models to understand and answer questions using data we pulled from the company’s internal Confluence pages.

Vector DB

To make the chatbot even better at finding the right answers, we used a Retrieval-Augmented Generation (RAG) setup.
This meant using a vector database to quickly search through the data and retrieve the most relevant information.
While Elasticsearch helped with indexing, vector databases like Pinecone, Milvus, and Faiss shine at fast, accurate similarity search.
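To give an idea of what the vector database is doing under the hood (a toy sketch only; in the project the heavy lifting was done by Faiss and Elasticsearch), nearest-neighbour retrieval boils down to comparing embeddings by cosine similarity:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def search(query_vec: list[float], index: list[tuple[str, list[float]]],
           top_k: int = 2) -> list[str]:
    """index: list of (doc_id, embedding). Return the top_k closest doc ids."""
    scored = sorted(index,
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]
```

Faiss does exactly this kind of comparison, just over millions of vectors with clever indexing so it stays fast.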

(Image: testing Faiss vectorisation)

Azure Services

We relied on a bunch of Azure services to build and maintain the chatbot:

  • Azure Cognitive Services: Ran the GPT-4 model.
  • Azure Machine Learning: Handled model training and kept the chatbot’s brain up-to-date.
  • Azure Cognitive Search: Helped us dig through data quickly.
  • Azure Blob Storage: Stored all our data and files.
  • Azure Functions and Azure Logic Apps: Automated the boring stuff like data extraction and model updates.
  • Azure DevOps: Managed our CI/CD pipeline, making sure everything ran smoothly.
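To show roughly how the pieces fit together (the deployment name and prompt wording here are illustrative, not the exact production code), each user question was answered by stuffing the retrieved passages into a GPT-4 chat request:

```python
def build_chat_request(question: str, passages: list[str],
                       deployment: str = "gpt-4") -> dict:
    """Assemble a chat-completions payload in the shape the Azure OpenAI
    service expects. The deployment name and system prompt are placeholders.
    """
    context = "\n\n".join(passages)
    return {
        "model": deployment,
        "messages": [
            {"role": "system",
             "content": "Answer using only the context below.\n\n" + context},
            {"role": "user", "content": question},
        ],
        # Low temperature: we want factual answers, not creative ones.
        "temperature": 0.2,
    }
```

The actual call then just POSTs this payload to the deployment's chat-completions endpoint with the instance's API key.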

As a team member, I was responsible only for the Azure Cognitive Services part.

My Fun with DevOps

Aside from building the chatbot, I also took charge of the DevOps side of the project. We used GitHub Actions, AWS IAM, and Fargate to deploy the front-end application.
Automating the deployment process meant we could roll out updates fast and keep the team focused on making the application smoother.

Making It Work

Data Extraction and Cleaning

We wrote Python scripts to pull data from Confluence, clean it up, and get it ready for the chatbot.
This was key to making sure the chatbot always had the latest info and could give accurate answers.
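The cleaning step is the least glamorous but most important part. As a simplified sketch (Confluence's REST API returns page bodies as XHTML; here we just strip the tags with the standard library):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text content of an HTML fragment, dropping all tags."""
    def __init__(self):
        super().__init__()
        self.parts: list[str] = []

    def handle_data(self, data: str) -> None:
        self.parts.append(data)

def clean_confluence_page(html_body: str) -> str:
    """Turn a Confluence page body into tidy plain text for indexing."""
    parser = TextExtractor()
    parser.feed(html_body)
    # Collapse runs of whitespace so the chunker sees clean text.
    return " ".join(" ".join(parser.parts).split())
```

In production you would also want to handle Confluence macros and tables, but the principle is the same: plain, whitespace-normalised text in, embeddings out.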

Training the Model

We automated the process of training the chatbot whenever new data came in.
This way, the chatbot always stayed sharp and could answer questions about newly added or updated pages without missing a beat.
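The change-detection idea behind this is simple (field names here are illustrative, though Confluence does expose a version number per page): compare the last indexed snapshot with the current one and refresh only what changed.

```python
def pages_to_refresh(previous: dict[str, int],
                     current: dict[str, int]) -> set[str]:
    """Compare {page_id: version} snapshots and return the ids of pages
    that are new or whose version increased since the last indexing run.
    """
    return {
        page_id for page_id, version in current.items()
        if previous.get(page_id, 0) < version
    }
```

Only those pages then go through extraction, cleaning, and re-embedding, which keeps the scheduled job cheap.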

User Experience

We wanted the chatbot to be super user-friendly.
So, thanks to our UX designer, we added features like dark mode and made sure the interface was responsive. We built it with Next.js, Tailwind CSS, and tRPC. The result? A smooth, fun experience for anyone using the chatbot.

Security and Monitoring

Security was a top priority. We followed best practices to keep data safe and monitored the use of AI tokens to manage costs.
Using GitHub Actions for our CI/CD pipeline, we could push updates and fixes quickly without disrupting the service.
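Token monitoring can be as simple as accumulating the usage numbers each API response reports and multiplying by the per-token price. A small sketch (the prices below are placeholder values, not real pricing):

```python
class TokenCostTracker:
    """Accumulate prompt/completion token counts and estimate spend.

    Prices are illustrative placeholders, expressed per 1,000 tokens.
    """
    def __init__(self, prompt_price: float = 0.03,
                 completion_price: float = 0.06):
        self.prompt_price = prompt_price
        self.completion_price = completion_price
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        """Call once per API response with its reported usage counts."""
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens

    def estimated_cost(self) -> float:
        """Estimated spend so far, in the same currency as the prices."""
        return (self.prompt_tokens / 1000 * self.prompt_price
                + self.completion_tokens / 1000 * self.completion_price)
```

Feeding these totals into a dashboard made it easy to spot a runaway prompt before it became a runaway bill.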

Wrapping Up

Working on this project was absolute fun! I learned a lot about RAG and AI in an R&D team.
The combination of GPT-4, Vector DB, and Azure services made it possible to build a chatbot that was both smart and secure.
