In recent years, chatbots have become increasingly popular for providing customer service, answering questions, and engaging with users on websites, messaging platforms, and social media. Suppose you offer a service and want to build a chatbot for it. Traditionally, you would collect user question data and train a model from scratch — in the early days, most companies used BERT-style models for this. Today, however, we have powerful LLMs like Gemini Pro, whose capabilities have become increasingly prominent.
Gemini is a family of AI models developed by Google, focused on advances in language understanding and processing. Part of Google’s suite of machine learning and artificial intelligence tools, it is designed to handle complex tasks such as natural language understanding, language translation, and content generation. The technology is integrated into various Google products and services to enable more intuitive and intelligent interactions.
Gemini has three variants: Gemini Ultra, Gemini Pro, and Gemini Nano.
Gemini Pro is like Gemini’s big sibling: if Gemini is the clever language wizard, then Gemini Pro is the wizard with upgraded powers. In simple terms, Gemini Pro is an advanced version of Gemini with enhanced capabilities.
| Feature | Gemini | Gemini Pro |
|---|---|---|
| Model size | Large | Extra large |
| Training data | Massive dataset of text and code | Even larger dataset of text and code |
| Performance | Good | Excellent |
| Efficiency | Good | Excellent |
| Applications | Text generation, machine translation, summarization, question answering | Text generation, machine translation, summarization, question answering, chatbot development |
The Gemini Pro API, provided by Google, empowers developers to integrate advanced language models into their applications. Leveraging state-of-the-art natural language processing capabilities, Gemini Pro enables us to create dynamic and context-aware chatbots that respond intelligently to user queries.
Let’s walk through the steps for building a conversational Q&A chatbot using the free Gemini Pro API.
Store your Google API key in a `.env` file:

GOOGLE_API_KEY=your_google_api_key
Create the project directory and a conda virtual environment:

mkdir chatbot_project
cd chatbot_project
conda create -p ./venv python=3.11 -y
conda activate ./venv
Create a requirements.txt file listing the dependencies:

touch requirements.txt
streamlit
google-generativeai
python-dotenv
Save the packages above to requirements.txt, then install them:
pip install -r requirements.txt
Now create an app.py file and paste in the code below:

touch app.py
## loading all the environment variables
from dotenv import load_dotenv
load_dotenv()

import os
import streamlit as st
import google.generativeai as genai

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

## function to load the Gemini Pro model and get responses
model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

def get_gemini_response(question):
    response = chat.send_message(question, stream=True)
    return response

## initialize our Streamlit app
st.set_page_config(page_title="Q&A Demo")
st.header("Gemini LLM Application")

# Initialize session state for chat history if it doesn't exist
if 'chat_history' not in st.session_state:
    st.session_state['chat_history'] = []

user_input = st.text_input("Input: ", key="input")
submit = st.button("Ask the question")

if submit and user_input:
    response = get_gemini_response(user_input)
    # Add the user query and response to the session state chat history
    st.session_state['chat_history'].append(("You", user_input))
    st.subheader("The Response is")
    for chunk in response:
        st.write(chunk.text)
        st.session_state['chat_history'].append(("Bot", chunk.text))

st.subheader("The Chat History is")
for role, text in st.session_state['chat_history']:
    st.write(f"{role}: {text}")
The first lines import the load_dotenv function from the dotenv module and call it to load environment variables from the .env file — including GOOGLE_API_KEY, which is then read with os.getenv and passed to genai.configure.
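Under the hood, load_dotenv essentially parses KEY=VALUE lines and places them into the process environment, without overriding variables that are already set. Here is a minimal, illustrative sketch of that behaviour using only the standard library (the real python-dotenv also handles quoting, comments, and interpolation; `load_env_file` and `demo.env` are hypothetical names for this demo):

```python
import os

def load_env_file(path: str) -> None:
    """Parse simple KEY=VALUE lines into os.environ (illustrative only)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # setdefault mirrors load_dotenv's default of not overriding existing vars
            os.environ.setdefault(key.strip(), value.strip())

# Demo: write a throwaway .env-style file and load it
with open("demo.env", "w") as f:
    f.write("GOOGLE_API_KEY=your_google_api_key\n")

os.environ.pop("GOOGLE_API_KEY", None)  # start clean for the demo
load_env_file("demo.env")
print(os.getenv("GOOGLE_API_KEY"))  # -> your_google_api_key
os.remove("demo.env")
```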
Finally, launch the app:

streamlit run app.py
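It helps to know why the chat history keeps growing across questions: Streamlit reruns the whole script on every interaction, and st.session_state is the one object that survives those reruns, behaving much like a persistent dictionary. A plain-Python sketch of the app's accumulation logic, with a dict standing in for st.session_state and a list of strings standing in for the streamed response chunks (`handle_submit` is a hypothetical name for this demo):

```python
# A plain dict standing in for st.session_state, which persists across reruns
session_state = {}

def handle_submit(user_input, chunks):
    """Mimic one script rerun: record the question, then each streamed chunk."""
    if 'chat_history' not in session_state:
        session_state['chat_history'] = []
    session_state['chat_history'].append(("You", user_input))
    for text in chunks:
        session_state['chat_history'].append(("Bot", text))

# Two simulated "reruns" of the script
handle_submit("What is Gemini?", ["Gemini is a family ", "of Google AI models."])
handle_submit("Who makes it?", ["Google."])

for role, text in session_state['chat_history']:
    print(f"{role}: {text}")
```

Because each streamed chunk is appended separately, one long answer shows up as several "Bot" entries in the history — exactly what you will see in the running app.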
Fig: UI of the chatbot conversation
Fig: UI of the chatbot conversation (history)
This guide introduced you to the powerful Gemini Pro API for creating smart chatbots. We covered setting up a free API key, organizing the project in a virtual environment, and building a chatbot that can converse and answer questions. Now equipped with these skills, you can bring your own chatbot ideas to life. Happy coding!