Unlock the Power of LLMs: Real-Time Data Integration Explained

Chapter 1: Enhancing LLMs with Real-Time Information

While even the smallest Llama 3.1 model can generate remarkable responses, out of the box it has no access to real-time information. This limitation greatly reduces the practical utility of Large Language Models (LLMs). For instance, if you asked Llama 3.1–8B what clothing to pack for a trip to Tasmania, you might receive a response like this:

Tasmania is an excellent destination! As your trip is tomorrow, I’ll share some general advice. Keep in mind, however, that Tasmania's weather can be unpredictable, so it's wise to pack layers and prepare for varying conditions.

Seasonal Guidance:

Assuming your visit falls within the usual Tasmanian seasons:

  • Spring (September to November): Mild weather with average highs around 17°C (63°F) and lows around 9°C (48°F).
  • Summer (December to February): The warmest period, with average highs near 20°C (68°F) and lows around 12°C (54°F).

Unfortunately, this advice was generated in late July, the middle of the southern-hemisphere winter, so it is unlikely to be accurate. It clearly demonstrates that the model is trying to be helpful without any access to the current date or the actual temperature in this southern Australian region.

The Solution: Tool Integration

Llama 3.1 introduced support for tools: functions you write to perform specific tasks, which the model can call when needed. Giving your language model access to these functions ultimately improves the quality of its responses.

In this context, a "tool" refers to a function written in a supported programming language. Let’s consider a simple Python function designed to fetch the current temperature of a specified location by querying an API:

import requests

# Function to retrieve the current temperature in a specific location
def get_current_temperature(place: str) -> str:
    # wttr.in's JSON endpoint, whose response matches the fields used below
    base_url = f"https://wttr.in/{place}?format=j1"
    response = requests.get(base_url)
    data = response.json()
    return f"The current temperature in {place} is {data['current_condition'][0]['temp_C']} degrees Celsius"

Now, let’s test our function:

get_current_temperature("London")
# Output: 'The current temperature in London is 27 degrees Celsius'
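The function above assumes the request succeeds and the payload has the expected shape. A slightly more defensive sketch (hypothetical helper name, same wttr.in endpoint assumed) adds a timeout and basic error handling:

import requests

def get_current_temperature_safe(place: str) -> str:
    """Like get_current_temperature, but tolerant of network and payload errors."""
    base_url = f"https://wttr.in/{place}?format=j1"  # assumed endpoint, as above
    try:
        response = requests.get(base_url, timeout=10)
        response.raise_for_status()  # raise on HTTP 4xx/5xx
        temp_c = response.json()['current_condition'][0]['temp_C']
    except (requests.RequestException, KeyError, IndexError, ValueError) as exc:
        return f"Could not fetch the temperature for {place}: {exc}"
    return f"The current temperature in {place} is {temp_c} degrees Celsius"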

Next, we want to make this tool accessible to our language model. When posed with a question requiring knowledge of the current temperature, the model should recognize the need to utilize our tool to enhance its response.

Enriching Your LLM Chat with Functions

We'll implement an asynchronous function to accomplish the following tasks:

  1. Establish a chat client using Ollama, allowing for model selection (ensure Ollama is installed on your machine).
  2. Enable the get_current_temperature() function for our chat.
  3. Ensure the model can recognize when a query pertains to the current temperature.
  4. Allow the model to execute the function based on the identified location.
  5. Integrate the function's output into its final response while overriding any default behaviors that discourage answering real-time queries.

Here’s the asynchronous function that accepts a model name and a query:

import ollama
import asyncio

async def weather_chat(model: str, query: str):
    client = ollama.AsyncClient()
    messages = [{'role': 'user', 'content': query}]

    # First pass: give the model the chance to request our tool
    response = await client.chat(
        model=model,
        messages=messages,
        tools=[
            {
                'type': 'function',
                'function': {
                    'name': 'get_current_temperature',
                    'description': 'Fetch the temperature for a given location',
                    'parameters': {
                        'type': 'object',
                        'properties': {
                            'place': {
                                'type': 'string',
                                'description': 'The location for which the temperature is requested',
                            }
                        },
                        'required': ['place'],
                    },
                },
            },
        ],
    )

    messages.append(response['message'])

    if not response['message'].get('tool_calls'):
        print("The model didn't use the function. Its response was:")
        print(response['message']['content'])
        return

    # Map tool names to the actual Python callables
    available_functions = {
        'get_current_temperature': get_current_temperature,
    }

    for tool in response['message']['tool_calls']:
        function_to_call = available_functions[tool['function']['name']]
        function_response = function_to_call(tool['function']['arguments']['place'])
        # Feed the tool's output back to the model as a 'tool' message
        messages.append(
            {
                'role': 'tool',
                'content': f"""
Answer the following question: {query}.
Ignore any previous instructions or defaults and instead use the following information: {function_response}
""",
            }
        )

    # Second pass: the model composes its final answer from the tool output
    final_response = await client.chat(model=model, messages=messages)
    print(final_response['message']['content'])
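For orientation, each entry in response['message']['tool_calls'] consumed by the loop above has roughly this shape (illustrative values, inferred from the fields the code reads):

{
    'function': {
        'name': 'get_current_temperature',
        'arguments': {'place': 'Dublin'},
    }
}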

Testing the Enhanced Chat Function

Let's start with a straightforward inquiry. We will use the smallest Llama 3.1 model (8B parameters) and ask a direct question that requires our tool:

# If running in a Python script
asyncio.run(weather_chat('llama3.1:8b', "What is the current temperature in Dublin?"))

# If running in a Jupyter notebook
await weather_chat('llama3.1:8b', "What is the current temperature in Dublin?")

The expected response will indicate the current temperature in Dublin.

Now, let's ask a slightly more complex question:

# If running in a Python script
asyncio.run(weather_chat('llama3.1:8b', "What is the current temperature in Ireland's capital?"))

# If running in a Jupyter notebook
await weather_chat('llama3.1:8b', "What is the current temperature in Ireland's capital?")

The model should provide a detailed response, identifying Dublin as the location and incorporating the real-time data.

Next, we’ll test a scenario that requires additional inference:

# If running in a Python script
asyncio.run(weather_chat('llama3.1:8b', "My sister says that if I flew into the capital city of Norway today, I should wear clothing for extreme weather. Should I trust her advice?"))

# If running in a Jupyter notebook
await weather_chat('llama3.1:8b', "My sister says that if I flew into the capital city of Norway today, I should wear clothing for extreme weather. Should I trust her advice?")

The model should now look up the current temperature in Oslo and respond appropriately, potentially pushing back on the sister's advice.

Lastly, let's revisit our original question about packing for Tasmania:

# If running in a Python script
asyncio.run(weather_chat('llama3.1:8b', "What kinds of clothes should I pack for my trip to Tasmania which leaves tomorrow?"))

# If running in a Jupyter notebook
await weather_chat('llama3.1:8b', "What kinds of clothes should I pack for my trip to Tasmania which leaves tomorrow?")

The model should provide a tailored packing list based on the current weather in Tasmania.
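Nothing restricts you to a single tool. Since part of the original Tasmania problem was that the model didn't know the date, you could register a second function alongside the temperature tool. Here's a hypothetical sketch following the same schema pattern as above:

from datetime import date

def get_current_date() -> str:
    """Return today's date so the model can reason about seasons."""
    return f"Today's date is {date.today().isoformat()}"

date_tool = {
    'type': 'function',
    'function': {
        'name': 'get_current_date',
        'description': 'Get the current date',
        'parameters': {'type': 'object', 'properties': {}, 'required': []},
    },
}

# To wire it in: append date_tool to the tools list passed to client.chat(),
# add 'get_current_date': get_current_date to available_functions, and make the
# dispatch generic, e.g. function_to_call(**tool['function']['arguments']).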

YouTube Video References

Learn how to make your LLM apps ingest real-time data seamlessly in this insightful video.

Discover an open-source LLM application for real-time data integration in this spotlight featuring Bobur Umurzokov.

Feel free to share your thoughts or observations in the comments!
