Building Generative AI Applications on Amazon Bedrock with AWS SDK for Python

Solution Overview

The solution uses an AWS SDK for Python (Boto3) script that invokes Anthropic’s Claude 3 Sonnet on Amazon Bedrock. The script sends a prompt as input, and the FM generates an output in response. The following diagram illustrates the solution architecture.

Prerequisites

Before you invoke the Amazon Bedrock API, make sure you have the following:

  • An AWS account with access to Amazon Bedrock
  • Model access granted to Anthropic’s Claude 3 Sonnet in the Amazon Bedrock console
  • Python installed, along with the Boto3 library (pip install boto3)
  • AWS credentials configured with permissions to invoke Amazon Bedrock models

Deploy the Solution

After you complete the prerequisites, you can start using Amazon Bedrock. Begin by scripting with the following steps:

  1. Import the required libraries:

# Import the required libraries
import boto3
import json

  2. Set up the Boto3 client to use the Amazon Bedrock runtime and specify the AWS Region:

# Set up the Amazon Bedrock client
bedrock_client = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"
)

  3. Define the model to invoke using its model ID. In this example, we use Anthropic’s Claude 3 Sonnet on Amazon Bedrock:

# Define the model ID
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

  4. Assign a prompt, which is the message you will use to interact with the FM at invocation:

# Prepare the input prompt.
prompt = "Hello, how are you?"

Prompt engineering techniques can improve FM performance and enhance results.
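As a sketch of one such technique, the hypothetical helper below assembles a structured prompt from an instruction, optional context, and few-shot examples. The template format is an illustration of our own, not a Bedrock or Anthropic requirement:

```python
def build_prompt(task, context="", examples=None):
    # Hypothetical helper: combine an instruction, optional context,
    # and few-shot examples into a single structured prompt string.
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context:\n{context}")
    for example in examples or []:
        parts.append(f"Example:\n{example}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Summarize the following text in two sentences.",
    context="Amazon Bedrock is a fully managed service for foundation models.",
)
```

Separating instructions, context, and examples this way tends to make prompts easier to iterate on than a single free-form string.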

Processing the Payload

Before invoking the Amazon Bedrock model, we need to define a payload: a set of instructions and parameters that guides the model’s generation process. The payload structure varies depending on the chosen model. In this example, we use Anthropic’s Claude 3 Sonnet on Amazon Bedrock. Think of the payload as a blueprint that gives the model the context and parameters it needs to generate the desired text from your specific prompt. Let’s break down the key elements within this payload:

  • anthropic_version – This specifies the version of the Anthropic Messages API that Amazon Bedrock uses when invoking the model.
  • max_tokens – This sets a limit on the total number of tokens the model can generate in its response. Tokens are the smallest meaningful unit of text (word, punctuation, subword) processed and generated by large language models (LLMs).
  • temperature – This parameter controls the level of randomness in the generated text. Higher values lead to more creative and potentially unexpected outputs, and lower values promote more conservative and consistent results.
  • top_k – This defines the number of most probable candidate words considered at each step during the generation process.
  • top_p – This controls nucleus sampling: the model considers only the smallest set of candidate tokens whose cumulative probability reaches this threshold. Lower values restrict sampling to the most probable tokens, whereas higher values allow for more diverse and potentially surprising choices.
  • messages – This is an array containing individual messages for the model to process.
  • role – This defines the sender’s role within the message (the user for the prompt you provide).
  • content – This array holds the actual prompt text itself, represented as a “text” type object.
  1. Define the payload as follows:

payload = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 2048,
    "temperature": 0.9,
    "top_k": 250,
    "top_p": 1,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt
                }
            ]
        }
    ]
}
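If you invoke the model with different prompts or settings, it can be convenient to wrap this structure in a small function. The helper below is a sketch of our own (the defaults mirror the payload above); it is not part of the Boto3 or Bedrock API:

```python
def build_payload(prompt, max_tokens=2048, temperature=0.9, top_k=250, top_p=1):
    # Assemble a Claude Messages API payload for Amazon Bedrock,
    # with the sampling parameters exposed as keyword arguments.
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_k": top_k,
        "top_p": top_p,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }

payload = build_payload("Hello, how are you?", temperature=0.5)
```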

Invoking the Model

You have set the parameters and chosen the FM you want to interact with. Now you send a request to Amazon Bedrock, providing the model ID and the payload that you defined:

# Invoke the Amazon Bedrock model
response = bedrock_client.invoke_model(
    modelId=model_id,
    body=json.dumps(payload)
)
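In practice, invoke_model can fail transiently (for example, when requests are throttled). The sketch below is a hypothetical retry helper of our own; for brevity it catches a generic Exception, whereas production code would catch botocore.exceptions.ClientError and retry only on throttling-related error codes:

```python
import time

def invoke_with_retries(client, model_id, body, max_attempts=3, base_delay=1.0):
    # Retry the invocation with exponential backoff (1s, 2s, 4s, ...).
    # In production, catch botocore.exceptions.ClientError and inspect
    # the error code instead of catching every exception.
    for attempt in range(max_attempts):
        try:
            return client.invoke_model(modelId=model_id, body=body)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

You would call it as invoke_with_retries(bedrock_client, model_id, json.dumps(payload)); because the helper only assumes an object with an invoke_model method, it is also easy to exercise with a stub client in tests.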

Processing the Response

After the request is processed, you can display the result of the generated text from Amazon Bedrock:

# Process the response
result = json.loads(response["body"].read())
generated_text = "".join([output["text"] for output in result["content"]])
print(f"Response: {generated_text}")
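Because the response body is a plain dictionary once parsed, the extraction step can be factored into a small pure function and tested without calling AWS. The helper below is a sketch of our own; it also skips any content blocks that are not of type "text", which the one-liner above does not:

```python
def extract_text(result):
    # Join the "text" blocks in a Claude Messages API response body,
    # skipping any non-text content blocks.
    return "".join(
        block["text"]
        for block in result.get("content", [])
        if block.get("type") == "text"
    )
```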

Clean Up

When you’re done using Amazon Bedrock, clean up temporary resources like IAM users and Amazon CloudWatch logs to avoid unnecessary charges. Cost considerations depend on usage frequency, chosen model pricing, and resource utilization while the script runs. See Amazon Bedrock Pricing for pricing details and cost-optimization strategies like selecting appropriate models, optimizing prompts, and monitoring usage.

Conclusion

In this post, we demonstrated how to programmatically interact with Amazon Bedrock FMs using Boto3. We explored invoking a specific FM and processing the generated text, showcasing the potential for developers to use these models in their applications for a variety of use cases, such as:

  • Text generation – Generate creative content like poems, scripts, and musical pieces, or even code in different programming languages
  • Code completion – Enhance developer productivity by suggesting relevant code snippets based on existing code or prompts
  • Data summarization – Extract key insights and generate concise summaries from large datasets
  • Conversational AI – Develop chatbots and virtual assistants that can engage in natural language conversations

About the Author

Merlin Naidoo is a Senior Technical Account Manager at AWS with over 15 years of experience in digital transformation and innovative technical solutions. His passion is connecting with people from all backgrounds and leveraging technology to create meaningful opportunities that empower everyone. When he’s not immersed in the world of tech, you can find him taking part in active sports.

Frequently Asked Questions

Q: What is Amazon Bedrock?

A: Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API.

Q: What is Boto3?

A: Boto3 is the AWS SDK for Python, which allows developers to interact with AWS services, including Amazon Bedrock.

Q: How do I invoke an FM using Boto3?

A: You can invoke an FM using Boto3 by defining the model ID, payload, and other parameters, and then sending a request to the Amazon Bedrock API.
