diff --git a/source/_integrations/google_generative_ai_conversation.markdown b/source/_integrations/google_generative_ai_conversation.markdown
index 71a3a217c3c..836f9efe9a0 100644
--- a/source/_integrations/google_generative_ai_conversation.markdown
+++ b/source/_integrations/google_generative_ai_conversation.markdown
@@ -52,6 +52,30 @@ Maximum Tokens to Return in Response:
### Service `google_generative_ai_conversation.generate_content`
+
+This service is not tied to any integration entry, so it will not use the model, prompt, or any of the other settings from your options. If you only want to pass text, use the `conversation.process` service instead.
+
Allows you to ask Gemini Pro or Gemini Pro Vision to generate content from a prompt consisting of text and optionally images.
This service populates [response data](/docs/scripts/service-calls#use-templates-to-handle-response-data) with the generated content.
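+
+For example, a script can send a camera snapshot to the service and use the generated text in a later step. This is a minimal sketch: the `prompt` and `image_filename` fields, the `text` key of the response data, and the notify target are assumptions here; check the service description in Developer Tools for the exact schema of your installed version.
+
+{% raw %}
+
+```yaml
+# Hypothetical example; field names and the response key are assumptions.
+script:
+  describe_doorbell_snapshot:
+    sequence:
+      - service: google_generative_ai_conversation.generate_content
+        data:
+          prompt: "Briefly describe what you see in this image."
+          image_filename: "/config/www/doorbell_snapshot.jpg"
+        response_variable: generated_content
+      - service: notify.notify
+        data:
+          message: "{{ generated_content['text'] }}"
+```
+
+{% endraw %}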