
Control your Generative AI costs with the Vertex AI API’s context caching


Note: This blog has two authors.

What is context caching?

Vertex AI is a Google Cloud machine learning (ML) platform that, among other things, provides access to a collection of generative AI models, including the models commonly known as the Gemini models. When you interact with these models, you provide them with all of the information relevant to your inquiry. The Gemini models accept information in multiple formats, including text, video, and audio.
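To make this concrete, here is a minimal sketch of a multimodal request sent to a Gemini model through the Vertex AI SDK for Python. The project ID, region, model name, and video URI are placeholder assumptions rather than values taken from this blog.

```python
# Minimal sketch: send a multimodal prompt (text + video) to a Gemini model
# via the Vertex AI SDK for Python. The project, region, model name, and
# video URI below are placeholder assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-pro-002")

# Each request carries all of the context the model needs: the prompt text
# plus any referenced files (here, a video stored in Cloud Storage).
response = model.generate_content([
    Part.from_uri("gs://your-bucket/sample-video.mp4", mime_type="video/mp4"),
    "Summarize the key events in this video.",
])

print(response.text)
```

Because every piece of context is resent with each request like this, large or frequently reused inputs can drive up token costs, which is the problem context caching is meant to address.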