google-gemini / deprecated-generative-ai-swift

This SDK is now deprecated; use the unified Firebase SDK instead.


Top Related Projects

Stable Diffusion with Core ML on Apple Silicon

Swift for TensorFlow

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.

Quick Overview

The google-gemini/generative-ai-swift repository is an official Swift SDK for Google's Generative AI models, including Gemini. It provides a convenient way for iOS and macOS developers to integrate Google's powerful AI capabilities into their Swift applications, enabling features like text generation, image analysis, and multimodal interactions.

Pros

  • Official SDK from Google, ensuring reliability and up-to-date support
  • Seamless integration with Swift and Apple platforms (iOS, macOS)
  • Supports various Gemini models, including text, vision, and multimodal capabilities
  • Well-documented with clear examples and usage guidelines

Cons

  • Limited to Google's Generative AI models, not a general-purpose AI library
  • Requires API key and potential usage costs for production applications
  • May have limitations based on Google's API policies and quotas
  • Relatively new, so the ecosystem and community support are still growing

Code Examples

  1. Initializing the GenerativeAI client:

import GoogleGenerativeAI

let apiKey = "YOUR_API_KEY"
let config = GenerationConfig(temperature: 0.9, topP: 1, topK: 1, maxOutputTokens: 2048)
let model = GenerativeModel(name: "gemini-pro", apiKey: apiKey, generationConfig: config)

  2. Generating text with the Gemini model:

let prompt = "Write a short story about a robot learning to paint."
let response = try await model.generateContent(prompt)
if let text = response.text {
    print(text)
}

  3. Analyzing an image with a vision-capable Gemini model:

// Image input requires a vision-capable model such as "gemini-pro-vision".
let visionModel = GenerativeModel(name: "gemini-pro-vision", apiKey: apiKey)
let image = UIImage(named: "example.jpg")!
let response = try await visionModel.generateContent(image, "Describe what you see in this image.")
if let description = response.text {
    print(description)
}
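For longer responses, the SDK also exposes a streaming variant, generateContentStream, which yields partial responses as the model produces them. A minimal sketch, assuming the same model name and a placeholder API key as in the examples above:

```swift
import GoogleGenerativeAI

// Streams partial responses instead of waiting for the full completion.
let model = GenerativeModel(name: "gemini-pro", apiKey: "YOUR_API_KEY")
let stream = model.generateContentStream("Write a haiku about Swift concurrency.")
for try await chunk in stream {
    // Each chunk carries an optional text fragment; print fragments as they arrive.
    if let text = chunk.text {
        print(text, terminator: "")
    }
}
```

Because the stream is an AsyncThrowingStream, this must run inside an async context and errors surface through the `for try await` loop.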

Getting Started

  1. Install the SDK using Swift Package Manager. Add the following to your Package.swift file:

dependencies: [
    .package(url: "https://github.com/google/generative-ai-swift", from: "0.1.0")
]

  2. Import the library in your Swift file:

import GoogleGenerativeAI

  3. Initialize the client with your API key and start using the Generative AI features:

let apiKey = "YOUR_API_KEY"
let model = GenerativeModel(name: "gemini-pro", apiKey: apiKey)
let response = try await model.generateContent("Hello, Gemini!")
print(response.text ?? "No response")
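The SDK also supports multi-turn conversations via startChat, which keeps message history on your behalf. A minimal sketch, assuming the same placeholder API key as above:

```swift
import GoogleGenerativeAI

// Multi-turn chat: the chat session accumulates history across sendMessage calls.
let model = GenerativeModel(name: "gemini-pro", apiKey: "YOUR_API_KEY")
let chat = model.startChat()

let first = try await chat.sendMessage("Hello, I have two dogs in my house.")
print(first.text ?? "")

// The follow-up question relies on the history kept by the chat session.
let second = try await chat.sendMessage("How many paws are in my house?")
print(second.text ?? "")
```

Each sendMessage call sends the full accumulated history, so the model can resolve references like "my house" from earlier turns.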

Competitor Comparisons

Stable Diffusion with Core ML on Apple Silicon

Pros of ml-stable-diffusion

  • Focuses on stable diffusion models, offering specialized image generation capabilities
  • Optimized for Apple devices, leveraging Core ML for efficient performance
  • Provides a comprehensive implementation of the stable diffusion pipeline

Cons of ml-stable-diffusion

  • Limited to image generation tasks, unlike the broader AI capabilities of generative-ai-swift
  • Requires more domain-specific knowledge to use effectively
  • May have a steeper learning curve for developers new to stable diffusion concepts

Code Comparison

ml-stable-diffusion:

let pipeline = try StableDiffusionPipeline(resourcesAt: resourcesURL, configuration: configuration, disableSafety: false)
let images = try pipeline.generateImages(prompt: "A cute cat", imageCount: 1)

generative-ai-swift:

let model = GenerativeModel(name: "gemini-pro", apiKey: "YOUR_API_KEY")
let response = try await model.generateContent("Describe a cute cat")
print(response.text ?? "")

The ml-stable-diffusion code focuses on image generation, while generative-ai-swift provides a more general-purpose AI interaction. The former requires more setup and configuration, while the latter offers a simpler interface for diverse AI tasks.

Swift for TensorFlow

Pros of Swift for TensorFlow

  • More comprehensive machine learning framework with broader capabilities
  • Deeper integration with TensorFlow ecosystem and tools
  • Larger community and more extensive documentation

Cons of Swift for TensorFlow

  • Steeper learning curve for developers new to TensorFlow
  • Potentially more complex setup and configuration
  • Less focused on generative AI specifically

Code Comparison

Swift for TensorFlow:

import TensorFlow

var model = Sequential {
    Dense<Float>(inputSize: 784, outputSize: 64, activation: relu)
    Dense<Float>(inputSize: 64, outputSize: 10, activation: softmax)
}

Generative AI Swift:

import GoogleGenerativeAI

let model = GenerativeModel(name: "gemini-pro", apiKey: "YOUR_API_KEY")
let response = try await model.generateContent(prompt)

Swift for TensorFlow provides a more low-level approach to building neural networks, while Generative AI Swift offers a higher-level API specifically for generative AI tasks. The former gives more control over model architecture, while the latter simplifies interaction with pre-trained models like Gemini.

Generative AI Swift is more focused on ease of use for generative AI applications, making it quicker to implement for specific use cases. However, Swift for TensorFlow offers more flexibility and power for a wider range of machine learning tasks beyond just generative AI.

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.

Pros of coremltools

  • Broader scope: Supports a wide range of machine learning models and frameworks
  • Mature ecosystem: Well-established with extensive documentation and community support
  • Cross-platform: Enables deployment on various Apple platforms (iOS, macOS, tvOS, watchOS)

Cons of coremltools

  • Limited to Apple ecosystem: Not suitable for non-Apple platforms
  • Steeper learning curve: Requires understanding of Core ML concepts and Apple's development environment

Code Comparison

coremltools:

import coremltools as ct

model = ct.convert('model.h5', source='keras')
model.save('MyModel.mlmodel')

generative-ai-swift:

import GoogleGenerativeAI

let model = GenerativeModel(name: "gemini-pro", apiKey: "YOUR_API_KEY")
let response = try await model.generateContent("Hello, world!")

Summary

coremltools offers a comprehensive solution for deploying machine learning models across Apple platforms, with broad framework support and a mature ecosystem. However, it's limited to the Apple ecosystem and has a steeper learning curve. generative-ai-swift, on the other hand, provides a more focused and straightforward approach for integrating Google's Gemini models into Swift applications, but with a narrower scope and less flexibility in terms of supported models and platforms.

README

[Deprecated] Google AI Swift SDK for the Gemini API

With Gemini 2.0, we took the chance to create a unified SDK for mobile developers who want to use Google's GenAI models (Gemini, Veo, Imagen, etc.). As part of that process, we took all of the feedback from this SDK, along with what developers like about other SDKs in the ecosystem, and folded it directly into the Firebase SDK. We don't plan to add anything to this SDK or make any further changes. We know how disruptive an SDK change can be and don't take this change lightly, but our goal is to create an extremely simple and clear path for developers to build with our models, so this change felt necessary.
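As a rough sketch of the migration path, the equivalent "hello" call through the Firebase SDK looks approximately like the following (module, type, and model names here follow the Firebase Vertex AI documentation at the time of writing and may change; check the current Firebase docs before relying on them):

```swift
import FirebaseCore
import FirebaseVertexAI

// Firebase must be configured once at app startup (e.g., in the app delegate).
FirebaseApp.configure()

// Create a generative model through the Firebase Vertex AI service.
// No raw API key is passed here; auth is handled by the Firebase project config.
let model = VertexAI.vertexAI().generativeModel(modelName: "gemini-1.5-flash")
let response = try await model.generateContent("Hello, Gemini!")
print(response.text ?? "No response")
```

The surface is deliberately close to this SDK's generateContent API, which keeps most call sites intact during migration.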

Thank you for building with Gemini, and let us know if you need any help!