danmacnish/cartoonify

Python app to turn a photograph into a cartoon


Top Related Projects

[Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime

sketch + style = paints :art: (TOG2018/SIGGRAPH2018ASIA)

Official tensorflow implementation for CVPR2020 paper “Learning to Cartoonize Using White-box Cartoon Representations”

Official PyTorch repo for JoJoGAN: One Shot Face Stylization

Photo-to-cartoon translation project

Quick Overview

Cartoonify is a project that turns photographs into cartoon-like drawings and prints them on a thermal printer. It combines computer vision, machine learning, and hardware integration in a device that captures a photo and prints a cartoon of it in real time.
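
At a high level, the device runs a simple loop: capture a photo, recognize what is in it, redraw the scene as a cartoon, and print the result. The sketch below illustrates that flow; the component objects (camera, recognizer, renderer, printer) are hypothetical stand-ins, not the project's actual classes.

def run_camera_loop(camera, recognizer, renderer, printer):
    # Illustrative only: each argument is a hypothetical component object
    # standing in for one stage of the pipeline described above.
    while True:
        photo = camera.capture()           # grab a frame when triggered
        labels = recognizer.detect(photo)  # ML step: object recognition
        drawing = renderer.draw(labels)    # redraw the scene as cartoons
        printer.print_image(drawing)       # output on the thermal printer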

Pros

  • Innovative combination of hardware and software for a unique user experience
  • Uses advanced computer vision and machine learning techniques
  • Open-source project with potential for customization and improvement
  • Creates a fun, interactive way to generate cartoon-style images

Cons

  • Requires specific hardware components, which may be challenging to source
  • Limited to black and white output due to thermal printer constraints
  • May have difficulty accurately detecting and rendering complex scenes
  • Dependent on pre-trained models, which may not work well for all types of images

Code Examples

The snippets below assume that a YOLO detector (yolo) and a thermal printer object (printer) have already been initialized elsewhere in the app.

import cv2

# Load and preprocess an image
image = cv2.imread('input_image.jpg')
image = cv2.resize(image, (256, 256))
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Detect objects in the image using the pre-initialized YOLO detector
boxes = yolo.detect_image(image)

# Generate cartoon outlines from a grayscale copy of the image
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
cartoon = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY, 9, 9)

# Print the cartoon image on the pre-initialized thermal printer
printer.printImage(cartoon)
printer.printText("Cartoonified!")

Getting Started

  1. Clone the repository:

    git clone https://github.com/danmacnish/cartoonify.git
    cd cartoonify
    
  2. Install dependencies (from the cartoonify subdirectory):

    cd cartoonify
    pip install -r requirements_desktop.txt
    
  3. Download the YOLO weights file and place it in the bin directory.

  4. Connect the thermal printer and camera to your Raspberry Pi.

  5. Run the app:

    python run.py
    

Competitor Comparisons

[Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime

Pros of AnimeGANv2

  • Produces high-quality anime-style images with better color preservation and detail retention
  • Offers multiple pre-trained models for different anime styles
  • Supports both image and video processing

Cons of AnimeGANv2

  • Requires more computational resources due to its complex architecture
  • Limited customization options for non-technical users
  • Longer processing time compared to simpler cartoonification methods

Code Comparison

AnimeGANv2:

output = model(input_image, anime_style)

Cartoonify:

edges = cv2.Canny(image, 100, 200)
edges = cv2.bitwise_not(edges)  # invert so the edge lines come out dark
color = cv2.bilateralFilter(image, 9, 300, 300)
cartoon = cv2.bitwise_and(color, color, mask=edges)

AnimeGANv2 uses a deep learning model to transform images, while Cartoonify relies on traditional computer vision techniques. AnimeGANv2's approach allows for more sophisticated and anime-specific transformations, but Cartoonify's method is simpler and faster to execute.

AnimeGANv2 is better suited to users who want high-quality anime-style transformations and have access to powerful hardware. Cartoonify is more appropriate for quick, lightweight cartoon effects or for users with limited computational resources.

sketch + style = paints :art: (TOG2018/SIGGRAPH2018ASIA)

Pros of style2paints

  • More advanced AI-based colorization and style transfer
  • Supports a wider range of artistic styles and customization options
  • Offers both automatic and interactive modes for user control

Cons of style2paints

  • More complex setup and dependencies
  • Requires more computational resources
  • Steeper learning curve for users

Code Comparison

style2paints:

def predict(self, sketch, hint, style):
    sketch = self.preprocess(sketch)
    hint = self.preprocess(hint)
    style = self.encode_style(style)
    return self.model.predict([sketch, hint, style])

cartoonify:

def cartoonify(image):
    edges = cv2.Canny(image, 100, 200)
    edges = cv2.bitwise_not(edges)  # invert so the edge lines come out dark
    color = cv2.bilateralFilter(image, 9, 300, 300)
    cartoon = cv2.bitwise_and(color, color, mask=edges)
    return cartoon

style2paints uses a more sophisticated neural network approach for style transfer and colorization, while cartoonify relies on traditional computer vision techniques for edge detection and color smoothing. style2paints offers greater flexibility and artistic control but requires more setup and computational power. cartoonify is simpler to use and implement but provides less customization and may produce less refined results for complex artistic styles.

Official tensorflow implementation for CVPR2020 paper “Learning to Cartoonize Using White-box Cartoon Representations”

Pros of White-box-Cartoonization

  • More advanced and sophisticated cartoonization algorithm
  • Produces higher quality and more visually appealing results
  • Offers better preservation of image details and structure

Cons of White-box-Cartoonization

  • Requires more computational resources and processing time
  • More complex implementation and setup process
  • Less user-friendly for beginners or non-technical users

Code Comparison

White-box-Cartoonization:

guided_filter = GuidedFilter(r=5, eps=0.2)
output = guided_filter(input_photo, input_photo, input_cartoon)
output = output.clip(0, 1)

Cartoonify:

edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 5)
color = cv2.bilateralFilter(image, 9, 300, 300)
cartoon = cv2.bitwise_and(color, color, mask=edges)

White-box-Cartoonization uses a guided filter approach for more refined cartoonization, while Cartoonify employs simpler edge detection and bilateral filtering techniques. The White-box-Cartoonization code snippet demonstrates a more advanced filtering process, potentially resulting in higher quality output. Cartoonify's code is more straightforward and easier to understand for beginners, but may produce less sophisticated results.

1,415

Official PyTorch repo for JoJoGAN: One Shot Face Stylization

Pros of JoJoGAN

  • Utilizes advanced GAN technology for more realistic and diverse style transfers
  • Offers a wider range of artistic styles, including anime and manga-inspired looks
  • Provides better preservation of facial features and expressions in the transformed images

Cons of JoJoGAN

  • Requires more computational resources and longer processing times
  • Has a steeper learning curve for users unfamiliar with GANs and deep learning
  • May produce less consistent results across different input images

Code Comparison

JoJoGAN:

def style_mixing(G, latents, w_styles, truncation_psi=0.7):
    with torch.no_grad():
        w_styles = [G.mapping(z, None, truncation_psi=truncation_psi) for z in w_styles]
        w_styles = torch.cat(w_styles, dim=0)
        w = G.mapping(latents, None, truncation_psi=truncation_psi)
        w[:, :8] = w_styles[:, :8]
        img = G.synthesis(w)
    return img

Cartoonify:

def cartoonify(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)
    color = cv2.bilateralFilter(img, 9, 300, 300)
    cartoon = cv2.bitwise_and(color, color, mask=edges)
    return cartoon

Photo-to-cartoon translation project

Pros of photo2cartoon

  • More advanced AI-based approach using GANs for realistic cartoon-style transformations
  • Supports real-time processing on mobile devices
  • Actively maintained with recent updates and improvements

Cons of photo2cartoon

  • Requires more computational resources due to complex AI models
  • Limited customization options for cartoon styles
  • Steeper learning curve for developers to understand and modify the codebase

Code Comparison

photo2cartoon:

def predict(self, img):
    img = self.preprocess(img)
    img_fake = self.model(img)
    img_fake = self.postprocess(img_fake)
    return img_fake

cartoonify:

def cartoonify(self, image):
    edges = cv2.Canny(image, 100, 200)
    edges = cv2.bitwise_not(edges)  # invert so the edge lines come out dark
    color = cv2.bilateralFilter(image, 9, 300, 300)
    cartoon = cv2.bitwise_and(color, color, mask=edges)
    return cartoon

The photo2cartoon project uses a more sophisticated AI-based approach, while cartoonify relies on traditional image processing techniques. photo2cartoon's code snippet shows the use of a pre-trained model for prediction, whereas cartoonify applies edge detection and filtering algorithms directly to the input image.


README

Draw This.

Draw This is a polaroid camera that draws cartoons. You point and shoot, and out pops a cartoon: the camera's best interpretation of what it saw. The camera is a mash-up of a neural network for object recognition, the google quickdraw dataset, a thermal printer, and a raspberry pi.
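
The key step is mapping each recognized object label to a human-drawn sketch from the quickdraw dataset. As a rough illustration of that idea (using the third-party quickdraw package rather than the repo's own dataset download):

# Illustration only: this repo downloads the quickdraw dataset directly,
# but the third-party `quickdraw` package (pip install quickdraw) shows
# the label-to-sketch lookup in a few lines.
from quickdraw import QuickDrawData

qd = QuickDrawData()
sketch = qd.get_drawing("cat")       # a random human-drawn "cat" sketch
sketch.image.save("cat_sketch.png")  # save the rendered strokes as a PNG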

If you'd like to try it out for yourself, the good folks at Kapwing have created an online version!

[photo]

The software can run either in a desktop environment (OSX, Linux), such as a laptop, or in an embedded environment on a raspberry pi.

Desktop installation (only tested on OSX and Linux)

  • Requirements:
    • Python 2.7*
    • Cairo (on OSX brew install cairo)
  • install dependencies using pip install -r requirements_desktop.txt from the cartoonify subdirectory.
  • run app from command line using python run.py.
  • select 'yes' when asked to download the cartoon dataset (~5GB) and tensorflow model (~100MB).
  • close the app using ctrl-C once the downloads have finished.
  • start the app again using cartoonify.
  • you will be prompted to enter the filepath to an image for processing. Enter the absolute filepath surrounded by double quotes.

*Unfortunately python 2.7 is required because the correct python 3 wheels are not available for both the pi and desktop.

Raspberry pi wiring

The following wiring diagram will get you started with a shutter button and a status LED. If the software is working correctly, the status LED should light up for 2-3 seconds while the raspi processes an image after the shutter is pressed. If the light stays on, something has gone wrong (most likely the camera is unplugged).
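
For reference, that button/LED behaviour can be reproduced with a few lines of gpiozero; the pin numbers below are placeholders, so match them to your actual wiring:

from gpiozero import Button, LED
import time

shutter = Button(16)  # placeholder BCM pin; match your wiring
status = LED(21)      # placeholder BCM pin; match your wiring

def process_and_print_photo():
    time.sleep(2)  # stand-in for the real capture/draw/print pipeline

while True:
    shutter.wait_for_press()
    status.on()   # LED stays lit while the image is processed
    process_and_print_photo()
    status.off()  # an LED that never turns off usually means the camera is unplugged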

IMPORTANT NOTE: the diagram below shows AA cells, but this is not correct. You must use eneloop cells to power the camera; these cells deliver 1.2V each, as well as enough current to drive the raspi and thermal printer.

[Wiring diagram]

Raspberry pi installation

  • requirements:

    • raspberry pi 3
    • Raspbian Stretch image on a 16GB SD card (8GB is too small)
    • internet access on the raspi
    • pip + python
    • raspi camera v2
    • a button, led, 220 ohm resistor and breadboard
    • (optional) Thermal printer to suit a raspi 3. I used this printer here. Note you will need to use the printer TTL serial interface as per the wiring diagram above, rather than USB.
  • install docker on the raspi by running: curl -sSL https://get.docker.com | sh

  • set up and enable the raspi camera through raspi-config

  • clone the source code from this repo

  • run ./raspi-build.sh. This will download the google quickdraw dataset and tensorflow model, then build the required docker image.

  • run ./raspi-run.sh. This will start the docker image.

Troubleshooting

  • Check the log files in the cartoonify/logs folder for any error messages.
  • The most common issue when running on a raspi is not having the camera plugged in correctly.
  • If nothing is printing, check the logs then check whether images are being saved to cartoonify/images.
  • Check that you can manually print something from the thermal printer from the command line; a minimal Python smoke test is sketched below.
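
A quick way to test the printer independently of the app, assuming the Adafruit Python-Thermal-Printer library and the TTL serial wiring described above (the port and baud rate here are assumptions; adjust them to your setup):

from Adafruit_Thermal import Adafruit_Thermal

# Assumed serial port and baud rate for a printer on the Pi's TTL interface.
printer = Adafruit_Thermal("/dev/serial0", 19200, timeout=5)
printer.println("printer test")  # should print immediately if wiring is OK
printer.feed(2)                  # advance the paper a couple of lines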