
anopara/genetic-drawing

A genetic algorithm toy project for drawing

Top Related Projects

  • primitive: Reproducing images with geometric primitives.
  • Cartoonify: Python app to turn a photograph into a cartoon.
  • deep-photo-styletransfer: Code and data for the paper "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511
  • neural-style: Torch implementation of the neural style algorithm.

Quick Overview

The anopara/genetic-drawing repository is a Python project that uses genetic algorithms to recreate images using simple geometric shapes. It evolves a population of shapes over multiple generations to produce an approximation of a target image, demonstrating the power of evolutionary algorithms in image processing and generative art.
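The evolutionary loop described above can be sketched in a few dozen lines. Everything below (the rectangle encoding, the fitness function, the mutation operator, and all parameter values) is an illustrative stand-in, not the repository's actual implementation:

```python
import random
import numpy as np

SIZE = 32  # tiny grayscale canvas so the demo runs quickly

def render(genome, size=SIZE):
    """Draw a genome (a list of filled rectangles) onto a blank canvas."""
    img = np.zeros((size, size))
    for x, y, w, h, shade in genome:
        img[y:y + h, x:x + w] = shade
    return img

def fitness(genome, target):
    """Negative pixel-wise squared error against the target: higher is better."""
    return -float(np.sum((render(genome) - target) ** 2))

def random_rect():
    """A random filled rectangle: (x, y, width, height, gray shade)."""
    return (random.randrange(SIZE - 8), random.randrange(SIZE - 8),
            random.randint(2, 8), random.randint(2, 8), random.random())

def mutate(genome, rate=0.2):
    """Resample each rectangle with probability `rate`."""
    return [random_rect() if random.random() < rate else r for r in genome]

def evolve(target, pop_size=20, genome_len=15, generations=60):
    """Elitist evolution: keep the best quarter, refill with mutated copies."""
    population = [[random_rect() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: fitness(g, target), reverse=True)
        elite = population[:pop_size // 4]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=lambda g: fitness(g, target))

random.seed(0)
target = np.zeros((SIZE, SIZE))
target[8:24, 8:24] = 1.0  # target image: a bright square on black
best = evolve(target)
```

The same skeleton scales to the real setting by swapping rectangles for brushstrokes and the squared-error fitness for a perceptually motivated one; only `render`, `fitness`, and the genome encoding need to change.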

Pros

  • Creates unique artistic renditions of images using geometric shapes
  • Demonstrates practical application of genetic algorithms in a visual context
  • Customizable parameters allow for experimentation with different outcomes
  • Provides a fun and educational way to learn about evolutionary algorithms

Cons

  • Can be computationally intensive, especially for high-resolution images or complex shapes
  • Quality of output depends heavily on parameter tuning and may require trial and error
  • Limited to using simple geometric shapes, which may not capture fine details in complex images
  • Convergence to a satisfactory result can be slow for certain types of images

Code Examples

Note: the snippets below sketch a plausible Python interface; the repository itself is organized as a Jupyter notebook (see the README section), so the class, module, and parameter names here are illustrative.

  1. Creating a genetic drawing instance:

    from genetic_drawing import GeneticDrawing

    gd = GeneticDrawing(
        image_path="path/to/target_image.jpg",
        population_size=50,
        mutation_rate=0.01,
        num_generations=1000
    )

  2. Running the evolution process:

    gd.evolve()

  3. Saving the result:

    gd.save_result("output_image.png")

  4. Customizing shape types:

    from genetic_drawing import ShapeType

    gd = GeneticDrawing(
        image_path="path/to/target_image.jpg",
        shape_types=[ShapeType.CIRCLE, ShapeType.TRIANGLE, ShapeType.RECTANGLE]
    )

Getting Started

To use the genetic-drawing library, follow these steps:

  1. Clone the repository:

    git clone https://github.com/anopara/genetic-drawing.git
    
  2. Install the required dependencies:

    pip install -r requirements.txt
    
  3. Run the example script:

    from genetic_drawing import GeneticDrawing
    
    gd = GeneticDrawing(image_path="examples/mona_lisa.jpg")
    gd.evolve()
    gd.save_result("mona_lisa_genetic.png")
    

This will create a genetic drawing of the Mona Lisa using default parameters; adjust them to experiment with different results. (Exact class and file names may differ from the repository, which ships its example as a Jupyter notebook.)

Competitor Comparisons

Reproducing images with geometric primitives.

Pros of primitive

  • Supports multiple primitive shapes (triangles, rectangles, ellipses, etc.)
  • Offers both CLI and Go library for integration
  • Provides options for output formats (SVG, PNG, GIF)

Cons of primitive

  • Limited to geometric primitives
  • May require more computational resources for complex images
  • Less flexibility in terms of artistic style

Code comparison

primitive:

model := primitive.NewModel(input.Bounds())
for i := 0; i < n; i++ {
    model.Step(primitive.ShapeType(shapeType), alpha, 1000)
}

genetic-drawing:

population = create_population(POP_SIZE, img)   # random initial shape genomes
for generation in range(MAX_GENERATION):
    population = evolve(population, img)        # selection plus mutation
    best = get_best(population)                 # fittest individual so far

Key differences

  • primitive focuses on geometric shapes, while genetic-drawing uses a genetic algorithm approach
  • primitive is written in Go, genetic-drawing in Python
  • genetic-drawing allows for more organic, painterly results
  • primitive offers more output options and is potentially faster for simpler images

Both projects aim to recreate images using computational methods, but they take different approaches to achieve their goals. The choice between them depends on the desired artistic style and specific use case.

Python app to turn a photograph into a cartoon

Pros of Cartoonify

  • Utilizes machine learning for image transformation
  • Produces cartoon-style output images
  • Offers a physical camera implementation

Cons of Cartoonify

  • Limited customization options for output style
  • Requires specific hardware for the camera setup
  • Less focus on the evolutionary aspect of image generation

Code Comparison

Genetic-drawing:

def mutate(self):
    r = random.random()
    if r < 0.1:
        self.color = self.color.mutate()
    elif r < 0.2:
        self.pos = self.pos.mutate()
    elif r < 0.3:
        self.radius = max(0, self.radius + random.gauss(0, 0.5))

Cartoonify:

def get_dominant_color(image):
    pixels = np.float32(image).reshape(-1, 3)
    n_colors = 5
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 200, .1)
    flags = cv2.KMEANS_RANDOM_CENTERS
    _, labels, centroids = cv2.kmeans(pixels, n_colors, None, criteria, 10, flags)
    palette = np.uint8(centroids)
    return palette[np.argmax(np.bincount(labels))]

Genetic-drawing focuses on evolving shapes and colors through mutation, while Cartoonify emphasizes color analysis and transformation for cartoon-like effects. Genetic-drawing offers more flexibility in artistic output, whereas Cartoonify provides a specific stylization approach.

Code and data for paper "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511

Pros of deep-photo-styletransfer

  • Produces high-quality photorealistic results
  • Preserves the content structure of the original image
  • Utilizes deep learning techniques for style transfer

Cons of deep-photo-styletransfer

  • Requires more computational resources
  • Less interactive and real-time than genetic-drawing
  • May have limitations in handling diverse art styles

Code Comparison

deep-photo-styletransfer:

def wct_core(cont_feat, styl_feat):
    cFSize = cont_feat.size()
    c_mean = torch.mean(cont_feat,1) # c x (h x w)
    c_mean = c_mean.unsqueeze(1).expand_as(cont_feat)
    cont_feat = cont_feat - c_mean

genetic-drawing:

def mutate(self):
    if random.random() < self.mutation_rate:
        self.color = [random.randint(0, 255) for _ in range(3)]
    if random.random() < self.mutation_rate:
        self.position = (random.randint(0, self.image_size[0]), random.randint(0, self.image_size[1]))

The code snippets highlight the different approaches:

  • deep-photo-styletransfer uses tensor operations for feature manipulation
  • genetic-drawing employs randomization for mutation of drawing elements

Both repositories offer unique approaches to image manipulation, with deep-photo-styletransfer focusing on neural style transfer and genetic-drawing utilizing evolutionary algorithms for artistic rendering.

Torch implementation of neural style algorithm

Pros of neural-style

  • Utilizes deep learning techniques for style transfer, potentially producing more sophisticated results
  • Offers a wider range of style transfer options and parameters
  • Has a larger community and more extensive documentation

Cons of neural-style

  • Requires more computational resources and longer processing times
  • Has a steeper learning curve due to its complexity and dependencies

Code Comparison

neural-style:

local cmd = torch.CmdLine()
cmd:option('-style_image', 'examples/inputs/seated-nude.jpg', 'Style target image')
cmd:option('-content_image', 'examples/inputs/tubingen.jpg', 'Content target image')
cmd:option('-image_size', 512, 'Maximum height / width of generated image')

genetic-drawing:

parser.add_argument('--image', type=str, default='image/monalisa.jpg', help='image path')
parser.add_argument('--population', type=int, default=50, help='population size')
parser.add_argument('--max_generation', type=int, default=500000, help='max generation')

Both repositories focus on image manipulation, but neural-style uses deep learning for style transfer, while genetic-drawing employs genetic algorithms for image recreation. neural-style offers more sophisticated results but requires more resources, while genetic-drawing is simpler but may produce less refined outputs.


README

Genetic Drawing

This is a toy project I did around 2017 that imitates a drawing process given a target image. It was inspired by the many examples of genetic drawing on the internet; this is my take on it, done mostly as an exercise.

Due to popular request, it is now open source 🙂

Examples of generated images:

It also supports user-created sampling masks, in case you'd like to specify regions where more brushstrokes are needed (for example, to allocate finer detail).
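A sampling mask can be read as a probability map over the canvas: brighter regions receive proportionally more strokes. Below is a minimal sketch of mask-weighted stroke placement; the function name and weighting scheme are assumptions for illustration, not the notebook's actual code:

```python
import numpy as np

def sample_stroke_positions(mask, n_strokes, rng=None):
    """Sample (row, col) stroke centers with probability proportional to `mask`.

    `mask` is a 2-D array of non-negative weights; larger values mean the
    corresponding pixel is more likely to receive a brushstroke.
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = mask.astype(np.float64).ravel()
    weights /= weights.sum()                       # normalize to a distribution
    flat_idx = rng.choice(weights.size, size=n_strokes, p=weights)
    return np.column_stack(np.unravel_index(flat_idx, mask.shape))

# Example: a mask that concentrates detail in the right half of a 64x64 canvas
mask = np.ones((64, 64))
mask[:, 32:] = 9.0  # right half is 9x more likely to receive a stroke
positions = sample_stroke_positions(mask, 1000, rng=np.random.default_rng(0))
right_half = np.mean(positions[:, 1] >= 32)
print(f"{right_half:.0%} of strokes landed in the right half")  # ≈ 90%
```

Painting a mask in any image editor (white where you want detail, black elsewhere) and loading it as a grayscale array fits this interpretation directly.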

Python

You will need the following Python 3 libraries:

  • OpenCV 3.4.1
  • NumPy 1.16.2
  • Matplotlib 3.0.3
  • Jupyter Notebook

To start, open GeneticDrawing.ipynb and run the example code.