AIGODLIKE-ComfyUI-Translation
A plugin for multilingual translation of ComfyUI. It translates the resident menu bar, search bar, right-click context menu, nodes, and more.
Top Related Projects
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
A latent text-to-image diffusion model
Stable Diffusion web UI
The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
High-Resolution Image Synthesis with Latent Diffusion Models
Quick Overview
The AIGODLIKE/AIGODLIKE-ComfyUI-Translation repository is a project that provides translations for the ComfyUI user interface, a popular tool for generating and editing images using AI models. The project aims to make ComfyUI accessible to users from different language backgrounds.
Pros
- Multilingual Support: The project offers translations for multiple languages, making ComfyUI accessible to a wider audience.
- Community Contributions: The project encourages community involvement, allowing users to contribute translations and improve the overall quality of the translations.
- Regularly Updated: The project is actively maintained, with regular updates to keep up with the latest changes in the ComfyUI application.
- Easy Integration: The translations can be easily integrated into the ComfyUI application, providing a seamless user experience.
Cons
- Limited Language Coverage: While the project supports multiple languages, the coverage may not be comprehensive, and some users may still encounter untranslated content.
- Potential Inaccuracies: As the translations are community-driven, there may be instances of inaccurate or inconsistent translations, which could lead to confusion for users.
- Dependency on ComfyUI: The project is tightly coupled with the ComfyUI application, and any changes or updates to ComfyUI may require corresponding updates to the translation project.
- Lack of Offline Support: The translations are hosted online, and users may need an internet connection to access the latest translations, which could be a limitation for some users.
Code Examples
This project is not a code library, so there are no code examples to provide.
Getting Started
This project is not a code library, so there are no getting started instructions to provide.
Competitor Comparisons
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
Pros of diffusers
- Comprehensive library for state-of-the-art diffusion models
- Extensive documentation and examples for various use cases
- Active development and frequent updates from the Hugging Face team
Cons of diffusers
- Steeper learning curve for beginners compared to ComfyUI's visual interface
- Requires more coding knowledge to implement custom workflows
Code Comparison
AIGODLIKE-ComfyUI-Translation (JSON configuration):
{
    "zh_CN": {
        "Load Image": "加载图像",
        "Image": "图像",
        "VAE Encode": "VAE 编码"
    }
}
diffusers (Python code):
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipeline("A beautiful landscape").images[0]
image.save("landscape.png")
AIGODLIKE-ComfyUI-Translation focuses on providing multilingual support for ComfyUI, a visual interface for AI image generation. It's more accessible for non-programmers but limited in scope.
diffusers offers a powerful and flexible Python library for working with various diffusion models, including text-to-image generation, inpainting, and more. It provides greater control and customization options but requires programming skills to utilize effectively.
A latent text-to-image diffusion model
Pros of stable-diffusion
- More established and widely adopted in the AI image generation community
- Offers a complete text-to-image generation pipeline
- Extensive documentation and community support
Cons of stable-diffusion
- Larger codebase and potentially more complex to understand and modify
- May require more computational resources to run
Code Comparison
AIGODLIKE-ComfyUI-Translation (translation-related code):
# Assumes the Translator class comes from the googletrans package
from googletrans import Translator

def translate_text(text, target_language):
    translator = Translator()
    translated = translator.translate(text, dest=target_language)
    return translated.text
stable-diffusion (image generation-related code):
import torch

@torch.no_grad()
def sample_from_v(v, model, steps):
    x = v
    for i in range(steps):
        x = model.p_sample(x, torch.full((x.shape[0],), i, device=x.device, dtype=torch.long))
    return x
AIGODLIKE-ComfyUI-Translation focuses on providing translations for ComfyUI, while stable-diffusion is a complete text-to-image generation model. The code snippets reflect their different purposes, with AIGODLIKE-ComfyUI-Translation handling text translation and stable-diffusion dealing with image generation processes.
Stable Diffusion web UI
Pros of stable-diffusion-webui
- More mature and widely adopted project with a larger user base
- Extensive features and options for image generation and manipulation
- Well-documented with active community support
Cons of stable-diffusion-webui
- Steeper learning curve for beginners due to numerous options
- Can be resource-intensive, requiring more powerful hardware
Code Comparison
stable-diffusion-webui:
def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iteration=0, position_in_batch=0):
    index = position_in_batch + iteration * p.batch_size
    clip_skip = getattr(p, 'clip_skip', opts.CLIP_stop_at_last_layers)
    token_merging_ratio = getattr(p, 'token_merging_ratio', 0)
    token_merging_ratio_hr = getattr(p, 'token_merging_ratio_hr', 0)
AIGODLIKE-ComfyUI-Translation:
import os

def get_language_list():
    # language_path points to the plugin's directory of per-language JSON files
    language_list = []
    for file in os.listdir(language_path):
        if file.endswith('.json'):
            language_list.append(file[:-5])
    return language_list
The code snippets show different functionalities: stable-diffusion-webui focuses on creating info text for generated images, while AIGODLIKE-ComfyUI-Translation deals with language file management for translations.
The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
Pros of ComfyUI
- More comprehensive and feature-rich UI for AI image generation
- Larger community and more active development
- Supports a wider range of models and workflows
Cons of ComfyUI
- Steeper learning curve for beginners
- Requires more system resources to run effectively
- Less focus on multilingual support
Code Comparison
AIGODLIKE-ComfyUI-Translation:
# Assumes the Translator class comes from the googletrans package
from googletrans import Translator

def translate_text(text, target_language):
    translator = Translator()
    translated = translator.translate(text, dest=target_language)
    return translated.text
ComfyUI:
class ImageUploadNode:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {"image": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "upload_image"

    def upload_image(self, image):
        return (image,)
The AIGODLIKE-ComfyUI-Translation code focuses on language translation, while ComfyUI's code snippet demonstrates a node for image processing within the UI. This reflects the different primary purposes of each project: AIGODLIKE-ComfyUI-Translation aims to provide multilingual support, while ComfyUI offers a more comprehensive set of image generation and manipulation tools.
Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
Pros of InvokeAI
- More comprehensive AI image generation toolkit with a wider range of features
- Actively maintained with frequent updates and improvements
- Larger community and better documentation
Cons of InvokeAI
- Steeper learning curve due to more complex functionality
- Requires more computational resources to run effectively
- Less focused on translation and localization aspects
Code Comparison
InvokeAI:
from invokeai.app.invocations.baseinvocation import BaseInvocation, InvocationContext
from invokeai.app.invocations.primitives import ImageField, ImageOutput

class ExampleInvocation(BaseInvocation):
    image: ImageField = ImageField()

    def invoke(self, context: InvocationContext) -> ImageOutput:
        # Image processing logic here
        ...
AIGODLIKE-ComfyUI-Translation:
import os
import json

def load_language(language):
    with open(f"language/{language}.json", "r", encoding="utf-8") as file:
        return json.load(file)

def translate(key, language_data):
    return language_data.get(key, key)
The code comparison shows that InvokeAI focuses on image processing and generation, while AIGODLIKE-ComfyUI-Translation is primarily concerned with language translation and localization for ComfyUI. InvokeAI's code demonstrates a more complex structure for handling image-related tasks, whereas AIGODLIKE-ComfyUI-Translation's code is simpler and directly related to language file handling and translation functions.
High-Resolution Image Synthesis with Latent Diffusion Models
Pros of stablediffusion
- More comprehensive and feature-rich, offering a complete text-to-image generation pipeline
- Larger community and wider adoption, leading to more resources and support
- Includes pre-trained models and extensive documentation
Cons of stablediffusion
- Steeper learning curve due to its complexity and broader scope
- Requires more computational resources for training and inference
- Less focused on specific UI translation tasks
Code Comparison
AIGODLIKE-ComfyUI-Translation:
import json

def load_language(language):
    with open(f"language/{language}.json", "r", encoding="utf-8") as file:
        return json.load(file)
stablediffusion:
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
The code snippets highlight the different focus areas of the two projects. AIGODLIKE-ComfyUI-Translation deals with loading language files for UI translation, while stablediffusion demonstrates the ease of generating images using pre-trained models.
README
Farewell, and the final work

Hi! Fellow ComfyUI players, hello everyone. I am 只剩一瓶辣椒酱, and this is the first, and probably the last, time I speak to you all here.

In the year and more since this tool was born, it has provided multilingual translation for tens of thousands of AI users. As a translator, I am gratified to see people recognize our open-source work.

But as you can see, with the release of ComfyUI_frontend and the desktop client, this patched-together translation system can no longer keep up with the new framework, and ComfyUI now ships a more advanced built-in translation.

So, after talking with the ComfyUI author and the official team, we have decided to stop maintaining this tool and to move our contributions to ComfyUI itself.

What comes next

- We will migrate the existing translation entries into the ComfyUI core (so that the tens of thousands of Chinese-language community videos and articles do not become confusing because of wording changes).
- We will keep maintaining this plugin for a while, so that users who rely on the translations can continue using them for some time.
- We will guide translators who are interested in localization to join the official ComfyUI community and refine and correct the translations there.

For the details, see this post: https://blog.comfy.org/p/native-localization-support-i18n

Finally, thank you all for your trust over this past year and more. Thanks to the translators who stayed up through the night contributing entries, and to the developers who pushed development hard enough to lose their hair!

May the glory of open source live forever!
AIGODLIKE-ComfyUI-Translation
A plugin for multilingual translation of ComfyUI. It translates the resident menu bar, search bar, right-click context menu, nodes, and more.
2024/09/06: Supports the latest ComfyUI interface
https://github.com/user-attachments/assets/9418fba8-f499-4414-9c7f-4d548ff77c49
ComfyUI users in other languages, I need your help
I hope ComfyUI can support more languages besides Chinese and English, such as French, German, Japanese, Korean, etc. However, I believe that translation should be done by native speakers of each language. So I need your help, let's go fight for ComfyUI together!
[Korean] Korean translation needs help~
[Japanese] Japanese translation needs help~
Supported languages
COMFYUI Translation | 简体中文 | 繁體中文 | English | 日本語 | 한국어 | Русский | Your language |
---|---|---|---|---|---|---|---|
Menu | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | TODO |
NodeCategory | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | TODO |
Nodes | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | TODO |
Features
- Translates the entire ComfyUI UI
- Direct language switching (limitation: custom names will be removed) https://github.com/AIGODLIKE/AIGODLIKE-COMFYUI-TRANSLATION/assets/116185401/e43182b7-8932-4358-bc65-ade7bddf27c5
- Support for adding other languages
- Supports translating custom nodes
- (2023/8/16) One-click switching between English and the currently set language
- (2023/8/19) Multilingual translation of custom nodes (work in progress)
Custom Node Name | 简中 | 繁中 | English | 日本語 | 한국어 | Русский |
---|---|---|---|---|---|---|
3D-MeshTool | ✔ | TODO | ✔ | TODO | TODO | TODO |
3D-Pack | ✔ | TODO | ✔ | TODO | TODO | TODO |
Advanced Encode | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Advanced ControlNet | ✔ | ✔ | ✔ | TODO | TODO | TODO |
AGL-ComfyUI-Translation | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
AlekPet Nodes | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
AnimateAnyone | ✔ | TODO | ✔ | TODO | TODO | TODO |
AnimateDiff | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
AnimateDiff-Evolved | ✔ | ✔ | ✔ | TODO | TODO | TODO |
AnyLine | ✔ | TODO | ✔ | TODO | TODO | TODO |
AnyText | ✔ | TODO | ✔ | TODO | TODO | TODO |
Automatic CFG | ✔ | TODO | ✔ | TODO | TODO | TODO |
BiRefNet | ✔ | TODO | ✔ | TODO | TODO | TODO |
BiRefNet Hugo | ✔ | TODO | ✔ | TODO | TODO | TODO |
BitsandBytes NF4 | ✔ | TODO | ✔ | TODO | TODO | TODO |
BrushNet (kijai) | ✔ | TODO | ✔ | TODO | TODO | TODO |
BrushNet (nullquant) | ✔ | TODO | ✔ | TODO | TODO | TODO |
Bxb | ✔ | TODO | ✔ | TODO | TODO | TODO |
CCSR | ✔ | TODO | ✔ | TODO | TODO | TODO |
Champ | ✔ | TODO | ✔ | TODO | TODO | TODO |
CLIP Seg | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
CogVideo | ✔ | TODO | ✔ | TODO | TODO | TODO |
ComfyRoll | ✔ | ✔ | ✔ | TODO | TODO | TODO |
ControlNet LLLite | ✔ | ✔ | ✔ | TODO | TODO | TODO |
ControlNet Preprocessors | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
ControlNet Preprocessors AUX | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
ControlNeXt SVD | ✔ | TODO | ✔ | TODO | TODO | TODO |
Crystools | ✔ | TODO | ✔ | TODO | TODO | ✔ |
Cutoff | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
Custom-Scripts | ✔ | ✔ | ✔ | TODO | TODO | TODO |
cg-use-everywhere | ✔ | TODO | ✔ | TODO | TODO | TODO |
cg-image-picker | ✔ | TODO | ✔ | TODO | TODO | TODO |
Davemane42 Nodes | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
Dagthomas Nodes | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
Derfuu Nodes | ✔ | TODO | ✔ | TODO | TODO | TODO |
DynamiCrafter (kijai) | ✔ | TODO | ✔ | TODO | TODO | TODO |
DynamiCrafter (ExponentialML) | ✔ | TODO | ✔ | TODO | TODO | TODO |
DynamicThresholding | ✔ | ✔ | ✔ | TODO | TODO | TODO |
EasyAnimate (chaojie) | ✔ | TODO | ✔ | TODO | TODO | TODO |
EasyAnimate (kijai) | ✔ | TODO | ✔ | TODO | TODO | TODO |
Easy Tools | ✔ | TODO | ✔ | TODO | TODO | TODO |
Easy Use | ✔ | TODO | ✔ | TODO | TODO | TODO |
Eesahes Nodes | ✔ | TODO | ✔ | TODO | TODO | TODO |
Efficiency Nodes | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
ELLA (ExponentialML) | ✔ | TODO | ✔ | TODO | TODO | TODO |
ELLA (Tencent) | ✔ | TODO | ✔ | TODO | TODO | TODO |
EllangoK Postprocessing | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Essentials | ✔ | TODO | ✔ | TODO | TODO | TODO |
Execution-Inversion | ✔ | TODO | ✔ | TODO | TODO | TODO |
ExLlama nodes | ✔ | ✔ | ✔ | TODO | TODO | TODO |
experiments | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Face Analysis | ✔ | TODO | ✔ | TODO | TODO | TODO |
Fast Decode | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
Florence2 | ✔ | TODO | ✔ | TODO | TODO | TODO |
Flowty CRM | ✔ | TODO | ✔ | TODO | TODO | TODO |
Flowty TripoSR | ✔ | TODO | ✔ | TODO | TODO | TODO |
Frame Interpolation | ✔ | TODO | ✔ | TODO | TODO | TODO |
FreeU Advanced | ✔ | TODO | ✔ | TODO | TODO | TODO |
IC-Light (kijai) | ✔ | TODO | ✔ | TODO | TODO | TODO |
IC-Light-Wrapper (kijai) | ✔ | TODO | ✔ | TODO | TODO | TODO |
IF AI tools | ✔ | TODO | ✔ | TODO | TODO | TODO |
Image Resize | ✔ | TODO | ✔ | TODO | TODO | TODO |
Instant Mesh | ✔ | TODO | ✔ | TODO | TODO | TODO |
IPAdapter | ✔ | ✔ | ✔ | TODO | TODO | TODO |
IPAdapter_plus | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Image Grid | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Impact Pack | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Impact Subpack | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Inpaint CropAndStitch | ✔ | TODO | ✔ | TODO | TODO | TODO |
Inpaint Nodes | ✔ | TODO | ✔ | TODO | TODO | TODO |
Inspire Pack | ✔ | ✔ | ✔ | TODO | TODO | TODO |
InstantID (cubiq) | ✔ | TODO | ✔ | TODO | TODO | TODO |
InstantID (ZHO) | ✔ | TODO | ✔ | TODO | TODO | TODO |
Joy Caption | ✔ | TODO | ✔ | TODO | TODO | TODO |
KJ Nodes | ✔ | TODO | ✔ | TODO | TODO | TODO |
kkTranslator | ✔ | TODO | ✔ | TODO | TODO | TODO |
Kolors (kijai) | ✔ | TODO | ✔ | TODO | TODO | TODO |
Kolors (MinusZone) | ✔ | TODO | ✔ | TODO | TODO | TODO |
LaMa Preprocessor | ✔ | TODO | ✔ | TODO | TODO | TODO |
Latent2RGB | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
LayerDiffuse | ✔ | TODO | ✔ | TODO | TODO | TODO |
LayerStyle | ✔ | TODO | ✔ | TODO | TODO | TODO |
LCM | ✔ | TODO | ✔ | TODO | TODO | TODO |
Literals | ✔ | TODO | ✔ | TODO | TODO | TODO |
LivePortrait(KJ) | ✔ | TODO | ✔ | TODO | TODO | TODO |
LivePortrait-Advanced | ✔ | TODO | ✔ | TODO | TODO | TODO |
LoadLoraWithTags | ✔ | TODO | ✔ | TODO | TODO | TODO |
Logic | ✔ | TODO | ✔ | TODO | TODO | TODO |
LoraAutoTrigger | ✔ | TODO | ✔ | TODO | TODO | TODO |
MagicClothing | ✔ | TODO | ✔ | TODO | TODO | TODO |
Manager | ✔ | ✔ | ✔ | TODO | TODO | ✔ |
Marigold | ✔ | TODO | ✔ | TODO | TODO | TODO |
Masquerade Nodes | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Math | ✔ | TODO | ✔ | TODO | TODO | TODO |
Mixlab Nodes | ✔ | TODO | ✔ | TODO | TODO | TODO |
MoonDream | ✔ | TODO | ✔ | TODO | TODO | TODO |
MotionCtrl | ✔ | TODO | ✔ | TODO | TODO | TODO |
MotionCtrl-SVD | ✔ | TODO | ✔ | TODO | TODO | TODO |
MTB | ✔ | TODO | ✔ | TODO | TODO | TODO |
N-Sidebar | ✔ | TODO | ✔ | TODO | TODO | TODO |
Noise | ✔ | ✔ | ✔ | TODO | TODO | TODO |
NormalLighting | ✔ | TODO | ✔ | TODO | TODO | TODO |
Paint By Example | ✔ | TODO | ✔ | TODO | TODO | TODO |
Perturbed-Attention | ✔ | TODO | ✔ | TODO | TODO | TODO |
Portrai Master | ✔ | TODO | ✔ | TODO | TODO | TODO |
Power Noise Suite | ✔ | TODO | ✔ | TODO | TODO | TODO |
Prompt Composer | ✔ | TODO | ✔ | TODO | TODO | TODO |
Prompt MZ | ✔ | TODO | ✔ | TODO | TODO | TODO |
Prompt Reader | ✔ | TODO | ✔ | TODO | TODO | TODO |
PuLID (cubiq) | ✔ | TODO | ✔ | TODO | TODO | TODO |
QR | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Quick Connections | ✔ | TODO | ✔ | TODO | TODO | TODO |
Omost | ✔ | TODO | ✔ | TODO | TODO | TODO |
OneButtonPrompt | ✔ | TODO | ✔ | TODO | TODO | TODO |
ReActor | ✔ | TODO | ✔ | TODO | TODO | TODO |
ResAdapter | ✔ | TODO | ✔ | TODO | TODO | TODO |
Restart-Sampling | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Roop | ✔ | TODO | ✔ | TODO | TODO | TODO |
rgthree | ✔ | TODO | ✔ | TODO | TODO | TODO |
SD-Latent-Interposer | ✔ | TODO | ✔ | TODO | TODO | TODO |
SDXL_prompt_styler | ✔ | ✔ | ✔ | TODO | TODO | TODO |
SeargeSDXL | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Segment Anything | ✔ | TODO | ✔ | TODO | TODO | TODO |
Segment Anything 2 | ✔ | TODO | ✔ | TODO | TODO | TODO |
StabilityNodes | ✔ | ✔ | ✔ | TODO | TODO | TODO |
SUPIR | ✔ | TODO | ✔ | TODO | TODO | TODO |
TiledDiffusion | ✔ | TODO | ✔ | TODO | TODO | TODO |
TiledKSampler | ✔ | ✔ | ✔ | ✔ | TODO | TODO |
TinyTerra | ✔ | TODO | ✔ | TODO | TODO | TODO |
ToonCrafter | ✔ | TODO | ✔ | TODO | TODO | TODO |
TripoAPI | ✔ | TODO | ✔ | TODO | TODO | TODO |
UltimateSDUpscale | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Vextra Nodes | ✔ | ✔ | ✔ | TODO | TODO | TODO |
Video Matting | ✔ | TODO | ✔ | TODO | TODO | TODO |
Visual Style Prompting | ✔ | TODO | ✔ | TODO | TODO | TODO |
VLM Nodes | ✔ | TODO | ✔ | TODO | TODO | TODO |
WAS Suite | ✔ | ✔ | ✔ | TODO | TODO | TODO |
WD14-Tagger | ✔ | ✔ | ✔ | TODO | TODO | TODO |
zfkun | ✔ | TODO | ✔ | TODO | TODO | TODO |
The above only covers UI translations. If you are a developer and would like help translating your interface, you can add your custom node project to the ComfyUI Plugins List directly, or open an issue; as long as I see it, I will translate it (it may take some time).
How to install
AIGODLIKE-COMFYUI-TRANSLATION works like any other custom node: install it however you prefer, as long as it ends up in the custom_nodes folder. For example, run:
cd ComfyUI/custom_nodes
git clone https://github.com/AIGODLIKE/AIGODLIKE-COMFYUI-TRANSLATION.git
How to use
For new UI:
For legacy UI:
How to add other languages (translator)
1. Create a new "Language Name" folder in the plugin directory (e.g. an example folder; a scaffold sketch is shown after these steps).

2. Open the LocaleMap.js file and add a language code entry whose key matches the folder name from step 1:

export const LOCALES = {
    "zh-CN": {
        "nativeName": "中文",
        "englishName": "Chinese Simplified"
    },
    "en-US": {
        "nativeName": "English (US)",
        "englishName": "English (US)"
    },
    "example": {
        "nativeName": "exampleDisplayName",
        "englishName": "enName"
    },
}

3. After completing the two steps above, restart the ComfyUI service; the "exampleDisplayName" language will then appear in the "AGLTranslation language" setting.
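If you are starting a brand-new language, the folder only needs the three files described in the next section. The following is a minimal, hypothetical scaffold in Python (the folder name example and this script are illustrative and not shipped with the plugin):

import json
from pathlib import Path

# Hypothetical scaffold for a new language folder named "example"
# inside the AIGODLIKE-COMFYUI-TRANSLATION plugin directory.
lang_dir = Path("example")
(lang_dir / "Nodes").mkdir(parents=True, exist_ok=True)

# Start with empty translation tables and fill them in afterwards.
for name in ("Menu.json", "NodeCategory.json"):
    (lang_dir / name).write_text(json.dumps({}, ensure_ascii=False, indent=4), encoding="utf-8")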
How to add custom node translations (translator)
- Translation files are currently divided into three types:
  - Node information translations (node name, node connectors, node widgets): Your language folder/Nodes/somenode.json
  - Node category information (used in the right-click "Add Node" menu): Your language folder/NodeCategory.json
  - Menu information (resident menu, settings panel, right-click context menu, search menu, etc.): Your language folder/Menu.json
- Node information translations can be split across multiple JSON files under Your language folder/Nodes/, based on the different nodes
- All translation files are JSON; please follow the JSON format strictly (a quick way to check your files is shown below)
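Since the plugin expects strictly valid JSON, it can save time to check every file in your language folder before restarting ComfyUI. Below is a minimal, hypothetical check in Python (the folder name example and the helper itself are illustrative, not part of the plugin):

import json
from pathlib import Path

def check_language_folder(folder):
    # Recursively parse every .json file and collect those that fail to parse.
    bad_files = []
    for path in Path(folder).rglob("*.json"):
        try:
            json.loads(path.read_text(encoding="utf-8"))
        except json.JSONDecodeError as err:
            bad_files.append((path, err))
    return bad_files

for path, err in check_language_folder("example"):
    print(f"{path}: {err}")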
Translation examples
- Node Translation Format
{ "KSampler": { "title": "KSampler[example translation]", "inputs": { "model": "模å", "positive": "æ£åæ示è¯", "negative": "ååæ示è¯", "latent_image": "æ½ç©ºé´" }, "widgets": { "seed": "éæºç§", "control_after_generate": "è¿è¡åæä½", "steps": "æ¥æ°", "cfg": "CFG", "sampler_name": "éæ ·å¨", "scheduler": "è°åº¦å¨", "denoise": "éåª" }, "outputs": { "LATENT": "æ½ç©ºé´", } }, "Load VAE": {} }
- Node category translation format (NodeCategory.json)

{
    "conditioning": "conditioning[example]",
    "latent": "latent[example]",
    "loaders": "loaders[example]",
    "image": "image[example]"
}

- Menu information translation format (Menu.json)

{
    "Add Node": "Add Node[example]",
    "Add Group": "Add Group[example]",
    "Search": "Search[example]",
    "Queue size:": "Queue size[example]:",
    "Queue Prompt": "Queue Prompt[example]",
    "Extra options": "Extra options[example]"
}
Limitations
- Nodes in any language can be switched directly to the target language, but custom node names will be lost
- A small number of options backed by Enum-type data cannot be translated