
databrickslabs/dolly

Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform


Top Related Projects

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries


JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf


Modeling, training, eval, and inference code for OLMo

LLM training code for Databricks foundation models

Quick Overview

Databricks' Dolly is an open-source large language model (LLM) trained on the Databricks machine learning platform. It's designed to be a smaller, more accessible alternative to larger proprietary models, focusing on instruction-following capabilities. Dolly aims to democratize access to LLMs and promote research in the field.

Pros

  • Open-source and freely available for research and commercial use
  • Trained on a diverse dataset, including instruction-following examples
  • Relatively small model size (12 billion parameters) makes it more accessible for fine-tuning and deployment
  • Demonstrates strong performance on various natural language processing tasks

Cons

  • Less powerful than larger proprietary models like GPT-3 or GPT-4
  • Limited multilingual capabilities compared to some other models
  • May require fine-tuning for specific domain applications
  • Potential for generating biased or incorrect information, as with all LLMs

Code Examples

Load the Dolly model and tokenizer using the Hugging Face Transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")

Generate text using the Dolly model:

prompt = "Explain the concept of machine learning in simple terms:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Fine-tune Dolly on a custom dataset:

import torch
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_steps=10_000,
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=your_custom_dataset,  # placeholder: supply your own tokenized dataset
    # The collator assumes each dataset item is an (input_ids, attention_mask) pair of tensors
    data_collator=lambda data: {'input_ids': torch.stack([f[0] for f in data]),
                                'attention_mask': torch.stack([f[1] for f in data]),
                                'labels': torch.stack([f[0] for f in data])},
)

trainer.train()

Getting Started

To get started with Dolly, follow these steps:

  1. Install the required libraries:

    pip install transformers torch
    
  2. Load the model and tokenizer:

    from transformers import AutoTokenizer, AutoModelForCausalLM
    
    tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
    model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
    
  3. Generate text:

    prompt = "Your prompt here"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=200)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(response)
    
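By default, generate() decodes greedily; if you want more varied output, transformers also supports sampling. A small illustrative variation (the parameter values below are examples, not tuned for Dolly):

outputs = model.generate(
    **inputs,
    max_new_tokens=200,   # cap newly generated tokens rather than total length
    do_sample=True,       # sample instead of greedy decoding
    top_p=0.92,           # nucleus sampling
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))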

Competitor Comparisons

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

Pros of gpt-neox

  • Larger scale model training capabilities, suitable for more complex language tasks
  • More extensive documentation and community support
  • Designed for distributed training across multiple GPUs/nodes

Cons of gpt-neox

  • Higher computational requirements and more complex setup
  • Less focus on fine-tuning for specific tasks
  • Steeper learning curve for beginners

Code comparison

gpt-neox:

from megatron.neox_arguments import NeoXArgs
from megatron.global_vars import set_global_variables, get_tokenizer
from megatron.training import pretrain

neox_args = NeoXArgs.from_ymls("configs/your_config.yml")
set_global_variables(neox_args)

dolly:

from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")

The code comparison shows that gpt-neox requires more setup and configuration, while dolly uses a simpler approach with pre-trained models. gpt-neox offers more flexibility for custom training, whereas dolly focuses on ease of use for fine-tuning and inference.


JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf

Pros of JARVIS

  • More comprehensive and feature-rich, offering a wider range of AI capabilities
  • Better integration with Azure services and Microsoft's ecosystem
  • More active development and frequent updates

Cons of JARVIS

  • Higher complexity and steeper learning curve
  • Potentially higher resource requirements and costs
  • Less focused on specific language model tasks compared to Dolly

Code Comparison

JARVIS (Python):

from jarvis.core import Jarvis

jarvis = Jarvis()
response = jarvis.chat("What's the weather like today?")
print(response)

Dolly (Python):

# Dolly is distributed as a Hugging Face model rather than a standalone package
from transformers import pipeline
import torch

instruct_pipeline = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
response = instruct_pipeline("What's the weather like today?")
print(response)

Summary

JARVIS is a more comprehensive AI platform with broader capabilities and better integration with Microsoft services. However, it may be more complex and resource-intensive. Dolly, on the other hand, is more focused on language model tasks and may be simpler to use for specific applications. The code comparison shows that both projects have similar basic usage patterns, but JARVIS likely offers more advanced features and customization options.


Modeling, training, eval, and inference code for OLMo

Pros of OLMo

  • More comprehensive documentation and research papers
  • Larger community and active development
  • Broader range of pre-trained models and tasks

Cons of OLMo

  • Higher computational requirements
  • Steeper learning curve for beginners
  • Less focus on specific business applications

Code Comparison

OLMo:

from olmo import OLMoTokenizer, OLMoForCausalLM

tokenizer = OLMoTokenizer.from_pretrained("allenai/olmo-7b")
model = OLMoForCausalLM.from_pretrained("allenai/olmo-7b")

Dolly:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")

Both repositories provide large language models, but they differ in their focus and implementation. OLMo offers a more research-oriented approach with extensive documentation and a variety of pre-trained models. It's suitable for advanced users and researchers exploring different NLP tasks.

Dolly, on the other hand, is more business-oriented and easier to get started with, making it a good choice for practical applications and those new to language models. However, it may have fewer options for customization and advanced research compared to OLMo.

The code comparison shows that both models can be easily loaded using similar methods, with OLMo using its custom classes and Dolly utilizing the Hugging Face Transformers library.

LLM training code for Databricks foundation models

Pros of llm-foundry

  • More comprehensive and flexible framework for training and fine-tuning large language models
  • Supports a wider range of model architectures and training techniques
  • Offers advanced features like distributed training and efficient data loading

Cons of llm-foundry

  • Higher complexity and steeper learning curve compared to Dolly
  • Requires more computational resources and expertise to utilize effectively
  • Less focused on specific use cases, potentially overwhelming for beginners

Code Comparison

llm-foundry:

from composer import Trainer
from composer.models import HuggingFaceModel

model = HuggingFaceModel(...)
trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    optimizers=optimizer,
    max_duration="1ep",
)
trainer.fit()

Dolly:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")


README

Dolly

Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform and licensed for commercial use. Based on pythia-12b, Dolly is trained on ~15k instruction/response fine-tuning records (databricks-dolly-15k) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. dolly-v2-12b is not a state-of-the-art model, but it does exhibit surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based.

Databricks is committed to ensuring that every organization and individual benefits from the transformative power of artificial intelligence. The Dolly model family represents our first steps along this journey, and we’re excited to share this technology with the world.

The model is available on Hugging Face as databricks/dolly-v2-12b.

Model Overview

dolly-v2-12b is a 12 billion parameter causal language model created by Databricks that is derived from EleutherAI’s Pythia-12b and fine-tuned on a ~15K-record instruction corpus generated by Databricks employees and released under a permissive license (CC-BY-SA).

Known Limitations

Performance Limitations

dolly-v2-12b is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, dolly-v2-12b struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that dolly-v2-12b does not have some capabilities, such as well-formatted letter writing, present in the original model.

Dataset Limitations

Like all language models, dolly-v2-12b reflects the content and limitations of its training corpuses.

  • The Pile: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.

  • databricks-dolly-15k: The training data on which dolly-v2-12b is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

Getting Started with Response Generation

If you'd like to simply test the model without training, the model is available on Hugging Face as databricks/dolly-v2-12b.

To use the model with the transformers library on a machine with A100 GPUs:

from transformers import pipeline
import torch

instruct_pipeline = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

You can then use the pipeline to answer instructions:

instruct_pipeline("Explain to me the difference between nuclear fission and fusion.")

Generating on Other Instances

A100 instance types are not available in all cloud regions, or can be hard to provision. Inference is possible on other GPU instance types.

A10 GPUs

The 6.9B and 2.8B param models should work as-is.

To generate using the 12B param model on A10s (ex: g5.4xlarge, 1 x A10 24GB), it's necessary to load and run generation using 8-bit weights, which impacts the results slightly:

  • Also install bitsandbytes
  • Add model_kwargs={'load_in_8bit': True} to the pipeline() command shown above (both changes are combined in the sketch below)
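Putting those two changes together, a minimal sketch of the 8-bit pipeline call (bitsandbytes must be installed, and accelerate is typically needed for device_map="auto"):

from transformers import pipeline
import torch

instruct_pipeline = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    model_kwargs={"load_in_8bit": True},  # quantize weights to fit in 24GB of VRAM
)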

V100 GPUs

When using V100s (ex: p3.2xlarge, 1 x V100 16GB, NC6s_v3), in all cases, set torch_dtype=torch.float16 in pipeline() instead.

Otherwise, follow the steps above. The 12B param model may not function well in 8-bit on V100s.
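For example, the pipeline call from above becomes:

from transformers import pipeline
import torch

# V100s do not support bfloat16, so fall back to float16
instruct_pipeline = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.float16, trust_remote_code=True, device_map="auto")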

Getting Started with Training

  • Add the dolly repo to Databricks (under Repos click Add Repo, enter https://github.com/databrickslabs/dolly.git, then click Create Repo).
  • Start a 13.x ML (includes Apache Spark 3.4.0, GPU, Scala 2.12) or later single-node cluster with node type having 8 A100 GPUs (e.g. Standard_ND96asr_v4 or p4d.24xlarge). Note that these instance types may not be available in all regions, or may be difficult to provision. In Databricks, note that you must select the GPU runtime first, and unselect "Use Photon", for these instance types to appear (where supported).
  • Open the train_dolly notebook in the Repo (which is the train_dolly.py file in the GitHub dolly repo), attach to your GPU cluster, and run all cells. When training finishes, the notebook will save the model under /dbfs/dolly_training.

Training on Other Instances

A100 instance types are not available in all cloud regions, or can be hard to provision. Training is possible on other GPU instance types, for smaller Dolly model sizes, and with small modifications to reduce memory usage. These modifications are not optimal, but are simple to make.

Select your GPU family type from the gpu_family widget, enter the number of GPUs available in the num_gpus widget, and then run the rest of the code. A number of different options will be set for you to train the model for one of the following GPU types:

  • A100 (default)
  • A10
  • V100

Details of the different configurations are below.

A100 GPUs

A100 GPUs are preferred for training all model sizes, and are the only GPUs that can train the 12B param model in a reasonable amount of time. As such, this is the default configuration, as set in the a100_config.json deepspeed config file.

A10 GPUs

Training the 12B param model is not recommended on A10s.

To train the 6.9B param model on A10 instances (ex: g5.24xlarge, 4 x A10 24GB; Standard_NV72ads_A10_v5, 2 x A10), simply select a10 from the gpu_family widget and enter the number of GPUs available in the num_gpus widget, then run the rest of the code. This will use the a10_config.json deepspeed config file, which makes the following changes:

  • per-device-train-batch-size and per-device-eval-batch-size are set to 3 in the train_dolly.py invocation of deepspeed
  • Within the "zero_optimization" section of the deepspeed config, we have added:
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    },
    

V100 GPUs

To run on V100 instances with 32GB of GPU memory (ex: p3dn.24xlarge or Standard_ND40rs_v2), simply select v100 from the gpu_family widget and enter the number of GPUs available in the num_gpus widget, and then run the rest of the code. This will use the v100_config.json deepspeed config file, which makes the following changes:

  • It makes the changes described above for A10s
  • It enables fp16 floating point format
  • It sets the per-device-train-batch-size and per-device-eval-batch-size to 3

You may be able to slightly increase the batch size with 32GB instances, compared to what works above for 24GB A10s.
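For reference, enabling fp16 in a deepspeed config typically takes the form of a stanza like this (a sketch of the relevant section only; see v100_config.json in the repo for the exact contents):

"fp16": {
  "enabled": true
}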

Running Unit Tests Locally

pyenv local 3.8.13
python -m venv .venv
. .venv/bin/activate
pip install -r requirements_dev.txt
./run_pytest.sh

Citation

@online{DatabricksBlog2023DollyV2,
    author    = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
    title     = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
    year      = {2023},
    url       = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
    urldate   = {2023-06-30}
}