dbtpl
Command line tool to generate idiomatic Go code for SQL databases supporting PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server
Top Related Projects
Generate type-safe code from SQL
general purpose extensions to golang's database/sql
Generate a Go ORM tailored to your database schema.
Command line tool to generate idiomatic Go code for SQL databases supporting PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server
Database migrations. CLI and Golang library.
Quick Overview
dbtpl (formerly xo) is a command-line tool that inspects a database schema, or a custom SQL query, and generates idiomatic, type-safe Go code from Go text templates. It supports PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server, and can also emit schema creation scripts, JSON/YAML definitions, and Graphviz diagrams.
Pros
- Generates static, type-safe Go code instead of relying on runtime reflection or an ORM
- Supports five databases: PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server
- Output is fully customizable via standard Go text templates
- Fits naturally into a go generate code-generation pipeline
Cons
- Limited documentation and examples available
- Currently generates Go code only; other languages are not yet supported
- Base templates usually need customization for non-trivial schemas
- Not an ORM: complex queries still require hand-written SQL
Code Examples
- Generate model code from a PostgreSQL schema (output defaults to the models directory):
$ dbtpl schema postgres://user:pass@host/dbname
- Generate code for a custom SQL query, with typed parameters embedded between %% delimiters:
$ dbtpl query postgres://user:pass@host/dbname -T AuthorResult << ENDSQL
SELECT a.name::varchar AS name
FROM authors a
WHERE a.id = %%authorID int%%
ENDSQL
- Exclude database-managed columns from the generated code:
$ dbtpl schema postgres://user:pass@host/db -e users.created_at -e users.modified_at
Getting Started
To use dbtpl in your Go project, follow these steps:
- Install the tool:
$ go install github.com/xo/dbtpl@latest
- Generate code from your database schema:
$ mkdir -p models
$ dbtpl schema postgres://user:pass@host/dbname -o models
- Build the generated package to verify that it compiles:
$ go build ./models/
Competitor Comparisons
Generate type-safe code from SQL
Pros of sqlc
- More active development and larger community support
- Supports multiple database dialects (PostgreSQL, MySQL, SQLite)
- Generates type-safe code with better performance
Cons of sqlc
- Steeper learning curve due to more complex query syntax
- Limited support for custom types and advanced SQL features
- Requires writing SQL queries manually, which can be verbose
Code Comparison
sqlc query:
-- name: GetAuthor :one
SELECT * FROM authors
WHERE id = $1 LIMIT 1;
dbtpl query:
type Author struct {
    ID   int    `db:"id"`
    Name string `db:"name"`
}

func (a *Author) ByID(ctx context.Context, id int) error {
    return db.GetContext(ctx, a, "SELECT * FROM authors WHERE id = $1", id)
}
Summary
sqlc offers more features and better type safety, but requires writing raw SQL. dbtpl provides a simpler approach with Go-based query definitions, but may lack some advanced features. The choice between them depends on project requirements and developer preferences.
general purpose extensions to golang's database/sql
Pros of sqlx
- More mature and widely adopted project with a larger community
- Supports multiple database drivers (MySQL, PostgreSQL, SQLite)
- Provides a rich set of features, including named parameters and result scanning
Cons of sqlx
- Steeper learning curve due to more complex API
- Requires more boilerplate code for simple queries
- May have performance overhead for some operations compared to raw SQL
Code Comparison
sqlx:
var users []User
err := sqlx.Select(db, &users, "SELECT * FROM users WHERE active = ?", true)
dbtpl:
users, err := db.Query(ctx, "SELECT * FROM users WHERE active = ?", true)
Additional Notes
sqlx offers a more comprehensive solution for database interactions in Go, with features like struct mapping and named queries. dbtpl, on the other hand, focuses on simplicity and ease of use, providing a thin wrapper around database/sql.
While sqlx is more feature-rich, dbtpl may be preferred for projects requiring a lightweight database interface or those prioritizing simplicity over extensive functionality. The choice between the two depends on the specific needs of the project and the developer's preferences.
Generate a Go ORM tailored to your database schema.
Pros of sqlboiler
- More active development with frequent updates and contributions
- Supports a wider range of databases, including PostgreSQL, MySQL, and SQLite
- Offers more advanced features like custom types and hooks
Cons of sqlboiler
- Steeper learning curve due to more complex configuration options
- Generates larger codebase, which may be overkill for smaller projects
- Requires more setup and configuration compared to dbtpl
Code Comparison
dbtpl example:
type User struct {
    ID   int    `db:"id"`
    Name string `db:"name"`
}
sqlboiler example:
type User struct {
    ID   int    `boil:"id" json:"id" toml:"id" yaml:"id"`
    Name string `boil:"name" json:"name" toml:"name" yaml:"name"`
}
Both tools generate similar struct definitions, but sqlboiler includes additional tags for various serialization formats.
Key Differences
- dbtpl focuses on simplicity and ease of use, while sqlboiler offers more advanced features and customization options
- sqlboiler generates more comprehensive database interaction code, including query builders and relationship methods
- dbtpl has a smaller footprint and is easier to integrate into existing projects, while sqlboiler provides a more complete ORM-like experience
Overall, dbtpl is better suited for smaller projects or those requiring minimal database interaction, while sqlboiler shines in larger, more complex applications with extensive database operations.
Command line tool to generate idiomatic Go code for SQL databases supporting PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server
Pros of dbtpl
- Actively maintained with recent commits and releases
- Supports multiple database drivers (MySQL, PostgreSQL, SQLite)
- Provides a flexible template system for database operations
Cons of dbtpl
- Limited documentation and examples
- Smaller community and fewer contributors
- May have a steeper learning curve for beginners
Code Comparison
dbtpl:
db, err := dbtpl.Open("postgres", "user=postgres dbname=mydb sslmode=disable")
if err != nil {
    log.Fatal(err)
}
defer db.Close()
Additional Notes
dbtpl is a command-line tool that takes a template-based approach to database code: developers generate type-safe Go code for their schema and queries, and execute it with their chosen database driver.
The project's main focus is on providing a flexible and efficient way to work with databases in Go applications. While it offers advantages such as support for multiple database drivers and a customizable template system, it has limitations in terms of documentation and community support compared to more established database libraries in the Go ecosystem.
Database migrations. CLI and Golang library.
Pros of migrate
- More comprehensive database migration tool supporting multiple databases
- Active development with frequent updates and larger community support
- CLI tool for easy integration into development workflows
Cons of migrate
- More complex setup and configuration required
- Larger codebase and dependencies may increase project size
Code comparison
migrate:
m, err := migrate.New(
    "file://migrations",
    "postgres://localhost:5432/database?sslmode=enable",
)
if err != nil {
    log.Fatal(err)
}
m.Steps(2)
dbtpl:
db, err := sql.Open("postgres", "postgres://localhost/mydb")
if err != nil {
    log.Fatal(err)
}
err = dbtpl.RunTemplates(db, "templates")
if err != nil {
    log.Fatal(err)
}
Key differences
- migrate focuses on version-controlled database migrations
- dbtpl is primarily for generating and executing SQL templates
- migrate offers more database support and migration features
- dbtpl provides a simpler approach for basic SQL template execution
Use cases
migrate:
- Complex database schema management
- Multi-environment deployments
- Continuous integration/deployment pipelines
dbtpl:
- Simple database setup and initialization
- Generating and executing SQL queries from templates
- Smaller projects with basic database needs
README
dbtpl
dbtpl is a command-line tool to inspect and generate templated code based on a database schema or a custom database query.
In addition to being able to generate standardized "model" code for a database, dbtpl is also capable of creating schema creation scripts for a database, generating JSON/YAML definitions, and Graphviz diagrams for schemas.
Installing | Building | Using | Releases
Supported languages
At the moment, dbtpl only supports Go. Support for other languages is possible, but not currently planned.
How it works
In schema mode, dbtpl connects to your database and generates code using Go templates. dbtpl works by using database metadata and SQL introspection queries to discover the types and relationships contained within a schema, and applying a standard set of base (or customized) Go templates against the discovered relationships.
Currently, dbtpl can generate types for tables, enums, stored procedures, and custom SQL queries for PostgreSQL, MySQL, Oracle, Microsoft SQL Server, and SQLite3 databases.
Note: While the code generated by dbtpl is production quality, it is not the goal, nor the intention for dbtpl to be a "silver bullet," nor to completely eliminate the manual authoring of SQL / Go code.
In query mode, dbtpl parses your query to generate code from Go templates. It finds related tables in your database to ensure type safety.
Database Feature Support
The following is a matrix of the feature support for each database:
Feature | PostgreSQL | MySQL | Oracle | Microsoft SQL Server | SQLite |
---|---|---|---|---|---|
Models | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
Primary Keys | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
Foreign Keys | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
Indexes | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
Stored Procs | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
Functions | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
ENUM types | :white_check_mark: | :white_check_mark: | | | |
Custom types | :white_check_mark: | | | | |
Installing
dbtpl can be installed via Release, via Homebrew, via AUR, via Scoop, or via Go:
Installing via Release
- Download a release for your platform
- Extract the dbtpl or dbtpl.exe file from the .tar.bz2 or .zip file
- Move the extracted executable to somewhere on your $PATH (Linux/macOS) or %PATH% (Windows)
Installing via Homebrew (macOS and Linux)
Install dbtpl from the xo/xo tap in the usual way with the brew command:
# install
$ brew install xo/xo/dbtpl
Installing via AUR (Arch Linux)
Install dbtpl from the Arch Linux AUR in the usual way with the yay command:
# install
$ yay -S dbtpl
Alternately, build and install using makepkg:
# clone package repo and make/install package
$ git clone https://aur.archlinux.org/dbtpl.git && cd dbtpl
$ makepkg -si
==> Making package: dbtpl 0.4.4-1 (Sat 11 Nov 2023 02:28:28 PM WIB)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
...
Installing via Scoop (Windows)
Install dbtpl using Scoop:
# Optional: Needed to run a remote script the first time
> Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
# install scoop if not already installed
> irm get.scoop.sh | iex
# install dbtpl with scoop
> scoop install dbtpl
Installing via Go
Install dbtpl in the usual Go fashion:
# install latest dbtpl version
$ go install github.com/xo/dbtpl@latest
Quickstart
The following is a quick overview of using dbtpl on the command-line:
# Make an output directory for generated code.
$ mkdir -p models
# Generate code from your Postgres schema. (Default output folder is models)
$ dbtpl schema postgres://user:pass@host/dbname
# Generate code from a Microsoft SQL schema using a custom template directory (see notes below)
$ mkdir -p mssqlmodels
$ dbtpl schema mssql://user:pass@host/dbname -o mssqlmodels --src custom/templates
# Generate code from a custom SQL query for Postgres
$ dbtpl query postgres://user:pass@host/dbname -M -B -2 -T AuthorResult << ENDSQL
SELECT
a.name::varchar AS name,
b.type::integer AS my_type
FROM authors a
INNER JOIN authortypes b ON a.id = b.author_id
WHERE
a.id = %%authorID int%%
LIMIT %%limit int%%
ENDSQL
# Build generated code - verify it compiles
$ go build ./models/
$ go build ./mssqlmodels/
Command Line Options
The following are dbtpl's command-line commands, arguments, and options:
$ dbtpl --help-long
usage: dbtpl [<flags>] <command> [<args> ...]
Flags:
--help Show context-sensitive help (also try --help-long and
--help-man).
-v, --verbose enable verbose output
--version display version and exit
Commands:
help [<command>...]
Show help.
query [<flags>] <DSN>
Generate code for a database custom query from a template.
-s, --schema=<name> database schema name
-t, --template=go template type (createdb, dot, go, json, yaml;
default: go)
-f, --suffix=<ext> file extension suffix for generated files
(otherwise set by template type)
-o, --out=models out path (default: models)
-a, --append enable append mode
-S, --single=<file> enable single file output
-D, --debug debug generated code (writes generated code
to disk without post processing)
-Q, --query="" custom database query (uses stdin if not
provided)
-T, --type=<name> type name
--type-comment="" type comment
-F, --func=<name> func name
--func-comment="" func comment
-M, --trim enable trimming whitespace
-B, --strip enable stripping type casts
-1, --one enable returning single (only one) result
-l, --flat enable returning unstructured values
-X, --exec enable exec (no introspection performed)
-I, --interpolate enable interpolation of embedded params
-L, --delimiter=%% delimiter used for embedded params (default:
%%)
-Z, --fields=<field> override field names for results
-U, --allow-nulls allow result fields with NULL values
-d, --src=<path> template source directory
-2, --go-not-first disable package comment (ie, not first
generated file)
--go-int32=int int32 type (default: int)
--go-uint32=uint uint32 type (default: uint)
--go-pkg=<name> package name
--go-tag="" ... build tags
--go-import="" ... package imports
--go-uuid=<pkg> uuid type package
--go-custom=<name> package name for custom types
--go-conflict=Val name conflict suffix (default: Val)
--go-initialism=<val> ... add initialism (i.e ID, API, URI)
--go-esc=none ... escape fields (none, schema, table, column,
all; default: none)
-g, --go-field-tag=<tag> field tag
--go-context=only context mode (disable, both, only; default:
only)
--go-inject="" insert code into generated file headers
--go-inject-file=<file> insert code into generated file headers from
a file
--go-legacy enables legacy v1 template funcs
--go-enum-table-prefix enables table name prefix to enums
--json-indent=" " indent spacing
--json-ugly disable indentation
schema [<flags>] <DSN>
Generate code for a database schema from a template.
-s, --schema=<name> database schema name
-t, --template=go template type (createdb, dot, go, json, yaml;
default: go)
-f, --suffix=<ext> file extension suffix for generated files
(otherwise set by template type)
-o, --out=models out path (default: models)
-a, --append enable append mode
-S, --single=<file> enable single file output
-D, --debug debug generated code (writes generated code
to disk without post processing)
-k, --fk-mode=smart foreign key resolution mode (smart, parent,
field, key; default: smart)
-i, --include=<glob> ... include types (<type>)
-e, --exclude=<glob> ... exclude types/fields (<type>[.<field>])
-j, --use-index-names use index names as defined in schema for
generated code
-d, --src=<path> template source directory
--createdb-fmt=<path> fmt command (default:
/home/ken/.npm-global/bin/sql-formatter)
--createdb-fmt-opts=<opts> ...
fmt options (default: -u, -l={{ . }}, -i=2,
--lines-between-queries=2)
--createdb-constraint enable constraint name in output (postgres,
mysql, sqlite3)
--createdb-escape=none escape mode (none, types, all; default: none)
--createdb-engine="" mysql table engine (default: InnoDB)
--createdb-trim-comment trim leading comment from views and procs
(--no-createdb-trim-comment)
--dot-defaults="" ... default statements (default: node
[shape=none, margin=0])
--dot-bold bold header row
--dot-color="" header color (default: lightblue)
--dot-row="" row value template (default: {{ .Name }}: {{
.Type.Type }})
--dot-direction enable edge directions
-2, --go-not-first disable package comment (ie, not first
generated file)
--go-int32=int int32 type (default: int)
--go-uint32=uint uint32 type (default: uint)
--go-pkg=<name> package name
--go-tag="" ... build tags
--go-import="" ... package imports
--go-uuid=<pkg> uuid type package
--go-custom=<name> package name for custom types
--go-conflict=Val name conflict suffix (default: Val)
--go-initialism=<val> ... add initialism (i.e ID, API, URI)
--go-esc=none ... escape fields (none, schema, table, column,
all; default: none)
-g, --go-field-tag=<tag> field tag
--go-context=only context mode (disable, both, only; default:
only)
--go-inject="" insert code into generated file headers
--go-inject-file=<file> insert code into generated file headers from
a file
--go-legacy enables legacy v1 template funcs
--go-enum-table-prefix enables table name prefix to enums
--json-indent=" " indent spacing
--json-ugly disable indentation
--postgres-oids enable postgres OIDs
dump [<flags>] <out>
Dump internal templates to path.
-t, --template=go template type (createdb, dot, go, json, yaml; default:
go)
-f, --suffix=<ext> file extension suffix for generated files (otherwise set
by template type)
About Base Templates
dbtpl provides a set of generic "base" templates for each of the supported databases, but it is understood these templates are not suitable for every organization or every schema out there. As such, you can author your own custom templates, or modify the base templates available in the dbtpl source tree, and use those with dbtpl by passing a directory path via the --src flag.
For non-trivial schemas, custom templates are the most practical, common, and best way to use dbtpl (see the quickstart and related example below).
Custom Template Quickstart
The following is a quick overview of copying the base templates contained in the dbtpl project's templates/ directory, editing to suit, and using with dbtpl:
# Create a working directory
$ mkdir -p my-tpl
# Dump an embedded template to disk
$ dbtpl dump -t createdb my-tpl
# edit base template files
$ vi my-tpl/*.go.tpl
# see command line options for the template
$ dbtpl schema --src my-tpl --help
# generate a schema using the custom template
$ dbtpl schema --src my-tpl -o models postgres://user:pass@host/db
See the Custom Template example below for more information on adapting the base
templates in the dbtpl
source tree for use within your own project.
Storing Project Templates
Ideally, custom templates for your project/schema should be stored alongside your project, and generated as part of an automated build pipeline using go generate:
# Add to custom dbtpl command to go generate:
$ tee -a gen.go << END
package mypackage
//go:generate dbtpl schema postgres://user:pass@host/db -o models --src templates
END
# Run go generate
$ go generate
# Add custom templates and gen.go to project
$ git add templates gen.go && git commit -m 'Adding custom dbtpl templates for models'
Note: via the --template / -t parameter of dbtpl dump, you can generate other templates with dbtpl. The default template is the go template.
Template Language/Syntax
dbtpl templates are standard Go text templates. Please see the documentation for Go's standard text/template package for information concerning the syntax, logic, and variable use within Go templates.
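As a small, self-contained illustration of that syntax (the Table type and its fields here are invented for the example; dbtpl's actual template context is defined in templates/types.go):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Table is a hypothetical stand-in for the kind of context value a
// template might receive; it is not dbtpl's real context type.
type Table struct {
	Name   string
	Fields []string
}

// renderModel executes a small text/template against a Table value
// and returns the rendered Go source as a string.
func renderModel(t Table) string {
	// {{ .Name }} and {{ range .Fields }} are standard template actions;
	// the {{- and -}} forms trim surrounding whitespace.
	const tpl = `type {{ .Name }} struct {
{{- range .Fields }}
	{{ . }} string
{{- end }}
}
`
	var buf bytes.Buffer
	tmpl := template.Must(template.New("model").Parse(tpl))
	if err := tmpl.Execute(&buf, t); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(renderModel(Table{Name: "User", Fields: []string{"ID", "Name"}}))
}
```

The whitespace-trimming markers are why the generated struct contains no blank lines between fields; dbtpl's base templates rely on the same mechanism to produce clean output.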
Template Context and File Layout
The contexts (ie, the . identifier in templates) made available to custom templates can be found in templates/types.go (see the table below for more information on which file uses which type). Each language has its own set of templates for $TYPE, available in the templates/ directory.
Template File | Description |
---|---|
*.go | Template logic |
hdr.dbtpl.*.tpl | File header template. Executed with content for each generated file. |
db.dbtpl.*.tpl | Package level template with base types and interface data. Generated once per package. |
query.dbtpl.*.tpl | Template for custom query execution. |
schema.dbtpl.*.tpl | Template for custom query's generated type. |
* is the template type, for example go, json, yaml, etc.
Examples
Example: End-to-End
Please see the booktest example for a full end-to-end example for each supported database, showcasing how to use a database schema with dbtpl, and the resulting code generated by dbtpl.
Additionally, please see the northwind and django examples for a demonstration of running dbtpl against larger schemas and against databases from other frameworks. Please note that these examples are works in progress, and may not work properly in all scenarios.
Example: Ignoring Fields
Sometimes you may wish to have the database manage the values of columns instead of having them managed by code generated by dbtpl. When you need dbtpl to ignore fields for a database schema, you can use the -e or --exclude flag. For example, a common use case is to define a table with created_at and/or modified_at timestamp fields, where the database is responsible for setting column values on INSERT and UPDATE, respectively.
Consider the following PostgreSQL schema, where a users table has created_at and modified_at fields, created_at has a default value of now(), and modified_at is updated by a trigger on UPDATE:
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name text NOT NULL DEFAULT '' UNIQUE,
    created_at timestamptz DEFAULT now(),
    modified_at timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION update_modified_column() RETURNS TRIGGER AS $$
BEGIN
    NEW.modified_at = now();
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';

CREATE TRIGGER update_users_modtime BEFORE UPDATE ON users
    FOR EACH ROW EXECUTE PROCEDURE update_modified_column();
We can ensure that these columns are managed by PostgreSQL, and not by the application logic generated by dbtpl, by passing the --exclude or -e flag:
# Ignore special fields
$ dbtpl schema postgres://user:pass@host/db -e users.created_at -e users.modified_at
# or, to ignore these fields in all tables
$ dbtpl schema postgres://user:pass@host/db -e *.created_at -e *.modified_at
Example: Custom Template -- adding a GetMostRecent lookup for all tables (Go)
Often, a schema has a common layout/pattern, such as every table having a created_at and modified_at field (as in the PostgreSQL schema in the previous example). It is then a common use-case to have a GetMostRecent lookup for each table type, retrieving the most recently modified rows for each table (up to some limit, N).
To accomplish this with dbtpl, we will need to create our own set of custom templates, and then add a GetMostRecent lookup to the typedef.dbtpl.go.tpl template.
First, we dump the base dbtpl Go template:
$ mkdir -p my-tpl
$ dbtpl dump my-tpl
We can now modify the templates to suit our specific schema, adding lookups, helpers, or anything else necessary for our schema.
To add a GetMostRecent lookup, we edit our copy of the typedef.dbtpl.go.tpl template:
$ vi my-tpl/typedef.dbtpl.go.tpl
And add the following templated GetMostRecent func at the end of the file:
// GetMostRecent{{ $type.Name }} returns n most recent rows from '{{ $table }}',
// ordered by "created_at" in descending order.
func GetMostRecent{{ $type.Name }}(ctx context.Context, db DB, n int) ([]*{{ $type.Name }}, error) {
    const sqlstr = `SELECT ` +
        `{{ $type.Fields "created_at" "modified_at" }} ` +
        `FROM {{ $table }} ` +
        `ORDER BY created_at DESC LIMIT $1`
    rows, err := db.QueryContext(ctx, sqlstr, n)
    if err != nil {
        return nil, logerror(err)
    }
    defer rows.Close()
    // load results
    var res []*{{ $type.Name }}
    for rows.Next() {
        {{ $short }} := {{ $type.Name }}{
        {{- if $type.PrimaryKey }}
            _exists: true,
        {{ end -}}
        }
        // scan
        if err := rows.Scan({{ fieldnames $type.Fields (print "&" $short) }}); err != nil {
            return nil, logerror(err)
        }
        res = append(res, &{{ $short }})
    }
    return res, nil
}
We can then use the templates in conjunction with dbtpl to generate our "model" code:
$ dbtpl schema postgres://user:pass@localhost/dbname --src my-tpl/
There will now be a GetMostRecentUsers func defined in models/user.dbtpl.go, which can be used as follows:
db, err := dburl.Open("postgres://user:pass@localhost/dbname")
if err != nil { /* ... */ }
// retrieve 15 most recent items
mostRecentUsers, err := models.GetMostRecentUsers(context.Background(), db, 15)
if err != nil { /* ... */ }
for _, user := range mostRecentUsers {
    log.Printf("got user: %+v", user)
}
Using SQL Drivers
Please note that the base dbtpl templates do not import any SQL drivers. It is left for the user of dbtpl's generated code to import the actual drivers. For reference, these are the expected drivers to use with the code generated by dbtpl:
Database (driver) | Package |
---|---|
PostgreSQL (postgres) | github.com/lib/pq |
SQLite3 (sqlite3) | github.com/mattn/go-sqlite3 |
MySQL (mysql) | github.com/go-sql-driver/mysql |
Microsoft SQL Server (mssql) | github.com/microsoft/go-mssqldb |
Oracle (ora) | github.com/sijms/go-ora/v2 |
Additionally, please see below for usage notes on specific SQL database drivers.
MySQL (mysql)
If your schema or custom query contains table or column names that need to be escaped using any of the --escape-* options, you must pass the sql_mode=ansi option to the MySQL driver:
$ dbtpl --escape-all 'mysql://user:pass@host/?parseTime=true&sql_mode=ansi' -o models
And when opening a database connection:
db, err := dburl.Open("mysql://user:pass@host/?parseTime=true&sql_mode=ansi")
Additionally, when working with date/time column types in MySQL, one should pass the parseTime=true option to the MySQL driver:
$ dbtpl schema 'mysql://user:pass@host/dbname?parseTime=true' -o models
And when opening a database connection:
db, err := dburl.Open("mysql://user:pass@host/dbname?parseTime=true")
SQLite3 (sqlite3)
While not required, one should specify the loc=auto option when using dbtpl with a SQLite3 database:
$ dbtpl schema 'file:mydatabase.sqlite3?loc=auto' -o models
And when opening a database connection:
db, err := dburl.Open("file:mydatabase.sqlite3?loc=auto")
About Primary Keys
For row inserts, dbtpl determines whether the primary key is automatically generated by the database or must be provided by the application for the table row being inserted. For example, when a table has a primary key that is also a foreign key to another table, or has multiple primary keys in a many-to-many link table, it is desirable for the application, rather than the database, to provide the primary key(s) for the insert.
dbtpl will query the schema to determine if the database provides an automatic primary key; if the table does not provide one, the application must supply the primary key for the object passed to the Insert method. Below is information on how the logic works for each database type to determine if the DB automatically provides the PK.
PostgreSQL Auto PK Logic
- Checks for a sequence that is owned by the table in question.
MySQL Auto PK Logic
- Checks for an autoincrement row in the information_schema for the table in question.
SQLite Auto PK Logic
- Checks the SQL that is used to generate the table contains the AUTOINCREMENT keyword.
- Checks that the table was created with the primary key type of INTEGER.
If either of the above conditions is satisfied, the PK is determined to be automatically provided by the DB. For integer PKs, when you instead want the PK to be manually provided, you can define the key type as INT instead of INTEGER, for example as in the following many-to-many link table:
CREATE TABLE site_contacts (
    contact_id INT NOT NULL,
    site_id INT NOT NULL,
    PRIMARY KEY (contact_id, site_id),
    FOREIGN KEY (contact_id) REFERENCES contacts (contact_id),
    FOREIGN KEY (site_id) REFERENCES sites (site_id)
);
SQL Server Auto PK Logic
- Checks for an identity associated with one of the columns for the table in question.
Oracle Auto PK Logic
ALWAYS GENERATED types will be parsed as Auto PK types for Oracle.
About dbtpl: Design, Origin, Philosophy, and History
dbtpl can likely get you 99% "of the way there" on medium or large database schemas, and 100% of the way there for small or trivial database schemas. In short, dbtpl is a great launching point for developing standardized packages for standard database abstractions/relationships, and dbtpl's most common use-case is indeed in a code generation pipeline, ala stringer.
Design
dbtpl is NOT designed to be an ORM or to generate an ORM. Instead, dbtpl is designed to vastly reduce the overhead/redundancy of (re-)writing types and funcs for common database queries/relationships -- it is not meant to be a "silver bullet".
History
dbtpl was originally developed while migrating a large application written in PHP to Go. The schema in use in the original app, while well-designed, had become inconsistent over multiple iterations/generations, mainly due to different naming styles adopted by various developers/database admins over the preceding years. Additionally, some components had been written in different languages (Ruby, Java) and had also accumulated significant drift from the original application and accompanying schema. Simultaneously, a large amount of growth meant that the PHP/Ruby code could no longer efficiently serve the traffic volumes.
In late 2014/early 2015, a decision was made to unify and strip out certain backend services and to fully isolate the API from the original application, allowing the various components to instead speak to a common API layer instead of directly to the database, and to build that service layer in Go.
However, unraveling the old PHP/Ruby/Java code became a large headache, as the code, the database, and the API, all had significant drift -- thus, underlying function names, fields, and API methods no longer coincided with the actual database schema, and were named differently in each language. As such, after a round of standardizing names, dropping cruft, and adding a few relationship changes to the schema, the various codebases were fixed to match the schema changes. After that was determined to be a success, the next target was to rewrite the backend services in Go.
In order to keep a similar and consistent workflow for the developers, the previous code generator (written in PHP and Twig templates) was modified to generate Go code. Additionally, at this time, but tangential to the story, the API definitions were ported from JSON to Protobuf to make use of its code generation abilities as well.
dbtpl is the open source version of that code generation tool, and is the fruit of those development efforts. It is hoped that others will be able to use and expand dbtpl to support other databases -- SQL or otherwise -- and that dbtpl can become a common tool in any Go developer's toolbox.
In May of 2025, the project was renamed from xo to dbtpl to more readily convey the tool's purpose.
Goals
Part of dbtpl's goals is to avoid writing an ORM, or an ORM-like, in Go, and to instead generate static, type-safe, fast, and idiomatic Go code across languages and databases.
Additionally, the dbtpl developers are of the opinion that relational databases should have proper, well-designed relationships, and that all the related definitions should reside within the database schema itself: ie, a "self-documenting" schema. dbtpl is a means to that end.
Related Projects
- dburl - a Go package providing a standard, URL style mechanism for parsing and opening database connection URLs
- usql - a universal command-line interface for SQL databases
Other Projects
The following projects work with similar concepts as dbtpl:
Go Generators
Go ORM-likes