Top Related Projects
Pure Go Postgres driver for database/sql
The fantastic ORM library for Golang, aims to be developer friendly
general purpose extensions to golang's database/sql
PostgreSQL driver and toolkit for Go
Microsoft SQL server driver written in go language
sqlite3 driver for go using database/sql
Quick Overview
Go-sql-driver/mysql is a popular MySQL driver for Go's database/sql package. It provides a pure Go MySQL driver implementation, allowing Go applications to interact with MySQL databases efficiently and reliably.
Pros
- Pure Go implementation, ensuring easy installation and cross-platform compatibility
- Excellent performance and low memory footprint
- Supports the latest MySQL and MariaDB features
- Actively maintained with frequent updates and bug fixes
Cons
- Limited to MySQL and MariaDB databases only
- May require additional configuration for optimal performance in high-concurrency scenarios
- Some advanced MySQL features might not be fully supported or require workarounds
Code Examples
- Connecting to a MySQL database:
import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)
db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/dbname")
if err != nil {
log.Fatal(err)
}
defer db.Close()
- Executing a simple query:
rows, err := db.Query("SELECT id, name FROM users WHERE active = ?", 1)
if err != nil {
log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
var id int
var name string
if err := rows.Scan(&id, &name); err != nil {
log.Fatal(err)
}
fmt.Printf("ID: %d, Name: %s\n", id, name)
}
- Inserting data with prepared statements:
stmt, err := db.Prepare("INSERT INTO users(name, email) VALUES(?, ?)")
if err != nil {
log.Fatal(err)
}
defer stmt.Close()
result, err := stmt.Exec("John Doe", "john@example.com")
if err != nil {
log.Fatal(err)
}
lastID, err := result.LastInsertId()
if err != nil {
log.Fatal(err)
}
fmt.Printf("Inserted user with ID: %d\n", lastID)
Getting Started
- Install the driver:
  go get -u github.com/go-sql-driver/mysql
- Import the driver in your Go code:
  import (
      "database/sql"
      _ "github.com/go-sql-driver/mysql"
  )
- Connect to your MySQL database:
  db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/dbname")
  if err != nil {
      log.Fatal(err)
  }
  defer db.Close()
- Start using the db object to interact with your MySQL database using standard database/sql methods.
Competitor Comparisons
Pure Go Postgres driver for database/sql
Pros of pq
- Native PostgreSQL driver, offering better performance for PostgreSQL-specific features
- Supports more advanced PostgreSQL-specific data types and functions
- Better handling of PostgreSQL-specific error codes and messages
Cons of pq
- Limited to PostgreSQL databases, lacking the flexibility of mysql for multiple database systems
- Less active development and community support compared to mysql
- May require more manual configuration for certain connection parameters
Code Comparison
pq:
import (
"database/sql"
_ "github.com/lib/pq"
)
db, err := sql.Open("postgres", "user=pqgotest dbname=pqgotest sslmode=verify-full")
mysql:
import (
"database/sql"
_ "github.com/go-sql-driver/mysql"
)
db, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/dbname")
Both drivers implement the database/sql interface, but pq uses a space-separated connection string, while mysql uses a URL-like format. pq also supports more PostgreSQL-specific connection options, such as sslmode.
The fantastic ORM library for Golang, aims to be developer friendly
Pros of GORM
- Provides an Object-Relational Mapping (ORM) layer, simplifying database operations
- Offers features like migrations, associations, and hooks out of the box
- Supports multiple databases with a unified interface
Cons of GORM
- Higher learning curve due to additional abstractions and features
- Potential performance overhead for complex queries
- May obscure underlying SQL operations, making optimization more challenging
Code Comparison
GORM:
db.Create(&User{Name: "John", Age: 30})
var user User
db.First(&user, "name = ?", "John")
mysql:
stmt, _ := db.Prepare("INSERT INTO users(name, age) VALUES(?, ?)")
stmt.Exec("John", 30)
rows, _ := db.Query("SELECT * FROM users WHERE name = ?", "John")
Summary
GORM provides a higher-level abstraction for database operations, offering convenience and additional features at the cost of some performance and direct SQL control. The mysql driver, on the other hand, provides a lower-level interface, giving more control over SQL operations but requiring more manual work for common tasks.
general purpose extensions to golang's database/sql
Pros of sqlx
- Provides a higher-level API with additional features like named query parameters and struct scanning
- Offers convenient methods for common operations, reducing boilerplate code
- Supports multiple database drivers, not limited to MySQL
Cons of sqlx
- Adds an extra layer of abstraction, which may impact performance in some cases
- Has a steeper learning curve compared to the simpler mysql driver
- Requires additional setup and configuration for advanced features
Code Comparison
mysql:
rows, err := db.Query("SELECT id, name FROM users WHERE id = ?", 1)
var id int
var name string
for rows.Next() {
err := rows.Scan(&id, &name)
}
sqlx:
var users []User
err := db.Select(&users, "SELECT id, name FROM users WHERE id = ?", 1)
Summary
sqlx provides a more feature-rich and convenient API for database operations, while mysql offers a simpler, lower-level interface. sqlx is better suited for complex applications with diverse database needs, while mysql is ideal for projects requiring direct control over database interactions and maximum performance. The choice between the two depends on the specific requirements of your project and your preferred level of abstraction.
PostgreSQL driver and toolkit for Go
Pros of pgx
- Native support for PostgreSQL-specific features like LISTEN/NOTIFY and COPY
- Better performance due to custom protocol implementation
- More extensive type support, including custom types and arrays
Cons of pgx
- Steeper learning curve due to more complex API
- Less widespread adoption compared to mysql driver
- Specific to PostgreSQL, limiting database flexibility
Code Comparison
mysql:
db, err := sql.Open("mysql", "user:password@/dbname")
rows, err := db.Query("SELECT * FROM users WHERE id = ?", 1)
pgx:
conn, err := pgx.Connect(context.Background(), "postgres://user:password@localhost:5432/dbname")
rows, err := conn.Query(context.Background(), "SELECT * FROM users WHERE id = $1", 1)
Both drivers support the database/sql interface, but pgx offers additional features when used directly. The pgx example shows its context-aware API, while mysql uses the standard database/sql interface.
Microsoft SQL server driver written in go language
Pros of go-mssqldb
- Native support for Microsoft SQL Server, including specific features and data types
- Better performance for MSSQL-specific operations and queries
- More comprehensive support for MSSQL authentication methods, including Windows Authentication
Cons of go-mssqldb
- Limited to Microsoft SQL Server, less versatile than mysql driver
- Smaller community and potentially fewer resources compared to mysql driver
- May require more setup and configuration for non-Windows environments
Code Comparison
go-mssqldb:
db, err := sql.Open("mssql", "server=localhost;user id=sa;password=secret;database=mydb")
mysql:
db, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/mydb")
Both drivers implement the database/sql interface, allowing for similar usage patterns. The main difference lies in the connection string format and database-specific features. go-mssqldb provides more options for MSSQL-specific configurations, while mysql offers a simpler connection string format for MySQL databases.
sqlite3 driver for go using database/sql
Pros of go-sqlite3
- Serverless and self-contained, ideal for embedded systems and local applications
- Lightweight and requires no configuration, making it easier to set up and use
- Supports concurrent read operations, enhancing performance for read-heavy workloads
Cons of go-sqlite3
- Limited scalability compared to mysql, not suitable for large-scale applications
- Lacks advanced features like user management and network access control
- May have slower write performance, especially for concurrent write operations
Code Comparison
go-sqlite3:
import (
"database/sql"
_ "github.com/mattn/go-sqlite3"
)
db, err := sql.Open("sqlite3", "./database.db")
mysql:
import (
"database/sql"
_ "github.com/go-sql-driver/mysql"
)
db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/dbname")
Both libraries use the standard database/sql interface, making it easy to switch between them. The main difference lies in the connection string format and the imported driver. go-sqlite3 uses a file path for the database, while mysql requires a more complex connection string including user credentials and network details.
README
Go-MySQL-Driver
A MySQL-Driver for Go's database/sql package
Features
- Lightweight and fast
- Native Go implementation. No C-bindings, just pure Go
- Connections over TCP/IPv4, TCP/IPv6, Unix domain sockets or custom protocols
- Automatic handling of broken connections
- Automatic Connection Pooling (by database/sql package)
- Supports queries larger than 16MB
- Full sql.RawBytes support
- Intelligent LONG DATA handling in prepared statements
- Secure LOAD DATA LOCAL INFILE support with file allowlisting and io.Reader support
- Optional time.Time parsing
- Optional placeholder interpolation
Requirements
- Go 1.20 or higher. We aim to support the 3 latest versions of Go.
- MySQL (5.7+) and MariaDB (10.5+) are supported.
- TiDB is supported by PingCAP.
- go-mysql would work with Percona Server, Google CloudSQL or Sphinx (2.2.3+).
- Maintainers won't support them. Do not expect issues to be investigated and resolved by maintainers.
- Investigate issues yourself and please send a pull request to fix it.
Installation
Simply install the package to your $GOPATH with the go tool from shell:
go get -u github.com/go-sql-driver/mysql
Make sure Git is installed on your machine and in your system's PATH.
Usage
Go MySQL Driver is an implementation of Go's database/sql/driver interface. You only need to import the driver and can use the full database/sql API then.
Use mysql as driverName and a valid DSN as dataSourceName:
import (
"database/sql"
"time"
_ "github.com/go-sql-driver/mysql"
)
// ...
db, err := sql.Open("mysql", "user:password@/dbname")
if err != nil {
panic(err)
}
// See "Important settings" section.
db.SetConnMaxLifetime(time.Minute * 3)
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(10)
Examples are available in our Wiki.
Important settings
db.SetConnMaxLifetime() is required to ensure connections are closed by the driver safely before the connection is closed by the MySQL server, OS, or other middleware. Since some middleware closes idle connections after 5 minutes, we recommend a timeout shorter than 5 minutes. This setting helps with load balancing and changing system variables too.
db.SetMaxOpenConns() is highly recommended to limit the number of connections used by the application. There is no recommended limit number because it depends on the application and the MySQL server.
db.SetMaxIdleConns() is recommended to be set the same as db.SetMaxOpenConns(). When it is smaller than SetMaxOpenConns(), connections can be opened and closed much more frequently than you expect. Idle connections can be closed by db.SetConnMaxLifetime(). If you want to close idle connections more rapidly, you can use db.SetConnMaxIdleTime() since Go 1.15.
DSN (Data Source Name)
The Data Source Name has a common format, like e.g. PEAR DB uses it, but without type-prefix (optional parts marked by squared brackets):
[username[:password]@][protocol[(address)]]/dbname[?param1=value1&...&paramN=valueN]
A DSN in its fullest form:
username:password@protocol(address)/dbname?param=value
Except for the database name, all values are optional. So the minimal DSN is:
/dbname
If you do not want to preselect a database, leave dbname empty:
/
This has the same effect as an empty DSN string.
dbname is escaped by PathEscape() since v1.8.0. If your database name is dbname/withslash, it becomes:
/dbname%2Fwithslash
Alternatively, Config.FormatDSN can be used to create a DSN string by filling a struct.
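As an illustrative sketch (the field values below are placeholders), a DSN can be built from a struct instead of by string concatenation:
import "github.com/go-sql-driver/mysql"

// Build the DSN from a mysql.Config struct instead of concatenating strings.
cfg := mysql.NewConfig()
cfg.User = "user"
cfg.Passwd = "password"
cfg.Net = "tcp"
cfg.Addr = "127.0.0.1:3306"
cfg.DBName = "dbname"
cfg.ParseTime = true

// Produces a DSN like: user:password@tcp(127.0.0.1:3306)/dbname?parseTime=true
db, err := sql.Open("mysql", cfg.FormatDSN())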
Password
Passwords can consist of any character. Escaping is not necessary.
Protocol
See net.Dial for more information which networks are available. In general you should use a Unix domain socket if available and TCP otherwise for best performance.
Address
For TCP and UDP networks, addresses have the form host[:port]. If port is omitted, the default port will be used. If host is a literal IPv6 address, it must be enclosed in square brackets. The functions net.JoinHostPort and net.SplitHostPort manipulate addresses in this form.
For Unix domain sockets the address is the absolute path to the MySQL-Server-socket, e.g. /var/run/mysqld/mysqld.sock or /tmp/mysql.sock.
Parameters
Parameters are case-sensitive!
Notice that any of true, TRUE, True or 1 is accepted to stand for a true boolean value. Not surprisingly, false can be specified as any of: false, FALSE, False or 0.
allowAllFiles
Type: bool
Valid Values: true, false
Default: false
allowAllFiles=true disables the file allowlist for LOAD DATA LOCAL INFILE and allows all files.
Might be insecure!
allowCleartextPasswords
Type: bool
Valid Values: true, false
Default: false
allowCleartextPasswords=true allows using the cleartext client side plugin if required by an account, such as one defined with the PAM authentication plugin. Sending passwords in clear text may be a security problem in some configurations. To avoid problems if there is any possibility that the password would be intercepted, clients should connect to MySQL Server using a method that protects the password. Possibilities include TLS / SSL, IPsec, or a private network.
allowFallbackToPlaintext
Type: bool
Valid Values: true, false
Default: false
allowFallbackToPlaintext=true acts like a --ssl-mode=PREFERRED MySQL client as described in Command Options for Connecting to the Server.
allowNativePasswords
Type: bool
Valid Values: true, false
Default: true
allowNativePasswords=false disallows the usage of the MySQL native password method.
allowOldPasswords
Type: bool
Valid Values: true, false
Default: false
allowOldPasswords=true allows the usage of the insecure old password method. This should be avoided, but is necessary in some cases. See also the old_passwords wiki page.
charset
Type: string
Valid Values: <name>
Default: none
Sets the charset used for client-server interaction ("SET NAMES <value>"). If multiple charsets are set (separated by a comma), the following charset is used if setting the charset fails. This enables for example support for utf8mb4 (introduced in MySQL 5.5.3) with fallback to utf8 for older servers (charset=utf8mb4,utf8).
See also Unicode Support.
checkConnLiveness
Type: bool
Valid Values: true, false
Default: true
On supported platforms connections retrieved from the connection pool are checked for liveness before using them. If the check fails, the respective connection is marked as bad and the query retried with another connection.
checkConnLiveness=false disables this liveness check of connections.
collation
Type: string
Valid Values: <name>
Default: utf8mb4_general_ci
Sets the collation used for client-server interaction on connection. In contrast to charset, collation does not issue additional queries. If the specified collation is unavailable on the target server, the connection will fail.
A list of valid charsets for a server is retrievable with SHOW COLLATION.
The default collation (utf8mb4_general_ci) is supported from MySQL 5.5. You should use an older collation (e.g. utf8_general_ci) for older MySQL.
Collations for the charsets "ucs2", "utf16", "utf16le", and "utf32" can not be used (ref).
See also Unicode Support.
clientFoundRows
Type: bool
Valid Values: true, false
Default: false
clientFoundRows=true causes an UPDATE to return the number of matching rows instead of the number of rows changed.
columnsWithAlias
Type: bool
Valid Values: true, false
Default: false
When columnsWithAlias is true, calls to sql.Rows.Columns() will return the table alias and the column name separated by a dot. For example:
SELECT u.id FROM users as u
will return u.id instead of just id if columnsWithAlias=true.
interpolateParams
Type: bool
Valid Values: true, false
Default: false
If interpolateParams is true, placeholders (?) in calls to db.Query() and db.Exec() are interpolated into a single query string with the given parameters. This reduces the number of roundtrips, since otherwise (with interpolateParams=false) the driver has to prepare a statement, execute it with the given parameters, and close the statement again.
This can not be used together with the multibyte encodings BIG5, CP932, GB2312, GBK or SJIS. These are rejected as they may introduce a SQL injection vulnerability!
loc
Type: string
Valid Values: <escaped name>
Default: UTC
Sets the location for time.Time values (when using parseTime=true). "Local" sets the system's location. See time.LoadLocation for details.
Note that this sets the location for time.Time values but does not change MySQL's time_zone setting. For that see the time_zone system variable, which can also be set as a DSN parameter.
Please keep in mind that param values must be url.QueryEscape'ed. Alternatively you can manually replace the / with %2F. For example US/Pacific would be loc=US%2FPacific.
timeTruncate
Type: duration
Default: 0
Truncate time values to the specified duration. The value must be a decimal number with a unit suffix ("ms", "s", "m", "h"), such as "30s", "0.5m" or "1m30s".
maxAllowedPacket
Type: decimal number
Default: 64*1024*1024
Max packet size allowed in bytes. The default value is 64 MiB and should be adjusted to match the server settings. maxAllowedPacket=0 can be used to automatically fetch the max_allowed_packet variable from the server on every connection.
multiStatements
Type: bool
Valid Values: true, false
Default: false
Allow multiple statements in one query. This can be used to batch multiple queries. Use Rows.NextResultSet() to get the result of the second and subsequent queries.
When multiStatements is used, ? parameters must only be used in the first statement. interpolateParams can be used to avoid this limitation unless a prepared statement is used explicitly.
It's possible to access the last inserted ID and number of affected rows for multiple statements by using sql.Conn.Raw() and mysql.Result. For example:
conn, _ := db.Conn(ctx)
conn.Raw(func(conn any) error {
	ex := conn.(driver.Execer)
	res, err := ex.Exec(`
	UPDATE point SET x = 1 WHERE y = 2;
	UPDATE point SET x = 2 WHERE y = 3;
	`, nil)
	if err != nil {
		return err
	}
	// Both slices have 2 elements.
	log.Print(res.(mysql.Result).AllRowsAffected())
	log.Print(res.(mysql.Result).AllLastInsertIds())
	return nil
})
parseTime
Type: bool
Valid Values: true, false
Default: false
parseTime=true changes the output type of DATE and DATETIME values to time.Time instead of []byte / string.
A date or datetime value like 0000-00-00 00:00:00 is converted into the zero value of time.Time.
readTimeout
Type: duration
Default: 0
I/O read timeout. The value must be a decimal number with a unit suffix ("ms", "s", "m", "h"), such as "30s", "0.5m" or "1m30s".
rejectReadOnly
Type: bool
Valid Values: true, false
Default: false
rejectReadOnly=true causes the driver to reject read-only connections. This is for a possible race condition during an automatic failover, where the mysql client gets connected to a read-only replica after the failover.
Note that this should be a fairly rare case, as an automatic failover normally happens when the primary is down, and the race condition shouldn't happen unless it comes back up online as soon as the failover is kicked off. On the other hand, when this happens, a MySQL application can get stuck on a read-only connection until restarted. It is however fairly easy to reproduce, for example, using a manual failover on AWS Aurora's MySQL-compatible cluster.
If you are not relying on read-only transactions to reject writes that aren't supposed to happen, setting this on some MySQL providers (such as AWS Aurora) is safer for failovers.
Note that ERROR 1290 can be returned for a read-only server and this option will cause a retry for that error. However, the same error number is used for some other cases. You should ensure your application will never cause an ERROR 1290 except for read-only mode when enabling this option.
serverPubKey
Type: string
Valid Values: <name>
Default: none
Server public keys can be registered with mysql.RegisterServerPubKey, which can then be used by the assigned name in the DSN.
Public keys are used to transmit encrypted data, e.g. for authentication.
If the server's public key is known, it should be set manually to avoid expensive and potentially insecure transmissions of the public key from the server to the client each time it is required.
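A rough sketch of registering a server public key, assuming the key is stored locally as a PEM file (the file path and the name "mykey" are placeholders):
import (
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"github.com/go-sql-driver/mysql"
)

// Load the server's RSA public key from a local PEM file ...
data, err := os.ReadFile("/path/to/server_pub_key.pem")
if err != nil {
	log.Fatal(err)
}
block, _ := pem.Decode(data)
pub, err := x509.ParsePKIXPublicKey(block.Bytes)
if err != nil {
	log.Fatal(err)
}
// ... and register it under a name that can be referenced in the DSN,
// e.g. user:password@tcp(localhost:3306)/dbname?serverPubKey=mykey
mysql.RegisterServerPubKey("mykey", pub.(*rsa.PublicKey))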
timeout
Type: duration
Default: OS default
Timeout for establishing connections, aka dial timeout. The value must be a decimal number with a unit suffix ("ms", "s", "m", "h"), such as "30s", "0.5m" or "1m30s".
tls
Type: bool / string
Valid Values: true, false, skip-verify, preferred, <name>
Default: false
tls=true enables TLS / SSL encrypted connection to the server. Use skip-verify if you want to use a self-signed or invalid certificate (server side) or use preferred to use TLS only when advertised by the server. This is similar to skip-verify, but additionally allows a fallback to a connection which is not encrypted. Neither skip-verify nor preferred add any reliable security. You can use a custom TLS config after registering it with mysql.RegisterTLSConfig.
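For example, a minimal sketch of registering a custom TLS config that trusts a private CA (the file path and the name "custom" are placeholders):
import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"

	"github.com/go-sql-driver/mysql"
)

// Build a cert pool containing the private CA certificate.
rootCertPool := x509.NewCertPool()
caPEM, err := os.ReadFile("/path/to/ca-cert.pem")
if err != nil {
	log.Fatal(err)
}
if ok := rootCertPool.AppendCertsFromPEM(caPEM); !ok {
	log.Fatal("failed to append CA certificate")
}
// Register the config; reference it in the DSN with tls=custom.
if err := mysql.RegisterTLSConfig("custom", &tls.Config{RootCAs: rootCertPool}); err != nil {
	log.Fatal(err)
}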
writeTimeout
Type: duration
Default: 0
I/O write timeout. The value must be a decimal number with a unit suffix ("ms", "s", "m", "h"), such as "30s", "0.5m" or "1m30s".
connectionAttributes
Type: comma-delimited string of user-defined "key:value" pairs
Valid Values: (<name1>:<value1>,<name2>:<value2>,...)
Default: none
Connection attributes are key-value pairs that application programs can pass to the server at connect time.
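For example (the attribute names and values below are arbitrary placeholders):
user:password@tcp(localhost:3306)/dbname?connectionAttributes=program_name:myapp,owner:backend-team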
System Variables
Any other parameters are interpreted as system variables:
- <boolean_var>=<value>: SET <boolean_var>=<value>
- <enum_var>=<value>: SET <enum_var>=<value>
- <string_var>=%27<value>%27: SET <string_var>='<value>'
Rules:
- The values for string variables must be quoted with '.
- The values must also be url.QueryEscape'ed! (which implies values of string variables must be wrapped with %27).
Examples:
- autocommit=1: SET autocommit=1
- time_zone=%27Europe%2FParis%27: SET time_zone='Europe/Paris'
- transaction_isolation=%27REPEATABLE-READ%27: SET transaction_isolation='REPEATABLE-READ'
Examples
user@unix(/path/to/socket)/dbname
root:pw@unix(/tmp/mysql.sock)/myDatabase?loc=Local
user:password@tcp(localhost:5555)/dbname?tls=skip-verify&autocommit=true
Treat warnings as errors by setting the system variable sql_mode:
user:password@/dbname?sql_mode=TRADITIONAL
TCP via IPv6:
user:password@tcp([de:ad:be:ef::ca:fe]:80)/dbname?timeout=90s&collation=utf8mb4_unicode_ci
TCP on a remote host, e.g. Amazon RDS:
id:password@tcp(your-amazonaws-uri.com:3306)/dbname
Google Cloud SQL on App Engine:
user:password@unix(/cloudsql/project-id:region-name:instance-name)/dbname
TCP using default port (3306) on localhost:
user:password@tcp/dbname?charset=utf8mb4,utf8&sys_var=esc%40ped
Use the default protocol (tcp) and host (localhost:3306):
user:password@/dbname
No Database preselected:
user:password@/
Connection pool and timeouts
The connection pool is managed by Go's database/sql package. For details on how to configure the size of the pool and how long connections stay in the pool see *DB.SetMaxOpenConns, *DB.SetMaxIdleConns, and *DB.SetConnMaxLifetime in the database/sql documentation. The read, write, and dial timeouts for each individual connection are configured with the DSN parameters readTimeout, writeTimeout, and timeout, respectively.
ColumnType Support
This driver supports the ColumnType interface introduced in Go 1.8, with the exception of ColumnType.Length(), which is currently not supported. Unsigned database type names are reported with an UNSIGNED prefix for INT, TINYINT, SMALLINT, MEDIUMINT, and BIGINT (e.g. UNSIGNED INT).
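A short sketch of reading column metadata through database/sql (table and column names are illustrative):
rows, err := db.Query("SELECT id, name, created_at FROM users")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()

// ColumnTypes() exposes the driver-reported metadata for each column.
columnTypes, err := rows.ColumnTypes()
if err != nil {
	log.Fatal(err)
}
for _, ct := range columnTypes {
	nullable, _ := ct.Nullable()
	log.Printf("column=%s dbType=%s nullable=%v scanType=%v",
		ct.Name(), ct.DatabaseTypeName(), nullable, ct.ScanType())
}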
context.Context Support
Go 1.8 added database/sql support for context.Context. This driver supports query timeouts and cancellation via contexts. See context support in the database/sql package for more details.
[!IMPORTANT] The QueryContext, ExecContext, etc. variants provided by database/sql will cause the connection to be closed if the provided context is cancelled or timed out before the result is received by the driver.
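A minimal sketch of a query with a timeout (the query and names are illustrative):
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()

// The query is cancelled if it has not completed within 3 seconds.
rows, err := db.QueryContext(ctx, "SELECT id, name FROM users WHERE active = ?", 1)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()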
LOAD DATA LOCAL INFILE support
For this feature you need direct access to the package. Therefore you must change the import path (no _):
import "github.com/go-sql-driver/mysql"
Files must be explicitly allowed by registering them with mysql.RegisterLocalFile(filepath) (recommended), or the allowlist check must be deactivated by using the DSN parameter allowAllFiles=true (Might be insecure!).
To use an io.Reader, a handler function must be registered with mysql.RegisterReaderHandler(name, handler) which returns an io.Reader or io.ReadCloser. The Reader is then available with the filepath Reader::<name>. Choose different names for different handlers and call DeregisterReaderHandler when you don't need it anymore.
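A rough sketch of the file-based variant (the file path and table name are placeholders; the MySQL server must also permit LOAD DATA LOCAL):
import "github.com/go-sql-driver/mysql"

// Allow this specific file for LOAD DATA LOCAL INFILE.
mysql.RegisterLocalFile("/var/data/users.csv")

_, err := db.Exec(`LOAD DATA LOCAL INFILE '/var/data/users.csv'
	INTO TABLE users
	FIELDS TERMINATED BY ','
	LINES TERMINATED BY '\n'`)
if err != nil {
	log.Fatal(err)
}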
See the godoc of Go-MySQL-Driver for details.
time.Time support
The default internal output type of MySQL DATE and DATETIME values is []byte, which allows you to scan the value into a []byte, string or sql.RawBytes variable in your program.
However, many want to scan MySQL DATE and DATETIME values into time.Time variables, which is the logical equivalent in Go to DATE and DATETIME in MySQL. You can do that by changing the internal output type from []byte to time.Time with the DSN parameter parseTime=true. You can set the default time.Time location with the loc DSN parameter.
Caution: As of Go 1.1, this makes time.Time the only variable type you can scan DATE and DATETIME values into. This breaks for example sql.RawBytes support.
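For example, a sketch of scanning a DATETIME column into time.Time with parseTime enabled (table and column names are illustrative):
db, err := sql.Open("mysql", "user:password@/dbname?parseTime=true&loc=Local")
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// created_at is a DATETIME column; with parseTime=true it scans into time.Time.
var createdAt time.Time
err = db.QueryRow("SELECT created_at FROM users WHERE id = ?", 1).Scan(&createdAt)
if err != nil {
	log.Fatal(err)
}
fmt.Println(createdAt.Format(time.RFC3339))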
Unicode support
Since version 1.5 Go-MySQL-Driver automatically uses the collation utf8mb4_general_ci by default.
Other charsets / collations can be set using the charset or collation DSN parameter.
- When only the charset is specified, the SET NAMES <charset> query is sent and the server's default collation is used.
- When both the charset and collation are specified, the SET NAMES <charset> COLLATE <collation> query is sent.
- When only the collation is specified, the collation is specified in the protocol handshake and the SET NAMES query is not sent. This can save one roundtrip, but note that the server may silently ignore the specified collation and use the server's default charset/collation instead.
See http://dev.mysql.com/doc/refman/8.0/en/charset-unicode.html for more details on MySQL's Unicode support.
Testing / Development
To run the driver tests you may need to adjust the configuration. See the Testing Wiki-Page for details.
Go-MySQL-Driver is not feature-complete yet. Your help is very much appreciated. If you want to contribute, you can work on an open issue or review a pull request.
See the Contribution Guidelines for details.
License
Go-MySQL-Driver is licensed under the Mozilla Public License Version 2.0
Mozilla summarizes the license scope as follows:
MPL: The copyleft applies to any files containing MPLed code.
That means:
- You can use the unchanged source code both in private and commercially.
- When distributing, you must publish the source code of any changed files licensed under the MPL 2.0 under a) the MPL 2.0 itself or b) a compatible license (e.g. GPL 3.0 or Apache License 2.0).
- You needn't publish the source code of your library as long as the files licensed under the MPL 2.0 are unchanged.
Please read the MPL 2.0 FAQ if you have further questions regarding the license.
You can read the full terms here: LICENSE.