textdistance
📐 Compute distance between sequences. 30+ algorithms, pure python implementation, common interface, optional external libs usage.
Top Related Projects
- Polyglot: Multilingual text (NLP) processing toolkit
- FuzzyWuzzy: Fuzzy String Matching in Python
- python-Levenshtein: The Levenshtein Python C extension module contains functions for fast computation of Levenshtein distance and string similarity
- Jellyfish: 🪼 a python library for doing approximate and phonetic matching of strings.
Quick Overview
The textdistance library is a Python package that provides a collection of algorithms for calculating the distance between two text strings. It supports a wide range of distance metrics, including Levenshtein, Hamming, and Jaccard, among many others. The library is designed to be easy to use and highly customizable, making it a useful tool for a variety of text processing tasks.
Pros
- Comprehensive: The library supports 30+ distance metrics, allowing users to choose the most appropriate algorithm for their specific use case.
- Customizable: The library provides a flexible API that makes it easy to configure algorithm parameters and extend the available metrics.
- Efficient when needed: The core is pure Python, but the library can optionally delegate to faster external C-based libraries when they are installed.
- Well-documented: The library has extensive documentation, including detailed examples and usage guides, making it easy to get started.
Cons
- Limited to Python: The library is only available for Python, which may be a limitation for users who need to work with other programming languages.
- Potential for confusion: With so many distance metrics available, it can be challenging to choose the most appropriate one.
- Slow without extras: The pure-Python implementations are slow, so getting good performance requires installing optional external libraries, which adds installation complexity.
- Limited community support: The library has a relatively small community compared to some other Python libraries, which may limit the availability of support and resources.
Code Examples
Here are a few examples of how to use the textdistance library:
```python
import textdistance

# Calculate the Levenshtein distance between two strings
distance = textdistance.levenshtein('hello', 'world')
print(distance)  # Output: 4
```
```python
import textdistance

# Calculate the Jaccard similarity between two sets of words
set1 = {'apple', 'banana', 'cherry'}
set2 = {'banana', 'cherry', 'date'}
similarity = textdistance.jaccard(set1, set2)
print(similarity)  # Output: 0.5
```
```python
# Use a custom distance metric. Illustrative sketch: the subclassable base
# class is an internal detail of textdistance (textdistance.algorithms.base.Base
# in recent releases), so this example uses a standalone class instead.
class MyDistance:
    def distance(self, a, b):
        return abs(len(a) - len(b))

distance = MyDistance().distance('hello', 'world')
print(distance)  # Output: 0 (both strings have length 5)
```
Getting Started
To get started with the textdistance library, install it using pip:
```bash
pip install textdistance
```
Once you have the library installed, you can start using it in your Python code. Here's a simple example that demonstrates how to calculate the Levenshtein distance between two strings:
```python
import textdistance

text1 = "hello"
text2 = "world"
distance = textdistance.levenshtein(text1, text2)
print(f"The Levenshtein distance between '{text1}' and '{text2}' is {distance}")
```
This will output:
```
The Levenshtein distance between 'hello' and 'world' is 4
```
You can also explore the various distance metrics available in the library and customize them to suit your specific needs. The library's documentation provides detailed examples and usage guides to help you get started.
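For a quick feel of the library, here is a small sketch that runs several of the built-in metrics over the same pair of strings (the function names come from the algorithm tables later in this document; the commented values follow each metric's definition):

```python
import textdistance

a, b = 'hello', 'world'

print(textdistance.hamming(a, b))       # 4 -- number of differing positions
print(textdistance.levenshtein(a, b))   # 4 -- minimum number of single-character edits
print(textdistance.jaccard(a, b))       # 0.25 -- overlap of the character multisets
print(textdistance.jaro_winkler(a, b))  # similarity score between 0 and 1
```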
Competitor Comparisons
Polyglot: Multilingual text (NLP) processing toolkit
Pros of Polyglot
- Polyglot provides a wide range of language processing capabilities, including language detection, named entity recognition, and sentiment analysis.
- The library supports a large number of languages, making it a versatile tool for multilingual applications.
- Polyglot has a modular design, allowing users to selectively import only the components they need, reducing the overall package size.
Cons of Polyglot
- Polyglot may have a steeper learning curve compared to TextDistance, as it covers a broader range of language processing tasks.
- Polyglot targets higher-level NLP tasks rather than string-distance computation, so it is not a drop-in alternative for TextDistance's metrics.
- Polyglot's documentation may not be as comprehensive as TextDistance's, making it more challenging for beginners to get started.
Code Comparison
TextDistance:
```python
from textdistance import levenshtein

levenshtein('hello', 'world')  # 4
```
Polyglot:
```python
from polyglot.text import Text

text = Text('Hello, world!')
print(text.language.code)  # 'en'
```
The snippets solve different problems: Polyglot detects the text's language, while TextDistance measures how similar two strings are.
FuzzyWuzzy: Fuzzy String Matching in Python
Pros of FuzzyWuzzy
- FuzzyWuzzy provides practical ratio-based matchers built on Levenshtein distance, including simple, partial, token-sort, and token-set ratios.
- FuzzyWuzzy has a larger user base, with more contributors and a higher number of stars on GitHub (development continues under the name TheFuzz).
- FuzzyWuzzy includes additional features like partial string matching and process extraction, which can be useful in certain use cases.
Cons of FuzzyWuzzy
- TextDistance is more lightweight and focused, with a smaller codebase and fewer required dependencies.
- TextDistance may be more suitable for simple string comparison tasks, as it has a more streamlined API and can be easier to integrate into certain projects.
- TextDistance covers a far wider range of metrics (30+ algorithms, including token-, sequence-, and compression-based ones), while FuzzyWuzzy focuses on Levenshtein-based ratios.
Code Comparison
TextDistance:
```python
from textdistance import levenshtein

levenshtein('hello', 'world')  # 4
```
FuzzyWuzzy:
```python
from fuzzywuzzy import fuzz

fuzz.ratio('hello', 'world')  # 20
```
Note that FuzzyWuzzy has no levenshtein function; fuzz.ratio reports a 0-100 similarity score rather than a raw edit distance.
python-Levenshtein: The Levenshtein Python C extension module contains functions for fast computation of Levenshtein distance and string similarity
Pros of python-Levenshtein
- Faster performance: python-Levenshtein is implemented in C, which makes it significantly faster than the pure Python implementation in TextDistance.
- Several fast metrics: python-Levenshtein provides C implementations of Levenshtein distance and ratio, Hamming distance, and Jaro/Jaro-Winkler similarity.
- Actively maintained: The python-Levenshtein project has been actively maintained, with regular updates and bug fixes.
Cons of python-Levenshtein
- Limited to edit-based metrics: python-Levenshtein does not support token-based metrics like Jaccard or Cosine, or the compression-based measures available in TextDistance.
- Dependency on C: The C-based implementation of python-Levenshtein may be a drawback for some users who prefer a pure Python solution or have difficulty installing C-based dependencies.
- Fewer features: Compared to TextDistance, python-Levenshtein has a more limited set of features, such as the lack of support for n-gram-based distance metrics.
Code Comparison
TextDistance (pure Python):
```python
from textdistance import levenshtein

distance = levenshtein('hello', 'world')
print(distance)  # Output: 4
```
python-Levenshtein (C extension):
```python
from Levenshtein import distance

dist = distance('hello', 'world')  # renamed so the result doesn't shadow the imported function
print(dist)  # Output: 4
```
As you can see, the usage of the two libraries is quite similar, with the main difference being the import statement and the function name.
Jellyfish: 🪼 a python library for doing approximate and phonetic matching of strings.
Pros of Jellyfish
- Jellyfish provides a solid set of edit-distance and phonetic algorithms, including Levenshtein, Damerau-Levenshtein, Jaro-Winkler, Soundex, and Metaphone.
- The library is well-documented and has a large user community, making it easier to find support and resources.
- Jellyfish ships fast native implementations (recent versions are written in Rust) and installs easily from prebuilt wheels.
Cons of Jellyfish
- Jellyfish covers fewer algorithm families than TextDistance: it has no token-based, sequence-based, or compression-based metrics.
- As a native extension, it may need a prebuilt wheel or a working toolchain on less common platforms, whereas TextDistance's core is pure Python.
Code Comparison
TextDistance:
```python
from textdistance import levenshtein

levenshtein('hello', 'world')  # 4
```
Jellyfish:
```python
import jellyfish

jellyfish.levenshtein_distance('hello', 'world')  # 4
```
README
TextDistance
TextDistance -- python library for comparing distance between two or more sequences by many algorithms.
Features:
- 30+ algorithms
- Pure python implementation
- Simple usage
- Comparing more than two sequences at once
- Some algorithms have more than one implementation in one class.
- Optional numpy usage for maximum speed.
Algorithms
Edit based
Algorithm | Class | Functions |
---|---|---|
Hamming | Hamming | hamming |
MLIPNS | MLIPNS | mlipns |
Levenshtein | Levenshtein | levenshtein |
Damerau-Levenshtein | DamerauLevenshtein | damerau_levenshtein |
Jaro-Winkler | JaroWinkler | jaro_winkler , jaro |
Strcmp95 | StrCmp95 | strcmp95 |
Needleman-Wunsch | NeedlemanWunsch | needleman_wunsch |
Gotoh | Gotoh | gotoh |
Smith-Waterman | SmithWaterman | smith_waterman |
Token based
Algorithm | Class | Functions |
---|---|---|
Jaccard index | Jaccard | jaccard |
Sørensen-Dice coefficient | Sorensen | sorensen , sorensen_dice , dice |
Tversky index | Tversky | tversky |
Overlap coefficient | Overlap | overlap |
Tanimoto distance | Tanimoto | tanimoto |
Cosine similarity | Cosine | cosine |
Monge-Elkan | MongeElkan | monge_elkan |
Bag distance | Bag | bag |
Sequence based
Algorithm | Class | Functions |
---|---|---|
longest common subsequence similarity | LCSSeq | lcsseq |
longest common substring similarity | LCSStr | lcsstr |
Ratcliff-Obershelp similarity | RatcliffObershelp | ratcliff_obershelp |
Compression based
Normalized compression distance with different compression algorithms.
Classic compression algorithms:
Algorithm | Class | Function |
---|---|---|
Arithmetic coding | ArithNCD | arith_ncd |
RLE | RLENCD | rle_ncd |
BWT RLE | BWTRLENCD | bwtrle_ncd |
Normal compression algorithms:
Algorithm | Class | Function |
---|---|---|
Square Root | SqrtNCD | sqrt_ncd |
Entropy | EntropyNCD | entropy_ncd |
Work in progress algorithms that compare two strings as array of bits:
Algorithm | Class | Function |
---|---|---|
BZ2 | BZ2NCD | bz2_ncd |
LZMA | LZMANCD | lzma_ncd |
ZLib | ZLIBNCD | zlib_ncd |
See blog post for more details about NCD.
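A minimal sketch of calling one of the NCD variants listed above (zlib_ncd comes from the table; absolute values depend heavily on the compressor, especially for short strings, so treat them only relatively):

```python
import textdistance

# Normalized compression distance via zlib: strings that compress well
# together yield a smaller distance.
same = textdistance.zlib_ncd('hello world', 'hello world')
close = textdistance.zlib_ncd('hello world', 'hello there')
far = textdistance.zlib_ncd('hello world', 'qwertyuiopas')
print(same, close, far)  # expect roughly: same <= close <= far
```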
Phonetic
Algorithm | Class | Functions |
---|---|---|
MRA | MRA | mra |
Editex | Editex | editex |
Simple
Algorithm | Class | Functions |
---|---|---|
Prefix similarity | Prefix | prefix |
Postfix similarity | Postfix | postfix |
Length distance | Length | length |
Identity similarity | Identity | identity |
Matrix similarity | Matrix | matrix |
Installation
Stable
Only pure python implementation:
```bash
pip install textdistance
```
With extra libraries for maximum speed:
```bash
pip install "textdistance[extras]"
```
With all libraries (required for benchmarking and testing):
```bash
pip install "textdistance[benchmark]"
```
With algorithm-specific extras:
```bash
pip install "textdistance[Hamming]"
```
Algorithms with available extras: DamerauLevenshtein, Hamming, Jaro, JaroWinkler, Levenshtein.
Dev
Via pip:
```bash
pip install -e git+https://github.com/life4/textdistance.git#egg=textdistance
```
Or clone the repo and install with some extras:
```bash
git clone https://github.com/life4/textdistance.git
pip install -e ".[benchmark]"
```
Usage
All algorithms have 2 interfaces:
- Class with algorithm-specific params for customizing.
- Class instance with default params for quick and simple usage.
All algorithms have some common methods:

- .distance(*sequences) -- calculate distance between sequences.
- .similarity(*sequences) -- calculate similarity for sequences.
- .maximum(*sequences) -- maximum possible value for distance and similarity. For any sequences: distance + similarity == maximum (demonstrated in the sketch below).
- .normalized_distance(*sequences) -- normalized distance between sequences. The return value is a float between 0 and 1, where 0 means equal and 1 means totally different.
- .normalized_similarity(*sequences) -- normalized similarity for sequences. The return value is a float between 0 and 1, where 0 means totally different and 1 means equal.
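A quick check of that invariant, using the hamming instance (the values match the Examples section below):

```python
import textdistance

h = textdistance.hamming
a, b = 'test', 'text'
# distance (1) + similarity (3) equals maximum (4)
assert h.distance(a, b) + h.similarity(a, b) == h.maximum(a, b)
```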
Most common init arguments (a short sketch follows the list):

- qval -- q-value for splitting sequences into q-grams. Possible values:
  - 1 (default) -- compare sequences by chars.
  - 2 or more -- transform sequences to q-grams.
  - None -- split sequences by words.
- as_set -- for token-based algorithms:
  - True -- t and ttt are equal.
  - False (default) -- t and ttt are different.
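Here is an illustrative sketch of how these arguments change the token-based Jaccard metric (the commented values follow from the Jaccard definition):

```python
import textdistance

# qval=None splits sequences into words before comparing
jaccard_words = textdistance.Jaccard(qval=None)
print(jaccard_words('hello world', 'world hello'))  # 1.0 -- same word multiset

# as_set=True collapses repeated tokens before comparing
print(textdistance.Jaccard(as_set=True)('t', 'ttt'))  # 1.0 -- sets are equal
print(textdistance.jaccard('t', 'ttt'))               # ~0.33 -- duplicates counted
```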
Examples
For example, Hamming distance:

```python
import textdistance

textdistance.hamming('test', 'text')
# 1
textdistance.hamming.distance('test', 'text')
# 1
textdistance.hamming.similarity('test', 'text')
# 3
textdistance.hamming.normalized_distance('test', 'text')
# 0.25
textdistance.hamming.normalized_similarity('test', 'text')
# 0.75
textdistance.Hamming(qval=2).distance('test', 'text')
# 2
```
All other algorithms have the same interface.
Articles
A few articles with examples of how to use textdistance in the real world:
- Guide to Fuzzy Matching with Python
- String similarity – the basic know your algorithms guide!
- Normalized compression distance
Extra libraries
For the main algorithms, textdistance tries to call known external libraries (fastest first) if they are available (installed on your system) and applicable (the external implementation can compare the given type of sequences). Install textdistance with extras to enable this feature.
You can disable this by passing the external=False argument on init:
```python
import textdistance

hamming = textdistance.Hamming(external=False)
hamming('text', 'testit')
# 3
```
Algorithms with supported external libraries:
- DamerauLevenshtein
- Hamming
- Jaro
- JaroWinkler
- Levenshtein
Benchmarks
Without extras installation:
algorithm | library | time (seconds) |
---|---|---|
DamerauLevenshtein | rapidfuzz | 0.00312 |
DamerauLevenshtein | jellyfish | 0.00591 |
DamerauLevenshtein | pyxdameraulevenshtein | 0.03335 |
DamerauLevenshtein | textdistance | 0.83524 |
Hamming | Levenshtein | 0.00038 |
Hamming | rapidfuzz | 0.00044 |
Hamming | jellyfish | 0.00091 |
Hamming | textdistance | 0.03531 |
Jaro | rapidfuzz | 0.00092 |
Jaro | jellyfish | 0.00191 |
Jaro | textdistance | 0.07365 |
JaroWinkler | rapidfuzz | 0.00094 |
JaroWinkler | jellyfish | 0.00195 |
JaroWinkler | textdistance | 0.07501 |
Levenshtein | rapidfuzz | 0.00099 |
Levenshtein | Levenshtein | 0.00122 |
Levenshtein | jellyfish | 0.00254 |
Levenshtein | pylev | 0.15688 |
Levenshtein | textdistance | 0.53902 |
Total: 24 libs.
Yeah, so slow. Use TextDistance in production only with extras.
TextDistance uses these benchmark results to optimize the algorithms and tries to call the fastest external library first (when possible).
You can run the benchmark manually on your system:
```bash
pip install textdistance[benchmark]
python3 -m textdistance.benchmark
```
TextDistance shows the benchmark results table for your system and saves the library priorities into a libraries.json file in TextDistance's folder. This file is then used by textdistance to call the fastest algorithm implementation. A default libraries.json is already included in the package.
Running tests
All you need is Task. See Taskfile.yml for the list of available commands. For example, to run tests including third-party libraries usage, execute task pytest-external:run.
Contributing
PRs are welcome!
- Found a bug? Fix it!
- Want to add more algorithms? Sure! Just make it with the same interface as other algorithms in the lib and add some tests.
- Can make something faster? Great! Just avoid external dependencies and remember that everything should work not only with strings.
- Something else you think is good? Do it! Just make sure that CI passes and everything from the README is still applicable (interface, features, and so on).
- Have no time to code? Tell your friends and subscribers about textdistance. More users, more contributions, more amazing features.
Thank you :heart: