
life4 / textdistance

📐 Compute distance between sequences. 30+ algorithms, pure python implementation, common interface, optional external libs usage.


Top Related Projects

Multilingual text (NLP) processing toolkit

Fuzzy String Matching in Python

The Levenshtein Python C extension module contains functions for fast computation of Levenshtein distance and string similarity

🪼 a python library for doing approximate and phonetic matching of strings.

Quick Overview

The textdistance library is a Python package that provides a collection of algorithms for calculating the distance between two text strings. It supports a wide range of distance metrics, including Levenshtein, Hamming, Jaccard, and many others. The library is designed to be easy to use and highly customizable, making it a useful tool for a variety of text processing tasks.

Pros

  • Comprehensive: The library supports a wide range of distance metrics, allowing users to choose the most appropriate algorithm for their specific use case.
  • Customizable: The library provides a flexible API that allows users to easily configure and extend the available distance metrics.
  • Efficient when it matters: the core is pure Python, but the library can optionally call faster external implementations (such as rapidfuzz or jellyfish) when they are installed.
  • Well-documented: The library has extensive documentation, including detailed examples and usage guides, making it easy to get started.

Cons

  • Limited to Python: The library is only available for Python, which may be a limitation for users who need to work with other programming languages.
  • Potential for Confusion: With so many distance metrics available, it can be challenging for users to choose the most appropriate one for their needs.
  • Dependency on External Libraries: reaching maximum speed requires installing optional external libraries, which adds steps to the installation process.
  • Limited Community Support: The library has a relatively small community compared to some other Python libraries, which may limit the availability of support and resources.

Code Examples

Here are a few examples of how to use the textdistance library:

import textdistance

# Calculate the Levenshtein distance between two strings
distance = textdistance.levenshtein('hello', 'world')
print(distance)  # Output: 4

# Calculate the Jaccard similarity between two sets of words
set1 = {'apple', 'banana', 'cherry'}
set2 = {'banana', 'cherry', 'date'}
similarity = textdistance.jaccard(set1, set2)
print(similarity)  # Output: 0.5

# Use the class interface with algorithm-specific parameters
distance = textdistance.Hamming(qval=2).distance('test', 'text')
print(distance)  # Output: 2

Getting Started

To get started with the textdistance library, you can install it using pip:

pip install textdistance

Once you have the library installed, you can start using it in your Python code. Here's a simple example that demonstrates how to calculate the Levenshtein distance between two strings:

import textdistance

text1 = "hello"
text2 = "world"
distance = textdistance.levenshtein(text1, text2)
print(f"The Levenshtein distance between '{text1}' and '{text2}' is {distance}")

This will output:

The Levenshtein distance between 'hello' and 'world' is 4

You can also explore the various distance metrics available in the library and customize them to suit your specific needs. The library's documentation provides detailed examples and usage guides to help you get started.
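As a sketch of that exploration, the snippet below runs the same pair of strings through a few of the bundled metrics (hamming, jaro_winkler, and cosine are all part of textdistance's public API); normalized similarity always lands between 0 and 1:

```python
import textdistance

# Score the same pair of strings with several bundled metrics.
a, b = 'hello', 'world'
for alg in (textdistance.hamming, textdistance.jaro_winkler, textdistance.cosine):
    score = alg.normalized_similarity(a, b)
    print(type(alg).__name__, score)  # each score is a float in [0, 1]
```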

Competitor Comparisons

Multilingual text (NLP) processing toolkit

Pros of Polyglot

  • Polyglot provides a wide range of language processing capabilities, including language detection, named entity recognition, and sentiment analysis.
  • The library supports a large number of languages, making it a versatile tool for multilingual applications.
  • Polyglot has a modular design, allowing users to selectively import only the components they need, reducing the overall package size.

Cons of Polyglot

  • Polyglot may have a steeper learning curve compared to TextDistance, as it covers a broader range of language processing tasks.
  • The library's performance may be slower than TextDistance for specific text distance calculations, as it is designed for a wider range of language processing tasks.
  • Polyglot's documentation may not be as comprehensive as TextDistance's, making it more challenging for beginners to get started.

Code Comparison

TextDistance:

from textdistance import levenshtein
levenshtein('hello', 'world')  # 4

Polyglot:

from polyglot.text import Text
text = Text('Hello, world!')
print(text.language.code)  # 'en'

Fuzzy String Matching in Python

Pros of FuzzyWuzzy

  • FuzzyWuzzy provides convenient high-level matchers built on Levenshtein distance, such as ratio, partial_ratio, token_sort_ratio, and token_set_ratio.
  • FuzzyWuzzy has a larger user base and more active development, with more contributors and a higher number of stars on GitHub.
  • FuzzyWuzzy includes additional features like partial string matching and process extraction, which can be useful in certain use cases.

Cons of FuzzyWuzzy

  • TextDistance is generally more lightweight and focused, with a smaller codebase and fewer dependencies.
  • TextDistance may be more suitable for simple string comparison tasks, as it has a more streamlined API and can be easier to integrate into certain projects.
  • TextDistance covers far more algorithms (30+ across edit-, token-, sequence-, and compression-based families), while FuzzyWuzzy focuses on Levenshtein-based ratio scores.

Code Comparison

TextDistance:

from textdistance import levenshtein
levenshtein('hello', 'world')  # 4

FuzzyWuzzy:

from fuzzywuzzy import fuzz
fuzz.ratio('hello', 'world')  # similarity score from 0 to 100

The Levenshtein Python C extension module contains functions for fast computation of Levenshtein distance and string similarity

Pros of python-Levenshtein

  • Faster performance: python-Levenshtein is implemented in C, which makes it significantly faster than the pure Python implementation in TextDistance.
  • Covers several related metrics beyond plain Levenshtein, including Hamming distance and Jaro-Winkler similarity.
  • Actively maintained: The python-Levenshtein project has been actively maintained, with regular updates and bug fixes.

Cons of python-Levenshtein

  • Limited to Levenshtein-based metrics: While python-Levenshtein offers a wider range of Levenshtein-based metrics, it does not support other distance metrics like Jaccard, Cosine, or Jaro-Winkler, which are available in TextDistance.
  • Dependency on C: The C-based implementation of python-Levenshtein may be a drawback for some users who prefer a pure Python solution or have difficulty installing C-based dependencies.
  • Fewer features: Compared to TextDistance, python-Levenshtein has a more limited set of features, such as the lack of support for n-gram-based distance metrics.

Code Comparison

TextDistance (Python):

from textdistance import levenshtein
distance = levenshtein('hello', 'world')
print(distance)  # Output: 4

python-Levenshtein (C-based):

from Levenshtein import distance
result = distance('hello', 'world')
print(result)  # Output: 4

As you can see, the usage of the two libraries is quite similar, with the main difference being the import statement and the function name.

🪼 a python library for doing approximate and phonetic matching of strings.

Pros of Jellyfish

  • Jellyfish provides a wider range of string similarity and distance metrics, including Levenshtein, Jaro-Winkler, and Soundex, among others.
  • The library is well-documented and has a large user community, making it easier to find support and resources.
  • Jellyfish is easy to install and ships fast native implementations of its core algorithms.

Cons of Jellyfish

  • Jellyfish covers fewer algorithm families than TextDistance: it focuses on edit-distance and phonetic metrics and lacks token-, sequence-, and compression-based measures.
  • Jellyfish exposes plain functions rather than TextDistance's common interface of distance, similarity, and normalized variants.

Code Comparison

TextDistance:

from textdistance import levenshtein
levenshtein('hello', 'world')  # 4

Jellyfish:

import jellyfish
jellyfish.levenshtein_distance('hello', 'world')  # 4


README

TextDistance

TextDistance logo


TextDistance -- python library for comparing distance between two or more sequences by many algorithms.

Features:

  • 30+ algorithms
  • Pure python implementation
  • Simple usage
  • Comparing more than two sequences
  • Some algorithms have more than one implementation in one class.
  • Optional numpy usage for maximum speed.

Algorithms

Edit based

Algorithm | Class | Functions
Hamming | Hamming | hamming
MLIPNS | MLIPNS | mlipns
Levenshtein | Levenshtein | levenshtein
Damerau-Levenshtein | DamerauLevenshtein | damerau_levenshtein
Jaro-Winkler | JaroWinkler | jaro_winkler, jaro
Strcmp95 | StrCmp95 | strcmp95
Needleman-Wunsch | NeedlemanWunsch | needleman_wunsch
Gotoh | Gotoh | gotoh
Smith-Waterman | SmithWaterman | smith_waterman

Token based

Algorithm | Class | Functions
Jaccard index | Jaccard | jaccard
Sørensen–Dice coefficient | Sorensen | sorensen, sorensen_dice, dice
Tversky index | Tversky | tversky
Overlap coefficient | Overlap | overlap
Tanimoto distance | Tanimoto | tanimoto
Cosine similarity | Cosine | cosine
Monge-Elkan | MongeElkan | monge_elkan
Bag distance | Bag | bag

Sequence based

Algorithm | Class | Functions
longest common subsequence similarity | LCSSeq | lcsseq
longest common substring similarity | LCSStr | lcsstr
Ratcliff-Obershelp similarity | RatcliffObershelp | ratcliff_obershelp

Compression based

Normalized compression distance with different compression algorithms.

Classic compression algorithms:

Algorithm | Class | Function
Arithmetic coding | ArithNCD | arith_ncd
RLE | RLENCD | rle_ncd
BWT RLE | BWTRLENCD | bwtrle_ncd

Normal compression algorithms:

Algorithm | Class | Function
Square Root | SqrtNCD | sqrt_ncd
Entropy | EntropyNCD | entropy_ncd

Work in progress algorithms that compare two strings as array of bits:

Algorithm | Class | Function
BZ2 | BZ2NCD | bz2_ncd
LZMA | LZMANCD | lzma_ncd
ZLib | ZLIBNCD | zlib_ncd

See blog post for more details about NCD.
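As a minimal sketch (assuming no extra compression libraries are installed), the pure-Python entropy_ncd can be called like any other algorithm in the library; lower values mean the two sequences share more structure:

```python
import textdistance

# Normalized compression distance based on entropy; no external libs needed.
close = textdistance.entropy_ncd('test', 'text')
far = textdistance.entropy_ncd('test', 'qwerty')
print(close, far)  # similar strings tend to score lower than dissimilar ones
```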

Phonetic

Algorithm | Class | Functions
MRA | MRA | mra
Editex | Editex | editex

Simple

Algorithm | Class | Functions
Prefix similarity | Prefix | prefix
Postfix similarity | Postfix | postfix
Length distance | Length | length
Identity similarity | Identity | identity
Matrix similarity | Matrix | matrix

Installation

Stable

Only pure python implementation:

pip install textdistance

With extra libraries for maximum speed:

pip install "textdistance[extras]"

With all libraries (required for benchmarking and testing):

pip install "textdistance[benchmark]"

With algorithm specific extras:

pip install "textdistance[Hamming]"

Algorithms with available extras: DamerauLevenshtein, Hamming, Jaro, JaroWinkler, Levenshtein.

Dev

Via pip:

pip install -e git+https://github.com/life4/textdistance.git#egg=textdistance

Or clone repo and install with some extras:

git clone https://github.com/life4/textdistance.git
pip install -e ".[benchmark]"

Usage

All algorithms have 2 interfaces:

  1. Class with algorithm-specific params for customizing.
  2. Class instance with default params for quick and simple usage.

All algorithms have some common methods:

  1. .distance(*sequences) -- calculate distance between sequences.
  2. .similarity(*sequences) -- calculate similarity for sequences.
  3. .maximum(*sequences) -- maximum possible value for distance and similarity. For any sequence: distance + similarity == maximum.
  4. .normalized_distance(*sequences) -- normalized distance between sequences. The return value is a float between 0 and 1, where 0 means equal, and 1 totally different.
  5. .normalized_similarity(*sequences) -- normalized similarity for sequences. The return value is a float between 0 and 1, where 0 means totally different, and 1 equal.
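The invariants stated above (distance + similarity == maximum, and the normalized pair summing to 1) can be checked directly; a minimal sketch using the bundled levenshtein instance:

```python
import textdistance

alg = textdistance.levenshtein
a, b = 'hello', 'world'

# distance and similarity are complementary up to the maximum
assert alg.distance(a, b) + alg.similarity(a, b) == alg.maximum(a, b)

# the normalized variants sum to 1 (allow for float rounding)
nd = alg.normalized_distance(a, b)
ns = alg.normalized_similarity(a, b)
assert abs(nd + ns - 1) < 1e-9

print(alg.distance(a, b), alg.maximum(a, b))  # 4 5
```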

Most common init arguments:

  1. qval -- q-value for splitting sequences into q-grams. Possible values:
    • 1 (default) -- compare sequences by chars.
    • 2 or more -- transform sequences to q-grams.
    • None -- split sequences by words.
  2. as_set -- for token-based algorithms:
    • True -- t and ttt are equal.
    • False (default) -- t and ttt are different.
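A short sketch of how these arguments change the result, using Jaccard (the expected scores assume Jaccard's usual intersection-over-union definition):

```python
import textdistance

# qval=2 splits each string into bigrams before comparing
print(textdistance.Jaccard(qval=2).similarity('test', 'text'))

# qval=None splits on whitespace and compares words:
# {'the', 'cat', 'sat'} vs {'the', 'cat', 'ran'} -> 2 shared / 4 total
print(textdistance.Jaccard(qval=None).similarity('the cat sat', 'the cat ran'))  # 0.5

# as_set=True collapses repeated tokens, so 't' and 'ttt' become equal
print(textdistance.Jaccard(as_set=True).similarity('t', 'ttt'))  # 1.0
```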

Examples

For example, Hamming distance:

import textdistance

textdistance.hamming('test', 'text')
# 1

textdistance.hamming.distance('test', 'text')
# 1

textdistance.hamming.similarity('test', 'text')
# 3

textdistance.hamming.normalized_distance('test', 'text')
# 0.25

textdistance.hamming.normalized_similarity('test', 'text')
# 0.75

textdistance.Hamming(qval=2).distance('test', 'text')
# 2

All other algorithms have the same interface.

Articles

A few articles with examples of how to use textdistance in the real world:

Extra libraries

For the main algorithms, textdistance tries to call known external libraries (fastest first) if they are available (installed on your system) and applicable (able to compare the given type of sequences). Install textdistance with extras to enable this feature.

You can disable this by passing external=False argument on init:

import textdistance
hamming = textdistance.Hamming(external=False)
hamming('text', 'testit')
# 3

Supported libraries:

  1. jellyfish
  2. py_stringmatching
  3. pylev
  4. Levenshtein
  5. pyxDamerauLevenshtein

Algorithms:

  1. DamerauLevenshtein
  2. Hamming
  3. Jaro
  4. JaroWinkler
  5. Levenshtein

Benchmarks

Without extras installation:

algorithm | library | time
DamerauLevenshtein | rapidfuzz | 0.00312
DamerauLevenshtein | jellyfish | 0.00591
DamerauLevenshtein | pyxdameraulevenshtein | 0.03335
DamerauLevenshtein | textdistance | 0.83524
Hamming | Levenshtein | 0.00038
Hamming | rapidfuzz | 0.00044
Hamming | jellyfish | 0.00091
Hamming | textdistance | 0.03531
Jaro | rapidfuzz | 0.00092
Jaro | jellyfish | 0.00191
Jaro | textdistance | 0.07365
JaroWinkler | rapidfuzz | 0.00094
JaroWinkler | jellyfish | 0.00195
JaroWinkler | textdistance | 0.07501
Levenshtein | rapidfuzz | 0.00099
Levenshtein | Levenshtein | 0.00122
Levenshtein | jellyfish | 0.00254
Levenshtein | pylev | 0.15688
Levenshtein | textdistance | 0.53902

Total: 24 libs.

Yeah, so slow. Use TextDistance in production only with extras installed.

TextDistance uses these benchmark results to optimize algorithm dispatch, trying the fastest external library first (when possible).

You can run benchmark manually on your system:

pip install "textdistance[benchmark]"
python3 -m textdistance.benchmark

TextDistance shows the benchmark results table for your system and saves library priorities into a libraries.json file in TextDistance's folder. This file is then used by textdistance to call the fastest available implementation. A default libraries.json is already included in the package.

Running tests

All you need is task. See Taskfile.yml for the list of available commands. For example, to run tests including third-party libraries usage, execute task pytest-external:run.

Contributing

PRs are welcome!

  • Found a bug? Fix it!
  • Want to add more algorithms? Sure! Just make it with the same interface as other algorithms in the lib and add some tests.
  • Can make something faster? Great! Just avoid external dependencies and remember that everything should work not only with strings.
  • Something else you think is good? Do it! Just make sure that CI passes and everything from the README is still applicable (interface, features, and so on).
  • Have no time to code? Tell your friends and subscribers about textdistance. More users, more contributions, more amazing features.

Thank you :heart: