Top Related Projects
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX
- NLTK: the Natural Language Toolkit
- Stanza: Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
- fastText: Library for fast text representation and classification
- Gensim: Topic Modelling for Humans
- AllenNLP: An open-source NLP research library, built on PyTorch
Quick Overview
spaCy is an open-source library for advanced Natural Language Processing in Python. It's designed to be fast, efficient, and production-ready, offering a wide range of linguistic features including tokenization, part-of-speech tagging, dependency parsing, and named entity recognition.
Pros
- High performance and efficiency, suitable for large-scale production environments
- Provides pre-trained models for multiple languages
- Easy-to-use API with intuitive object-oriented design
- Extensive documentation and active community support
Cons
- Steeper learning curve compared to some other NLP libraries
- Limited customization options for certain components
- Requires more memory and computational resources than simpler NLP tools
- Some advanced features may require additional setup or dependencies
Code Examples
- Basic text processing:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for token in doc:
    print(token.text, token.pos_, token.dep_)
- Named Entity Recognition:
import spacy
nlp = spacy.load("en_core_web_sm")
text = "When Sebastian Thrun started working on self-driving cars at Google in 2007, few people outside of the company took him seriously."
doc = nlp(text)
for ent in doc.ents:
    print(ent.text, ent.start_char, ent.end_char, ent.label_)
- Sentence segmentation:
import spacy
nlp = spacy.load("en_core_web_sm")
text = "This is the first sentence. This is another sentence. And here's the third one!"
doc = nlp(text)
for sent in doc.sents:
    print(sent.text)
Getting Started
To get started with spaCy, follow these steps:
- Install spaCy:
pip install spacy
- Download a pre-trained model:
python -m spacy download en_core_web_sm
- Use spaCy in your Python script:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Hello, world! This is a sample text.")
for token in doc:
    print(token.text, token.pos_)
This will tokenize the text and print each token along with its part-of-speech tag.
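If the part-of-speech abbreviations in the output are unfamiliar, spaCy's built-in spacy.explain helper can decode them. A minimal sketch, reusing the pipeline loaded above:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Hello, world! This is a sample text.")
for token in doc:
    # spacy.explain maps tag/label abbreviations to human-readable descriptions
    print(token.text, token.pos_, spacy.explain(token.pos_))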
Competitor Comparisons
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Pros of transformers
- Broader range of pre-trained models for various NLP tasks
- Supports state-of-the-art transformer architectures (BERT, GPT, etc.)
- Extensive community support and frequent updates
Cons of transformers
- Higher computational requirements and slower inference
- Steeper learning curve for beginners
- Less focus on traditional NLP tasks like tokenization and parsing
Code comparison
spaCy:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for ent in doc.ents:
    print(ent.text, ent.label_)
transformers:
from transformers import pipeline
nlp = pipeline("ner")
text = "Apple is looking at buying U.K. startup for $1 billion"
results = nlp(text)
for result in results:
    print(f"{result['word']}: {result['entity']}")
Both libraries offer powerful NLP capabilities, but they cater to different use cases. spaCy excels in efficient processing and traditional NLP tasks, while transformers focuses on cutting-edge deep learning models for various NLP applications.
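The two can also be combined: spaCy ships transformer-backed pipelines through the spacy-transformers extension, keeping the familiar Doc API while a transformer model runs underneath. A minimal sketch, assuming the en_core_web_trf pipeline has been downloaded (python -m spacy download en_core_web_trf):
import spacy
# Transformer-backed English pipeline; requires the spacy-transformers package
nlp = spacy.load("en_core_web_trf")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for ent in doc.ents:
    print(ent.text, ent.label_)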
NLTK Source
Pros of NLTK
- Comprehensive library with a wide range of NLP tools and resources
- Excellent for educational purposes and linguistic research
- Large collection of corpora and lexical resources
Cons of NLTK
- Slower performance compared to spaCy, especially for large-scale processing
- Less streamlined API and workflow for common NLP tasks
- Requires more manual setup and configuration for advanced tasks
Code Comparison
NLTK:
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')  # required by nltk.pos_tag
text = "Hello, world! This is a sample sentence."
tokens = nltk.word_tokenize(text)
pos_tags = nltk.pos_tag(tokens)
spaCy:
import spacy
nlp = spacy.load("en_core_web_sm")
text = "Hello, world! This is a sample sentence."
doc = nlp(text)
tokens = [token.text for token in doc]
pos_tags = [(token.text, token.pos_) for token in doc]
Both libraries offer tokenization and part-of-speech tagging, but spaCy provides a more streamlined API and faster processing. NLTK requires explicit downloading of resources, while spaCy uses pre-trained models. spaCy's pipeline approach allows for more efficient processing of multiple NLP tasks in a single pass.
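The single-pass efficiency is easiest to see with nlp.pipe, which streams a batch of texts through the whole spaCy pipeline at once; a minimal sketch:
import spacy
nlp = spacy.load("en_core_web_sm")
texts = ["Hello, world!", "This is a sample sentence.", "spaCy processes texts in batches."]
# nlp.pipe runs tokenization, tagging, parsing and NER over the whole batch in one pass
for doc in nlp.pipe(texts, batch_size=2):
    print([(token.text, token.pos_) for token in doc])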
Stanza: Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
Pros of Stanza
- Supports a wider range of languages (over 60)
- Provides more detailed linguistic annotations, including dependency parsing and named entity recognition
- Offers pre-trained neural models for various languages
Cons of Stanza
- Slower processing speed compared to spaCy
- Requires more memory and computational resources
- Less extensive documentation and community support
Code Comparison
Stanza
import stanza
nlp = stanza.Pipeline('en')
doc = nlp("Hello world!")
for sentence in doc.sentences:
    print([word.text for word in sentence.words])
spaCy
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Hello world!")
for token in doc:
    print(token.text)
Both libraries offer similar functionality for basic NLP tasks, but their implementation and usage differ slightly. Stanza provides more detailed linguistic information, while spaCy is known for its speed and efficiency. The choice between the two depends on the specific requirements of your project, such as language support, processing speed, and the level of linguistic detail needed.
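As an illustration of the dependency information both libraries expose, spaCy attaches the relation and head directly to each token (Stanza exposes the equivalents via word.deprel and word.head). A minimal spaCy sketch:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Hello world!")
for token in doc:
    # Each token carries its dependency label and a reference to its syntactic head
    print(token.text, token.dep_, token.head.text)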
fastText: Library for fast text representation and classification
Pros of fastText
- Faster training and inference times for large datasets
- Supports unsupervised learning of word representations
- Efficient text classification with hierarchical softmax
Cons of fastText
- Less comprehensive NLP pipeline compared to spaCy
- Limited support for advanced linguistic features
- Fewer pre-trained models available out-of-the-box
Code Comparison
fastText:
import fasttext
model = fasttext.train_unsupervised('data.txt', model='skipgram')
vector = model.get_word_vector('example')
spaCy:
import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp('This is an example sentence.')
vector = doc[0].vector
Key Differences
- fastText focuses on efficient word embeddings and text classification
- spaCy offers a full NLP pipeline with tokenization, POS tagging, and named entity recognition
- fastText is better suited for large-scale text processing tasks
- spaCy provides more detailed linguistic analysis and is more versatile for various NLP tasks
Use Cases
fastText is ideal for:
- Rapid text classification on large datasets (see the sketch after this list)
- Word embedding generation for massive corpora
spaCy excels in:
- Detailed linguistic analysis
- Building comprehensive NLP pipelines
- Projects requiring advanced NLP features like dependency parsing
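For the text-classification use case, a minimal fastText sketch; train.txt is a hypothetical file in fastText's __label__ format (one example per line, e.g. "__label__positive great product"), and loss="hs" selects the hierarchical softmax mentioned above:
import fasttext
# Train a supervised classifier on a hypothetical __label__-formatted file
model = fasttext.train_supervised(input="train.txt", loss="hs", epoch=5)
# Predict the top label (and its probability) for a new piece of text
labels, probabilities = model.predict("this is a great product")
print(labels, probabilities)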
Gensim: Topic Modelling for Humans
Pros of Gensim
- Specialized in topic modeling and document similarity
- Efficient processing of large text corpora
- Extensive support for various word embedding models
Cons of Gensim
- Less comprehensive NLP pipeline compared to spaCy
- Limited support for named entity recognition and syntactic parsing
- Steeper learning curve for beginners in NLP
Code Comparison
spaCy:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for ent in doc.ents:
    print(ent.text, ent.label_)
Gensim:
from gensim import corpora, models
texts = [["apple", "buy", "startup"], ["uk", "billion", "dollar"]]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
lda_model = models.LdaModel(corpus, num_topics=2, id2word=dictionary)
Both libraries offer powerful NLP capabilities, but they focus on different aspects. spaCy provides a comprehensive NLP pipeline with excellent performance for various tasks, while Gensim specializes in topic modeling and document similarity. The choice between them depends on the specific NLP requirements of your project.
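Gensim's word-embedding support mentioned in the pros is similarly compact; a minimal Word2Vec sketch on a toy corpus, using gensim 4.x parameter names:
from gensim.models import Word2Vec
# Toy corpus: each document is a list of tokens
sentences = [["apple", "buy", "startup"], ["uk", "billion", "dollar"], ["apple", "uk", "deal"]]
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)
print(model.wv.most_similar("apple", topn=2))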
AllenNLP: An open-source NLP research library, built on PyTorch
Pros of AllenNLP
- More flexible and customizable for research and experimentation
- Stronger support for deep learning models and PyTorch integration
- Extensive documentation and tutorials for advanced NLP tasks
Cons of AllenNLP
- Steeper learning curve for beginners
- Slower processing speed for basic NLP tasks
- Less focus on production-ready, out-of-the-box solutions
Code Comparison
AllenNLP
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/bert-base-srl-2020.03.24.tar.gz")
result = predictor.predict(sentence="The cat sat on the mat.")
spaCy
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("The cat sat on the mat.")
for token in doc:
    print(token.text, token.pos_, token.dep_)
AllenNLP offers more flexibility for custom models and research, while spaCy provides faster, production-ready solutions for common NLP tasks. AllenNLP excels in deep learning and complex NLP problems, whereas spaCy is more user-friendly for basic language processing and named entity recognition.
README
spaCy: Industrial-strength NLP
spaCy is a library for advanced Natural Language Processing in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products.
spaCy comes with pretrained pipelines and currently supports tokenization and training for 70+ languages. It features state-of-the-art speed and neural network models for tagging, parsing, named entity recognition, text classification and more, multi-task learning with pretrained transformers like BERT, as well as a production-ready training system and easy model packaging, deployment and workflow management. spaCy is commercial open-source software, released under the MIT license.
Version 3.7 out now! Check out the release notes here.
Documentation
Documentation | |
---|---|
spaCy 101 | New to spaCy? Here's everything you need to know! |
Usage Guides | How to use spaCy and its features. |
New in v3.0 | New features, backwards incompatibilities and migration guide. |
Project Templates | End-to-end workflows you can clone, modify and run. |
API Reference | The detailed reference for spaCy's API. |
GPU Processing | Use spaCy with CUDA-compatible GPU processing. |
Models | Download trained pipelines for spaCy. |
Large Language Models | Integrate LLMs into spaCy pipelines. |
Universe | Plugins, extensions, demos and books from the spaCy ecosystem. |
spaCy VS Code Extension | Additional tooling and features for working with spaCy's config files. |
Online Course | Learn spaCy in this free and interactive online course. |
Blog | Read about current spaCy and Prodigy development, releases, talks and more from Explosion. |
Videos | Our YouTube channel with video tutorials, talks and more. |
Changelog | Changes and version history. |
Contribute | How to contribute to the spaCy project and code base. |
Swag | Support us and our work with unique, custom-designed swag! |
Custom NLP consulting, implementation and strategic advice by spaCy's core development team. Streamlined, production-ready, predictable and maintainable. Send us an email or take our 5-minute questionnaire, and we'll be in touch! Learn more →
Where to ask questions
The spaCy project is maintained by the spaCy team. Please understand that we won't be able to provide individual support via email. We also believe that help is much more valuable if it's shared publicly, so that more people can benefit from it.
Type | Platforms |
---|---|
Bug Reports | GitHub Issue Tracker |
Feature Requests & Ideas | GitHub Discussions |
Usage Questions | GitHub Discussions · Stack Overflow |
General Discussion | GitHub Discussions |
Features
- Support for 70+ languages
- Trained pipelines for different languages and tasks
- Multi-task learning with pretrained transformers like BERT
- Support for pretrained word vectors and embeddings
- State-of-the-art speed
- Production-ready training system
- Linguistically-motivated tokenization
- Components for named entity recognition, part-of-speech tagging, dependency parsing, sentence segmentation, text classification, lemmatization, morphological analysis, entity linking and more
- Easily extensible with custom components and attributes (see the sketch after this list)
- Support for custom models in PyTorch, TensorFlow and other frameworks
- Built-in visualizers for syntax and NER
- Easy model packaging, deployment and workflow management
- Robust, rigorously evaluated accuracy
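As a small illustration of the extensibility point above, a custom component can be registered with spaCy v3's @Language.component decorator and added to any pipeline; a minimal sketch:
import spacy
from spacy.language import Language

@Language.component("sentence_counter")
def sentence_counter(doc):
    # Toy component: report how many sentences the parser found, then pass the Doc on
    print(f"Doc has {len(list(doc.sents))} sentence(s)")
    return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("sentence_counter", last=True)
doc = nlp("This is the first sentence. This is another sentence.")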
For more details, see the facts, figures and benchmarks.
Install spaCy
For detailed installation instructions, see the documentation.
- Operating system: macOS / OS X · Linux · Windows (Cygwin, MinGW, Visual Studio)
- Python version: Python 3.7+ (only 64 bit)
- Package managers: pip · conda (via conda-forge)
pip
Using pip, spaCy releases are available as source packages and binary wheels.
Before you install spaCy and its dependencies, make sure that your pip, setuptools and wheel are up to date.
pip install -U pip setuptools wheel
pip install spacy
To install additional data tables for lemmatization and normalization you can run pip install spacy[lookups] or install spacy-lookups-data separately. The lookups package is needed to create blank models with lemmatization data, and to lemmatize in languages that don't yet come with pretrained models and aren't powered by third-party libraries.
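As a sketch of what the lookups data enables, a blank pipeline can be given a lookup-based lemmatizer; this assumes spacy-lookups-data is installed so the English tables can be loaded at initialization:
import spacy
nlp = spacy.blank("en")
# The lookup-mode lemmatizer pulls its tables from spacy-lookups-data when initialized
nlp.add_pipe("lemmatizer", config={"mode": "lookup"})
nlp.initialize()
doc = nlp("The cats were running")
print([token.lemma_ for token in doc])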
When using pip it is generally recommended to install packages in a virtual environment to avoid modifying system state:
python -m venv .env
source .env/bin/activate
pip install -U pip setuptools wheel
pip install spacy
conda
You can also install spaCy from conda via the conda-forge channel. For the feedstock including the build recipe and configuration, check out this repository.
conda install -c conda-forge spacy
Updating spaCy
Some updates to spaCy may require downloading new statistical models. If you're running spaCy v2.0 or higher, you can use the validate command to check if your installed models are compatible and if not, print details on how to update them:
pip install -U spacy
python -m spacy validate
If you've trained your own models, keep in mind that your training and runtime inputs must match. After updating spaCy, we recommend retraining your models with the new version.
For details on upgrading from spaCy 2.x to spaCy 3.x, see the migration guide.
Download model packages
Trained pipelines for spaCy can be installed as Python packages. This means that they're a component of your application, just like any other module. Models can be installed using spaCy's download command, or manually by pointing pip to a path or URL.
Documentation | |
---|---|
Available Pipelines | Detailed pipeline descriptions, accuracy figures and benchmarks. |
Models Documentation | Detailed usage and installation instructions. |
Training | How to train your own pipelines on your data. |
# Download best-matching version of specific model for your spaCy installation
python -m spacy download en_core_web_sm
# pip install .tar.gz archive or .whl from path or URL
pip install /Users/you/en_core_web_sm-3.0.0.tar.gz
pip install /Users/you/en_core_web_sm-3.0.0-py3-none-any.whl
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz
Loading and using models
To load a model, use spacy.load() with the model name or a path to the model data directory.
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence.")
You can also import a model directly via its full name and then call its load() method with no arguments.
import spacy
import en_core_web_sm
nlp = en_core_web_sm.load()
doc = nlp("This is a sentence.")
For more info and examples, check out the models documentation.
Compile from source
The other way to install spaCy is to clone its GitHub repository and build it from source. That is the common way if you want to make changes to the code base. You'll need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, virtualenv and git installed. The compiler part is the trickiest. How to do that depends on your system.
Platform | |
---|---|
Ubuntu | Install system-level dependencies via apt-get : sudo apt-get install build-essential python-dev git . |
Mac | Install a recent version of XCode, including the so-called "Command Line Tools". macOS and OS X ship with Python and git preinstalled. |
Windows | Install a version of the Visual C++ Build Tools or Visual Studio Express that matches the version that was used to compile your Python interpreter. |
For more details and instructions, see the documentation on compiling spaCy from source and the quickstart widget to get the right commands for your platform and Python version.
git clone https://github.com/explosion/spaCy
cd spaCy
python -m venv .env
source .env/bin/activate
# make sure you are using the latest pip
python -m pip install -U pip setuptools wheel
pip install -r requirements.txt
pip install --no-build-isolation --editable .
To install with extras:
pip install --no-build-isolation --editable .[lookups,cuda102]
Run tests
spaCy comes with an extensive test suite. In order to run the tests, you'll usually want to clone the repository and build spaCy from source. This will also install the required development dependencies and test utilities defined in the requirements.txt.
Alternatively, you can run pytest on the tests from within the installed spacy package. Don't forget to also install the test utilities via spaCy's requirements.txt:
pip install -r requirements.txt
python -m pytest --pyargs spacy