Top Related Projects
Jieba Chinese word segmentation
Chinese word segmentation, POS tagging, named entity recognition, dependency parsing, constituency parsing, semantic dependency parsing, semantic role labeling, coreference resolution, style transfer, semantic similarity, new word discovery, keyphrase extraction, automatic summarization, text classification and clustering, pinyin and simplified/traditional conversion, natural language processing
Python library for processing Chinese text
Baidu NLP: word segmentation, POS tagging, named entity recognition, word importance
Keras implementation of transformers for humans
Quick Overview
Synonyms is a Chinese Natural Language Processing (NLP) library for word similarity and sentence similarity calculations. It provides tools for semantic analysis, word embedding, and related NLP tasks specifically tailored for the Chinese language.
Pros
- Specialized for Chinese language processing
- Offers both word and sentence similarity calculations
- Includes pre-trained word vectors for immediate use
- Supports custom word vectors and dictionaries
Cons
- Limited documentation in English
- May require additional resources for optimal performance
- Focused primarily on similarity calculations, lacking broader NLP features
- Relatively small community compared to more general-purpose NLP libraries
Code Examples
- Calculate word similarity:
import synonyms
r = synonyms.compare('北京', '上海')
print(r) # Output: 0.63395087
- Find synonyms for a given word:
import synonyms
words, scores = synonyms.nearby('北京')
print(words[:3]) # Output: ['首都', '城市', '中国']
print(scores[:3]) # Output: [0.896611, 0.754657, 0.74305]
- Calculate sentence similarity:
import synonyms
sen1 = '我喜欢吃苹果'
sen2 = '我喜欢吃香蕉'
r = synonyms.compare(sen1, sen2, seg=True)
print(r) # Output: 0.9285714285714286
Getting Started
To get started with Synonyms:
- Install the library:
pip install synonyms
- Import and use in your Python script:
import synonyms
# Calculate word similarity
similarity = synonyms.compare('北京', '上海')
print(f"Similarity between '北京' and '上海': {similarity}")
# Find synonyms
words, scores = synonyms.nearby('学习')
print(f"Synonyms for '学习': {words[:5]}")
print(f"Scores: {scores[:5]}")
Note: Make sure you have sufficient disk space (about 1.5GB) for the pre-trained word vectors.
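The disk-space requirement above can be checked programmatically before installing. A minimal sketch using only Python's standard library (the mount point `"/"` is an assumption; adjust it for your platform):

```python
import shutil

# Query total/used/free bytes for the filesystem holding "/".
usage = shutil.disk_usage("/")
free_gb = usage.free / (1024 ** 3)

# The pre-trained word vectors need roughly 1.5 GB of free space.
if free_gb < 1.5:
    print(f"Only {free_gb:.1f} GB free - not enough for the model download")
else:
    print(f"{free_gb:.1f} GB free - enough for the model download")
```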
Competitor Comparisons
Jieba Chinese word segmentation
Pros of jieba
- More mature and widely adopted Chinese text segmentation library
- Supports multiple segmentation modes (accurate, full, search engine)
- Extensive documentation and community support
Cons of jieba
- Primarily focused on word segmentation, not synonyms
- Less emphasis on semantic analysis and word relationships
Code Comparison
jieba:
import jieba
seg_list = jieba.cut("我来到北京清华大学", cut_all=False)
print("Default Mode: " + "/ ".join(seg_list))
Synonyms:
import synonyms
words = synonyms.seg("我来到北京清华大学")
print(words)
Key Differences
- jieba is primarily a word segmentation tool, while Synonyms focuses on both segmentation and synonym generation
- Synonyms provides word vectors and similarity calculations, which are not core features of jieba
- jieba offers more granular control over segmentation modes, while Synonyms emphasizes semantic understanding
Use Cases
- jieba: Best for applications requiring precise Chinese word segmentation
- Synonyms: Ideal for projects needing both segmentation and semantic analysis, such as text similarity comparisons or synonym suggestions
Community and Maintenance
- jieba: Larger user base, more frequent updates, and extensive third-party integrations
- Synonyms: Smaller but growing community, with a focus on semantic analysis and NLP applications
Chinese word segmentation, POS tagging, named entity recognition, dependency parsing, constituency parsing, semantic dependency parsing, semantic role labeling, coreference resolution, style transfer, semantic similarity, new word discovery, keyphrase extraction, automatic summarization, text classification and clustering, pinyin and simplified/traditional conversion, natural language processing
Pros of HanLP
- More comprehensive NLP toolkit with broader functionality
- Better support for traditional Chinese characters
- More active development and frequent updates
Cons of HanLP
- Steeper learning curve due to more complex API
- Larger library size, potentially impacting performance
- Requires more setup and configuration
Code Comparison
HanLP:
from pyhanlp import *
text = "我爱北京天安门"
print(HanLP.segment(text))
Synonyms:
import synonyms
words = synonyms.seg("我爱北京天安门")
print(words)
Both libraries offer word segmentation functionality, but HanLP provides more detailed output with part-of-speech tagging. Synonyms focuses primarily on word similarity and segmentation, while HanLP offers a wider range of NLP tasks.
HanLP is better suited for projects requiring advanced NLP capabilities in Chinese, including named entity recognition, dependency parsing, and more. Synonyms is more appropriate for simpler tasks focused on word relationships and basic segmentation.
Consider your project's specific needs, performance requirements, and the level of NLP functionality required when choosing between these libraries.
Python library for processing Chinese text
Pros of SnowNLP
- Broader functionality including sentiment analysis, text classification, and word segmentation
- Includes tools for pinyin conversion and simplified/traditional Chinese conversion
- More comprehensive documentation and examples
Cons of SnowNLP
- Less focused on synonyms and semantic similarity
- May require more setup and configuration for specific tasks
- Not as actively maintained (last update was in 2020)
Code Comparison
SnowNLP example:
from snownlp import SnowNLP
s = SnowNLP(u'这个东西真心很赞')
print(s.sentiments) # Sentiment analysis
print(s.pinyin) # Pinyin conversion
Synonyms example:
import synonyms
print(synonyms.nearby('人脸'))
print(synonyms.compare('北京', '上海', seg=True))
Summary
SnowNLP offers a wider range of NLP functionalities for Chinese text processing, including sentiment analysis and text classification. However, it's less focused on synonyms and semantic similarity compared to Synonyms. SnowNLP provides more comprehensive documentation but hasn't been updated as recently as Synonyms. The choice between the two depends on the specific NLP tasks required for your project.
Baidu NLP: word segmentation, POS tagging, named entity recognition, word importance
Pros of LAC
- Offers comprehensive Chinese language processing capabilities, including word segmentation, part-of-speech tagging, and named entity recognition
- Provides pre-trained models for various domains, enhancing accuracy and performance
- Supports both Python and C++ interfaces, offering flexibility for different development environments
Cons of LAC
- Primarily focused on Chinese language processing, limiting its applicability for other languages
- May require more computational resources due to its comprehensive feature set
- Has a steeper learning curve compared to simpler synonym-focused libraries
Code Comparison
LAC:
from LAC import LAC
lac = LAC(mode='lac')
text = "LAC是个优秀的中文处理工具"
result = lac.run(text)
print(result)
Synonyms:
import synonyms
word = "优秀"
result = synonyms.nearby(word)  # avoid shadowing the module name
print(result)
Key Differences
LAC is a more comprehensive Chinese language processing tool, offering a wide range of features beyond synonym detection. It's well-suited for complex NLP tasks in Chinese. Synonyms, on the other hand, is more focused on providing synonym functionality and is simpler to use for basic word similarity tasks. LAC may be preferred for large-scale Chinese NLP projects, while Synonyms could be more appropriate for quick synonym lookups or simpler language processing needs.
Keras implementation of transformers for humans
Pros of bert4keras
- More comprehensive and flexible BERT implementation
- Supports multiple BERT variants and architectures
- Better suited for advanced NLP tasks beyond synonyms
Cons of bert4keras
- Steeper learning curve and more complex setup
- Requires more computational resources
- May be overkill for simple synonym-related tasks
Code Comparison
Synonyms:
import synonyms
synonyms.nearby("你好")
bert4keras:
from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer
model = build_transformer_model(config_path, checkpoint_path)
tokenizer = Tokenizer(dict_path)
Summary
Synonyms is a lightweight library focused specifically on Chinese synonym detection and word similarity. It's easy to use and suitable for simple NLP tasks. bert4keras, on the other hand, is a more powerful and versatile BERT implementation that can handle a wide range of NLP tasks but requires more setup and resources. Choose Synonyms for quick synonym-related projects, and bert4keras for more complex NLP applications requiring BERT's capabilities.
README
Synonyms
Chinese Synonyms for Natural Language Processing and Understanding.
Better Chinese synonyms: a toolkit for chatbots and intelligent question answering.
synonyms can be used in many natural language understanding tasks: text alignment, recommendation algorithms, similarity computation, semantic shift, keyword extraction, concept extraction, automatic summarization, search engines, and more.
To provide a stable, reliable, and continuously optimized service, Synonyms has switched to the Chunsong Public License, v1.0, and charges for downloads of the machine learning models; see the License Store for details. Previous contributors (notably code contributors) may contact us to discuss the fees. -- Chatopera Inc. @ Oct. 2023
Table of Contents:
- Install
- Usage
- Quick Get Start
- Evaluation
- Benchmark
- Statement
- References
- Frequently Asked Questions
- License
Welcome
Follow steps below to install and activate packages.
1/3 Install the Source Package
pip install -U synonyms
Current stable version: v3.x.
2/3 Configure the license id
Synonyms's machine learning model packages require a license from the Chatopera License Store: first purchase a license, then copy the license id from the certificate detail page (click 「复制证书标识」, i.e. "Copy License ID").
Second, set the environment variable in your terminal or shell scripts as shown below.
- For Shell Users
e.g. Shell, CMD Scripts on Linux, Windows, macOS.
# Linux / macOS
export SYNONYMS_DL_LICENSE=YOUR_LICENSE
## e.g. if your license id is `FOOBAR`, run `export SYNONYMS_DL_LICENSE=FOOBAR`
# Windows
## 1/2 Command Prompt
set SYNONYMS_DL_LICENSE=YOUR_LICENSE
## 2/2 PowerShell
$env:SYNONYMS_DL_LICENSE='YOUR_LICENSE'
- For Python Code Users
Jupyter Notebook, etc.
import os
os.environ["SYNONYMS_DL_LICENSE"] = "YOUR_LICENSE"
_licenseid = os.environ.get("SYNONYMS_DL_LICENSE", None)
print("SYNONYMS_DL_LICENSE=", _licenseid)
Note: on first use after installation, the word-vector file is downloaded; download speed depends on your network.
3/3 Download Model Package
Last, download the model package by command or script -
python -c "import synonyms; synonyms.display('能量')" # download word vectors file
Usage
Both the word-segmentation dictionary and the word2vec word-vector file can be configured with environment variables.
Environment variable | Description |
---|---|
SYNONYMS_WORD2VEC_BIN_MODEL_ZH_CN | word-vector file trained with word2vec, in binary format |
SYNONYMS_WORDSEG_DICT | main dictionary for Chinese word segmentation; see the reference for format and usage |
SYNONYMS_DEBUG | ["TRUE"|"FALSE"], whether to print debug logs; set to "TRUE" to enable, default "FALSE" |
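These variables must be set before the package is imported. A hedged sketch (the file paths are hypothetical placeholders, not real model files):

```python
import os

# Point Synonyms at custom resources before importing it.
# Both paths below are made-up placeholders for illustration.
os.environ["SYNONYMS_WORD2VEC_BIN_MODEL_ZH_CN"] = "/data/models/words.vector.bin"
os.environ["SYNONYMS_WORDSEG_DICT"] = "/data/dicts/vocab.txt"
os.environ["SYNONYMS_DEBUG"] = "TRUE"

# import synonyms  # the package reads these variables at import time
```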
synonyms#nearby(word [, size = 10])
import synonyms
print("人脸: ", synonyms.nearby("人脸"))
print("识别: ", synonyms.nearby("识别"))
print("NOT_EXIST: ", synonyms.nearby("NOT_EXIST"))
synonyms.nearby(WORD [, SIZE]) returns a tuple with two items: ([nearby_words], [nearby_words_score]). nearby_words is a list of WORD's synonyms, ordered from nearest to farthest; nearby_words_score holds, at the corresponding positions, the distance scores of the words in nearby_words. Scores lie in the (0-1) interval; the closer to 1, the more similar. SIZE is the number of words to return, default 10. For example:
synonyms.nearby("人脸", 10) = (
["图片", "图像", "通过观察", "数字图像", "几何图形", "脸部", "图象", "放大镜", "面孔", "Mii"],
[0.597284, 0.580373, 0.568486, 0.535674, 0.531835, 0.530095, 0.525344, 0.524009, 0.523101, 0.516046])
For OOV words, ([], []) is returned. The current vocabulary size is 435,729.
synonyms#compare(sen1, sen2 [, seg=True])
Similarity comparison of two sentences:
sen1 = "发生历史性变革"
sen2 = "发生历史性变革"
r = synonyms.compare(sen1, sen2, seg=True)
The parameter seg controls whether synonyms.compare segments sen1 and sen2 first; it defaults to True. The return value lies in [0-1]; the closer to 1, the more similar the two sentences. For example:
旗帜引领方向 vs 道路决定命运: 0.429
旗帜引领方向 vs 旗帜指引道路: 0.93
发生历史性变革 vs 发生历史性变革: 1.0
synonyms#display(word [, size = 10])
Prints synonyms in a friendly way for easier debugging. display(WORD [, SIZE]) calls the synonyms#nearby method.
>>> synonyms.display("飞机")
'飞机'近义词:
1. 飞机:1.0
2. 直升机:0.8423391
3. 客机:0.8393003
4. 滑翔机:0.7872388
5. 军用飞机:0.7832081
6. 水上飞机:0.77857226
7. 运输机:0.7724742
8. 航机:0.7664748
9. 航空器:0.76592904
10. 新飞机:0.74209654
SIZE is the number of words to print, default 10.
synonyms#describe()
Prints description info of the current package:
>>> synonyms.describe()
Vocab size in vector model: 435729
model_path: /Users/hain/chatopera/Synonyms/synonyms/data/words.vector.gz
version: 3.18.0
{'vocab_size': 435729, 'version': '3.18.0', 'model_path': '/chatopera/Synonyms/synonyms/data/words.vector.gz'}
synonyms#v(word)
Gets the vector of a word as a numpy array; raises KeyError when the word is out of vocabulary.
>>> synonyms.v("é£æº")
array([-2.412167 , 2.2628384 , -7.0214124 , 3.9381874 , 0.8219283 ,
-3.2809453 , 3.8747153 , -5.217062 , -2.2786229 , -1.2572327 ],
dtype=float32)
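The similarity between two word vectors like those returned by synonyms.v can be computed as cosine similarity. A minimal numpy sketch with made-up vectors (not values from the model; the library's own compare may combine scores differently):

```python
import numpy as np

def cosine(v1, v2):
    """Cosine similarity of two dense vectors, in [-1, 1]."""
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Illustrative vectors only; real ones would come from e.g. synonyms.v("飞机").
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
print(cosine(a, b))  # parallel vectors, so approximately 1.0
```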
synonyms#sv(sentence, ignore=False)
Gets the vector of a segmented sentence; the vector is composed in a bag-of-words (BoW) manner.
sentence: the sentence, with its tokens joined by spaces after segmentation.
ignore: whether to ignore OOV words; when False, a random vector is generated for them.
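The BoW composition described above can be sketched as averaging the token vectors. A simplified illustration with a two-word toy vocabulary (the real implementation in Synonyms may weight tokens differently; only the OOV handling mirrors the documented ignore flag):

```python
import numpy as np

# Toy word-vector table standing in for the real word2vec model.
vectors = {
    "中文": np.array([1.0, 0.0]),
    "近义词": np.array([0.0, 1.0]),
}

def sentence_vector(sentence, ignore=False, dim=2, seed=0):
    """Bag-of-words sentence vector: the average of the token vectors.

    Tokens are space-separated, mirroring synonyms#sv's input format.
    OOV tokens are skipped when ignore=True, otherwise replaced by a
    random vector (matching the documented ignore=False behavior).
    """
    rng = np.random.default_rng(seed)
    vecs = []
    for token in sentence.split():
        if token in vectors:
            vecs.append(vectors[token])
        elif not ignore:
            vecs.append(rng.uniform(-1, 1, dim))
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

print(sentence_vector("中文 近义词"))  # average of the two vectors: [0.5 0.5]
```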
synonyms#seg(sentence)
Chinese word segmentation:
synonyms.seg("中文近义词工具包")
The result is a tuple of two lists: the tokens and the corresponding part-of-speech tags.
(['中文', '近义词', '工具包'], ['nz', 'n', 'n'])
This segmentation does not remove stop words or punctuation.
synonyms#keywords(sentence [, topK=5, withWeight=False])
Extracts keywords; by default, keywords are ranked by importance.
keywords = synonyms.keywords("9月15日以来,台积电、高通、三星等华为的重要合作伙伴,只要没有美国的相关许可证,都无法供应芯片给华为,而中芯国际等国产芯片企业,也因采用美国技术,而无法供货给华为。目前华为部分型号的手机产品出现货少的现象,若该形势持续下去,华为手机业务将遭受重创。")
Contribution
To get more logs for debugging, set the environment variable:
SYNONYMS_DEBUG=TRUE
PCA
Principal component analysis, taking "人脸" (face) as the main example:
Quick Get Start
$ pip install -r Requirements.txt
$ python demo.py
Change logs
Notes on updates.
Voice of Users
What users say:
Data
The data is built from the wikidata-corpus.
Evaluation
同义词词林 (Cilin)
《同义词词林》 (Cilin) was compiled by Mei Jiaju et al. in 1983. The version in wide use today is the Extended Cilin maintained by the Research Center for Social Computing and Information Retrieval at Harbin Institute of Technology. It divides Chinese vocabulary into fine-grained major and minor categories and organizes the relations between words. The extended edition contains more than 70,000 entries, of which more than 30,000 are shared as open data.
知网, HowNet
HowNet, also known as 知网, is not merely a semantic dictionary but a knowledge system; the relations between words are one of its basic use cases. HowNet contains more than 80,000 entries.
The common international benchmark for word-similarity algorithms is the set of human judgments on English word pairs published by Miller & Charles. The set consists of 30 English word pairs: 10 highly related, 10 moderately related, and 10 weakly related. 38 subjects judged the semantic relatedness of these 30 pairs, and the averages of their ratings serve as the human gold standard. Different synonym tools then score these pairs for similarity and are compared against the human standard, for example using the Pearson correlation coefficient. In the Chinese domain, using a translated version of this word list for Chinese synonym comparison is also a common practice.
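The comparison of a tool's scores against the Miller & Charles human judgments can be sketched with a Pearson correlation. A toy example with fabricated scores (not the actual benchmark data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: human judgments vs. tool scores for five word pairs.
human = [0.9, 0.7, 0.5, 0.3, 0.1]
tool = [0.85, 0.75, 0.45, 0.35, 0.05]
print(round(pearson(human, tool), 3))
```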
Comparison
Synonyms has a vocabulary of 435,729 words. Below, a few words that exist in Cilin, HowNet, and Synonyms are selected, and their similarity scores are compared:
Note: the Cilin and HowNet data and scores come from the cited source. Synonyms is continuously being optimized, so newer scores may differ from the table above.
More comparison results.
Used by
Benchmark
Tested with Python 3 on a MacBook Pro.
python benchmark.py
++++++++++ OS Name and version ++++++++++
Platform: Darwin
Kernel: 16.7.0
Architecture: ('64bit', '')
++++++++++ CPU Cores ++++++++++
Cores: 4
CPU Load: 60
++++++++++ System Memory ++++++++++
meminfo 8GB
synonyms#nearby: 100000 loops, best of 3 epochs: 0.209 usec per loop
Live Sharing
Live sharing transcript: Synonyms Chinese synonym toolkit @ 2018-02-07
Statement
Synonyms is released under the MIT license. The data and programs can be used in research and commercial products, provided the citation and address are acknowledged in any published content, such as media, journals, magazines, or blogs.
@online{Synonyms:hain2017,
author = {Hai Liang Wang, Hu Ying Xi},
title = {中文近义词工具包 Synonyms},
year = 2017,
url = {https://github.com/chatopera/Synonyms},
urldate = {2017-09-27}
}
References
Derivation of the word2vec principle and code analysis
Frequently Asked Questions (FAQ)
- Is adding words to the vocabulary supported?
No. For more details, see #5.
- Which tool was used to train the word vectors?
word2vec, released by Google. The library is written in C, is memory-efficient, and trains quickly. gensim can load the model files produced by word2vec.
- What method is used for similarity computation?
Authors
A recommended introductory book and reference for natural language processing
This book was co-authored by the author of Synonyms.
Quick purchase link
《智能问答与深度学习》 (Intelligent Question Answering and Deep Learning) serves students and software engineers preparing to get started with machine learning and natural language processing. It introduces many principles and algorithms in theory, and also provides many example programs for hands-on practice. These programs are collected in an example code repository and are mainly meant to help readers understand the principles and algorithms; you are welcome to download and run them. The repository address is:
https://github.com/l11x0m7/book-of-qna-code
Give credits to
SentenceSim: similarity evaluation corpus
License
Chunsong Public License, version 1.0
Project Sponsor
Chatopera Cloud Service
Chatopera Cloud Service is a one-stop cloud service for building chatbots, billed by the number of API calls. It is the software-as-a-service instance of the Chatopera bot platform; built on cloud computing, it delivers chatbots as a service.
The Chatopera bot platform includes components such as knowledge bases, multi-turn dialogue, intent recognition, and speech recognition, standardizing chatbot development and supporting scenarios such as enterprise OA smart Q&A, HR smart Q&A, intelligent customer service, and online marketing. Enterprise IT and business departments can quickly bring chatbots online with Chatopera Cloud Service!