
InsaneLife/ChineseNLPCorpus

Chinese NLP datasets, collected as materials for everyday experiments. Contributions and pull requests are welcome.


Top Related Projects

大规模中文自然语言处理语料 Large Scale Chinese Corpus for NLP

Quick Overview

The InsaneLife/ChineseNLPCorpus repository is a collection of various Chinese natural language processing (NLP) datasets, including news articles, social media posts, and other textual data. This repository aims to provide a comprehensive resource for researchers and developers working on Chinese NLP tasks.

Pros

  • Diverse Datasets: The repository contains a wide range of Chinese NLP datasets, covering different domains and genres, which can be useful for various NLP tasks.
  • Open-Source: The datasets are freely available and can be used for research and development purposes, promoting collaboration and advancement in the field.
  • Regularly Updated: The repository is actively maintained, and new datasets are added periodically, ensuring the availability of the latest resources.
  • Detailed Documentation: The repository provides detailed documentation for each dataset, including descriptions, usage instructions, and licensing information.

Cons

  • Uneven Quality: The quality and preprocessing of the datasets may vary, as they are contributed by different sources, which can introduce challenges in consistent usage.
  • Limited Metadata: Some datasets may lack detailed metadata or annotations, which can limit their usefulness for specific NLP tasks.
  • Language Barrier: The documentation and dataset descriptions are primarily in Chinese, which may pose a challenge for non-Chinese-speaking users.
  • Potential Licensing Issues: While the datasets are open-source, users should carefully review the licensing terms for each dataset to ensure compliance.

Getting Started

To use the datasets from the InsaneLife/ChineseNLPCorpus repository, follow these steps:

  1. Clone the repository to your local machine:
git clone https://github.com/InsaneLife/ChineseNLPCorpus.git
  2. Navigate to the cloned repository:
cd ChineseNLPCorpus
  3. Explore the available datasets in the data directory. Each dataset has its own subdirectory with a README file providing detailed information about the dataset, including its description, format, and usage instructions.

  4. Depending on the dataset you want to use, follow the specific instructions in the corresponding README file to download, preprocess, and utilize the data for your NLP tasks.

  5. If you encounter any issues or have questions, refer to the repository's documentation or create a new issue on the GitHub repository.

Competitor Comparisons

大规模中文自然语言处理语料 Large Scale Chinese Corpus for NLP

Pros of brightmart/nlp_chinese_corpus

  • Larger dataset: brightmart/nlp_chinese_corpus contains a more extensive collection of Chinese natural language processing (NLP) datasets, including news articles, social media posts, and other textual data.
  • Diverse data sources: The corpus includes data from various sources, such as news websites, social media platforms, and online forums, providing a more comprehensive representation of Chinese language usage.
  • Detailed documentation: The repository provides detailed documentation, including descriptions of the datasets, their sources, and instructions for using the data.

Cons of brightmart/nlp_chinese_corpus

  • Potential data quality issues: As the corpus is compiled from various online sources, there may be concerns about the accuracy, reliability, and consistency of the data.
  • Limited preprocessing: Compared to InsaneLife/ChineseNLPCorpus, brightmart/nlp_chinese_corpus may have less extensive data preprocessing and cleaning, which could impact the usability of the data for certain NLP tasks.
  • Licensing and usage restrictions: The licensing and usage terms for the datasets in brightmart/nlp_chinese_corpus may be less permissive than those in InsaneLife/ChineseNLPCorpus, potentially limiting the flexibility of how the data can be used.

Code Comparison

Here's a brief code comparison between the two repositories:

InsaneLife/ChineseNLPCorpus:

import pandas as pd

# Load the dataset
df = pd.read_csv('path/to/dataset.csv')

# Preprocess the data
df['text'] = df['text'].str.replace('\n', ' ')
df['text'] = df['text'].str.lower()

brightmart/nlp_chinese_corpus:

import os
import json

# Load the dataset
with open('path/to/dataset.json', 'r', encoding='utf-8') as f:
    data = json.load(f)

The key differences are that InsaneLife/ChineseNLPCorpus uses a CSV file format and includes some basic data preprocessing, while brightmart/nlp_chinese_corpus uses a JSON file format without any apparent preprocessing steps in the provided code snippet.

README

[TOC]

ChineseNlpCorpus

Chinese NLP datasets, collected as materials for everyday experiments. Contributions and pull requests are welcome.

Reading Comprehension

By method, reading comprehension datasets fall mainly into extractive and classification (opinion extraction) types; by document scope, they divide into single-passage and multi-passage. For some questions the answer must be assembled from several documents, each contributing only a part, so multi-passage extraction has to decide how to merge the pieces, removing duplicated content while keeping complementary content.

| Name | Scale | Description | Organization | Paper | Download | Evaluation |
|---|---|---|---|---|---|---|
| DuReader | 300k questions, 1.4M documents, 660k answers | QA-style reading comprehension dataset | Baidu | link | link | 2018 NLP Challenge on MRC; 2019 Language and Intelligence Challenge on MRC |
| $DuReader_{robust}$ | 22k questions | single-passage, extractive reading comprehension dataset | Baidu | | link | evaluation |
| CMRC 2018 | 20k questions | span-extraction reading comprehension | HIT & iFLYTEK Joint Lab | link | link | 2nd "iFLYTEK Cup" Chinese Machine Reading Comprehension Evaluation |
| $DuReader_{yesno}$ | 90k | opinion (yes/no) reading comprehension dataset | Baidu | | link | evaluation |
| $DuReader_{checklist}$ | 10k | extractive dataset | Baidu | | link | |
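Extractive datasets such as CMRC 2018 are commonly distributed in SQuAD-style JSON. Below is a minimal reader sketch assuming the standard SQuAD field names (`data`/`paragraphs`/`qas`); check each dataset's own README before relying on this layout:

```python
import json

def iter_mrc_examples(path):
    """Yield (question, context, answer_texts) from a SQuAD-style JSON file."""
    with open(path, encoding="utf-8") as f:
        dataset = json.load(f)["data"]
    for article in dataset:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                answers = [a["text"] for a in qa.get("answers", [])]
                yield qa["question"], context, answers
```

This streams examples one at a time, which keeps memory flat even for the larger DuReader-scale files.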

Task-Oriented Dialogue Data

Medical DS

A Chinese medical-diagnosis dataset for task-oriented dialogue, released by Fudan University and built from real conversations on Baidu Muzhi Doctor.

| Name | Scale | Created | Authors | Organization | Paper | Download |
|---|---|---|---|---|---|---|
| Medical DS | 710 dialogues, 67 symptoms, 4 diseases | 2018 | Liu et al. | Fudan University | link | link |

Qianyan (LUGE) Datasets

Includes knowledge-grounded dialogue, recommendation dialogue, and persona-based dialogue; see the official site for details. Qianyan hosts many more datasets: https://www.luge.ai/#/

CATSLU

Earlier dialogue datasets focus on semantic understanding, while in real industrial settings ASR errors are common and are usually ignored. CATSLU is a Chinese speech + NLU dialogue dataset that supports end-to-end experiments from the audio signal to understanding, for example modeling language understanding directly from phones rather than words or tokens.

Data statistics: (statistics figure omitted)

Official manual: CATSLU. Data download: https://sites.google.com/view/CATSLU/home

NLPCC2018 Shared Task 4

Dialogue logs from a real commercial Chinese in-car spoken task-oriented dialogue system.

| Name | Scale | Created | Authors | Organization | Paper | Download | Evaluation |
|---|---|---|---|---|---|---|---|
| NLPCC2018 Shared Task 4 | 5,800 dialogues, 26k questions | 2018 | Zhao et al. | Tencent | link | train/dev set; test set | NLPCC 2018 Spoken Language Understanding in Task-oriented Dialog Systems |

NLPCC is held every year and includes many Chinese datasets, covering tasks such as dialogue, QA, NER, emotion detection, and summarization.

SMP

This is a series of datasets, with new ones released every year.

SMP-2020-ECDT Few-Shot Dialogue Language Understanding Dataset

Called the FewJoint benchmark in the paper, it is built from real user utterances on the iFLYTEK AIUI open platform and expert-constructed utterances (roughly 3:7). It covers 59 real domains, making it one of the dialogue datasets with the most domains to date; using real domains avoids having to construct simulated ones, so it is well suited to evaluating few-shot and meta-learning methods. The split is 45 training domains, 5 development domains, and 9 test domains.

Dataset introduction: news link

Paper: https://arxiv.org/abs/2009.08138
Download: https://atmahou.github.io/attachments/FewJoint.zip
Few-shot toolkit homepage: https://github.com/AtmaHou/MetaDialog

SMP-2019-NLU

A dataset covering three subtasks: domain classification, intent detection, and slot filling. Training set download: train.json. Only the training set is currently available; if you have the test set, please contribute it.

| | Train |
|---|---|
| Domain | 24 |
| Intent | 29 |
| Slot | 63 |
| Samples | 2,579 |
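A joint NLU sample ties the three subtasks together. The sketch below converts a span-style slot annotation into per-character BIO tags; the record layout shown is hypothetical, and the actual train.json schema may differ:

```python
# A hypothetical joint-NLU record; field names are illustrative only.
sample = {
    "text": "帮我订明天去北京的机票",
    "domain": "flight",
    "intent": "book_ticket",
    "slots": {"departure_date": "明天", "destination": "北京"},
}

def slots_to_bio(text, slots):
    """Convert span-style slot annotations to per-character BIO tags."""
    tags = ["O"] * len(text)
    for name, value in slots.items():
        start = text.find(value)  # locate the slot value in the utterance
        if start == -1:
            continue
        tags[start] = f"B-{name}"
        for i in range(start + 1, start + len(value)):
            tags[i] = f"I-{name}"
    return tags
```

Character-level BIO tagging sidesteps word segmentation errors, which is why most Chinese slot-filling baselines operate on characters.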

SMP-2017

Chinese dialogue intent classification dataset; official git and data: https://github.com/HITlilingzhi/SMP2017ECDT-DATA

Dataset statistics:

| | Count |
|---|---|
| Train samples | 2,299 |
| Dev samples | 770 |
| Test samples | 666 |
| Domains | 31 |

Paper: https://arxiv.org/abs/1709.10217

Text Classification

News Classification

  • Toutiao Chinese news (short text) classification dataset: https://github.com/fateleak/toutiao-text-classfication-dataset
    • Size: about 380,000 samples across 15 classes.
    • Collected: May 2018.
    • Split 0.7/0.15/0.15 into train/dev/test.
  • Tsinghua news classification corpus (THUCNews):
    • Generated by filtering historical data from Sina News RSS feeds between 2005 and 2011.
    • Size: 740,000 news documents (2.19 GB).
    • For small-scale experiments, you can select a subset of classes: sports, finance, real estate, home, education, technology, fashion, politics, games, entertainment.
    • http://thuctc.thunlp.org/#%E8%8E%B7%E5%8F%96%E9%93%BE%E6%8E%A5
    • RNN and CNN experiments: https://github.com/gaussic/text-classification-cnn-rnn
  • USTC news classification corpus: http://www.nlpir.org/?action-viewnews-itemid-145
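The 0.7/0.15/0.15 train/dev/test split mentioned above can be reproduced with a seeded shuffle; a minimal sketch:

```python
import random

def split_dataset(samples, train_ratio=0.7, dev_ratio=0.15, seed=42):
    """Shuffle once with a fixed seed, then cut into train/dev/test."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # fixed seed makes the split reproducible
    n_train = int(len(samples) * train_ratio)
    n_dev = int(len(samples) * dev_ratio)
    return (samples[:n_train],
            samples[n_train:n_train + n_dev],
            samples[n_train + n_dev:])
```

Fixing the seed matters: if every experiment reshuffles differently, dev-set comparisons across runs become meaningless.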

Sentiment / Opinion / Review Polarity Analysis

| Dataset | Overview | Download |
|---|---|---|
| ChnSentiCorp_htl_all | 7,000+ hotel reviews: 5,000+ positive, 2,000+ negative | link |
| waimai_10k | user reviews from a food-delivery platform: ~4,000 positive, ~8,000 negative | link |
| online_shopping_10_cats | 10 categories, 60,000+ reviews, ~30,000 each positive and negative; covers books, tablets, phones, fruit, shampoo, water heaters, Mengniu, clothing, computers, hotels | link |
| weibo_senti_100k | 100,000+ sentiment-labeled Sina Weibo posts, ~50,000 each positive and negative | link |
| simplifyweibo_4_moods | 360,000+ sentiment-labeled Sina Weibo posts with 4 emotions: ~200,000 joy, ~50,000 each of anger, disgust, and depression | link |
| dmsc_v2 | 28 movies, 700,000+ users, 2M+ ratings/reviews | link |
| yf_dianping | 240,000 restaurants, 540,000 users, 4.4M reviews/ratings | link |
| yf_amazon | 520,000 products, 1,100+ categories, 1.42M users, 7.2M reviews/ratings | link |
| Baidu Qianyan sentiment analysis datasets | sentence-level sentiment classification, aspect-level sentiment classification, and opinion target extraction | link |

Named Entity Recognition, POS Tagging & Word Segmentation

The datasets collected in these three links are also fairly comprehensive:

Syntax & Semantic Parsing

Dependency Parsing

Semantic Parsing

| Dataset | Single/Multi Table | Language | Complexity | Databases/Tables | Train | Dev | Test | Docs |
|---|---|---|---|---|---|---|---|---|
| NL2SQL | single | Chinese | simple | 5,291/5,291 | 41,522 | 4,396 | 8,141 | NL2SQL |
| CSpider | multi | Chinese & English | complex | 166/876 | 6,831 | 954 | 1,906 | CSpider |
| DuSQL | multi | Chinese | complex | 200/813 | 22,521 | 2,482 | 3,759 | DuSQL |
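NL2SQL-style examples label each question with a structured query rather than a raw SQL string. The record layout and operator codes below are illustrative assumptions and should be checked against each dataset's documentation; the sketch renders such a label into a readable SQL string:

```python
# Hypothetical aggregate and comparison operator codes; real codes may differ.
AGG = {0: "", 1: "AVG", 2: "MAX", 3: "MIN", 4: "COUNT", 5: "SUM"}
OPS = {0: ">", 1: "<", 2: "==", 3: "!="}

def render_sql(example, headers):
    """Render a structured query label into a readable SQL string."""
    sel = ", ".join(
        f"{AGG[a]}({headers[c]})" if AGG[a] else headers[c]
        for c, a in zip(example["sql"]["sel"], example["sql"]["agg"])
    )
    conds = " AND ".join(
        f"{headers[c]} {OPS[o]} {v}" for c, o, v in example["sql"]["conds"]
    )
    return f"SELECT {sel} FROM t" + (f" WHERE {conds}" if conds else "")
```

Rendering the structured label back to SQL like this is a handy sanity check when debugging a text-to-SQL model's predictions.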

Information Extraction

Search & Matching

Qianyan Text Similarity

Baidu Qianyan text similarity, mainly comprising LCQMC, BQ Corpus, and PAWS-X; see the official site. These rich text-matching datasets can serve as source-domain data for a target matching dataset via multi-task or transfer learning.

OPPO Mobile Search Ranking

Query-title semantic matching dataset from OPPO mobile search ranking.

Link: https://pan.baidu.com/s/1KzLK_4Iv0CHOkkut7TJBkA?pwd=ju52 (extraction code: ju52)

Web Search Result Evaluation (SogouE)

Recommender Systems

| Dataset | Overview | Download |
|---|---|---|
| ez_douban | 50,000+ movies (30,000+ with titles, 20,000+ without), 28,000 users, 2.8M ratings | link |
| dmsc_v2 | 28 movies, 700,000+ users, 2M+ ratings/reviews | link |
| yf_dianping | 240,000 restaurants, 540,000 users, 4.4M reviews/ratings | link |
| yf_amazon | 520,000 products, 1,100+ categories, 1.42M users, 7.2M reviews/ratings | link |

Encyclopedia Data

Wikipedia

Wikipedia periodically publishes packaged dumps of its corpus:

Baidu Baike

There is no official dump, so it has to be crawled; a crawled copy is available at https://pan.baidu.com/share/init?surl=i3wvfil (extraction code: neqs).

Coreference Resolution

CoNLL 2012 :http://conll.cemantix.org/2012/data.html

Pre-training (Word Vectors or Models)

BERT

  1. Source code: https://github.com/google-research/bert
  2. Model download: BERT-Base, Chinese: Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M parameters

BERT variant models:

| Model | Parameters | Git |
|---|---|---|
| Chinese-BERT-base | 108M | BERT |
| Chinese-BERT-wwm-ext | 108M | Chinese-BERT-wwm |
| RBT3 | 38M | Chinese-BERT-wwm |
| ERNIE 1.0 Base (Chinese) | 108M | ERNIE; ERNIE converted to a TensorFlow checkpoint: tensorflow_ernie |
| RoBERTa-large | 334M | RoBERT |
| XLNet-mid | 209M | XLNet-mid |
| ALBERT-large | 59M | Chinese-ALBERT |
| ALBERT-xlarge | | Chinese-ALBERT |
| ALBERT-tiny | 4M | Chinese-ALBERT |
| chinese-roberta-wwm-ext | 108M | Chinese-BERT-wwm |
| chinese-roberta-wwm-ext-large | 330M | Chinese-BERT-wwm |
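The "wwm" (whole word masking) variants mask all characters of a segmented word together rather than masking characters independently. A toy illustration of the idea (not the actual pre-training code):

```python
import random

def whole_word_mask(words, mask_rate=0.15, seed=0):
    """Mask whole segmented words: every character of a selected word becomes [MASK]."""
    rng = random.Random(seed)
    tokens = []
    for word in words:
        if rng.random() < mask_rate:
            tokens.extend(["[MASK]"] * len(word))  # mask the whole word, char by char
        else:
            tokens.extend(list(word))
    return tokens
```

Masking whole words forces the model to predict a complete lexical unit from context, which is a harder and more informative objective for Chinese than masking isolated characters.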

ELMO

  1. Source code: https://github.com/allenai/bilm-tf
  2. Pre-trained models: https://allennlp.org/elmo

Tencent Word Vectors

The Chinese word-vector dataset released by Tencent AI Lab covers over 8 million Chinese words and phrases, each mapped to a 200-dimensional vector.

Download: https://ai.tencent.com/ailab/nlp/en/download.html
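The release ships in plain word2vec text format (a header line with vocabulary size and dimension, then one word followed by its 200 floats per line). Since loading all 8M+ entries is slow, a `limit` parameter helps for quick experiments; a sketch reader assuming that format:

```python
def load_word2vec_text(path, limit=None):
    """Read word2vec text format; returns ({word: [floats]}, dim)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        _vocab_size, dim = map(int, f.readline().split())  # header: "vocab_size dim"
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            parts = line.rstrip("\n").split(" ")
            vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors, dim
```

For production use, gensim's `KeyedVectors.load_word2vec_format` handles the same format with memory-mapping and binary variants.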

Hundreds of Pre-trained Chinese Word Vectors

https://github.com/Embedding/Chinese-Word-Vectors

Chinese Cloze Dataset

https://github.com/ymcui/Chinese-RC-Dataset

Chinese Classical Poetry Database

The most comprehensive classical Chinese poetry dataset: nearly 14,000 poets from the Tang and Song dynasties, close to 55,000 Tang poems and 260,000 Song poems, plus 1,564 Song-dynasty ci poets with 21,050 ci poems.

https://github.com/chinese-poetry/chinese-poetry

Insurance Domain Corpus

https://github.com/Samurais/insuranceqa-corpus-zh

Chinese Character Decomposition Dictionary

English can use character embeddings; for Chinese, decomposing characters into their components is worth trying.

https://github.com/kfcd/chaizi
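A sketch of using such a dictionary to expand text into components. The tab-separated `character<TAB>component component ...` layout assumed here is hypothetical; check the repo's actual file format:

```python
def load_decomposition(path):
    """Parse a decomposition dictionary; assumed one 'char<TAB>components' entry per line."""
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 2:
                table[parts[0]] = parts[1].split()
    return table

def decompose(text, table):
    """Replace each character with its components, keeping unknown characters as-is."""
    pieces = []
    for ch in text:
        pieces.extend(table.get(ch, [ch]))
    return pieces
```

The resulting component sequence can then feed a sub-character embedding layer, analogous to character n-grams in English.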

Chinese Dataset Platforms

NLP Tools

THULAC: https://github.com/thunlp/THULAC (Chinese word segmentation and POS tagging)

HanLP: https://github.com/hankcs/HanLP

HIT LTP: https://github.com/HIT-SCIR/ltp

NLPIR: https://github.com/NLPIR-team/NLPIR

jieba (cppjieba): https://github.com/yanyiwu/cppjieba

Baidu Qianyan datasets: https://github.com/luge-ai/luge-ai