robotstxt
The repository contains Google's robots.txt parser and matcher as a C++ library (compliant to C++14).
Top Related Projects
server-configs: Boilerplate configurations for various web servers.
Quick Overview
The google/robotstxt repository is an open-source C++ library for parsing and matching robots.txt files. It provides a standardized way to handle robots.txt rules, which are used by websites to communicate with web crawlers about which parts of the site should or should not be crawled.
Pros
- Officially maintained by Google, ensuring high-quality and up-to-date implementation
- Provides a simple and efficient API for parsing and matching robots.txt rules
- Cross-platform compatibility (Linux, macOS, Windows)
- Includes comprehensive unit tests and documentation
Cons
- Limited to the C++ language, which may not be suitable for projects written in other languages
- Requires some setup and compilation, which might be challenging for beginners
- Lacks built-in support for fetching robots.txt files from the web
- Requires additional dependencies (notably the Abseil C++ libraries)
Code Examples
- Parsing a robots.txt body and checking a URL for a single user-agent (the matcher parses the rules as part of the call; see the parse-callback sketch after this list for parse-only access):
#include <string>
#include "robots.h"
std::string robots_txt_content = "User-agent: *\nDisallow: /private/\n";
googlebot::RobotsMatcher matcher;
// Parses the rules and checks the URL against the group that applies to "Googlebot".
bool allowed = matcher.OneAgentAllowedByRobots(
    robots_txt_content, "Googlebot", "https://example.com/private/page.html");
- Checking whether a URL is allowed for any of several user-agents:
#include <string>
#include <vector>
#include "robots.h"
std::string robots_txt_content = "User-agent: *\nDisallow: /private/\n";
std::vector<std::string> user_agents = {"Googlebot", "Googlebot-News"};
googlebot::RobotsMatcher matcher;
bool allowed = matcher.AllowedByRobots(
    robots_txt_content, &user_agents, "https://example.com/public/page.html");
- Matching rules that use wildcards ("*" and "$"); these are handled by the matcher itself, so no regular-expression library such as RE2 is needed:
#include <string>
#include "robots.h"
std::string robots_txt_content = "User-agent: *\nDisallow: /private*\n";
googlebot::RobotsMatcher matcher;
// "/private/secret.html" matches "Disallow: /private*", so this returns false.
bool allowed = matcher.OneAgentAllowedByRobots(
    robots_txt_content, "Googlebot", "https://example.com/private/secret.html");
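The examples above use the high-level matcher, which parses and matches in one step. For parse-only access, robots.h also declares a ParseRobotsTxt function that reports each parsed rule to a RobotsParseHandler callback. The sketch below assumes that callback interface as declared in robots.h; the class name SitemapCollector is made up for this example.

#include <string>
#include <vector>
#include "robots.h"
// Minimal RobotsParseHandler that collects Sitemap lines while the parser
// walks the robots.txt body; all other callbacks are intentionally no-ops.
class SitemapCollector : public googlebot::RobotsParseHandler {
 public:
  void HandleRobotsStart() override {}
  void HandleRobotsEnd() override {}
  void HandleUserAgent(int line_num, absl::string_view value) override {}
  void HandleAllow(int line_num, absl::string_view value) override {}
  void HandleDisallow(int line_num, absl::string_view value) override {}
  void HandleSitemap(int line_num, absl::string_view value) override {
    sitemaps_.push_back(std::string(value.data(), value.size()));
  }
  void HandleUnknownAction(int line_num, absl::string_view action,
                           absl::string_view value) override {}
  const std::vector<std::string>& sitemaps() const { return sitemaps_; }
 private:
  std::vector<std::string> sitemaps_;
};
// Usage:
//   SitemapCollector collector;
//   googlebot::ParseRobotsTxt(robots_txt_content, &collector);
//   collector.sitemaps() then holds any "Sitemap:" URLs from the file.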
Getting Started
- Clone the repository:
git clone https://github.com/google/robotstxt.git
- Build the library:
cd robotstxt
mkdir build && cd build
cmake ..
make
- Include the library header in your C++ project:
#include "robots.h"
- Link against the built library when compiling your project, for example as in the minimal program sketched below.
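As a quick end-to-end check, the following is a minimal sketch of a command-line program built on the matcher API used in the examples above; the file name check_robots.cc is a placeholder, and the exact link flags depend on how you built and installed the library and its Abseil dependency.

// check_robots.cc
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include "robots.h"
// Usage: check_robots <robots.txt path> <user-agent> <url>
int main(int argc, char** argv) {
  if (argc != 4) {
    std::cerr << "usage: " << argv[0] << " <robots.txt> <user-agent> <url>\n";
    return 2;
  }
  // Read the robots.txt file into a string.
  std::ifstream file(argv[1]);
  if (!file) {
    std::cerr << "could not open " << argv[1] << "\n";
    return 2;
  }
  std::stringstream buffer;
  buffer << file.rdbuf();
  const std::string robots_body = buffer.str();
  // Ask the matcher whether the URL is allowed for this user-agent.
  googlebot::RobotsMatcher matcher;
  const bool allowed =
      matcher.OneAgentAllowedByRobots(robots_body, argv[2], argv[3]);
  std::cout << (allowed ? "ALLOWED" : "DISALLOWED") << "\n";
  return allowed ? 0 : 1;
}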
Competitor Comparisons
server-configs: Boilerplate configurations for various web servers.
Pros of server-configs
- Comprehensive server configuration templates for multiple platforms (Apache, Nginx, IIS, etc.)
- Covers a wide range of best practices for security, performance, and SEO
- Active community with regular updates and contributions
Cons of server-configs
- Requires more setup and configuration compared to robotstxt's focused approach
- May include unnecessary configurations for some use cases
- Steeper learning curve due to the breadth of covered topics
Code Comparison
robotstxt (C++, matcher interface declared in robots.h):
// Returns true if 'url' is allowed to be fetched by any of 'user_agents'
// under the rules in 'robots_body'.
bool AllowedByRobots(absl::string_view robots_body,
                     const std::vector<std::string>* user_agents,
                     const std::string& url);
// Convenience variant for a single user-agent.
bool OneAgentAllowedByRobots(absl::string_view robots_txt,
                             const std::string& user_agent,
                             const std::string& url);
server-configs (Apache .htaccess):
<IfModule mod_autoindex.c>
Options -Indexes
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [QSA,L]
</IfModule>
Summary
robotstxt focuses specifically on parsing and handling robots.txt files, while server-configs provides a broader set of server configuration templates for various platforms. robotstxt is more specialized and easier to implement for its specific use case, while server-configs offers a comprehensive approach to server configuration but requires more setup and knowledge to utilize effectively.
README
Google Robots.txt Parser and Matcher Library
The repository contains Google's robots.txt parser and matcher as a C++ library (compliant to C++14).
About the library
The Robots Exclusion Protocol (REP) is a standard that enables website owners to control which URLs may be accessed by automated clients (i.e. crawlers) through a simple text file with a specific syntax. It's one of the basic building blocks of the internet as we know it and what allows search engines to operate.
Because the REP was only a de-facto standard for the past 25 years, different implementers implement parsing of robots.txt slightly differently, leading to confusion. This project aims to fix that by releasing the parser that Google uses.
The library is slightly modified (i.e. some internal headers and equivalent symbols) production code used by Googlebot, Google's crawler, to determine which URLs it may access based on rules provided by webmasters in robots.txt files. The library is released open-source to help developers build tools that better reflect Google's robots.txt parsing and matching.
For webmasters, we included a small binary in the project that allows testing a single URL and user-agent against a robots.txt.
Building the library
Quickstart
We included with the library a small binary to test a local robots.txt against a user-agent and URL. Running the included binary requires:
- A compatible platform (e.g. Windows, macOS, Linux, etc.). Most platforms are fully supported.
- A compatible C++ compiler supporting at least C++14. Most major compilers are supported.
- Git for interacting with the source code repository. To install Git, consult the Set Up Git guide on GitHub.
- Although you are free to use your own build system, most of the documentation within this guide will assume you are using Bazel. To download and install Bazel (and any of its dependencies), consult the Bazel Installation Guide.
Building with Bazel
Bazel is the official build system for the library, which is supported on most major platforms (Linux, Windows, macOS, for example) and compilers.
To build and run the binary:
$ git clone https://github.com/google/robotstxt.git robotstxt
Cloning into 'robotstxt'...
...
$ cd robotstxt/
bazel-robots$ bazel test :robots_test
...
//:robots_test PASSED in 0.1s
Executed 1 out of 1 test: 1 test passes.
...
bazel-robots$ bazel build :robots_main
...
Target //:robots_main up-to-date:
bazel-bin/robots_main
...
bazel-robots$ bazel run robots_main -- ~/local/path/to/robots.txt YourBot https://example.com/url
user-agent 'YourBot' with url 'https://example.com/url' allowed: YES
Building with CMake
CMake is the community-supported build system for the library.
To build the library using CMake, just follow the steps below:
$ git clone https://github.com/google/robotstxt.git robotstxt
Cloning into 'robotstxt'...
...
$ cd robotstxt/
...
$ mkdir c-build && cd c-build
...
$ cmake .. -DROBOTS_BUILD_TESTS=ON
...
$ make
...
$ make test
Running tests...
Test project robotstxt/c-build
Start 1: robots-test
1/1 Test #1: robots-test ...................... Passed 0.02 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.02 sec
...
$ robots ~/local/path/to/robots.txt YourBot https://example.com/url
user-agent 'YourBot' with url 'https://example.com/url' allowed: YES
Notes
Parsing of robots.txt files themselves is done exactly as in the production version of Googlebot, including how percent codes and unicode characters in patterns are handled. The user must, however, ensure that the URI passed to the AllowedByRobots and OneAgentAllowedByRobots functions, or to the URI parameter of the robots tool, follows the format specified by RFC 3986, since this library will not perform full normalization of those URI parameters. Only if the URI is in this format will the matching be done according to the REP specification.
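To make that constraint concrete, the sketch below (variable names are illustrative) shows the kind of URI the caller is expected to pass: an absolute URL whose path is already percent-encoded per RFC 3986, rather than a raw, unescaped one.

#include <string>
#include "robots.h"
googlebot::RobotsMatcher matcher;
std::string robots_body = "User-agent: *\nDisallow: /caf%C3%A9/\n";
// Good: absolute URL with the path already percent-encoded (RFC 3986).
bool allowed = matcher.OneAgentAllowedByRobots(
    robots_body, "YourBot", "https://example.com/caf%C3%A9/menu.html");
// Risky: an unescaped URL; the library does not normalize it for you, so the
// result may differ from what a crawler would decide after normalization.
bool unreliable = matcher.OneAgentAllowedByRobots(
    robots_body, "YourBot", "https://example.com/café/menu.html");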
Also note that the library, and the included binary, do not handle implementation logic that a crawler might apply outside of parsing and matching, for example: Googlebot-Image respecting the rules specified for User-agent: Googlebot if not explicitly defined in the robots.txt file being tested.
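If a caller needs that kind of user-agent fallback, it has to be layered on top of the library. One possible approach, sketched below, is to detect whether the file defines a group for the more specific agent and only then query it; it assumes the RobotsParseHandler callback interface declared in robots.h, and the names AgentGroupDetector and AllowedWithFallback are made up for this illustration.

#include <cctype>
#include <string>
#include "robots.h"
// Records whether any "User-agent:" line names the given agent. The comparison
// is ASCII case-insensitive and deliberately simplistic; a production crawler
// would match user-agent product tokens more carefully.
class AgentGroupDetector : public googlebot::RobotsParseHandler {
 public:
  explicit AgentGroupDetector(std::string agent) : agent_(std::move(agent)) {}
  void HandleRobotsStart() override {}
  void HandleRobotsEnd() override {}
  void HandleUserAgent(int line_num, absl::string_view value) override {
    if (EqualsIgnoreCase(value, agent_)) found_ = true;
  }
  void HandleAllow(int line_num, absl::string_view value) override {}
  void HandleDisallow(int line_num, absl::string_view value) override {}
  void HandleSitemap(int line_num, absl::string_view value) override {}
  void HandleUnknownAction(int line_num, absl::string_view action,
                           absl::string_view value) override {}
  bool found() const { return found_; }
 private:
  static bool EqualsIgnoreCase(absl::string_view a, absl::string_view b) {
    if (a.size() != b.size()) return false;
    for (size_t i = 0; i < a.size(); ++i) {
      if (std::tolower(static_cast<unsigned char>(a[i])) !=
          std::tolower(static_cast<unsigned char>(b[i]))) return false;
    }
    return true;
  }
  std::string agent_;
  bool found_ = false;
};
// Checks the rules for "Googlebot-Image" if the file defines a group for it,
// otherwise falls back to the rules for "Googlebot".
bool AllowedWithFallback(const std::string& robots_body, const std::string& url) {
  AgentGroupDetector detector("Googlebot-Image");
  googlebot::ParseRobotsTxt(robots_body, &detector);
  const std::string agent = detector.found() ? "Googlebot-Image" : "Googlebot";
  googlebot::RobotsMatcher matcher;
  return matcher.OneAgentAllowedByRobots(robots_body, agent, url);
}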
License
The robots.txt parser and matcher C++ library is licensed under the terms of the Apache license. See LICENSE for more information.
Links
To learn more about this project:
- check out the Robots Exclusion Protocol standard,
- how Google handles robots.txt,
- or for a high level overview, the robots.txt page on Wikipedia.