Top Related Projects
httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probes using the retryablehttp library.
ffuf: Fast web fuzzer written in Go
Gobuster: Directory/File, DNS and VHost busting tool written in Go
Aquatone: A Tool for Domain Flyovers
Gospider - Fast web spider written in Go
hakrawler: Simple, fast web crawler designed for easy, quick discovery of endpoints and assets within a web application
Quick Overview
Meg is a command-line tool for fetching many URLs while staying 'nice' to the servers involved: you give it a list of paths and a list of hosts, and it takes care of combining them, pacing requests, and managing concurrency. It's designed as a lightweight, efficient alternative to wiring up tools like curl and GNU Parallel for web scraping and reconnaissance tasks.
Pros
- Simple and easy to use with minimal setup
- Supports concurrent requests for faster execution
- Flexible output options for easy integration with other tools
- Lightweight and portable, written in Go
Cons
- Limited built-in features compared to more comprehensive web scraping tools
- May require additional processing of output for complex tasks
- No built-in authentication support; request customization is limited to custom headers and the HTTP method
- Limited documentation for advanced use cases
Getting Started
To get started with meg:
- Install meg:
go install github.com/tomnomnom/meg@latest
- Basic usage:
meg paths hosts
Where paths is a file containing URL paths (one per line) and hosts is a file containing target hosts (one per line, each including a protocol, e.g. https://example.com).
- Example command:
meg --verbose --delay 1000 paths.txt hosts.txt
This command will fetch each path from paths.txt for each host in hosts.txt, with a 1-second delay between requests to the same host and verbose output.
- Output: meg creates a directory named out by default, containing files for each host with the responses from the requested paths, plus an index file listing them.
For more advanced usage and options, refer to the project's GitHub repository and documentation.
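As a concrete end-to-end sketch (the file contents and hostnames below are illustrative examples, not real targets):
# create a small paths file and a hosts file
printf '/robots.txt\n/.well-known/security.txt\n' > paths
printf 'https://example.com\nhttps://example.net\n' > hosts
# fetch every path for every host; responses are saved under ./out
meg --verbose paths hosts
# search the saved responses, e.g. for server headers
grep -ri '< server:' out/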
Competitor Comparisons
httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probes using the retryablehttp library.
Pros of httpx
- More feature-rich, offering advanced capabilities like probe filtering, custom HTTP methods, and TLS probing
- Faster execution due to optimized Go implementation and concurrent processing
- Actively maintained with regular updates and improvements
Cons of httpx
- More complex to use, with a steeper learning curve compared to meg's simplicity
- Larger binary size and potentially higher resource consumption
- May provide excessive information for simpler tasks where meg's focused approach suffices
Code Comparison
meg:
meg -d 1000 -v / hosts.txt
httpx:
cat hosts.txt | httpx -silent -threads 100 -o output.txt
Both tools can perform basic HTTP probing, but httpx offers more advanced options:
httpx -l hosts.txt -silent -title -status-code -content-type -web-server -tech-detect -follow-redirects
While meg focuses on fetching specific paths across multiple hosts:
meg -d 1000 paths.txt hosts.txt
httpx is more versatile for general HTTP probing and analysis, while meg excels at targeted path fetching across multiple hosts. The choice between them depends on the specific requirements of your task and the level of detail needed in the output.
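The two tools also chain together well. A hedged sketch (file names are illustrative; the flags are the ones shown above) is to let httpx filter a raw host list down to live web servers before meg fetches paths from them:
# keep only hosts that answer HTTP(S), then fetch paths from the survivors with meg
cat hosts.txt | httpx -silent > live-hosts.txt
meg -d 1000 paths.txt live-hosts.txt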
ffuf: Fast web fuzzer written in Go
Pros of ffuf
- Faster execution due to Go implementation and efficient concurrency
- More versatile with support for various fuzzing modes (e.g., directory discovery, parameter fuzzing)
- Extensive filtering options for fine-tuning results
Cons of ffuf
- Steeper learning curve due to more complex command-line options
- May be overkill for simple content discovery tasks
- Requires more setup and configuration for basic usage
Code Comparison
meg:
meg -d 1000 -v / hosts.txt
ffuf:
ffuf -w wordlist.txt -u https://example.com/FUZZ -mc all -v
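For a closer like-for-like comparison, ffuf can also iterate a single path over a host list by binding the wordlist to a keyword. This is a sketch and assumes the entries in hosts.txt include a protocol:
# request /robots.txt on every host in hosts.txt (HOST is an arbitrary keyword name)
ffuf -w hosts.txt:HOST -u HOST/robots.txt -mc all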
Key Differences
- meg is designed for simple, straightforward content discovery across multiple hosts
- ffuf offers more advanced fuzzing capabilities and customization options
- meg uses a simpler command-line interface, while ffuf provides more granular control
Use Cases
- meg: Quick content discovery across multiple domains
- ffuf: In-depth fuzzing, parameter discovery, and advanced web application testing
Community and Support
Both projects have active communities, but ffuf tends to have more frequent updates and a larger user base due to its broader feature set.
Gobuster: Directory/File, DNS and VHost busting tool written in Go
Pros of Gobuster
- More feature-rich, offering directory/file, DNS, and VHOST busting modes
- Supports multiple wordlists and pattern-based fuzzing
- Highly customizable with numerous command-line options
Cons of Gobuster
- More complex to use due to its many features and options
- Potentially slower for simple tasks than Meg's more focused approach
- Requires more setup and configuration for basic operations
Code Comparison
Meg:
meg -d 1000 -v / hosts.txt
Gobuster:
gobuster dir -u https://example.com -w wordlist.txt -t 50
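Gobuster targets one base URL per invocation, so covering a host list means wrapping it in a loop, whereas meg takes the host list directly. A rough sketch (note that meg's paths file entries need a leading slash):
# one gobuster run per host
while read -r host; do
  gobuster dir -u "$host" -w wordlist.txt -t 50
done < hosts.txt
# roughly equivalent meg invocation
meg paths.txt hosts.txt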
Key Differences
- Meg is designed for quick, parallel HTTP requests across multiple hosts
- Gobuster focuses on comprehensive directory and file enumeration
- Meg is simpler and faster for basic tasks, while Gobuster offers more advanced features
- Gobuster provides built-in wordlists and supports various output formats
- Meg allows for easy customization of request headers and methods
Both tools are valuable for different scenarios in web application security testing and reconnaissance. Meg excels in simplicity and speed for basic tasks, while Gobuster offers more comprehensive enumeration capabilities at the cost of increased complexity.
Aquatone: A Tool for Domain Flyovers
Pros of Aquatone
- Provides visual output with screenshots and HTML report generation
- Offers more comprehensive scanning capabilities, including port scanning and technology detection
- Supports multiple input formats and integrates well with other tools
Cons of Aquatone
- Slower execution compared to Meg due to its more extensive feature set
- Requires more system resources and dependencies
- May be overkill for simple HTTP request tasks
Code Comparison
Meg:
meg -d 1000 -v / hosts.txt
Aquatone:
cat hosts.txt | aquatone -out ./aquatone -screenshot-timeout 1000
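The two can also be combined. As a sketch based on meg's index format (local path, URL, status), the URLs that meg saved with a 200 response can be handed to Aquatone for screenshots:
# screenshot every URL that meg recorded as returning 200
awk '$3 ~ /^\(200/ {print $2}' out/index | aquatone -out ./aquatone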
Key Differences
- Meg is focused on fast, concurrent HTTP requests and response saving
- Aquatone provides a more comprehensive web reconnaissance toolset
- Meg is written in Go; Aquatone was originally written in Ruby, though more recent releases are also written in Go
- Aquatone offers visual output and reporting features not present in Meg
- Meg is generally faster for simple HTTP request tasks
Use Cases
- Use Meg for quick, lightweight HTTP request tasks and response analysis
- Choose Aquatone for more in-depth web reconnaissance, including visual inspection and technology fingerprinting
Both tools have their strengths and can be complementary in a security professional's toolkit, depending on the specific requirements of the task at hand.
Gospider - Fast web spider written in Go
Pros of gospider
- More comprehensive web crawling and scraping capabilities
- Built-in support for various output formats (JSON, CSV, etc.)
- Advanced features like JavaScript rendering and form submission
Cons of gospider
- More complex to use due to its extensive feature set
- Potentially slower for simple tasks compared to meg's focused approach
- Requires more system resources for advanced features
Code comparison
meg:
meg -d 1000 -v / hosts.txt
gospider:
gospider -s "https://example.com/" -o output -c 10 -d 1
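gospider can also read a list of sites from a file, so the same hosts file used with meg can drive a crawl; treat the -S flag as an assumption to verify against gospider -h for your version:
# crawl every site in hosts.txt, one level deep, with 10 concurrent crawlers
gospider -S hosts.txt -o output -c 10 -d 1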
Key differences
- meg is designed for quick, parallel HTTP requests across multiple hosts
- gospider focuses on in-depth web crawling and content extraction
- meg is simpler and more lightweight, while gospider offers more advanced features
- gospider provides built-in parsing and data extraction capabilities
- meg is better suited for quick security scans, while gospider excels in comprehensive web scraping tasks
Both tools have their strengths and are valuable for different use cases. meg is ideal for rapid, parallel HTTP requests across multiple hosts, while gospider is better for in-depth web crawling and content extraction from specific targets.
hakrawler: Simple, fast web crawler designed for easy, quick discovery of endpoints and assets within a web application
Pros of hakrawler
- Performs web crawling and content discovery, providing more comprehensive results
- Supports JavaScript parsing for deeper analysis of web applications
- Offers flexible output options, including JSON for easier integration with other tools
Cons of hakrawler
- May be slower for large-scale scanning due to its crawling nature
- Potentially more complex to use, with additional configuration options
- Can generate more noise in results, requiring additional filtering
Code comparison
meg:
meg -d 1000 -v / hosts.txt
hakrawler:
echo "https://example.com" | hakrawler
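A hedged combined workflow: crawl one known application with hakrawler, reduce the discovered URLs to bare paths, and replay those paths across a wider host list with meg (the sed expression and file names are illustrative):
# discover endpoints on one host, strip them to paths, then fan out with meg
echo "https://example.com" | hakrawler | sed -E 's|^https?://[^/]+||' | grep '^/' | sort -u > paths
meg paths hosts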
Key differences
- Purpose: meg is primarily a fast, parallel HTTP fetcher, while hakrawler is a web crawler and content discovery tool
- Functionality: meg focuses on fetching specific paths across multiple hosts, whereas hakrawler explores and maps web applications
- Use cases: meg is ideal for quick, large-scale HTTP requests, while hakrawler is better suited for in-depth analysis of individual web applications
Both tools have their strengths and are valuable in different scenarios within web security testing and reconnaissance workflows.
README
meg
meg is a tool for fetching lots of URLs but still being 'nice' to servers.
It can be used to fetch many paths for many hosts; fetching one path for all hosts before moving on to the next path and repeating.
You get lots of results quickly, but none of the individual hosts gets flooded with traffic.
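As a rough illustration of that ordering (not meg's actual implementation, which also applies a per-host delay and a global concurrency limit), the idea in shell terms is:
# for each path, request it from every host before moving on to the next path
while read -r path; do
  while read -r host; do
    curl -s -o /dev/null "$host$path" &   # different hosts can be hit in parallel
  done < hosts
  wait   # finish this path across all hosts before starting the next one
done < paths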
Install
meg is written in Go and has no run-time dependencies. If you have Go 1.9
or later installed and configured you can install meg with go install:
▶ go install github.com/tomnomnom/meg@latest
Or download a binary and put it somewhere in your $PATH (e.g. in /usr/bin/).
Install Errors
If you see an error like this it means your version of Go is too old:
# github.com/tomnomnom/rawhttp
/root/go/src/github.com/tomnomnom/rawhttp/request.go:102: u.Hostname undefined (
type *url.URL has no field or method Hostname)
/root/go/src/github.com/tomnomnom/rawhttp/request.go:103: u.Port undefined (type
*url.URL has no field or method Port)
/root/go/src/github.com/tomnomnom/rawhttp/request.go:259: undefined: x509.System
CertPool
You should either update your version of Go, or use a binary release for your platform.
Basic Usage
Given a file full of paths:
/robots.txt
/.well-known/security.txt
/package.json
And a file full of hosts (with a protocol):
http://example.com
https://example.com
http://example.net
meg will request each path for every host:
▶ meg --verbose paths hosts
out/example.com/45ed6f717d44385c5e9c539b0ad8dc71771780e0 http://example.com/robots.txt (404 Not Found)
out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618 https://example.com/robots.txt (404 Not Found)
out/example.net/1432c16b671043271eab84111242b1fe2a28eb98 http://example.net/robots.txt (404 Not Found)
out/example.net/61deaa4fa10a6f601adb74519a900f1f0eca38b7 http://example.net/.well-known/security.txt (404 Not Found)
out/example.com/20bc94a296f17ce7a4e2daa2946d0dc12128b3f1 http://example.com/.well-known/security.txt (404 Not Found)
...
And save the output in a directory called ./out:
▶ head -n 20 ./out/example.com/45ed6f717d44385c5e9c539b0ad8dc71771780e0
http://example.com/robots.txt
> GET /robots.txt HTTP/1.1
> Host: example.com
< HTTP/1.1 404 Not Found
< Expires: Sat, 06 Jan 2018 01:05:38 GMT
< Server: ECS (lga/13A2)
< Accept-Ranges: bytes
< Cache-Control: max-age=604800
< Content-Type: text/*
< Content-Length: 1270
< Date: Sat, 30 Dec 2017 01:05:38 GMT
< Last-Modified: Sun, 24 Dec 2017 06:53:36 GMT
< X-Cache: 404-HIT
<!doctype html>
<html>
<head>
Without any arguments, meg will read paths from a file called ./paths, and hosts from a file called ./hosts. There will also be no output:
▶ meg
▶
But it will save an index file to ./out/index:
▶ head -n 2 ./out/index
out/example.com/538565d7ab544bc3bec5b2f0296783aaec25e756 http://example.com/package.json (404 Not Found)
out/example.com/20bc94a296f17ce7a4e2daa2946d0dc12128b3f1 http://example.com/.well-known/security.txt (404 Not Found)
You can use the index file to find where the response is stored, but it's often easier to find what you're looking for with grep:
▶ grep -Hnri '< Server:' out/
out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618:14:< Server: ECS (lga/13A2)
out/example.com/bd8d9f4c470ffa0e6ec8cfa8ba1c51d62289b6dd:16:< Server: ECS (lga/13A3)
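The index can also answer quick questions on its own; for example, a small sketch for listing every URL that came back 200 OK:
# print just the URLs recorded in the index with a 200 OK status
grep '(200 OK)' out/index | awk '{print $2}'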
If you want to request just one path, you can specify it directly as an argument:
▶ meg /admin.php
Detailed Usage
meg's help output tries to actually be helpful:
▶ meg --help
Request many paths for many hosts
Usage:
meg [options] [path|pathsFile] [hostsFile] [outputDir]
Options:
-c, --concurrency <val> Set the concurrency level (default: 20)
-d, --delay <val> Milliseconds between requests to the same host (default: 5000)
-H, --header <header> Send a custom HTTP header
-r, --rawhttp Use the rawhttp library for requests (experimental)
-s, --savestatus <status> Save only responses with specific status code
-v, --verbose Verbose mode
-X, --method <method> HTTP method (default: GET)
Defaults:
pathsFile: ./paths
hostsFile: ./hosts
outputDir: ./out
Paths file format:
/robots.txt
/package.json
/security.txt
Hosts file format:
http://example.com
https://example.edu
https://example.net
Examples:
meg /robots.txt
meg -s 200 -X HEAD
meg -c 30 /
meg hosts.txt paths.txt output
Concurrency
By default meg will attempt to make 20 concurrent requests. You can change that with the -c or --concurrency option:
▶ meg --concurrency 5
It's not very friendly to keep the concurrency level higher than the number of hosts - you may end up sending lots of requests to one host at once.
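If you want to tie the concurrency level to the size of your hosts file, one small sketch (assuming the default ./hosts file) is:
▶ meg --concurrency $(wc -l < hosts)   # never more in-flight requests than there are hosts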
Delay
By default meg will wait 5000 milliseconds between requests to the same host.
You can override that with the -d or --delay option:
▶ meg --delay 10000
Warning: before reducing the delay, ensure that you have permission to make large volumes of requests to the hosts you're targeting.
Adding Headers
You can set additional headers on the requests with the -H or --header option:
▶ meg --header "Origin: https://evil.com"
▶ grep -h '^>' out/example.com/*
> GET /.well-known/security.txt HTTP/1.1
> Origin: https://evil.com
> Host: example.com
...
Raw HTTP (Experimental)
If you want to send requests that aren't valid - for example with invalid URL encoding - the Go HTTP client will fail:
▶ meg /%%0a0afoo:bar
request failed: parse https://example.org/%%0a0afoo:bar: invalid URL escape "%%0"
You can use the -r or --rawhttp flag to enable use of the rawhttp library, which does little to no validation on the request:
▶ meg --verbose --rawhttp /%%0a0afoo:bar
out/example.com/eac3a4978bfb95992e270c311582e6da4568d83d https://example.com/%%0a0afoo:bar (HTTP/1.1 404 Not Found)
The rawhttp library and its use is experimental. Amongst other things it doesn't yet support chunked transfer encoding, so you may notice chunk lengths interspersed with your output if you use it.
Saving Only Certain Status Codes
If you only want to save results that returned a certain status code, you can use the -s or --savestatus option:
▶ meg --savestatus 200 /robots.txt
Specifying The Method
You can specify which HTTP method to use with the -X or --method option:
▶ meg --method TRACE
▶ grep -nri 'TRACE' out/
out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618:3:> TRACE /robots.txt HTTP/1.1
out/example.com/bd8d9f4c470ffa0e6ec8cfa8ba1c51d62289b6dd:3:> TRACE /.well-known/security.txt HTTP/1.1
...