PentestGPT
An AI-powered penetration testing assistant for offensive security work, focused on web applications and networks.
Top Related Projects
- openai-cookbook: Examples and guides for using the OpenAI API
- AutoGPT: The vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
- guidance: A guidance language for controlling large language models.
- langchain: 🦜🔗 Build context-aware reasoning applications
- chatgpt-retrieval-plugin: Easily find personal or work documents by asking questions in natural language.
- text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
Quick Overview
PentestGPT is an AI-powered penetration testing tool that leverages natural language processing to assist in various aspects of security assessments. It aims to streamline the penetration testing process by providing intelligent suggestions, automating certain tasks, and offering a user-friendly interface for security professionals.
Pros
- Utilizes advanced AI and NLP techniques to enhance penetration testing capabilities
- Offers a user-friendly interface, making it accessible to both experienced and novice security professionals
- Provides intelligent suggestions and automated task execution, potentially saving time during assessments
- Continuously updated with the latest security knowledge and techniques
Cons
- May require a learning curve for users unfamiliar with AI-assisted tools
- Dependence on AI suggestions could potentially lead to overlooking unconventional or unique vulnerabilities
- The effectiveness of the tool may vary depending on the quality and breadth of its training data
- As with any AI tool, there's a risk of false positives or misinterpretations in certain scenarios
Code Examples
```python
# Initialize PentestGPT
from pentestgpt import PentestGPT

pentest_ai = PentestGPT()
```

```python
# Perform a basic vulnerability scan
results = pentest_ai.scan_target("https://example.com")
print(results)
```

```python
# Generate a custom exploit based on discovered vulnerabilities
vulnerability = "SQL Injection in login form"
exploit = pentest_ai.generate_exploit(vulnerability)
print(exploit)
```

```python
# Analyze network traffic for potential threats
pcap_file = "captured_traffic.pcap"
threats = pentest_ai.analyze_network_traffic(pcap_file)
for threat in threats:
    print(f"Detected threat: {threat}")
```
Getting Started
To get started with PentestGPT, follow these steps:
1. Install the library:

```bash
pip install pentestgpt
```

2. Import and initialize PentestGPT in your Python script:

```python
from pentestgpt import PentestGPT

pentest_ai = PentestGPT()
```

3. Use the various methods provided by PentestGPT to assist in your penetration testing tasks:

```python
# Example: Perform a vulnerability scan
results = pentest_ai.scan_target("https://example.com")
print(results)
```
For more detailed information and advanced usage, refer to the official documentation.
Competitor Comparisons
openai-cookbook: Examples and guides for using the OpenAI API
Pros of openai-cookbook
- Comprehensive collection of examples and best practices for using OpenAI's APIs
- Regularly updated with new features and improvements
- Broad coverage of various use cases and applications
Cons of openai-cookbook
- Focused solely on OpenAI's products, limiting its scope for general AI development
- May not provide in-depth explanations for advanced topics or techniques
- Less specialized for specific domains like cybersecurity
Code Comparison
PentestGPT:
```python
def generate_payload(target, vulnerability):
    prompt = f"Generate a payload for {vulnerability} on {target}"
    response = openai.Completion.create(engine="text-davinci-002", prompt=prompt, max_tokens=100)
    return response.choices[0].text.strip()
```
openai-cookbook:
```python
def get_embedding(text, model="text-embedding-ada-002"):
    text = text.replace("\n", " ")
    return openai.Embedding.create(input=[text], model=model)['data'][0]['embedding']
```
The code snippets demonstrate the different focus areas of each repository. PentestGPT is geared towards generating security-related payloads, while openai-cookbook provides general-purpose utilities for working with OpenAI's APIs, such as generating embeddings.
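To make the cookbook side concrete, here is a brief usage sketch of the `get_embedding` helper above. It assumes the legacy `openai` (pre-1.0) Python client with an API key already configured; the phrases and variable names are illustrative:

```python
import numpy as np

# Embed two security-related phrases with the helper defined above
emb_a = get_embedding("SQL injection in a login form")
emb_b = get_embedding("manipulating database queries via user input")

# Cosine similarity between the two ada-002 embedding vectors
similarity = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
print(f"Cosine similarity: {similarity:.3f}")
```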
AutoGPT: The vision of accessible AI for everyone, to use and to build on
Pros of AutoGPT
- More versatile and general-purpose, capable of handling a wide range of tasks
- Larger community and more active development, with frequent updates and improvements
- Includes a web interface for easier interaction and visualization of results
Cons of AutoGPT
- May require more setup and configuration for specific use cases
- Potentially higher resource requirements due to its broader scope
- Less focused on pentesting and security-specific tasks
Code Comparison
PentestGPT:
```python
def run_nmap_scan(target):
    nmap_command = f"nmap -sV -sC -p- {target}"
    result = subprocess.run(nmap_command, shell=True, capture_output=True, text=True)
    return result.stdout
```
AutoGPT:
```python
def execute_command(command: str, arguments: str) -> str:
    try:
        result = subprocess.run(f"{command} {arguments}", capture_output=True, shell=True, text=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return f"Error: {str(e)}"
```
The code comparison shows that PentestGPT has a more specific function for running Nmap scans, while AutoGPT uses a more generic command execution function. This reflects the difference in focus between the two projects, with PentestGPT being more tailored to pentesting tasks and AutoGPT offering a more flexible approach to executing various commands.
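To illustrate that overlap, the generic AutoGPT-style executor above could reproduce the same scan. A sketch, assuming `nmap` is installed and you are authorized to scan the target:

```python
# Equivalent call through the generic executor (illustrative target)
output = execute_command("nmap", "-sV -sC -p- scanme.nmap.org")
print(output)
```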
guidance: A guidance language for controlling large language models
Pros of guidance
- More versatile and general-purpose, applicable to various AI/ML tasks
- Larger community and more frequent updates
- Better documentation and examples for getting started
Cons of guidance
- Less specialized for pentesting and security applications
- May require more customization for specific security use cases
- Steeper learning curve for security professionals without ML background
Code Comparison
PentestGPT:
```python
def generate_payload(target, vulnerability):
    prompt = f"Generate a payload for {vulnerability} on {target}"
    response = openai.Completion.create(engine="text-davinci-002", prompt=prompt)
    return response.choices[0].text.strip()
```
guidance:
```python
import guidance

# guidance v0.0.x template: {{topic}} is filled in at call time and
# {{gen ...}} asks the LLM to generate the 'paragraph' variable
generate_text = guidance('''
Write a paragraph about {{topic}}:
{{gen 'paragraph' max_tokens=100}}''')
```
The code snippets show that PentestGPT is more focused on security-specific tasks, while guidance offers a more flexible approach for general text generation and AI interactions.
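For reference, a usage sketch for the guidance template above, assuming the guidance v0.0.x API and an OpenAI key in the environment:

```python
import guidance

# Point guidance at an OpenAI completion model (v0.0.x-style configuration)
guidance.llm = guidance.llms.OpenAI("text-davinci-002")

# Execute the template; the generated text lands in the 'paragraph' variable
out = generate_text(topic="cross-site scripting")
print(out["paragraph"])
```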
langchain: 🦜🔗 Build context-aware reasoning applications
Pros of langchain
- More comprehensive framework for building AI applications
- Larger community and ecosystem with extensive documentation
- Supports multiple language models and integrations
Cons of langchain
- Steeper learning curve due to its broader scope
- May be overkill for simple AI projects
- Less focused on specific pentesting tasks
Code comparison
PentestGPT:
```python
def generate_payload(target, vulnerability):
    prompt = f"Generate a payload for {vulnerability} on {target}"
    response = openai.Completion.create(engine="text-davinci-002", prompt=prompt)
    return response.choices[0].text.strip()
```
langchain:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI

template = "Generate a payload for {vulnerability} on {target}"
prompt = PromptTemplate(template=template, input_variables=["vulnerability", "target"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI())
payload = llm_chain.run(vulnerability="SQL injection", target="example.com")
```
While PentestGPT focuses on generating payloads for specific vulnerabilities, langchain provides a more flexible framework for creating AI-powered applications. PentestGPT is tailored for pentesting tasks, whereas langchain offers a broader set of tools and integrations for various AI use cases.
chatgpt-retrieval-plugin: Find personal or work documents by asking questions in natural language
Pros of chatgpt-retrieval-plugin
- More versatile and general-purpose, designed for document retrieval and Q&A
- Better documentation and setup instructions
- Larger community support and active development
Cons of chatgpt-retrieval-plugin
- Not specifically tailored for penetration testing or security applications
- May require more customization to adapt to specific use cases
- Potentially more complex setup due to its broader scope
Code Comparison
PentestGPT:
```python
def generate_payload(self, target, vulnerability):
    prompt = f"Generate a payload for {vulnerability} on {target}"
    response = self.llm.generate(prompt)
    return response
```
chatgpt-retrieval-plugin:
```python
def query_index(self, query_text: str, top_k: int = 5) -> List[Document]:
    query_embedding = self.embeddings.get_query_embedding(query_text)
    results = self.index.query_top_k(query_embedding, top_k)
    return [self.document_store.get(doc_id) for doc_id, _ in results]
```
The code snippets highlight the different focus areas of each project. PentestGPT is geared towards generating payloads for specific vulnerabilities, while chatgpt-retrieval-plugin focuses on querying and retrieving relevant documents based on user input.
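A minimal usage sketch for `query_index` above; `plugin` here is a hypothetical, already-initialized datastore instance, since the plugin's setup is out of scope for this comparison:

```python
# Retrieve the three most relevant documents for a natural-language query
docs = plugin.query_index("password reset procedure", top_k=3)
for doc in docs:
    print(doc)
```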
text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends
Pros of text-generation-webui
- More versatile, supporting multiple language models and tasks
- Actively maintained with frequent updates and improvements
- Larger community and more extensive documentation
Cons of text-generation-webui
- Not specifically tailored for penetration testing tasks
- May require more setup and configuration for specialized use cases
Code Comparison
text-generation-webui:
```python
def generate_reply(
    question, state, stopping_strings=None, is_chat=False, for_ui=False
):
    # ... (code for generating replies using various models)
```
PentestGPT:
```python
def generate_response(prompt):
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.7,
    )
    return response.choices[0].text.strip()
```
The code snippets show that text-generation-webui is designed to work with multiple models and has more parameters, while PentestGPT is specifically configured for OpenAI's GPT model. This reflects the broader scope of text-generation-webui compared to the more focused approach of PentestGPT.
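And a usage sketch for the PentestGPT-style `generate_response` above, again assuming the legacy `openai` (pre-1.0) client:

```python
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

print(generate_response("List common checks when testing a login form"))
```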
README
PentestGPT
PentestGPT provides advanced AI and integrated tools to help security teams conduct comprehensive penetration tests. Scan, exploit, and analyze web applications, networks, and cloud environments with ease and precision, without needing expert skills.
A Special Note of Thanks
Thank you so much, @fkesheh and @Fx64b, for your amazing work and dedication to this project.
Thank you for being part of the HackerAI family.
Important Note About Running PentestGPT Locally
The primary purpose of this GitHub repo is to show what's behind PentestGPT in order to build trust.
You can run PentestGPT locally, but the plugins and some other features will only work with proper and fairly involved configuration.
Local Quickstart
Follow these steps to get your own PentestGPT instance running locally.
You can watch the full video tutorial here.
1. Clone the Repo
```bash
git clone https://github.com/hackerai-tech/PentestGPT.git
```
2. Install Dependencies
Open a terminal in the root directory of your local PentestGPT repository and run:
```bash
npm install
```
3. Install Supabase & Run Locally
Why Supabase?
Previously, we used local browser storage to store data. However, this was not a good solution for a few reasons:
- Security issues
- Limited storage
- Limits multi-modal use cases
We now use Supabase because it's easy to use, it's open-source, it's Postgres, and it has a free tier for hosted instances.
We will support other providers in the future to give you more options.
1. Install Docker
You will need to install Docker to run Supabase locally. You can download it here for free.
2. Install Supabase CLI
MacOS/Linux

```bash
brew install supabase/tap/supabase
```

Windows

```bash
scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
scoop install supabase
```
3. Start Supabase
In your terminal at the root of your local PentestGPT repository, run:
```bash
supabase start
```
4. Fill in Secrets
1. Environment Variables
In your terminal at the root of your local PentestGPT repository, run:
```bash
cp .env.local.example .env.local
```

Get the required values by running:

```bash
supabase status
```

Note: Use the `API URL` value from `supabase status` for `NEXT_PUBLIC_SUPABASE_URL`.

Now go to your `.env.local` file and fill in the values.

If an environment variable is set, it will disable the corresponding input in the user settings.
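As an illustration, a filled-in `.env.local` might look roughly like this; the URL is the usual local default and the key values are placeholders, so copy your actual values from `supabase status`:

```
NEXT_PUBLIC_SUPABASE_URL=http://127.0.0.1:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=<anon key from supabase status>
SUPABASE_SERVICE_ROLE_KEY=<service_role key from supabase status>
```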
2. SQL Setup
In the 1st migration file `supabase/migrations/20240108234540_setup.sql` you will need to replace 2 values with the values you got above:

- `project_url` (line 53): `http://supabase_kong_pentestgpt:8000` (default) can remain unchanged if you don't change your `project_id` in the `config.toml` file
- `service_role_key` (line 54): You got this value from running `supabase status`

This prevents issues with storage files not being deleted properly.
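For orientation, the two values appear as variable declarations in the migration SQL and look roughly like the sketch below; check your actual file rather than copying this:

```sql
-- Around lines 53-54 of supabase/migrations/20240108234540_setup.sql (sketch)
project_url TEXT := 'http://supabase_kong_pentestgpt:8000';
service_role_key TEXT := '<service_role key from supabase status>';
```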
5. Run app locally
In your terminal at the root of your local PentestGPT repository, run:
```bash
npm run chat
```

Your local instance of PentestGPT should now be running at http://localhost:3000. Be sure to use a compatible Node version (e.g., v18).
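If you need to switch Node versions, nvm is one option (a sketch, assuming nvm is installed):

```bash
nvm install 18
nvm use 18
```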
You can view your backend GUI at http://localhost:54323/project/default/editor.
6. Adding local user
1. Sign Up
Go to the login screen at http://localhost:3000
Fill in your email and password, then press Sign Up.
2. Confirm email
Access Inbucket, the email testing service, at http://localhost:54324.
Find the mailbox for the email you used to sign up. Review the received message and confirm your email.
Now you can use this user and password to login.
Hosted Quickstart
Follow these steps to get your own PentestGPT instance running in the cloud.
Video tutorial coming soon.
1. Follow Local Quickstart
Repeat steps 1-4 in "Local Quickstart" above.
You will want separate repositories for your local and hosted instances.
Create a new repository for your hosted instance of PentestGPT on GitHub and push your code to it.
2. Setup Backend with Supabase
1. Create a new project
Go to Supabase and create a new project.
2. Get Project Values
Once you are in the project dashboard, click on the "Project Settings" icon tab on the far bottom left.
Here you will get the values for the following environment variables:
- `Project Ref`: Found in "General settings" as "Reference ID"
- `Project ID`: Found in the URL of your project dashboard (Ex: https://supabase.com/dashboard/project/<YOUR_PROJECT_ID>/settings/general)
While still in "Settings" click on the "API" text tab on the left.
Here you will get the values for the following environment variables:
- `Project URL`: Found in "API Settings" as "Project URL"
- `Anon key`: Found in "Project API keys" as "anon public"
- `Service role key`: Found in "Project API keys" as "service_role" (Reminder: Treat this like a password!)
3. Configure Auth
Next, click on the "Authentication" icon tab on the far left.
In the text tabs, click on "Providers" and make sure "Email" is enabled.
We recommend turning off "Confirm email" for your own personal instance.
4. Connect to Hosted DB
Open up your repository for your hosted instance of PentestGPT.
In the 1st migration file `supabase/migrations/20240108234540_setup.sql` you will need to replace 2 values with the values you got above:

- `project_url` (line 53): Use the `Project URL` value from above
- `service_role_key` (line 54): Use the `Service role key` value from above
Now, open a terminal in the root directory of your local PentestGPT repository. We will execute a few commands here.
Login to Supabase by running:

```bash
supabase login
```

Next, link your project by running the following command with the "Project ID" you got above:

```bash
supabase link --project-ref <project-id>
```

Your project should now be linked.

Finally, push your database to Supabase by running:

```bash
supabase db push
```
Your hosted database should now be set up!
3. Setup Frontend with Vercel
Go to Vercel and create a new project.
In the setup page, import your GitHub repository for your hosted instance of PentestGPT. Within the project Settings, in the "Build & Development Settings" section, switch Framework Preset to "Next.js".
In environment variables, add the following from the values you got above:

- `NEXT_PUBLIC_SUPABASE_URL`
- `NEXT_PUBLIC_SUPABASE_ANON_KEY`
- `SUPABASE_SERVICE_ROLE_KEY`

You can also add API keys as environment variables, for example:

- `MISTRAL_API_KEY`
- `OPENAI_API_KEY`

For the full list of environment variables, refer to the `.env.local.example` file. If an environment variable is set for an API key, it will disable the corresponding input in the user settings.
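Alternatively, the same variables can be set from the Vercel CLI rather than the dashboard (a sketch, assuming the CLI is installed and the project is linked; each command prompts for the value):

```bash
vercel env add NEXT_PUBLIC_SUPABASE_URL production
vercel env add NEXT_PUBLIC_SUPABASE_ANON_KEY production
vercel env add SUPABASE_SERVICE_ROLE_KEY production
```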
Click "Deploy" and wait for your frontend to deploy.
Once deployed, you should be able to use your hosted instance of PentestGPT via the URL Vercel gives you.
Updating
In your terminal at the root of your local PentestGPT repository, run:
```bash
npm run update
```

If you run a hosted instance, you'll also need to run:

```bash
npm run db-push
```

to apply the latest migrations to your live database.
Have a feature request, question, or comment?
You can get in touch with us through the HackerAI Help Center at https://help.hackerai.co.
Contributing
Interested in contributing to PentestGPT? Please see CONTRIBUTING.md for setup instructions and guidelines for new contributors. As an added incentive, top contributors will have the opportunity to become part of the PentestGPT team.
License
Licensed under the GNU General Public License v3.0