Compare commits: cd80d6f569...try-fix

40 Commits (SHA1):

4378146d0c, b26002eff9, 2185177a84, b7a57f1552, 41d556e2ce,
2974312278, 930fdca500, 2925512a4d, 717f103596, 612f414d2a,
53baf2e291, 84810cdbb0, d36fb7d814, c0b820c96c, 03c52abd1b,
2d62191aa0, d2e4c6ee49, 9e66fd0785, b250109736, a535d25714,
4f69cabd41, 8b7a0656bb, 007ebbfd73, 3ecfca95e6, 7e2473b521,
f445187025, df4e1703c4, 646b569ced, b47e679992, 0021bbc696,
2a87403cb6, d3e1fcf35f, 2d485c5703, db2101d814, 709d3b9f3d,
a0caedcb1f, ce0e11cf0b, 696cec0723, 29e36e34be, cee620f5f6
.gitignore (vendored, 2 changed lines)

@@ -168,3 +168,5 @@ cython_debug/
 # option (not recommended) you can uncomment the following to ignore the entire idea folder.
 #.idea/

+dump.rdb
+.vscode
README.md (317 changed lines)

@@ -1,107 +1,256 @@

**Removed (previous README):**

# DNS Reconnaissance Tool

A comprehensive DNS reconnaissance tool designed for investigators to gather intelligence on hostnames and IP addresses through multiple data sources.

## Features

- **DNS Resolution**: Query multiple DNS servers (1.1.1.1, 8.8.8.8, 9.9.9.9)
- **TLD Expansion**: Automatically try all IANA TLDs for hostname-only inputs
- **Certificate Transparency**: Query crt.sh for SSL certificate information
- **Recursive Discovery**: Automatically discover and analyze subdomains
- **External Intelligence**: Optional Shodan and VirusTotal integration
- **Multiple Interfaces**: Both CLI and web interface available
- **Comprehensive Reports**: JSON and text output formats

## Installation

```bash
# Clone or create the project structure
mkdir dns-recon-tool && cd dns-recon-tool

# Install dependencies
pip install -r requirements.txt
```

## Usage

### Command Line Interface

```bash
# Basic domain scan
python -m src.main example.com

# Try all TLDs for hostname
python -m src.main example

# With API keys and custom depth
python -m src.main example.com --shodan-key YOUR_KEY --virustotal-key YOUR_KEY --max-depth 3

# Save reports
python -m src.main example.com --output results

# JSON only output
python -m src.main example.com --json-only
```

### Web Interface

```bash
# Start web server
python -m src.main --web

# Custom port
python -m src.main --web --port 8080
```

Then open http://localhost:5000 in your browser.

## Configuration

The tool uses the following default settings:
- DNS Servers: 1.1.1.1, 8.8.8.8, 9.9.9.9
- Max Recursion Depth: 2
- Rate Limits: DNS (10/s), crt.sh (2/s), Shodan (0.5/s), VirusTotal (0.25/s)

## API Keys

For enhanced reconnaissance, obtain API keys from:
- [Shodan](https://shodan.io) - Port scanning and service detection
- [VirusTotal](https://virustotal.com) - Security analysis and reputation

## Output

The tool generates two types of reports:

### JSON Report
Complete machine-readable data including:
- All discovered hostnames and IPs
- DNS records by type
- Certificate information
- External service results
- Metadata and timing

### Text Report
Human-readable summary with:
- Executive summary
- Hostnames by discovery depth
- IP address analysis
- DNS record details
- Certificate analysis
- Security findings

## Architecture

```
src/
├── main.py                 # CLI entry point
├── web_app.py              # Flask web interface
├── config.py               # Configuration management
├── data_structures.py      # Data models
├── dns_resolver.py         # DNS functionality
├── certificate_checker.py  # crt.sh integration
├── shodan_client.py        # Shodan API
├── virustotal_client.py    # VirusTotal API
├── tld_fetcher.py          # IANA TLD handling
├── reconnaissance.py       # Main logic
└── report_generator.py     # Report generation
```

**Added (rewritten README, which continues below):**

# DNSRecon - Passive Infrastructure Reconnaissance Tool

DNSRecon is an interactive, passive reconnaissance tool designed to map adversary infrastructure. It operates on a "free-by-default" model, ensuring core functionality without subscriptions, while allowing power users to enhance its capabilities with paid API keys.

**Current Status: Phase 2 Implementation**

- ✅ Core infrastructure and graph engine
- ✅ Multi-provider support (crt.sh, DNS, Shodan)
- ✅ Session-based multi-user support
- ✅ Real-time web interface with interactive visualization
- ✅ Forensic logging system and JSON export

## Features

- **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
- **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
- **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
- **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
- **Session Management**: Supports concurrent user sessions with isolated scanner instances.

## Installation

### Prerequisites

- Python 3.8 or higher
- A modern web browser with JavaScript enabled
- (Recommended) A Linux host for running the application and the optional DNS cache.

### 1\. Clone the Project

```bash
git clone https://github.com/your-repo/dnsrecon.git
cd dnsrecon
```

### 2\. Install Python Dependencies

It is highly recommended to use a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

### 3\. (Optional but Recommended) Set up a Local DNS Caching Resolver

Running a local DNS caching resolver can significantly speed up DNS queries and reduce your network footprint. Here’s how to set up `unbound` on a Debian-based Linux distribution (like Ubuntu).

**a. Install Unbound:**

```bash
sudo apt update
sudo apt install unbound -y
```

**b. Configure Unbound:**

Create a new configuration file for DNSRecon:

```bash
sudo nano /etc/unbound/unbound.conf.d/dnsrecon.conf
```

Add the following content to the file:

```
server:
    # Listen on localhost for all users
    interface: 127.0.0.1
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.0/8 allow

    # Enable prefetching of popular items
    prefetch: yes
```
|
||||
|
||||
**c. Restart Unbound and set it as the default resolver:**
|
||||
|
||||
```bash
|
||||
sudo systemctl restart unbound
|
||||
sudo systemctl enable unbound
|
||||
```
|
||||
|
||||
To use this resolver for your system, you may need to update your network settings to point to `127.0.0.1` as your DNS server.
|
||||
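Once Unbound is running, a quick way to confirm that the local resolver answers is to query it with `dnspython`, the same library the DNS provider uses. This is an illustrative check, not part of the repository (it assumes dnspython ≥ 2.0 is installed):

```python
import dns.resolver

# Point a resolver explicitly at the local Unbound instance configured above.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['127.0.0.1']

# Any public name will do for a smoke test.
answer = resolver.resolve('example.com', 'A')
print([record.to_text() for record in answer])
```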
|
||||
**d. Update DNSProvider to use the local resolver:**
|
||||
In `dnsrecon/providers/dns_provider.py`, you can explicitly set the resolver's nameservers in the `__init__` method:
|
||||
|
||||
```python
|
||||
# dnsrecon/providers/dns_provider.py
|
||||
|
||||
class DNSProvider(BaseProvider):
|
||||
def __init__(self, session_config=None):
|
||||
"""Initialize DNS provider with session-specific configuration."""
|
||||
super().__init__(...)
|
||||
|
||||
# Configure DNS resolver
|
||||
self.resolver = dns.resolver.Resolver()
|
||||
self.resolver.nameservers = ['127.0.0.1'] # Use local caching resolver
|
||||
self.resolver.timeout = 5
|
||||
self.resolver.lifetime = 10
|
||||
```
|
||||
|
||||
## Usage (Development)
|
||||
|
||||
### 1\. Start the Application
|
||||
|
||||
```bash
|
||||
python app.py
|
||||
```
|
||||
|
||||
### 2\. Open Your Browser
|
||||
|
||||
Navigate to `http://127.0.0.1:5000`.
|
||||
|
||||
### 3\. Basic Reconnaissance Workflow
|
||||
|
||||
1. **Enter Target Domain**: Input a domain like `example.com`.
|
||||
2. **Select Recursion Depth**: Depth 2 is recommended for most investigations.
|
||||
3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin.
|
||||
4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered.
|
||||
5. **Analyze and Export**: Interact with the graph and download the results when the scan is complete. (The same workflow can also be driven through the HTTP API, as sketched below.)
|
||||
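The workflow above can also be scripted against the HTTP API that the Flask app exposes (`/api/scan/start`, `/api/scan/status`, and `/api/export` in `app.py` from this change set). The following is a rough sketch using the `requests` library; the target domain and polling interval are placeholders:

```python
import time
import requests

BASE = 'http://127.0.0.1:5000'
s = requests.Session()  # keep the Flask session cookie so all calls share one user session

# 1. Start a scan (payload fields match app.py's /api/scan/start handler).
start = s.post(f'{BASE}/api/scan/start',
               json={'target_domain': 'example.com', 'max_depth': 2}).json()
print(start.get('success'), start.get('scan_id'))

# 2. Poll progress; the exact status strings come from core.scanner.ScanStatus,
#    which is not shown in this diff.
for _ in range(10):
    status = s.get(f'{BASE}/api/scan/status').json().get('status', {})
    print(status.get('status'), status.get('progress_percentage'))
    time.sleep(5)

# 3. Download the complete results as JSON.
export = s.get(f'{BASE}/api/export')
with open('dnsrecon_results.json', 'wb') as fh:
    fh.write(export.content)
```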
|
||||
## Production Deployment
|
||||
|
||||
To deploy DNSRecon in a production environment, follow these steps:
|
||||
|
||||
### 1\. Use a Production WSGI Server
|
||||
|
||||
Do not use the built-in Flask development server for production. Use a WSGI server like **Gunicorn**:
|
||||
|
||||
```bash
|
||||
pip install gunicorn
|
||||
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
|
||||
```
|
||||
|
||||
### 2\. Configure Environment Variables
|
||||
|
||||
Set the following environment variables for a secure and configurable deployment:
|
||||
|
||||
```bash
|
||||
# Generate a strong, random secret key
|
||||
export SECRET_KEY='your-super-secret-and-random-key'
|
||||
|
||||
# Set Flask to production mode
|
||||
export FLASK_ENV='production'
|
||||
export FLASK_DEBUG=False
|
||||
|
||||
# API keys (optional, but recommended for full functionality)
|
||||
export SHODAN_API_KEY="your_shodan_key"
|
||||
```
|
||||
|
||||
### 3\. Use a Reverse Proxy
|
||||
|
||||
Set up a reverse proxy like **Nginx** to sit in front of the Gunicorn server. This provides several benefits, including:
|
||||
|
||||
- **TLS/SSL Termination**: Securely handle HTTPS traffic.
|
||||
- **Load Balancing**: Distribute traffic across multiple application instances.
|
||||
- **Serving Static Files**: Efficiently serve CSS and JavaScript files.
|
||||
|
||||
**Example Nginx Configuration:**
|
||||
|
||||
```nginx
|
||||
server {
|
||||
listen 80;
|
||||
server_name your_domain.com;
|
||||
|
||||
location / {
|
||||
return 301 https://$host$request_uri;
|
||||
}
|
||||
}
|
||||
|
||||
server {
|
||||
listen 443 ssl;
|
||||
server_name your_domain.com;
|
||||
|
||||
# SSL cert configuration
|
||||
ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
|
||||
ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;
|
||||
|
||||
location / {
|
||||
proxy_pass http://127.0.0.1:5000;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
}
|
||||
|
||||
location /static {
|
||||
alias /path/to/your/dnsrecon/static;
|
||||
expires 30d;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Autostart with systemd
|
||||
|
||||
To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.
|
||||
|
||||
### 1\. Create a `.service` file
|
||||
|
||||
Create a new service file in `/etc/systemd/system/`:
|
||||
|
||||
```bash
|
||||
sudo nano /etc/systemd/system/dnsrecon.service
|
||||
```
|
||||
|
||||
### 2\. Add the Service Configuration
|
||||
|
||||
Paste the following configuration into the file. **Remember to replace `/path/to/your/dnsrecon` and `your_user` with your actual project path and username.**
|
||||
|
||||
```ini
|
||||
[Unit]
|
||||
Description=DNSRecon Application
|
||||
After=network.target
|
||||
|
||||
[Service]
|
||||
User=your_user
|
||||
Group=your_user
|
||||
WorkingDirectory=/path/to/your/dnsrecon
|
||||
ExecStart=/path/to/your/dnsrecon/venv/bin/gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
|
||||
Restart=always
|
||||
Environment="SECRET_KEY=your-super-secret-and-random-key"
|
||||
Environment="FLASK_ENV=production"
|
||||
Environment="FLASK_DEBUG=False"
|
||||
Environment="SHODAN_API_KEY=your_shodan_key"
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
### 3\. Enable and Start the Service
|
||||
|
||||
Reload the `systemd` daemon, enable the service to start on boot, and then start it immediately:
|
||||
|
||||
```bash
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl enable dnsrecon.service
|
||||
sudo systemctl start dnsrecon.service
|
||||
```
|
||||
|
||||
You can check the status of the service at any time with:
|
||||
|
||||
```bash
|
||||
sudo systemctl status dnsrecon.service
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
- **API Keys**: API keys are stored in memory for the duration of a user session and are not written to disk.
|
||||
- **Rate Limiting**: DNSRecon includes built-in rate limiting to be respectful to data sources.
|
||||
- **Local Use**: The application is designed for local or trusted network use and does not have built-in authentication. **Do not expose it directly to the internet without proper security controls.**
|
||||
|
||||
## License
|
||||
|
||||
This project is licensed under the terms of the license agreement found in the `LICENSE` file.
|
||||
app.py (new file, 647 lines)

@@ -0,0 +1,647 @@
|
||||
"""
|
||||
Flask application entry point for DNSRecon web interface.
|
||||
Enhanced with user session management and task-based completion model.
|
||||
"""
|
||||
|
||||
import json
|
||||
import traceback
|
||||
from flask import Flask, render_template, request, jsonify, send_file, session
|
||||
from datetime import datetime, timezone, timedelta
|
||||
import io
|
||||
|
||||
from core.session_manager import session_manager, UserIdentifier
|
||||
from config import config
|
||||
|
||||
|
||||
app = Flask(__name__)
|
||||
app.config['SECRET_KEY'] = 'dnsrecon-dev-key-change-in-production'
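# NOTE: development default only - the README's production instructions expect a strong, random SECRET_KEY (e.g. supplied via the environment) to be used instead.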
|
||||
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=2) # 2 hour session lifetime
|
||||
|
||||
|
||||
def get_user_scanner():
|
||||
"""
|
||||
Enhanced user scanner retrieval with user identification and session consolidation.
|
||||
Implements single session per user with seamless consolidation.
|
||||
"""
|
||||
print("=== ENHANCED GET_USER_SCANNER ===")
|
||||
|
||||
try:
|
||||
# Extract user identification from request
|
||||
client_ip, user_agent = UserIdentifier.extract_request_info(request)
|
||||
user_fingerprint = UserIdentifier.generate_user_fingerprint(client_ip, user_agent)
|
||||
|
||||
print(f"User fingerprint: {user_fingerprint}")
|
||||
print(f"Client IP: {client_ip}")
|
||||
print(f"User Agent: {user_agent[:50]}...")
|
||||
|
||||
# Get current Flask session info for debugging
|
||||
current_flask_session_id = session.get('dnsrecon_session_id')
|
||||
print(f"Flask session ID: {current_flask_session_id}")
|
||||
|
||||
# Try to get existing session first
|
||||
if current_flask_session_id:
|
||||
existing_scanner = session_manager.get_session(current_flask_session_id)
|
||||
if existing_scanner:
|
||||
# Verify session belongs to current user
|
||||
session_info = session_manager.get_session_info(current_flask_session_id)
|
||||
if session_info.get('user_fingerprint') == user_fingerprint:
|
||||
print(f"Found valid existing session {current_flask_session_id} for user {user_fingerprint}")
|
||||
existing_scanner.session_id = current_flask_session_id
|
||||
return current_flask_session_id, existing_scanner
|
||||
else:
|
||||
print(f"Session {current_flask_session_id} belongs to different user, will create new session")
|
||||
else:
|
||||
print(f"Session {current_flask_session_id} not found in Redis, will create new session")
|
||||
|
||||
# Create or replace user session (this handles consolidation automatically)
|
||||
new_session_id = session_manager.create_or_replace_user_session(client_ip, user_agent)
|
||||
new_scanner = session_manager.get_session(new_session_id)
|
||||
|
||||
if not new_scanner:
|
||||
print(f"ERROR: Failed to retrieve newly created session {new_session_id}")
|
||||
raise Exception("Failed to create new scanner session")
|
||||
|
||||
# Store in Flask session for browser persistence
|
||||
session['dnsrecon_session_id'] = new_session_id
|
||||
session.permanent = True
|
||||
|
||||
# Ensure session ID is set on scanner
|
||||
new_scanner.session_id = new_session_id
|
||||
|
||||
# Get session info for user feedback
|
||||
session_info = session_manager.get_session_info(new_session_id)
|
||||
|
||||
print(f"Session created/consolidated successfully")
|
||||
print(f" - Session ID: {new_session_id}")
|
||||
print(f" - User: {user_fingerprint}")
|
||||
print(f" - Scanner status: {new_scanner.status}")
|
||||
print(f" - Session age: {session_info.get('session_age_minutes', 0)} minutes")
|
||||
|
||||
return new_session_id, new_scanner
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in get_user_scanner: {e}")
|
||||
traceback.print_exc()
|
||||
raise
|
||||
|
||||
|
||||
@app.route('/')
|
||||
def index():
|
||||
"""Serve the main web interface."""
|
||||
return render_template('index.html')
|
||||
|
||||
|
||||
@app.route('/api/scan/start', methods=['POST'])
|
||||
def start_scan():
|
||||
"""
|
||||
Start a new reconnaissance scan with enhanced user session management.
|
||||
"""
|
||||
print("=== API: /api/scan/start called ===")
|
||||
|
||||
try:
|
||||
print("Getting JSON data from request...")
|
||||
data = request.get_json()
|
||||
print(f"Request data: {data}")
|
||||
|
||||
if not data or 'target_domain' not in data:
|
||||
print("ERROR: Missing target_domain in request")
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'Missing target_domain in request'
|
||||
}), 400
|
||||
|
||||
target_domain = data['target_domain'].strip()
|
||||
max_depth = data.get('max_depth', config.default_recursion_depth)
|
||||
clear_graph = data.get('clear_graph', True)
|
||||
|
||||
print(f"Parsed - target_domain: '{target_domain}', max_depth: {max_depth}, clear_graph: {clear_graph}")
|
||||
|
||||
# Validation
|
||||
if not target_domain:
|
||||
print("ERROR: Target domain cannot be empty")
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'Target domain cannot be empty'
|
||||
}), 400
|
||||
|
||||
if not isinstance(max_depth, int) or max_depth < 1 or max_depth > 5:
|
||||
print(f"ERROR: Invalid max_depth: {max_depth}")
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'Max depth must be an integer between 1 and 5'
|
||||
}), 400
|
||||
|
||||
print("Validation passed, getting user scanner...")
|
||||
|
||||
# Get user-specific scanner with enhanced session management
|
||||
user_session_id, scanner = get_user_scanner()
|
||||
|
||||
# Ensure session ID is properly set
|
||||
if not scanner.session_id:
|
||||
scanner.session_id = user_session_id
|
||||
|
||||
print(f"Using session: {user_session_id}")
|
||||
print(f"Scanner object ID: {id(scanner)}")
|
||||
|
||||
# Start scan
|
||||
print(f"Calling start_scan on scanner {id(scanner)}...")
|
||||
success = scanner.start_scan(target_domain, max_depth, clear_graph=clear_graph)
|
||||
|
||||
# Immediately update session state regardless of success
|
||||
session_manager.update_session_scanner(user_session_id, scanner)
|
||||
|
||||
if success:
|
||||
scan_session_id = scanner.logger.session_id
|
||||
print(f"Scan started successfully with scan session ID: {scan_session_id}")
|
||||
|
||||
# Get session info for user feedback
|
||||
session_info = session_manager.get_session_info(user_session_id)
|
||||
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'message': 'Scan started successfully',
|
||||
'scan_id': scan_session_id,
|
||||
'user_session_id': user_session_id,
|
||||
'scanner_status': scanner.status,
|
||||
'session_info': {
|
||||
'user_fingerprint': session_info.get('user_fingerprint', 'unknown'),
|
||||
'session_age_minutes': session_info.get('session_age_minutes', 0),
|
||||
'consolidated': session_info.get('session_age_minutes', 0) > 0
|
||||
},
|
||||
'debug_info': {
|
||||
'scanner_object_id': id(scanner),
|
||||
'scanner_status': scanner.status
|
||||
}
|
||||
})
|
||||
else:
|
||||
print("ERROR: Scanner returned False")
|
||||
|
||||
# Provide more detailed error information
|
||||
error_details = {
|
||||
'scanner_status': scanner.status,
|
||||
'scanner_object_id': id(scanner),
|
||||
'session_id': user_session_id,
|
||||
'providers_count': len(scanner.providers) if hasattr(scanner, 'providers') else 0
|
||||
}
|
||||
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Failed to start scan (scanner status: {scanner.status})',
|
||||
'debug_info': error_details
|
||||
}), 409
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in start_scan endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/scan/stop', methods=['POST'])
|
||||
def stop_scan():
|
||||
"""Stop the current scan with immediate GUI feedback."""
|
||||
print("=== API: /api/scan/stop called ===")
|
||||
|
||||
try:
|
||||
# Get user-specific scanner
|
||||
user_session_id, scanner = get_user_scanner()
|
||||
print(f"Stopping scan for session: {user_session_id}")
|
||||
|
||||
if not scanner:
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'No scanner found for session'
|
||||
}), 404
|
||||
|
||||
# Ensure session ID is set
|
||||
if not scanner.session_id:
|
||||
scanner.session_id = user_session_id
|
||||
|
||||
# Use the stop mechanism
|
||||
success = scanner.stop_scan()
|
||||
|
||||
# Also set the Redis stop signal directly for extra reliability
|
||||
session_manager.set_stop_signal(user_session_id)
|
||||
|
||||
# Force immediate status update
|
||||
session_manager.update_scanner_status(user_session_id, 'stopped')
|
||||
|
||||
# Update the full scanner state
|
||||
session_manager.update_session_scanner(user_session_id, scanner)
|
||||
|
||||
print(f"Stop scan completed. Success: {success}, Scanner status: {scanner.status}")
|
||||
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'message': 'Scan stop requested - termination initiated',
|
||||
'user_session_id': user_session_id,
|
||||
'scanner_status': scanner.status,
|
||||
'stop_method': 'cross_process'
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in stop_scan endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/scan/status', methods=['GET'])
|
||||
def get_scan_status():
|
||||
"""Get current scan status with enhanced session information."""
|
||||
try:
|
||||
# Get user-specific scanner
|
||||
user_session_id, scanner = get_user_scanner()
|
||||
|
||||
if not scanner:
|
||||
# Return default idle status if no scanner
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'status': {
|
||||
'status': 'idle',
|
||||
'target_domain': None,
|
||||
'current_depth': 0,
|
||||
'max_depth': 0,
|
||||
'current_indicator': '',
|
||||
'total_indicators_found': 0,
|
||||
'indicators_processed': 0,
|
||||
'progress_percentage': 0.0,
|
||||
'enabled_providers': [],
|
||||
'graph_statistics': {},
|
||||
'user_session_id': user_session_id
|
||||
}
|
||||
})
|
||||
|
||||
# Ensure session ID is set
|
||||
if not scanner.session_id:
|
||||
scanner.session_id = user_session_id
|
||||
|
||||
status = scanner.get_scan_status()
|
||||
status['user_session_id'] = user_session_id
|
||||
|
||||
# Add enhanced session information
|
||||
session_info = session_manager.get_session_info(user_session_id)
|
||||
status['session_info'] = {
|
||||
'user_fingerprint': session_info.get('user_fingerprint', 'unknown'),
|
||||
'session_age_minutes': session_info.get('session_age_minutes', 0),
|
||||
'client_ip': session_info.get('client_ip', 'unknown'),
|
||||
'last_activity': session_info.get('last_activity')
|
||||
}
|
||||
|
||||
# Additional debug info
|
||||
status['debug_info'] = {
|
||||
'scanner_object_id': id(scanner),
|
||||
'session_id_set': bool(scanner.session_id),
|
||||
'has_scan_thread': bool(scanner.scan_thread and scanner.scan_thread.is_alive())
|
||||
}
|
||||
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'status': status
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in get_scan_status endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}',
|
||||
'fallback_status': {
|
||||
'status': 'error',
|
||||
'target_domain': None,
|
||||
'current_depth': 0,
|
||||
'max_depth': 0,
|
||||
'progress_percentage': 0.0
|
||||
}
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/graph', methods=['GET'])
|
||||
def get_graph_data():
|
||||
"""Get current graph data with error handling."""
|
||||
try:
|
||||
# Get user-specific scanner
|
||||
user_session_id, scanner = get_user_scanner()
|
||||
|
||||
if not scanner:
|
||||
# Return empty graph if no scanner
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'graph': {
|
||||
'nodes': [],
|
||||
'edges': [],
|
||||
'statistics': {
|
||||
'node_count': 0,
|
||||
'edge_count': 0,
|
||||
'creation_time': datetime.now(timezone.utc).isoformat(),
|
||||
'last_modified': datetime.now(timezone.utc).isoformat()
|
||||
}
|
||||
},
|
||||
'user_session_id': user_session_id
|
||||
})
|
||||
|
||||
graph_data = scanner.get_graph_data()
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'graph': graph_data,
|
||||
'user_session_id': user_session_id
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in get_graph_data endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}',
|
||||
'fallback_graph': {
|
||||
'nodes': [],
|
||||
'edges': [],
|
||||
'statistics': {'node_count': 0, 'edge_count': 0}
|
||||
}
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/export', methods=['GET'])
|
||||
def export_results():
|
||||
"""Export complete scan results as downloadable JSON for the user session."""
|
||||
try:
|
||||
# Get user-specific scanner
|
||||
user_session_id, scanner = get_user_scanner()
|
||||
|
||||
# Get complete results
|
||||
results = scanner.export_results()
|
||||
|
||||
# Add enhanced session information to export
|
||||
session_info = session_manager.get_session_info(user_session_id)
|
||||
results['export_metadata'] = {
|
||||
'user_session_id': user_session_id,
|
||||
'user_fingerprint': session_info.get('user_fingerprint', 'unknown'),
|
||||
'client_ip': session_info.get('client_ip', 'unknown'),
|
||||
'session_age_minutes': session_info.get('session_age_minutes', 0),
|
||||
'export_timestamp': datetime.now(timezone.utc).isoformat(),
|
||||
'export_type': 'user_session_results'
|
||||
}
|
||||
|
||||
# Create filename with user fingerprint
|
||||
timestamp = datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')
|
||||
target = scanner.current_target or 'unknown'
|
||||
user_fp = session_info.get('user_fingerprint', 'unknown')[:8]
|
||||
filename = f"dnsrecon_{target}_{timestamp}_{user_fp}.json"
|
||||
|
||||
# Create in-memory file
|
||||
json_data = json.dumps(results, indent=2, ensure_ascii=False)
|
||||
file_obj = io.BytesIO(json_data.encode('utf-8'))
|
||||
|
||||
return send_file(
|
||||
file_obj,
|
||||
as_attachment=True,
|
||||
download_name=filename,
|
||||
mimetype='application/json'
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in export_results endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Export failed: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/providers', methods=['GET'])
|
||||
def get_providers():
|
||||
"""Get information about available providers for the user session."""
|
||||
print("=== API: /api/providers called ===")
|
||||
|
||||
try:
|
||||
# Get user-specific scanner
|
||||
user_session_id, scanner = get_user_scanner()
|
||||
|
||||
provider_info = scanner.get_provider_info()
|
||||
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'providers': provider_info,
|
||||
'user_session_id': user_session_id
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in get_providers endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/config/api-keys', methods=['POST'])
|
||||
def set_api_keys():
|
||||
"""
|
||||
Set API keys for providers for the user session only.
|
||||
"""
|
||||
try:
|
||||
data = request.get_json()
|
||||
|
||||
if data is None:
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'No API keys provided'
|
||||
}), 400
|
||||
|
||||
# Get user-specific scanner and config
|
||||
user_session_id, scanner = get_user_scanner()
|
||||
session_config = scanner.config
|
||||
|
||||
updated_providers = []
|
||||
|
||||
# Iterate over the API keys provided in the request data
|
||||
for provider_name, api_key in data.items():
|
||||
# This allows us to both set and clear keys. The config
|
||||
# handles enabling/disabling based on whether the key is empty.
|
||||
api_key_value = str(api_key or '').strip()
|
||||
success = session_config.set_api_key(provider_name.lower(), api_key_value)
|
||||
|
||||
if success:
|
||||
updated_providers.append(provider_name)
|
||||
|
||||
if updated_providers:
|
||||
# Reinitialize scanner providers to apply the new keys
|
||||
scanner._initialize_providers()
|
||||
|
||||
# Persist the updated scanner object back to the user's session
|
||||
session_manager.update_session_scanner(user_session_id, scanner)
|
||||
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'message': f'API keys updated for session {user_session_id}: {", ".join(updated_providers)}',
|
||||
'updated_providers': updated_providers,
|
||||
'user_session_id': user_session_id
|
||||
})
|
||||
else:
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'No valid API keys were provided or provider names were incorrect.'
|
||||
}), 400
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in set_api_keys endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/session/info', methods=['GET'])
|
||||
def get_session_info():
|
||||
"""Get enhanced information about the current user session."""
|
||||
try:
|
||||
user_session_id, scanner = get_user_scanner()
|
||||
session_info = session_manager.get_session_info(user_session_id)
|
||||
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'session_info': session_info
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in get_session_info endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/session/terminate', methods=['POST'])
|
||||
def terminate_session():
|
||||
"""Terminate the current user session."""
|
||||
try:
|
||||
user_session_id = session.get('dnsrecon_session_id')
|
||||
|
||||
if user_session_id:
|
||||
success = session_manager.terminate_session(user_session_id)
|
||||
# Clear Flask session
|
||||
session.pop('dnsrecon_session_id', None)
|
||||
|
||||
return jsonify({
|
||||
'success': success,
|
||||
'message': 'Session terminated' if success else 'Session not found'
|
||||
})
|
||||
else:
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'No active session to terminate'
|
||||
}), 400
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in terminate_session endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/admin/sessions', methods=['GET'])
|
||||
def list_sessions():
|
||||
"""Admin endpoint to list all active sessions with enhanced information."""
|
||||
try:
|
||||
sessions = session_manager.list_active_sessions()
|
||||
stats = session_manager.get_statistics()
|
||||
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'sessions': sessions,
|
||||
'statistics': stats
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in list_sessions endpoint: {e}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Internal server error: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.route('/api/health', methods=['GET'])
|
||||
def health_check():
|
||||
"""Health check endpoint with enhanced session statistics."""
|
||||
try:
|
||||
# Get session stats
|
||||
session_stats = session_manager.get_statistics()
|
||||
|
||||
return jsonify({
|
||||
'success': True,
|
||||
'status': 'healthy',
|
||||
'timestamp': datetime.now(timezone.utc).isoformat(),
|
||||
'version': '2.0.0-enhanced',
|
||||
'phase': 'enhanced_architecture',
|
||||
'features': {
|
||||
'multi_provider': True,
|
||||
'concurrent_processing': True,
|
||||
'real_time_updates': True,
|
||||
'api_key_management': True,
|
||||
'visualization': True,
|
||||
'retry_logic': True,
|
||||
'user_sessions': True,
|
||||
'session_isolation': True,
|
||||
'global_provider_caching': True,
|
||||
'single_session_per_user': True,
|
||||
'session_consolidation': True,
|
||||
'task_completion_model': True
|
||||
},
|
||||
'session_statistics': session_stats,
|
||||
'cache_info': {
|
||||
'global_provider_cache': True,
|
||||
'cache_location': '.cache/<provider_name>/',
|
||||
'cache_expiry_hours': 12
|
||||
}
|
||||
})
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in health_check endpoint: {e}")
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': f'Health check failed: {str(e)}'
|
||||
}), 500
|
||||
|
||||
|
||||
@app.errorhandler(404)
|
||||
def not_found(error):
|
||||
"""Handle 404 errors."""
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'Endpoint not found'
|
||||
}), 404
|
||||
|
||||
|
||||
@app.errorhandler(500)
|
||||
def internal_error(error):
|
||||
"""Handle 500 errors."""
|
||||
print(f"ERROR: 500 Internal Server Error: {error}")
|
||||
traceback.print_exc()
|
||||
return jsonify({
|
||||
'success': False,
|
||||
'error': 'Internal server error'
|
||||
}), 500
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
print("Starting DNSRecon Flask application with enhanced user session support...")
|
||||
|
||||
# Load configuration from environment
|
||||
config.load_from_env()
|
||||
|
||||
# Start Flask application
|
||||
print(f"Starting server on {config.flask_host}:{config.flask_port}")
|
||||
app.run(
|
||||
host=config.flask_host,
|
||||
port=config.flask_port,
|
||||
debug=config.flask_debug,
|
||||
threaded=True
|
||||
)
|
||||
config.py (new file, 114 lines)

@@ -0,0 +1,114 @@
|
||||
"""
|
||||
Configuration management for DNSRecon tool.
|
||||
Handles API key storage, rate limiting, and default settings.
|
||||
"""
|
||||
|
||||
import os
|
||||
from typing import Dict, Optional
|
||||
|
||||
|
||||
class Config:
|
||||
"""Configuration manager for DNSRecon application."""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize configuration with default values."""
|
||||
self.api_keys: Dict[str, Optional[str]] = {
|
||||
'shodan': None
|
||||
}
|
||||
|
||||
# Default settings
|
||||
self.default_recursion_depth = 2
|
||||
self.default_timeout = 10
|
||||
self.max_concurrent_requests = 5
|
||||
self.large_entity_threshold = 100
|
||||
|
||||
# Rate limiting settings (requests per minute)
|
||||
self.rate_limits = {
|
||||
'crtsh': 60, # Free service, be respectful
|
||||
'shodan': 60, # API dependent
|
||||
'dns': 100 # Local DNS queries
|
||||
}
|
||||
|
||||
# Provider settings
|
||||
self.enabled_providers = {
|
||||
'crtsh': True, # Always enabled (free)
|
||||
'dns': True, # Always enabled (free)
|
||||
'shodan': False # Requires API key
|
||||
}
|
||||
|
||||
# Logging configuration
|
||||
self.log_level = 'INFO'
|
||||
self.log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
|
||||
# Flask configuration
|
||||
self.flask_host = '127.0.0.1'
|
||||
self.flask_port = 5000
|
||||
self.flask_debug = True
|
||||
|
||||
def set_api_key(self, provider: str, api_key: str) -> bool:
|
||||
"""
|
||||
Set API key for a provider.
|
||||
|
||||
Args:
|
||||
provider: Provider name (shodan, etc)
|
||||
api_key: API key string
|
||||
|
||||
Returns:
|
||||
bool: True if key was set successfully
|
||||
"""
|
||||
if provider in self.api_keys:
|
||||
self.api_keys[provider] = api_key
|
||||
self.enabled_providers[provider] = bool(api_key)
|
||||
return True
|
||||
return False
|
||||
|
||||
def get_api_key(self, provider: str) -> Optional[str]:
|
||||
"""
|
||||
Get API key for a provider.
|
||||
|
||||
Args:
|
||||
provider: Provider name
|
||||
|
||||
Returns:
|
||||
API key or None if not set
|
||||
"""
|
||||
return self.api_keys.get(provider)
|
||||
|
||||
def is_provider_enabled(self, provider: str) -> bool:
|
||||
"""
|
||||
Check if a provider is enabled.
|
||||
|
||||
Args:
|
||||
provider: Provider name
|
||||
|
||||
Returns:
|
||||
bool: True if provider is enabled
|
||||
"""
|
||||
return self.enabled_providers.get(provider, False)
|
||||
|
||||
def get_rate_limit(self, provider: str) -> int:
|
||||
"""
|
||||
Get rate limit for a provider.
|
||||
|
||||
Args:
|
||||
provider: Provider name
|
||||
|
||||
Returns:
|
||||
Rate limit in requests per minute
|
||||
"""
|
||||
return self.rate_limits.get(provider, 60)
|
||||
|
||||
def load_from_env(self):
|
||||
"""Load configuration from environment variables."""
|
||||
if os.getenv('SHODAN_API_KEY'):
|
||||
self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))
|
||||
|
||||
# Override default settings from environment
|
||||
self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
|
||||
self.flask_debug = os.getenv('FLASK_DEBUG', 'True').lower() == 'true'
|
||||
self.default_timeout = 30
|
||||
self.max_concurrent_requests = 5
|
||||
|
||||
|
||||
# Global configuration instance
|
||||
config = Config()
|
||||
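A minimal, illustrative sketch of how this configuration API is consumed (app.py in this change set imports the global `config` instance); the key value is a placeholder:

```python
from config import config

config.load_from_env()                            # picks up SHODAN_API_KEY, DEFAULT_RECURSION_DEPTH, FLASK_DEBUG
config.set_api_key('shodan', 'YOUR_SHODAN_KEY')   # placeholder key; a non-empty key also enables the provider
print(config.is_provider_enabled('shodan'))       # True once a key is set
print(config.get_rate_limit('crtsh'))             # 60 requests per minute (default)
print(config.get_rate_limit('unknown_provider'))  # unknown providers fall back to 60
```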
core/__init__.py (new file, 29 lines)

@@ -0,0 +1,29 @@
|
||||
"""
|
||||
Core modules for DNSRecon passive reconnaissance tool.
|
||||
Contains graph management, scanning orchestration, and forensic logging.
|
||||
"""
|
||||
|
||||
from .graph_manager import GraphManager, NodeType
|
||||
from .scanner import Scanner, ScanStatus
|
||||
from .logger import ForensicLogger, get_forensic_logger, new_session
|
||||
from .session_manager import session_manager
|
||||
from .session_config import SessionConfig, create_session_config
|
||||
from .task_manager import TaskManager, TaskType, ReconTask
|
||||
|
||||
__all__ = [
|
||||
'GraphManager',
|
||||
'NodeType',
|
||||
'Scanner',
|
||||
'ScanStatus',
|
||||
'ForensicLogger',
|
||||
'get_forensic_logger',
|
||||
'new_session',
|
||||
'session_manager',
|
||||
'SessionConfig',
|
||||
'create_session_config',
|
||||
'TaskManager',
|
||||
'TaskType',
|
||||
'ReconTask'
|
||||
]
|
||||
|
||||
__version__ = "1.0.0-phase2"
|
||||
core/graph_manager.py (new file, 453 lines)

@@ -0,0 +1,453 @@
|
||||
"""
|
||||
Graph data model for DNSRecon using NetworkX.
|
||||
Manages in-memory graph storage with confidence scoring and forensic metadata.
|
||||
"""
|
||||
import re
|
||||
from datetime import datetime, timezone
|
||||
from enum import Enum
|
||||
from typing import Dict, List, Any, Optional, Tuple
|
||||
|
||||
import networkx as nx
|
||||
|
||||
|
||||
class NodeType(Enum):
|
||||
"""Enumeration of supported node types."""
|
||||
DOMAIN = "domain"
|
||||
IP = "ip"
|
||||
ASN = "asn"
|
||||
LARGE_ENTITY = "large_entity"
|
||||
CORRELATION_OBJECT = "correlation_object"
|
||||
|
||||
def __repr__(self):
|
||||
return self.value
|
||||
|
||||
|
||||
class GraphManager:
|
||||
"""
|
||||
Thread-safe graph manager for DNSRecon infrastructure mapping.
|
||||
Uses NetworkX for in-memory graph storage with confidence scoring.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize empty directed graph."""
|
||||
self.graph = nx.DiGraph()
|
||||
self.creation_time = datetime.now(timezone.utc).isoformat()
|
||||
self.last_modified = self.creation_time
|
||||
self.correlation_index = {}
|
||||
# Compile regex for date filtering for efficiency
|
||||
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
|
||||
|
||||
def __getstate__(self):
|
||||
"""Prepare GraphManager for pickling, excluding compiled regex."""
|
||||
state = self.__dict__.copy()
|
||||
# Compiled regex patterns are not always picklable
|
||||
if 'date_pattern' in state:
|
||||
del state['date_pattern']
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Restore GraphManager state and recompile regex."""
|
||||
self.__dict__.update(state)
|
||||
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
|
||||
|
||||
def _update_correlation_index(self, node_id: str, data: Any, path: List[str] = None):
|
||||
"""Recursively traverse metadata and add hashable values to the index."""
|
||||
if path is None:
|
||||
path = []
|
||||
|
||||
if isinstance(data, dict):
|
||||
for key, value in data.items():
|
||||
self._update_correlation_index(node_id, value, path + [key])
|
||||
elif isinstance(data, list):
|
||||
for i, item in enumerate(data):
|
||||
self._update_correlation_index(node_id, item, path + [f"[{i}]"])
|
||||
else:
|
||||
self._add_to_correlation_index(node_id, data, ".".join(path))
|
||||
|
||||
def _add_to_correlation_index(self, node_id: str, value: Any, path_str: str):
|
||||
"""Add a hashable value to the correlation index, filtering out noise."""
|
||||
if not isinstance(value, (str, int, float, bool)) or value is None:
|
||||
return
|
||||
|
||||
# Ignore certain paths that contain noisy, non-unique identifiers
|
||||
if any(keyword in path_str.lower() for keyword in ['count', 'total', 'timestamp', 'date']):
|
||||
return
|
||||
|
||||
# Filter out common low-entropy values and date-like strings
|
||||
if isinstance(value, str):
|
||||
# FIXED: Prevent correlation on date/time strings.
|
||||
if self.date_pattern.match(value):
|
||||
return
|
||||
if len(value) < 4 or value.lower() in ['true', 'false', 'unknown', 'none', 'crt.sh']:
|
||||
return
|
||||
elif isinstance(value, int) and abs(value) < 9999:
|
||||
return # Ignore small integers
|
||||
elif isinstance(value, bool):
|
||||
return # Ignore boolean values
|
||||
|
||||
# Add the valuable correlation data to the index
|
||||
if value not in self.correlation_index:
|
||||
self.correlation_index[value] = {}
|
||||
if node_id not in self.correlation_index[value]:
|
||||
self.correlation_index[value][node_id] = []
|
||||
if path_str not in self.correlation_index[value][node_id]:
|
||||
self.correlation_index[value][node_id].append(path_str)
|
||||
|
||||
def _check_for_correlations(self, new_node_id: str, data: Any, path: List[str] = None) -> List[Dict]:
|
||||
"""Recursively traverse metadata to find correlations with existing data."""
|
||||
if path is None:
|
||||
path = []
|
||||
|
||||
all_correlations = []
|
||||
if isinstance(data, dict):
|
||||
for key, value in data.items():
|
||||
if key == 'source': # Avoid correlating on the provider name
|
||||
continue
|
||||
all_correlations.extend(self._check_for_correlations(new_node_id, value, path + [key]))
|
||||
elif isinstance(data, list):
|
||||
for i, item in enumerate(data):
|
||||
all_correlations.extend(self._check_for_correlations(new_node_id, item, path + [f"[{i}]"]))
|
||||
else:
|
||||
value = data
|
||||
if value in self.correlation_index:
|
||||
existing_nodes_with_paths = self.correlation_index[value]
|
||||
unique_nodes = set(existing_nodes_with_paths.keys())
|
||||
unique_nodes.add(new_node_id)
|
||||
|
||||
if len(unique_nodes) < 2:
|
||||
return all_correlations # Correlation must involve at least two distinct nodes
|
||||
|
||||
new_source = {'node_id': new_node_id, 'path': ".".join(path)}
|
||||
all_sources = [new_source]
|
||||
for node_id, paths in existing_nodes_with_paths.items():
|
||||
for p_str in paths:
|
||||
all_sources.append({'node_id': node_id, 'path': p_str})
|
||||
|
||||
all_correlations.append({
|
||||
'value': value,
|
||||
'sources': all_sources,
|
||||
'nodes': list(unique_nodes)
|
||||
})
|
||||
return all_correlations
|
||||
|
||||
def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[Dict[str, Any]] = None,
|
||||
description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
|
||||
"""Add a node to the graph, update attributes, and process correlations."""
|
||||
is_new_node = not self.graph.has_node(node_id)
|
||||
if is_new_node:
|
||||
self.graph.add_node(node_id, type=node_type.value,
|
||||
added_timestamp=datetime.now(timezone.utc).isoformat(),
|
||||
attributes=attributes or {},
|
||||
description=description,
|
||||
metadata=metadata or {})
|
||||
else:
|
||||
# Safely merge new attributes into existing attributes
|
||||
if attributes:
|
||||
existing_attributes = self.graph.nodes[node_id].get('attributes', {})
|
||||
existing_attributes.update(attributes)
|
||||
self.graph.nodes[node_id]['attributes'] = existing_attributes
|
||||
if description:
|
||||
self.graph.nodes[node_id]['description'] = description
|
||||
if metadata:
|
||||
existing_metadata = self.graph.nodes[node_id].get('metadata', {})
|
||||
existing_metadata.update(metadata)
|
||||
self.graph.nodes[node_id]['metadata'] = existing_metadata
|
||||
|
||||
if attributes and node_type != NodeType.CORRELATION_OBJECT:
|
||||
correlations = self._check_for_correlations(node_id, attributes)
|
||||
for corr in correlations:
|
||||
value = corr['value']
|
||||
|
||||
# STEP 1: Substring check against all existing nodes
|
||||
if self._correlation_value_matches_existing_node(value):
|
||||
# Skip creating correlation node - would be redundant
|
||||
continue
|
||||
|
||||
# STEP 2: Filter out node pairs that already have direct edges
|
||||
eligible_nodes = self._filter_nodes_without_direct_edges(set(corr['nodes']))
|
||||
|
||||
if len(eligible_nodes) < 2:
|
||||
# Need at least 2 nodes to create a correlation
|
||||
continue
|
||||
|
||||
# STEP 3: Check for existing correlation node with same connection pattern
|
||||
correlation_nodes_with_pattern = self._find_correlation_nodes_with_same_pattern(eligible_nodes)
|
||||
|
||||
if correlation_nodes_with_pattern:
|
||||
# STEP 4: Merge with existing correlation node
|
||||
target_correlation_node = correlation_nodes_with_pattern[0]
|
||||
self._merge_correlation_values(target_correlation_node, value, corr)
|
||||
else:
|
||||
# STEP 5: Create new correlation node for eligible nodes only
|
||||
correlation_node_id = f"corr_{abs(hash(str(sorted(eligible_nodes))))}"
|
||||
self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT,
|
||||
metadata={'values': [value], 'sources': corr['sources'],
|
||||
'correlated_nodes': list(eligible_nodes)})
|
||||
|
||||
# Create edges from eligible nodes to this correlation node
|
||||
for c_node_id in eligible_nodes:
|
||||
if self.graph.has_node(c_node_id):
|
||||
attribute = corr['sources'][0]['path'].split('.')[-1]
|
||||
relationship_type = f"c_{attribute}"
|
||||
self.add_edge(c_node_id, correlation_node_id, relationship_type, confidence_score=0.9)
|
||||
|
||||
self._update_correlation_index(node_id, attributes)
|
||||
|
||||
self.last_modified = datetime.now(timezone.utc).isoformat()
|
||||
return is_new_node
|
||||
|
||||
def _filter_nodes_without_direct_edges(self, node_set: set) -> set:
|
||||
"""
|
||||
Filter out nodes that already have direct edges between them.
|
||||
Returns set of nodes that should be included in correlation.
|
||||
"""
|
||||
nodes_list = list(node_set)
|
||||
eligible_nodes = set(node_set) # Start with all nodes
|
||||
|
||||
# Check all pairs of nodes
|
||||
for i in range(len(nodes_list)):
|
||||
for j in range(i + 1, len(nodes_list)):
|
||||
node_a = nodes_list[i]
|
||||
node_b = nodes_list[j]
|
||||
|
||||
# Check if direct edge exists in either direction
|
||||
if self._has_direct_edge_bidirectional(node_a, node_b):
|
||||
# Remove both nodes from eligible set since they're already connected
|
||||
eligible_nodes.discard(node_a)
|
||||
eligible_nodes.discard(node_b)
|
||||
|
||||
return eligible_nodes
|
||||
|
||||
def _has_direct_edge_bidirectional(self, node_a: str, node_b: str) -> bool:
|
||||
"""
|
||||
Check if there's a direct edge between two nodes in either direction.
|
||||
Returns True if node_a→node_b OR node_b→node_a exists.
|
||||
"""
|
||||
return (self.graph.has_edge(node_a, node_b) or
|
||||
self.graph.has_edge(node_b, node_a))
|
||||
|
||||
def _correlation_value_matches_existing_node(self, correlation_value: str) -> bool:
|
||||
"""
|
||||
Check if correlation value contains any existing node ID as substring.
|
||||
Returns True if match found (correlation node should NOT be created).
|
||||
"""
|
||||
correlation_str = str(correlation_value).lower()
|
||||
|
||||
# Check against all existing nodes
|
||||
for existing_node_id in self.graph.nodes():
|
||||
if existing_node_id.lower() in correlation_str:
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
def _find_correlation_nodes_with_same_pattern(self, node_set: set) -> List[str]:
|
||||
"""
|
||||
Find existing correlation nodes that have the exact same pattern of connected nodes.
|
||||
Returns list of correlation node IDs with matching patterns.
|
||||
"""
|
||||
correlation_nodes = self.get_nodes_by_type(NodeType.CORRELATION_OBJECT)
|
||||
matching_nodes = []
|
||||
|
||||
for corr_node_id in correlation_nodes:
|
||||
# Get all nodes connected to this correlation node
|
||||
connected_nodes = set()
|
||||
|
||||
# Add all predecessors (nodes pointing TO the correlation node)
|
||||
connected_nodes.update(self.graph.predecessors(corr_node_id))
|
||||
|
||||
# Add all successors (nodes pointed TO by the correlation node)
|
||||
connected_nodes.update(self.graph.successors(corr_node_id))
|
||||
|
||||
# Check if the pattern matches exactly
|
||||
if connected_nodes == node_set:
|
||||
matching_nodes.append(corr_node_id)
|
||||
|
||||
return matching_nodes
|
||||
|
||||
def _merge_correlation_values(self, target_node_id: str, new_value: Any, corr_data: Dict) -> None:
|
||||
"""
|
||||
Merge a new correlation value into an existing correlation node.
|
||||
Uses same logic as large entity merging.
|
||||
"""
|
||||
if not self.graph.has_node(target_node_id):
|
||||
return
|
||||
|
||||
target_metadata = self.graph.nodes[target_node_id]['metadata']
|
||||
|
||||
# Get existing values (ensure it's a list)
|
||||
existing_values = target_metadata.get('values', [])
|
||||
if not isinstance(existing_values, list):
|
||||
existing_values = [existing_values]
|
||||
|
||||
# Add new value if not already present
|
||||
if new_value not in existing_values:
|
||||
existing_values.append(new_value)
|
||||
|
||||
# Merge sources
|
||||
existing_sources = target_metadata.get('sources', [])
|
||||
new_sources = corr_data.get('sources', [])
|
||||
|
||||
# Create set of unique sources based on (node_id, path) tuples
|
||||
source_set = set()
|
||||
for source in existing_sources + new_sources:
|
||||
source_tuple = (source['node_id'], source['path'])
|
||||
source_set.add(source_tuple)
|
||||
|
||||
# Convert back to list of dictionaries
|
||||
merged_sources = [{'node_id': nid, 'path': path} for nid, path in source_set]
|
||||
|
||||
# Update metadata
|
||||
target_metadata.update({
|
||||
'values': existing_values,
|
||||
'sources': merged_sources,
|
||||
'correlated_nodes': list(set(target_metadata.get('correlated_nodes', []) + corr_data.get('nodes', []))),
|
||||
'merge_count': len(existing_values),
|
||||
'last_merge_timestamp': datetime.now(timezone.utc).isoformat()
|
||||
})
|
||||
|
||||
# Update description to reflect merged nature
|
||||
value_count = len(existing_values)
|
||||
node_count = len(target_metadata['correlated_nodes'])
|
||||
self.graph.nodes[target_node_id]['description'] = (
|
||||
f"Correlation container with {value_count} merged values "
|
||||
f"across {node_count} nodes"
|
||||
)
|
||||
|
||||
def add_edge(self, source_id: str, target_id: str, relationship_type: str,
|
||||
confidence_score: float = 0.5, source_provider: str = "unknown",
|
||||
raw_data: Optional[Dict[str, Any]] = None) -> bool:
|
||||
"""Add or update an edge between two nodes, ensuring nodes exist."""
|
||||
if not self.graph.has_node(source_id) or not self.graph.has_node(target_id):
|
||||
return False
|
||||
|
||||
new_confidence = confidence_score
|
||||
|
||||
if relationship_type.startswith("c_"):
|
||||
edge_label = relationship_type
|
||||
else:
|
||||
edge_label = f"{source_provider}_{relationship_type}"
|
||||
|
||||
if self.graph.has_edge(source_id, target_id):
|
||||
# If edge exists, update confidence if the new score is higher.
|
||||
if new_confidence > self.graph.edges[source_id, target_id].get('confidence_score', 0):
|
||||
self.graph.edges[source_id, target_id]['confidence_score'] = new_confidence
|
||||
self.graph.edges[source_id, target_id]['updated_timestamp'] = datetime.now(timezone.utc).isoformat()
|
||||
self.graph.edges[source_id, target_id]['updated_by'] = source_provider
|
||||
return False
|
||||
|
||||
# Add a new edge with all attributes.
|
||||
self.graph.add_edge(source_id, target_id,
|
||||
relationship_type=edge_label,
|
||||
confidence_score=new_confidence,
|
||||
source_provider=source_provider,
|
||||
discovery_timestamp=datetime.now(timezone.utc).isoformat(),
|
||||
raw_data=raw_data or {})
|
||||
self.last_modified = datetime.now(timezone.utc).isoformat()
|
||||
return True
|
||||
|
||||
def get_node_count(self) -> int:
|
||||
"""Get total number of nodes in the graph."""
|
||||
return self.graph.number_of_nodes()
|
||||
|
||||
def get_edge_count(self) -> int:
|
||||
"""Get total number of edges in the graph."""
|
||||
return self.graph.number_of_edges()
|
||||
|
||||
def get_nodes_by_type(self, node_type: NodeType) -> List[str]:
|
||||
"""Get all nodes of a specific type."""
|
||||
return [n for n, d in self.graph.nodes(data=True) if d.get('type') == node_type.value]
|
||||
|
||||
def get_neighbors(self, node_id: str) -> List[str]:
|
||||
"""Get all unique neighbors (predecessors and successors) for a node."""
|
||||
if not self.graph.has_node(node_id):
|
||||
return []
|
||||
return list(set(self.graph.predecessors(node_id)) | set(self.graph.successors(node_id)))
|
||||
|
||||
def get_high_confidence_edges(self, min_confidence: float = 0.8) -> List[Tuple[str, str, Dict]]:
|
||||
"""Get edges with confidence score above a given threshold."""
|
||||
return [(u, v, d) for u, v, d in self.graph.edges(data=True)
|
||||
if d.get('confidence_score', 0) >= min_confidence]
|
||||
|
||||
def get_graph_data(self) -> Dict[str, Any]:
|
||||
"""Export graph data formatted for frontend visualization."""
|
||||
nodes = []
|
||||
for node_id, attrs in self.graph.nodes(data=True):
|
||||
node_data = {'id': node_id, 'label': node_id, 'type': attrs.get('type', 'unknown'),
|
||||
'attributes': attrs.get('attributes', {}),
|
||||
'description': attrs.get('description', ''),
|
||||
'metadata': attrs.get('metadata', {}),
|
||||
'added_timestamp': attrs.get('added_timestamp')}
|
||||
# Customize node appearance based on type and attributes
|
||||
node_type = node_data['type']
|
||||
attributes = node_data['attributes']
|
||||
if node_type == 'domain' and attributes.get('certificates', {}).get('has_valid_cert') is False:
|
||||
node_data['color'] = {'background': '#c7c7c7', 'border': '#999'} # Gray for invalid cert
|
||||
|
||||
# Add incoming and outgoing edges to node data
|
||||
if self.graph.has_node(node_id):
|
||||
node_data['incoming_edges'] = [{'from': u, 'data': d} for u, _, d in self.graph.in_edges(node_id, data=True)]
|
||||
node_data['outgoing_edges'] = [{'to': v, 'data': d} for _, v, d in self.graph.out_edges(node_id, data=True)]
|
||||
|
||||
nodes.append(node_data)
|
||||
|
||||
edges = []
|
||||
for source, target, attrs in self.graph.edges(data=True):
|
||||
edges.append({'from': source, 'to': target,
|
||||
'label': attrs.get('relationship_type', ''),
|
||||
'confidence_score': attrs.get('confidence_score', 0),
|
||||
'source_provider': attrs.get('source_provider', ''),
|
||||
'discovery_timestamp': attrs.get('discovery_timestamp')})
|
||||
return {
|
||||
'nodes': nodes, 'edges': edges,
|
||||
'statistics': self.get_statistics()['basic_metrics']
|
||||
}
|
||||
|
||||
def export_json(self) -> Dict[str, Any]:
|
||||
"""Export complete graph data as a JSON-serializable dictionary."""
|
||||
graph_data = nx.node_link_data(self.graph) # Use NetworkX's built-in robust serializer
|
||||
return {
|
||||
'export_metadata': {
|
||||
'export_timestamp': datetime.now(timezone.utc).isoformat(),
|
||||
'graph_creation_time': self.creation_time,
|
||||
'last_modified': self.last_modified,
|
||||
'total_nodes': self.get_node_count(),
|
||||
'total_edges': self.get_edge_count(),
|
||||
'graph_format': 'dnsrecon_v1_node_link'
|
||||
},
|
||||
'graph': graph_data,
|
||||
'statistics': self.get_statistics()
|
||||
}
|
||||
|
||||
def _get_confidence_distribution(self) -> Dict[str, int]:
|
||||
"""Get distribution of edge confidence scores."""
|
||||
distribution = {'high': 0, 'medium': 0, 'low': 0}
|
||||
for _, _, confidence in self.graph.edges(data='confidence_score', default=0):
|
||||
if confidence >= 0.8: distribution['high'] += 1
|
||||
elif confidence >= 0.6: distribution['medium'] += 1
|
||||
else: distribution['low'] += 1
|
||||
return distribution
|
||||
|
||||
def get_statistics(self) -> Dict[str, Any]:
|
||||
"""Get comprehensive statistics about the graph."""
|
||||
stats = {'basic_metrics': {'total_nodes': self.get_node_count(),
|
||||
'total_edges': self.get_edge_count(),
|
||||
'creation_time': self.creation_time,
|
||||
'last_modified': self.last_modified},
|
||||
'node_type_distribution': {}, 'relationship_type_distribution': {},
|
||||
'confidence_distribution': self._get_confidence_distribution(),
|
||||
'provider_distribution': {}}
|
||||
# Calculate distributions
|
||||
for node_type in NodeType:
|
||||
stats['node_type_distribution'][node_type.value] = len(self.get_nodes_by_type(node_type))
|
||||
for _, _, rel_type in self.graph.edges(data='relationship_type', default='unknown'):
|
||||
stats['relationship_type_distribution'][rel_type] = stats['relationship_type_distribution'].get(rel_type, 0) + 1
|
||||
for _, _, provider in self.graph.edges(data='source_provider', default='unknown'):
|
||||
stats['provider_distribution'][provider] = stats['provider_distribution'].get(provider, 0) + 1
|
||||
return stats
|
||||
|
||||
def clear(self) -> None:
|
||||
"""Clear all nodes, edges, and indices from the graph."""
|
||||
self.graph.clear()
|
||||
self.correlation_index.clear()
|
||||
self.creation_time = datetime.now(timezone.utc).isoformat()
|
||||
self.last_modified = self.creation_time
|
||||
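For orientation, here is a minimal usage sketch of the graph API above. The keyword names passed to `add_edge` mirror the attribute names stored on edges in this file, and `NodeType.DOMAIN` is the member the scanner uses; treat the exact call signature as an assumption drawn from this file rather than documented API.

```python
# Minimal sketch, assuming the GraphManager/NodeType definitions from core/graph_manager.py above.
from core.graph_manager import GraphManager, NodeType

gm = GraphManager()
gm.add_node("example.com", NodeType.DOMAIN)
gm.add_node("www.example.com", NodeType.DOMAIN)

# Keyword names are assumed from the edge attributes used in this file.
gm.add_edge("example.com", "www.example.com",
            relationship_type="cname",
            confidence_score=0.9,
            source_provider="dns",
            raw_data={"record": "CNAME"})

print(gm.get_node_count(), gm.get_edge_count())        # 2 1
print(gm.get_statistics()["confidence_distribution"])  # {'high': 1, 'medium': 0, 'low': 0}
print(gm.get_graph_data()["edges"][0]["label"])        # 'dns_cname' (provider-prefixed label)
```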
283
core/logger.py
Normal file
283
core/logger.py
Normal file
@@ -0,0 +1,283 @@
|
||||
# dnsrecon/core/logger.py
|
||||
|
||||
import logging
|
||||
import threading
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any, Optional, List
|
||||
from dataclasses import dataclass, asdict
|
||||
from datetime import timezone
|
||||
|
||||
|
||||
@dataclass
|
||||
class APIRequest:
|
||||
"""Structured representation of an API request for forensic logging."""
|
||||
timestamp: str
|
||||
provider: str
|
||||
url: str
|
||||
method: str
|
||||
status_code: Optional[int]
|
||||
response_size: Optional[int]
|
||||
duration_ms: Optional[float]
|
||||
error: Optional[str]
|
||||
target_indicator: str
|
||||
discovery_context: Optional[str]
|
||||
|
||||
|
||||
@dataclass
|
||||
class RelationshipDiscovery:
|
||||
"""Structured representation of a discovered relationship."""
|
||||
timestamp: str
|
||||
source_node: str
|
||||
target_node: str
|
||||
relationship_type: str
|
||||
confidence_score: float
|
||||
provider: str
|
||||
raw_data: Dict[str, Any]
|
||||
discovery_method: str
|
||||
|
||||
|
||||
class ForensicLogger:
|
||||
"""
|
||||
Thread-safe forensic logging system for DNSRecon.
|
||||
Maintains detailed audit trail of all reconnaissance activities.
|
||||
"""
|
||||
|
||||
def __init__(self, session_id: Optional[str] = None):
|
||||
"""
|
||||
Initialize forensic logger.
|
||||
|
||||
Args:
|
||||
session_id: Unique identifier for this reconnaissance session
|
||||
"""
|
||||
self.session_id = session_id or self._generate_session_id()
|
||||
#self.lock = threading.Lock()
|
||||
|
||||
# Initialize audit trail storage
|
||||
self.api_requests: List[APIRequest] = []
|
||||
self.relationships: List[RelationshipDiscovery] = []
|
||||
self.session_metadata = {
|
||||
'session_id': self.session_id,
|
||||
'start_time': datetime.now(timezone.utc).isoformat(),
|
||||
'end_time': None,
|
||||
'total_requests': 0,
|
||||
'total_relationships': 0,
|
||||
'providers_used': set(),
|
||||
'target_domains': set()
|
||||
}
|
||||
|
||||
# Configure standard logger
|
||||
self.logger = logging.getLogger(f'dnsrecon.{self.session_id}')
|
||||
self.logger.setLevel(logging.INFO)
|
||||
|
||||
# Create formatter for structured logging
|
||||
formatter = logging.Formatter(
|
||||
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
|
||||
# Add console handler if not already present
|
||||
if not self.logger.handlers:
|
||||
console_handler = logging.StreamHandler()
|
||||
console_handler.setFormatter(formatter)
|
||||
self.logger.addHandler(console_handler)
|
||||
|
||||
def __getstate__(self):
|
||||
"""Prepare ForensicLogger for pickling by excluding unpicklable objects."""
|
||||
state = self.__dict__.copy()
|
||||
# Remove the unpicklable 'logger' attribute
|
||||
if 'logger' in state:
|
||||
del state['logger']
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Restore ForensicLogger after unpickling by reconstructing logger."""
|
||||
self.__dict__.update(state)
|
||||
# Re-initialize the 'logger' attribute
|
||||
self.logger = logging.getLogger(f'dnsrecon.{self.session_id}')
|
||||
self.logger.setLevel(logging.INFO)
|
||||
formatter = logging.Formatter(
|
||||
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
if not self.logger.handlers:
|
||||
console_handler = logging.StreamHandler()
|
||||
console_handler.setFormatter(formatter)
|
||||
self.logger.addHandler(console_handler)
|
||||
|
||||
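The two methods above exist so a `ForensicLogger` can survive serialization (for example when session state is stored in Redis). A small illustrative sketch of that round trip, using only the class as defined in this file:

```python
# Sketch: the logging.Logger handle is dropped by __getstate__ and rebuilt by __setstate__.
import pickle

log = ForensicLogger(session_id="demo_session")
restored = pickle.loads(pickle.dumps(log))

assert restored.session_id == "demo_session"
assert restored.logger is not None   # reconstructed on unpickling, not copied
```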
def _generate_session_id(self) -> str:
|
||||
"""Generate unique session identifier."""
|
||||
return f"dnsrecon_{datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')}"
|
||||
|
||||
def log_api_request(self, provider: str, url: str, method: str = "GET",
|
||||
status_code: Optional[int] = None,
|
||||
response_size: Optional[int] = None,
|
||||
duration_ms: Optional[float] = None,
|
||||
error: Optional[str] = None,
|
||||
target_indicator: str = "",
|
||||
discovery_context: Optional[str] = None) -> None:
|
||||
"""
|
||||
Log an API request for forensic audit trail.
|
||||
|
||||
Args:
|
||||
provider: Name of the data provider
|
||||
url: Request URL
|
||||
method: HTTP method
|
||||
status_code: HTTP response status code
|
||||
response_size: Size of response in bytes
|
||||
duration_ms: Request duration in milliseconds
|
||||
error: Error message if request failed
|
||||
target_indicator: The indicator being investigated
|
||||
discovery_context: Context of how this indicator was discovered
|
||||
"""
|
||||
api_request = APIRequest(
|
||||
timestamp=datetime.now(timezone.utc).isoformat(),
|
||||
provider=provider,
|
||||
url=url,
|
||||
method=method,
|
||||
status_code=status_code,
|
||||
response_size=response_size,
|
||||
duration_ms=duration_ms,
|
||||
error=error,
|
||||
target_indicator=target_indicator,
|
||||
discovery_context=discovery_context
|
||||
)
|
||||
|
||||
self.api_requests.append(api_request)
|
||||
self.session_metadata['total_requests'] += 1
|
||||
self.session_metadata['providers_used'].add(provider)
|
||||
|
||||
if target_indicator:
|
||||
self.session_metadata['target_domains'].add(target_indicator)
|
||||
|
||||
# Log to standard logger
|
||||
if error:
|
||||
self.logger.error(f"API Request Failed - {provider}: {url} - {error}")
|
||||
else:
|
||||
self.logger.info(f"API Request - {provider}: {url} - Status: {status_code}")
|
||||
|
||||
def log_relationship_discovery(self, source_node: str, target_node: str,
|
||||
relationship_type: str, confidence_score: float,
|
||||
provider: str, raw_data: Dict[str, Any],
|
||||
discovery_method: str) -> None:
|
||||
"""
|
||||
Log discovery of a new relationship between indicators.
|
||||
|
||||
Args:
|
||||
source_node: Source node identifier
|
||||
target_node: Target node identifier
|
||||
relationship_type: Type of relationship (e.g., 'SAN', 'A_Record')
|
||||
confidence_score: Confidence score (0.0 to 1.0)
|
||||
provider: Provider that discovered this relationship
|
||||
raw_data: Raw data from provider response
|
||||
discovery_method: Method used to discover relationship
|
||||
"""
|
||||
relationship = RelationshipDiscovery(
|
||||
timestamp=datetime.now(timezone.utc).isoformat(),
|
||||
source_node=source_node,
|
||||
target_node=target_node,
|
||||
relationship_type=relationship_type,
|
||||
confidence_score=confidence_score,
|
||||
provider=provider,
|
||||
raw_data=raw_data,
|
||||
discovery_method=discovery_method
|
||||
)
|
||||
|
||||
self.relationships.append(relationship)
|
||||
self.session_metadata['total_relationships'] += 1
|
||||
|
||||
self.logger.info(
|
||||
f"Relationship Discovered - {source_node} -> {target_node} "
|
||||
f"({relationship_type}) - Confidence: {confidence_score:.2f} - Provider: {provider}"
|
||||
)
|
||||
|
||||
def log_scan_start(self, target_domain: str, recursion_depth: int,
|
||||
enabled_providers: List[str]) -> None:
|
||||
"""Log the start of a reconnaissance scan."""
|
||||
self.logger.info(f"Scan Started - Target: {target_domain}, Depth: {recursion_depth}")
|
||||
self.logger.info(f"Enabled Providers: {', '.join(enabled_providers)}")
|
||||
|
||||
self.session_metadata['target_domains'].add(target_domain)
|
||||
|
||||
def log_scan_complete(self) -> None:
|
||||
"""Log the completion of a reconnaissance scan."""
|
||||
self.session_metadata['end_time'] = datetime.now(timezone.utc).isoformat()
|
||||
self.session_metadata['providers_used'] = list(self.session_metadata['providers_used'])
|
||||
self.session_metadata['target_domains'] = list(self.session_metadata['target_domains'])
|
||||
|
||||
self.logger.info(f"Scan Complete - Session: {self.session_id}")
|
||||
self.logger.info(f"Total API Requests: {self.session_metadata['total_requests']}")
|
||||
self.logger.info(f"Total Relationships: {self.session_metadata['total_relationships']}")
|
||||
|
||||
def export_audit_trail(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Export complete audit trail for forensic analysis.
|
||||
|
||||
Returns:
|
||||
Dictionary containing complete session audit trail
|
||||
"""
|
||||
return {
|
||||
'session_metadata': self.session_metadata.copy(),
|
||||
'api_requests': [asdict(req) for req in self.api_requests],
|
||||
'relationships': [asdict(rel) for rel in self.relationships],
|
||||
'export_timestamp': datetime.now(timezone.utc).isoformat()
|
||||
}
|
||||
|
||||
def get_forensic_summary(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get summary statistics for forensic reporting.
|
||||
|
||||
Returns:
|
||||
Dictionary containing summary statistics
|
||||
"""
|
||||
provider_stats = {}
|
||||
for provider in self.session_metadata['providers_used']:
|
||||
provider_requests = [req for req in self.api_requests if req.provider == provider]
|
||||
provider_relationships = [rel for rel in self.relationships if rel.provider == provider]
|
||||
|
||||
provider_stats[provider] = {
|
||||
'total_requests': len(provider_requests),
|
||||
'successful_requests': len([req for req in provider_requests if req.error is None]),
|
||||
'failed_requests': len([req for req in provider_requests if req.error is not None]),
|
||||
'relationships_discovered': len(provider_relationships),
|
||||
'avg_confidence': sum(rel.confidence_score for rel in provider_relationships) / len(provider_relationships) if provider_relationships else 0
|
||||
}
|
||||
|
||||
return {
|
||||
'session_id': self.session_id,
|
||||
'duration_minutes': self._calculate_session_duration(),
|
||||
'total_requests': self.session_metadata['total_requests'],
|
||||
'total_relationships': self.session_metadata['total_relationships'],
|
||||
'unique_indicators': len(set([rel.source_node for rel in self.relationships] + [rel.target_node for rel in self.relationships])),
|
||||
'provider_statistics': provider_stats
|
||||
}
|
||||
|
||||
def _calculate_session_duration(self) -> float:
|
||||
"""Calculate session duration in minutes."""
|
||||
if not self.session_metadata['end_time']:
|
||||
end_time = datetime.now(timezone.utc)
|
||||
else:
|
||||
end_time = datetime.fromisoformat(self.session_metadata['end_time'])
|
||||
|
||||
start_time = datetime.fromisoformat(self.session_metadata['start_time'])
|
||||
duration = (end_time - start_time).total_seconds() / 60
|
||||
return round(duration, 2)
|
||||
|
||||
|
||||
# Global logger instance for the current session
|
||||
_current_logger: Optional[ForensicLogger] = None
|
||||
_logger_lock = threading.Lock()
|
||||
|
||||
|
||||
def get_forensic_logger() -> ForensicLogger:
|
||||
"""Get or create the current forensic logger instance."""
|
||||
global _current_logger
|
||||
with _logger_lock:
|
||||
if _current_logger is None:
|
||||
_current_logger = ForensicLogger()
|
||||
return _current_logger
|
||||
|
||||
|
||||
def new_session() -> ForensicLogger:
|
||||
"""Start a new forensic logging session."""
|
||||
global _current_logger
|
||||
with _logger_lock:
|
||||
_current_logger = ForensicLogger()
|
||||
return _current_logger
|
||||
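To make the module-level helpers concrete, a short hedged usage sketch (all names are defined in this file; the URL, provider strings, and timings are placeholders):

```python
from core.logger import new_session

logger = new_session()   # fresh ForensicLogger for this run
logger.log_scan_start("example.com", recursion_depth=2, enabled_providers=["dns", "crtsh"])
logger.log_api_request(provider="crtsh",
                       url="https://crt.sh/?q=example.com&output=json",
                       status_code=200, duration_ms=412.0,
                       target_indicator="example.com")
logger.log_relationship_discovery(source_node="example.com", target_node="www.example.com",
                                  relationship_type="san", confidence_score=0.9,
                                  provider="crtsh", raw_data={}, discovery_method="certificate_san")
logger.log_scan_complete()

audit = logger.export_audit_trail()   # JSON-serializable once log_scan_complete() has converted the sets
print(logger.get_forensic_summary()["total_relationships"])   # 1
```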
742
core/scanner.py
Normal file
742
core/scanner.py
Normal file
@@ -0,0 +1,742 @@
|
||||
# dnsrecon/core/scanner.py
|
||||
|
||||
import threading
|
||||
import traceback
|
||||
import time
|
||||
import os
|
||||
import importlib
|
||||
from typing import List, Set, Dict, Any, Tuple
|
||||
from concurrent.futures import ThreadPoolExecutor, as_completed, CancelledError, Future
|
||||
from collections import defaultdict, deque
|
||||
from datetime import datetime, timezone
|
||||
|
||||
from core.graph_manager import GraphManager, NodeType
|
||||
from core.logger import get_forensic_logger, new_session
|
||||
from core.task_manager import TaskManager, TaskType, ReconTask
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain
|
||||
from providers.base_provider import BaseProvider
|
||||
|
||||
|
||||
class ScanStatus:
|
||||
"""Enumeration of scan statuses."""
|
||||
IDLE = "idle"
|
||||
RUNNING = "running"
|
||||
COMPLETED = "completed"
|
||||
FAILED = "failed"
|
||||
STOPPED = "stopped"
|
||||
|
||||
|
||||
class Scanner:
|
||||
"""
|
||||
Enhanced scanning orchestrator for DNSRecon passive reconnaissance.
|
||||
Now uses task-based completion model with comprehensive retry logic.
|
||||
"""
|
||||
|
||||
def __init__(self, session_config=None):
|
||||
"""Initialize scanner with session-specific configuration and task management."""
|
||||
print("Initializing Enhanced Scanner instance...")
|
||||
|
||||
try:
|
||||
# Use provided session config or create default
|
||||
if session_config is None:
|
||||
from core.session_config import create_session_config
|
||||
session_config = create_session_config()
|
||||
|
||||
self.config = session_config
|
||||
self.graph = GraphManager()
|
||||
self.providers = []
|
||||
self.status = ScanStatus.IDLE
|
||||
self.current_target = None
|
||||
self.current_depth = 0
|
||||
self.max_depth = 2
|
||||
self.stop_event = threading.Event()
|
||||
self.scan_thread = None
|
||||
self.session_id = None # Will be set by session manager
|
||||
self.current_scan_id = None # Track current scan ID
|
||||
|
||||
# Task-based execution components
|
||||
self.task_manager = None # Will be initialized when needed
|
||||
self.max_workers = self.config.max_concurrent_requests
|
||||
|
||||
# Enhanced progress tracking
|
||||
self.total_indicators_found = 0
|
||||
self.indicators_processed = 0
|
||||
self.current_indicator = ""
|
||||
self.scan_start_time = None
|
||||
self.scan_end_time = None
|
||||
|
||||
# Initialize providers with session config
|
||||
print("Calling _initialize_providers with session config...")
|
||||
self._initialize_providers()
|
||||
|
||||
# Initialize logger
|
||||
print("Initializing forensic logger...")
|
||||
self.logger = get_forensic_logger()
|
||||
|
||||
print("Enhanced Scanner initialization complete")
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Enhanced Scanner initialization failed: {e}")
|
||||
traceback.print_exc()
|
||||
raise
|
||||
|
||||
def __getstate__(self):
|
||||
"""Prepare object for pickling by excluding unpicklable attributes."""
|
||||
state = self.__dict__.copy()
|
||||
|
||||
# Remove unpicklable threading objects
|
||||
unpicklable_attrs = [
|
||||
'stop_event',
|
||||
'scan_thread',
|
||||
'task_manager'
|
||||
]
|
||||
|
||||
for attr in unpicklable_attrs:
|
||||
if attr in state:
|
||||
del state[attr]
|
||||
|
||||
# Handle providers separately to ensure they're picklable
|
||||
if 'providers' in state:
|
||||
# The providers should be picklable now, but let's ensure clean state
|
||||
for provider in state['providers']:
|
||||
if hasattr(provider, '_stop_event'):
|
||||
provider._stop_event = None
|
||||
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Restore object after unpickling by reconstructing threading objects."""
|
||||
self.__dict__.update(state)
|
||||
|
||||
# Reconstruct threading objects
|
||||
self.stop_event = threading.Event()
|
||||
self.scan_thread = None
|
||||
self.task_manager = None
|
||||
|
||||
# Re-set stop events for providers
|
||||
if hasattr(self, 'providers'):
|
||||
for provider in self.providers:
|
||||
if hasattr(provider, 'set_stop_event'):
|
||||
provider.set_stop_event(self.stop_event)
|
||||
|
||||
def _is_stop_requested(self) -> bool:
|
||||
"""
|
||||
Enhanced stop signal checking that handles both local and Redis-based signals.
|
||||
"""
|
||||
# Check local threading event first (fastest)
|
||||
if self.stop_event.is_set():
|
||||
return True
|
||||
|
||||
# Check Redis-based stop signal if session ID is available
|
||||
if self.session_id:
|
||||
try:
|
||||
from core.session_manager import session_manager
|
||||
return session_manager.is_stop_requested(self.session_id)
|
||||
except Exception as e:
|
||||
print(f"Error checking Redis stop signal: {e}")
|
||||
# Fall back to local event
|
||||
return self.stop_event.is_set()
|
||||
|
||||
return False
|
||||
|
||||
def _set_stop_signal(self) -> None:
|
||||
"""
|
||||
Set stop signal both locally and in Redis.
|
||||
"""
|
||||
# Set local event
|
||||
self.stop_event.set()
|
||||
|
||||
# Set Redis signal if session ID is available
|
||||
if self.session_id:
|
||||
try:
|
||||
from core.session_manager import session_manager
|
||||
session_manager.set_stop_signal(self.session_id)
|
||||
except Exception as e:
|
||||
print(f"Error setting Redis stop signal: {e}")
|
||||
|
||||
def _initialize_providers(self) -> None:
|
||||
"""Initialize all available providers based on session configuration."""
|
||||
self.providers = []
|
||||
print("Initializing providers with session config...")
|
||||
|
||||
provider_dir = os.path.join(os.path.dirname(__file__), '..', 'providers')
|
||||
print(f"Looking for providers in: {provider_dir}")
|
||||
|
||||
if not os.path.exists(provider_dir):
|
||||
print(f"ERROR: Provider directory does not exist: {provider_dir}")
|
||||
return
|
||||
|
||||
provider_files = [f for f in os.listdir(provider_dir) if f.endswith('_provider.py') and not f.startswith('base')]
|
||||
print(f"Found provider files: {provider_files}")
|
||||
|
||||
for filename in provider_files:
|
||||
module_name = f"providers.{filename[:-3]}"
|
||||
print(f"Attempting to load module: {module_name}")
|
||||
|
||||
try:
|
||||
module = importlib.import_module(module_name)
|
||||
print(f" ✓ Module {module_name} loaded successfully")
|
||||
|
||||
# Find provider classes in the module
|
||||
provider_classes_found = []
|
||||
for attribute_name in dir(module):
|
||||
attribute = getattr(module, attribute_name)
|
||||
if isinstance(attribute, type) and issubclass(attribute, BaseProvider) and attribute is not BaseProvider:
|
||||
provider_classes_found.append((attribute_name, attribute))
|
||||
|
||||
print(f" Found provider classes: {[name for name, _ in provider_classes_found]}")
|
||||
|
||||
for class_name, provider_class in provider_classes_found:
|
||||
try:
|
||||
# Create temporary instance to get provider name
|
||||
temp_provider = provider_class(session_config=self.config)
|
||||
provider_name = temp_provider.get_name()
|
||||
print(f" Provider {class_name} -> name: {provider_name}")
|
||||
|
||||
# Check if enabled in config
|
||||
is_enabled = self.config.is_provider_enabled(provider_name)
|
||||
print(f" Provider {provider_name} enabled: {is_enabled}")
|
||||
|
||||
if is_enabled:
|
||||
# Check if available (has API keys, etc.)
|
||||
is_available = temp_provider.is_available()
|
||||
print(f" Provider {provider_name} available: {is_available}")
|
||||
|
||||
if is_available:
|
||||
# Set stop event and add to providers list
|
||||
temp_provider.set_stop_event(self.stop_event)
|
||||
self.providers.append(temp_provider)
|
||||
print(f" ✓ {temp_provider.get_display_name()} provider initialized successfully")
|
||||
else:
|
||||
print(f" - {temp_provider.get_display_name()} provider is not available (missing API key or other requirement)")
|
||||
else:
|
||||
print(f" - {temp_provider.get_display_name()} provider is disabled in config")
|
||||
|
||||
except Exception as e:
|
||||
print(f" ✗ Failed to initialize provider class {class_name}: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
except Exception as e:
|
||||
print(f" ✗ Failed to load module {module_name}: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
print(f"Total providers initialized: {len(self.providers)}")
|
||||
for provider in self.providers:
|
||||
print(f" - {provider.get_display_name()} ({provider.get_name()})")
|
||||
|
||||
if len(self.providers) == 0:
|
||||
print("WARNING: No providers were initialized!")
|
||||
elif len(self.providers) == 1 and self.providers[0].get_name() == 'dns':
|
||||
print("WARNING: Only DNS provider initialized - other providers may have failed to load")
|
||||
|
||||
def start_scan(self, target_domain: str, max_depth: int = 2, clear_graph: bool = True) -> bool:
|
||||
"""Start a new reconnaissance scan with task-based completion model."""
|
||||
print(f"=== STARTING ENHANCED SCAN IN SCANNER {id(self)} ===")
|
||||
print(f"Session ID: {self.session_id}")
|
||||
print(f"Initial scanner status: {self.status}")
|
||||
print(f"Clear graph: {clear_graph}")
|
||||
|
||||
# Generate scan ID based on clear_graph behavior
|
||||
import uuid
|
||||
|
||||
if clear_graph:
|
||||
# NEW SCAN: Generate new ID and terminate existing scan
|
||||
print("NEW SCAN: Generating new scan ID and terminating existing scan")
|
||||
self.current_scan_id = str(uuid.uuid4())[:8]
|
||||
|
||||
# Aggressive cleanup of previous scan
|
||||
if self.scan_thread and self.scan_thread.is_alive():
|
||||
print("Terminating previous scan thread...")
|
||||
self._set_stop_signal()
|
||||
|
||||
if self.task_manager:
|
||||
self.task_manager.stop_execution()
|
||||
|
||||
self.scan_thread.join(timeout=8.0)
|
||||
if self.scan_thread.is_alive():
|
||||
print("WARNING: Previous scan thread did not terminate cleanly")
|
||||
|
||||
else:
|
||||
# ADD TO GRAPH: Keep existing scan ID if scan is running, or generate new one
|
||||
if self.status == ScanStatus.RUNNING and self.current_scan_id:
|
||||
print(f"ADD TO GRAPH: Keeping existing scan ID {self.current_scan_id}")
|
||||
# Don't terminate existing scan - we're adding to it
|
||||
else:
|
||||
print("ADD TO GRAPH: No active scan, generating new scan ID")
|
||||
self.current_scan_id = str(uuid.uuid4())[:8]
|
||||
|
||||
print(f"Using scan ID: {self.current_scan_id}")
|
||||
|
||||
# Reset state for new scan (but preserve graph if clear_graph=False)
|
||||
if clear_graph or self.status != ScanStatus.RUNNING:
|
||||
self.status = ScanStatus.IDLE
|
||||
self._update_session_state()
|
||||
|
||||
try:
|
||||
if not hasattr(self, 'providers') or not self.providers:
|
||||
print(f"ERROR: No providers available in scanner {id(self)}, cannot start scan")
|
||||
return False
|
||||
|
||||
print(f"Scanner {id(self)} validation passed, providers available: {[p.get_name() for p in self.providers]}")
|
||||
|
||||
if clear_graph:
|
||||
self.graph.clear()
|
||||
|
||||
self.current_target = target_domain.lower().strip()
|
||||
self.max_depth = max_depth
|
||||
self.current_depth = 0
|
||||
|
||||
# Clear stop signals only if starting new scan
|
||||
if clear_graph or self.status != ScanStatus.RUNNING:
|
||||
self.stop_event.clear()
|
||||
if self.session_id:
|
||||
from core.session_manager import session_manager
|
||||
session_manager.clear_stop_signal(self.session_id)
|
||||
|
||||
self.total_indicators_found = 0
|
||||
self.indicators_processed = 0
|
||||
self.current_indicator = self.current_target
|
||||
self.scan_start_time = datetime.now(timezone.utc)
|
||||
self.scan_end_time = None
|
||||
|
||||
self._update_session_state()
|
||||
|
||||
# Initialize forensic session only for new scans
|
||||
if clear_graph:
|
||||
self.logger = new_session()
|
||||
|
||||
# Start task-based scan thread
|
||||
print(f"Starting task-based scan thread with scan ID {self.current_scan_id}...")
|
||||
self.scan_thread = threading.Thread(
|
||||
target=self._execute_task_based_scan,
|
||||
args=(self.current_target, max_depth, self.current_scan_id),
|
||||
daemon=True
|
||||
)
|
||||
self.scan_thread.start()
|
||||
|
||||
print(f"=== ENHANCED SCAN STARTED SUCCESSFULLY IN SCANNER {id(self)} ===")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in start_scan for scanner {id(self)}: {e}")
|
||||
traceback.print_exc()
|
||||
self.status = ScanStatus.FAILED
|
||||
self.scan_end_time = datetime.now(timezone.utc)
|
||||
self._update_session_state()
|
||||
return False
|
||||
|
||||
def _execute_task_based_scan(self, target_domain: str, max_depth: int, scan_id: str) -> None:
|
||||
"""Execute the reconnaissance scan using the task-based completion model."""
|
||||
print(f"_execute_task_based_scan started for {target_domain} with depth {max_depth}, scan ID {scan_id}")
|
||||
|
||||
try:
|
||||
self.status = ScanStatus.RUNNING
|
||||
self._update_session_state()
|
||||
|
||||
enabled_providers = [provider.get_name() for provider in self.providers]
|
||||
self.logger.log_scan_start(target_domain, max_depth, enabled_providers)
|
||||
|
||||
# Initialize task manager
|
||||
self.task_manager = TaskManager(
|
||||
self.providers,
|
||||
self.graph,
|
||||
self.logger,
|
||||
max_concurrent_tasks=self.max_workers
|
||||
)
|
||||
|
||||
# Add initial target to graph
|
||||
self.graph.add_node(target_domain, NodeType.DOMAIN)
|
||||
|
||||
# Start task execution
|
||||
self.task_manager.start_execution(max_workers=self.max_workers)
|
||||
|
||||
# Track processed targets to avoid duplicates
|
||||
processed_targets = set()
|
||||
|
||||
# Task queue for breadth-first processing
|
||||
target_queue = deque([(target_domain, 0)]) # (target, depth)
|
||||
|
||||
while target_queue:
|
||||
# Abort if scan ID changed (new scan started)
|
||||
if self.current_scan_id != scan_id:
|
||||
print(f"Scan aborted - ID mismatch (current: {self.current_scan_id}, expected: {scan_id})")
|
||||
break
|
||||
|
||||
if self._is_stop_requested():
|
||||
print("Stop requested, terminating task-based scan.")
|
||||
break
|
||||
|
||||
target, depth = target_queue.popleft()
|
||||
|
||||
if target in processed_targets or depth > max_depth:
|
||||
continue
|
||||
|
||||
self.current_depth = depth
|
||||
self.current_indicator = target
|
||||
self._update_session_state()
|
||||
|
||||
print(f"Processing target: {target} at depth {depth}")
|
||||
|
||||
# Create tasks for all eligible providers
|
||||
task_ids = self.task_manager.create_provider_tasks(target, depth, self.providers)
|
||||
|
||||
if task_ids:
|
||||
print(f"Created {len(task_ids)} tasks for target {target}")
|
||||
self.total_indicators_found += len(task_ids)
|
||||
self._update_session_state()
|
||||
|
||||
processed_targets.add(target)
|
||||
|
||||
# Wait for current batch of tasks to complete before processing next depth
|
||||
# This ensures we get all relationships before expanding further
|
||||
timeout_per_batch = 60 # 60 seconds per batch
|
||||
batch_start = time.time()
|
||||
|
||||
while time.time() - batch_start < timeout_per_batch:
|
||||
if self._is_stop_requested() or self.current_scan_id != scan_id:
|
||||
break
|
||||
|
||||
progress_report = self.task_manager.get_progress_report()
|
||||
stats = progress_report['statistics']
|
||||
|
||||
# Update progress tracking
|
||||
self.indicators_processed = stats['succeeded'] + stats['failed_permanent']
|
||||
self._update_session_state()
|
||||
|
||||
# Check if current batch is complete
|
||||
current_batch_complete = (
|
||||
stats['pending'] == 0 and
|
||||
stats['running'] == 0 and
|
||||
stats['failed_retrying'] == 0
|
||||
)
|
||||
|
||||
if current_batch_complete:
|
||||
print(f"Batch complete for {target}: {stats['succeeded']} succeeded, {stats['failed_permanent']} failed")
|
||||
break
|
||||
|
||||
time.sleep(1.0) # Check every second
|
||||
|
||||
# Collect new targets from completed successful tasks
|
||||
if depth < max_depth:
|
||||
new_targets = self._collect_new_targets_from_completed_tasks()
|
||||
for new_target in new_targets:
|
||||
if new_target not in processed_targets:
|
||||
target_queue.append((new_target, depth + 1))
|
||||
print(f"Added new target for next depth: {new_target}")
|
||||
|
||||
# Wait for all remaining tasks to complete
|
||||
print("Waiting for all tasks to complete...")
|
||||
final_completion = self.task_manager.wait_for_completion(timeout_seconds=300)
|
||||
|
||||
if not final_completion:
|
||||
print("WARNING: Some tasks did not complete within timeout")
|
||||
|
||||
# Final progress update
|
||||
final_report = self.task_manager.get_progress_report()
|
||||
final_stats = final_report['statistics']
|
||||
|
||||
print(f"Final task statistics:")
|
||||
print(f" - Total tasks: {final_stats['total_tasks']}")
|
||||
print(f" - Succeeded: {final_stats['succeeded']}")
|
||||
print(f" - Failed permanently: {final_stats['failed_permanent']}")
|
||||
print(f" - Completion rate: {final_stats['completion_rate']:.1f}%")
|
||||
|
||||
# Determine final scan status
|
||||
if self.current_scan_id == scan_id:
|
||||
if self._is_stop_requested():
|
||||
self.status = ScanStatus.STOPPED
|
||||
elif final_stats['failed_permanent'] > 0 and final_stats['succeeded'] == 0:
|
||||
self.status = ScanStatus.FAILED
|
||||
elif final_stats['completion_rate'] < 50.0: # Less than 50% success rate
|
||||
self.status = ScanStatus.FAILED
|
||||
else:
|
||||
self.status = ScanStatus.COMPLETED
|
||||
|
||||
self.scan_end_time = datetime.now(timezone.utc)
|
||||
self._update_session_state()
|
||||
self.logger.log_scan_complete()
|
||||
else:
|
||||
print(f"Scan completed but ID mismatch - not updating final status")
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Task-based scan execution failed: {e}")
|
||||
traceback.print_exc()
|
||||
self.status = ScanStatus.FAILED
|
||||
self.scan_end_time = datetime.now(timezone.utc)
|
||||
self.logger.logger.error(f"Task-based scan failed: {e}")
|
||||
finally:
|
||||
# Clean up task manager
|
||||
if self.task_manager:
|
||||
self.task_manager.stop_execution()
|
||||
|
||||
# Final statistics
|
||||
graph_stats = self.graph.get_statistics()
|
||||
print("Final scan statistics:")
|
||||
print(f" - Total nodes: {graph_stats['basic_metrics']['total_nodes']}")
|
||||
print(f" - Total edges: {graph_stats['basic_metrics']['total_edges']}")
|
||||
print(f" - Targets processed: {len(processed_targets)}")
|
||||
|
||||
def _collect_new_targets_from_completed_tasks(self) -> Set[str]:
|
||||
"""Collect new targets from successfully completed tasks."""
|
||||
new_targets = set()
|
||||
|
||||
if not self.task_manager:
|
||||
return new_targets
|
||||
|
||||
# Get task summaries to find successful tasks
|
||||
task_summaries = self.task_manager.task_queue.get_task_summaries()
|
||||
|
||||
for task_summary in task_summaries:
|
||||
if task_summary['status'] == 'succeeded':
|
||||
task_id = task_summary['task_id']
|
||||
task = self.task_manager.task_queue.tasks.get(task_id)
|
||||
|
||||
if task and task.result and task.result.data:
|
||||
task_new_targets = task.result.data.get('new_targets', [])
|
||||
for target in task_new_targets:
|
||||
if _is_valid_domain(target) or _is_valid_ip(target):
|
||||
new_targets.add(target)
|
||||
|
||||
return new_targets
|
||||
|
||||
def _update_session_state(self) -> None:
|
||||
"""
|
||||
Update the scanner state in Redis for GUI updates.
|
||||
"""
|
||||
if self.session_id:
|
||||
try:
|
||||
from core.session_manager import session_manager
|
||||
success = session_manager.update_session_scanner(self.session_id, self)
|
||||
if not success:
|
||||
print(f"WARNING: Failed to update session state for {self.session_id}")
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to update session state: {e}")
|
||||
|
||||
def stop_scan(self) -> bool:
|
||||
"""Request immediate scan termination with task manager cleanup."""
|
||||
try:
|
||||
print("=== INITIATING ENHANCED SCAN TERMINATION ===")
|
||||
self.logger.logger.info("Enhanced scan termination requested by user")
|
||||
|
||||
# Invalidate current scan ID to prevent stale updates
|
||||
old_scan_id = self.current_scan_id
|
||||
self.current_scan_id = None
|
||||
print(f"Invalidated scan ID {old_scan_id}")
|
||||
|
||||
# Set stop signals
|
||||
self._set_stop_signal()
|
||||
self.status = ScanStatus.STOPPED
|
||||
self.scan_end_time = datetime.now(timezone.utc)
|
||||
|
||||
# Immediately update GUI with stopped status
|
||||
self._update_session_state()
|
||||
|
||||
# Stop task manager if running
|
||||
if self.task_manager:
|
||||
print("Stopping task manager...")
|
||||
self.task_manager.stop_execution()
|
||||
|
||||
print("Enhanced termination signals sent. The scan will stop as soon as possible.")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in enhanced stop_scan: {e}")
|
||||
self.logger.logger.error(f"Error during enhanced scan termination: {e}")
|
||||
traceback.print_exc()
|
||||
return False
|
||||
|
||||
def get_scan_status(self) -> Dict[str, Any]:
|
||||
"""Get current scan status with enhanced task-based information."""
|
||||
try:
|
||||
status = {
|
||||
'status': self.status,
|
||||
'target_domain': self.current_target,
|
||||
'current_depth': self.current_depth,
|
||||
'max_depth': self.max_depth,
|
||||
'current_indicator': self.current_indicator,
|
||||
'total_indicators_found': self.total_indicators_found,
|
||||
'indicators_processed': self.indicators_processed,
|
||||
'progress_percentage': self._calculate_progress(),
|
||||
'enabled_providers': [provider.get_name() for provider in self.providers],
|
||||
'graph_statistics': self.graph.get_statistics(),
|
||||
'scan_duration_seconds': self._calculate_scan_duration(),
|
||||
'scan_start_time': self.scan_start_time.isoformat() if self.scan_start_time else None,
|
||||
'scan_end_time': self.scan_end_time.isoformat() if self.scan_end_time else None
|
||||
}
|
||||
|
||||
# Add task manager statistics if available
|
||||
if self.task_manager:
|
||||
progress_report = self.task_manager.get_progress_report()
|
||||
status['task_statistics'] = progress_report['statistics']
|
||||
status['task_details'] = {
|
||||
'is_running': progress_report['is_running'],
|
||||
'worker_count': progress_report['worker_count'],
|
||||
'failed_tasks_count': len(progress_report['failed_tasks'])
|
||||
}
|
||||
|
||||
# Update indicators processed from task statistics
|
||||
task_stats = progress_report['statistics']
|
||||
status['indicators_processed'] = task_stats['succeeded'] + task_stats['failed_permanent']
|
||||
|
||||
# Recalculate progress based on task completion
|
||||
if task_stats['total_tasks'] > 0:
|
||||
task_completion_rate = (task_stats['succeeded'] + task_stats['failed_permanent']) / task_stats['total_tasks']
|
||||
status['progress_percentage'] = min(100.0, task_completion_rate * 100.0)
|
||||
|
||||
return status
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Exception in get_scan_status: {e}")
|
||||
traceback.print_exc()
|
||||
return {
|
||||
'status': 'error',
|
||||
'target_domain': None,
|
||||
'current_depth': 0,
|
||||
'max_depth': 0,
|
||||
'current_indicator': '',
|
||||
'total_indicators_found': 0,
|
||||
'indicators_processed': 0,
|
||||
'progress_percentage': 0.0,
|
||||
'enabled_providers': [],
|
||||
'graph_statistics': {},
|
||||
'scan_duration_seconds': 0,
|
||||
'error': str(e)
|
||||
}
|
||||
|
||||
def _calculate_progress(self) -> float:
|
||||
"""Calculate scan progress percentage."""
|
||||
if self.total_indicators_found == 0:
|
||||
return 0.0
|
||||
return min(100.0, (self.indicators_processed / self.total_indicators_found) * 100)
|
||||
|
||||
def _calculate_scan_duration(self) -> float:
|
||||
"""Calculate scan duration in seconds."""
|
||||
if not self.scan_start_time:
|
||||
return 0.0
|
||||
|
||||
end_time = self.scan_end_time or datetime.now(timezone.utc)
|
||||
duration = (end_time - self.scan_start_time).total_seconds()
|
||||
return round(duration, 2)
|
||||
|
||||
def get_graph_data(self) -> Dict[str, Any]:
|
||||
"""Get current graph data for visualization."""
|
||||
return self.graph.get_graph_data()
|
||||
|
||||
def export_results(self) -> Dict[str, Any]:
|
||||
"""Export complete scan results with enhanced task-based audit trail."""
|
||||
graph_data = self.graph.export_json()
|
||||
audit_trail = self.logger.export_audit_trail()
|
||||
provider_stats = {}
|
||||
for provider in self.providers:
|
||||
provider_stats[provider.get_name()] = provider.get_statistics()
|
||||
|
||||
export_data = {
|
||||
'scan_metadata': {
|
||||
'target_domain': self.current_target,
|
||||
'max_depth': self.max_depth,
|
||||
'final_status': self.status,
|
||||
'total_indicators_processed': self.indicators_processed,
|
||||
'enabled_providers': list(provider_stats.keys()),
|
||||
'session_id': self.session_id,
|
||||
'scan_id': self.current_scan_id,
|
||||
'scan_duration_seconds': self._calculate_scan_duration(),
|
||||
'scan_start_time': self.scan_start_time.isoformat() if self.scan_start_time else None,
|
||||
'scan_end_time': self.scan_end_time.isoformat() if self.scan_end_time else None
|
||||
},
|
||||
'graph_data': graph_data,
|
||||
'forensic_audit': audit_trail,
|
||||
'provider_statistics': provider_stats,
|
||||
'scan_summary': self.logger.get_forensic_summary()
|
||||
}
|
||||
|
||||
# Add task execution details if available
|
||||
if self.task_manager:
|
||||
progress_report = self.task_manager.get_progress_report()
|
||||
export_data['task_execution'] = {
|
||||
'statistics': progress_report['statistics'],
|
||||
'failed_tasks': progress_report['failed_tasks'],
|
||||
'execution_summary': {
|
||||
'total_tasks_created': progress_report['statistics']['total_tasks'],
|
||||
'success_rate': progress_report['statistics']['completion_rate'],
|
||||
'average_retries': self._calculate_average_retries(progress_report)
|
||||
}
|
||||
}
|
||||
|
||||
return export_data
|
||||
|
||||
def _calculate_average_retries(self, progress_report: Dict[str, Any]) -> float:
|
||||
"""Calculate average retry attempts across all tasks."""
|
||||
if not self.task_manager or not hasattr(self.task_manager.task_queue, 'tasks'):
|
||||
return 0.0
|
||||
|
||||
total_attempts = 0
|
||||
task_count = 0
|
||||
|
||||
for task in self.task_manager.task_queue.tasks.values():
|
||||
if hasattr(task, 'execution_history'):
|
||||
total_attempts += len(task.execution_history)
|
||||
task_count += 1
|
||||
|
||||
return round(total_attempts / task_count, 2) if task_count > 0 else 0.0
|
||||
|
||||
def get_provider_statistics(self) -> Dict[str, Dict[str, Any]]:
|
||||
"""Get statistics for all providers with enhanced cache information."""
|
||||
stats = {}
|
||||
for provider in self.providers:
|
||||
provider_stats = provider.get_statistics()
|
||||
# Add cache performance metrics
|
||||
if hasattr(provider, 'cache'):
|
||||
cache_performance = {
|
||||
'cache_enabled': True,
|
||||
'cache_directory': provider.cache.cache_dir,
|
||||
'cache_expiry_hours': provider.cache.cache_expiry / 3600
|
||||
}
|
||||
provider_stats.update(cache_performance)
|
||||
stats[provider.get_name()] = provider_stats
|
||||
return stats
|
||||
|
||||
def get_provider_info(self) -> Dict[str, Dict[str, Any]]:
|
||||
"""Get information about all available providers with enhanced details."""
|
||||
info = {}
|
||||
provider_dir = os.path.join(os.path.dirname(__file__), '..', 'providers')
|
||||
for filename in os.listdir(provider_dir):
|
||||
if filename.endswith('_provider.py') and not filename.startswith('base'):
|
||||
module_name = f"providers.{filename[:-3]}"
|
||||
try:
|
||||
module = importlib.import_module(module_name)
|
||||
for attribute_name in dir(module):
|
||||
attribute = getattr(module, attribute_name)
|
||||
if isinstance(attribute, type) and issubclass(attribute, BaseProvider) and attribute is not BaseProvider:
|
||||
provider_class = attribute
|
||||
# Instantiate to get metadata, even if not fully configured
|
||||
temp_provider = provider_class(session_config=self.config)
|
||||
provider_name = temp_provider.get_name()
|
||||
|
||||
# Find the actual provider instance if it exists, to get live stats
|
||||
live_provider = next((p for p in self.providers if p.get_name() == provider_name), None)
|
||||
|
||||
provider_info = {
|
||||
'display_name': temp_provider.get_display_name(),
|
||||
'requires_api_key': temp_provider.requires_api_key(),
|
||||
'statistics': live_provider.get_statistics() if live_provider else temp_provider.get_statistics(),
|
||||
'enabled': self.config.is_provider_enabled(provider_name),
|
||||
'rate_limit': self.config.get_rate_limit(provider_name),
|
||||
'eligibility': temp_provider.get_eligibility()
|
||||
}
|
||||
|
||||
# Add cache information if provider has caching
|
||||
if live_provider and hasattr(live_provider, 'cache'):
|
||||
provider_info['cache_info'] = {
|
||||
'cache_enabled': True,
|
||||
'cache_directory': live_provider.cache.cache_dir,
|
||||
'cache_expiry_hours': live_provider.cache.cache_expiry / 3600
|
||||
}
|
||||
|
||||
info[provider_name] = provider_info
|
||||
|
||||
except Exception as e:
|
||||
print(f"✗ Failed to get info for provider from {filename}: {e}")
|
||||
traceback.print_exc()
|
||||
return info
|
||||
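A minimal driver sketch for the scanner above. This is a hedged example, not the project's official entry point; it assumes the providers package and its dependencies are importable and that at least the free providers are available.

```python
import json
import time

from core.scanner import Scanner, ScanStatus

scanner = Scanner()   # builds a default SessionConfig internally
if scanner.start_scan("example.com", max_depth=1):
    # Poll until the background scan thread reaches a terminal state.
    while scanner.get_scan_status()["status"] not in (
            ScanStatus.COMPLETED, ScanStatus.FAILED, ScanStatus.STOPPED):
        time.sleep(2)

    with open("scan_results.json", "w") as fh:
        json.dump(scanner.export_results(), fh, indent=2, default=str)
```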
372
core/session_config.py
Normal file
372
core/session_config.py
Normal file
@@ -0,0 +1,372 @@
|
||||
"""
|
||||
Enhanced per-session configuration management for DNSRecon.
|
||||
Provides isolated configuration instances for each user session while supporting global caching.
|
||||
"""
|
||||
|
||||
import os
|
||||
from typing import Any, Dict, Optional
|
||||
|
||||
|
||||
class SessionConfig:
|
||||
"""
|
||||
Enhanced session-specific configuration that inherits from global config
|
||||
but maintains isolated API keys and provider settings while supporting global caching.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize enhanced session config with global cache support."""
|
||||
# Copy all attributes from global config
|
||||
self.api_keys: Dict[str, Optional[str]] = {
|
||||
'shodan': None
|
||||
}
|
||||
|
||||
# Default settings (copied from global config)
|
||||
self.default_recursion_depth = 2
|
||||
self.default_timeout = 30
|
||||
self.max_concurrent_requests = 5
|
||||
self.large_entity_threshold = 100
|
||||
|
||||
# Enhanced rate limiting settings (per session)
|
||||
self.rate_limits = {
|
||||
'crtsh': 60,
|
||||
'shodan': 60,
|
||||
'dns': 100
|
||||
}
|
||||
|
||||
# Enhanced provider settings (per session)
|
||||
self.enabled_providers = {
|
||||
'crtsh': True,
|
||||
'dns': True,
|
||||
'shodan': False
|
||||
}
|
||||
|
||||
# Task-based execution settings
|
||||
self.task_retry_settings = {
|
||||
'max_retries': 3,
|
||||
'base_backoff_seconds': 1.0,
|
||||
'max_backoff_seconds': 60.0,
|
||||
'retry_on_rate_limit': True,
|
||||
'retry_on_connection_error': True,
|
||||
'retry_on_timeout': True
|
||||
}
|
||||
|
||||
# Cache settings (global across all sessions)
|
||||
self.cache_settings = {
|
||||
'enabled': True,
|
||||
'expiry_hours': 12,
|
||||
'cache_base_dir': '.cache',
|
||||
'per_provider_directories': True,
|
||||
'thread_safe_operations': True
|
||||
}
|
||||
|
||||
# Logging configuration
|
||||
self.log_level = 'INFO'
|
||||
self.log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
|
||||
# Flask configuration (shared)
|
||||
self.flask_host = '127.0.0.1'
|
||||
self.flask_port = 5000
|
||||
self.flask_debug = True
|
||||
|
||||
# Session isolation settings
|
||||
self.session_isolation = {
|
||||
'enforce_single_session_per_user': True,
|
||||
'consolidate_session_data_on_replacement': True,
|
||||
'user_fingerprinting_enabled': True,
|
||||
'session_timeout_minutes': 60
|
||||
}
|
||||
|
||||
# Circuit breaker settings for provider reliability
|
||||
self.circuit_breaker = {
|
||||
'enabled': True,
|
||||
'failure_threshold': 5, # Failures before opening circuit
|
||||
'recovery_timeout_seconds': 300, # 5 minutes before trying again
|
||||
'half_open_max_calls': 3 # Test calls when recovering
|
||||
}
|
||||
|
||||
def set_api_key(self, provider: str, api_key: str) -> bool:
|
||||
"""
|
||||
Set API key for a provider in this session.
|
||||
|
||||
Args:
|
||||
provider: Provider name (shodan, etc)
|
||||
api_key: API key string (empty string to clear)
|
||||
|
||||
Returns:
|
||||
bool: True if key was set successfully
|
||||
"""
|
||||
if provider in self.api_keys:
|
||||
# Handle clearing of API keys
|
||||
if api_key and api_key.strip():
|
||||
self.api_keys[provider] = api_key.strip()
|
||||
self.enabled_providers[provider] = True
|
||||
else:
|
||||
self.api_keys[provider] = None
|
||||
self.enabled_providers[provider] = False
|
||||
return True
|
||||
return False
|
||||
|
||||
def get_api_key(self, provider: str) -> Optional[str]:
|
||||
"""
|
||||
Get API key for a provider in this session.
|
||||
|
||||
Args:
|
||||
provider: Provider name
|
||||
|
||||
Returns:
|
||||
API key or None if not set
|
||||
"""
|
||||
return self.api_keys.get(provider)
|
||||
|
||||
def is_provider_enabled(self, provider: str) -> bool:
|
||||
"""
|
||||
Check if a provider is enabled in this session.
|
||||
|
||||
Args:
|
||||
provider: Provider name
|
||||
|
||||
Returns:
|
||||
bool: True if provider is enabled
|
||||
"""
|
||||
return self.enabled_providers.get(provider, False)
|
||||
|
||||
def get_rate_limit(self, provider: str) -> int:
|
||||
"""
|
||||
Get rate limit for a provider in this session.
|
||||
|
||||
Args:
|
||||
provider: Provider name
|
||||
|
||||
Returns:
|
||||
Rate limit in requests per minute
|
||||
"""
|
||||
return self.rate_limits.get(provider, 60)
|
||||
|
||||
def get_task_retry_config(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get task retry configuration for this session.
|
||||
|
||||
Returns:
|
||||
Dictionary with retry settings
|
||||
"""
|
||||
return self.task_retry_settings.copy()
|
||||
|
||||
def get_cache_config(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get cache configuration (global settings).
|
||||
|
||||
Returns:
|
||||
Dictionary with cache settings
|
||||
"""
|
||||
return self.cache_settings.copy()
|
||||
|
||||
def is_circuit_breaker_enabled(self) -> bool:
|
||||
"""Check if circuit breaker is enabled for provider reliability."""
|
||||
return self.circuit_breaker.get('enabled', True)
|
||||
|
||||
def get_circuit_breaker_config(self) -> Dict[str, Any]:
|
||||
"""Get circuit breaker configuration."""
|
||||
return self.circuit_breaker.copy()
|
||||
|
||||
def update_provider_settings(self, provider_updates: Dict[str, Dict[str, Any]]) -> bool:
|
||||
"""
|
||||
Update provider-specific settings in bulk.
|
||||
|
||||
Args:
|
||||
provider_updates: Dictionary of provider -> settings updates
|
||||
|
||||
Returns:
|
||||
bool: True if updates were applied successfully
|
||||
"""
|
||||
try:
|
||||
for provider_name, updates in provider_updates.items():
|
||||
# Update rate limits
|
||||
if 'rate_limit' in updates:
|
||||
self.rate_limits[provider_name] = updates['rate_limit']
|
||||
|
||||
# Update enabled status
|
||||
if 'enabled' in updates:
|
||||
self.enabled_providers[provider_name] = updates['enabled']
|
||||
|
||||
# Update API key
|
||||
if 'api_key' in updates:
|
||||
self.set_api_key(provider_name, updates['api_key'])
|
||||
|
||||
return True
|
||||
except Exception as e:
|
||||
print(f"Error updating provider settings: {e}")
|
||||
return False
|
||||
|
||||
def validate_configuration(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Validate the current configuration and return validation results.
|
||||
|
||||
Returns:
|
||||
Dictionary with validation results and any issues found
|
||||
"""
|
||||
validation_result = {
|
||||
'valid': True,
|
||||
'warnings': [],
|
||||
'errors': [],
|
||||
'provider_status': {}
|
||||
}
|
||||
|
||||
# Validate provider configurations
|
||||
for provider_name, enabled in self.enabled_providers.items():
|
||||
provider_status = {
|
||||
'enabled': enabled,
|
||||
'has_api_key': bool(self.api_keys.get(provider_name)),
|
||||
'rate_limit': self.rate_limits.get(provider_name, 60)
|
||||
}
|
||||
|
||||
# Check for potential issues
|
||||
if enabled and provider_name in ['shodan'] and not provider_status['has_api_key']:
|
||||
validation_result['warnings'].append(
|
||||
f"Provider '{provider_name}' is enabled but missing API key"
|
||||
)
|
||||
|
||||
validation_result['provider_status'][provider_name] = provider_status
|
||||
|
||||
# Validate task settings
|
||||
if self.task_retry_settings['max_retries'] > 10:
|
||||
validation_result['warnings'].append(
|
||||
f"High retry count ({self.task_retry_settings['max_retries']}) may cause long delays"
|
||||
)
|
||||
|
||||
# Validate concurrent settings
|
||||
if self.max_concurrent_requests > 10:
|
||||
validation_result['warnings'].append(
|
||||
f"High concurrency ({self.max_concurrent_requests}) may overwhelm providers"
|
||||
)
|
||||
|
||||
# Validate cache settings
|
||||
if not os.path.exists(self.cache_settings['cache_base_dir']):
|
||||
try:
|
||||
os.makedirs(self.cache_settings['cache_base_dir'], exist_ok=True)
|
||||
except Exception as e:
|
||||
validation_result['errors'].append(f"Cannot create cache directory: {e}")
|
||||
validation_result['valid'] = False
|
||||
|
||||
return validation_result
|
||||
|
||||
def load_from_env(self):
|
||||
"""Load configuration from environment variables with enhanced validation."""
|
||||
# Load API keys from environment
|
||||
if os.getenv('SHODAN_API_KEY') and not self.api_keys['shodan']:
|
||||
self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))
|
||||
print("Loaded Shodan API key from environment")
|
||||
|
||||
# Override default settings from environment
|
||||
self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
|
||||
self.default_timeout = int(os.getenv('DEFAULT_TIMEOUT', '30'))
|
||||
self.max_concurrent_requests = int(os.getenv('MAX_CONCURRENT_REQUESTS', '5'))
|
||||
|
||||
# Load task retry settings from environment
|
||||
if os.getenv('TASK_MAX_RETRIES'):
|
||||
self.task_retry_settings['max_retries'] = int(os.getenv('TASK_MAX_RETRIES'))
|
||||
|
||||
if os.getenv('TASK_BASE_BACKOFF'):
|
||||
self.task_retry_settings['base_backoff_seconds'] = float(os.getenv('TASK_BASE_BACKOFF'))
|
||||
|
||||
# Load cache settings from environment
|
||||
if os.getenv('CACHE_EXPIRY_HOURS'):
|
||||
self.cache_settings['expiry_hours'] = int(os.getenv('CACHE_EXPIRY_HOURS'))
|
||||
|
||||
if os.getenv('CACHE_DISABLED'):
|
||||
self.cache_settings['enabled'] = os.getenv('CACHE_DISABLED').lower() != 'true'
|
||||
|
||||
# Load circuit breaker settings
|
||||
if os.getenv('CIRCUIT_BREAKER_DISABLED'):
|
||||
self.circuit_breaker['enabled'] = os.getenv('CIRCUIT_BREAKER_DISABLED').lower() != 'true'
|
||||
|
||||
# Flask settings
|
||||
self.flask_debug = os.getenv('FLASK_DEBUG', 'True').lower() == 'true'
|
||||
|
||||
print("Enhanced configuration loaded from environment")
|
||||
|
||||
def export_config_summary(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Export a summary of the current configuration for debugging/logging.
|
||||
|
||||
Returns:
|
||||
Dictionary with configuration summary (API keys redacted)
|
||||
"""
|
||||
return {
|
||||
'providers': {
|
||||
provider: {
|
||||
'enabled': self.enabled_providers.get(provider, False),
|
||||
'has_api_key': bool(self.api_keys.get(provider)),
|
||||
'rate_limit': self.rate_limits.get(provider, 60)
|
||||
}
|
||||
for provider in self.enabled_providers.keys()
|
||||
},
|
||||
'task_settings': {
|
||||
'max_retries': self.task_retry_settings['max_retries'],
|
||||
'max_concurrent_requests': self.max_concurrent_requests,
|
||||
'large_entity_threshold': self.large_entity_threshold
|
||||
},
|
||||
'cache_settings': {
|
||||
'enabled': self.cache_settings['enabled'],
|
||||
'expiry_hours': self.cache_settings['expiry_hours'],
|
||||
'base_directory': self.cache_settings['cache_base_dir']
|
||||
},
|
||||
'session_settings': {
|
||||
'isolation_enabled': self.session_isolation['enforce_single_session_per_user'],
|
||||
'consolidation_enabled': self.session_isolation['consolidate_session_data_on_replacement'],
|
||||
'timeout_minutes': self.session_isolation['session_timeout_minutes']
|
||||
},
|
||||
'circuit_breaker': {
|
||||
'enabled': self.circuit_breaker['enabled'],
|
||||
'failure_threshold': self.circuit_breaker['failure_threshold'],
|
||||
'recovery_timeout': self.circuit_breaker['recovery_timeout_seconds']
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def create_session_config() -> SessionConfig:
|
||||
"""
|
||||
Create a new enhanced session configuration instance.
|
||||
|
||||
Returns:
|
||||
Configured SessionConfig instance
|
||||
"""
|
||||
session_config = SessionConfig()
|
||||
session_config.load_from_env()
|
||||
|
||||
# Validate configuration and log any issues
|
||||
validation = session_config.validate_configuration()
|
||||
if validation['warnings']:
|
||||
print("Configuration warnings:")
|
||||
for warning in validation['warnings']:
|
||||
print(f" WARNING: {warning}")
|
||||
|
||||
if validation['errors']:
|
||||
print("Configuration errors:")
|
||||
for error in validation['errors']:
|
||||
print(f" ERROR: {error}")
|
||||
|
||||
if not validation['valid']:
|
||||
raise ValueError("Configuration validation failed - see errors above")
|
||||
|
||||
print(f"Enhanced session configuration created successfully")
|
||||
return session_config
|
||||
|
||||
|
||||
def create_test_config() -> SessionConfig:
|
||||
"""
|
||||
Create a test configuration with safe defaults for testing.
|
||||
|
||||
Returns:
|
||||
Test-safe SessionConfig instance
|
||||
"""
|
||||
test_config = SessionConfig()
|
||||
|
||||
# Override settings for testing
|
||||
test_config.max_concurrent_requests = 2
|
||||
test_config.task_retry_settings['max_retries'] = 1
|
||||
test_config.task_retry_settings['base_backoff_seconds'] = 0.1
|
||||
test_config.cache_settings['expiry_hours'] = 1
|
||||
test_config.session_isolation['session_timeout_minutes'] = 10
|
||||
|
||||
print("Test configuration created")
|
||||
return test_config
|
||||
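To illustrate how a per-session configuration is consumed, a small hedged sketch using only the methods defined above (the API key value is a placeholder):

```python
from core.session_config import create_session_config

cfg = create_session_config()                    # also runs load_from_env() and validation
cfg.set_api_key("shodan", "PLACEHOLDER-KEY")     # any non-empty key enables the provider
print(cfg.is_provider_enabled("shodan"))         # True
print(cfg.get_rate_limit("crtsh"))               # 60 requests/minute by default

summary = cfg.export_config_summary()            # API keys are redacted to a boolean
print(summary["providers"]["shodan"]["has_api_key"])   # True
```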
576
core/session_manager.py
Normal file
576
core/session_manager.py
Normal file
@@ -0,0 +1,576 @@
|
||||
# dnsrecon/core/session_manager.py
|
||||
|
||||
import threading
|
||||
import time
|
||||
import uuid
|
||||
import redis
|
||||
import pickle
|
||||
import hashlib
|
||||
from typing import Dict, Optional, Any, List, Tuple
|
||||
|
||||
from core.scanner import Scanner
|
||||
|
||||
|
||||
class UserIdentifier:
|
||||
"""Handles user identification for session management."""
|
||||
|
||||
@staticmethod
|
||||
def generate_user_fingerprint(client_ip: str, user_agent: str) -> str:
|
||||
"""
|
||||
Generate a unique fingerprint for a user based on IP and User-Agent.
|
||||
|
||||
Args:
|
||||
client_ip: Client IP address
|
||||
user_agent: User-Agent header value
|
||||
|
||||
Returns:
|
||||
Unique user fingerprint hash
|
||||
"""
|
||||
# Create deterministic user identifier
|
||||
user_data = f"{client_ip}:{user_agent[:100]}" # Limit UA to 100 chars
|
||||
fingerprint = hashlib.sha256(user_data.encode()).hexdigest()[:16] # 16 char fingerprint
|
||||
return f"user_{fingerprint}"
|
||||
|
||||
@staticmethod
|
||||
def extract_request_info(request) -> Tuple[str, str]:
|
||||
"""
|
||||
Extract client IP and User-Agent from Flask request.
|
||||
|
||||
Args:
|
||||
request: Flask request object
|
||||
|
||||
Returns:
|
||||
Tuple of (client_ip, user_agent)
|
||||
"""
|
||||
# Handle proxy headers for real IP
|
||||
client_ip = request.headers.get('X-Forwarded-For', '').split(',')[0].strip()
|
||||
if not client_ip:
|
||||
client_ip = request.headers.get('X-Real-IP', '')
|
||||
if not client_ip:
|
||||
client_ip = request.remote_addr or 'unknown'
|
||||
|
||||
user_agent = request.headers.get('User-Agent', 'unknown')
|
||||
|
||||
return client_ip, user_agent
|
||||
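The fingerprint is deterministic, so the same IP and User-Agent pair always maps to the same session owner. A standalone sketch of the scheme with hypothetical values:

```python
# Standalone illustration of the fingerprinting scheme above (hypothetical values).
import hashlib

client_ip, user_agent = "203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)"
user_data = f"{client_ip}:{user_agent[:100]}"   # User-Agent truncated to 100 chars
fingerprint = "user_" + hashlib.sha256(user_data.encode()).hexdigest()[:16]
print(fingerprint)  # stable 16-hex-char identifier for this IP/UA pair
```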
|
||||
|
||||
class SessionConsolidator:
|
||||
"""Handles consolidation of session data when replacing sessions."""
|
||||
|
||||
@staticmethod
|
||||
def consolidate_scanner_data(old_scanner: 'Scanner', new_scanner: 'Scanner') -> 'Scanner':
|
||||
"""
|
||||
Consolidate useful data from old scanner into new scanner.
|
||||
|
||||
Args:
|
||||
old_scanner: Scanner from terminated session
|
||||
new_scanner: New scanner instance
|
||||
|
||||
Returns:
|
||||
Enhanced new scanner with consolidated data
|
||||
"""
|
||||
try:
|
||||
# Consolidate graph data if old scanner has valuable data
|
||||
if old_scanner and hasattr(old_scanner, 'graph') and old_scanner.graph:
|
||||
old_stats = old_scanner.graph.get_statistics()
|
||||
if old_stats['basic_metrics']['total_nodes'] > 0:
|
||||
print(f"Consolidating graph data: {old_stats['basic_metrics']['total_nodes']} nodes, {old_stats['basic_metrics']['total_edges']} edges")
|
||||
|
||||
# Transfer nodes and edges to new scanner's graph
|
||||
for node_id, node_data in old_scanner.graph.graph.nodes(data=True):
|
||||
# Add node to new graph with all attributes
|
||||
new_scanner.graph.graph.add_node(node_id, **node_data)
|
||||
|
||||
for source, target, edge_data in old_scanner.graph.graph.edges(data=True):
|
||||
# Add edge to new graph with all attributes
|
||||
new_scanner.graph.graph.add_edge(source, target, **edge_data)
|
||||
|
||||
# Update correlation index
|
||||
if hasattr(old_scanner.graph, 'correlation_index'):
|
||||
new_scanner.graph.correlation_index = old_scanner.graph.correlation_index.copy()
|
||||
|
||||
# Update timestamps
|
||||
new_scanner.graph.creation_time = old_scanner.graph.creation_time
|
||||
new_scanner.graph.last_modified = old_scanner.graph.last_modified
|
||||
|
||||
# Consolidate provider statistics
|
||||
if old_scanner and hasattr(old_scanner, 'providers') and old_scanner.providers:
|
||||
for old_provider in old_scanner.providers:
|
||||
# Find matching provider in new scanner
|
||||
matching_new_provider = None
|
||||
for new_provider in new_scanner.providers:
|
||||
if new_provider.get_name() == old_provider.get_name():
|
||||
matching_new_provider = new_provider
|
||||
break
|
||||
|
||||
if matching_new_provider:
|
||||
# Transfer cumulative statistics
|
||||
matching_new_provider.total_requests += old_provider.total_requests
|
||||
matching_new_provider.successful_requests += old_provider.successful_requests
|
||||
matching_new_provider.failed_requests += old_provider.failed_requests
|
||||
matching_new_provider.total_relationships_found += old_provider.total_relationships_found
|
||||
|
||||
# Transfer cache statistics if available
|
||||
if hasattr(old_provider, 'cache_hits'):
|
||||
matching_new_provider.cache_hits += getattr(old_provider, 'cache_hits', 0)
|
||||
matching_new_provider.cache_misses += getattr(old_provider, 'cache_misses', 0)
|
||||
|
||||
print(f"Consolidated {old_provider.get_name()} provider stats: {old_provider.total_requests} requests")
|
||||
|
||||
return new_scanner
|
||||
|
||||
except Exception as e:
|
||||
print(f"Warning: Error during session consolidation: {e}")
|
||||
return new_scanner
|
||||
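The node/edge transfer above is effectively a graph union. A rough sketch of the same idea expressed directly with networkx (assuming directed graphs; the real GraphManager wrapper may differ):

```python
# Sketch only: nx.compose copies nodes and edges with their attributes, with the
# second graph's attributes winning on conflict, mirroring the manual loop above.
import networkx as nx

old_graph = nx.DiGraph()
old_graph.add_edge("example.com", "198.51.100.10", relationship="a_record", confidence=0.8)

new_graph = nx.DiGraph()  # freshly created scanner graph
merged = nx.compose(new_graph, old_graph)
print(merged.number_of_nodes(), merged.number_of_edges())  # 2 1
```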
|
||||
|
||||
class SessionManager:
|
||||
"""
|
||||
Manages a single scanner session per user using Redis with user identification.
|
||||
Enforces one active session per user for consistent state management.
|
||||
"""
|
||||
|
||||
def __init__(self, session_timeout_minutes: int = 60):
|
||||
"""
|
||||
Initialize session manager with Redis backend and user tracking.
|
||||
"""
|
||||
self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
|
||||
self.session_timeout = session_timeout_minutes * 60 # Convert to seconds
|
||||
self.lock = threading.Lock()
|
||||
|
||||
# User identification helper
|
||||
self.user_identifier = UserIdentifier()
|
||||
self.consolidator = SessionConsolidator()
|
||||
|
||||
# Start cleanup thread
|
||||
self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
|
||||
self.cleanup_thread.start()
|
||||
|
||||
print(f"SessionManager initialized with Redis backend, user tracking, and {session_timeout_minutes}min timeout")
|
||||
|
||||
def __getstate__(self):
|
||||
"""Prepare SessionManager for pickling."""
|
||||
state = self.__dict__.copy()
|
||||
# Exclude unpickleable attributes
|
||||
unpicklable_attrs = ['lock', 'cleanup_thread', 'redis_client']
|
||||
for attr in unpicklable_attrs:
|
||||
if attr in state:
|
||||
del state[attr]
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Restore SessionManager after unpickling."""
|
||||
self.__dict__.update(state)
|
||||
# Re-initialize unpickleable attributes
|
||||
import redis
|
||||
self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
|
||||
self.lock = threading.Lock()
|
||||
self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
|
||||
self.cleanup_thread.start()
|
||||
|
||||
def _get_session_key(self, session_id: str) -> str:
|
||||
"""Generate Redis key for a session."""
|
||||
return f"dnsrecon:session:{session_id}"
|
||||
|
||||
def _get_user_session_key(self, user_fingerprint: str) -> str:
|
||||
"""Generate Redis key for user -> session mapping."""
|
||||
return f"dnsrecon:user:{user_fingerprint}"
|
||||
|
||||
def _get_stop_signal_key(self, session_id: str) -> str:
|
||||
"""Generate Redis key for session stop signal."""
|
||||
return f"dnsrecon:stop:{session_id}"
|
||||
|
||||
def create_or_replace_user_session(self, client_ip: str, user_agent: str) -> str:
|
||||
"""
|
||||
Create new session for user, replacing any existing session.
|
||||
Consolidates data from previous session if it exists.
|
||||
|
||||
Args:
|
||||
client_ip: Client IP address
|
||||
user_agent: User-Agent header
|
||||
|
||||
Returns:
|
||||
New session ID
|
||||
"""
|
||||
user_fingerprint = self.user_identifier.generate_user_fingerprint(client_ip, user_agent)
|
||||
new_session_id = str(uuid.uuid4())
|
||||
|
||||
print(f"=== CREATING/REPLACING SESSION FOR USER {user_fingerprint} ===")
|
||||
|
||||
try:
|
||||
# Check for existing user session
|
||||
existing_session_id = self._get_user_current_session(user_fingerprint)
|
||||
old_scanner = None
|
||||
|
||||
if existing_session_id:
|
||||
print(f"Found existing session {existing_session_id} for user {user_fingerprint}")
|
||||
# Get old scanner data for consolidation
|
||||
old_scanner = self.get_session(existing_session_id)
|
||||
# Terminate old session
|
||||
self._terminate_session_internal(existing_session_id, cleanup_user_mapping=False)
|
||||
print(f"Terminated old session {existing_session_id}")
|
||||
|
||||
# Create new session config and scanner
|
||||
from core.session_config import create_session_config
|
||||
session_config = create_session_config()
|
||||
new_scanner = Scanner(session_config=session_config)
|
||||
|
||||
# Set session ID on scanner for cross-process operations
|
||||
new_scanner.session_id = new_session_id
|
||||
|
||||
# Consolidate data from old session if available
|
||||
if old_scanner:
|
||||
new_scanner = self.consolidator.consolidate_scanner_data(old_scanner, new_scanner)
|
||||
print(f"Consolidated data from previous session")
|
||||
|
||||
# Create session data
|
||||
session_data = {
|
||||
'scanner': new_scanner,
|
||||
'config': session_config,
|
||||
'created_at': time.time(),
|
||||
'last_activity': time.time(),
|
||||
'status': 'active',
|
||||
'user_fingerprint': user_fingerprint,
|
||||
'client_ip': client_ip,
|
||||
'user_agent': user_agent[:200] # Truncate for storage
|
||||
}
|
||||
|
||||
# Store session in Redis
|
||||
session_key = self._get_session_key(new_session_id)
|
||||
serialized_data = pickle.dumps(session_data)
|
||||
self.redis_client.setex(session_key, self.session_timeout, serialized_data)
|
||||
|
||||
# Update user -> session mapping
|
||||
user_session_key = self._get_user_session_key(user_fingerprint)
|
||||
self.redis_client.setex(user_session_key, self.session_timeout, new_session_id.encode('utf-8'))
|
||||
|
||||
# Initialize stop signal
|
||||
stop_key = self._get_stop_signal_key(new_session_id)
|
||||
self.redis_client.setex(stop_key, self.session_timeout, b'0')
|
||||
|
||||
print(f"Created new session {new_session_id} for user {user_fingerprint}")
|
||||
return new_session_id
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to create session for user {user_fingerprint}: {e}")
|
||||
raise
|
||||
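How this is wired into the web layer is outside this file; a hypothetical Flask route using the pieces above (the `session_manager` singleton is created at the bottom of this module) might look like:

```python
# Hypothetical route sketch; the real app routes are not part of this diff.
from flask import Flask, jsonify, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

@app.route('/api/scan/start', methods=['POST'])
def start_scan():
    client_ip, user_agent = UserIdentifier.extract_request_info(request)
    session_id = session_manager.create_or_replace_user_session(client_ip, user_agent)
    session['dnsrecon_session_id'] = session_id
    return jsonify({'success': True, 'session_id': session_id})
```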
|
||||
def _get_user_current_session(self, user_fingerprint: str) -> Optional[str]:
|
||||
"""Get current session ID for a user."""
|
||||
try:
|
||||
user_session_key = self._get_user_session_key(user_fingerprint)
|
||||
session_id_bytes = self.redis_client.get(user_session_key)
|
||||
if session_id_bytes:
|
||||
return session_id_bytes.decode('utf-8')
|
||||
return None
|
||||
except Exception as e:
|
||||
print(f"Error getting user session: {e}")
|
||||
return None
|
||||
|
||||
def set_stop_signal(self, session_id: str) -> bool:
|
||||
"""Set stop signal for session (cross-process safe)."""
|
||||
try:
|
||||
stop_key = self._get_stop_signal_key(session_id)
|
||||
self.redis_client.setex(stop_key, self.session_timeout, b'1')
|
||||
print(f"Stop signal set for session {session_id}")
|
||||
return True
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to set stop signal for session {session_id}: {e}")
|
||||
return False
|
||||
|
||||
def is_stop_requested(self, session_id: str) -> bool:
|
||||
"""Check if stop is requested for session (cross-process safe)."""
|
||||
try:
|
||||
stop_key = self._get_stop_signal_key(session_id)
|
||||
value = self.redis_client.get(stop_key)
|
||||
return value == b'1' if value is not None else False
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to check stop signal for session {session_id}: {e}")
|
||||
return False
|
||||
|
||||
def clear_stop_signal(self, session_id: str) -> bool:
|
||||
"""Clear stop signal for session."""
|
||||
try:
|
||||
stop_key = self._get_stop_signal_key(session_id)
|
||||
self.redis_client.setex(stop_key, self.session_timeout, b'0')
|
||||
print(f"Stop signal cleared for session {session_id}")
|
||||
return True
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to clear stop signal for session {session_id}: {e}")
|
||||
return False
|
||||
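Because the stop flag lives in Redis, any process that holds the session ID can poll it. A sketch of how a scan loop might honour it (the real loop lives in core/scanner.py, which is not shown here):

```python
# Sketch: cooperative cancellation driven by the Redis-backed stop signal.
def example_scan_loop(session_id: str, targets) -> None:
    for target in targets:
        if session_manager.is_stop_requested(session_id):
            print(f"Stop requested, aborting before {target}")
            break
        # ... query providers for `target` and update the graph here ...
```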
|
||||
def _get_session_data(self, session_id: str) -> Optional[Dict[str, Any]]:
|
||||
"""Retrieve and deserialize session data from Redis."""
|
||||
try:
|
||||
session_key = self._get_session_key(session_id)
|
||||
serialized_data = self.redis_client.get(session_key)
|
||||
if serialized_data:
|
||||
session_data = pickle.loads(serialized_data)
|
||||
# Ensure scanner has correct session ID
|
||||
if 'scanner' in session_data and session_data['scanner']:
|
||||
session_data['scanner'].session_id = session_id
|
||||
return session_data
|
||||
return None
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to get session data for {session_id}: {e}")
|
||||
return None
|
||||
|
||||
def _save_session_data(self, session_id: str, session_data: Dict[str, Any]) -> bool:
|
||||
"""Serialize and save session data to Redis with updated TTL."""
|
||||
try:
|
||||
session_key = self._get_session_key(session_id)
|
||||
serialized_data = pickle.dumps(session_data)
|
||||
result = self.redis_client.setex(session_key, self.session_timeout, serialized_data)
|
||||
|
||||
# Also refresh user mapping TTL if available
|
||||
if 'user_fingerprint' in session_data:
|
||||
user_session_key = self._get_user_session_key(session_data['user_fingerprint'])
|
||||
self.redis_client.setex(user_session_key, self.session_timeout, session_id.encode('utf-8'))
|
||||
|
||||
return result
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to save session data for {session_id}: {e}")
|
||||
return False
|
||||
|
||||
def update_session_scanner(self, session_id: str, scanner: 'Scanner') -> bool:
|
||||
"""Update scanner object in session with immediate persistence."""
|
||||
try:
|
||||
session_data = self._get_session_data(session_id)
|
||||
if session_data:
|
||||
# Ensure scanner has session ID
|
||||
scanner.session_id = session_id
|
||||
session_data['scanner'] = scanner
|
||||
session_data['last_activity'] = time.time()
|
||||
|
||||
success = self._save_session_data(session_id, session_data)
|
||||
if success:
|
||||
print(f"Scanner state updated for session {session_id} (status: {scanner.status})")
|
||||
else:
|
||||
print(f"WARNING: Failed to save scanner state for session {session_id}")
|
||||
return success
|
||||
else:
|
||||
print(f"WARNING: Session {session_id} not found for scanner update")
|
||||
return False
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to update scanner for session {session_id}: {e}")
|
||||
return False
|
||||
|
||||
def update_scanner_status(self, session_id: str, status: str) -> bool:
|
||||
"""Quickly update scanner status for immediate GUI feedback."""
|
||||
try:
|
||||
session_data = self._get_session_data(session_id)
|
||||
if session_data and 'scanner' in session_data:
|
||||
session_data['scanner'].status = status
|
||||
session_data['last_activity'] = time.time()
|
||||
|
||||
success = self._save_session_data(session_id, session_data)
|
||||
if success:
|
||||
print(f"Scanner status updated to '{status}' for session {session_id}")
|
||||
else:
|
||||
print(f"WARNING: Failed to save status update for session {session_id}")
|
||||
return success
|
||||
return False
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to update scanner status for session {session_id}: {e}")
|
||||
return False
|
||||
|
||||
def get_session(self, session_id: str) -> Optional[Scanner]:
|
||||
"""Get scanner instance for session with session ID management."""
|
||||
if not session_id:
|
||||
return None
|
||||
|
||||
session_data = self._get_session_data(session_id)
|
||||
|
||||
if not session_data or session_data.get('status') != 'active':
|
||||
return None
|
||||
|
||||
# Update last activity and save back to Redis
|
||||
session_data['last_activity'] = time.time()
|
||||
self._save_session_data(session_id, session_data)
|
||||
|
||||
scanner = session_data.get('scanner')
|
||||
if scanner:
|
||||
# Ensure scanner can check Redis-based stop signal
|
||||
scanner.session_id = session_id
|
||||
|
||||
return scanner
|
||||
|
||||
def get_session_status_only(self, session_id: str) -> Optional[str]:
|
||||
"""Get scanner status without full session retrieval (for performance)."""
|
||||
try:
|
||||
session_data = self._get_session_data(session_id)
|
||||
if session_data and 'scanner' in session_data:
|
||||
return session_data['scanner'].status
|
||||
return None
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to get session status for {session_id}: {e}")
|
||||
return None
|
||||
|
||||
def terminate_session(self, session_id: str) -> bool:
|
||||
"""Terminate specific session with reliable stop signal and immediate status update."""
|
||||
return self._terminate_session_internal(session_id, cleanup_user_mapping=True)
|
||||
|
||||
def _terminate_session_internal(self, session_id: str, cleanup_user_mapping: bool = True) -> bool:
|
||||
"""Internal session termination with configurable user mapping cleanup."""
|
||||
print(f"=== TERMINATING SESSION {session_id} ===")
|
||||
|
||||
try:
|
||||
# Set stop signal first
|
||||
self.set_stop_signal(session_id)
|
||||
|
||||
# Update scanner status immediately for GUI feedback
|
||||
self.update_scanner_status(session_id, 'stopped')
|
||||
|
||||
session_data = self._get_session_data(session_id)
|
||||
if not session_data:
|
||||
print(f"Session {session_id} not found")
|
||||
return False
|
||||
|
||||
scanner = session_data.get('scanner')
|
||||
if scanner and scanner.status == 'running':
|
||||
print(f"Stopping scan for session: {session_id}")
|
||||
scanner.stop_scan()
|
||||
self.update_session_scanner(session_id, scanner)
|
||||
|
||||
# Wait for graceful shutdown
|
||||
time.sleep(0.5)
|
||||
|
||||
# Clean up user mapping if requested
|
||||
if cleanup_user_mapping and 'user_fingerprint' in session_data:
|
||||
user_session_key = self._get_user_session_key(session_data['user_fingerprint'])
|
||||
self.redis_client.delete(user_session_key)
|
||||
print(f"Cleaned up user mapping for {session_data['user_fingerprint']}")
|
||||
|
||||
# Delete session data and stop signal
|
||||
session_key = self._get_session_key(session_id)
|
||||
stop_key = self._get_stop_signal_key(session_id)
|
||||
self.redis_client.delete(session_key)
|
||||
self.redis_client.delete(stop_key)
|
||||
|
||||
print(f"Terminated and removed session from Redis: {session_id}")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to terminate session {session_id}: {e}")
|
||||
return False
|
||||
|
||||
def _cleanup_loop(self) -> None:
|
||||
"""Background thread to cleanup inactive sessions and orphaned signals."""
|
||||
while True:
|
||||
try:
|
||||
# Clean up orphaned stop signals
|
||||
stop_keys = self.redis_client.keys("dnsrecon:stop:*")
|
||||
for stop_key in stop_keys:
|
||||
session_id = stop_key.decode('utf-8').split(':')[-1]
|
||||
session_key = self._get_session_key(session_id)
|
||||
|
||||
if not self.redis_client.exists(session_key):
|
||||
self.redis_client.delete(stop_key)
|
||||
print(f"Cleaned up orphaned stop signal for session {session_id}")
|
||||
|
||||
# Clean up orphaned user mappings
|
||||
user_keys = self.redis_client.keys("dnsrecon:user:*")
|
||||
for user_key in user_keys:
|
||||
session_id_bytes = self.redis_client.get(user_key)
|
||||
if session_id_bytes:
|
||||
session_id = session_id_bytes.decode('utf-8')
|
||||
session_key = self._get_session_key(session_id)
|
||||
|
||||
if not self.redis_client.exists(session_key):
|
||||
self.redis_client.delete(user_key)
|
||||
print(f"Cleaned up orphaned user mapping for session {session_id}")
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error in cleanup loop: {e}")
|
||||
|
||||
time.sleep(300) # Sleep for 5 minutes
|
||||
|
||||
def list_active_sessions(self) -> List[Dict[str, Any]]:
|
||||
"""List all active sessions for admin purposes."""
|
||||
try:
|
||||
session_keys = self.redis_client.keys("dnsrecon:session:*")
|
||||
sessions = []
|
||||
|
||||
for session_key in session_keys:
|
||||
session_id = session_key.decode('utf-8').split(':')[-1]
|
||||
session_data = self._get_session_data(session_id)
|
||||
|
||||
if session_data:
|
||||
scanner = session_data.get('scanner')
|
||||
sessions.append({
|
||||
'session_id': session_id,
|
||||
'user_fingerprint': session_data.get('user_fingerprint', 'unknown'),
|
||||
'client_ip': session_data.get('client_ip', 'unknown'),
|
||||
'created_at': session_data.get('created_at'),
|
||||
'last_activity': session_data.get('last_activity'),
|
||||
'scanner_status': scanner.status if scanner else 'unknown',
|
||||
'current_target': scanner.current_target if scanner else None
|
||||
})
|
||||
|
||||
return sessions
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to list active sessions: {e}")
|
||||
return []
|
||||
|
||||
def get_statistics(self) -> Dict[str, Any]:
|
||||
"""Get session manager statistics."""
|
||||
try:
|
||||
session_keys = self.redis_client.keys("dnsrecon:session:*")
|
||||
user_keys = self.redis_client.keys("dnsrecon:user:*")
|
||||
stop_keys = self.redis_client.keys("dnsrecon:stop:*")
|
||||
|
||||
active_sessions = len(session_keys)
|
||||
unique_users = len(user_keys)
|
||||
running_scans = 0
|
||||
|
||||
for session_key in session_keys:
|
||||
session_id = session_key.decode('utf-8').split(':')[-1]
|
||||
status = self.get_session_status_only(session_id)
|
||||
if status == 'running':
|
||||
running_scans += 1
|
||||
|
||||
return {
|
||||
'total_active_sessions': active_sessions,
|
||||
'unique_users': unique_users,
|
||||
'running_scans': running_scans,
|
||||
'total_stop_signals': len(stop_keys),
|
||||
'average_sessions_per_user': round(active_sessions / unique_users, 2) if unique_users > 0 else 0
|
||||
}
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to get statistics: {e}")
|
||||
return {
|
||||
'total_active_sessions': 0,
|
||||
'unique_users': 0,
|
||||
'running_scans': 0,
|
||||
'total_stop_signals': 0,
|
||||
'average_sessions_per_user': 0
|
||||
}
|
||||
|
||||
def get_session_info(self, session_id: str) -> Dict[str, Any]:
|
||||
"""Get detailed information about a specific session."""
|
||||
try:
|
||||
session_data = self._get_session_data(session_id)
|
||||
if not session_data:
|
||||
return {'error': 'Session not found'}
|
||||
|
||||
scanner = session_data.get('scanner')
|
||||
|
||||
return {
|
||||
'session_id': session_id,
|
||||
'user_fingerprint': session_data.get('user_fingerprint', 'unknown'),
|
||||
'client_ip': session_data.get('client_ip', 'unknown'),
|
||||
'user_agent': session_data.get('user_agent', 'unknown'),
|
||||
'created_at': session_data.get('created_at'),
|
||||
'last_activity': session_data.get('last_activity'),
|
||||
'status': session_data.get('status'),
|
||||
'scanner_status': scanner.status if scanner else 'unknown',
|
||||
'current_target': scanner.current_target if scanner else None,
|
||||
'session_age_minutes': round((time.time() - session_data.get('created_at', time.time())) / 60, 1)
|
||||
}
|
||||
except Exception as e:
|
||||
print(f"ERROR: Failed to get session info for {session_id}: {e}")
|
||||
return {'error': f'Failed to get session info: {str(e)}'}
|
||||
|
||||
|
||||
# Global session manager instance
|
||||
session_manager = SessionManager(session_timeout_minutes=60)
|
||||
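A short sketch of the admin-facing helpers defined above, using the module-level singleton (output values are illustrative):

```python
# Admin/operator view of current sessions (illustrative).
stats = session_manager.get_statistics()
print(stats['total_active_sessions'], stats['unique_users'], stats['running_scans'])

for info in session_manager.list_active_sessions():
    print(info['session_id'], info['scanner_status'], info['current_target'])
```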
564
core/task_manager.py
Normal file
@@ -0,0 +1,564 @@
|
||||
# dnsrecon/core/task_manager.py
|
||||
|
||||
import threading
|
||||
import time
|
||||
import uuid
|
||||
from enum import Enum
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Dict, List, Optional, Any, Set
|
||||
from datetime import datetime, timezone, timedelta
|
||||
from collections import deque
|
||||
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain
|
||||
|
||||
|
||||
class TaskStatus(Enum):
|
||||
"""Enumeration of task execution statuses."""
|
||||
PENDING = "pending"
|
||||
RUNNING = "running"
|
||||
SUCCEEDED = "succeeded"
|
||||
FAILED_RETRYING = "failed_retrying"
|
||||
FAILED_PERMANENT = "failed_permanent"
|
||||
CANCELLED = "cancelled"
|
||||
|
||||
|
||||
class TaskType(Enum):
|
||||
"""Enumeration of task types for provider queries."""
|
||||
DOMAIN_QUERY = "domain_query"
|
||||
IP_QUERY = "ip_query"
|
||||
GRAPH_UPDATE = "graph_update"
|
||||
|
||||
|
||||
@dataclass
|
||||
class TaskResult:
|
||||
"""Result of a task execution."""
|
||||
success: bool
|
||||
data: Optional[Any] = None
|
||||
error: Optional[str] = None
|
||||
metadata: Dict[str, Any] = field(default_factory=dict)
|
||||
|
||||
|
||||
@dataclass
|
||||
class ReconTask:
|
||||
"""Represents a single reconnaissance task with retry logic."""
|
||||
task_id: str
|
||||
task_type: TaskType
|
||||
target: str
|
||||
provider_name: str
|
||||
depth: int
|
||||
status: TaskStatus = TaskStatus.PENDING
|
||||
created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
|
||||
|
||||
# Retry configuration
|
||||
max_retries: int = 3
|
||||
current_retry: int = 0
|
||||
base_backoff_seconds: float = 1.0
|
||||
max_backoff_seconds: float = 60.0
|
||||
|
||||
# Execution tracking
|
||||
last_attempt_at: Optional[datetime] = None
|
||||
next_retry_at: Optional[datetime] = None
|
||||
execution_history: List[Dict[str, Any]] = field(default_factory=list)
|
||||
|
||||
# Results
|
||||
result: Optional[TaskResult] = None
|
||||
|
||||
def __post_init__(self):
|
||||
"""Initialize additional fields after creation."""
|
||||
if not self.task_id:
|
||||
self.task_id = str(uuid.uuid4())[:8]
|
||||
|
||||
def calculate_next_retry_time(self) -> Optional[datetime]:
"""Calculate the next retry time with exponential backoff and jitter; returns None once retries are exhausted."""
|
||||
if self.current_retry >= self.max_retries:
|
||||
return None
|
||||
|
||||
# Exponential backoff with jitter
|
||||
backoff_time = min(
|
||||
self.max_backoff_seconds,
|
||||
self.base_backoff_seconds * (2 ** self.current_retry)
|
||||
)
|
||||
|
||||
# Add deterministic jitter derived from the task ID (up to ±12.5% of the backoff)
jitter = backoff_time * 0.25 * (0.5 - hash(self.task_id) % 1000 / 1000.0)
|
||||
final_backoff = max(self.base_backoff_seconds, backoff_time + jitter)
|
||||
|
||||
return datetime.now(timezone.utc) + timedelta(seconds=final_backoff)
|
||||
|
||||
def should_retry(self) -> bool:
|
||||
"""Determine if task should be retried based on status and retry count."""
|
||||
if self.status != TaskStatus.FAILED_RETRYING:
|
||||
return False
|
||||
if self.current_retry >= self.max_retries:
|
||||
return False
|
||||
if self.next_retry_at and datetime.now(timezone.utc) < self.next_retry_at:
|
||||
return False
|
||||
return True
|
||||
|
||||
def mark_failed(self, error: str, metadata: Dict[str, Any] = None):
|
||||
"""Mark task as failed and prepare for retry or permanent failure."""
|
||||
self.current_retry += 1
|
||||
self.last_attempt_at = datetime.now(timezone.utc)
|
||||
|
||||
# Record execution history
|
||||
execution_record = {
|
||||
'attempt': self.current_retry,
|
||||
'timestamp': self.last_attempt_at.isoformat(),
|
||||
'error': error,
|
||||
'metadata': metadata or {}
|
||||
}
|
||||
self.execution_history.append(execution_record)
|
||||
|
||||
if self.current_retry >= self.max_retries:
|
||||
self.status = TaskStatus.FAILED_PERMANENT
|
||||
self.result = TaskResult(success=False, error=f"Permanent failure after {self.max_retries} attempts: {error}")
|
||||
else:
|
||||
self.status = TaskStatus.FAILED_RETRYING
|
||||
self.next_retry_at = self.calculate_next_retry_time()
|
||||
|
||||
def mark_succeeded(self, data: Any = None, metadata: Dict[str, Any] = None):
|
||||
"""Mark task as successfully completed."""
|
||||
self.status = TaskStatus.SUCCEEDED
|
||||
self.last_attempt_at = datetime.now(timezone.utc)
|
||||
self.result = TaskResult(success=True, data=data, metadata=metadata or {})
|
||||
|
||||
# Record successful execution
|
||||
execution_record = {
|
||||
'attempt': self.current_retry + 1,
|
||||
'timestamp': self.last_attempt_at.isoformat(),
|
||||
'success': True,
|
||||
'metadata': metadata or {}
|
||||
}
|
||||
self.execution_history.append(execution_record)
|
||||
|
||||
def get_summary(self) -> Dict[str, Any]:
|
||||
"""Get task summary for progress reporting."""
|
||||
return {
|
||||
'task_id': self.task_id,
|
||||
'task_type': self.task_type.value,
|
||||
'target': self.target,
|
||||
'provider': self.provider_name,
|
||||
'status': self.status.value,
|
||||
'current_retry': self.current_retry,
|
||||
'max_retries': self.max_retries,
|
||||
'created_at': self.created_at.isoformat(),
|
||||
'last_attempt_at': self.last_attempt_at.isoformat() if self.last_attempt_at else None,
|
||||
'next_retry_at': self.next_retry_at.isoformat() if self.next_retry_at else None,
|
||||
'total_attempts': len(self.execution_history),
|
||||
'has_result': self.result is not None
|
||||
}
|
||||
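The retry schedule follows plain exponential backoff capped at `max_backoff_seconds`, plus the small per-task jitter. A worked example with the default settings (jitter omitted):

```python
# Backoff schedule for base_backoff_seconds=1.0, max_backoff_seconds=60.0, max_retries=3:
# retries are delayed roughly 1s, 2s and 4s before the task becomes FAILED_PERMANENT.
base, cap = 1.0, 60.0
for current_retry in range(3):
    print(current_retry, min(cap, base * (2 ** current_retry)))  # 0 1.0 / 1 2.0 / 2 4.0
```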
|
||||
|
||||
class TaskQueue:
|
||||
"""Thread-safe task queue with retry logic and priority handling."""
|
||||
|
||||
def __init__(self, max_concurrent_tasks: int = 5):
|
||||
"""Initialize task queue."""
|
||||
self.max_concurrent_tasks = max_concurrent_tasks
|
||||
self.tasks: Dict[str, ReconTask] = {}
|
||||
self.pending_queue = deque()
|
||||
self.retry_queue = deque()
|
||||
self.running_tasks: Set[str] = set()
|
||||
|
||||
self._lock = threading.Lock()
|
||||
self._stop_event = threading.Event()
|
||||
|
||||
def __getstate__(self):
|
||||
"""Prepare TaskQueue for pickling by excluding unpicklable objects."""
|
||||
state = self.__dict__.copy()
|
||||
# Exclude the unpickleable '_lock' and '_stop_event' attributes
|
||||
if '_lock' in state:
|
||||
del state['_lock']
|
||||
if '_stop_event' in state:
|
||||
del state['_stop_event']
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Restore TaskQueue after unpickling by reconstructing threading objects."""
|
||||
self.__dict__.update(state)
|
||||
# Re-initialize the '_lock' and '_stop_event' attributes
|
||||
self._lock = threading.Lock()
|
||||
self._stop_event = threading.Event()
|
||||
|
||||
def add_task(self, task: ReconTask) -> str:
|
||||
"""Add task to queue."""
|
||||
with self._lock:
|
||||
self.tasks[task.task_id] = task
|
||||
self.pending_queue.append(task.task_id)
|
||||
print(f"Added task {task.task_id}: {task.provider_name} query for {task.target}")
|
||||
return task.task_id
|
||||
|
||||
def get_next_ready_task(self) -> Optional[ReconTask]:
|
||||
"""Get next task ready for execution."""
|
||||
with self._lock:
|
||||
# Check if we have room for more concurrent tasks
|
||||
if len(self.running_tasks) >= self.max_concurrent_tasks:
|
||||
return None
|
||||
|
||||
# First priority: retry queue (tasks ready for retry)
|
||||
while self.retry_queue:
|
||||
task_id = self.retry_queue.popleft()
|
||||
if task_id in self.tasks:
|
||||
task = self.tasks[task_id]
|
||||
if task.should_retry():
|
||||
task.status = TaskStatus.RUNNING
|
||||
self.running_tasks.add(task_id)
|
||||
print(f"Retrying task {task_id} (attempt {task.current_retry + 1})")
|
||||
return task
|
||||
|
||||
# Second priority: pending queue (new tasks)
|
||||
while self.pending_queue:
|
||||
task_id = self.pending_queue.popleft()
|
||||
if task_id in self.tasks:
|
||||
task = self.tasks[task_id]
|
||||
if task.status == TaskStatus.PENDING:
|
||||
task.status = TaskStatus.RUNNING
|
||||
self.running_tasks.add(task_id)
|
||||
print(f"Starting task {task_id}")
|
||||
return task
|
||||
|
||||
return None
|
||||
|
||||
def complete_task(self, task_id: str, success: bool, data: Any = None,
|
||||
error: str = None, metadata: Dict[str, Any] = None):
|
||||
"""Mark task as completed (success or failure)."""
|
||||
with self._lock:
|
||||
if task_id not in self.tasks:
|
||||
return
|
||||
|
||||
task = self.tasks[task_id]
|
||||
self.running_tasks.discard(task_id)
|
||||
|
||||
if success:
|
||||
task.mark_succeeded(data=data, metadata=metadata)
|
||||
print(f"Task {task_id} succeeded")
|
||||
else:
|
||||
task.mark_failed(error or "Unknown error", metadata=metadata)
|
||||
if task.status == TaskStatus.FAILED_RETRYING:
|
||||
self.retry_queue.append(task_id)
|
||||
print(f"Task {task_id} failed, scheduled for retry at {task.next_retry_at}")
|
||||
else:
|
||||
print(f"Task {task_id} permanently failed after {task.current_retry} attempts")
|
||||
|
||||
def cancel_all_tasks(self):
|
||||
"""Cancel all pending and running tasks."""
|
||||
with self._lock:
|
||||
self._stop_event.set()
|
||||
for task in self.tasks.values():
|
||||
if task.status in [TaskStatus.PENDING, TaskStatus.RUNNING, TaskStatus.FAILED_RETRYING]:
|
||||
task.status = TaskStatus.CANCELLED
|
||||
self.pending_queue.clear()
|
||||
self.retry_queue.clear()
|
||||
self.running_tasks.clear()
|
||||
print("All tasks cancelled")
|
||||
|
||||
def is_complete(self) -> bool:
|
||||
"""Check if all tasks are complete (succeeded, permanently failed, or cancelled)."""
|
||||
with self._lock:
|
||||
for task in self.tasks.values():
|
||||
if task.status in [TaskStatus.PENDING, TaskStatus.RUNNING, TaskStatus.FAILED_RETRYING]:
|
||||
return False
|
||||
return True
|
||||
|
||||
def get_statistics(self) -> Dict[str, Any]:
|
||||
"""Get queue statistics."""
|
||||
with self._lock:
|
||||
stats = {
|
||||
'total_tasks': len(self.tasks),
|
||||
'pending': len(self.pending_queue),
|
||||
'running': len(self.running_tasks),
|
||||
'retry_queue': len(self.retry_queue),
|
||||
'succeeded': 0,
|
||||
'failed_permanent': 0,
|
||||
'cancelled': 0,
|
||||
'failed_retrying': 0
|
||||
}
|
||||
|
||||
for task in self.tasks.values():
|
||||
if task.status == TaskStatus.SUCCEEDED:
|
||||
stats['succeeded'] += 1
|
||||
elif task.status == TaskStatus.FAILED_PERMANENT:
|
||||
stats['failed_permanent'] += 1
|
||||
elif task.status == TaskStatus.CANCELLED:
|
||||
stats['cancelled'] += 1
|
||||
elif task.status == TaskStatus.FAILED_RETRYING:
|
||||
stats['failed_retrying'] += 1
|
||||
|
||||
stats['completion_rate'] = (stats['succeeded'] / stats['total_tasks'] * 100) if stats['total_tasks'] > 0 else 0
|
||||
stats['is_complete'] = self.is_complete()
|
||||
|
||||
return stats
|
||||
|
||||
def get_task_summaries(self) -> List[Dict[str, Any]]:
|
||||
"""Get summaries of all tasks for detailed progress reporting."""
|
||||
with self._lock:
|
||||
return [task.get_summary() for task in self.tasks.values()]
|
||||
|
||||
def get_failed_tasks(self) -> List[ReconTask]:
|
||||
"""Get all permanently failed tasks for analysis."""
|
||||
with self._lock:
|
||||
return [task for task in self.tasks.values() if task.status == TaskStatus.FAILED_PERMANENT]
|
||||
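Driving the queue by hand looks roughly like the following (the `TaskManager` further down wraps this pattern in worker threads; the provider name here is a placeholder):

```python
# Sketch of direct TaskQueue usage.
queue = TaskQueue(max_concurrent_tasks=2)
task = ReconTask(task_id="", task_type=TaskType.DOMAIN_QUERY,
                 target="example.com", provider_name="crtsh", depth=1)
queue.add_task(task)

ready = queue.get_next_ready_task()
if ready is not None:
    # ... run the provider query, then report the outcome back ...
    queue.complete_task(ready.task_id, success=True, data={'relationships': []})

print(queue.get_statistics()['succeeded'])  # 1
```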
|
||||
|
||||
class TaskExecutor:
|
||||
"""Executes reconnaissance tasks using providers."""
|
||||
|
||||
def __init__(self, providers: List, graph_manager, logger):
|
||||
"""Initialize task executor."""
|
||||
self.providers = {provider.get_name(): provider for provider in providers}
|
||||
self.graph = graph_manager
|
||||
self.logger = logger
|
||||
|
||||
def execute_task(self, task: ReconTask) -> TaskResult:
|
||||
"""
|
||||
Execute a single reconnaissance task.
|
||||
|
||||
Args:
|
||||
task: Task to execute
|
||||
|
||||
Returns:
|
||||
TaskResult with success/failure information
|
||||
"""
|
||||
try:
|
||||
print(f"Executing task {task.task_id}: {task.provider_name} query for {task.target}")
|
||||
|
||||
provider = self.providers.get(task.provider_name)
|
||||
if not provider:
|
||||
return TaskResult(
|
||||
success=False,
|
||||
error=f"Provider {task.provider_name} not available"
|
||||
)
|
||||
|
||||
if not provider.is_available():
|
||||
return TaskResult(
|
||||
success=False,
|
||||
error=f"Provider {task.provider_name} is not available (missing API key or configuration)"
|
||||
)
|
||||
|
||||
# Execute provider query based on task type
|
||||
if task.task_type == TaskType.DOMAIN_QUERY:
|
||||
if not _is_valid_domain(task.target):
|
||||
return TaskResult(success=False, error=f"Invalid domain: {task.target}")
|
||||
|
||||
relationships = provider.query_domain(task.target)
|
||||
|
||||
elif task.task_type == TaskType.IP_QUERY:
|
||||
if not _is_valid_ip(task.target):
|
||||
return TaskResult(success=False, error=f"Invalid IP: {task.target}")
|
||||
|
||||
relationships = provider.query_ip(task.target)
|
||||
|
||||
else:
|
||||
return TaskResult(success=False, error=f"Unsupported task type: {task.task_type}")
|
||||
|
||||
# Process results and update graph
|
||||
new_targets = set()
|
||||
relationships_added = 0
|
||||
|
||||
for source, target, rel_type, confidence, raw_data in relationships:
|
||||
# Add nodes to graph
|
||||
from core.graph_manager import NodeType
|
||||
|
||||
if _is_valid_ip(target):
|
||||
self.graph.add_node(target, NodeType.IP)
|
||||
new_targets.add(target)
|
||||
elif target.startswith('AS') and target[2:].isdigit():
|
||||
self.graph.add_node(target, NodeType.ASN)
|
||||
elif _is_valid_domain(target):
|
||||
self.graph.add_node(target, NodeType.DOMAIN)
|
||||
new_targets.add(target)
|
||||
|
||||
# Add edge to graph
|
||||
if self.graph.add_edge(source, target, rel_type, confidence, task.provider_name, raw_data):
|
||||
relationships_added += 1
|
||||
|
||||
# Log forensic information
|
||||
self.logger.logger.info(
|
||||
f"Task {task.task_id} completed: {len(relationships)} relationships found, "
|
||||
f"{relationships_added} added to graph, {len(new_targets)} new targets"
|
||||
)
|
||||
|
||||
return TaskResult(
|
||||
success=True,
|
||||
data={
|
||||
'relationships': relationships,
|
||||
'new_targets': list(new_targets),
|
||||
'relationships_added': relationships_added
|
||||
},
|
||||
metadata={
|
||||
'provider': task.provider_name,
|
||||
'target': task.target,
|
||||
'depth': task.depth,
|
||||
'execution_time': datetime.now(timezone.utc).isoformat()
|
||||
}
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Task execution failed: {str(e)}"
|
||||
print(f"ERROR: {error_msg} for task {task.task_id}")
|
||||
self.logger.logger.error(error_msg)
|
||||
|
||||
return TaskResult(
|
||||
success=False,
|
||||
error=error_msg,
|
||||
metadata={
|
||||
'provider': task.provider_name,
|
||||
'target': task.target,
|
||||
'exception_type': type(e).__name__
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
class TaskManager:
|
||||
"""High-level task management for reconnaissance scans."""
|
||||
|
||||
def __init__(self, providers: List, graph_manager, logger, max_concurrent_tasks: int = 5):
|
||||
"""Initialize task manager."""
|
||||
self.task_queue = TaskQueue(max_concurrent_tasks)
|
||||
self.task_executor = TaskExecutor(providers, graph_manager, logger)
|
||||
self.logger = logger
|
||||
|
||||
# Execution control
|
||||
self._stop_event = threading.Event()
|
||||
self._execution_threads: List[threading.Thread] = []
|
||||
self._is_running = False
|
||||
|
||||
def create_provider_tasks(self, target: str, depth: int, providers: List) -> List[str]:
|
||||
"""
|
||||
Create tasks for querying all eligible providers for a target.
|
||||
|
||||
Args:
|
||||
target: Domain or IP to query
|
||||
depth: Current recursion depth
|
||||
providers: List of available providers
|
||||
|
||||
Returns:
|
||||
List of created task IDs
|
||||
"""
|
||||
task_ids = []
|
||||
is_ip = _is_valid_ip(target)
|
||||
target_key = 'ips' if is_ip else 'domains'
|
||||
task_type = TaskType.IP_QUERY if is_ip else TaskType.DOMAIN_QUERY
|
||||
|
||||
for provider in providers:
|
||||
if provider.get_eligibility().get(target_key) and provider.is_available():
|
||||
task = ReconTask(
|
||||
task_id=str(uuid.uuid4())[:8],
|
||||
task_type=task_type,
|
||||
target=target,
|
||||
provider_name=provider.get_name(),
|
||||
depth=depth,
|
||||
max_retries=3 # Configure retries per task type/provider
|
||||
)
|
||||
|
||||
task_id = self.task_queue.add_task(task)
|
||||
task_ids.append(task_id)
|
||||
|
||||
return task_ids
|
||||
|
||||
def start_execution(self, max_workers: int = 3):
|
||||
"""Start task execution with specified number of worker threads."""
|
||||
if self._is_running:
|
||||
print("Task execution already running")
|
||||
return
|
||||
|
||||
self._is_running = True
|
||||
self._stop_event.clear()
|
||||
|
||||
print(f"Starting task execution with {max_workers} workers")
|
||||
|
||||
for i in range(max_workers):
|
||||
worker_thread = threading.Thread(
|
||||
target=self._worker_loop,
|
||||
name=f"TaskWorker-{i+1}",
|
||||
daemon=True
|
||||
)
|
||||
worker_thread.start()
|
||||
self._execution_threads.append(worker_thread)
|
||||
|
||||
def stop_execution(self):
|
||||
"""Stop task execution and cancel all tasks."""
|
||||
print("Stopping task execution")
|
||||
self._stop_event.set()
|
||||
self.task_queue.cancel_all_tasks()
|
||||
self._is_running = False
|
||||
|
||||
# Wait for worker threads to finish
|
||||
for thread in self._execution_threads:
|
||||
thread.join(timeout=5.0)
|
||||
|
||||
self._execution_threads.clear()
|
||||
print("Task execution stopped")
|
||||
|
||||
def _worker_loop(self):
|
||||
"""Worker thread loop for executing tasks."""
|
||||
thread_name = threading.current_thread().name
|
||||
print(f"{thread_name} started")
|
||||
|
||||
while not self._stop_event.is_set():
|
||||
try:
|
||||
# Get next task to execute
|
||||
task = self.task_queue.get_next_ready_task()
|
||||
|
||||
if task is None:
|
||||
# No tasks ready, check if we should exit
|
||||
if self.task_queue.is_complete() or self._stop_event.is_set():
|
||||
break
|
||||
time.sleep(0.1) # Brief sleep before checking again
|
||||
continue
|
||||
|
||||
# Execute the task
|
||||
result = self.task_executor.execute_task(task)
|
||||
|
||||
# Complete the task in queue
|
||||
self.task_queue.complete_task(
|
||||
task.task_id,
|
||||
success=result.success,
|
||||
data=result.data,
|
||||
error=result.error,
|
||||
metadata=result.metadata
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Worker {thread_name} encountered error: {e}")
|
||||
# Continue running even if individual task fails
|
||||
continue
|
||||
|
||||
print(f"{thread_name} finished")
|
||||
|
||||
def wait_for_completion(self, timeout_seconds: int = 300) -> bool:
|
||||
"""
|
||||
Wait for all tasks to complete.
|
||||
|
||||
Args:
|
||||
timeout_seconds: Maximum time to wait
|
||||
|
||||
Returns:
|
||||
True if all tasks completed, False if timeout
|
||||
"""
|
||||
start_time = time.time()
|
||||
|
||||
while time.time() - start_time < timeout_seconds:
|
||||
if self.task_queue.is_complete():
|
||||
return True
|
||||
|
||||
if self._stop_event.is_set():
|
||||
return False
|
||||
|
||||
time.sleep(1.0) # Check every second
|
||||
|
||||
print(f"Timeout waiting for task completion after {timeout_seconds} seconds")
|
||||
return False
|
||||
|
||||
def get_progress_report(self) -> Dict[str, Any]:
|
||||
"""Get detailed progress report for UI updates."""
|
||||
stats = self.task_queue.get_statistics()
|
||||
failed_tasks = self.task_queue.get_failed_tasks()
|
||||
|
||||
return {
|
||||
'statistics': stats,
|
||||
'failed_tasks': [task.get_summary() for task in failed_tasks],
|
||||
'is_running': self._is_running,
|
||||
'worker_count': len(self._execution_threads),
|
||||
'detailed_tasks': self.task_queue.get_task_summaries() if stats['total_tasks'] < 50 else [] # Limit detail for performance
|
||||
}
|
||||
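Putting the pieces of this module together, a high-level sketch of how a scan run might drive the `TaskManager` (the `providers`, `graph` and `logger` objects come from the Scanner and are only placeholders here):

```python
# Orchestration sketch; assumes providers/graph/logger are already constructed.
manager = TaskManager(providers=providers, graph_manager=graph, logger=logger,
                      max_concurrent_tasks=5)
manager.create_provider_tasks("example.com", depth=0, providers=providers)
manager.start_execution(max_workers=3)

if manager.wait_for_completion(timeout_seconds=300):
    report = manager.get_progress_report()
    print(report['statistics']['completion_rate'])
manager.stop_execution()
```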
976
dnsrecon.py
@@ -1,976 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Enhanced DNS Reconnaissance Tool with Recursive Analysis
|
||||
|
||||
Copyright (c) 2025 mstoeck3.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
|
||||
|
||||
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
|
||||
|
||||
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
"""
|
||||
|
||||
import subprocess
|
||||
import json
|
||||
import requests
|
||||
import argparse
|
||||
import sys
|
||||
import time
|
||||
import os
|
||||
import re
|
||||
import ipaddress
|
||||
from datetime import datetime
|
||||
from typing import Dict, List, Optional, Any, Set
|
||||
from urllib.parse import urlparse
|
||||
import threading
|
||||
from queue import Queue, Empty
|
||||
|
||||
class EnhancedDNSReconTool:
|
||||
def __init__(self, shodan_api_key: Optional[str] = None, virustotal_api_key: Optional[str] = None):
|
||||
self.shodan_api_key = shodan_api_key
|
||||
self.virustotal_api_key = virustotal_api_key
|
||||
self.output_dir = "dns_recon_results"
|
||||
self.session = requests.Session()
|
||||
self.session.headers.update({
|
||||
'User-Agent': 'EnhancedDNSReconTool/2.0 (Educational/Research Purpose)'
|
||||
})
|
||||
|
||||
# Track processed items to avoid infinite recursion
|
||||
self.processed_domains: Set[str] = set()
|
||||
self.processed_ips: Set[str] = set()
|
||||
|
||||
# Results storage for recursive analysis
|
||||
self.all_results: Dict[str, Any] = {}
|
||||
|
||||
# Rate limiting
|
||||
self.last_vt_request = 0
|
||||
self.last_shodan_request = 0
|
||||
self.vt_rate_limit = 4 # 4 requests per minute for free tier
|
||||
self.shodan_rate_limit = 1 # 1 request per second for free tier
|
||||
|
||||
def check_dependencies(self) -> bool:
|
||||
"""Check if required system tools are available."""
|
||||
required_tools = ['dig', 'whois']
|
||||
missing_tools = []
|
||||
|
||||
for tool in required_tools:
|
||||
try:
|
||||
subprocess.run([tool, '--help'],
|
||||
capture_output=True, check=False, timeout=5)
|
||||
except (subprocess.TimeoutExpired, FileNotFoundError):
|
||||
missing_tools.append(tool)
|
||||
|
||||
if missing_tools:
|
||||
print(f"❌ Missing required tools: {', '.join(missing_tools)}")
|
||||
print("Install with: apt install dnsutils whois (Ubuntu/Debian)")
|
||||
return False
|
||||
return True
|
||||
|
||||
def run_command(self, cmd: str, timeout: int = 30) -> str:
|
||||
"""Run shell command with timeout and error handling."""
|
||||
try:
|
||||
result = subprocess.run(
|
||||
cmd, shell=True, capture_output=True,
|
||||
text=True, timeout=timeout
|
||||
)
|
||||
return result.stdout.strip() if result.stdout else result.stderr.strip()
|
||||
except subprocess.TimeoutExpired:
|
||||
return "Error: Command timed out"
|
||||
except Exception as e:
|
||||
return f"Error: {str(e)}"
|
||||
|
||||
def rate_limit_virustotal(self):
|
||||
"""Implement rate limiting for VirusTotal API."""
|
||||
current_time = time.time()
|
||||
time_since_last = current_time - self.last_vt_request
|
||||
min_interval = 60 / self.vt_rate_limit # seconds between requests
|
||||
|
||||
if time_since_last < min_interval:
|
||||
sleep_time = min_interval - time_since_last
|
||||
print(f" Rate limiting: waiting {sleep_time:.1f}s for VirusTotal...")
|
||||
time.sleep(sleep_time)
|
||||
|
||||
self.last_vt_request = time.time()
|
||||
|
||||
def rate_limit_shodan(self):
|
||||
"""Implement rate limiting for Shodan API."""
|
||||
current_time = time.time()
|
||||
time_since_last = current_time - self.last_shodan_request
|
||||
min_interval = 1 / self.shodan_rate_limit # seconds between requests
|
||||
|
||||
if time_since_last < min_interval:
|
||||
sleep_time = min_interval - time_since_last
|
||||
time.sleep(sleep_time)
|
||||
|
||||
self.last_shodan_request = time.time()
|
||||
|
||||
def query_virustotal_domain(self, domain: str) -> Dict[str, Any]:
|
||||
"""Query VirusTotal API for domain information."""
|
||||
if not self.virustotal_api_key:
|
||||
return {
|
||||
'success': False,
|
||||
'message': 'No VirusTotal API key provided'
|
||||
}
|
||||
|
||||
print(f"🔍 Querying VirusTotal for domain: {domain}")
|
||||
|
||||
try:
|
||||
self.rate_limit_virustotal()
|
||||
|
||||
url = f"https://www.virustotal.com/vtapi/v2/domain/report"
|
||||
params = {
|
||||
'apikey': self.virustotal_api_key,
|
||||
'domain': domain
|
||||
}
|
||||
|
||||
response = self.session.get(url, params=params, timeout=30)
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
|
||||
# Extract key information
|
||||
result = {
|
||||
'success': True,
|
||||
'domain': domain,
|
||||
'response_code': data.get('response_code', 0),
|
||||
'verbose_msg': data.get('verbose_msg', ''),
|
||||
'detection_ratio': f"{data.get('positives', 0)}/{data.get('total', 0)}"
|
||||
}
|
||||
|
||||
# Add scan results if available
|
||||
if 'scans' in data:
|
||||
result['scan_engines'] = len(data['scans'])
|
||||
result['malicious_engines'] = sum(1 for scan in data['scans'].values() if scan.get('detected', False))
|
||||
result['scan_summary'] = {}
|
||||
|
||||
# Categorize detections
|
||||
for engine, scan_result in data['scans'].items():
|
||||
if scan_result.get('detected', False):
|
||||
category = scan_result.get('result', 'malicious')
|
||||
if category not in result['scan_summary']:
|
||||
result['scan_summary'][category] = []
|
||||
result['scan_summary'][category].append(engine)
|
||||
|
||||
# Add additional data if available
|
||||
for key in ['subdomains', 'detected_urls', 'undetected_urls', 'resolutions']:
|
||||
if key in data:
|
||||
result[key] = data[key]
|
||||
|
||||
return result
|
||||
else:
|
||||
return {
|
||||
'success': False,
|
||||
'error': f"HTTP {response.status_code}",
|
||||
'message': response.text[:200]
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
return {
|
||||
'success': False,
|
||||
'error': str(e),
|
||||
'message': 'VirusTotal domain query failed'
|
||||
}
|
||||
|
||||
def query_virustotal_ip(self, ip: str) -> Dict[str, Any]:
|
||||
"""Query VirusTotal API for IP information."""
|
||||
if not self.virustotal_api_key:
|
||||
return {
|
||||
'success': False,
|
||||
'message': 'No VirusTotal API key provided'
|
||||
}
|
||||
|
||||
print(f"🔍 Querying VirusTotal for IP: {ip}")
|
||||
|
||||
try:
|
||||
self.rate_limit_virustotal()
|
||||
|
||||
url = f"https://www.virustotal.com/vtapi/v2/ip-address/report"
|
||||
params = {
|
||||
'apikey': self.virustotal_api_key,
|
||||
'ip': ip
|
||||
}
|
||||
|
||||
response = self.session.get(url, params=params, timeout=30)
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
|
||||
result = {
|
||||
'success': True,
|
||||
'ip': ip,
|
||||
'response_code': data.get('response_code', 0),
|
||||
'verbose_msg': data.get('verbose_msg', ''),
|
||||
'detection_ratio': f"{data.get('positives', 0)}/{data.get('total', 0)}"
|
||||
}
|
||||
|
||||
# Add scan results if available
|
||||
if 'scans' in data:
|
||||
result['scan_engines'] = len(data['scans'])
|
||||
result['malicious_engines'] = sum(1 for scan in data['scans'].values() if scan.get('detected', False))
|
||||
|
||||
# Add additional data
|
||||
for key in ['detected_urls', 'undetected_urls', 'resolutions', 'asn', 'country']:
|
||||
if key in data:
|
||||
result[key] = data[key]
|
||||
|
||||
return result
|
||||
else:
|
||||
return {
|
||||
'success': False,
|
||||
'error': f"HTTP {response.status_code}",
|
||||
'message': response.text[:200]
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
return {
|
||||
'success': False,
|
||||
'error': str(e),
|
||||
'message': 'VirusTotal IP query failed'
|
||||
}
|
||||
|
||||
def get_dns_records(self, domain: str, record_type: str,
|
||||
server: Optional[str] = None) -> Dict[str, Any]:
|
||||
"""Fetch DNS records with comprehensive error handling and proper parsing."""
|
||||
server_flag = f"@{server}" if server else ""
|
||||
cmd = f"dig {domain} {record_type} {server_flag} +noall +answer"
|
||||
|
||||
output = self.run_command(cmd)
|
||||
|
||||
# Parse the output into structured data
|
||||
records = []
|
||||
if output and not output.startswith("Error:"):
|
||||
for line in output.split('\n'):
|
||||
line = line.strip()
|
||||
if line and not line.startswith(';') and not line.startswith('>>'):
|
||||
# Split on any whitespace (handles both tabs and spaces)
|
||||
parts = line.split()
|
||||
|
||||
if len(parts) >= 4:
|
||||
name = parts[0].rstrip('.')
|
||||
|
||||
# Check if second field is numeric (TTL)
|
||||
if len(parts) >= 5 and parts[1].isdigit():
|
||||
# Format: name TTL class type data
|
||||
ttl = parts[1]
|
||||
dns_class = parts[2]
|
||||
dns_type = parts[3]
|
||||
data = ' '.join(parts[4:])
|
||||
else:
|
||||
# Format: name class type data (no TTL shown)
|
||||
ttl = ''
|
||||
dns_class = parts[1]
|
||||
dns_type = parts[2]
|
||||
data = ' '.join(parts[3:]) if len(parts) > 3 else ''
|
||||
|
||||
# Validate that we have the expected record type
|
||||
if dns_type.upper() == record_type.upper():
|
||||
records.append({
|
||||
'name': name,
|
||||
'ttl': ttl,
|
||||
'class': dns_class,
|
||||
'type': dns_type,
|
||||
'data': data
|
||||
})
|
||||
|
||||
return {
|
||||
'query': f"{domain} {record_type}",
|
||||
'server': server or 'system',
|
||||
'raw_output': output,
|
||||
'records': records,
|
||||
'record_count': len(records)
|
||||
}
|
||||
|
||||
def get_comprehensive_dns(self, domain: str) -> Dict[str, Any]:
|
||||
"""Get comprehensive DNS information."""
|
||||
print(f"🔍 Gathering DNS records for {domain}...")
|
||||
|
||||
# Standard record types
|
||||
record_types = ['A', 'AAAA', 'MX', 'NS', 'SOA', 'TXT', 'CNAME',
|
||||
'CAA', 'SRV', 'PTR']
|
||||
|
||||
# DNS servers to query
|
||||
dns_servers = [
|
||||
None, # System default
|
||||
'1.1.1.1', # Cloudflare
|
||||
'8.8.8.8', # Google
|
||||
'9.9.9.9', # Quad9
|
||||
]
|
||||
|
||||
dns_results = {}
|
||||
|
||||
for record_type in record_types:
|
||||
dns_results[record_type] = {}
|
||||
for server in dns_servers:
|
||||
server_name = server or 'system'
|
||||
result = self.get_dns_records(domain, record_type, server)
|
||||
dns_results[record_type][server_name] = result
|
||||
|
||||
time.sleep(0.1) # Rate limiting
|
||||
|
||||
# Try DNSSEC validation
|
||||
dnssec_cmd = f"dig {domain} +dnssec +noall +answer"
|
||||
dns_results['DNSSEC'] = {
|
||||
'system': {
|
||||
'query': f"{domain} +dnssec",
|
||||
'raw_output': self.run_command(dnssec_cmd),
|
||||
'records': [],
|
||||
'record_count': 0
|
||||
}
|
||||
}
|
||||
|
||||
return dns_results
|
||||
|
||||
def perform_reverse_dns(self, ip: str) -> Dict[str, Any]:
|
||||
"""Perform reverse DNS lookup on IP address."""
|
||||
print(f"🔄 Reverse DNS lookup for {ip}")
|
||||
|
||||
try:
|
||||
# Validate IP address
|
||||
ipaddress.ip_address(ip)
|
||||
|
||||
# Perform reverse DNS lookup
|
||||
cmd = f"dig -x {ip} +short"
|
||||
output = self.run_command(cmd)
|
||||
|
||||
hostnames = []
|
||||
if output and not output.startswith("Error:"):
|
||||
hostnames = [line.strip().rstrip('.') for line in output.split('\n') if line.strip()]
|
||||
|
||||
return {
|
||||
'success': True,
|
||||
'ip': ip,
|
||||
'hostnames': hostnames,
|
||||
'hostname_count': len(hostnames),
|
||||
'raw_output': output
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
return {
|
||||
'success': False,
|
||||
'ip': ip,
|
||||
'error': str(e),
|
||||
'hostnames': [],
|
||||
'hostname_count': 0
|
||||
}
|
||||
|
||||
def extract_subdomains_from_certificates(self, domain: str) -> Set[str]:
|
||||
"""Extract subdomains from certificate transparency logs."""
|
||||
print(f"📋 Extracting subdomains from certificates for {domain}")
|
||||
|
||||
try:
|
||||
url = f"https://crt.sh/?q=%.{domain}&output=json"
|
||||
response = self.session.get(url, timeout=30)
|
||||
|
||||
subdomains = set()
|
||||
|
||||
if response.status_code == 200:
|
||||
cert_data = response.json()
|
||||
|
||||
for cert in cert_data:
|
||||
name_value = cert.get('name_value', '')
|
||||
if name_value:
|
||||
# Handle multiple domains in one certificate
|
||||
domains_in_cert = [d.strip() for d in name_value.split('\n')]
|
||||
for subdomain in domains_in_cert:
|
||||
# Clean up the subdomain
|
||||
subdomain = subdomain.lower().strip()
|
||||
if subdomain and '.' in subdomain:
|
||||
# Only include subdomains of the target domain
|
||||
if subdomain.endswith(f".{domain}") or subdomain == domain:
|
||||
subdomains.add(subdomain)
|
||||
elif subdomain.startswith("*."):
|
||||
# Handle wildcard certificates
|
||||
clean_subdomain = subdomain[2:]
|
||||
if clean_subdomain.endswith(f".{domain}") or clean_subdomain == domain:
|
||||
subdomains.add(clean_subdomain)
|
||||
|
||||
return subdomains
|
||||
|
||||
except Exception as e:
|
||||
print(f" Error extracting subdomains: {e}")
|
||||
return set()
|
||||
|
||||
def extract_ips_from_dns(self, dns_data: Dict[str, Any]) -> Set[str]:
|
||||
"""Extract IP addresses from DNS records."""
|
||||
ips = set()
|
||||
|
||||
# Extract from A records
|
||||
for server_data in dns_data.get('A', {}).values():
|
||||
for record in server_data.get('records', []):
|
||||
ip = record.get('data', '')
|
||||
if ip and self.is_valid_ip(ip):
|
||||
ips.add(ip)
|
||||
|
||||
# Extract from AAAA records
|
||||
for server_data in dns_data.get('AAAA', {}).values():
|
||||
for record in server_data.get('records', []):
|
||||
ipv6 = record.get('data', '')
|
||||
if ipv6 and self.is_valid_ip(ipv6):
|
||||
ips.add(ipv6)
|
||||
|
||||
return ips
|
||||
|
||||
def is_valid_ip(self, ip: str) -> bool:
|
||||
"""Check if string is a valid IP address."""
|
||||
try:
|
||||
ipaddress.ip_address(ip)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
def get_whois_data(self, domain: str) -> Dict[str, Any]:
|
||||
"""Fetch and parse WHOIS data with improved parsing."""
|
||||
print(f"📋 Fetching WHOIS data for {domain}...")
|
||||
|
||||
raw_whois = self.run_command(f"whois {domain}")
|
||||
|
||||
# Basic parsing of common WHOIS fields
|
||||
whois_data = {
|
||||
'raw': raw_whois,
|
||||
'parsed': {}
|
||||
}
|
||||
|
||||
if not raw_whois.startswith("Error:"):
|
||||
lines = raw_whois.split('\n')
|
||||
for line in lines:
|
||||
line = line.strip()
|
||||
if ':' in line and not line.startswith('%') and not line.startswith('#') and not line.startswith('>>>'):
|
||||
# Handle different WHOIS formats
|
||||
if line.count(':') == 1:
|
||||
key, value = line.split(':', 1)
|
||||
else:
|
||||
# Multiple colons - take first as key, rest as value
|
||||
parts = line.split(':', 2)
|
||||
key, value = parts[0], ':'.join(parts[1:])
|
||||
|
||||
key = key.strip().lower().replace(' ', '_').replace('-', '_')
|
||||
value = value.strip()
|
||||
if value and key:
|
||||
# Handle multiple values for same key (like name servers)
|
||||
if key in whois_data['parsed']:
|
||||
# Convert to list if not already
|
||||
if not isinstance(whois_data['parsed'][key], list):
|
||||
whois_data['parsed'][key] = [whois_data['parsed'][key]]
|
||||
whois_data['parsed'][key].append(value)
|
||||
else:
|
||||
whois_data['parsed'][key] = value
|
||||
|
||||
return whois_data
|
||||
|
||||
def get_certificate_transparency(self, domain: str) -> Dict[str, Any]:
|
||||
"""Query certificate transparency logs via crt.sh."""
|
||||
print(f"🔐 Querying certificate transparency logs for {domain}...")
|
||||
|
||||
try:
|
||||
# Query crt.sh API
|
||||
url = f"https://crt.sh/?q=%.{domain}&output=json"
|
||||
response = self.session.get(url, timeout=30)
|
||||
|
||||
if response.status_code == 200:
|
||||
cert_data = response.json()
|
||||
|
||||
# Extract unique subdomains
|
||||
subdomains = set()
|
||||
cert_details = []
|
||||
|
||||
for cert in cert_data:
|
||||
# Extract subdomains from name_value
|
||||
name_value = cert.get('name_value', '')
|
||||
if name_value:
|
||||
# Handle multiple domains in one certificate
|
||||
domains_in_cert = [d.strip() for d in name_value.split('\n')]
|
||||
subdomains.update(domains_in_cert)
|
||||
|
||||
cert_details.append({
|
||||
'id': cert.get('id'),
|
||||
'issuer': cert.get('issuer_name'),
|
||||
'common_name': cert.get('common_name'),
|
||||
'name_value': cert.get('name_value'),
|
||||
'not_before': cert.get('not_before'),
|
||||
'not_after': cert.get('not_after'),
|
||||
'serial_number': cert.get('serial_number')
|
||||
})
|
||||
|
||||
return {
|
||||
'success': True,
|
||||
'total_certificates': len(cert_data),
|
||||
'unique_subdomains': sorted(list(subdomains)),
|
||||
'subdomain_count': len(subdomains),
|
||||
'certificates': cert_details[:50] # Limit for output size
|
||||
}
|
||||
else:
|
||||
return {
|
||||
'success': False,
|
||||
'error': f"HTTP {response.status_code}",
|
||||
'message': 'Failed to fetch certificate data'
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
return {
|
||||
'success': False,
|
||||
'error': str(e),
|
||||
'message': 'Request to crt.sh failed'
|
||||
}
|
||||
|
||||
def query_shodan(self, domain: str) -> Dict[str, Any]:
|
||||
"""Query Shodan API for domain information."""
|
||||
if not self.shodan_api_key:
|
||||
return {
|
||||
'success': False,
|
||||
'message': 'No Shodan API key provided'
|
||||
}
|
||||
|
||||
print(f"🔎 Querying Shodan for {domain}...")
|
||||
|
||||
try:
|
||||
self.rate_limit_shodan()
|
||||
|
||||
# Search for the domain
|
||||
url = f"https://api.shodan.io/shodan/host/search"
|
||||
params = {
|
||||
'key': self.shodan_api_key,
|
||||
'query': f'hostname:{domain}'
|
||||
}
|
||||
|
||||
response = self.session.get(url, params=params, timeout=30)
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
return {
|
||||
'success': True,
|
||||
'total_results': data.get('total', 0),
|
||||
'matches': data.get('matches', [])[:10], # Limit results
|
||||
'facets': data.get('facets', {})
|
||||
}
|
||||
else:
|
||||
return {
|
||||
'success': False,
|
||||
'error': f"HTTP {response.status_code}",
|
||||
'message': response.text[:200]
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
return {
|
||||
'success': False,
|
||||
'error': str(e),
|
||||
'message': 'Shodan query failed'
|
||||
}
|
||||
|
||||
def query_shodan_ip(self, ip: str) -> Dict[str, Any]:
|
||||
"""Query Shodan API for IP information."""
|
||||
if not self.shodan_api_key:
|
||||
return {
|
||||
'success': False,
|
||||
'message': 'No Shodan API key provided'
|
||||
}
|
||||
|
||||
print(f"🔎 Querying Shodan for IP {ip}...")
|
||||
|
||||
try:
|
||||
self.rate_limit_shodan()
|
||||
|
||||
url = f"https://api.shodan.io/shodan/host/{ip}"
|
||||
params = {'key': self.shodan_api_key}
|
||||
|
||||
response = self.session.get(url, params=params, timeout=30)
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
return {
|
||||
'success': True,
|
||||
'ip': ip,
|
||||
'data': data
|
||||
}
|
||||
else:
|
||||
return {
|
||||
'success': False,
|
||||
'error': f"HTTP {response.status_code}",
|
||||
'message': response.text[:200]
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
return {
|
||||
'success': False,
|
||||
'error': str(e),
|
||||
'message': 'Shodan IP query failed'
|
||||
}
|
||||
|
||||
def analyze_domain_recursively(self, domain: str, depth: int = 0, max_depth: int = 2) -> Dict[str, Any]:
|
||||
"""Perform comprehensive analysis on a domain with recursive subdomain discovery."""
|
||||
if domain in self.processed_domains or depth > max_depth:
|
||||
return {}
|
||||
|
||||
self.processed_domains.add(domain)
|
||||
|
||||
print(f"\n{' ' * depth}🎯 Analyzing domain: {domain} (depth {depth})")
|
||||
|
||||
results = {
|
||||
'domain': domain,
|
||||
'timestamp': datetime.now().isoformat(),
|
||||
'depth': depth,
|
||||
'dns_records': {},
|
||||
'whois': {},
|
||||
'certificate_transparency': {},
|
||||
'virustotal_domain': {},
|
||||
'shodan': {},
|
||||
'discovered_ips': {},
|
||||
'discovered_subdomains': {}
|
||||
}
|
||||
|
||||
# DNS Records
|
||||
results['dns_records'] = self.get_comprehensive_dns(domain)
|
||||
|
||||
# Extract IP addresses from DNS records
|
||||
discovered_ips = self.extract_ips_from_dns(results['dns_records'])
|
||||
|
||||
# WHOIS (only for primary domain to avoid rate limiting)
|
||||
if depth == 0:
|
||||
results['whois'] = self.get_whois_data(domain)
|
||||
|
||||
# Certificate Transparency
|
||||
results['certificate_transparency'] = self.get_certificate_transparency(domain)
|
||||
|
||||
# VirusTotal Domain Analysis
|
||||
results['virustotal_domain'] = self.query_virustotal_domain(domain)
|
||||
|
||||
# Shodan Domain Analysis
|
||||
results['shodan'] = self.query_shodan(domain)
|
||||
|
||||
# Extract subdomains from certificate transparency
|
||||
if depth < max_depth:
|
||||
subdomains = self.extract_subdomains_from_certificates(domain)
|
||||
|
||||
# Filter out already processed subdomains
|
||||
new_subdomains = subdomains - self.processed_domains
|
||||
new_subdomains.discard(domain) # Remove the current domain itself
|
||||
|
||||
print(f"{' ' * depth}📋 Found {len(new_subdomains)} new subdomains to analyze")
|
||||
|
||||
# Recursively analyze subdomains (limit to prevent excessive recursion)
|
||||
for subdomain in list(new_subdomains)[:20]: # Limit to 20 subdomains per domain
|
||||
if subdomain not in self.processed_domains:
|
||||
subdomain_results = self.analyze_domain_recursively(subdomain, depth + 1, max_depth)
|
||||
if subdomain_results:
|
||||
results['discovered_subdomains'][subdomain] = subdomain_results
|
||||
|
||||
# Analyze discovered IP addresses
|
||||
for ip in discovered_ips:
|
||||
if ip not in self.processed_ips:
|
||||
ip_results = self.analyze_ip_recursively(ip, depth)
|
||||
if ip_results:
|
||||
results['discovered_ips'][ip] = ip_results
|
||||
|
||||
# Store in global results
|
||||
self.all_results[domain] = results
|
||||
|
||||
return results
|
||||
|
||||
def analyze_ip_recursively(self, ip: str, depth: int = 0) -> Dict[str, Any]:
|
||||
"""Perform comprehensive analysis on an IP address."""
|
||||
if ip in self.processed_ips:
|
||||
return {}
|
||||
|
||||
self.processed_ips.add(ip)
|
||||
|
||||
print(f"{' ' * depth}🌐 Analyzing IP: {ip}")
|
||||
|
||||
results = {
|
||||
'ip': ip,
|
||||
'timestamp': datetime.now().isoformat(),
|
||||
'reverse_dns': {},
|
||||
'virustotal_ip': {},
|
||||
'shodan_ip': {},
|
||||
'discovered_domains': {}
|
||||
}
|
||||
|
||||
# Reverse DNS lookup
|
||||
results['reverse_dns'] = self.perform_reverse_dns(ip)
|
||||
|
||||
# VirusTotal IP Analysis
|
||||
results['virustotal_ip'] = self.query_virustotal_ip(ip)
|
||||
|
||||
# Shodan IP Analysis
|
||||
results['shodan_ip'] = self.query_shodan_ip(ip)
|
||||
|
||||
# Analyze discovered domains from reverse DNS
|
||||
reverse_dns = results['reverse_dns']
|
||||
if reverse_dns.get('success') and reverse_dns.get('hostnames'):
|
||||
for hostname in reverse_dns['hostnames'][:5]: # Limit to 5 hostnames
|
||||
if hostname not in self.processed_domains and hostname.count('.') >= 1:
|
||||
# Only analyze if it's a reasonable hostname and not already processed
|
||||
domain_results = self.analyze_domain_recursively(hostname, depth + 1, max_depth=1)
|
||||
if domain_results:
|
||||
results['discovered_domains'][hostname] = domain_results
|
||||
|
||||
return results
|
||||
|
||||
def create_comprehensive_summary(self, filename: str) -> None:
|
||||
"""Create comprehensive summary report with recursive analysis results."""
|
||||
with open(filename, 'w', encoding='utf-8') as f:
|
||||
f.write("Enhanced DNS Reconnaissance Report with Recursive Analysis\n")
|
||||
f.write("=" * 65 + "\n")
|
||||
f.write(f"Analysis completed at: {datetime.now().isoformat()}\n")
|
||||
f.write(f"Total domains analyzed: {len(self.processed_domains)}\n")
|
||||
f.write(f"Total IP addresses analyzed: {len(self.processed_ips)}\n\n")
|
||||
|
||||
# Executive Summary
|
||||
f.write("EXECUTIVE SUMMARY\n")
|
||||
f.write("-" * 17 + "\n")
|
||||
|
||||
total_threats = 0
|
||||
domains_with_issues = []
|
||||
ips_with_issues = []
|
||||
|
||||
# Count threats across all analyzed domains and IPs
|
||||
for domain, domain_data in self.all_results.items():
|
||||
# Check VirusTotal results for domain
|
||||
vt_domain = domain_data.get('virustotal_domain', {})
|
||||
if vt_domain.get('success') and vt_domain.get('malicious_engines', 0) > 0:
|
||||
total_threats += 1
|
||||
domains_with_issues.append(domain)
|
||||
|
||||
# Check discovered IPs
|
||||
for ip, ip_data in domain_data.get('discovered_ips', {}).items():
|
||||
vt_ip = ip_data.get('virustotal_ip', {})
|
||||
if vt_ip.get('success') and vt_ip.get('malicious_engines', 0) > 0:
|
||||
total_threats += 1
|
||||
ips_with_issues.append(ip)
|
||||
|
||||
f.write(f"Security Status: {'⚠️ THREATS DETECTED' if total_threats > 0 else '✅ NO THREATS DETECTED'}\n")
|
||||
f.write(f"Total Security Issues: {total_threats}\n")
|
||||
if domains_with_issues:
|
||||
f.write(f"Domains with issues: {', '.join(domains_with_issues[:5])}\n")
|
||||
if ips_with_issues:
|
||||
f.write(f"IPs with issues: {', '.join(ips_with_issues[:5])}\n")
|
||||
f.write("\n")
|
||||
|
||||
# Process each domain in detail
|
||||
for domain, domain_data in self.all_results.items():
|
||||
if domain_data.get('depth', 0) == 0: # Only show primary domains in detail
|
||||
self._write_domain_analysis(f, domain, domain_data)
|
||||
|
||||
# Summary of all discovered assets
|
||||
f.write("\nASSET DISCOVERY SUMMARY\n")
|
||||
f.write("-" * 23 + "\n")
|
||||
f.write(f"All Discovered Domains ({len(self.processed_domains)}):\n")
|
||||
for domain in sorted(self.processed_domains):
|
||||
f.write(f" {domain}\n")
|
||||
|
||||
f.write(f"\nAll Discovered IP Addresses ({len(self.processed_ips)}):\n")
|
||||
for ip in sorted(self.processed_ips, key=lambda addr: (ipaddress.ip_address(addr).version, ipaddress.ip_address(addr))):  # handles both IPv4 and IPv6
|
||||
f.write(f" {ip}\n")
|
||||
|
||||
f.write(f"\n{'=' * 65}\n")
|
||||
f.write("Report Generation Complete\n")
|
||||
|
||||
def _write_domain_analysis(self, f, domain: str, domain_data: Dict[str, Any]) -> None:
|
||||
"""Write detailed domain analysis to file."""
|
||||
f.write(f"\nDETAILED ANALYSIS: {domain.upper()}\n")
|
||||
f.write("=" * (20 + len(domain)) + "\n")
|
||||
|
||||
# DNS Records Summary
|
||||
dns_data = domain_data.get('dns_records', {})
|
||||
f.write("DNS Records Summary:\n")
|
||||
for record_type in ['A', 'AAAA', 'MX', 'NS', 'TXT']:
|
||||
system_records = dns_data.get(record_type, {}).get('system', {}).get('records', [])
|
||||
f.write(f" {record_type}: {len(system_records)} records\n")
|
||||
|
||||
# Security Analysis
|
||||
f.write(f"\nSecurity Analysis:\n")
|
||||
|
||||
# VirusTotal Domain Results
|
||||
vt_domain = domain_data.get('virustotal_domain', {})
|
||||
if vt_domain.get('success'):
|
||||
detection_ratio = vt_domain.get('detection_ratio', '0/0')
|
||||
malicious_engines = vt_domain.get('malicious_engines', 0)
|
||||
f.write(f" VirusTotal Domain: {detection_ratio} ({malicious_engines} flagged as malicious)\n")
|
||||
|
||||
if malicious_engines > 0:
|
||||
f.write(f" ⚠️ SECURITY ALERT: Domain flagged by {malicious_engines} security engines\n")
|
||||
scan_summary = vt_domain.get('scan_summary', {})
|
||||
for category, engines in scan_summary.items():
|
||||
f.write(f" {category}: {', '.join(engines[:3])}\n")
|
||||
else:
|
||||
f.write(f" VirusTotal Domain: {vt_domain.get('message', 'Not available')}\n")
|
||||
|
||||
# Certificate Information
|
||||
cert_data = domain_data.get('certificate_transparency', {})
|
||||
if cert_data.get('success'):
|
||||
f.write(f" SSL Certificates: {cert_data.get('total_certificates', 0)} found\n")
|
||||
f.write(f" Subdomains from Certificates: {cert_data.get('subdomain_count', 0)}\n")
|
||||
|
||||
# Discovered Assets
|
||||
discovered_ips = domain_data.get('discovered_ips', {})
|
||||
discovered_subdomains = domain_data.get('discovered_subdomains', {})
|
||||
|
||||
if discovered_ips:
|
||||
f.write(f"\nDiscovered IP Addresses ({len(discovered_ips)}):\n")
|
||||
for ip, ip_data in discovered_ips.items():
|
||||
vt_ip = ip_data.get('virustotal_ip', {})
|
||||
reverse_dns = ip_data.get('reverse_dns', {})
|
||||
|
||||
f.write(f" {ip}:\n")
|
||||
|
||||
# Reverse DNS
|
||||
if reverse_dns.get('success') and reverse_dns.get('hostnames'):
|
||||
f.write(f" Reverse DNS: {', '.join(reverse_dns['hostnames'][:3])}\n")
|
||||
|
||||
# VirusTotal IP results
|
||||
if vt_ip.get('success'):
|
||||
detection_ratio = vt_ip.get('detection_ratio', '0/0')
|
||||
malicious_engines = vt_ip.get('malicious_engines', 0)
|
||||
f.write(f" VirusTotal: {detection_ratio}")
|
||||
if malicious_engines > 0:
|
||||
f.write(f" ⚠️ FLAGGED BY {malicious_engines} ENGINES")
|
||||
f.write("\n")
|
||||
|
||||
# Shodan IP results
|
||||
shodan_ip = ip_data.get('shodan_ip', {})
|
||||
if shodan_ip.get('success'):
|
||||
shodan_data = shodan_ip.get('data', {})
|
||||
ports = shodan_data.get('ports', [])
|
||||
if ports:
|
||||
f.write(f" Shodan Ports: {', '.join(map(str, ports[:10]))}\n")
|
||||
|
||||
f.write("\n")
|
||||
|
||||
if discovered_subdomains:
|
||||
f.write(f"Discovered Subdomains ({len(discovered_subdomains)}):\n")
|
||||
for subdomain, subdomain_data in discovered_subdomains.items():
|
||||
f.write(f" {subdomain}\n")
|
||||
|
||||
# Quick security check for subdomain
|
||||
vt_subdomain = subdomain_data.get('virustotal_domain', {})
|
||||
if vt_subdomain.get('success') and vt_subdomain.get('malicious_engines', 0) > 0:
|
||||
f.write(f" ⚠️ Security Issue: Flagged by VirusTotal\n")
|
||||
|
||||
subdomain_ips = subdomain_data.get('discovered_ips', {})
|
||||
if subdomain_ips:
|
||||
f.write(f" IPs: {', '.join(list(subdomain_ips.keys())[:3])}\n")
|
||||
|
||||
f.write("\n")
|
||||
|
||||
def save_results(self, domain: str) -> None:
|
||||
"""Save results in multiple formats."""
|
||||
if not os.path.exists(self.output_dir):
|
||||
os.makedirs(self.output_dir)
|
||||
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
||||
base_filename = f"{self.output_dir}/{domain}_{timestamp}"
|
||||
|
||||
# Save complete JSON (all recursive data)
|
||||
json_file = f"{base_filename}_complete.json"
|
||||
with open(json_file, 'w', encoding='utf-8') as f:
|
||||
json.dump(self.all_results, f, indent=2, ensure_ascii=False, default=str)
|
||||
|
||||
# Save comprehensive summary
|
||||
summary_file = f"{base_filename}_analysis.txt"
|
||||
self.create_comprehensive_summary(summary_file)
|
||||
|
||||
# Save asset list (domains and IPs)
|
||||
assets_file = f"{base_filename}_assets.txt"
|
||||
with open(assets_file, 'w', encoding='utf-8') as f:
|
||||
f.write("Discovered Assets Summary\n")
|
||||
f.write("=" * 25 + "\n\n")
|
||||
|
||||
f.write(f"Domains ({len(self.processed_domains)}):\n")
|
||||
for domain in sorted(self.processed_domains):
|
||||
f.write(f"{domain}\n")
|
||||
|
||||
f.write(f"\nIP Addresses ({len(self.processed_ips)}):\n")
|
||||
for ip in sorted(self.processed_ips, key=lambda addr: (ipaddress.ip_address(addr).version, ipaddress.ip_address(addr))):  # handles both IPv4 and IPv6
|
||||
f.write(f"{ip}\n")
|
||||
|
||||
print(f"\n📄 Results saved:")
|
||||
print(f" Complete JSON: {json_file}")
|
||||
print(f" Analysis Report: {summary_file}")
|
||||
print(f" Asset List: {assets_file}")
|
||||
|
||||
def run_enhanced_reconnaissance(self, domain: str, max_depth: int = 2) -> Dict[str, Any]:
|
||||
"""Run enhanced recursive DNS reconnaissance."""
|
||||
print(f"\n🚀 Starting enhanced DNS reconnaissance for: {domain}")
|
||||
print(f" Max recursion depth: {max_depth}")
|
||||
print(f" APIs enabled: VirusTotal={bool(self.virustotal_api_key)}, Shodan={bool(self.shodan_api_key)}")
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
# Clear previous results
|
||||
self.processed_domains.clear()
|
||||
self.processed_ips.clear()
|
||||
self.all_results.clear()
|
||||
|
||||
# Start recursive analysis
|
||||
results = self.analyze_domain_recursively(domain, depth=0, max_depth=max_depth)
|
||||
|
||||
end_time = time.time()
|
||||
duration = end_time - start_time
|
||||
|
||||
print(f"\n✅ Enhanced reconnaissance completed in {duration:.1f} seconds")
|
||||
print(f" Domains analyzed: {len(self.processed_domains)}")
|
||||
print(f" IP addresses analyzed: {len(self.processed_ips)}")
|
||||
|
||||
return results
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Enhanced DNS Reconnaissance Tool with Recursive Analysis - Use only on domains you own or have permission to test",
|
||||
epilog="LEGAL NOTICE: Unauthorized reconnaissance may violate applicable laws. Use responsibly."
|
||||
)
|
||||
parser.add_argument('domain', help='Target domain (e.g., example.com)')
|
||||
parser.add_argument('--shodan-key', help='Shodan API key for additional reconnaissance')
|
||||
parser.add_argument('--virustotal-key', help='VirusTotal API key for threat intelligence')
|
||||
parser.add_argument('--max-depth', type=int, default=2,
|
||||
help='Maximum recursion depth for subdomain analysis (default: 2)')
|
||||
parser.add_argument('--output-dir', default='dns_recon_results',
|
||||
help='Output directory for results')
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Validate domain format
|
||||
if not re.match(r'^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$', args.domain):
|
||||
print("❌ Invalid domain format. Please provide a valid domain (e.g., example.com)")
|
||||
sys.exit(1)
|
||||
|
||||
# Initialize tool
|
||||
tool = EnhancedDNSReconTool(
|
||||
shodan_api_key=args.shodan_key,
|
||||
virustotal_api_key=args.virustotal_key
|
||||
)
|
||||
tool.output_dir = args.output_dir
|
||||
|
||||
# Check dependencies
|
||||
if not tool.check_dependencies():
|
||||
sys.exit(1)
|
||||
|
||||
# Warn about API keys
|
||||
if not args.virustotal_key:
|
||||
print("⚠️ No VirusTotal API key provided. Threat intelligence will be limited.")
|
||||
if not args.shodan_key:
|
||||
print("⚠️ No Shodan API key provided. Host intelligence will be limited.")
|
||||
|
||||
try:
|
||||
# Run enhanced reconnaissance
|
||||
results = tool.run_enhanced_reconnaissance(args.domain, args.max_depth)
|
||||
|
||||
# Save results
|
||||
tool.save_results(args.domain)
|
||||
|
||||
print(f"\n🎯 Enhanced reconnaissance completed for {args.domain}")
|
||||
|
||||
except KeyboardInterrupt:
|
||||
print("\n⏹️ Reconnaissance interrupted by user")
|
||||
sys.exit(0)
|
||||
except Exception as e:
|
||||
print(f"❌ Error during reconnaissance: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
sys.exit(1)
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
19
providers/__init__.py
Normal file
@@ -0,0 +1,19 @@
|
||||
"""
|
||||
Data provider modules for DNSRecon.
|
||||
Contains implementations for various reconnaissance data sources.
|
||||
"""
|
||||
|
||||
from .base_provider import BaseProvider, RateLimiter
|
||||
from .crtsh_provider import CrtShProvider
|
||||
from .dns_provider import DNSProvider
|
||||
from .shodan_provider import ShodanProvider
|
||||
|
||||
__all__ = [
|
||||
'BaseProvider',
|
||||
'RateLimiter',
|
||||
'CrtShProvider',
|
||||
'DNSProvider',
|
||||
'ShodanProvider'
|
||||
]
|
||||
|
||||
__version__ = "0.0.0-rc"
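For orientation, a minimal usage sketch of these exports (the target domain is a placeholder; `CrtShProvider` takes an optional `session_config`, as shown in its diff further below):

```python
from providers import CrtShProvider

provider = CrtShProvider()  # free provider, no API key required
if provider.is_available():
    # Each result is (source_node, target_node, relationship_type, confidence, raw_data)
    for source, target, rel_type, confidence, raw in provider.query_domain("example.com"):
        print(f"{source} -> {target} [{rel_type}] confidence={confidence:.2f}")
```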
|
||||
562
providers/base_provider.py
Normal file
@@ -0,0 +1,562 @@
|
||||
# dnsrecon/providers/base_provider.py
|
||||
|
||||
import time
|
||||
import requests
|
||||
import threading
|
||||
import os
|
||||
import json
|
||||
import hashlib
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import List, Dict, Any, Optional, Tuple
|
||||
from datetime import datetime, timezone
|
||||
|
||||
from core.logger import get_forensic_logger
|
||||
|
||||
|
||||
class RateLimiter:
|
||||
"""Thread-safe rate limiter for API calls."""
|
||||
|
||||
def __init__(self, requests_per_minute: int):
|
||||
"""
|
||||
Initialize rate limiter.
|
||||
|
||||
Args:
|
||||
requests_per_minute: Maximum requests allowed per minute
|
||||
"""
|
||||
self.requests_per_minute = requests_per_minute
|
||||
self.min_interval = 60.0 / requests_per_minute
|
||||
self.last_request_time = 0
|
||||
self._lock = threading.Lock()
|
||||
|
||||
def __getstate__(self):
|
||||
"""RateLimiter is fully picklable, return full state."""
|
||||
state = self.__dict__.copy()
|
||||
# Exclude unpickleable lock
|
||||
if '_lock' in state:
|
||||
del state['_lock']
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Restore RateLimiter state."""
|
||||
self.__dict__.update(state)
|
||||
self._lock = threading.Lock()
|
||||
|
||||
def wait_if_needed(self) -> None:
|
||||
"""Wait if necessary to respect rate limits."""
|
||||
with self._lock:
|
||||
current_time = time.time()
|
||||
time_since_last = current_time - self.last_request_time
|
||||
|
||||
if time_since_last < self.min_interval:
|
||||
sleep_time = self.min_interval - time_since_last
|
||||
time.sleep(sleep_time)
|
||||
|
||||
self.last_request_time = time.time()
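To make the pacing concrete, a small sketch of the interval arithmetic the limiter enforces (values are illustrative):

```python
# Illustrative: 30 requests/minute means a minimum spacing of 60 / 30 = 2.0 seconds.
limiter = RateLimiter(requests_per_minute=30)
assert abs(limiter.min_interval - 2.0) < 1e-9

for _ in range(3):
    limiter.wait_if_needed()  # sleeps just enough to keep calls >= 2.0s apart
    # ... issue the provider API request here ...
```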
|
||||
|
||||
|
||||
class ProviderCache:
|
||||
"""Thread-safe global cache for provider queries."""
|
||||
|
||||
def __init__(self, provider_name: str, cache_expiry_hours: int = 12):
|
||||
"""
|
||||
Initialize provider-specific cache.
|
||||
|
||||
Args:
|
||||
provider_name: Name of the provider for cache directory
|
||||
cache_expiry_hours: Cache expiry time in hours
|
||||
"""
|
||||
self.provider_name = provider_name
|
||||
self.cache_expiry = cache_expiry_hours * 3600 # Convert to seconds
|
||||
self.cache_dir = os.path.join('.cache', provider_name)
|
||||
self._lock = threading.Lock()
|
||||
|
||||
# Ensure cache directory exists with thread-safe creation
|
||||
os.makedirs(self.cache_dir, exist_ok=True)
|
||||
|
||||
def _generate_cache_key(self, method: str, url: str, params: Optional[Dict[str, Any]]) -> str:
|
||||
"""Generate unique cache key for request."""
|
||||
cache_data = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
|
||||
return hashlib.md5(cache_data.encode()).hexdigest() + ".json"
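As an illustration of the key scheme, the same derivation performed standalone looks like this (URL and parameters are placeholders):

```python
import hashlib
import json

method, url, params = "GET", "https://crt.sh/?q=example.com&output=json", None
cache_data = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
cache_key = hashlib.md5(cache_data.encode()).hexdigest() + ".json"
print(cache_key)  # stable for identical method/url/params combinations
```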
|
||||
|
||||
def get_cached_response(self, method: str, url: str, params: Optional[Dict[str, Any]]) -> Optional[requests.Response]:
|
||||
"""
|
||||
Retrieve cached response if available and not expired.
|
||||
|
||||
Returns:
|
||||
Cached Response object or None if cache miss/expired
|
||||
"""
|
||||
cache_key = self._generate_cache_key(method, url, params)
|
||||
cache_path = os.path.join(self.cache_dir, cache_key)
|
||||
|
||||
with self._lock:
|
||||
if not os.path.exists(cache_path):
|
||||
return None
|
||||
|
||||
# Check if cache is expired
|
||||
cache_age = time.time() - os.path.getmtime(cache_path)
|
||||
if cache_age >= self.cache_expiry:
|
||||
try:
|
||||
os.remove(cache_path)
|
||||
except OSError:
|
||||
pass # File might have been removed by another thread
|
||||
return None
|
||||
|
||||
try:
|
||||
with open(cache_path, 'r', encoding='utf-8') as f:
|
||||
cached_data = json.load(f)
|
||||
|
||||
# Reconstruct Response object
|
||||
response = requests.Response()
|
||||
response.status_code = cached_data['status_code']
|
||||
response._content = cached_data['content'].encode('utf-8')
|
||||
response.headers.update(cached_data['headers'])
|
||||
|
||||
return response
|
||||
|
||||
except (json.JSONDecodeError, KeyError, IOError) as e:
|
||||
# Cache file corrupted, remove it
|
||||
try:
|
||||
os.remove(cache_path)
|
||||
except OSError:
|
||||
pass
|
||||
return None
|
||||
|
||||
def cache_response(self, method: str, url: str, params: Optional[Dict[str, Any]],
|
||||
response: requests.Response) -> bool:
|
||||
"""
|
||||
Cache successful response to disk.
|
||||
|
||||
Returns:
|
||||
True if cached successfully, False otherwise
|
||||
"""
|
||||
if response.status_code != 200:
|
||||
return False
|
||||
|
||||
cache_key = self._generate_cache_key(method, url, params)
|
||||
cache_path = os.path.join(self.cache_dir, cache_key)
|
||||
|
||||
with self._lock:
|
||||
try:
|
||||
cache_data = {
|
||||
'status_code': response.status_code,
|
||||
'content': response.text,
|
||||
'headers': dict(response.headers),
|
||||
'cached_at': datetime.now(timezone.utc).isoformat()
|
||||
}
|
||||
|
||||
# Write to temporary file first, then rename for atomic operation
|
||||
temp_path = cache_path + '.tmp'
|
||||
with open(temp_path, 'w', encoding='utf-8') as f:
|
||||
json.dump(cache_data, f)
|
||||
|
||||
# Atomic rename to prevent partial cache files
|
||||
os.rename(temp_path, cache_path)
|
||||
return True
|
||||
|
||||
except (IOError, OSError) as e:
|
||||
# Clean up temp file if it exists
|
||||
try:
|
||||
if os.path.exists(temp_path):
|
||||
os.remove(temp_path)
|
||||
except OSError:
|
||||
pass
|
||||
return False
|
||||
|
||||
|
||||
class BaseProvider(ABC):
|
||||
"""
|
||||
Abstract base class for all DNSRecon data providers.
|
||||
Now supports global provider-specific caching and session-specific configuration.
|
||||
"""
|
||||
|
||||
def __init__(self, name: str, rate_limit: int = 60, timeout: int = 30, session_config=None):
|
||||
"""
|
||||
Initialize base provider with global caching and session-specific configuration.
|
||||
|
||||
Args:
|
||||
name: Provider name for logging
|
||||
rate_limit: Requests per minute limit (default override)
|
||||
timeout: Request timeout in seconds
|
||||
session_config: Session-specific configuration
|
||||
"""
|
||||
# Use session config if provided, otherwise fall back to global config
|
||||
if session_config is not None:
|
||||
self.config = session_config
|
||||
actual_rate_limit = self.config.get_rate_limit(name)
|
||||
actual_timeout = self.config.default_timeout
|
||||
else:
|
||||
# Fallback to global config for backwards compatibility
|
||||
from config import config as global_config
|
||||
self.config = global_config
|
||||
actual_rate_limit = rate_limit
|
||||
actual_timeout = timeout
|
||||
|
||||
self.name = name
|
||||
self.rate_limiter = RateLimiter(actual_rate_limit)
|
||||
self.timeout = actual_timeout
|
||||
self._local = threading.local()
|
||||
self.logger = get_forensic_logger()
|
||||
self._stop_event = None
|
||||
|
||||
# GLOBAL provider-specific caching (not session-based)
|
||||
self.cache = ProviderCache(name, cache_expiry_hours=12)
|
||||
|
||||
# Statistics (per provider instance)
|
||||
self.total_requests = 0
|
||||
self.successful_requests = 0
|
||||
self.failed_requests = 0
|
||||
self.total_relationships_found = 0
|
||||
self.cache_hits = 0
|
||||
self.cache_misses = 0
|
||||
|
||||
print(f"Initialized {name} provider with global cache and session config (rate: {actual_rate_limit}/min)")
|
||||
|
||||
def __getstate__(self):
|
||||
"""Prepare BaseProvider for pickling by excluding unpicklable objects."""
|
||||
state = self.__dict__.copy()
|
||||
# Exclude the unpickleable '_local' attribute and stop event
|
||||
state['_local'] = None
|
||||
state['_stop_event'] = None
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Restore BaseProvider after unpickling by reconstructing threading objects."""
|
||||
self.__dict__.update(state)
|
||||
# Re-initialize the '_local' attribute and stop event
|
||||
self._local = threading.local()
|
||||
self._stop_event = None
|
||||
|
||||
@property
|
||||
def session(self):
|
||||
if not hasattr(self._local, 'session'):
|
||||
self._local.session = requests.Session()
|
||||
self._local.session.headers.update({
|
||||
'User-Agent': 'DNSRecon/2.0 (Passive Reconnaissance Tool)'
|
||||
})
|
||||
return self._local.session
|
||||
|
||||
@abstractmethod
|
||||
def get_name(self) -> str:
|
||||
"""Return the provider name."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_display_name(self) -> str:
|
||||
"""Return the provider display name for the UI."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def requires_api_key(self) -> bool:
|
||||
"""Return True if the provider requires an API key."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_eligibility(self) -> Dict[str, bool]:
|
||||
"""Return a dictionary indicating if the provider can query domains and/or IPs."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def is_available(self) -> bool:
|
||||
"""Check if the provider is available and properly configured."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query the provider for information about a domain.
|
||||
|
||||
Args:
|
||||
domain: Domain to investigate
|
||||
|
||||
Returns:
|
||||
List of tuples: (source_node, target_node, relationship_type, confidence, raw_data)
|
||||
"""
|
||||
pass
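As a concrete example of this tuple contract, a certificate-transparency hit might be reported as follows (values are illustrative):

```python
# Illustrative relationship tuple as returned by query_domain():
# (source_node, target_node, relationship_type, confidence, raw_data)
example = (
    "example.com",
    "www.example.com",
    "san_certificate",
    0.9,
    {"relationship_type": "certificate_discovery", "total_shared_certs": 2},
)
```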
|
||||
|
||||
@abstractmethod
|
||||
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query the provider for information about an IP address.
|
||||
|
||||
Args:
|
||||
ip: IP address to investigate
|
||||
|
||||
Returns:
|
||||
List of tuples: (source_node, target_node, relationship_type, confidence, raw_data)
|
||||
"""
|
||||
pass
|
||||
|
||||
def make_request(self, url: str, method: str = "GET",
|
||||
params: Optional[Dict[str, Any]] = None,
|
||||
headers: Optional[Dict[str, str]] = None,
|
||||
target_indicator: str = "",
|
||||
max_retries: int = 3) -> Optional[requests.Response]:
|
||||
"""
|
||||
Make a rate-limited HTTP request with global caching and aggressive stop signal handling.
|
||||
"""
|
||||
# Check for cancellation before starting
|
||||
if self._is_stop_requested():
|
||||
print(f"Request cancelled before start: {url}")
|
||||
return None
|
||||
|
||||
# Check global cache first
|
||||
cached_response = self.cache.get_cached_response(method, url, params)
|
||||
if cached_response is not None:
|
||||
print(f"Cache hit for {self.name}: {url}")
|
||||
self.cache_hits += 1
|
||||
return cached_response
|
||||
|
||||
self.cache_misses += 1
|
||||
|
||||
# Determine effective max_retries based on stop signal
|
||||
effective_max_retries = 0 if self._is_stop_requested() else max_retries
|
||||
last_exception = None
|
||||
|
||||
for attempt in range(effective_max_retries + 1):
|
||||
# Check for cancellation before each attempt
|
||||
if self._is_stop_requested():
|
||||
print(f"Request cancelled during attempt {attempt + 1}: {url}")
|
||||
return None
|
||||
|
||||
# Apply rate limiting with cancellation awareness
|
||||
if not self._wait_with_cancellation_check():
|
||||
print(f"Request cancelled during rate limiting: {url}")
|
||||
return None
|
||||
|
||||
# Final check before making HTTP request
|
||||
if self._is_stop_requested():
|
||||
print(f"Request cancelled before HTTP call: {url}")
|
||||
return None
|
||||
|
||||
start_time = time.time()
|
||||
response = None
|
||||
error = None
|
||||
|
||||
try:
|
||||
self.total_requests += 1
|
||||
|
||||
# Prepare request
|
||||
request_headers = self.session.headers.copy()
|
||||
if headers:
|
||||
request_headers.update(headers)
|
||||
|
||||
print(f"Making {method} request to: {url} (attempt {attempt + 1})")
|
||||
|
||||
# Use shorter timeout if termination is requested
|
||||
request_timeout = 2 if self._is_stop_requested() else self.timeout
|
||||
|
||||
# Make request
|
||||
if method.upper() == "GET":
|
||||
response = self.session.get(
|
||||
url,
|
||||
params=params,
|
||||
headers=request_headers,
|
||||
timeout=request_timeout
|
||||
)
|
||||
elif method.upper() == "POST":
|
||||
response = self.session.post(
|
||||
url,
|
||||
json=params,
|
||||
headers=request_headers,
|
||||
timeout=request_timeout
|
||||
)
|
||||
else:
|
||||
raise ValueError(f"Unsupported HTTP method: {method}")
|
||||
|
||||
print(f"Response status: {response.status_code}")
|
||||
response.raise_for_status()
|
||||
self.successful_requests += 1
|
||||
|
||||
# Success - log, cache, and return
|
||||
duration_ms = (time.time() - start_time) * 1000
|
||||
self.logger.log_api_request(
|
||||
provider=self.name,
|
||||
url=url,
|
||||
method=method.upper(),
|
||||
status_code=response.status_code,
|
||||
response_size=len(response.content),
|
||||
duration_ms=duration_ms,
|
||||
error=None,
|
||||
target_indicator=target_indicator
|
||||
)
|
||||
|
||||
# Cache the successful response globally
|
||||
self.cache.cache_response(method, url, params, response)
|
||||
return response
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
error = str(e)
|
||||
self.failed_requests += 1
|
||||
print(f"Request failed (attempt {attempt + 1}): {error}")
|
||||
last_exception = e
|
||||
|
||||
# Immediately abort retries if stop requested
|
||||
if self._is_stop_requested():
|
||||
print(f"Stop requested - aborting retries for: {url}")
|
||||
break
|
||||
|
||||
# Check if we should retry
|
||||
if attempt < effective_max_retries and self._should_retry(e):
|
||||
# Exponential backoff with jitter for 429 errors
|
||||
if isinstance(e, requests.exceptions.HTTPError) and e.response is not None and e.response.status_code == 429:
|
||||
backoff_time = min(60, 10 * (2 ** attempt))
|
||||
print(f"Rate limit hit. Retrying in {backoff_time} seconds...")
|
||||
else:
|
||||
backoff_time = min(2.0, (2 ** attempt) * 0.5)
|
||||
print(f"Retrying in {backoff_time} seconds...")
|
||||
|
||||
if not self._sleep_with_cancellation_check(backoff_time):
|
||||
print(f"Stop requested during backoff - aborting: {url}")
|
||||
return None
|
||||
continue
|
||||
else:
|
||||
break
|
||||
|
||||
except Exception as e:
|
||||
error = f"Unexpected error: {str(e)}"
|
||||
self.failed_requests += 1
|
||||
print(f"Unexpected error: {error}")
|
||||
last_exception = e
|
||||
break
|
||||
|
||||
# All attempts failed - log and return None
|
||||
duration_ms = (time.time() - start_time) * 1000
|
||||
self.logger.log_api_request(
|
||||
provider=self.name,
|
||||
url=url,
|
||||
method=method.upper(),
|
||||
status_code=response.status_code if response is not None else None,
|
||||
response_size=len(response.content) if response is not None else None,
|
||||
duration_ms=duration_ms,
|
||||
error=error,
|
||||
target_indicator=target_indicator
|
||||
)
|
||||
|
||||
if error and last_exception:
|
||||
raise last_exception
|
||||
|
||||
return None
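For reference, the backoff schedule used above works out as follows (a worked example of the arithmetic, not additional behaviour):

```python
# Mirrors the retry delays computed in make_request():
for attempt in range(3):
    rate_limited = min(60, 10 * (2 ** attempt))  # HTTP 429: 10s, 20s, 40s
    transient = min(2.0, (2 ** attempt) * 0.5)   # other retryable errors: 0.5s, 1.0s, 2.0s
    print(f"attempt {attempt}: 429 -> {rate_limited}s, other -> {transient}s")
```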
|
||||
|
||||
def _is_stop_requested(self) -> bool:
|
||||
"""
|
||||
Enhanced stop signal checking that handles both local and Redis-based signals.
|
||||
"""
|
||||
if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
|
||||
return True
|
||||
return False
|
||||
|
||||
def _wait_with_cancellation_check(self) -> bool:
|
||||
"""
|
||||
Wait for rate limiting while aggressively checking for cancellation.
|
||||
Returns False if cancelled during wait.
|
||||
"""
|
||||
current_time = time.time()
|
||||
time_since_last = current_time - self.rate_limiter.last_request_time
|
||||
|
||||
if time_since_last < self.rate_limiter.min_interval:
|
||||
sleep_time = self.rate_limiter.min_interval - time_since_last
|
||||
if not self._sleep_with_cancellation_check(sleep_time):
|
||||
return False
|
||||
|
||||
self.rate_limiter.last_request_time = time.time()
|
||||
return True
|
||||
|
||||
def _sleep_with_cancellation_check(self, sleep_time: float) -> bool:
|
||||
"""
|
||||
Sleep for the specified time while aggressively checking for cancellation.
|
||||
|
||||
Args:
|
||||
sleep_time: Time to sleep in seconds
|
||||
|
||||
Returns:
|
||||
bool: True if sleep completed, False if cancelled
|
||||
"""
|
||||
sleep_start = time.time()
|
||||
check_interval = 0.05 # Check every 50ms for aggressive responsiveness
|
||||
|
||||
while time.time() - sleep_start < sleep_time:
|
||||
if self._is_stop_requested():
|
||||
return False
|
||||
remaining_time = sleep_time - (time.time() - sleep_start)
|
||||
time.sleep(min(check_interval, remaining_time))
|
||||
|
||||
return True
|
||||
|
||||
def set_stop_event(self, stop_event: threading.Event) -> None:
|
||||
"""
|
||||
Set the stop event for this provider to enable cancellation.
|
||||
|
||||
Args:
|
||||
stop_event: Threading event to signal cancellation
|
||||
"""
|
||||
self._stop_event = stop_event
|
||||
|
||||
def _should_retry(self, exception: requests.exceptions.RequestException) -> bool:
|
||||
"""
|
||||
Determine if a request should be retried based on the exception.
|
||||
|
||||
Args:
|
||||
exception: The request exception that occurred
|
||||
|
||||
Returns:
|
||||
True if the request should be retried
|
||||
"""
|
||||
# Retry on connection errors and timeouts
|
||||
if isinstance(exception, (requests.exceptions.ConnectionError,
|
||||
requests.exceptions.Timeout)):
|
||||
return True
|
||||
|
||||
if isinstance(exception, requests.exceptions.HTTPError):
|
||||
if hasattr(exception, 'response') and exception.response is not None:
|
||||
# Retry on server errors (5xx) AND on rate-limiting errors (429)
|
||||
return exception.response.status_code >= 500 or exception.response.status_code == 429
|
||||
|
||||
return False
|
||||
|
||||
def log_relationship_discovery(self, source_node: str, target_node: str,
|
||||
relationship_type: str,
|
||||
confidence_score: float,
|
||||
raw_data: Dict[str, Any],
|
||||
discovery_method: str) -> None:
|
||||
"""
|
||||
Log discovery of a new relationship.
|
||||
|
||||
Args:
|
||||
source_node: Source node identifier
|
||||
target_node: Target node identifier
|
||||
relationship_type: Type of relationship
|
||||
confidence_score: Confidence score
|
||||
raw_data: Raw data from provider
|
||||
discovery_method: Method used for discovery
|
||||
"""
|
||||
self.total_relationships_found += 1
|
||||
|
||||
self.logger.log_relationship_discovery(
|
||||
source_node=source_node,
|
||||
target_node=target_node,
|
||||
relationship_type=relationship_type,
|
||||
confidence_score=confidence_score,
|
||||
provider=self.name,
|
||||
raw_data=raw_data,
|
||||
discovery_method=discovery_method
|
||||
)
|
||||
|
||||
def get_statistics(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get provider statistics including cache performance.
|
||||
|
||||
Returns:
|
||||
Dictionary containing provider performance metrics
|
||||
"""
|
||||
return {
|
||||
'name': self.name,
|
||||
'total_requests': self.total_requests,
|
||||
'successful_requests': self.successful_requests,
|
||||
'failed_requests': self.failed_requests,
|
||||
'success_rate': (self.successful_requests / self.total_requests * 100) if self.total_requests > 0 else 0,
|
||||
'relationships_found': self.total_relationships_found,
|
||||
'rate_limit': self.rate_limiter.requests_per_minute,
|
||||
'cache_hits': self.cache_hits,
|
||||
'cache_misses': self.cache_misses,
|
||||
'cache_hit_rate': (self.cache_hits / (self.cache_hits + self.cache_misses) * 100) if (self.cache_hits + self.cache_misses) > 0 else 0
|
||||
}
|
||||
548
providers/crtsh_provider.py
Normal file
@@ -0,0 +1,548 @@
|
||||
"""
|
||||
Certificate Transparency provider using crt.sh.
|
||||
Discovers domain relationships through certificate SAN analysis with comprehensive certificate tracking.
|
||||
Stores certificates as metadata on domain nodes rather than creating certificate nodes.
|
||||
"""
|
||||
|
||||
import json
|
||||
import re
|
||||
from typing import List, Dict, Any, Tuple, Set
|
||||
from urllib.parse import quote
|
||||
from datetime import datetime, timezone
|
||||
import requests
|
||||
|
||||
from .base_provider import BaseProvider
|
||||
from utils.helpers import _is_valid_domain
|
||||
|
||||
|
||||
class CrtShProvider(BaseProvider):
|
||||
"""
|
||||
Provider for querying crt.sh certificate transparency database.
|
||||
Now uses session-specific configuration and caching.
|
||||
"""
|
||||
|
||||
def __init__(self, session_config=None):
|
||||
"""Initialize CrtSh provider with session-specific configuration."""
|
||||
super().__init__(
|
||||
name="crtsh",
|
||||
rate_limit=60,
|
||||
timeout=15,
|
||||
session_config=session_config
|
||||
)
|
||||
self.base_url = "https://crt.sh/"
|
||||
self._stop_event = None
|
||||
|
||||
def get_name(self) -> str:
|
||||
"""Return the provider name."""
|
||||
return "crtsh"
|
||||
|
||||
def get_display_name(self) -> str:
|
||||
"""Return the provider display name for the UI."""
|
||||
return "crt.sh"
|
||||
|
||||
def requires_api_key(self) -> bool:
|
||||
"""Return True if the provider requires an API key."""
|
||||
return False
|
||||
|
||||
def get_eligibility(self) -> Dict[str, bool]:
|
||||
"""Return a dictionary indicating if the provider can query domains and/or IPs."""
|
||||
return {'domains': True, 'ips': False}
|
||||
|
||||
def is_available(self) -> bool:
|
||||
"""
|
||||
Check if the provider is configured to be used.
|
||||
This method is intentionally simple and does not perform a network request
|
||||
to avoid blocking application startup.
|
||||
"""
|
||||
return True
|
||||
|
||||
def _parse_certificate_date(self, date_string: str) -> datetime:
|
||||
"""
|
||||
Parse certificate date from crt.sh format.
|
||||
|
||||
Args:
|
||||
date_string: Date string from crt.sh API
|
||||
|
||||
Returns:
|
||||
Parsed datetime object in UTC
|
||||
"""
|
||||
if not date_string:
|
||||
raise ValueError("Empty date string")
|
||||
|
||||
try:
|
||||
# Handle various possible formats from crt.sh
|
||||
if date_string.endswith('Z'):
|
||||
return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
|
||||
elif '+' in date_string or date_string.endswith('UTC'):
|
||||
# Handle timezone-aware strings
|
||||
date_string = date_string.replace('UTC', '').strip()
|
||||
if '+' in date_string:
|
||||
date_string = date_string.split('+')[0]
|
||||
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
|
||||
else:
|
||||
# Assume UTC if no timezone specified
|
||||
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
|
||||
except Exception as e:
|
||||
# Fallback: try parsing without timezone info and assume UTC
|
||||
try:
|
||||
return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
|
||||
except Exception:
|
||||
raise ValueError(f"Unable to parse date: {date_string}") from e
|
||||
|
||||
def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
|
||||
"""
|
||||
Check if a certificate is currently valid based on its expiry date.
|
||||
|
||||
Args:
|
||||
cert_data: Certificate data from crt.sh
|
||||
|
||||
Returns:
|
||||
True if certificate is currently valid (not expired)
|
||||
"""
|
||||
try:
|
||||
not_after_str = cert_data.get('not_after')
|
||||
if not not_after_str:
|
||||
return False
|
||||
|
||||
not_after_date = self._parse_certificate_date(not_after_str)
|
||||
not_before_str = cert_data.get('not_before')
|
||||
|
||||
now = datetime.now(timezone.utc)
|
||||
|
||||
# Check if certificate is within valid date range
|
||||
is_not_expired = not_after_date > now
|
||||
|
||||
if not_before_str:
|
||||
not_before_date = self._parse_certificate_date(not_before_str)
|
||||
is_not_before_valid = not_before_date <= now
|
||||
return is_not_expired and is_not_before_valid
|
||||
|
||||
return is_not_expired
|
||||
|
||||
except Exception as e:
|
||||
self.logger.logger.debug(f"Certificate validity check failed: {e}")
|
||||
return False
|
||||
|
||||
def _extract_certificate_metadata(self, cert_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""
|
||||
Extract comprehensive metadata from certificate data.
|
||||
|
||||
Args:
|
||||
cert_data: Raw certificate data from crt.sh
|
||||
|
||||
Returns:
|
||||
Comprehensive certificate metadata dictionary
|
||||
"""
|
||||
metadata = {
|
||||
'certificate_id': cert_data.get('id'),
|
||||
'serial_number': cert_data.get('serial_number'),
|
||||
'issuer_name': cert_data.get('issuer_name'),
|
||||
'issuer_ca_id': cert_data.get('issuer_ca_id'),
|
||||
'common_name': cert_data.get('common_name'),
|
||||
'not_before': cert_data.get('not_before'),
|
||||
'not_after': cert_data.get('not_after'),
|
||||
'entry_timestamp': cert_data.get('entry_timestamp'),
|
||||
'source': 'crt.sh'
|
||||
}
|
||||
|
||||
try:
|
||||
if metadata['not_before'] and metadata['not_after']:
|
||||
not_before = self._parse_certificate_date(metadata['not_before'])
|
||||
not_after = self._parse_certificate_date(metadata['not_after'])
|
||||
|
||||
metadata['validity_period_days'] = (not_after - not_before).days
|
||||
metadata['is_currently_valid'] = self._is_cert_valid(cert_data)
|
||||
metadata['expires_soon'] = (not_after - datetime.now(timezone.utc)).days <= 30
|
||||
|
||||
# Add human-readable dates
|
||||
metadata['not_before'] = not_before.strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||
metadata['not_after'] = not_after.strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||
|
||||
except Exception as e:
|
||||
self.logger.logger.debug(f"Error computing certificate metadata: {e}")
|
||||
metadata['is_currently_valid'] = False
|
||||
metadata['expires_soon'] = False
|
||||
|
||||
return metadata
|
||||
|
||||
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query crt.sh for certificates containing the domain.
|
||||
"""
|
||||
if not _is_valid_domain(domain):
|
||||
return []
|
||||
|
||||
# Check for cancellation before starting
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh query cancelled before start for domain: {domain}")
|
||||
return []
|
||||
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
# Query crt.sh for certificates
|
||||
url = f"{self.base_url}?q={quote(domain)}&output=json"
|
||||
response = self.make_request(url, target_indicator=domain, max_retries=3)
|
||||
|
||||
if not response or response.status_code != 200:
|
||||
return []
|
||||
|
||||
# Check for cancellation after request
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh query cancelled after request for domain: {domain}")
|
||||
return []
|
||||
|
||||
certificates = response.json()
|
||||
|
||||
if not certificates:
|
||||
return []
|
||||
|
||||
# Check for cancellation before processing
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh query cancelled before processing for domain: {domain}")
|
||||
return []
|
||||
|
||||
# Aggregate certificate data by domain
|
||||
domain_certificates = {}
|
||||
all_discovered_domains = set()
|
||||
|
||||
# Process certificates with cancellation checking
|
||||
for i, cert_data in enumerate(certificates):
|
||||
# Check for cancellation every 5 certificates instead of 10 for faster response
|
||||
if i % 5 == 0 and self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh processing cancelled at certificate {i} for domain: {domain}")
|
||||
break
|
||||
|
||||
cert_metadata = self._extract_certificate_metadata(cert_data)
|
||||
cert_domains = self._extract_domains_from_certificate(cert_data)
|
||||
|
||||
# Add all domains from this certificate to our tracking
|
||||
for cert_domain in cert_domains:
|
||||
# Additional stop check during domain processing
|
||||
if i % 20 == 0 and self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh domain processing cancelled for domain: {domain}")
|
||||
break
|
||||
|
||||
if not _is_valid_domain(cert_domain):
|
||||
continue
|
||||
|
||||
all_discovered_domains.add(cert_domain)
|
||||
|
||||
# Initialize domain certificate list if needed
|
||||
if cert_domain not in domain_certificates:
|
||||
domain_certificates[cert_domain] = []
|
||||
|
||||
# Add this certificate to the domain's certificate list
|
||||
domain_certificates[cert_domain].append(cert_metadata)
|
||||
|
||||
# Final cancellation check before creating relationships
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh query cancelled before relationship creation for domain: {domain}")
|
||||
return []
|
||||
|
||||
# Create relationships from query domain to ALL discovered domains with stop checking
|
||||
for i, discovered_domain in enumerate(all_discovered_domains):
|
||||
if discovered_domain == domain:
|
||||
continue # Skip self-relationships
|
||||
|
||||
# Check for cancellation every 10 relationships
|
||||
if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh relationship creation cancelled for domain: {domain}")
|
||||
break
|
||||
|
||||
if not _is_valid_domain(discovered_domain):
|
||||
continue
|
||||
|
||||
# Get certificates for both domains
|
||||
query_domain_certs = domain_certificates.get(domain, [])
|
||||
discovered_domain_certs = domain_certificates.get(discovered_domain, [])
|
||||
|
||||
# Find shared certificates (for metadata purposes)
|
||||
shared_certificates = self._find_shared_certificates(query_domain_certs, discovered_domain_certs)
|
||||
|
||||
# Calculate confidence based on relationship type and shared certificates
|
||||
confidence = self._calculate_domain_relationship_confidence(
|
||||
domain, discovered_domain, shared_certificates, all_discovered_domains
|
||||
)
|
||||
|
||||
# Create comprehensive raw data for the relationship
|
||||
relationship_raw_data = {
|
||||
'relationship_type': 'certificate_discovery',
|
||||
'shared_certificates': shared_certificates,
|
||||
'total_shared_certs': len(shared_certificates),
|
||||
'discovery_context': self._determine_relationship_context(discovered_domain, domain),
|
||||
'domain_certificates': {
|
||||
domain: self._summarize_certificates(query_domain_certs),
|
||||
discovered_domain: self._summarize_certificates(discovered_domain_certs)
|
||||
}
|
||||
}
|
||||
|
||||
# Create domain -> domain relationship
|
||||
relationships.append((
|
||||
domain,
|
||||
discovered_domain,
|
||||
'san_certificate',
|
||||
confidence,
|
||||
relationship_raw_data
|
||||
))
|
||||
|
||||
# Log the relationship discovery
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=discovered_domain,
|
||||
relationship_type='san_certificate',
|
||||
confidence_score=confidence,
|
||||
raw_data=relationship_raw_data,
|
||||
discovery_method="certificate_transparency_analysis"
|
||||
)
|
||||
|
||||
except json.JSONDecodeError as e:
|
||||
self.logger.logger.error(f"Failed to parse JSON response from crt.sh: {e}")
|
||||
except requests.exceptions.RequestException as e:
|
||||
self.logger.logger.error(f"HTTP request to crt.sh failed: {e}")
|
||||
|
||||
|
||||
return relationships
|
||||
|
||||
def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Find certificates that are shared between two domain certificate lists.
|
||||
|
||||
Args:
|
||||
certs1: First domain's certificates
|
||||
certs2: Second domain's certificates
|
||||
|
||||
Returns:
|
||||
List of shared certificate metadata
|
||||
"""
|
||||
shared = []
|
||||
|
||||
# Create a set of certificate IDs from the first list for quick lookup
|
||||
cert1_ids = {cert.get('certificate_id') for cert in certs1 if cert.get('certificate_id')}
|
||||
|
||||
# Find certificates in the second list that match
|
||||
for cert in certs2:
|
||||
if cert.get('certificate_id') in cert1_ids:
|
||||
shared.append(cert)
|
||||
|
||||
return shared
|
||||
|
||||
def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
"""
|
||||
Create a summary of certificates for a domain.
|
||||
|
||||
Args:
|
||||
certificates: List of certificate metadata
|
||||
|
||||
Returns:
|
||||
Summary dictionary with aggregate statistics
|
||||
"""
|
||||
if not certificates:
|
||||
return {
|
||||
'total_certificates': 0,
|
||||
'valid_certificates': 0,
|
||||
'expired_certificates': 0,
|
||||
'expires_soon_count': 0,
|
||||
'unique_issuers': [],
|
||||
'latest_certificate': None,
|
||||
'has_valid_cert': False
|
||||
}
|
||||
|
||||
valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
|
||||
expired_count = len(certificates) - valid_count
|
||||
expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))
|
||||
|
||||
# Get unique issuers
|
||||
unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))
|
||||
|
||||
# Find the most recent certificate
|
||||
latest_cert = None
|
||||
latest_date = None
|
||||
|
||||
for cert in certificates:
|
||||
try:
|
||||
if cert.get('not_before'):
|
||||
cert_date = self._parse_certificate_date(cert['not_before'])
|
||||
if latest_date is None or cert_date > latest_date:
|
||||
latest_date = cert_date
|
||||
latest_cert = cert
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
return {
|
||||
'total_certificates': len(certificates),
|
||||
'valid_certificates': valid_count,
|
||||
'expired_certificates': expired_count,
|
||||
'expires_soon_count': expires_soon_count,
|
||||
'unique_issuers': unique_issuers,
|
||||
'latest_certificate': latest_cert,
|
||||
'has_valid_cert': valid_count > 0,
|
||||
'certificate_details': certificates # Full details for forensic analysis
|
||||
}
|
||||
|
||||
def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str,
|
||||
shared_certificates: List[Dict[str, Any]],
|
||||
all_discovered_domains: Set[str]) -> float:
|
||||
"""
|
||||
Calculate confidence score for domain relationship based on various factors.
|
||||
|
||||
Args:
|
||||
domain1: Source domain (query domain)
|
||||
domain2: Target domain (discovered domain)
|
||||
shared_certificates: List of shared certificate metadata
|
||||
all_discovered_domains: All domains discovered in this query
|
||||
|
||||
Returns:
|
||||
Confidence score between 0.0 and 1.0
|
||||
"""
|
||||
base_confidence = 0.9
|
||||
|
||||
# Adjust confidence based on domain relationship context
|
||||
relationship_context = self._determine_relationship_context(domain2, domain1)
|
||||
|
||||
if relationship_context == 'exact_match':
|
||||
context_bonus = 0.0 # This shouldn't happen, but just in case
|
||||
elif relationship_context == 'subdomain':
|
||||
context_bonus = 0.1 # High confidence for subdomains
|
||||
elif relationship_context == 'parent_domain':
|
||||
context_bonus = 0.05 # Medium confidence for parent domains
|
||||
else:
|
||||
context_bonus = 0.0 # Related domains get base confidence
|
||||
|
||||
# Adjust confidence based on shared certificates
|
||||
if shared_certificates:
|
||||
shared_count = len(shared_certificates)
|
||||
if shared_count >= 3:
|
||||
shared_bonus = 0.1
|
||||
elif shared_count >= 2:
|
||||
shared_bonus = 0.05
|
||||
else:
|
||||
shared_bonus = 0.02
|
||||
|
||||
# Additional bonus for valid shared certificates
|
||||
valid_shared = sum(1 for cert in shared_certificates if cert.get('is_currently_valid'))
|
||||
if valid_shared > 0:
|
||||
validity_bonus = 0.05
|
||||
else:
|
||||
validity_bonus = 0.0
|
||||
else:
|
||||
# Even without shared certificates, domains found in the same query have some relationship
|
||||
shared_bonus = 0.0
|
||||
validity_bonus = 0.0
|
||||
|
||||
# Adjust confidence based on certificate issuer reputation (if shared certificates exist)
|
||||
issuer_bonus = 0.0
|
||||
if shared_certificates:
|
||||
for cert in shared_certificates:
|
||||
issuer = cert.get('issuer_name', '').lower()
|
||||
if any(trusted_ca in issuer for trusted_ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
|
||||
issuer_bonus = max(issuer_bonus, 0.03)
|
||||
break
|
||||
|
||||
# Calculate final confidence
|
||||
final_confidence = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
|
||||
return max(0.1, min(1.0, final_confidence)) # Clamp between 0.1 and 1.0
|
||||
|
||||
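A worked example (not part of the diff) of how the bonuses above combine, using the constants from the code; the raw sum is clamped to the 0.1–1.0 range. The scenario (a subdomain sharing two currently valid Let's Encrypt certificates) is hypothetical.

```python
base_confidence = 0.9
context_bonus = 0.1     # relationship context: 'subdomain'
shared_bonus = 0.05     # exactly 2 shared certificates
validity_bonus = 0.05   # at least one shared certificate is currently valid
issuer_bonus = 0.03     # issuer matches a trusted CA ("let's encrypt")

final = max(0.1, min(1.0, base_confidence + context_bonus + shared_bonus
                     + validity_bonus + issuer_bonus))
print(final)  # 1.0 -> the raw sum (1.13) is clamped to the ceiling
```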
def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
|
||||
"""
|
||||
Determine the context of the relationship between certificate domain and query domain.
|
||||
|
||||
Args:
|
||||
cert_domain: Domain found in certificate
|
||||
query_domain: Original query domain
|
||||
|
||||
Returns:
|
||||
String describing the relationship context
|
||||
"""
|
||||
if cert_domain == query_domain:
|
||||
return 'exact_match'
|
||||
elif cert_domain.endswith(f'.{query_domain}'):
|
||||
return 'subdomain'
|
||||
elif query_domain.endswith(f'.{cert_domain}'):
|
||||
return 'parent_domain'
|
||||
else:
|
||||
return 'related_domain'
|
||||
|
||||
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query crt.sh for certificates containing the IP address.
|
||||
Note: crt.sh doesn't typically index by IP, so this returns empty results.
|
||||
|
||||
Args:
|
||||
ip: IP address to investigate
|
||||
|
||||
Returns:
|
||||
Empty list (crt.sh doesn't support IP-based certificate queries effectively)
|
||||
"""
|
||||
# crt.sh doesn't effectively support IP-based certificate queries
|
||||
return []
|
||||
|
||||
def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
|
||||
"""
|
||||
Extract all domains from certificate data.
|
||||
|
||||
Args:
|
||||
cert_data: Certificate data from crt.sh API
|
||||
|
||||
Returns:
|
||||
Set of unique domain names found in the certificate
|
||||
"""
|
||||
domains = set()
|
||||
|
||||
# Extract from common name
|
||||
common_name = cert_data.get('common_name', '')
|
||||
if common_name:
|
||||
cleaned_cn = self._clean_domain_name(common_name)
|
||||
if cleaned_cn:
|
||||
domains.update(cleaned_cn)
|
||||
|
||||
# Extract from name_value field (contains SANs)
|
||||
name_value = cert_data.get('name_value', '')
|
||||
if name_value:
|
||||
# Split by newlines and clean each domain
|
||||
for line in name_value.split('\n'):
|
||||
cleaned_domains = self._clean_domain_name(line.strip())
|
||||
if cleaned_domains:
|
||||
domains.update(cleaned_domains)
|
||||
|
||||
return domains
|
||||
|
||||
def _clean_domain_name(self, domain_name: str) -> List[str]:
|
||||
"""
|
||||
Clean and normalize domain name from certificate data.
|
||||
Now returns a list to handle wildcards correctly.
|
||||
"""
|
||||
if not domain_name:
|
||||
return []
|
||||
|
||||
domain = domain_name.strip().lower()
|
||||
|
||||
# Remove protocol if present
|
||||
if domain.startswith(('http://', 'https://')):
|
||||
domain = domain.split('://', 1)[1]
|
||||
|
||||
# Remove path if present
|
||||
if '/' in domain:
|
||||
domain = domain.split('/', 1)[0]
|
||||
|
||||
# Remove port if present
|
||||
if domain.count(':') == 1:  # Strip a single port suffix; leave IPv6 literals untouched
|
||||
domain = domain.split(':', 1)[0]
|
||||
|
||||
# Handle wildcard domains
|
||||
cleaned_domains = []
|
||||
if domain.startswith('*.'):
|
||||
# Add both the wildcard and the base domain (the character filter below strips the leading '*.', so only the base domain survives validation)
|
||||
cleaned_domains.append(domain)
|
||||
cleaned_domains.append(domain[2:])
|
||||
else:
|
||||
cleaned_domains.append(domain)
|
||||
|
||||
# Remove any remaining invalid characters and validate
|
||||
final_domains = []
|
||||
for d in cleaned_domains:
|
||||
d = re.sub(r'[^\w\-\.]', '', d)
|
||||
if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-')):
|
||||
final_domains.append(d)
|
||||
|
||||
return [d for d in final_domains if _is_valid_domain(d)]
|
||||
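A condensed sketch (illustrative only) of what the wildcard handling above yields for a SAN entry, assuming `_is_valid_domain` accepts plain hostnames: the character filter removes the `*`, the leftover `.example.com` is rejected, and only the base domain remains.

```python
import re

def clean(domain):
    # Condensed, illustrative version of the wildcard branch above.
    candidates = [domain, domain[2:]] if domain.startswith('*.') else [domain]
    cleaned = [re.sub(r'[^\w\-\.]', '', d) for d in candidates]
    return [d for d in cleaned
            if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-'))]

print(clean('*.example.com'))  # ['example.com'] - the '*.' form is filtered out
```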
189
providers/dns_provider.py
Normal file
@@ -0,0 +1,189 @@
|
||||
# dnsrecon/providers/dns_provider.py
|
||||
|
||||
import dns.resolver
|
||||
import dns.reversename
|
||||
from typing import List, Dict, Any, Tuple
|
||||
from .base_provider import BaseProvider
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain
|
||||
|
||||
|
||||
class DNSProvider(BaseProvider):
|
||||
"""
|
||||
Provider for standard DNS resolution and reverse DNS lookups.
|
||||
Now uses session-specific configuration.
|
||||
"""
|
||||
|
||||
def __init__(self, session_config=None):
|
||||
"""Initialize DNS provider with session-specific configuration."""
|
||||
super().__init__(
|
||||
name="dns",
|
||||
rate_limit=100,
|
||||
timeout=10,
|
||||
session_config=session_config
|
||||
)
|
||||
|
||||
# Configure DNS resolver
|
||||
self.resolver = dns.resolver.Resolver()
|
||||
self.resolver.timeout = 5
|
||||
self.resolver.lifetime = 10
|
||||
#self.resolver.nameservers = ['127.0.0.1']
|
||||
|
||||
def get_name(self) -> str:
|
||||
"""Return the provider name."""
|
||||
return "dns"
|
||||
|
||||
def get_display_name(self) -> str:
|
||||
"""Return the provider display name for the UI."""
|
||||
return "DNS"
|
||||
|
||||
def requires_api_key(self) -> bool:
|
||||
"""Return True if the provider requires an API key."""
|
||||
return False
|
||||
|
||||
def get_eligibility(self) -> Dict[str, bool]:
|
||||
"""Return a dictionary indicating if the provider can query domains and/or IPs."""
|
||||
return {'domains': True, 'ips': True}
|
||||
|
||||
def is_available(self) -> bool:
|
||||
"""DNS is always available - no API key required."""
|
||||
return True
|
||||
|
||||
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query DNS records for the domain to discover relationships.
|
||||
|
||||
Args:
|
||||
domain: Domain to investigate
|
||||
|
||||
Returns:
|
||||
List of relationships discovered from DNS analysis
|
||||
"""
|
||||
if not _is_valid_domain(domain):
|
||||
return []
|
||||
|
||||
relationships = []
|
||||
|
||||
# Query the record types that can yield graph relationships (TXT is skipped later as metadata)
|
||||
for record_type in ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SOA', 'TXT', 'SRV', 'CAA']:
|
||||
relationships.extend(self._query_record(domain, record_type))
|
||||
|
||||
return relationships
|
||||
|
||||
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query reverse DNS for the IP address.
|
||||
|
||||
Args:
|
||||
ip: IP address to investigate
|
||||
|
||||
Returns:
|
||||
List of relationships discovered from reverse DNS
|
||||
"""
|
||||
if not _is_valid_ip(ip):
|
||||
return []
|
||||
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
# Perform reverse DNS lookup
|
||||
self.total_requests += 1
|
||||
reverse_name = dns.reversename.from_address(ip)
|
||||
response = self.resolver.resolve(reverse_name, 'PTR')
|
||||
self.successful_requests += 1
|
||||
|
||||
for ptr_record in response:
|
||||
hostname = str(ptr_record).rstrip('.')
|
||||
|
||||
if _is_valid_domain(hostname):
|
||||
raw_data = {
|
||||
'query_type': 'PTR',
|
||||
'ip_address': ip,
|
||||
'hostname': hostname,
|
||||
'ttl': response.ttl
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
ip,
|
||||
hostname,
|
||||
'ptr_record',
|
||||
0.8,
|
||||
raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=ip,
|
||||
target_node=hostname,
|
||||
relationship_type='ptr_record',
|
||||
confidence_score=0.8,
|
||||
raw_data=raw_data,
|
||||
discovery_method="reverse_dns_lookup"
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
self.failed_requests += 1
|
||||
self.logger.logger.debug(f"Reverse DNS lookup failed for {ip}: {e}")
|
||||
|
||||
return relationships
|
||||
|
||||
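A minimal sketch (not part of the diff) of the dnspython primitives used in the reverse lookup above; the resolved hostname in the comment is only indicative.

```python
import dns.resolver
import dns.reversename

resolver = dns.resolver.Resolver()
reverse_name = dns.reversename.from_address("8.8.8.8")   # 8.8.8.8.in-addr.arpa.
answer = resolver.resolve(reverse_name, "PTR")
print([str(record).rstrip('.') for record in answer])    # e.g. ['dns.google']
```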
def _query_record(self, domain: str, record_type: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query a specific type of DNS record for the domain.
|
||||
"""
|
||||
relationships = []
|
||||
try:
|
||||
self.total_requests += 1
|
||||
response = self.resolver.resolve(domain, record_type)
|
||||
self.successful_requests += 1
|
||||
|
||||
for record in response:
|
||||
target = ""
|
||||
if record_type in ['A', 'AAAA']:
|
||||
target = str(record)
|
||||
elif record_type in ['CNAME', 'NS', 'PTR']:
|
||||
target = str(record.target).rstrip('.')
|
||||
elif record_type == 'MX':
|
||||
target = str(record.exchange).rstrip('.')
|
||||
elif record_type == 'SOA':
|
||||
target = str(record.mname).rstrip('.')
|
||||
elif record_type == 'TXT':
|
||||
# TXT records are treated as metadata, not relationships.
|
||||
continue
|
||||
elif record_type == 'SRV':
|
||||
target = str(record.target).rstrip('.')
|
||||
elif record_type == 'CAA':
|
||||
target = f"{record.flags} {record.tag.decode('utf-8')} \"{record.value.decode('utf-8')}\""
|
||||
else:
|
||||
target = str(record)
|
||||
|
||||
if target:
|
||||
raw_data = {
|
||||
'query_type': record_type,
|
||||
'domain': domain,
|
||||
'value': target,
|
||||
'ttl': response.ttl
|
||||
}
|
||||
relationship_type = f"{record_type.lower()}_record"
|
||||
confidence = 0.8 # Default confidence for DNS records
|
||||
|
||||
relationships.append((
|
||||
domain,
|
||||
target,
|
||||
relationship_type,
|
||||
confidence,
|
||||
raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=target,
|
||||
relationship_type=relationship_type,
|
||||
confidence_score=confidence,
|
||||
raw_data=raw_data,
|
||||
discovery_method=f"dns_{record_type.lower()}_record"
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
self.failed_requests += 1
|
||||
self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
|
||||
|
||||
return relationships
|
||||
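A hypothetical usage sketch, not part of the diff: how a caller might consume the (source, target, relationship_type, confidence, raw_data) tuples this provider returns, assuming the provider can be constructed with the default (None) session configuration and that the `providers` package is importable from the project root.

```python
from providers.dns_provider import DNSProvider

provider = DNSProvider()  # no API key required
for source, target, rel_type, confidence, raw in provider.query_domain("example.com"):
    print(f"{source} -[{rel_type} @ {confidence:.1f}]-> {target}")
```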
310
providers/shodan_provider.py
Normal file
@@ -0,0 +1,310 @@
|
||||
"""
|
||||
Shodan provider for DNSRecon.
|
||||
Discovers IP relationships and infrastructure context through Shodan API.
|
||||
"""
|
||||
|
||||
import json
|
||||
from typing import List, Dict, Any, Tuple
|
||||
from .base_provider import BaseProvider
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain
|
||||
|
||||
|
||||
class ShodanProvider(BaseProvider):
|
||||
"""
|
||||
Provider for querying Shodan API for IP address and hostname information.
|
||||
Now uses session-specific API keys.
|
||||
"""
|
||||
|
||||
def __init__(self, session_config=None):
|
||||
"""Initialize Shodan provider with session-specific configuration."""
|
||||
super().__init__(
|
||||
name="shodan",
|
||||
rate_limit=60,
|
||||
timeout=30,
|
||||
session_config=session_config
|
||||
)
|
||||
self.base_url = "https://api.shodan.io"
|
||||
self.api_key = self.config.get_api_key('shodan')
|
||||
|
||||
def is_available(self) -> bool:
|
||||
"""Check if Shodan provider is available (has valid API key in this session)."""
|
||||
return self.api_key is not None and len(self.api_key.strip()) > 0
|
||||
|
||||
def get_name(self) -> str:
|
||||
"""Return the provider name."""
|
||||
return "shodan"
|
||||
|
||||
def get_display_name(self) -> str:
|
||||
"""Return the provider display name for the UI."""
|
||||
return "shodan"
|
||||
|
||||
def requires_api_key(self) -> bool:
|
||||
"""Return True if the provider requires an API key."""
|
||||
return True
|
||||
|
||||
def get_eligibility(self) -> Dict[str, bool]:
|
||||
"""Return a dictionary indicating if the provider can query domains and/or IPs."""
|
||||
return {'domains': True, 'ips': True}
|
||||
|
||||
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query Shodan for information about a domain.
|
||||
Uses Shodan's hostname search to find associated IPs.
|
||||
|
||||
Args:
|
||||
domain: Domain to investigate
|
||||
|
||||
Returns:
|
||||
List of relationships discovered from Shodan data
|
||||
"""
|
||||
if not _is_valid_domain(domain) or not self.is_available():
|
||||
return []
|
||||
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
# Search for hostname in Shodan
|
||||
search_query = f"hostname:{domain}"
|
||||
url = f"{self.base_url}/shodan/host/search"
|
||||
params = {
|
||||
'key': self.api_key,
|
||||
'query': search_query,
|
||||
'minify': True # Get minimal data to reduce bandwidth
|
||||
}
|
||||
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=domain)
|
||||
|
||||
if not response or response.status_code != 200:
|
||||
return []
|
||||
|
||||
data = response.json()
|
||||
|
||||
if 'matches' not in data:
|
||||
return []
|
||||
|
||||
# Process search results
|
||||
for match in data['matches']:
|
||||
ip_address = match.get('ip_str')
|
||||
hostnames = match.get('hostnames', [])
|
||||
|
||||
if ip_address and domain in hostnames:
|
||||
raw_data = {
|
||||
'ip_address': ip_address,
|
||||
'hostnames': hostnames,
|
||||
'country': match.get('location', {}).get('country_name', ''),
|
||||
'city': match.get('location', {}).get('city', ''),
|
||||
'isp': match.get('isp', ''),
|
||||
'org': match.get('org', ''),
|
||||
'ports': match.get('ports', []),
|
||||
'last_update': match.get('last_update', '')
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
domain,
|
||||
ip_address,
|
||||
'a_record', # Domain resolves to IP
|
||||
0.8,
|
||||
raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=ip_address,
|
||||
relationship_type='a_record',
|
||||
confidence_score=0.8,
|
||||
raw_data=raw_data,
|
||||
discovery_method="shodan_hostname_search"
|
||||
)
|
||||
|
||||
# Also create relationships to other hostnames on the same IP
|
||||
for hostname in hostnames:
|
||||
if hostname != domain and _is_valid_domain(hostname):
|
||||
hostname_raw_data = {
|
||||
'shared_ip': ip_address,
|
||||
'all_hostnames': hostnames,
|
||||
'discovery_context': 'shared_hosting'
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
domain,
|
||||
hostname,
|
||||
'passive_dns', # Shared hosting relationship
|
||||
0.6, # Lower confidence for shared hosting
|
||||
hostname_raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=hostname,
|
||||
relationship_type='passive_dns',
|
||||
confidence_score=0.6,
|
||||
raw_data=hostname_raw_data,
|
||||
discovery_method="shodan_shared_hosting"
|
||||
)
|
||||
|
||||
except json.JSONDecodeError as e:
|
||||
self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
|
||||
|
||||
return relationships
|
||||
|
||||
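A toy walk-through (not part of the diff) of the mapping above: one minified Shodan search match produces an a_record edge to the IP plus lower-confidence passive_dns edges to every other hostname observed on that IP. The sample match is fabricated.

```python
domain = 'example.com'
match = {'ip_str': '203.0.113.10', 'hostnames': ['example.com', 'mail.example.com']}

edges = [(domain, match['ip_str'], 'a_record', 0.8)]
edges += [(domain, h, 'passive_dns', 0.6)
          for h in match['hostnames'] if h != domain]
print(edges)
# [('example.com', '203.0.113.10', 'a_record', 0.8),
#  ('example.com', 'mail.example.com', 'passive_dns', 0.6)]
```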
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query Shodan for information about an IP address.
|
||||
|
||||
Args:
|
||||
ip: IP address to investigate
|
||||
|
||||
Returns:
|
||||
List of relationships discovered from Shodan IP data
|
||||
"""
|
||||
if not _is_valid_ip(ip) or not self.is_available():
|
||||
return []
|
||||
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
# Query Shodan host information
|
||||
url = f"{self.base_url}/shodan/host/{ip}"
|
||||
params = {'key': self.api_key}
|
||||
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
|
||||
|
||||
if not response or response.status_code != 200:
|
||||
return []
|
||||
|
||||
data = response.json()
|
||||
|
||||
# Extract hostname relationships
|
||||
hostnames = data.get('hostnames', [])
|
||||
for hostname in hostnames:
|
||||
if _is_valid_domain(hostname):
|
||||
raw_data = {
|
||||
'ip_address': ip,
|
||||
'hostname': hostname,
|
||||
'country': data.get('country_name', ''),
|
||||
'city': data.get('city', ''),
|
||||
'isp': data.get('isp', ''),
|
||||
'org': data.get('org', ''),
|
||||
'asn': data.get('asn', ''),
|
||||
'ports': data.get('ports', []),
|
||||
'last_update': data.get('last_update', ''),
|
||||
'os': data.get('os', '')
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
ip,
|
||||
hostname,
|
||||
'a_record', # IP resolves to hostname
|
||||
0.8,
|
||||
raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=ip,
|
||||
target_node=hostname,
|
||||
relationship_type='a_record',
|
||||
confidence_score=0.8,
|
||||
raw_data=raw_data,
|
||||
discovery_method="shodan_host_lookup"
|
||||
)
|
||||
|
||||
# Extract ASN relationship if available
|
||||
asn = data.get('asn')
|
||||
if asn:
|
||||
# Ensure the ASN starts with "AS"
|
||||
if isinstance(asn, str) and asn.startswith('AS'):
|
||||
asn_name = asn
|
||||
asn_number = asn[2:]
|
||||
else:
|
||||
asn_name = f"AS{asn}"
|
||||
asn_number = str(asn)
|
||||
|
||||
asn_raw_data = {
|
||||
'ip_address': ip,
|
||||
'asn': asn_number,
|
||||
'isp': data.get('isp', ''),
|
||||
'org': data.get('org', '')
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
ip,
|
||||
asn_name,
|
||||
'asn_membership',
|
||||
0.7,
|
||||
asn_raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=ip,
|
||||
target_node=asn_name,
|
||||
relationship_type='asn_membership',
|
||||
confidence_score=0.7,
|
||||
raw_data=asn_raw_data,
|
||||
discovery_method="shodan_asn_lookup"
|
||||
)
|
||||
|
||||
except json.JSONDecodeError as e:
|
||||
self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
|
||||
|
||||
return relationships
|
||||
|
||||
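A condensed sketch (illustrative only) of the ASN normalisation above: whether Shodan returns an integer or an "AS"-prefixed string, the node name ends up as "AS&lt;number&gt;".

```python
def normalise_asn(asn):
    # Mirrors the branch in query_ip above.
    if isinstance(asn, str) and asn.startswith('AS'):
        return asn, asn[2:]
    return f"AS{asn}", str(asn)

print(normalise_asn(15169))      # ('AS15169', '15169')
print(normalise_asn('AS15169'))  # ('AS15169', '15169')
```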
def search_by_organization(self, org_name: str) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Search Shodan for hosts belonging to a specific organization.
|
||||
|
||||
Args:
|
||||
org_name: Organization name to search for
|
||||
|
||||
Returns:
|
||||
List of host information dictionaries
|
||||
"""
|
||||
if not self.is_available():
|
||||
return []
|
||||
|
||||
try:
|
||||
search_query = f"org:\"{org_name}\""
|
||||
url = f"{self.base_url}/shodan/host/search"
|
||||
params = {
|
||||
'key': self.api_key,
|
||||
'query': search_query,
|
||||
'minify': True
|
||||
}
|
||||
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=org_name)
|
||||
|
||||
if response and response.status_code == 200:
|
||||
data = response.json()
|
||||
return data.get('matches', [])
|
||||
|
||||
except Exception as e:
|
||||
self.logger.logger.error(f"Error searching Shodan by organization {org_name}: {e}")
|
||||
|
||||
return []
|
||||
|
||||
def get_host_services(self, ip: str) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Get service information for a specific IP address.
|
||||
|
||||
Args:
|
||||
ip: IP address to query
|
||||
|
||||
Returns:
|
||||
List of service information dictionaries
|
||||
"""
|
||||
if not _is_valid_ip(ip) or not self.is_available():
|
||||
return []
|
||||
|
||||
try:
|
||||
url = f"{self.base_url}/shodan/host/{ip}"
|
||||
params = {'key': self.api_key}
|
||||
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
|
||||
|
||||
if response and response.status_code == 200:
|
||||
data = response.json()
|
||||
return data.get('data', []) # Service banners
|
||||
|
||||
except Exception as e:
|
||||
self.logger.logger.error(f"Error getting Shodan services for IP {ip}: {e}")
|
||||
|
||||
return []
|
||||
@@ -1,4 +1,9 @@
|
||||
Flask>=2.3.3
|
||||
networkx>=3.1
|
||||
requests>=2.31.0
|
||||
flask>=2.3.0
|
||||
dnspython>=2.4.0
|
||||
click>=8.1.0
|
||||
python-dateutil>=2.8.2
|
||||
Werkzeug>=2.3.7
|
||||
urllib3>=2.0.0
|
||||
dnspython>=2.4.2
|
||||
gunicorn
|
||||
redis
|
||||
@@ -1,20 +0,0 @@
|
||||
# File: src/__init__.py
|
||||
"""DNS Reconnaissance Tool Package."""
|
||||
|
||||
__version__ = "1.0.0"
|
||||
__author__ = "DNS Recon Tool"
|
||||
__email__ = ""
|
||||
__description__ = "A comprehensive DNS reconnaissance tool for investigators"
|
||||
|
||||
from .main import main
|
||||
from .config import Config
|
||||
from .reconnaissance import ReconnaissanceEngine
|
||||
from .data_structures import ReconData
|
||||
|
||||
__all__ = [
|
||||
'main',
|
||||
'Config',
|
||||
'ReconnaissanceEngine',
|
||||
'ReconData'
|
||||
]
|
||||
|
||||
@@ -1,324 +0,0 @@
|
||||
# File: src/certificate_checker.py
|
||||
"""Certificate transparency log checker using crt.sh."""
|
||||
|
||||
import requests
|
||||
import json
|
||||
import time
|
||||
import logging
|
||||
import socket
|
||||
from datetime import datetime
|
||||
from typing import List, Optional, Set
|
||||
from .data_structures import Certificate
|
||||
from .config import Config
|
||||
|
||||
# Module logger
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class CertificateChecker:
|
||||
"""Check certificates using crt.sh."""
|
||||
|
||||
CRT_SH_URL = "https://crt.sh/"
|
||||
|
||||
def __init__(self, config: Config):
|
||||
self.config = config
|
||||
self.last_request = 0
|
||||
self.query_count = 0
|
||||
self.connection_failures = 0
|
||||
self.max_connection_failures = 3 # Stop trying after 3 consecutive failures
|
||||
|
||||
logger.info("🔐 Certificate checker initialized")
|
||||
|
||||
# Test connectivity to crt.sh on initialization
|
||||
self._test_connectivity()
|
||||
|
||||
def _test_connectivity(self):
|
||||
"""Test if we can reach crt.sh."""
|
||||
try:
|
||||
logger.info("🔗 Testing connectivity to crt.sh...")
|
||||
|
||||
# First test DNS resolution
|
||||
try:
|
||||
socket.gethostbyname('crt.sh')
|
||||
logger.debug("✅ DNS resolution for crt.sh successful")
|
||||
except socket.gaierror as e:
|
||||
logger.warning(f"⚠️ DNS resolution failed for crt.sh: {e}")
|
||||
return False
|
||||
|
||||
# Test HTTP connection with a simple request
|
||||
response = requests.get(
|
||||
self.CRT_SH_URL,
|
||||
params={'q': 'example.com', 'output': 'json'},
|
||||
timeout=10,
|
||||
headers={'User-Agent': 'DNS-Recon-Tool/1.0'}
|
||||
)
|
||||
|
||||
if response.status_code in [200, 404]: # 404 is also acceptable (no results)
|
||||
logger.info("✅ crt.sh connectivity test successful")
|
||||
return True
|
||||
else:
|
||||
logger.warning(f"⚠️ crt.sh returned status {response.status_code}")
|
||||
return False
|
||||
|
||||
except requests.exceptions.ConnectionError as e:
|
||||
logger.warning(f"⚠️ Cannot reach crt.sh: {e}")
|
||||
return False
|
||||
except requests.exceptions.Timeout:
|
||||
logger.warning("⚠️ crt.sh connectivity test timed out")
|
||||
return False
|
||||
except Exception as e:
|
||||
logger.warning(f"⚠️ Unexpected error testing crt.sh connectivity: {e}")
|
||||
return False
|
||||
|
||||
def _rate_limit(self):
|
||||
"""Apply rate limiting for crt.sh."""
|
||||
now = time.time()
|
||||
time_since_last = now - self.last_request
|
||||
min_interval = 1.0 / self.config.CRT_SH_RATE_LIMIT
|
||||
|
||||
if time_since_last < min_interval:
|
||||
sleep_time = min_interval - time_since_last
|
||||
logger.debug(f"⏸️ crt.sh rate limiting: sleeping for {sleep_time:.2f}s")
|
||||
time.sleep(sleep_time)
|
||||
|
||||
self.last_request = time.time()
|
||||
self.query_count += 1
|
||||
|
||||
def get_certificates(self, domain: str) -> List[Certificate]:
|
||||
"""Get certificates for a domain from crt.sh."""
|
||||
logger.debug(f"🔍 Getting certificates for domain: {domain}")
|
||||
|
||||
# Skip if we've had too many connection failures
|
||||
if self.connection_failures >= self.max_connection_failures:
|
||||
logger.warning(f"⚠️ Skipping certificate lookup for {domain} due to repeated connection failures")
|
||||
return []
|
||||
|
||||
certificates = []
|
||||
|
||||
# Query for the domain
|
||||
domain_certs = self._query_crt_sh(domain)
|
||||
certificates.extend(domain_certs)
|
||||
|
||||
# Also query for wildcard certificates (if the main query succeeded)
|
||||
if domain_certs or self.connection_failures < self.max_connection_failures:
|
||||
wildcard_certs = self._query_crt_sh(f"%.{domain}")
|
||||
certificates.extend(wildcard_certs)
|
||||
|
||||
# Remove duplicates based on certificate ID
|
||||
unique_certs = {cert.id: cert for cert in certificates}
|
||||
final_certs = list(unique_certs.values())
|
||||
|
||||
if final_certs:
|
||||
logger.info(f"📜 Found {len(final_certs)} unique certificates for {domain}")
|
||||
else:
|
||||
logger.debug(f"❌ No certificates found for {domain}")
|
||||
|
||||
return final_certs
|
||||
|
||||
def _query_crt_sh(self, query: str) -> List[Certificate]:
|
||||
"""Query crt.sh API with retry logic and better error handling."""
|
||||
certificates = []
|
||||
self._rate_limit()
|
||||
|
||||
logger.debug(f"📡 Querying crt.sh for: {query}")
|
||||
|
||||
max_retries = 2 # Reduced retries for faster failure
|
||||
backoff_delays = [1, 3] # Shorter delays
|
||||
|
||||
for attempt in range(max_retries):
|
||||
try:
|
||||
params = {
|
||||
'q': query,
|
||||
'output': 'json'
|
||||
}
|
||||
|
||||
response = requests.get(
|
||||
self.CRT_SH_URL,
|
||||
params=params,
|
||||
timeout=self.config.HTTP_TIMEOUT,
|
||||
headers={'User-Agent': 'DNS-Recon-Tool/1.0'}
|
||||
)
|
||||
|
||||
logger.debug(f"📡 crt.sh API response for {query}: {response.status_code}")
|
||||
|
||||
if response.status_code == 200:
|
||||
try:
|
||||
data = response.json()
|
||||
logger.debug(f"📊 crt.sh returned {len(data)} certificate entries for {query}")
|
||||
|
||||
for cert_data in data:
|
||||
try:
|
||||
# Parse dates with better error handling
|
||||
not_before = self._parse_date(cert_data.get('not_before'))
|
||||
not_after = self._parse_date(cert_data.get('not_after'))
|
||||
|
||||
if not_before and not_after:
|
||||
certificate = Certificate(
|
||||
id=cert_data.get('id'),
|
||||
issuer=cert_data.get('issuer_name', ''),
|
||||
subject=cert_data.get('name_value', ''),
|
||||
not_before=not_before,
|
||||
not_after=not_after,
|
||||
is_wildcard='*.' in cert_data.get('name_value', '')
|
||||
)
|
||||
certificates.append(certificate)
|
||||
logger.debug(f"✅ Parsed certificate ID {certificate.id} for {query}")
|
||||
else:
|
||||
logger.debug(f"⚠️ Skipped certificate with invalid dates: {cert_data.get('id')}")
|
||||
|
||||
except (ValueError, TypeError, KeyError) as e:
|
||||
logger.debug(f"⚠️ Error parsing certificate data: {e}")
|
||||
continue # Skip malformed certificate data
|
||||
|
||||
# Success! Reset connection failure counter
|
||||
self.connection_failures = 0
|
||||
logger.info(f"✅ Successfully processed {len(certificates)} certificates from crt.sh for {query}")
|
||||
return certificates
|
||||
|
||||
except json.JSONDecodeError as e:
|
||||
logger.warning(f"❌ Invalid JSON response from crt.sh for {query}: {e}")
|
||||
if attempt < max_retries - 1:
|
||||
time.sleep(backoff_delays[attempt])
|
||||
continue
|
||||
return certificates
|
||||
|
||||
elif response.status_code == 404:
|
||||
# 404 is normal - no certificates found
|
||||
logger.debug(f"ℹ️ No certificates found for {query} (404)")
|
||||
self.connection_failures = 0 # Reset counter for successful connection
|
||||
return certificates
|
||||
|
||||
elif response.status_code == 429:
|
||||
logger.warning(f"⚠️ crt.sh rate limit exceeded for {query}")
|
||||
if attempt < max_retries - 1:
|
||||
time.sleep(5) # Wait longer for rate limits
|
||||
continue
|
||||
return certificates
|
||||
|
||||
else:
|
||||
logger.warning(f"⚠️ crt.sh HTTP error for {query}: {response.status_code}")
|
||||
if attempt < max_retries - 1:
|
||||
time.sleep(backoff_delays[attempt])
|
||||
continue
|
||||
return certificates
|
||||
|
||||
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as e:
|
||||
error_type = "Connection Error" if isinstance(e, requests.exceptions.ConnectionError) else "Timeout"
|
||||
logger.warning(f"🌐 crt.sh {error_type} for {query} (attempt {attempt+1}/{max_retries}): {e}")
|
||||
|
||||
# Track connection failures
|
||||
if isinstance(e, requests.exceptions.ConnectionError):
|
||||
self.connection_failures += 1
|
||||
|
||||
if attempt < max_retries - 1:
|
||||
time.sleep(backoff_delays[attempt])
|
||||
continue
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.warning(f"🌐 crt.sh network error for {query} (attempt {attempt+1}/{max_retries}): {e}")
|
||||
if attempt < max_retries - 1:
|
||||
time.sleep(backoff_delays[attempt])
|
||||
continue
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Unexpected error querying crt.sh for {query}: {e}")
|
||||
if attempt < max_retries - 1:
|
||||
time.sleep(backoff_delays[attempt])
|
||||
continue
|
||||
|
||||
# If we get here, all retries failed
|
||||
logger.warning(f"❌ All {max_retries} attempts failed for crt.sh query: {query}")
|
||||
return certificates
|
||||
|
||||
def _parse_date(self, date_str: str) -> Optional[datetime]:
|
||||
"""Parse date string with multiple format support."""
|
||||
if not date_str:
|
||||
return None
|
||||
|
||||
# Common date formats from crt.sh
|
||||
date_formats = [
|
||||
'%Y-%m-%dT%H:%M:%S', # ISO format without timezone
|
||||
'%Y-%m-%dT%H:%M:%SZ', # ISO format with Z
|
||||
'%Y-%m-%d %H:%M:%S', # Space separated
|
||||
'%Y-%m-%dT%H:%M:%S.%f', # With microseconds
|
||||
'%Y-%m-%dT%H:%M:%S.%fZ', # With microseconds and Z
|
||||
]
|
||||
|
||||
for fmt in date_formats:
|
||||
try:
|
||||
return datetime.strptime(date_str, fmt)
|
||||
except ValueError:
|
||||
continue
|
||||
|
||||
# Try with timezone info
|
||||
try:
|
||||
return datetime.fromisoformat(date_str.replace('Z', '+00:00'))
|
||||
except ValueError:
|
||||
pass
|
||||
|
||||
logger.debug(f"⚠️ Could not parse date: {date_str}")
|
||||
return None
|
||||
|
||||
def extract_subdomains_from_certificates(self, certificates: List[Certificate]) -> Set[str]:
|
||||
"""Extract subdomains from certificate subjects."""
|
||||
subdomains = set()
|
||||
|
||||
logger.debug(f"🌿 Extracting subdomains from {len(certificates)} certificates")
|
||||
|
||||
for cert in certificates:
|
||||
# Parse subject field for domain names
|
||||
# Certificate subjects can be multi-line with multiple domains
|
||||
subject_lines = cert.subject.split('\n')
|
||||
|
||||
for line in subject_lines:
|
||||
line = line.strip()
|
||||
|
||||
# Skip wildcard domains for recursion (they don't resolve directly)
|
||||
if line.startswith('*.'):
|
||||
logger.debug(f"🌿 Skipping wildcard domain: {line}")
|
||||
continue
|
||||
|
||||
if self._is_valid_domain(line):
|
||||
subdomains.add(line.lower())
|
||||
logger.debug(f"🌿 Found subdomain from certificate: {line}")
|
||||
|
||||
if subdomains:
|
||||
logger.info(f"🌿 Extracted {len(subdomains)} subdomains from certificates")
|
||||
else:
|
||||
logger.debug("❌ No subdomains extracted from certificates")
|
||||
|
||||
return subdomains
|
||||
|
||||
def _is_valid_domain(self, domain: str) -> bool:
|
||||
"""Basic domain validation."""
|
||||
if not domain or '.' not in domain:
|
||||
return False
|
||||
|
||||
# Remove common prefixes
|
||||
domain = domain.lower().strip()
|
||||
if domain.startswith('www.'):
|
||||
domain = domain[4:]
|
||||
|
||||
# Basic validation
|
||||
if len(domain) < 3 or len(domain) > 255:
|
||||
return False
|
||||
|
||||
# Must not be an IP address
|
||||
try:
|
||||
import socket
|
||||
socket.inet_aton(domain)
|
||||
return False # It's an IPv4 address
|
||||
except socket.error:
|
||||
pass
|
||||
|
||||
# Check for reasonable domain structure
|
||||
parts = domain.split('.')
|
||||
if len(parts) < 2:
|
||||
return False
|
||||
|
||||
# Each part should be reasonable
|
||||
for part in parts:
|
||||
if len(part) < 1 or len(part) > 63:
|
||||
return False
|
||||
if not part.replace('-', '').replace('_', '').isalnum():
|
||||
return False
|
||||
|
||||
return True
|
||||
@@ -1,80 +0,0 @@
|
||||
# File: src/config.py
|
||||
"""Configuration settings for the reconnaissance tool."""
|
||||
|
||||
import os
|
||||
import logging
|
||||
from dataclasses import dataclass
|
||||
from typing import List, Optional
|
||||
|
||||
@dataclass
|
||||
class Config:
|
||||
"""Configuration class for the reconnaissance tool."""
|
||||
|
||||
# DNS servers to query
|
||||
DNS_SERVERS: List[str] = None
|
||||
|
||||
# API keys
|
||||
shodan_key: Optional[str] = None
|
||||
virustotal_key: Optional[str] = None
|
||||
|
||||
# Rate limiting (requests per second)
|
||||
# DNS servers are generally quite robust, increased from 10 to 50/s
|
||||
DNS_RATE_LIMIT: float = 50.0
|
||||
CRT_SH_RATE_LIMIT: float = 2.0
|
||||
SHODAN_RATE_LIMIT: float = 0.5 # Shodan is more restrictive
|
||||
VIRUSTOTAL_RATE_LIMIT: float = 0.25 # VirusTotal is very restrictive
|
||||
|
||||
# Recursive depth
|
||||
max_depth: int = 2
|
||||
|
||||
# Timeouts
|
||||
DNS_TIMEOUT: int = 5
|
||||
HTTP_TIMEOUT: int = 20
|
||||
|
||||
# Logging level
|
||||
log_level: str = "INFO"
|
||||
|
||||
def __post_init__(self):
|
||||
if self.DNS_SERVERS is None:
|
||||
# Use multiple reliable DNS servers
|
||||
self.DNS_SERVERS = [
|
||||
'1.1.1.1', # Cloudflare
|
||||
'8.8.8.8', # Google
|
||||
'9.9.9.9' # Quad9
|
||||
]
|
||||
|
||||
@classmethod
|
||||
def from_args(cls, shodan_key: Optional[str] = None,
|
||||
virustotal_key: Optional[str] = None,
|
||||
max_depth: int = 2,
|
||||
log_level: str = "INFO") -> 'Config':
|
||||
"""Create config from command line arguments."""
|
||||
return cls(
|
||||
shodan_key=shodan_key,
|
||||
virustotal_key=virustotal_key,
|
||||
max_depth=max_depth,
|
||||
log_level=log_level.upper()
|
||||
)
|
||||
|
||||
def setup_logging(self, cli_mode: bool = True):
|
||||
"""Set up logging configuration."""
|
||||
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
|
||||
if cli_mode:
|
||||
# For CLI, use a more readable format
|
||||
log_format = '%(asctime)s [%(levelname)s] %(message)s'
|
||||
|
||||
logging.basicConfig(
|
||||
level=getattr(logging, self.log_level, logging.INFO),
|
||||
format=log_format,
|
||||
datefmt='%H:%M:%S'
|
||||
)
|
||||
|
||||
# Set specific loggers
|
||||
logging.getLogger('urllib3').setLevel(logging.WARNING) # Reduce HTTP noise
|
||||
logging.getLogger('requests').setLevel(logging.WARNING) # Reduce HTTP noise
|
||||
|
||||
if self.log_level == "DEBUG":
|
||||
logging.getLogger(__name__.split('.')[0]).setLevel(logging.DEBUG)
|
||||
|
||||
return logging.getLogger(__name__)
|
||||
@@ -1,204 +0,0 @@
|
||||
# File: src/data_structures.py
|
||||
"""Data structures for storing reconnaissance results."""
|
||||
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Dict, List, Set, Optional, Any
|
||||
from datetime import datetime
|
||||
import json
|
||||
import logging
|
||||
|
||||
# Set up logging for this module
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@dataclass
|
||||
class DNSRecord:
|
||||
"""DNS record information."""
|
||||
record_type: str
|
||||
value: str
|
||||
ttl: Optional[int] = None
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
return {
|
||||
'record_type': self.record_type,
|
||||
'value': self.value,
|
||||
'ttl': self.ttl
|
||||
}
|
||||
|
||||
@dataclass
|
||||
class Certificate:
|
||||
"""Certificate information from crt.sh."""
|
||||
id: int
|
||||
issuer: str
|
||||
subject: str
|
||||
not_before: datetime
|
||||
not_after: datetime
|
||||
is_wildcard: bool = False
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
return {
|
||||
'id': self.id,
|
||||
'issuer': self.issuer,
|
||||
'subject': self.subject,
|
||||
'not_before': self.not_before.isoformat() if self.not_before else None,
|
||||
'not_after': self.not_after.isoformat() if self.not_after else None,
|
||||
'is_wildcard': self.is_wildcard
|
||||
}
|
||||
|
||||
@dataclass
|
||||
class ShodanResult:
|
||||
"""Shodan scan result."""
|
||||
ip: str
|
||||
ports: List[int]
|
||||
services: Dict[str, Any]
|
||||
organization: Optional[str] = None
|
||||
country: Optional[str] = None
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
return {
|
||||
'ip': self.ip,
|
||||
'ports': self.ports,
|
||||
'services': self.services,
|
||||
'organization': self.organization,
|
||||
'country': self.country
|
||||
}
|
||||
|
||||
@dataclass
|
||||
class VirusTotalResult:
|
||||
"""VirusTotal scan result."""
|
||||
resource: str # IP or domain
|
||||
positives: int
|
||||
total: int
|
||||
scan_date: datetime
|
||||
permalink: str
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
return {
|
||||
'resource': self.resource,
|
||||
'positives': self.positives,
|
||||
'total': self.total,
|
||||
'scan_date': self.scan_date.isoformat() if self.scan_date else None,
|
||||
'permalink': self.permalink
|
||||
}
|
||||
|
||||
@dataclass
|
||||
class ReconData:
|
||||
"""Main data structure for reconnaissance results."""
|
||||
|
||||
# Core data
|
||||
hostnames: Set[str] = field(default_factory=set)
|
||||
ip_addresses: Set[str] = field(default_factory=set)
|
||||
|
||||
# DNS information
|
||||
dns_records: Dict[str, List[DNSRecord]] = field(default_factory=dict)
|
||||
reverse_dns: Dict[str, str] = field(default_factory=dict)
|
||||
|
||||
# Certificate information
|
||||
certificates: Dict[str, List[Certificate]] = field(default_factory=dict)
|
||||
|
||||
# External service results
|
||||
shodan_results: Dict[str, ShodanResult] = field(default_factory=dict)
|
||||
virustotal_results: Dict[str, VirusTotalResult] = field(default_factory=dict)
|
||||
|
||||
# Metadata
|
||||
start_time: datetime = field(default_factory=datetime.now)
|
||||
end_time: Optional[datetime] = None
|
||||
depth_map: Dict[str, int] = field(default_factory=dict) # Track recursion depth
|
||||
|
||||
def add_hostname(self, hostname: str, depth: int = 0) -> None:
|
||||
"""Add a hostname to the dataset."""
|
||||
hostname = hostname.lower()
|
||||
self.hostnames.add(hostname)
|
||||
self.depth_map[hostname] = depth
|
||||
logger.info(f"Added hostname: {hostname} (depth: {depth})")
|
||||
|
||||
def add_ip_address(self, ip: str) -> None:
|
||||
"""Add an IP address to the dataset."""
|
||||
self.ip_addresses.add(ip)
|
||||
logger.info(f"Added IP address: {ip}")
|
||||
|
||||
def add_dns_record(self, hostname: str, record: DNSRecord) -> None:
|
||||
"""Add a DNS record for a hostname."""
|
||||
hostname = hostname.lower()
|
||||
if hostname not in self.dns_records:
|
||||
self.dns_records[hostname] = []
|
||||
self.dns_records[hostname].append(record)
|
||||
logger.debug(f"Added DNS record for {hostname}: {record.record_type} -> {record.value}")
|
||||
|
||||
def add_shodan_result(self, ip: str, result: ShodanResult) -> None:
|
||||
"""Add Shodan result."""
|
||||
self.shodan_results[ip] = result
|
||||
logger.info(f"Added Shodan result for {ip}: {len(result.ports)} ports, org: {result.organization}")
|
||||
|
||||
def add_virustotal_result(self, resource: str, result: VirusTotalResult) -> None:
|
||||
"""Add VirusTotal result."""
|
||||
self.virustotal_results[resource] = result
|
||||
logger.info(f"Added VirusTotal result for {resource}: {result.positives}/{result.total} detections")
|
||||
|
||||
def get_new_subdomains(self, max_depth: int) -> Set[str]:
|
||||
"""Get subdomains that haven't been processed yet and are within depth limit."""
|
||||
new_domains = set()
|
||||
for hostname in self.hostnames:
|
||||
if (hostname not in self.dns_records and
|
||||
self.depth_map.get(hostname, 0) < max_depth):
|
||||
new_domains.add(hostname)
|
||||
return new_domains
|
||||
|
||||
def get_stats(self) -> Dict[str, int]:
|
||||
"""Get current statistics."""
|
||||
return {
|
||||
'hostnames': len(self.hostnames),
|
||||
'ip_addresses': len(self.ip_addresses),
|
||||
'dns_records': sum(len(records) for records in self.dns_records.values()),
|
||||
'certificates': sum(len(certs) for certs in self.certificates.values()),
|
||||
'shodan_results': len(self.shodan_results),
|
||||
'virustotal_results': len(self.virustotal_results)
|
||||
}
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
"""Export data as a serializable dictionary."""
|
||||
logger.debug(f"Serializing ReconData with stats: {self.get_stats()}")
|
||||
|
||||
result = {
|
||||
'hostnames': sorted(list(self.hostnames)),
|
||||
'ip_addresses': sorted(list(self.ip_addresses)),
|
||||
'dns_records': {
|
||||
host: [record.to_dict() for record in records]
|
||||
for host, records in self.dns_records.items()
|
||||
},
|
||||
'reverse_dns': dict(self.reverse_dns),
|
||||
'certificates': {
|
||||
host: [cert.to_dict() for cert in certs]
|
||||
for host, certs in self.certificates.items()
|
||||
},
|
||||
'shodan_results': {
|
||||
ip: result.to_dict() for ip, result in self.shodan_results.items()
|
||||
},
|
||||
'virustotal_results': {
|
||||
resource: result.to_dict() for resource, result in self.virustotal_results.items()
|
||||
},
|
||||
'depth_map': dict(self.depth_map),
|
||||
'metadata': {
|
||||
'start_time': self.start_time.isoformat() if self.start_time else None,
|
||||
'end_time': self.end_time.isoformat() if self.end_time else None,
|
||||
'stats': self.get_stats()
|
||||
}
|
||||
}
|
||||
|
||||
logger.info(f"Serialized data contains: {len(result['hostnames'])} hostnames, "
|
||||
f"{len(result['ip_addresses'])} IPs, {len(result['shodan_results'])} Shodan results, "
|
||||
f"{len(result['virustotal_results'])} VirusTotal results")
|
||||
|
||||
return result
|
||||
|
||||
def to_json(self) -> str:
|
||||
"""Export data as JSON."""
|
||||
try:
|
||||
return json.dumps(self.to_dict(), indent=2, ensure_ascii=False)
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to serialize to JSON: {e}")
|
||||
# Return minimal JSON in case of error
|
||||
return json.dumps({
|
||||
'error': str(e),
|
||||
'stats': self.get_stats(),
|
||||
'timestamp': datetime.now().isoformat()
|
||||
}, indent=2)
|
||||
@@ -1,312 +0,0 @@
|
||||
# File: src/dns_resolver.py
|
||||
"""DNS resolution functionality with enhanced TLD testing."""
|
||||
|
||||
import dns.resolver
|
||||
import dns.reversename
|
||||
import dns.query
|
||||
import dns.zone
|
||||
from typing import List, Dict, Optional, Set
|
||||
import socket
|
||||
import time
|
||||
import logging
|
||||
from .data_structures import DNSRecord, ReconData
|
||||
from .config import Config
|
||||
|
||||
# Module logger
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class DNSResolver:
|
||||
"""DNS resolution and record lookup with optimized TLD testing."""
|
||||
|
||||
# All DNS record types to query
|
||||
RECORD_TYPES = [
|
||||
'A', 'AAAA', 'MX', 'NS', 'TXT', 'CNAME', 'SOA', 'PTR',
|
||||
'SRV', 'CAA', 'DNSKEY', 'DS', 'RRSIG', 'NSEC', 'NSEC3'
|
||||
]
|
||||
|
||||
def __init__(self, config: Config):
|
||||
self.config = config
|
||||
self.last_request = 0
|
||||
self.query_count = 0
|
||||
|
||||
logger.info(f"🌐 DNS resolver initialized with {len(config.DNS_SERVERS)} servers: {config.DNS_SERVERS}")
|
||||
logger.info(f"⚡ DNS rate limit: {config.DNS_RATE_LIMIT}/s, timeout: {config.DNS_TIMEOUT}s")
|
||||
|
||||
def _rate_limit(self):
|
||||
"""Apply rate limiting - more graceful for DNS servers."""
|
||||
now = time.time()
|
||||
time_since_last = now - self.last_request
|
||||
min_interval = 1.0 / self.config.DNS_RATE_LIMIT
|
||||
|
||||
if time_since_last < min_interval:
|
||||
sleep_time = min_interval - time_since_last
|
||||
# Only log if sleep is significant to reduce spam
|
||||
if sleep_time > 0.1:
|
||||
logger.debug(f"⏸️ DNS rate limiting: sleeping for {sleep_time:.2f}s")
|
||||
time.sleep(sleep_time)
|
||||
|
||||
self.last_request = time.time()
|
||||
self.query_count += 1
|
||||
|
||||
def resolve_hostname_fast(self, hostname: str) -> List[str]:
|
||||
"""Fast hostname resolution optimized for TLD testing."""
|
||||
ips = []
|
||||
|
||||
logger.debug(f"🚀 Fast resolving hostname: {hostname}")
|
||||
|
||||
# Use only the first DNS server and shorter timeout for TLD testing
|
||||
resolver = dns.resolver.Resolver()
|
||||
resolver.nameservers = [self.config.DNS_SERVERS[0]] # Use primary DNS only
|
||||
resolver.timeout = 2 # Shorter timeout for TLD testing
|
||||
resolver.lifetime = 2 # Total query time limit
|
||||
|
||||
try:
|
||||
# Try A records only for speed (most common)
|
||||
answers = resolver.resolve(hostname, 'A')
|
||||
for answer in answers:
|
||||
ips.append(str(answer))
|
||||
logger.debug(f"⚡ Fast A record for {hostname}: {answer}")
|
||||
except dns.resolver.NXDOMAIN:
|
||||
logger.debug(f"❌ NXDOMAIN for {hostname}")
|
||||
except dns.resolver.NoAnswer:
|
||||
logger.debug(f"⚠️ No A record for {hostname}")
|
||||
except dns.resolver.Timeout:
|
||||
logger.debug(f"⏱️ Timeout for {hostname}")
|
||||
except Exception as e:
|
||||
logger.debug(f"⚠️ Error fast resolving {hostname}: {e}")
|
||||
|
||||
if ips:
|
||||
logger.debug(f"⚡ Fast resolved {hostname} to {len(ips)} IPs: {ips}")
|
||||
|
||||
return ips
|
||||
|
||||
def resolve_hostname(self, hostname: str) -> List[str]:
|
||||
"""Resolve hostname to IP addresses (full resolution with retries)."""
|
||||
ips = []
|
||||
|
||||
logger.debug(f"🔍 Resolving hostname: {hostname}")
|
||||
|
||||
for dns_server in self.config.DNS_SERVERS:
|
||||
self._rate_limit()
|
||||
resolver = dns.resolver.Resolver()
|
||||
resolver.nameservers = [dns_server]
|
||||
resolver.timeout = self.config.DNS_TIMEOUT
|
||||
|
||||
try:
|
||||
# Try A records
|
||||
answers = resolver.resolve(hostname, 'A')
|
||||
for answer in answers:
|
||||
ips.append(str(answer))
|
||||
logger.debug(f"✅ A record for {hostname}: {answer}")
|
||||
except dns.resolver.NXDOMAIN:
|
||||
logger.debug(f"❌ NXDOMAIN for {hostname} A record on {dns_server}")
|
||||
except dns.resolver.NoAnswer:
|
||||
logger.debug(f"⚠️ No A record for {hostname} on {dns_server}")
|
||||
except Exception as e:
|
||||
logger.debug(f"⚠️ Error resolving A record for {hostname} on {dns_server}: {e}")
|
||||
|
||||
try:
|
||||
# Try AAAA records (IPv6)
|
||||
answers = resolver.resolve(hostname, 'AAAA')
|
||||
for answer in answers:
|
||||
ips.append(str(answer))
|
||||
logger.debug(f"✅ AAAA record for {hostname}: {answer}")
|
||||
except dns.resolver.NXDOMAIN:
|
||||
logger.debug(f"❌ NXDOMAIN for {hostname} AAAA record on {dns_server}")
|
||||
except dns.resolver.NoAnswer:
|
||||
logger.debug(f"⚠️ No AAAA record for {hostname} on {dns_server}")
|
||||
except Exception as e:
|
||||
logger.debug(f"⚠️ Error resolving AAAA record for {hostname} on {dns_server}: {e}")
|
||||
|
||||
unique_ips = list(set(ips))
|
||||
if unique_ips:
|
||||
logger.info(f"✅ Resolved {hostname} to {len(unique_ips)} unique IPs: {unique_ips}")
|
||||
else:
|
||||
logger.debug(f"❌ No IPs found for {hostname}")
|
||||
|
||||
return unique_ips
|
||||
|
||||
def get_all_dns_records(self, hostname: str) -> List[DNSRecord]:
|
||||
"""Get all DNS records for a hostname."""
|
||||
records = []
|
||||
successful_queries = 0
|
||||
|
||||
logger.debug(f"📋 Getting all DNS records for: {hostname}")
|
||||
|
||||
for record_type in self.RECORD_TYPES:
|
||||
type_found = False
|
||||
|
||||
for dns_server in self.config.DNS_SERVERS:
|
||||
self._rate_limit()
|
||||
resolver = dns.resolver.Resolver()
|
||||
resolver.nameservers = [dns_server]
|
||||
resolver.timeout = self.config.DNS_TIMEOUT
|
||||
|
||||
try:
|
||||
answers = resolver.resolve(hostname, record_type)
|
||||
for answer in answers:
|
||||
records.append(DNSRecord(
|
||||
record_type=record_type,
|
||||
value=str(answer),
|
||||
ttl=answers.ttl
|
||||
))
|
||||
if not type_found:
|
||||
logger.debug(f"✅ Found {record_type} record for {hostname}: {answer}")
|
||||
type_found = True
|
||||
|
||||
if not type_found:
|
||||
successful_queries += 1
|
||||
break # Found records, no need to query other DNS servers for this type
|
||||
|
||||
except dns.resolver.NXDOMAIN:
|
||||
logger.debug(f"❌ NXDOMAIN for {hostname} {record_type} on {dns_server}")
|
||||
break # Domain doesn't exist, no point checking other servers
|
||||
except dns.resolver.NoAnswer:
|
||||
logger.debug(f"⚠️ No {record_type} record for {hostname} on {dns_server}")
|
||||
continue # Try next DNS server
|
||||
except dns.resolver.Timeout:
|
||||
logger.debug(f"⏱️ Timeout for {hostname} {record_type} on {dns_server}")
|
||||
continue # Try next DNS server
|
||||
except Exception as e:
|
||||
logger.debug(f"⚠️ Error querying {record_type} for {hostname} on {dns_server}: {e}")
|
||||
continue # Try next DNS server
|
||||
|
||||
logger.info(f"📋 Found {len(records)} DNS records for {hostname} across {len(set(r.record_type for r in records))} record types")
|
||||
|
||||
# Log query statistics every 100 queries
|
||||
if self.query_count % 100 == 0:
|
||||
logger.info(f"📊 DNS query statistics: {self.query_count} total queries performed")
|
||||
|
||||
return records
|
||||
|
||||
def reverse_dns_lookup(self, ip: str) -> Optional[str]:
|
||||
"""Perform reverse DNS lookup."""
|
||||
logger.debug(f"🔍 Reverse DNS lookup for: {ip}")
|
||||
|
||||
try:
|
||||
self._rate_limit()
|
||||
hostname = socket.gethostbyaddr(ip)[0]
|
||||
logger.info(f"✅ Reverse DNS for {ip}: {hostname}")
|
||||
return hostname
|
||||
except socket.herror:
|
||||
logger.debug(f"❌ No reverse DNS for {ip}")
|
||||
return None
|
||||
except Exception as e:
|
||||
logger.debug(f"⚠️ Error in reverse DNS for {ip}: {e}")
|
||||
return None
|
||||
|
||||
def extract_subdomains_from_dns(self, records: List[DNSRecord]) -> Set[str]:
|
||||
"""Extract potential subdomains from DNS records."""
|
||||
subdomains = set()
|
||||
|
||||
logger.debug(f"🌿 Extracting subdomains from {len(records)} DNS records")
|
||||
|
||||
for record in records:
|
||||
value = record.value.lower()
|
||||
|
||||
# Extract from different record types
|
||||
try:
|
||||
if record.record_type == 'MX':
|
||||
# MX record format: "priority hostname"
|
||||
parts = value.split()
|
||||
if len(parts) >= 2:
|
||||
hostname = parts[-1].rstrip('.') # Take the last part (hostname)
|
||||
if self._is_valid_hostname(hostname):
|
||||
subdomains.add(hostname)
|
||||
logger.debug(f"🌿 Found subdomain from MX: {hostname}")
|
||||
|
||||
elif record.record_type in ['CNAME', 'NS']:
|
||||
# Direct hostname records
|
||||
hostname = value.rstrip('.')
|
||||
if self._is_valid_hostname(hostname):
|
||||
subdomains.add(hostname)
|
||||
logger.debug(f"🌿 Found subdomain from {record.record_type}: {hostname}")
|
||||
|
||||
elif record.record_type == 'TXT':
|
||||
# Search for domain-like strings in TXT records
|
||||
# Common patterns: include:example.com, v=spf1 include:_spf.google.com
|
||||
words = value.replace(',', ' ').replace(';', ' ').split()
|
||||
for word in words:
|
||||
# Look for include: patterns
|
||||
if word.startswith('include:'):
|
||||
hostname = word[8:].rstrip('.')
|
||||
if self._is_valid_hostname(hostname):
|
||||
subdomains.add(hostname)
|
||||
logger.debug(f"🌿 Found subdomain from TXT include: {hostname}")
|
||||
|
||||
# Look for other domain patterns
|
||||
elif '.' in word and not word.startswith('http'):
|
||||
clean_word = word.strip('",\'()[]{}').rstrip('.')
|
||||
if self._is_valid_hostname(clean_word):
|
||||
subdomains.add(clean_word)
|
||||
logger.debug(f"🌿 Found subdomain from TXT: {clean_word}")
|
||||
|
||||
elif record.record_type == 'SRV':
|
||||
# SRV record format: "priority weight port target"
|
||||
parts = value.split()
|
||||
if len(parts) >= 4:
|
||||
hostname = parts[-1].rstrip('.') # Target hostname
|
||||
if self._is_valid_hostname(hostname):
|
||||
subdomains.add(hostname)
|
||||
logger.debug(f"🌿 Found subdomain from SRV: {hostname}")
|
||||
|
||||
except Exception as e:
|
||||
logger.debug(f"⚠️ Error extracting subdomain from {record.record_type} record '{value}': {e}")
|
||||
continue
|
||||
|
||||
if subdomains:
|
||||
logger.info(f"🌿 Extracted {len(subdomains)} potential subdomains")
|
||||
else:
|
||||
logger.debug("❌ No subdomains extracted from DNS records")
|
||||
|
||||
return subdomains
|
||||
|
||||
def _is_valid_hostname(self, hostname: str) -> bool:
|
||||
"""Basic hostname validation."""
|
||||
if not hostname or len(hostname) > 255:
|
||||
return False
|
||||
|
||||
# Must contain at least one dot
|
||||
if '.' not in hostname:
|
||||
return False
|
||||
|
||||
# Must not be an IP address
|
||||
if self._looks_like_ip(hostname):
|
||||
return False
|
||||
|
||||
# Basic character check - allow international domains
|
||||
# Remove overly restrictive character filtering
|
||||
if not hostname.replace('-', '').replace('.', '').replace('_', '').isalnum():
|
||||
# Allow some special cases for internationalized domains
|
||||
try:
|
||||
hostname.encode('ascii')
|
||||
except UnicodeEncodeError:
|
||||
return False # Skip non-ASCII for now
|
||||
|
||||
# Must have reasonable length parts
|
||||
parts = hostname.split('.')
|
||||
if len(parts) < 2:
|
||||
return False
|
||||
|
||||
# Each part should be reasonable length
|
||||
for part in parts:
|
||||
if len(part) < 1 or len(part) > 63:
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
def _looks_like_ip(self, text: str) -> bool:
|
||||
"""Check if text looks like an IP address."""
|
||||
try:
|
||||
socket.inet_aton(text)
|
||||
return True
|
||||
except socket.error:
|
||||
pass
|
||||
|
||||
try:
|
||||
socket.inet_pton(socket.AF_INET6, text)
|
||||
return True
|
||||
except socket.error:
|
||||
pass
|
||||
|
||||
return False
|
||||
195
src/main.py
@@ -1,195 +0,0 @@
|
||||
# File: src/main.py
|
||||
"""Main CLI interface for the reconnaissance tool."""
|
||||
|
||||
import click
|
||||
import json
|
||||
import sys
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from .config import Config
|
||||
from .reconnaissance import ReconnaissanceEngine
|
||||
from .report_generator import ReportGenerator
|
||||
from .web_app import create_app
|
||||
|
||||
# Module logger
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@click.command()
|
||||
@click.argument('target', required=False)
|
||||
@click.option('--web', is_flag=True, help='Start web interface instead of CLI')
|
||||
@click.option('--shodan-key', help='Shodan API key')
|
||||
@click.option('--virustotal-key', help='VirusTotal API key')
|
||||
@click.option('--max-depth', default=2, help='Maximum recursion depth (default: 2)')
|
||||
@click.option('--output', '-o', help='Output file prefix (will create .json and .txt files)')
|
||||
@click.option('--json-only', is_flag=True, help='Only output JSON')
|
||||
@click.option('--text-only', is_flag=True, help='Only output text report')
|
||||
@click.option('--port', default=5000, help='Port for web interface (default: 5000)')
|
||||
@click.option('--verbose', '-v', is_flag=True, help='Enable verbose logging (DEBUG level)')
|
||||
@click.option('--quiet', '-q', is_flag=True, help='Quiet mode (WARNING level only)')
|
||||
def main(target, web, shodan_key, virustotal_key, max_depth, output, json_only, text_only, port, verbose, quiet):
|
||||
"""DNS Reconnaissance Tool
|
||||
|
||||
Examples:
|
||||
recon example.com # Scan example.com
|
||||
recon example # Try example.* for all TLDs
|
||||
recon example.com --max-depth 3 # Deeper recursion
|
||||
recon example.com -v # Verbose logging
|
||||
recon --web # Start web interface
|
||||
"""
|
||||
|
||||
# Determine log level
|
||||
if verbose:
|
||||
log_level = "DEBUG"
|
||||
elif quiet:
|
||||
log_level = "WARNING"
|
||||
else:
|
||||
log_level = "INFO"
|
||||
|
||||
# Create configuration and setup logging
|
||||
config = Config.from_args(shodan_key, virustotal_key, max_depth, log_level)
|
||||
config.setup_logging(cli_mode=True)
|
||||
|
||||
if web:
|
||||
# Start web interface
|
||||
logger.info("🌐 Starting web interface...")
|
||||
app = create_app(config)
|
||||
logger.info(f"🚀 Web interface starting on http://0.0.0.0:{port}")
|
||||
app.run(host='0.0.0.0', port=port, debug=False) # Changed debug to False to reduce noise
|
||||
return
|
||||
|
||||
if not target:
|
||||
click.echo("❌ Error: TARGET is required for CLI mode. Use --web for web interface.")
|
||||
sys.exit(1)
|
||||
|
||||
# Initialize reconnaissance engine
|
||||
logger.info("🔧 Initializing reconnaissance engine...")
|
||||
engine = ReconnaissanceEngine(config)
|
||||
|
||||
# Set up progress callback for CLI
|
||||
def progress_callback(message, percentage=None):
|
||||
if percentage is not None:
|
||||
click.echo(f"[{percentage:3d}%] {message}")
|
||||
else:
|
||||
click.echo(f" {message}")
|
||||
|
||||
engine.set_progress_callback(progress_callback)
|
||||
|
||||
# Display startup information
|
||||
click.echo("=" * 60)
|
||||
click.echo("🔍 DNS RECONNAISSANCE TOOL")
|
||||
click.echo("=" * 60)
|
||||
click.echo(f"🎯 Target: {target}")
|
||||
click.echo(f"📊 Max recursion depth: {max_depth}")
|
||||
click.echo(f"🌐 DNS servers: {', '.join(config.DNS_SERVERS[:3])}{'...' if len(config.DNS_SERVERS) > 3 else ''}")
|
||||
click.echo(f"⚡ DNS rate limit: {config.DNS_RATE_LIMIT}/s")
|
||||
|
||||
if shodan_key:
|
||||
click.echo("✅ Shodan integration enabled")
|
||||
logger.info(f"🕵️ Shodan API key provided (ends with: ...{shodan_key[-4:] if len(shodan_key) > 4 else shodan_key})")
|
||||
else:
|
||||
click.echo("⚠️ Shodan integration disabled (no API key)")
|
||||
|
||||
if virustotal_key:
|
||||
click.echo("✅ VirusTotal integration enabled")
|
||||
logger.info(f"🛡️ VirusTotal API key provided (ends with: ...{virustotal_key[-4:] if len(virustotal_key) > 4 else virustotal_key})")
|
||||
else:
|
||||
click.echo("⚠️ VirusTotal integration disabled (no API key)")
|
||||
|
||||
click.echo("")
|
||||
|
||||
# Run reconnaissance
|
||||
try:
|
||||
logger.info(f"🚀 Starting reconnaissance for target: {target}")
|
||||
data = engine.run_reconnaissance(target)
|
||||
|
||||
# Display final statistics
|
||||
stats = data.get_stats()
|
||||
click.echo("")
|
||||
click.echo("=" * 60)
|
||||
click.echo("📊 RECONNAISSANCE COMPLETE")
|
||||
click.echo("=" * 60)
|
||||
click.echo(f"🏠 Hostnames discovered: {stats['hostnames']}")
|
||||
click.echo(f"🌐 IP addresses found: {stats['ip_addresses']}")
|
||||
click.echo(f"📋 DNS records collected: {stats['dns_records']}")
|
||||
click.echo(f"📜 Certificates found: {stats['certificates']}")
|
||||
click.echo(f"🕵️ Shodan results: {stats['shodan_results']}")
|
||||
click.echo(f"🛡️ VirusTotal results: {stats['virustotal_results']}")
|
||||
|
||||
# Calculate and display timing
|
||||
if data.end_time and data.start_time:
|
||||
duration = data.end_time - data.start_time
|
||||
click.echo(f"⏱️ Total time: {duration}")
|
||||
|
||||
click.echo("")
|
||||
|
||||
# Generate reports
|
||||
logger.info("📄 Generating reports...")
|
||||
report_gen = ReportGenerator(data)
|
||||
|
||||
if output:
|
||||
# Save to files
|
||||
saved_files = []
|
||||
|
||||
if not text_only:
|
||||
json_file = f"{output}.json"
|
||||
try:
|
||||
json_content = data.to_json()
|
||||
with open(json_file, 'w', encoding='utf-8') as f:
|
||||
f.write(json_content)
|
||||
saved_files.append(json_file)
|
||||
logger.info(f"💾 JSON report saved: {json_file}")
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Failed to save JSON report: {e}")
|
||||
|
||||
if not json_only:
|
||||
text_file = f"{output}.txt"
|
||||
try:
|
||||
with open(text_file, 'w', encoding='utf-8') as f:
|
||||
f.write(report_gen.generate_text_report())
|
||||
saved_files.append(text_file)
|
||||
logger.info(f"💾 Text report saved: {text_file}")
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Failed to save text report: {e}")
|
||||
|
||||
if saved_files:
|
||||
click.echo(f"💾 Reports saved:")
|
||||
for file in saved_files:
|
||||
click.echo(f" 📄 {file}")
|
||||
|
||||
else:
|
||||
# Output to stdout
|
||||
if json_only:
|
||||
try:
|
||||
click.echo(data.to_json())
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Failed to generate JSON output: {e}")
|
||||
click.echo(f"Error generating JSON: {e}")
|
||||
elif text_only:
|
||||
try:
|
||||
click.echo(report_gen.generate_text_report())
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Failed to generate text report: {e}")
|
||||
click.echo(f"Error generating text report: {e}")
|
||||
else:
|
||||
# Default: show text report
|
||||
try:
|
||||
click.echo(report_gen.generate_text_report())
|
||||
click.echo(f"\n💡 To get JSON output, use: --json-only")
|
||||
click.echo(f"💡 To save reports, use: --output filename")
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Failed to generate report: {e}")
|
||||
click.echo(f"Error generating report: {e}")
|
||||
|
||||
except KeyboardInterrupt:
|
||||
logger.warning("⚠️ Reconnaissance interrupted by user")
|
||||
click.echo("\n⚠️ Reconnaissance interrupted by user.")
|
||||
sys.exit(1)
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Error during reconnaissance: {e}", exc_info=True)
|
||||
click.echo(f"❌ Error during reconnaissance: {e}")
|
||||
if verbose:
|
||||
raise # Re-raise in verbose mode to show full traceback
|
||||
sys.exit(1)
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
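The CLI above and the Flask app later in this diff drive the same engine through `set_progress_callback`. A hedged sketch of using the engine directly, assuming the package is importable as `src` and that `Config.from_args` takes the four positional arguments shown in the CLI above:

```python
from src.config import Config
from src.reconnaissance import ReconnaissanceEngine

config = Config.from_args(None, None, 1, "INFO")  # no API keys, recursion depth 1
config.setup_logging(cli_mode=True)

engine = ReconnaissanceEngine(config)

def on_progress(message, percentage=None):
    # The engine only supplies a percentage at coarse milestones (0, 5, 10, ..., 100).
    prefix = f"[{percentage:3d}%] " if percentage is not None else "       "
    print(prefix + message)

engine.set_progress_callback(on_progress)
data = engine.run_reconnaissance("example.com")
print(data.get_stats())
```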
@@ -1,400 +0,0 @@
|
||||
# File: src/reconnaissance.py
|
||||
"""Main reconnaissance logic with enhanced TLD expansion."""
|
||||
|
||||
import threading
|
||||
import concurrent.futures
|
||||
import logging
|
||||
from datetime import datetime
|
||||
from typing import Set, List, Optional, Tuple
|
||||
from .data_structures import ReconData
|
||||
from .config import Config
|
||||
from .dns_resolver import DNSResolver
|
||||
from .certificate_checker import CertificateChecker
|
||||
from .shodan_client import ShodanClient
|
||||
from .virustotal_client import VirusTotalClient
|
||||
from .tld_fetcher import TLDFetcher
|
||||
|
||||
# Set up logging for this module
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class ReconnaissanceEngine:
|
||||
"""Main reconnaissance engine with smart TLD expansion."""
|
||||
|
||||
def __init__(self, config: Config):
|
||||
self.config = config
|
||||
|
||||
# Initialize clients
|
||||
self.dns_resolver = DNSResolver(config)
|
||||
self.cert_checker = CertificateChecker(config)
|
||||
self.tld_fetcher = TLDFetcher()
|
||||
|
||||
# Optional clients
|
||||
self.shodan_client = None
|
||||
if config.shodan_key:
|
||||
self.shodan_client = ShodanClient(config.shodan_key, config)
|
||||
logger.info("✅ Shodan client initialized")
|
||||
else:
|
||||
logger.info("⚠️ Shodan API key not provided, skipping Shodan integration")
|
||||
|
||||
self.virustotal_client = None
|
||||
if config.virustotal_key:
|
||||
self.virustotal_client = VirusTotalClient(config.virustotal_key, config)
|
||||
logger.info("✅ VirusTotal client initialized")
|
||||
else:
|
||||
logger.info("⚠️ VirusTotal API key not provided, skipping VirusTotal integration")
|
||||
|
||||
# Progress tracking
|
||||
self.progress_callback = None
|
||||
self._lock = threading.Lock()
|
||||
|
||||
# Shared data object for live updates
|
||||
self.shared_data = None
|
||||
|
||||
def set_progress_callback(self, callback):
|
||||
"""Set callback for progress updates."""
|
||||
self.progress_callback = callback
|
||||
|
||||
def set_shared_data(self, shared_data: ReconData):
|
||||
"""Set shared data object for live updates during web interface usage."""
|
||||
self.shared_data = shared_data
|
||||
logger.info("📊 Using shared data object for live updates")
|
||||
|
||||
def _update_progress(self, message: str, percentage: int = None):
|
||||
"""Update progress if callback is set."""
|
||||
logger.info(f"Progress: {message} ({percentage}%)" if percentage else f"Progress: {message}")
|
||||
if self.progress_callback:
|
||||
self.progress_callback(message, percentage)
|
||||
|
||||
def run_reconnaissance(self, target: str) -> ReconData:
|
||||
"""Run full reconnaissance on target."""
|
||||
# Use shared data object if available, otherwise create new one
|
||||
if self.shared_data is not None:
|
||||
self.data = self.shared_data
|
||||
logger.info("📊 Using shared data object for reconnaissance")
|
||||
else:
|
||||
self.data = ReconData()
|
||||
logger.info("📊 Created new data object for reconnaissance")
|
||||
|
||||
self.data.start_time = datetime.now()
|
||||
|
||||
logger.info(f"🚀 Starting reconnaissance for target: {target}")
|
||||
logger.info(f"📊 Configuration: max_depth={self.config.max_depth}, "
|
||||
f"DNS_rate={self.config.DNS_RATE_LIMIT}/s")
|
||||
|
||||
try:
|
||||
# Determine if target is hostname.tld or just hostname
|
||||
if '.' in target:
|
||||
logger.info(f"🎯 Target '{target}' appears to be a full domain name")
|
||||
self._update_progress(f"Starting reconnaissance for {target}", 0)
|
||||
self.data.add_hostname(target, 0)
|
||||
initial_targets = {target}
|
||||
else:
|
||||
logger.info(f"🔍 Target '{target}' appears to be a hostname, expanding to all TLDs")
|
||||
self._update_progress(f"Expanding {target} to all TLDs", 5)
|
||||
initial_targets = self._expand_hostname_to_tlds_smart(target)
|
||||
logger.info(f"📋 Found {len(initial_targets)} valid domains after TLD expansion")
|
||||
|
||||
self._update_progress("Resolving initial targets", 10)
|
||||
|
||||
# Process all targets recursively
|
||||
self._process_targets_recursively(initial_targets)
|
||||
|
||||
# Final external lookups
|
||||
self._update_progress("Performing external service lookups", 90)
|
||||
self._perform_external_lookups()
|
||||
|
||||
# Log final statistics
|
||||
stats = self.data.get_stats()
|
||||
logger.info(f"📈 Final statistics: {stats}")
|
||||
|
||||
self._update_progress("Reconnaissance complete", 100)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Error during reconnaissance: {e}", exc_info=True)
|
||||
raise
|
||||
finally:
|
||||
self.data.end_time = datetime.now()
|
||||
duration = self.data.end_time - self.data.start_time
|
||||
logger.info(f"⏱️ Total reconnaissance time: {duration}")
|
||||
|
||||
return self.data
|
||||
|
||||
def _expand_hostname_to_tlds_smart(self, hostname: str) -> Set[str]:
|
||||
"""Smart TLD expansion with prioritization and parallel processing."""
|
||||
logger.info(f"🌐 Starting smart TLD expansion for hostname: {hostname}")
|
||||
|
||||
# Get prioritized TLD lists
|
||||
priority_tlds, normal_tlds, deprioritized_tlds = self.tld_fetcher.get_prioritized_tlds()
|
||||
|
||||
logger.info(f"📊 TLD categories: {len(priority_tlds)} priority, "
|
||||
f"{len(normal_tlds)} normal, {len(deprioritized_tlds)} deprioritized")
|
||||
|
||||
valid_domains = set()
|
||||
|
||||
# Phase 1: Check priority TLDs first (parallel processing)
|
||||
logger.info("🚀 Phase 1: Checking priority TLDs...")
|
||||
priority_results = self._check_tlds_parallel(hostname, priority_tlds, "priority")
|
||||
valid_domains.update(priority_results)
|
||||
|
||||
self._update_progress(f"Phase 1 complete: {len(priority_results)} priority TLD matches", 6)
|
||||
|
||||
# Phase 2: Check normal TLDs (if we found fewer than 5 results)
|
||||
if len(valid_domains) < 5:
|
||||
logger.info("🔍 Phase 2: Checking normal TLDs...")
|
||||
normal_results = self._check_tlds_parallel(hostname, normal_tlds, "normal")
|
||||
valid_domains.update(normal_results)
|
||||
|
||||
self._update_progress(f"Phase 2 complete: {len(normal_results)} normal TLD matches", 8)
|
||||
else:
|
||||
logger.info(f"⏭️ Skipping normal TLDs (found {len(valid_domains)} matches in priority)")
|
||||
|
||||
# Phase 3: Check deprioritized TLDs only if we found very few results
|
||||
if len(valid_domains) < 2:
|
||||
logger.info("🔍 Phase 3: Checking deprioritized TLDs (limited results so far)...")
|
||||
depri_results = self._check_tlds_parallel(hostname, deprioritized_tlds, "deprioritized")
|
||||
valid_domains.update(depri_results)
|
||||
|
||||
self._update_progress(f"Phase 3 complete: {len(depri_results)} deprioritized TLD matches", 9)
|
||||
else:
|
||||
logger.info(f"⏭️ Skipping deprioritized TLDs (found {len(valid_domains)} matches already)")
|
||||
|
||||
logger.info(f"🎯 Smart TLD expansion complete: found {len(valid_domains)} valid domains")
|
||||
return valid_domains
|
||||
|
||||
def _check_tlds_parallel(self, hostname: str, tlds: List[str], phase_name: str) -> Set[str]:
|
||||
"""Check TLDs in parallel with optimized settings."""
|
||||
valid_domains = set()
|
||||
tested_count = 0
|
||||
wildcard_detected = set()
|
||||
|
||||
# Use thread pool for parallel processing
|
||||
max_workers = min(20, len(tlds)) # Limit concurrent requests
|
||||
|
||||
logger.info(f"⚡ Starting parallel check of {len(tlds)} {phase_name} TLDs "
|
||||
f"with {max_workers} workers")
|
||||
|
||||
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
|
||||
# Submit all tasks
|
||||
future_to_tld = {
|
||||
executor.submit(self._check_single_tld, hostname, tld): tld
|
||||
for tld in tlds
|
||||
}
|
||||
|
||||
# Process results as they complete
|
||||
for future in concurrent.futures.as_completed(future_to_tld):
|
||||
tld = future_to_tld[future]
|
||||
tested_count += 1
|
||||
|
||||
try:
|
||||
result = future.result(timeout=10) # 10 second timeout per future
|
||||
|
||||
if result:
|
||||
full_hostname, ips = result
|
||||
|
||||
|
||||
logger.info(f"✅ Valid domain found: {full_hostname} -> {ips}")
|
||||
self.data.add_hostname(full_hostname, 0)
|
||||
valid_domains.add(full_hostname)
|
||||
|
||||
for ip in ips:
|
||||
self.data.add_ip_address(ip)
|
||||
|
||||
# Progress update every 50 TLDs in this phase
|
||||
if tested_count % 50 == 0:
|
||||
logger.info(f"📊 {phase_name.title()} phase progress: "
|
||||
f"{tested_count}/{len(tlds)} tested, "
|
||||
f"{len(valid_domains)} found")
|
||||
|
||||
except concurrent.futures.TimeoutError:
|
||||
logger.debug(f"⏱️ Timeout checking {hostname}.{tld}")
|
||||
except Exception as e:
|
||||
logger.debug(f"⚠️ Error checking {hostname}.{tld}: {e}")
|
||||
|
||||
logger.info(f"📊 {phase_name.title()} phase complete: "
|
||||
f"tested {tested_count} TLDs, found {len(valid_domains)} valid domains, "
|
||||
f"detected {len(wildcard_detected)} wildcards")
|
||||
|
||||
return valid_domains
|
||||
|
||||
def _check_single_tld(self, hostname: str, tld: str) -> Optional[Tuple[str, List[str]]]:
|
||||
"""Check a single TLD combination with optimized DNS resolution."""
|
||||
full_hostname = f"{hostname}.{tld}"
|
||||
|
||||
# Use faster DNS resolution with shorter timeout for TLD testing
|
||||
ips = self.dns_resolver.resolve_hostname_fast(full_hostname)
|
||||
|
||||
if ips:
|
||||
logger.debug(f"✅ {full_hostname} -> {ips}")
|
||||
return (full_hostname, ips)
|
||||
|
||||
return None
|
||||
|
||||
def _process_targets_recursively(self, targets: Set[str]):
|
||||
"""Process targets with recursive subdomain discovery."""
|
||||
current_depth = 0
|
||||
|
||||
while current_depth <= self.config.max_depth and targets:
|
||||
logger.info(f"🔄 Processing depth {current_depth} with {len(targets)} targets")
|
||||
self._update_progress(f"Processing depth {current_depth} ({len(targets)} targets)", 15 + (current_depth * 25))
|
||||
|
||||
new_targets = set()
|
||||
|
||||
for target in targets:
|
||||
logger.debug(f"🎯 Processing target: {target}")
|
||||
|
||||
# DNS resolution and record gathering
|
||||
self._process_single_target(target, current_depth)
|
||||
|
||||
# Extract new subdomains
|
||||
if current_depth < self.config.max_depth:
|
||||
new_subdomains = self._extract_new_subdomains(target)
|
||||
logger.debug(f"🌿 Found {len(new_subdomains)} new subdomains from {target}")
|
||||
|
||||
for subdomain in new_subdomains:
|
||||
self.data.add_hostname(subdomain, current_depth + 1)
|
||||
new_targets.add(subdomain)
|
||||
|
||||
logger.info(f"📊 Depth {current_depth} complete. Found {len(new_targets)} new targets for next depth")
|
||||
targets = new_targets
|
||||
current_depth += 1
|
||||
|
||||
logger.info(f"🏁 Recursive processing complete after {current_depth} levels")
|
||||
|
||||
def _process_single_target(self, hostname: str, depth: int):
|
||||
"""Process a single target hostname."""
|
||||
logger.debug(f"🎯 Processing single target: {hostname} at depth {depth}")
|
||||
|
||||
# Get all DNS records
|
||||
dns_records = self.dns_resolver.get_all_dns_records(hostname)
|
||||
logger.debug(f"📋 Found {len(dns_records)} DNS records for {hostname}")
|
||||
|
||||
for record in dns_records:
|
||||
self.data.add_dns_record(hostname, record)
|
||||
|
||||
# Extract IP addresses from A and AAAA records
|
||||
if record.record_type in ['A', 'AAAA']:
|
||||
self.data.add_ip_address(record.value)
|
||||
|
||||
# Get certificates
|
||||
logger.debug(f"🔍 Checking certificates for {hostname}")
|
||||
certificates = self.cert_checker.get_certificates(hostname)
|
||||
if certificates:
|
||||
self.data.certificates[hostname] = certificates
|
||||
logger.info(f"📜 Found {len(certificates)} certificates for {hostname}")
|
||||
else:
|
||||
logger.debug(f"❌ No certificates found for {hostname}")
|
||||
|
||||
def _extract_new_subdomains(self, hostname: str) -> Set[str]:
|
||||
"""Extract new subdomains from DNS records and certificates."""
|
||||
new_subdomains = set()
|
||||
|
||||
# From DNS records
|
||||
if hostname in self.data.dns_records:
|
||||
dns_subdomains = self.dns_resolver.extract_subdomains_from_dns(
|
||||
self.data.dns_records[hostname]
|
||||
)
|
||||
new_subdomains.update(dns_subdomains)
|
||||
logger.debug(f"🌐 Extracted {len(dns_subdomains)} subdomains from DNS records of {hostname}")
|
||||
|
||||
# From certificates
|
||||
if hostname in self.data.certificates:
|
||||
cert_subdomains = self.cert_checker.extract_subdomains_from_certificates(
|
||||
self.data.certificates[hostname]
|
||||
)
|
||||
new_subdomains.update(cert_subdomains)
|
||||
logger.debug(f"🔍 Extracted {len(cert_subdomains)} subdomains from certificates of {hostname}")
|
||||
|
||||
# Filter out already known hostnames
|
||||
filtered_subdomains = new_subdomains - self.data.hostnames
|
||||
logger.debug(f"🆕 {len(filtered_subdomains)} new subdomains after filtering")
|
||||
|
||||
return filtered_subdomains
|
||||
|
||||
def _perform_external_lookups(self):
|
||||
"""Perform Shodan and VirusTotal lookups."""
|
||||
logger.info(f"🔍 Starting external lookups for {len(self.data.ip_addresses)} IPs and {len(self.data.hostnames)} hostnames")
|
||||
|
||||
# Reverse DNS for all IPs
|
||||
logger.info("🔄 Performing reverse DNS lookups")
|
||||
reverse_dns_count = 0
|
||||
for ip in self.data.ip_addresses:
|
||||
reverse = self.dns_resolver.reverse_dns_lookup(ip)
|
||||
if reverse:
|
||||
self.data.reverse_dns[ip] = reverse
|
||||
reverse_dns_count += 1
|
||||
logger.debug(f"🔙 Reverse DNS for {ip}: {reverse}")
|
||||
|
||||
logger.info(f"✅ Completed reverse DNS: {reverse_dns_count}/{len(self.data.ip_addresses)} successful")
|
||||
|
||||
# Shodan lookups
|
||||
if self.shodan_client:
|
||||
logger.info(f"🕵️ Starting Shodan lookups for {len(self.data.ip_addresses)} IPs")
|
||||
shodan_success_count = 0
|
||||
|
||||
for ip in self.data.ip_addresses:
|
||||
try:
|
||||
logger.debug(f"🔍 Querying Shodan for IP: {ip}")
|
||||
result = self.shodan_client.lookup_ip(ip)
|
||||
if result:
|
||||
self.data.add_shodan_result(ip, result)
|
||||
shodan_success_count += 1
|
||||
logger.info(f"✅ Shodan result for {ip}: {len(result.ports)} ports")
|
||||
else:
|
||||
logger.debug(f"❌ No Shodan data for {ip}")
|
||||
except Exception as e:
|
||||
logger.warning(f"⚠️ Error querying Shodan for {ip}: {e}")
|
||||
|
||||
logger.info(f"✅ Shodan lookups complete: {shodan_success_count}/{len(self.data.ip_addresses)} successful")
|
||||
else:
|
||||
logger.info("⚠️ Skipping Shodan lookups (no API key)")
|
||||
|
||||
# VirusTotal lookups
|
||||
if self.virustotal_client:
|
||||
total_resources = len(self.data.ip_addresses) + len(self.data.hostnames)
|
||||
logger.info(f"🛡️ Starting VirusTotal lookups for {total_resources} resources")
|
||||
vt_success_count = 0
|
||||
|
||||
# Check IPs
|
||||
for ip in self.data.ip_addresses:
|
||||
try:
|
||||
logger.debug(f"🔍 Querying VirusTotal for IP: {ip}")
|
||||
result = self.virustotal_client.lookup_ip(ip)
|
||||
if result:
|
||||
self.data.add_virustotal_result(ip, result)
|
||||
vt_success_count += 1
|
||||
logger.info(f"🛡️ VirusTotal result for {ip}: {result.positives}/{result.total} detections")
|
||||
else:
|
||||
logger.debug(f"❌ No VirusTotal data for {ip}")
|
||||
except Exception as e:
|
||||
logger.warning(f"⚠️ Error querying VirusTotal for IP {ip}: {e}")
|
||||
|
||||
# Check domains
|
||||
for hostname in self.data.hostnames:
|
||||
try:
|
||||
logger.debug(f"🔍 Querying VirusTotal for domain: {hostname}")
|
||||
result = self.virustotal_client.lookup_domain(hostname)
|
||||
if result:
|
||||
self.data.add_virustotal_result(hostname, result)
|
||||
vt_success_count += 1
|
||||
logger.info(f"🛡️ VirusTotal result for {hostname}: {result.positives}/{result.total} detections")
|
||||
else:
|
||||
logger.debug(f"❌ No VirusTotal data for {hostname}")
|
||||
except Exception as e:
|
||||
logger.warning(f"⚠️ Error querying VirusTotal for domain {hostname}: {e}")
|
||||
|
||||
logger.info(f"✅ VirusTotal lookups complete: {vt_success_count}/{total_resources} successful")
|
||||
else:
|
||||
logger.info("⚠️ Skipping VirusTotal lookups (no API key)")
|
||||
|
||||
# Final external lookup summary
|
||||
ext_stats = {
|
||||
'reverse_dns': len(self.data.reverse_dns),
|
||||
'shodan_results': len(self.data.shodan_results),
|
||||
'virustotal_results': len(self.data.virustotal_results)
|
||||
}
|
||||
logger.info(f"📊 External lookups summary: {ext_stats}")
|
||||
|
||||
# Keep the original method name for backward compatibility
|
||||
def _expand_hostname_to_tlds(self, hostname: str) -> Set[str]:
|
||||
"""Legacy method - redirects to smart expansion."""
|
||||
return self._expand_hostname_to_tlds_smart(hostname)
|
||||
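Stripped of logging and threading, the TLD expansion above is a three-stage funnel: priority TLDs are always probed, normal TLDs only when fewer than 5 matches were found, and deprioritized TLDs only when fewer than 2 matches exist overall. A simplified sketch of just that control flow (the `check` callable stands in for the parallel DNS probe):

```python
from typing import Callable, Iterable, Set

def phased_expand(hostname: str,
                  priority: Iterable[str],
                  normal: Iterable[str],
                  deprioritized: Iterable[str],
                  check: Callable[[str], bool]) -> Set[str]:
    """Return every '<hostname>.<tld>' that resolves, cheapest phase first."""
    found = {f"{hostname}.{tld}" for tld in priority if check(f"{hostname}.{tld}")}
    if len(found) < 5:   # same cutoff the engine uses before the normal phase
        found |= {f"{hostname}.{tld}" for tld in normal if check(f"{hostname}.{tld}")}
    if len(found) < 2:   # deprioritized TLDs are only probed when results stay scarce
        found |= {f"{hostname}.{tld}" for tld in deprioritized if check(f"{hostname}.{tld}")}
    return found
```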
@@ -1,111 +0,0 @@
|
||||
# File: src/report_generator.py
|
||||
"""Generate reports from reconnaissance data."""
|
||||
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any
|
||||
from .data_structures import ReconData
|
||||
|
||||
class ReportGenerator:
|
||||
"""Generate various report formats."""
|
||||
|
||||
def __init__(self, data: ReconData):
|
||||
self.data = data
|
||||
|
||||
def generate_text_report(self) -> str:
|
||||
"""Generate comprehensive text report."""
|
||||
report = []
|
||||
|
||||
# Header
|
||||
report.append("="*80)
|
||||
report.append("DNS RECONNAISSANCE REPORT")
|
||||
report.append("="*80)
|
||||
report.append(f"Start Time: {self.data.start_time}")
|
||||
report.append(f"End Time: {self.data.end_time}")
|
||||
if self.data.end_time:
|
||||
duration = self.data.end_time - self.data.start_time
|
||||
report.append(f"Duration: {duration}")
|
||||
report.append("")
|
||||
|
||||
# Summary
|
||||
report.append("SUMMARY")
|
||||
report.append("-" * 40)
|
||||
report.append(f"Total Hostnames Discovered: {len(self.data.hostnames)}")
|
||||
report.append(f"Total IP Addresses Found: {len(self.data.ip_addresses)}")
|
||||
report.append(f"Total DNS Records: {sum(len(records) for records in self.data.dns_records.values())}")
|
||||
report.append(f"Total Certificates Found: {sum(len(certs) for certs in self.data.certificates.values())}")
|
||||
report.append("")
|
||||
|
||||
# Hostnames by depth
|
||||
report.append("HOSTNAMES BY DISCOVERY DEPTH")
|
||||
report.append("-" * 40)
|
||||
depth_groups = {}
|
||||
for hostname, depth in self.data.depth_map.items():
|
||||
if depth not in depth_groups:
|
||||
depth_groups[depth] = []
|
||||
depth_groups[depth].append(hostname)
|
||||
|
||||
for depth in sorted(depth_groups.keys()):
|
||||
report.append(f"Depth {depth}: {len(depth_groups[depth])} hostnames")
|
||||
for hostname in sorted(depth_groups[depth]):
|
||||
report.append(f" - {hostname}")
|
||||
report.append("")
|
||||
|
||||
# IP Addresses
|
||||
report.append("IP ADDRESSES")
|
||||
report.append("-" * 40)
|
||||
for ip in sorted(self.data.ip_addresses):
|
||||
report.append(f"{ip}")
|
||||
if ip in self.data.reverse_dns:
|
||||
report.append(f" Reverse DNS: {self.data.reverse_dns[ip]}")
|
||||
if ip in self.data.shodan_results:
|
||||
shodan = self.data.shodan_results[ip]
|
||||
report.append(f" Shodan: {len(shodan.ports)} open ports")
|
||||
if shodan.organization:
|
||||
report.append(f" Organization: {shodan.organization}")
|
||||
if shodan.country:
|
||||
report.append(f" Country: {shodan.country}")
|
||||
report.append("")
|
||||
|
||||
# DNS Records
|
||||
report.append("DNS RECORDS")
|
||||
report.append("-" * 40)
|
||||
for hostname in sorted(self.data.dns_records.keys()):
|
||||
report.append(f"{hostname}:")
|
||||
records_by_type = {}
|
||||
for record in self.data.dns_records[hostname]:
|
||||
if record.record_type not in records_by_type:
|
||||
records_by_type[record.record_type] = []
|
||||
records_by_type[record.record_type].append(record)
|
||||
|
||||
for record_type in sorted(records_by_type.keys()):
|
||||
report.append(f" {record_type}:")
|
||||
for record in records_by_type[record_type]:
|
||||
report.append(f" {record.value}")
|
||||
report.append("")
|
||||
|
||||
# Certificates
|
||||
if self.data.certificates:
|
||||
report.append("CERTIFICATES")
|
||||
report.append("-" * 40)
|
||||
for hostname in sorted(self.data.certificates.keys()):
|
||||
report.append(f"{hostname}:")
|
||||
for cert in self.data.certificates[hostname]:
|
||||
report.append(f" Certificate ID: {cert.id}")
|
||||
report.append(f" Issuer: {cert.issuer}")
|
||||
report.append(f" Valid From: {cert.not_before}")
|
||||
report.append(f" Valid Until: {cert.not_after}")
|
||||
if cert.is_wildcard:
|
||||
report.append(f" Type: Wildcard Certificate")
|
||||
report.append("")
|
||||
|
||||
# Security Analysis
|
||||
if self.data.virustotal_results:
|
||||
report.append("SECURITY ANALYSIS")
|
||||
report.append("-" * 40)
|
||||
for resource, result in self.data.virustotal_results.items():
|
||||
if result.positives > 0:
|
||||
report.append(f"⚠️ {resource}: {result.positives}/{result.total} detections")
|
||||
report.append(f" Scan Date: {result.scan_date}")
|
||||
report.append(f" Report: {result.permalink}")
|
||||
|
||||
return "\n".join(report)
|
||||
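Typical use of the generator above, assuming `data` is a populated `ReconData` instance returned by the engine:

```python
from src.report_generator import ReportGenerator

report = ReportGenerator(data).generate_text_report()
with open("example-report.txt", "w", encoding="utf-8") as fh:
    fh.write(report)
print(report.splitlines()[1])  # "DNS RECONNAISSANCE REPORT"
```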
@@ -1,166 +0,0 @@
|
||||
# File: src/shodan_client.py
|
||||
"""Shodan API integration."""
|
||||
|
||||
import requests
|
||||
import time
|
||||
import logging
|
||||
from typing import Optional, Dict, Any, List
|
||||
from .data_structures import ShodanResult
|
||||
from .config import Config
|
||||
|
||||
# Module logger
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class ShodanClient:
|
||||
"""Shodan API client."""
|
||||
|
||||
BASE_URL = "https://api.shodan.io"
|
||||
|
||||
def __init__(self, api_key: str, config: Config):
|
||||
self.api_key = api_key
|
||||
self.config = config
|
||||
self.last_request = 0
|
||||
|
||||
logger.info(f"🕵️ Shodan client initialized with API key ending in: ...{api_key[-4:] if len(api_key) > 4 else api_key}")
|
||||
|
||||
def _rate_limit(self):
|
||||
"""Apply rate limiting for Shodan."""
|
||||
now = time.time()
|
||||
time_since_last = now - self.last_request
|
||||
min_interval = 1.0 / self.config.SHODAN_RATE_LIMIT
|
||||
|
||||
if time_since_last < min_interval:
|
||||
sleep_time = min_interval - time_since_last
|
||||
logger.debug(f"⏸️ Shodan rate limiting: sleeping for {sleep_time:.2f}s")
|
||||
time.sleep(sleep_time)
|
||||
|
||||
self.last_request = time.time()
|
||||
|
||||
def lookup_ip(self, ip: str) -> Optional[ShodanResult]:
|
||||
"""Lookup IP address information."""
|
||||
self._rate_limit()
|
||||
|
||||
logger.debug(f"🔍 Querying Shodan for IP: {ip}")
|
||||
|
||||
try:
|
||||
url = f"{self.BASE_URL}/shodan/host/{ip}"
|
||||
params = {'key': self.api_key}
|
||||
|
||||
response = requests.get(
|
||||
url,
|
||||
params=params,
|
||||
timeout=self.config.HTTP_TIMEOUT,
|
||||
headers={'User-Agent': 'DNS-Recon-Tool/1.0'}
|
||||
)
|
||||
|
||||
logger.debug(f"📡 Shodan API response for {ip}: {response.status_code}")
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
|
||||
ports = []
|
||||
services = {}
|
||||
|
||||
for service in data.get('data', []):
|
||||
port = service.get('port')
|
||||
if port:
|
||||
ports.append(port)
|
||||
services[str(port)] = {
|
||||
'product': service.get('product', ''),
|
||||
'version': service.get('version', ''),
|
||||
'banner': service.get('data', '').strip()[:200] if service.get('data') else ''
|
||||
}
|
||||
|
||||
result = ShodanResult(
|
||||
ip=ip,
|
||||
ports=sorted(list(set(ports))),
|
||||
services=services,
|
||||
organization=data.get('org'),
|
||||
country=data.get('country_name')
|
||||
)
|
||||
|
||||
logger.info(f"✅ Shodan result for {ip}: {len(result.ports)} ports, org: {result.organization}")
|
||||
return result
|
||||
|
||||
elif response.status_code == 404:
|
||||
logger.debug(f"ℹ️ IP {ip} not found in Shodan database")
|
||||
return None
|
||||
elif response.status_code == 401:
|
||||
logger.error("❌ Shodan API key is invalid or expired")
|
||||
return None
|
||||
elif response.status_code == 429:
|
||||
logger.warning("⚠️ Shodan API rate limit exceeded")
|
||||
return None
|
||||
else:
|
||||
logger.warning(f"⚠️ Shodan API error for {ip}: HTTP {response.status_code}")
|
||||
try:
|
||||
error_data = response.json()
|
||||
logger.debug(f"Shodan error details: {error_data}")
|
||||
except:
|
||||
pass
|
||||
return None
|
||||
|
||||
except requests.exceptions.Timeout:
|
||||
logger.warning(f"⏱️ Shodan query timeout for {ip}")
|
||||
return None
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"🌐 Shodan network error for {ip}: {e}")
|
||||
return None
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Unexpected error querying Shodan for {ip}: {e}")
|
||||
return None
|
||||
|
||||
def search_domain(self, domain: str) -> List[str]:
|
||||
"""Search for IPs associated with a domain."""
|
||||
self._rate_limit()
|
||||
|
||||
logger.debug(f"🔍 Searching Shodan for domain: {domain}")
|
||||
|
||||
try:
|
||||
url = f"{self.BASE_URL}/shodan/host/search"
|
||||
params = {
|
||||
'key': self.api_key,
|
||||
'query': f'hostname:{domain}',
|
||||
'limit': 100
|
||||
}
|
||||
|
||||
response = requests.get(
|
||||
url,
|
||||
params=params,
|
||||
timeout=self.config.HTTP_TIMEOUT,
|
||||
headers={'User-Agent': 'DNS-Recon-Tool/1.0'}
|
||||
)
|
||||
|
||||
logger.debug(f"📡 Shodan search response for {domain}: {response.status_code}")
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
ips = []
|
||||
|
||||
for match in data.get('matches', []):
|
||||
ip = match.get('ip_str')
|
||||
if ip:
|
||||
ips.append(ip)
|
||||
|
||||
unique_ips = list(set(ips))
|
||||
logger.info(f"🔍 Shodan search for {domain} found {len(unique_ips)} unique IPs")
|
||||
return unique_ips
|
||||
elif response.status_code == 401:
|
||||
logger.error("❌ Shodan API key is invalid for search")
|
||||
return []
|
||||
elif response.status_code == 429:
|
||||
logger.warning("⚠️ Shodan search rate limit exceeded")
|
||||
return []
|
||||
else:
|
||||
logger.warning(f"⚠️ Shodan search error for {domain}: HTTP {response.status_code}")
|
||||
return []
|
||||
|
||||
except requests.exceptions.Timeout:
|
||||
logger.warning(f"⏱️ Shodan search timeout for {domain}")
|
||||
return []
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"🌐 Shodan search network error for {domain}: {e}")
|
||||
return []
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Unexpected error searching Shodan for {domain}: {e}")
|
||||
return []
|
||||
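Both the Shodan client above and the VirusTotal client below repeat the same interval-based throttle: sleep until at least `1 / rate_limit` seconds have elapsed since the previous request. A reusable sketch of that pattern (a hypothetical helper, not part of this diff):

```python
import time

class IntervalLimiter:
    """Space calls at least 1/rate seconds apart (single-threaded use)."""

    def __init__(self, rate_per_second: float):
        self.min_interval = 1.0 / rate_per_second
        self.last_request = 0.0

    def wait(self) -> None:
        elapsed = time.time() - self.last_request
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request = time.time()

limiter = IntervalLimiter(rate_per_second=1.0)  # e.g. one request per second
for ip in ("192.0.2.1", "192.0.2.2"):
    limiter.wait()
    # ... issue the API request for `ip` here ...
```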
@@ -1,213 +0,0 @@
|
||||
# File: src/tld_fetcher.py
|
||||
"""Fetch and cache IANA TLD list with smart prioritization."""
|
||||
|
||||
import requests
|
||||
import logging
|
||||
from typing import List, Set, Optional, Tuple
|
||||
import os
|
||||
import time
|
||||
|
||||
# Module logger
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class TLDFetcher:
|
||||
"""Fetches and caches IANA TLD list with smart prioritization."""
|
||||
|
||||
IANA_TLD_URL = "https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
|
||||
CACHE_FILE = "tlds_cache.txt"
|
||||
CACHE_DURATION = 86400 # 24 hours in seconds
|
||||
|
||||
# Common TLDs that should be checked first (high success rate)
|
||||
PRIORITY_TLDS = {
|
||||
# Generic top-level domains (most common)
|
||||
'com', 'org', 'net', 'edu', 'gov', 'mil', 'int', 'info', 'biz', 'name',
|
||||
'io', 'co', 'me', 'tv', 'cc', 'ly', 'to', 'us', 'uk', 'ca',
|
||||
|
||||
# Major country codes (high usage)
|
||||
'de', 'fr', 'it', 'es', 'nl', 'be', 'ch', 'at', 'se', 'no', 'dk', 'fi',
|
||||
'au', 'nz', 'jp', 'kr', 'cn', 'hk', 'sg', 'my', 'th', 'in', 'br', 'mx',
|
||||
'ru', 'pl', 'cz', 'hu', 'ro', 'bg', 'hr', 'si', 'sk', 'lt', 'lv', 'ee',
|
||||
'ie', 'pt', 'gr', 'cy', 'mt', 'lu', 'is', 'tr', 'il', 'za', 'ng', 'eg',
|
||||
|
||||
# Popular new gTLDs (established, not spam-prone)
|
||||
'app', 'dev', 'tech', 'blog', 'news', 'shop', 'store', 'cloud', 'digital',
|
||||
'website', 'site', 'online', 'world', 'global', 'international'
|
||||
}
|
||||
|
||||
# TLDs to deprioritize (often have wildcard DNS or low-quality domains)
|
||||
DEPRIORITIZED_PATTERNS = [
|
||||
'xn--', # Internationalized domain names (often less common)
|
||||
# These TLDs are known for high wildcard/parking rates
|
||||
'tk', 'ml', 'ga', 'cf', # Free TLDs often misused
|
||||
'top', 'win', 'download', 'stream', 'science', 'click', 'link',
|
||||
'loan', 'men', 'racing', 'review', 'party', 'trade', 'date',
|
||||
'cricket', 'accountant', 'faith', 'gdn', 'realtor'
|
||||
]
|
||||
|
||||
def __init__(self):
|
||||
self._tlds: Optional[Set[str]] = None
|
||||
self._prioritized_tlds: Optional[Tuple[List[str], List[str], List[str]]] = None
|
||||
logger.info("🌐 TLD fetcher initialized with smart prioritization")
|
||||
|
||||
def get_tlds(self) -> Set[str]:
|
||||
"""Get list of TLDs, using cache if available."""
|
||||
if self._tlds is None:
|
||||
logger.debug("🔍 Loading TLD list...")
|
||||
self._tlds = self._load_tlds()
|
||||
logger.info(f"✅ Loaded {len(self._tlds)} TLDs")
|
||||
return self._tlds
|
||||
|
||||
def get_prioritized_tlds(self) -> Tuple[List[str], List[str], List[str]]:
|
||||
"""Get TLDs sorted by priority: (priority, normal, deprioritized)."""
|
||||
if self._prioritized_tlds is None:
|
||||
all_tlds = self.get_tlds()
|
||||
logger.debug("📊 Categorizing TLDs by priority...")
|
||||
|
||||
priority_list = []
|
||||
normal_list = []
|
||||
deprioritized_list = []
|
||||
|
||||
for tld in all_tlds:
|
||||
tld_lower = tld.lower()
|
||||
|
||||
if tld_lower in self.PRIORITY_TLDS:
|
||||
priority_list.append(tld_lower)
|
||||
elif any(pattern in tld_lower for pattern in self.DEPRIORITIZED_PATTERNS):
|
||||
deprioritized_list.append(tld_lower)
|
||||
else:
|
||||
normal_list.append(tld_lower)
|
||||
|
||||
# Sort each category alphabetically for consistency
|
||||
priority_list.sort()
|
||||
normal_list.sort()
|
||||
deprioritized_list.sort()
|
||||
|
||||
self._prioritized_tlds = (priority_list, normal_list, deprioritized_list)
|
||||
|
||||
logger.info(f"📊 TLD prioritization complete: "
|
||||
f"{len(priority_list)} priority, "
|
||||
f"{len(normal_list)} normal, "
|
||||
f"{len(deprioritized_list)} deprioritized")
|
||||
|
||||
return self._prioritized_tlds
|
||||
|
||||
def _load_tlds(self) -> Set[str]:
|
||||
"""Load TLDs from cache or fetch from IANA."""
|
||||
if self._is_cache_valid():
|
||||
logger.debug("📂 Loading TLDs from cache")
|
||||
return self._load_from_cache()
|
||||
else:
|
||||
logger.info("🌐 Fetching fresh TLD list from IANA")
|
||||
return self._fetch_and_cache()
|
||||
|
||||
def _is_cache_valid(self) -> bool:
|
||||
"""Check if cache file exists and is recent."""
|
||||
if not os.path.exists(self.CACHE_FILE):
|
||||
logger.debug("❌ TLD cache file does not exist")
|
||||
return False
|
||||
|
||||
cache_age = time.time() - os.path.getmtime(self.CACHE_FILE)
|
||||
is_valid = cache_age < self.CACHE_DURATION
|
||||
|
||||
if is_valid:
|
||||
logger.debug(f"✅ TLD cache is valid (age: {cache_age/3600:.1f} hours)")
|
||||
else:
|
||||
logger.debug(f"❌ TLD cache is expired (age: {cache_age/3600:.1f} hours)")
|
||||
|
||||
return is_valid
|
||||
|
||||
def _load_from_cache(self) -> Set[str]:
|
||||
"""Load TLDs from cache file."""
|
||||
try:
|
||||
with open(self.CACHE_FILE, 'r', encoding='utf-8') as f:
|
||||
tlds = set()
|
||||
for line in f:
|
||||
line = line.strip().lower()
|
||||
if line and not line.startswith('#'):
|
||||
tlds.add(line)
|
||||
|
||||
logger.info(f"📂 Loaded {len(tlds)} TLDs from cache")
|
||||
return tlds
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Error loading TLD cache: {e}")
|
||||
# Fall back to fetching fresh data
|
||||
return self._fetch_and_cache()
|
||||
|
||||
def _fetch_and_cache(self) -> Set[str]:
|
||||
"""Fetch TLDs from IANA and cache them."""
|
||||
try:
|
||||
logger.info(f"📡 Fetching TLD list from: {self.IANA_TLD_URL}")
|
||||
|
||||
response = requests.get(
|
||||
self.IANA_TLD_URL,
|
||||
timeout=30,
|
||||
headers={'User-Agent': 'DNS-Recon-Tool/1.0'}
|
||||
)
|
||||
response.raise_for_status()
|
||||
|
||||
tlds = set()
|
||||
lines_processed = 0
|
||||
|
||||
for line in response.text.split('\n'):
|
||||
line = line.strip().lower()
|
||||
if line and not line.startswith('#'):
|
||||
tlds.add(line)
|
||||
lines_processed += 1
|
||||
|
||||
logger.info(f"✅ Fetched {len(tlds)} TLDs from IANA (processed {lines_processed} lines)")
|
||||
|
||||
# Cache the results
|
||||
try:
|
||||
with open(self.CACHE_FILE, 'w', encoding='utf-8') as f:
|
||||
f.write(response.text)
|
||||
logger.info(f"💾 TLD list cached to {self.CACHE_FILE}")
|
||||
except Exception as cache_error:
|
||||
logger.warning(f"⚠️ Could not cache TLD list: {cache_error}")
|
||||
|
||||
return tlds
|
||||
|
||||
except requests.exceptions.Timeout:
|
||||
logger.error("⏱️ Timeout fetching TLD list from IANA")
|
||||
return self._get_fallback_tlds()
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"🌐 Network error fetching TLD list: {e}")
|
||||
return self._get_fallback_tlds()
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Unexpected error fetching TLD list: {e}")
|
||||
return self._get_fallback_tlds()
|
||||
|
||||
def _get_fallback_tlds(self) -> Set[str]:
|
||||
"""Return a minimal set of short TLDs if fetch fails."""
|
||||
logger.warning("⚠️ Using fallback TLD list")
|
||||
|
||||
# Use only short, well-established TLDs as fallback
|
||||
fallback_tlds = {
|
||||
# 2-character TLDs (country codes - most established)
|
||||
'ad', 'ae', 'af', 'ag', 'ai', 'al', 'am', 'ao', 'aq', 'ar', 'as', 'at',
|
||||
'au', 'aw', 'ax', 'az', 'ba', 'bb', 'bd', 'be', 'bf', 'bg', 'bh', 'bi',
|
||||
'bj', 'bl', 'bm', 'bn', 'bo', 'bq', 'br', 'bs', 'bt', 'bv', 'bw', 'by',
|
||||
'bz', 'ca', 'cc', 'cd', 'cf', 'cg', 'ch', 'ci', 'ck', 'cl', 'cm', 'cn',
|
||||
'co', 'cr', 'cu', 'cv', 'cw', 'cx', 'cy', 'cz', 'de', 'dj', 'dk', 'dm',
|
||||
'do', 'dz', 'ec', 'ee', 'eg', 'eh', 'er', 'es', 'et', 'eu', 'fi', 'fj',
|
||||
'fk', 'fm', 'fo', 'fr', 'ga', 'gb', 'gd', 'ge', 'gf', 'gg', 'gh', 'gi',
|
||||
'gl', 'gm', 'gn', 'gp', 'gq', 'gr', 'gs', 'gt', 'gu', 'gw', 'gy', 'hk',
|
||||
'hm', 'hn', 'hr', 'ht', 'hu', 'id', 'ie', 'il', 'im', 'in', 'io', 'iq',
|
||||
'ir', 'is', 'it', 'je', 'jm', 'jo', 'jp', 'ke', 'kg', 'kh', 'ki', 'km',
|
||||
'kn', 'kp', 'kr', 'kw', 'ky', 'kz', 'la', 'lb', 'lc', 'li', 'lk', 'lr',
|
||||
'ls', 'lt', 'lu', 'lv', 'ly', 'ma', 'mc', 'md', 'me', 'mf', 'mg', 'mh',
|
||||
'mk', 'ml', 'mm', 'mn', 'mo', 'mp', 'mq', 'mr', 'ms', 'mt', 'mu', 'mv',
|
||||
'mw', 'mx', 'my', 'mz', 'na', 'nc', 'ne', 'nf', 'ng', 'ni', 'nl', 'no',
|
||||
'np', 'nr', 'nu', 'nz', 'om', 'pa', 'pe', 'pf', 'pg', 'ph', 'pk', 'pl',
|
||||
'pm', 'pn', 'pr', 'ps', 'pt', 'pw', 'py', 'qa', 're', 'ro', 'rs', 'ru',
|
||||
'rw', 'sa', 'sb', 'sc', 'sd', 'se', 'sg', 'sh', 'si', 'sj', 'sk', 'sl',
|
||||
'sm', 'sn', 'so', 'sr', 'ss', 'st', 'sv', 'sx', 'sy', 'sz', 'tc', 'td',
|
||||
'tf', 'tg', 'th', 'tj', 'tk', 'tl', 'tm', 'tn', 'to', 'tr', 'tt', 'tv',
|
||||
'tw', 'tz', 'ua', 'ug', 'uk', 'um', 'us', 'uy', 'uz', 'va', 'vc', 've',
|
||||
'vg', 'vi', 'vn', 'vu', 'wf', 'ws', 'ye', 'yt', 'za', 'zm', 'zw',
|
||||
|
||||
# 3-character TLDs (generic - most common)
|
||||
'com', 'org', 'net', 'edu', 'gov', 'mil', 'int'
|
||||
}
|
||||
|
||||
logger.info(f"📋 Using {len(fallback_tlds)} fallback TLDs (≤3 characters)")
|
||||
return fallback_tlds
|
||||
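Typical use of the fetcher, relying on the 24-hour cache described above (the counts will vary with the current IANA list):

```python
from src.tld_fetcher import TLDFetcher

fetcher = TLDFetcher()
priority, normal, deprioritized = fetcher.get_prioritized_tlds()
print(f"{len(priority)} priority, {len(normal)} normal, {len(deprioritized)} deprioritized")

# A second call within 24 hours reads tlds_cache.txt instead of contacting IANA.
all_tlds = fetcher.get_tlds()
print("com" in all_tlds)  # True
```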
@@ -1,214 +0,0 @@
|
||||
# File: src/virustotal_client.py
|
||||
"""VirusTotal API integration."""
|
||||
|
||||
import requests
|
||||
import time
|
||||
import logging
|
||||
from datetime import datetime
|
||||
from typing import Optional
|
||||
from .data_structures import VirusTotalResult
|
||||
from .config import Config
|
||||
|
||||
# Module logger
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class VirusTotalClient:
|
||||
"""VirusTotal API client."""
|
||||
|
||||
BASE_URL = "https://www.virustotal.com/vtapi/v2"
|
||||
|
||||
def __init__(self, api_key: str, config: Config):
|
||||
self.api_key = api_key
|
||||
self.config = config
|
||||
self.last_request = 0
|
||||
|
||||
logger.info(f"🛡️ VirusTotal client initialized with API key ending in: ...{api_key[-4:] if len(api_key) > 4 else api_key}")
|
||||
|
||||
def _rate_limit(self):
|
||||
"""Apply rate limiting for VirusTotal."""
|
||||
now = time.time()
|
||||
time_since_last = now - self.last_request
|
||||
min_interval = 1.0 / self.config.VIRUSTOTAL_RATE_LIMIT
|
||||
|
||||
if time_since_last < min_interval:
|
||||
sleep_time = min_interval - time_since_last
|
||||
logger.debug(f"⏸️ VirusTotal rate limiting: sleeping for {sleep_time:.2f}s")
|
||||
time.sleep(sleep_time)
|
||||
|
||||
self.last_request = time.time()
|
||||
|
||||
def lookup_ip(self, ip: str) -> Optional[VirusTotalResult]:
|
||||
"""Lookup IP address reputation."""
|
||||
self._rate_limit()
|
||||
|
||||
logger.debug(f"🔍 Querying VirusTotal for IP: {ip}")
|
||||
|
||||
try:
|
||||
url = f"{self.BASE_URL}/ip-address/report"
|
||||
params = {
|
||||
'apikey': self.api_key,
|
||||
'ip': ip
|
||||
}
|
||||
|
||||
response = requests.get(
|
||||
url,
|
||||
params=params,
|
||||
timeout=self.config.HTTP_TIMEOUT,
|
||||
headers={'User-Agent': 'DNS-Recon-Tool/1.0'}
|
||||
)
|
||||
|
||||
logger.debug(f"📡 VirusTotal API response for IP {ip}: {response.status_code}")
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
|
||||
logger.debug(f"VirusTotal IP response data keys: {data.keys()}")
|
||||
|
||||
if data.get('response_code') == 1:
|
||||
# Count detected URLs
|
||||
detected_urls = data.get('detected_urls', [])
|
||||
positives = sum(1 for url in detected_urls if url.get('positives', 0) > 0)
|
||||
total = len(detected_urls)
|
||||
|
||||
# Parse scan date
|
||||
scan_date = datetime.now()
|
||||
if data.get('scan_date'):
|
||||
try:
|
||||
scan_date = datetime.fromisoformat(data['scan_date'].replace('Z', '+00:00'))
|
||||
except ValueError:
|
||||
try:
|
||||
scan_date = datetime.strptime(data['scan_date'], '%Y-%m-%d %H:%M:%S')
|
||||
except ValueError:
|
||||
logger.debug(f"Could not parse scan_date: {data.get('scan_date')}")
|
||||
|
||||
result = VirusTotalResult(
|
||||
resource=ip,
|
||||
positives=positives,
|
||||
total=total,
|
||||
scan_date=scan_date,
|
||||
permalink=data.get('permalink', f'https://www.virustotal.com/gui/ip-address/{ip}')
|
||||
)
|
||||
|
||||
logger.info(f"✅ VirusTotal result for IP {ip}: {result.positives}/{result.total} detections")
|
||||
return result
|
||||
elif data.get('response_code') == 0:
|
||||
logger.debug(f"ℹ️ IP {ip} not found in VirusTotal database")
|
||||
return None
|
||||
else:
|
||||
logger.debug(f"VirusTotal returned response_code: {data.get('response_code')}")
|
||||
return None
|
||||
elif response.status_code == 204:
|
||||
logger.warning("⚠️ VirusTotal API rate limit exceeded")
|
||||
return None
|
||||
elif response.status_code == 403:
|
||||
logger.error("❌ VirusTotal API key is invalid or lacks permissions")
|
||||
return None
|
||||
else:
|
||||
logger.warning(f"⚠️ VirusTotal API error for IP {ip}: HTTP {response.status_code}")
|
||||
try:
|
||||
error_data = response.json()
|
||||
logger.debug(f"VirusTotal error details: {error_data}")
|
||||
except:
|
||||
pass
|
||||
return None
|
||||
|
||||
except requests.exceptions.Timeout:
|
||||
logger.warning(f"⏱️ VirusTotal query timeout for IP {ip}")
|
||||
return None
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"🌐 VirusTotal network error for IP {ip}: {e}")
|
||||
return None
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Unexpected error querying VirusTotal for IP {ip}: {e}")
|
||||
return None
|
||||
|
||||
def lookup_domain(self, domain: str) -> Optional[VirusTotalResult]:
|
||||
"""Lookup domain reputation."""
|
||||
self._rate_limit()
|
||||
|
||||
logger.debug(f"🔍 Querying VirusTotal for domain: {domain}")
|
||||
|
||||
try:
|
||||
url = f"{self.BASE_URL}/domain/report"
|
||||
params = {
|
||||
'apikey': self.api_key,
|
||||
'domain': domain
|
||||
}
|
||||
|
||||
response = requests.get(
|
||||
url,
|
||||
params=params,
|
||||
timeout=self.config.HTTP_TIMEOUT,
|
||||
headers={'User-Agent': 'DNS-Recon-Tool/1.0'}
|
||||
)
|
||||
|
||||
logger.debug(f"📡 VirusTotal API response for domain {domain}: {response.status_code}")
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
|
||||
logger.debug(f"VirusTotal domain response data keys: {data.keys()}")
|
||||
|
||||
if data.get('response_code') == 1:
|
||||
# Count detected URLs
|
||||
detected_urls = data.get('detected_urls', [])
|
||||
positives = sum(1 for url in detected_urls if url.get('positives', 0) > 0)
|
||||
total = len(detected_urls)
|
||||
|
||||
# Also check for malicious/suspicious categories
|
||||
categories = data.get('categories', [])
|
||||
if any(cat in ['malicious', 'suspicious', 'phishing', 'malware']
|
||||
for cat in categories):
|
||||
positives += 1
|
||||
|
||||
# Parse scan date
|
||||
scan_date = datetime.now()
|
||||
if data.get('scan_date'):
|
||||
try:
|
||||
scan_date = datetime.fromisoformat(data['scan_date'].replace('Z', '+00:00'))
|
||||
except ValueError:
|
||||
try:
|
||||
scan_date = datetime.strptime(data['scan_date'], '%Y-%m-%d %H:%M:%S')
|
||||
except ValueError:
|
||||
logger.debug(f"Could not parse scan_date: {data.get('scan_date')}")
|
||||
|
||||
result = VirusTotalResult(
|
||||
resource=domain,
|
||||
positives=positives,
|
||||
total=max(total, 1), # Ensure total is at least 1
|
||||
scan_date=scan_date,
|
||||
permalink=data.get('permalink', f'https://www.virustotal.com/gui/domain/{domain}')
|
||||
)
|
||||
|
||||
logger.info(f"✅ VirusTotal result for domain {domain}: {result.positives}/{result.total} detections")
|
||||
return result
|
||||
elif data.get('response_code') == 0:
|
||||
logger.debug(f"ℹ️ Domain {domain} not found in VirusTotal database")
|
||||
return None
|
||||
else:
|
||||
logger.debug(f"VirusTotal returned response_code: {data.get('response_code')}")
|
||||
return None
|
||||
elif response.status_code == 204:
|
||||
logger.warning("⚠️ VirusTotal API rate limit exceeded")
|
||||
return None
|
||||
elif response.status_code == 403:
|
||||
logger.error("❌ VirusTotal API key is invalid or lacks permissions")
|
||||
return None
|
||||
else:
|
||||
logger.warning(f"⚠️ VirusTotal API error for domain {domain}: HTTP {response.status_code}")
|
||||
try:
|
||||
error_data = response.json()
|
||||
logger.debug(f"VirusTotal error details: {error_data}")
|
||||
except:
|
||||
pass
|
||||
return None
|
||||
|
||||
except requests.exceptions.Timeout:
|
||||
logger.warning(f"⏱️ VirusTotal query timeout for domain {domain}")
|
||||
return None
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"🌐 VirusTotal network error for domain {domain}: {e}")
|
||||
return None
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Unexpected error querying VirusTotal for domain {domain}: {e}")
|
||||
return None
|
||||
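The nested `scan_date` parsing used in both lookups above (ISO 8601 first, then the legacy `%Y-%m-%d %H:%M:%S` form) could be factored into one helper; a sketch assuming only those two formats:

```python
from datetime import datetime
from typing import Optional

def parse_scan_date(raw: Optional[str]) -> datetime:
    """Best-effort parse of VirusTotal's scan_date; fall back to 'now'."""
    if raw:
        try:
            return datetime.fromisoformat(raw.replace("Z", "+00:00"))
        except ValueError:
            try:
                return datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
            except ValueError:
                pass
    return datetime.now()

print(parse_scan_date("2024-05-01 12:30:00"))
```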
231 src/web_app.py
@@ -1,231 +0,0 @@
|
||||
# File: src/web_app.py
|
||||
"""Flask web application for reconnaissance tool."""
|
||||
|
||||
from flask import Flask, render_template, request, jsonify, send_from_directory
|
||||
import threading
|
||||
import time
|
||||
import logging
|
||||
from .config import Config
|
||||
from .reconnaissance import ReconnaissanceEngine
|
||||
from .report_generator import ReportGenerator
|
||||
from .data_structures import ReconData
|
||||
|
||||
# Set up logging for this module
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Global variables for tracking ongoing scans
|
||||
active_scans = {}
|
||||
scan_lock = threading.Lock()
|
||||
|
||||
def create_app(config: Config):
|
||||
"""Create Flask application."""
|
||||
app = Flask(__name__,
|
||||
template_folder='../templates',
|
||||
static_folder='../static')
|
||||
|
||||
app.config['SECRET_KEY'] = 'recon-tool-secret-key'
|
||||
|
||||
# Set up logging for web app
|
||||
config.setup_logging(cli_mode=False)
|
||||
logger.info("🌐 Web application initialized")
|
||||
|
||||
@app.route('/')
|
||||
def index():
|
||||
"""Main page."""
|
||||
return render_template('index.html')
|
||||
|
||||
@app.route('/api/scan', methods=['POST'])
|
||||
def start_scan():
|
||||
"""Start a new reconnaissance scan."""
|
||||
try:
|
||||
data = request.get_json()
|
||||
target = data.get('target')
|
||||
scan_config = Config.from_args(
|
||||
shodan_key=data.get('shodan_key'),
|
||||
virustotal_key=data.get('virustotal_key'),
|
||||
max_depth=data.get('max_depth', 2)
|
||||
)
|
||||
|
||||
if not target:
|
||||
logger.warning("⚠️ Scan request missing target")
|
||||
return jsonify({'error': 'Target is required'}), 400
|
||||
|
||||
# Generate scan ID
|
||||
scan_id = f"{target}_{int(time.time())}"
|
||||
logger.info(f"🚀 Starting new scan: {scan_id} for target: {target}")
|
||||
|
||||
# Create shared ReconData object for live updates
|
||||
shared_data = ReconData()
|
||||
|
||||
# Initialize scan data with the shared data object
|
||||
with scan_lock:
|
||||
active_scans[scan_id] = {
|
||||
'status': 'starting',
|
||||
'progress': 0,
|
||||
'message': 'Initializing...',
|
||||
'data': shared_data, # Share the data object from the start!
|
||||
'error': None,
|
||||
'live_stats': {
|
||||
'hostnames': 0,
|
||||
'ip_addresses': 0,
|
||||
'dns_records': 0,
|
||||
'certificates': 0,
|
||||
'shodan_results': 0,
|
||||
'virustotal_results': 0
|
||||
},
|
||||
'latest_discoveries': []
|
||||
}
|
||||
|
||||
# Start reconnaissance in background thread
|
||||
thread = threading.Thread(
|
||||
target=run_reconnaissance_background,
|
||||
args=(scan_id, target, scan_config, shared_data)
|
||||
)
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
|
||||
return jsonify({'scan_id': scan_id})
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Error starting scan: {e}", exc_info=True)
|
||||
return jsonify({'error': str(e)}), 500
|
||||
|
||||
@app.route('/api/scan/<scan_id>/status')
|
||||
def get_scan_status(scan_id):
|
||||
"""Get scan status and progress with live discoveries."""
|
||||
with scan_lock:
|
||||
if scan_id not in active_scans:
|
||||
return jsonify({'error': 'Scan not found'}), 404
|
||||
|
||||
scan_data = active_scans[scan_id].copy()
|
||||
|
||||
# Don't include the full data object in status (too large)
|
||||
if 'data' in scan_data:
|
||||
del scan_data['data']
|
||||
|
||||
return jsonify(scan_data)
|
||||
|
||||
@app.route('/api/scan/<scan_id>/report')
|
||||
def get_scan_report(scan_id):
|
||||
"""Get scan report."""
|
||||
with scan_lock:
|
||||
if scan_id not in active_scans:
|
||||
return jsonify({'error': 'Scan not found'}), 404
|
||||
|
||||
scan_data = active_scans[scan_id]
|
||||
|
||||
if scan_data['status'] != 'completed' or not scan_data['data']:
|
||||
return jsonify({'error': 'Scan not completed'}), 400
|
||||
|
||||
try:
|
||||
# Generate report
|
||||
report_gen = ReportGenerator(scan_data['data'])
|
||||
|
||||
return jsonify({
|
||||
'json_report': scan_data['data'].to_json(),
|
||||
'text_report': report_gen.generate_text_report()
|
||||
})
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Error generating report for {scan_id}: {e}", exc_info=True)
|
||||
return jsonify({'error': f'Failed to generate report: {str(e)}'}), 500
|
||||
|
||||
@app.route('/api/scan/<scan_id>/live-data')
|
||||
def get_live_scan_data(scan_id):
|
||||
"""Get current reconnaissance data (for real-time updates)."""
|
||||
with scan_lock:
|
||||
if scan_id not in active_scans:
|
||||
return jsonify({'error': 'Scan not found'}), 404
|
||||
|
||||
scan_data = active_scans[scan_id]
|
||||
|
||||
# Now we always have a data object, even if it's empty initially
|
||||
data_obj = scan_data['data']
|
||||
|
||||
if not data_obj:
|
||||
return jsonify({
|
||||
'hostnames': [],
|
||||
'ip_addresses': [],
|
||||
'stats': scan_data['live_stats'],
|
||||
'latest_discoveries': []
|
||||
})
|
||||
|
||||
# Return current discoveries from the shared data object
|
||||
return jsonify({
|
||||
'hostnames': sorted(list(data_obj.hostnames)),
|
||||
'ip_addresses': sorted(list(data_obj.ip_addresses)),
|
||||
'stats': data_obj.get_stats(),
|
||||
'latest_discoveries': scan_data.get('latest_discoveries', [])
|
||||
})
|
||||
|
||||
return app
|
||||
|
||||
def run_reconnaissance_background(scan_id: str, target: str, config: Config, shared_data: ReconData):
|
||||
"""Run reconnaissance in background thread with shared data object."""
|
||||
|
||||
def update_progress(message: str, percentage: int = None):
|
||||
"""Update scan progress and live statistics."""
|
||||
with scan_lock:
|
||||
if scan_id in active_scans:
|
||||
active_scans[scan_id]['message'] = message
|
||||
if percentage is not None:
|
||||
active_scans[scan_id]['progress'] = percentage
|
||||
|
||||
# Update live stats from the shared data object
|
||||
if shared_data:
|
||||
active_scans[scan_id]['live_stats'] = shared_data.get_stats()
|
||||
|
||||
# Add to latest discoveries (keep last 10)
|
||||
if 'latest_discoveries' not in active_scans[scan_id]:
|
||||
active_scans[scan_id]['latest_discoveries'] = []
|
||||
|
||||
active_scans[scan_id]['latest_discoveries'].append({
|
||||
'timestamp': time.time(),
|
||||
'message': message
|
||||
})
|
||||
|
||||
# Keep only last 10 discoveries
|
||||
active_scans[scan_id]['latest_discoveries'] = \
|
||||
active_scans[scan_id]['latest_discoveries'][-10:]
|
||||
|
||||
logger.info(f"[{scan_id}] {message} ({percentage}%)" if percentage else f"[{scan_id}] {message}")
|
||||
|
||||
try:
|
||||
logger.info(f"🔧 Initializing reconnaissance engine for scan: {scan_id}")
|
||||
|
||||
# Initialize engine
|
||||
engine = ReconnaissanceEngine(config)
|
||||
engine.set_progress_callback(update_progress)
|
||||
|
||||
# IMPORTANT: Pass the shared data object to the engine
|
||||
engine.set_shared_data(shared_data)
|
||||
|
||||
# Update status
|
||||
with scan_lock:
|
||||
active_scans[scan_id]['status'] = 'running'
|
||||
|
||||
logger.info(f"🚀 Starting reconnaissance for: {target}")
|
||||
|
||||
# Run reconnaissance - this will populate the shared_data object incrementally
|
||||
final_data = engine.run_reconnaissance(target)
|
||||
|
||||
logger.info(f"✅ Reconnaissance completed for scan: {scan_id}")
|
||||
|
||||
# Update with final results (the shared_data should already be populated)
|
||||
with scan_lock:
|
||||
active_scans[scan_id]['status'] = 'completed'
|
||||
active_scans[scan_id]['progress'] = 100
|
||||
active_scans[scan_id]['message'] = 'Reconnaissance completed'
|
||||
active_scans[scan_id]['data'] = final_data # This should be the same as shared_data
|
||||
active_scans[scan_id]['live_stats'] = final_data.get_stats()
|
||||
|
||||
# Log final statistics
|
||||
final_stats = final_data.get_stats()
|
||||
logger.info(f"📊 Final stats for {scan_id}: {final_stats}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Error in reconnaissance for {scan_id}: {e}", exc_info=True)
|
||||
# Handle errors
|
||||
with scan_lock:
|
||||
active_scans[scan_id]['status'] = 'error'
|
||||
active_scans[scan_id]['error'] = str(e)
|
||||
active_scans[scan_id]['message'] = f'Error: {str(e)}'
|
||||
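Seen from a client, the scan lifecycle exposed above is: POST a target to `/api/scan`, then poll `/api/scan/<id>/status` (and optionally `/api/scan/<id>/live-data`) until completion. A hedged sketch against a local instance; the host and port are assumptions matching the CLI default:

```python
import time
import requests

BASE = "http://127.0.0.1:5000"

scan_id = requests.post(f"{BASE}/api/scan",
                        json={"target": "example.com", "max_depth": 1}).json()["scan_id"]

while True:
    status = requests.get(f"{BASE}/api/scan/{scan_id}/status").json()
    print(f"{status['progress']:3d}% {status['message']}")
    if status["status"] in ("completed", "error"):
        break
    time.sleep(2)

if status["status"] == "completed":
    report = requests.get(f"{BASE}/api/scan/{scan_id}/report").json()
    print(report["text_report"][:200])
```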
1046 static/css/main.css (new file)
File diff suppressed because it is too large
929 static/js/graph.js (new file)
@@ -0,0 +1,929 @@
|
||||
/**
|
||||
* Graph visualization module for DNSRecon
|
||||
* Handles network graph rendering using vis.js
|
||||
*/
|
||||
|
||||
class GraphManager {
|
||||
constructor(containerId) {
|
||||
this.container = document.getElementById(containerId);
|
||||
this.network = null;
|
||||
this.nodes = new vis.DataSet();
|
||||
this.edges = new vis.DataSet();
|
||||
this.isInitialized = false;
|
||||
this.currentLayout = 'physics';
|
||||
this.nodeInfoPopup = null;
|
||||
|
||||
this.options = {
|
||||
nodes: {
|
||||
shape: 'dot',
|
||||
size: 15,
|
||||
font: {
|
||||
size: 12,
|
||||
color: '#c7c7c7',
|
||||
face: 'Roboto Mono, monospace',
|
||||
background: 'rgba(26, 26, 26, 0.9)',
|
||||
strokeWidth: 2,
|
||||
strokeColor: '#000000'
|
||||
},
|
||||
borderWidth: 2,
|
||||
borderColor: '#444',
|
||||
scaling: {
|
||||
min: 10,
|
||||
max: 30,
|
||||
label: {
|
||||
enabled: true,
|
||||
min: 8,
|
||||
max: 16
|
||||
}
|
||||
},
|
||||
chosen: {
|
||||
node: (values, id, selected, hovering) => {
|
||||
values.borderColor = '#00ff41';
|
||||
values.borderWidth = 3;
|
||||
}
|
||||
}
|
||||
},
|
||||
edges: {
|
||||
width: 2,
|
||||
color: {
|
||||
color: '#555',
|
||||
highlight: '#00ff41',
|
||||
hover: '#ff9900',
|
||||
inherit: false
|
||||
},
|
||||
font: {
|
||||
size: 10,
|
||||
color: '#999',
|
||||
face: 'Roboto Mono, monospace',
|
||||
background: 'rgba(26, 26, 26, 0.8)',
|
||||
strokeWidth: 1,
|
||||
strokeColor: '#000000'
|
||||
},
|
||||
arrows: {
|
||||
to: {
|
||||
enabled: true,
|
||||
scaleFactor: 1,
|
||||
type: 'arrow'
|
||||
}
|
||||
},
|
||||
smooth: {
|
||||
enabled: true,
|
||||
type: 'dynamic',
|
||||
roundness: 0.6
|
||||
},
|
||||
chosen: {
|
||||
edge: (values, id, selected, hovering) => {
|
||||
values.color = '#00ff41';
|
||||
values.width = 4;
|
||||
}
|
||||
}
|
||||
},
|
||||
physics: {
|
||||
enabled: true,
|
||||
stabilization: {
|
||||
enabled: true,
|
||||
iterations: 150,
|
||||
updateInterval: 50
|
||||
},
|
||||
barnesHut: {
|
||||
gravitationalConstant: -3000,
|
||||
centralGravity: 0.4,
|
||||
springLength: 120,
|
||||
springConstant: 0.05,
|
||||
damping: 0.1,
|
||||
avoidOverlap: 0.2
|
||||
},
|
||||
maxVelocity: 30,
|
||||
minVelocity: 0.1,
|
||||
solver: 'barnesHut',
|
||||
timestep: 0.4,
|
||||
adaptiveTimestep: true
|
||||
},
|
||||
interaction: {
|
||||
hover: true,
|
||||
hoverConnectedEdges: true,
|
||||
selectConnectedEdges: true,
|
||||
tooltipDelay: 300,
|
||||
hideEdgesOnDrag: false,
|
||||
hideNodesOnDrag: false,
|
||||
zoomView: true,
|
||||
dragView: true,
|
||||
multiselect: true
|
||||
},
|
||||
layout: {
|
||||
improvedLayout: true,
|
||||
randomSeed: 2
|
||||
}
|
||||
};
|
||||
|
||||
this.createNodeInfoPopup();
|
||||
}
|
||||
|
||||
/**
|
||||
* Create floating node info popup
|
||||
*/
|
||||
createNodeInfoPopup() {
|
||||
this.nodeInfoPopup = document.createElement('div');
|
||||
this.nodeInfoPopup.className = 'node-info-popup';
|
||||
this.nodeInfoPopup.style.display = 'none';
|
||||
document.body.appendChild(this.nodeInfoPopup);
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize the network graph
|
||||
*/
|
||||
initialize() {
|
||||
if (this.isInitialized) {
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
const data = {
|
||||
nodes: this.nodes,
|
||||
edges: this.edges
|
||||
};
|
||||
|
||||
this.network = new vis.Network(this.container, data, this.options);
|
||||
this.setupNetworkEvents();
|
||||
this.isInitialized = true;
|
||||
|
||||
// Hide placeholder
|
||||
const placeholder = this.container.querySelector('.graph-placeholder');
|
||||
if (placeholder) {
|
||||
placeholder.style.display = 'none';
|
||||
}
|
||||
|
||||
// Add graph controls
|
||||
this.addGraphControls();
|
||||
|
||||
console.log('Graph initialized successfully');
|
||||
} catch (error) {
|
||||
console.error('Failed to initialize graph:', error);
|
||||
this.showError('Failed to initialize visualization');
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Add interactive graph controls
|
||||
*/
|
||||
addGraphControls() {
|
||||
const controlsContainer = document.createElement('div');
|
||||
controlsContainer.className = 'graph-controls';
|
||||
controlsContainer.innerHTML = `
|
||||
<button class="graph-control-btn" id="graph-fit" title="Fit to Screen">[FIT]</button>
|
||||
<button class="graph-control-btn" id="graph-physics" title="Toggle Physics">[PHYSICS]</button>
|
||||
<button class="graph-control-btn" id="graph-cluster" title="Cluster Nodes">[CLUSTER]</button>
|
||||
`;
|
||||
|
||||
this.container.appendChild(controlsContainer);
|
||||
|
||||
// Add control event listeners
|
||||
document.getElementById('graph-fit').addEventListener('click', () => this.fitView());
|
||||
document.getElementById('graph-physics').addEventListener('click', () => this.togglePhysics());
|
||||
document.getElementById('graph-cluster').addEventListener('click', () => this.toggleClustering());
|
||||
}
|
||||
|
||||
/**
|
||||
* Setup network event handlers
|
||||
*/
|
||||
setupNetworkEvents() {
|
||||
if (!this.network) return;
|
||||
|
||||
// Node click event with details
|
||||
this.network.on('click', (params) => {
|
||||
if (params.nodes.length > 0) {
|
||||
const nodeId = params.nodes[0];
|
||||
if (this.network.isCluster(nodeId)) {
|
||||
this.network.openCluster(nodeId);
|
||||
} else {
|
||||
const node = this.nodes.get(nodeId);
|
||||
if (node) {
|
||||
this.showNodeDetails(node);
|
||||
this.highlightNodeConnections(nodeId);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
this.clearHighlights();
|
||||
}
|
||||
});
|
||||
|
||||
// Hover events
|
||||
this.network.on('hoverNode', (params) => {
|
||||
const nodeId = params.node;
|
||||
const node = this.nodes.get(nodeId);
|
||||
if (node) {
|
||||
this.highlightConnectedNodes(nodeId, true);
|
||||
}
|
||||
});
|
||||
|
||||
// FIX: Comment out the problematic context menu handler
|
||||
this.network.on('oncontext', (params) => {
|
||||
params.event.preventDefault();
|
||||
// if (params.nodes.length > 0) {
|
||||
// this.showNodeContextMenu(params.pointer.DOM, params.nodes[0]);
|
||||
// }
|
||||
});
|
||||
|
||||
// Stabilization events with progress
|
||||
this.network.on('stabilizationProgress', (params) => {
|
||||
const progress = params.iterations / params.total;
|
||||
this.updateStabilizationProgress(progress);
|
||||
});
|
||||
|
||||
this.network.on('stabilizationIterationsDone', () => {
|
||||
this.onStabilizationComplete();
|
||||
});
|
||||
|
||||
// Selection events
|
||||
this.network.on('select', (params) => {
|
||||
console.log('Selected nodes:', params.nodes);
|
||||
console.log('Selected edges:', params.edges);
|
||||
});
|
||||
}
|
||||
|
||||
/**
 * Update the graph with new data from the backend
 * @param {Object} graphData - Graph data from backend
 */
|
||||
updateGraph(graphData) {
|
||||
if (!graphData || !graphData.nodes || !graphData.edges) {
|
||||
console.warn('Invalid graph data received');
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
// Initialize if not already done
|
||||
if (!this.isInitialized) {
|
||||
this.initialize();
|
||||
}
|
||||
|
||||
const largeEntityMap = new Map();
|
||||
graphData.nodes.forEach(node => {
|
||||
if (node.type === 'large_entity' && node.attributes && Array.isArray(node.attributes.nodes)) {
|
||||
node.attributes.nodes.forEach(nodeId => {
|
||||
largeEntityMap.set(nodeId, node.id);
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
const processedNodes = graphData.nodes.map(node => {
|
||||
const processed = this.processNode(node);
|
||||
if (largeEntityMap.has(node.id)) {
|
||||
processed.hidden = true;
|
||||
}
|
||||
return processed;
|
||||
});
|
||||
|
||||
const mergedEdges = {};
|
||||
graphData.edges.forEach(edge => {
|
||||
const fromNode = largeEntityMap.has(edge.from) ? largeEntityMap.get(edge.from) : edge.from;
|
||||
const toNode = largeEntityMap.has(edge.to) ? largeEntityMap.get(edge.to) : edge.to;
|
||||
const mergeKey = `${fromNode}-${toNode}-${edge.label}`;
|
||||
|
||||
if (!mergedEdges[mergeKey]) {
|
||||
mergedEdges[mergeKey] = {
|
||||
...edge,
|
||||
from: fromNode,
|
||||
to: toNode,
|
||||
count: 0,
|
||||
confidence_score: 0
|
||||
};
|
||||
}
|
||||
|
||||
mergedEdges[mergeKey].count++;
|
||||
if (edge.confidence_score > mergedEdges[mergeKey].confidence_score) {
|
||||
mergedEdges[mergeKey].confidence_score = edge.confidence_score;
|
||||
}
|
||||
});
|
||||
|
||||
const processedEdges = Object.values(mergedEdges).map(edge => {
|
||||
const processed = this.processEdge(edge);
|
||||
if (edge.count > 1) {
|
||||
processed.label = `${edge.label} (${edge.count})`;
|
||||
}
|
||||
return processed;
|
||||
});
|
||||
|
||||
// Update datasets with animation
|
||||
const existingNodeIds = this.nodes.getIds();
|
||||
const existingEdgeIds = this.edges.getIds();
|
||||
|
||||
// Add new nodes with fade-in animation
|
||||
const newNodes = processedNodes.filter(node => !existingNodeIds.includes(node.id));
|
||||
const newEdges = processedEdges.filter(edge => !existingEdgeIds.includes(edge.id));
|
||||
|
||||
// Update existing data
|
||||
this.nodes.update(processedNodes);
|
||||
this.edges.update(processedEdges);
|
||||
|
||||
// Highlight new additions briefly
|
||||
if (newNodes.length > 0 || newEdges.length > 0) {
|
||||
setTimeout(() => this.highlightNewElements(newNodes, newEdges), 100);
|
||||
}
|
||||
|
||||
// Auto-fit view for small graphs or first update
|
||||
if (processedNodes.length <= 10 || existingNodeIds.length === 0) {
|
||||
setTimeout(() => this.fitView(), 800);
|
||||
}
|
||||
|
||||
console.log(`Graph updated: ${processedNodes.length} nodes, ${processedEdges.length} edges (${newNodes.length} new nodes, ${newEdges.length} new edges)`);
|
||||
} catch (error) {
|
||||
console.error('Failed to update graph:', error);
|
||||
this.showError('Failed to update visualization');
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Process node data with styling and metadata
|
||||
* @param {Object} node - Raw node data
|
||||
* @returns {Object} Processed node data
|
||||
*/
|
||||
processNode(node) {
|
||||
const processedNode = {
|
||||
id: node.id,
|
||||
label: this.formatNodeLabel(node.id, node.type),
|
||||
color: this.getNodeColor(node.type),
|
||||
size: this.getNodeSize(node.type),
|
||||
borderColor: this.getNodeBorderColor(node.type),
|
||||
shape: this.getNodeShape(node.type),
|
||||
attributes: node.attributes || {},
|
||||
description: node.description || '',
|
||||
metadata: node.metadata || {},
|
||||
type: node.type,
|
||||
incoming_edges: node.incoming_edges || [],
|
||||
outgoing_edges: node.outgoing_edges || []
|
||||
};
|
||||
|
||||
// Add confidence-based styling
|
||||
if (node.confidence) {
|
||||
processedNode.borderWidth = Math.max(2, Math.floor(node.confidence * 5));
|
||||
}
|
||||
|
||||
// Style based on certificate validity
|
||||
if (node.type === 'domain') {
|
||||
if (node.attributes && node.attributes.certificates && node.attributes.certificates.has_valid_cert === false) {
|
||||
processedNode.color = { background: '#888888', border: '#666666' };
|
||||
}
|
||||
}
|
||||
|
||||
// Handle merged correlation objects (similar to large entities)
|
||||
if (node.type === 'correlation_object') {
|
||||
const metadata = node.metadata || {};
|
||||
const values = metadata.values || [];
|
||||
const mergeCount = metadata.merge_count || 1;
|
||||
|
||||
if (mergeCount > 1) {
|
||||
// Display as merged correlation container
|
||||
processedNode.label = `Correlations (${mergeCount})`;
|
||||
processedNode.title = `Merged correlation container with ${mergeCount} values: ${values.slice(0, 3).join(', ')}${values.length > 3 ? '...' : ''}`;
|
||||
processedNode.borderWidth = 3; // Thicker border for merged nodes
|
||||
} else {
|
||||
// Single correlation value
|
||||
const value = Array.isArray(values) && values.length > 0 ? values[0] : (metadata.value || 'Unknown');
|
||||
const displayValue = typeof value === 'string' && value.length > 20 ? value.substring(0, 17) + '...' : value;
|
||||
processedNode.label = `Corr: ${displayValue}`;
|
||||
processedNode.title = `Correlation: ${value}`;
|
||||
}
|
||||
}
|
||||
|
||||
return processedNode;
|
||||
}
|
||||
|
||||
/**
|
||||
* Process edge data with styling and metadata
|
||||
* @param {Object} edge - Raw edge data
|
||||
* @returns {Object} Processed edge data
|
||||
*/
|
||||
processEdge(edge) {
|
||||
const confidence = edge.confidence_score || 0;
|
||||
const processedEdge = {
|
||||
id: `${edge.from}-${edge.to}`,
|
||||
from: edge.from,
|
||||
to: edge.to,
|
||||
label: this.formatEdgeLabel(edge.label, confidence),
|
||||
title: this.createEdgeTooltip(edge),
|
||||
width: this.getEdgeWidth(confidence),
|
||||
color: this.getEdgeColor(confidence),
|
||||
dashes: confidence < 0.6 ? [5, 5] : false,
|
||||
metadata: {
|
||||
relationship_type: edge.label,
|
||||
confidence_score: confidence,
|
||||
source_provider: edge.source_provider,
|
||||
discovery_timestamp: edge.discovery_timestamp
|
||||
}
|
||||
};
|
||||
|
||||
|
||||
|
||||
return processedEdge;
|
||||
}
|
||||
|
||||
/**
|
||||
* Format node label for display
|
||||
* @param {string} nodeId - Node identifier
|
||||
* @param {string} nodeType - Node type
|
||||
* @returns {string} Formatted label
|
||||
*/
|
||||
formatNodeLabel(nodeId, nodeType) {
|
||||
if (typeof nodeId !== 'string') return '';
|
||||
if (nodeId.length > 20) {
|
||||
return nodeId.substring(0, 17) + '...';
|
||||
}
|
||||
return nodeId;
|
||||
}
|
||||
|
||||
/**
|
||||
* Format edge label for display
|
||||
* @param {string} relationshipType - Type of relationship
|
||||
* @param {number} confidence - Confidence score
|
||||
* @returns {string} Formatted label
|
||||
*/
|
||||
formatEdgeLabel(relationshipType, confidence) {
|
||||
if (!relationshipType) return '';
|
||||
|
||||
const confidenceText = confidence >= 0.8 ? '●' : confidence >= 0.6 ? '◐' : '○';
|
||||
return `${relationshipType} ${confidenceText}`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get node color based on type
|
||||
* @param {string} nodeType - Node type
|
||||
* @returns {string} Color value
|
||||
*/
|
||||
getNodeColor(nodeType) {
|
||||
const colors = {
|
||||
'domain': '#00ff41', // Green
|
||||
'ip': '#ff9900', // Amber
|
||||
'asn': '#00aaff', // Blue
|
||||
'large_entity': '#ff6b6b', // Red for large entities
|
||||
'correlation_object': '#9620c0ff'
|
||||
};
|
||||
return colors[nodeType] || '#ffffff';
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Get node border color based on type
|
||||
* @param {string} nodeType - Node type
|
||||
* @returns {string} Border color value
|
||||
*/
|
||||
getNodeBorderColor(nodeType) {
|
||||
const borderColors = {
|
||||
'domain': '#00aa2e',
|
||||
'ip': '#cc7700',
|
||||
'asn': '#0088cc',
|
||||
'correlation_object': '#c235c9ff'
|
||||
};
|
||||
return borderColors[nodeType] || '#666666';
|
||||
}
|
||||
|
||||
/**
|
||||
* Get node size based on type
|
||||
* @param {string} nodeType - Node type
|
||||
* @returns {number} Node size
|
||||
*/
|
||||
getNodeSize(nodeType) {
|
||||
const sizes = {
|
||||
'domain': 12,
|
||||
'ip': 14,
|
||||
'asn': 16,
|
||||
'correlation_object': 8,
|
||||
'large_entity': 5
|
||||
};
|
||||
return sizes[nodeType] || 12;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get node shape based on type
|
||||
* @param {string} nodeType - Node type
|
||||
* @returns {string} Shape name
|
||||
*/
|
||||
getNodeShape(nodeType) {
|
||||
const shapes = {
|
||||
'domain': 'dot',
|
||||
'ip': 'square',
|
||||
'asn': 'triangle',
|
||||
'correlation_object': 'hexagon',
|
||||
'large_entity': 'database'
|
||||
};
|
||||
return shapes[nodeType] || 'dot';
|
||||
}
|
||||
|
||||
/**
|
||||
* Get edge color based on confidence
|
||||
* @param {number} confidence - Confidence score
|
||||
* @returns {string} Edge color
|
||||
*/
|
||||
getEdgeColor(confidence) {
|
||||
if (confidence >= 0.8) {
|
||||
return '#00ff41'; // High confidence - green
|
||||
} else if (confidence >= 0.6) {
|
||||
return '#ff9900'; // Medium confidence - amber
|
||||
} else {
|
||||
return '#666666'; // Low confidence - gray
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get edge width based on confidence
|
||||
* @param {number} confidence - Confidence score
|
||||
* @returns {number} Edge width
|
||||
*/
|
||||
getEdgeWidth(confidence) {
|
||||
if (confidence >= 0.8) {
|
||||
return 3;
|
||||
} else if (confidence >= 0.6) {
|
||||
return 2;
|
||||
} else {
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Create edge tooltip with correct provider information
|
||||
* @param {Object} edge - Edge data
|
||||
* @returns {string} HTML tooltip content
|
||||
*/
|
||||
createEdgeTooltip(edge) {
|
||||
let tooltip = `<div style="font-family: 'Roboto Mono', monospace; font-size: 11px;">`;
|
||||
tooltip += `<div style="color: #00ff41; font-weight: bold; margin-bottom: 4px;">${edge.label || 'Relationship'}</div>`;
|
||||
tooltip += `<div style="color: #999; margin-bottom: 2px;">Confidence: ${(edge.confidence_score * 100).toFixed(1)}%</div>`;
|
||||
|
||||
if (edge.source_provider) {
|
||||
tooltip += `<div style="color: #999; margin-bottom: 2px;">Provider: ${edge.source_provider}</div>`;
|
||||
}
|
||||
|
||||
if (edge.discovery_timestamp) {
|
||||
const date = new Date(edge.discovery_timestamp);
|
||||
tooltip += `<div style="color: #666; font-size: 10px;">Discovered: ${date.toLocaleString()}</div>`;
|
||||
}
|
||||
|
||||
tooltip += `</div>`;
|
||||
return tooltip;
|
||||
}
|
||||
|
||||
/**
|
||||
* Determine if node is important based on connections or metadata
|
||||
* @param {Object} node - Node data
|
||||
* @returns {boolean} True if node is important
|
||||
*/
|
||||
isImportantNode(node) {
|
||||
// Mark nodes as important based on criteria
|
||||
if (node.type === 'domain' && node.id.includes('www.')) return false;
|
||||
if (node.metadata && node.metadata.connection_count > 3) return true;
|
||||
if (node.type === 'asn') return true;
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Show node details in modal
|
||||
* @param {Object} node - Node object
|
||||
*/
|
||||
showNodeDetails(node) {
|
||||
// Trigger custom event for main application to handle
|
||||
const event = new CustomEvent('nodeSelected', {
|
||||
detail: { node }
|
||||
});
|
||||
document.dispatchEvent(event);
|
||||
}
|
||||
|
||||
/**
|
||||
* Hide node info popup
|
||||
*/
|
||||
hideNodeInfoPopup() {
|
||||
if (this.nodeInfoPopup) {
|
||||
this.nodeInfoPopup.style.display = 'none';
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Highlight node connections
|
||||
* @param {string} nodeId - Node to highlight
|
||||
*/
|
||||
highlightNodeConnections(nodeId) {
|
||||
const connectedNodes = this.network.getConnectedNodes(nodeId);
|
||||
const connectedEdges = this.network.getConnectedEdges(nodeId);
|
||||
|
||||
// Update node colors
|
||||
const nodeUpdates = connectedNodes.map(id => ({
|
||||
id: id,
|
||||
borderColor: '#ff9900',
|
||||
borderWidth: 3
|
||||
}));
|
||||
|
||||
nodeUpdates.push({
|
||||
id: nodeId,
|
||||
borderColor: '#00ff41',
|
||||
borderWidth: 4
|
||||
});
|
||||
|
||||
// Update edge colors
|
||||
const edgeUpdates = connectedEdges.map(id => ({
|
||||
id: id,
|
||||
color: { color: '#ff9900' },
|
||||
width: 3
|
||||
}));
|
||||
|
||||
this.nodes.update(nodeUpdates);
|
||||
this.edges.update(edgeUpdates);
|
||||
|
||||
// Store for cleanup
|
||||
this.highlightedElements = {
|
||||
nodes: connectedNodes.concat([nodeId]),
|
||||
edges: connectedEdges
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Highlight connected nodes on hover
|
||||
* @param {string} nodeId - Node ID
|
||||
* @param {boolean} highlight - Whether to highlight or unhighlight
|
||||
*/
|
||||
highlightConnectedNodes(nodeId, highlight) {
|
||||
const connectedNodes = this.network.getConnectedNodes(nodeId);
|
||||
const connectedEdges = this.network.getConnectedEdges(nodeId);
|
||||
|
||||
if (highlight) {
|
||||
// Dim all other elements
|
||||
this.dimUnconnectedElements([nodeId, ...connectedNodes], connectedEdges);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Dim elements not connected to the specified nodes
|
||||
* @param {Array} nodeIds - Node IDs to keep highlighted
|
||||
* @param {Array} edgeIds - Edge IDs to keep highlighted
|
||||
*/
|
||||
dimUnconnectedElements(nodeIds, edgeIds) {
|
||||
const allNodes = this.nodes.get();
|
||||
const allEdges = this.edges.get();
|
||||
|
||||
const nodeUpdates = allNodes.map(node => ({
|
||||
id: node.id,
|
||||
opacity: nodeIds.includes(node.id) ? 1 : 0.3
|
||||
}));
|
||||
|
||||
const edgeUpdates = allEdges.map(edge => ({
|
||||
id: edge.id,
|
||||
opacity: edgeIds.includes(edge.id) ? 1 : 0.1
|
||||
}));
|
||||
|
||||
this.nodes.update(nodeUpdates);
|
||||
this.edges.update(edgeUpdates);
|
||||
}
|
||||
|
||||
/**
|
||||
* Clear all highlights
|
||||
*/
|
||||
clearHighlights() {
|
||||
if (this.highlightedElements) {
|
||||
// Reset highlighted nodes
|
||||
const nodeUpdates = this.highlightedElements.nodes.map(id => {
|
||||
const originalNode = this.nodes.get(id);
|
||||
return {
|
||||
id: id,
|
||||
borderColor: this.getNodeBorderColor(originalNode.type),
|
||||
borderWidth: 2
|
||||
};
|
||||
});
|
||||
|
||||
// Reset highlighted edges
|
||||
const edgeUpdates = this.highlightedElements.edges.map(id => {
|
||||
const originalEdge = this.edges.get(id);
|
||||
return {
|
||||
id: id,
|
||||
color: this.getEdgeColor(originalEdge.metadata ? originalEdge.metadata.confidence_score : 0.5),
|
||||
width: this.getEdgeWidth(originalEdge.metadata ? originalEdge.metadata.confidence_score : 0.5)
|
||||
};
|
||||
});
|
||||
|
||||
this.nodes.update(nodeUpdates);
|
||||
this.edges.update(edgeUpdates);
|
||||
|
||||
this.highlightedElements = null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Clear hover highlights
|
||||
*/
|
||||
clearHoverHighlights() {
|
||||
const allNodes = this.nodes.get();
|
||||
const allEdges = this.edges.get();
|
||||
|
||||
const nodeUpdates = allNodes.map(node => ({ id: node.id, opacity: 1 }));
|
||||
const edgeUpdates = allEdges.map(edge => ({ id: edge.id, opacity: 1 }));
|
||||
|
||||
this.nodes.update(nodeUpdates);
|
||||
this.edges.update(edgeUpdates);
|
||||
}
|
||||
|
||||
/**
|
||||
* Highlight newly added elements
|
||||
* @param {Array} newNodes - New nodes
|
||||
* @param {Array} newEdges - New edges
|
||||
*/
|
||||
highlightNewElements(newNodes, newEdges) {
|
||||
// Briefly highlight new nodes
|
||||
const nodeHighlights = newNodes.map(node => ({
|
||||
id: node.id,
|
||||
borderColor: '#00ff41',
|
||||
borderWidth: 4
|
||||
}));
|
||||
|
||||
// Briefly highlight new edges
|
||||
const edgeHighlights = newEdges.map(edge => ({
|
||||
id: edge.id,
|
||||
color: '#00ff41',
|
||||
width: 4
|
||||
}));
|
||||
|
||||
this.nodes.update(nodeHighlights);
|
||||
this.edges.update(edgeHighlights);
|
||||
|
||||
// Reset after animation
|
||||
setTimeout(() => {
|
||||
const nodeResets = newNodes.map(node => ({
|
||||
id: node.id,
|
||||
borderColor: this.getNodeBorderColor(node.type),
|
||||
borderWidth: 2,
|
||||
}));
|
||||
|
||||
const edgeResets = newEdges.map(edge => ({
|
||||
id: edge.id,
|
||||
color: this.getEdgeColor(edge.metadata ? edge.metadata.confidence_score : 0.5),
|
||||
width: this.getEdgeWidth(edge.metadata ? edge.metadata.confidence_score : 0.5)
|
||||
}));
|
||||
|
||||
this.nodes.update(nodeResets);
|
||||
this.edges.update(edgeResets);
|
||||
}, 2000);
|
||||
}
|
||||
|
||||
/**
|
||||
* Update stabilization progress
|
||||
* @param {number} progress - Progress value (0-1)
|
||||
*/
|
||||
updateStabilizationProgress(progress) {
|
||||
// Could show a progress indicator if needed
|
||||
console.log(`Graph stabilization: ${(progress * 100).toFixed(1)}%`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle stabilization completion
|
||||
*/
|
||||
onStabilizationComplete() {
|
||||
console.log('Graph stabilization complete');
|
||||
}
|
||||
|
||||
/**
|
||||
* Focus view on specific node
|
||||
* @param {string} nodeId - Node to focus on
|
||||
*/
|
||||
focusOnNode(nodeId) {
|
||||
const nodePosition = this.network.getPositions([nodeId]);
|
||||
if (nodePosition[nodeId]) {
|
||||
this.network.moveTo({
|
||||
position: nodePosition[nodeId],
|
||||
scale: 1.5,
|
||||
animation: {
|
||||
duration: 1000,
|
||||
easingFunction: 'easeInOutQuart'
|
||||
}
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Toggle physics simulation
|
||||
*/
|
||||
togglePhysics() {
|
||||
const currentPhysics = this.network.physics.physicsEnabled;
|
||||
this.network.setOptions({ physics: !currentPhysics });
|
||||
|
||||
const button = document.getElementById('graph-physics');
|
||||
if (button) {
|
||||
button.textContent = currentPhysics ? '[PHYSICS OFF]' : '[PHYSICS ON]';
|
||||
button.style.color = currentPhysics ? '#ff9900' : '#00ff41';
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Toggle node clustering
|
||||
*/
|
||||
toggleClustering() {
|
||||
if (this.network.isCluster('domain-cluster')) {
|
||||
this.network.openCluster('domain-cluster');
|
||||
} else {
|
||||
const clusterOptions = {
|
||||
joinCondition: (nodeOptions) => {
|
||||
return nodeOptions.type === 'domain';
|
||||
},
|
||||
clusterNodeProperties: {
|
||||
id: 'domain-cluster',
|
||||
label: 'Domains',
|
||||
shape: 'database',
|
||||
color: '#00ff41',
|
||||
borderWidth: 3,
|
||||
}
|
||||
};
|
||||
this.network.cluster(clusterOptions);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Fit the view to show all nodes
|
||||
*/
|
||||
fitView() {
|
||||
if (this.network) {
|
||||
this.network.fit({
|
||||
animation: {
|
||||
duration: 1000,
|
||||
easingFunction: 'easeInOutQuad'
|
||||
}
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Clear the graph
|
||||
*/
|
||||
clear() {
|
||||
this.nodes.clear();
|
||||
this.edges.clear();
|
||||
|
||||
// Show placeholder
|
||||
const placeholder = this.container.querySelector('.graph-placeholder');
|
||||
if (placeholder) {
|
||||
placeholder.style.display = 'flex';
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Show error message
|
||||
* @param {string} message - Error message
|
||||
*/
|
||||
showError(message) {
|
||||
const placeholder = this.container.querySelector('.graph-placeholder .placeholder-text');
|
||||
if (placeholder) {
|
||||
placeholder.textContent = `Error: ${message}`;
|
||||
placeholder.style.color = '#ff6b6b';
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get network statistics
|
||||
* @returns {Object} Statistics object
|
||||
*/
|
||||
getStatistics() {
|
||||
return {
|
||||
nodeCount: this.nodes.length,
|
||||
edgeCount: this.edges.length,
|
||||
//isStabilized: this.network ? this.network.isStabilized() : false
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Apply filters to the graph
|
||||
* @param {string} nodeType - The type of node to show ('all' for no filter)
|
||||
* @param {number} minConfidence - The minimum confidence score for edges to be visible
|
||||
*/
|
||||
applyFilters(nodeType, minConfidence) {
|
||||
console.log(`Applying filters: nodeType=${nodeType}, minConfidence=${minConfidence}`);
|
||||
|
||||
const nodeUpdates = [];
|
||||
const edgeUpdates = [];
|
||||
|
||||
const allNodes = this.nodes.get({ returnType: 'Object' });
|
||||
const allEdges = this.edges.get();
|
||||
|
||||
// Determine which nodes are visible based on the nodeType filter
|
||||
for (const nodeId in allNodes) {
|
||||
const node = allNodes[nodeId];
|
||||
const isVisible = (nodeType === 'all' || node.type === nodeType);
|
||||
nodeUpdates.push({ id: nodeId, hidden: !isVisible });
|
||||
}
|
||||
|
||||
// Update nodes first to determine edge visibility
|
||||
this.nodes.update(nodeUpdates);
|
||||
|
||||
// Determine which edges are visible based on confidence and connected nodes
|
||||
for (const edge of allEdges) {
|
||||
const sourceNode = this.nodes.get(edge.from);
|
||||
const targetNode = this.nodes.get(edge.to);
|
||||
const confidence = edge.metadata ? edge.metadata.confidence_score : 0;
|
||||
|
||||
const isVisible = confidence >= minConfidence &&
|
||||
sourceNode && !sourceNode.hidden &&
|
||||
targetNode && !targetNode.hidden;
|
||||
|
||||
edgeUpdates.push({ id: edge.id, hidden: !isVisible });
|
||||
}
|
||||
|
||||
this.edges.update(edgeUpdates);
|
||||
|
||||
console.log('Filters applied.');
|
||||
}
|
||||
}
|
||||
|
||||
// Export for use in main.js
|
||||
window.GraphManager = GraphManager;
1443 static/js/main.js Normal file
File diff suppressed because it is too large.
555 static/script.js
@@ -1,555 +0,0 @@
// DNS Reconnaissance Tool - Enhanced Frontend JavaScript with Debug Output
|
||||
|
||||
class ReconTool {
|
||||
constructor() {
|
||||
this.currentScanId = null;
|
||||
this.pollInterval = null;
|
||||
this.liveDataInterval = null;
|
||||
this.currentReport = null;
|
||||
this.debugMode = true; // Enable debug logging
|
||||
this.init();
|
||||
}
|
||||
|
||||
debug(message, data = null) {
|
||||
if (this.debugMode) {
|
||||
if (data) {
|
||||
console.log(`🔍 DEBUG: ${message}`, data);
|
||||
} else {
|
||||
console.log(`🔍 DEBUG: ${message}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
init() {
|
||||
this.bindEvents();
|
||||
this.setupRealtimeElements();
|
||||
}
|
||||
|
||||
setupRealtimeElements() {
|
||||
// Create live discovery container if it doesn't exist
|
||||
if (!document.getElementById('liveDiscoveries')) {
|
||||
const progressSection = document.getElementById('progressSection');
|
||||
const liveDiv = document.createElement('div');
|
||||
liveDiv.id = 'liveDiscoveries';
|
||||
liveDiv.innerHTML = `
|
||||
<div class="live-discoveries" style="display: none;">
|
||||
<h3>🔍 Live Discoveries</h3>
|
||||
<div class="stats-grid">
|
||||
<div class="stat-item">
|
||||
<span class="stat-label">Hostnames:</span>
|
||||
<span id="liveHostnames" class="stat-value">0</span>
|
||||
</div>
|
||||
<div class="stat-item">
|
||||
<span class="stat-label">IP Addresses:</span>
|
||||
<span id="liveIPs" class="stat-value">0</span>
|
||||
</div>
|
||||
<div class="stat-item">
|
||||
<span class="stat-label">DNS Records:</span>
|
||||
<span id="liveDNS" class="stat-value">0</span>
|
||||
</div>
|
||||
<div class="stat-item">
|
||||
<span class="stat-label">Certificates:</span>
|
||||
<span id="liveCerts" class="stat-value">0</span>
|
||||
</div>
|
||||
<div class="stat-item">
|
||||
<span class="stat-label">Shodan Results:</span>
|
||||
<span id="liveShodan" class="stat-value">0</span>
|
||||
</div>
|
||||
<div class="stat-item">
|
||||
<span class="stat-label">VirusTotal:</span>
|
||||
<span id="liveVT" class="stat-value">0</span>
|
||||
</div>
|
||||
</div>
|
||||
<div class="discoveries-list">
|
||||
<h4>📋 Recent Discoveries</h4>
|
||||
<div id="recentHostnames" class="discovery-section">
|
||||
<strong>Hostnames:</strong>
|
||||
<div class="hostname-list"></div>
|
||||
</div>
|
||||
<div id="recentIPs" class="discovery-section">
|
||||
<strong>IP Addresses:</strong>
|
||||
<div class="ip-list"></div>
|
||||
</div>
|
||||
<div id="activityLog" class="discovery-section">
|
||||
<strong>Activity Log:</strong>
|
||||
<div class="activity-list"></div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
`;
|
||||
progressSection.appendChild(liveDiv);
|
||||
this.debug("Live discoveries container created");
|
||||
}
|
||||
}
|
||||
|
||||
bindEvents() {
|
||||
// Start scan button
|
||||
document.getElementById('startScan').addEventListener('click', () => {
|
||||
this.startScan();
|
||||
});
|
||||
|
||||
// New scan button
|
||||
document.getElementById('newScan').addEventListener('click', () => {
|
||||
this.resetToForm();
|
||||
});
|
||||
|
||||
// Report view toggles
|
||||
document.getElementById('showJson').addEventListener('click', () => {
|
||||
this.showReport('json');
|
||||
});
|
||||
|
||||
document.getElementById('showText').addEventListener('click', () => {
|
||||
this.showReport('text');
|
||||
});
|
||||
|
||||
// Download buttons
|
||||
document.getElementById('downloadJson').addEventListener('click', () => {
|
||||
this.downloadReport('json');
|
||||
});
|
||||
|
||||
document.getElementById('downloadText').addEventListener('click', () => {
|
||||
this.downloadReport('text');
|
||||
});
|
||||
|
||||
// Enter key in target field
|
||||
document.getElementById('target').addEventListener('keypress', (e) => {
|
||||
if (e.key === 'Enter') {
|
||||
this.startScan();
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
async startScan() {
|
||||
const target = document.getElementById('target').value.trim();
|
||||
|
||||
if (!target) {
|
||||
alert('Please enter a target domain or hostname');
|
||||
return;
|
||||
}
|
||||
|
||||
const scanData = {
|
||||
target: target,
|
||||
max_depth: parseInt(document.getElementById('maxDepth').value),
|
||||
shodan_key: document.getElementById('shodanKey').value.trim() || null,
|
||||
virustotal_key: document.getElementById('virustotalKey').value.trim() || null
|
||||
};
|
||||
|
||||
try {
|
||||
// Show progress section
|
||||
this.showProgressSection();
|
||||
this.updateProgress(0, 'Starting scan...');
|
||||
|
||||
this.debug('Starting scan with data:', scanData);
|
||||
|
||||
const response = await fetch('/api/scan', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
},
|
||||
body: JSON.stringify(scanData)
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`HTTP error! status: ${response.status}`);
|
||||
}
|
||||
|
||||
const result = await response.json();
|
||||
|
||||
if (result.error) {
|
||||
throw new Error(result.error);
|
||||
}
|
||||
|
||||
this.currentScanId = result.scan_id;
|
||||
this.debug('Scan started with ID:', this.currentScanId);
|
||||
|
||||
this.startPolling();
|
||||
this.startLiveDataPolling();
|
||||
|
||||
} catch (error) {
|
||||
console.error('❌ Failed to start scan:', error);
|
||||
this.showError(`Failed to start scan: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
startPolling() {
|
||||
this.debug('Starting status polling...');
|
||||
// Poll every 2 seconds for status updates
|
||||
this.pollInterval = setInterval(() => {
|
||||
this.checkScanStatus();
|
||||
}, 2000);
|
||||
|
||||
// Also check immediately
|
||||
this.checkScanStatus();
|
||||
}
|
||||
|
||||
startLiveDataPolling() {
|
||||
this.debug('Starting live data polling...');
|
||||
// Poll every 3 seconds for live data updates
|
||||
this.liveDataInterval = setInterval(() => {
|
||||
this.updateLiveData();
|
||||
}, 3000);
|
||||
|
||||
// Show the live discoveries section
|
||||
const liveSection = document.querySelector('.live-discoveries');
|
||||
if (liveSection) {
|
||||
liveSection.style.display = 'block';
|
||||
this.debug('Live discoveries section made visible');
|
||||
} else {
|
||||
this.debug('ERROR: Live discoveries section not found!');
|
||||
}
|
||||
|
||||
// Also update immediately
|
||||
this.updateLiveData();
|
||||
}
|
||||
|
||||
async checkScanStatus() {
|
||||
if (!this.currentScanId) {
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
const response = await fetch(`/api/scan/${this.currentScanId}/status`);
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`HTTP error! status: ${response.status}`);
|
||||
}
|
||||
|
||||
const status = await response.json();
|
||||
|
||||
if (status.error) {
|
||||
throw new Error(status.error);
|
||||
}
|
||||
|
||||
// Update progress
|
||||
this.updateProgress(status.progress, status.message);
|
||||
|
||||
// Update live stats
|
||||
if (status.live_stats) {
|
||||
this.debug('Received live stats:', status.live_stats);
|
||||
this.updateLiveStats(status.live_stats);
|
||||
}
|
||||
|
||||
// Check if completed
|
||||
if (status.status === 'completed') {
|
||||
this.debug('Scan completed, loading report...');
|
||||
this.stopPolling();
|
||||
await this.loadScanReport();
|
||||
} else if (status.status === 'error') {
|
||||
this.stopPolling();
|
||||
throw new Error(status.error || 'Scan failed');
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
console.error('❌ Error checking scan status:', error);
|
||||
this.stopPolling();
|
||||
this.showError(`Error checking scan status: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
async updateLiveData() {
|
||||
if (!this.currentScanId) {
|
||||
return;
|
||||
}
|
||||
|
||||
this.debug(`Fetching live data for scan: ${this.currentScanId}`);
|
||||
|
||||
try {
|
||||
const response = await fetch(`/api/scan/${this.currentScanId}/live-data`);
|
||||
|
||||
if (!response.ok) {
|
||||
this.debug(`Live data request failed: HTTP ${response.status}`);
|
||||
return; // Silently fail for live data
|
||||
}
|
||||
|
||||
const data = await response.json();
|
||||
|
||||
if (data.error) {
|
||||
this.debug('Live data error:', data.error);
|
||||
return; // Silently fail for live data
|
||||
}
|
||||
|
||||
this.debug('Received live data:', data);
|
||||
|
||||
// Update live discoveries
|
||||
this.updateLiveDiscoveries(data);
|
||||
|
||||
} catch (error) {
|
||||
// Silently fail for live data updates
|
||||
this.debug('Live data update failed:', error);
|
||||
}
|
||||
}
|
||||
|
||||
updateLiveStats(stats) {
|
||||
this.debug('Updating live stats:', stats);
|
||||
|
||||
// Update the live statistics counters
|
||||
const statElements = {
|
||||
'liveHostnames': stats.hostnames || 0,
|
||||
'liveIPs': stats.ip_addresses || 0,
|
||||
'liveDNS': stats.dns_records || 0,
|
||||
'liveCerts': stats.certificates || 0,
|
||||
'liveShodan': stats.shodan_results || 0,
|
||||
'liveVT': stats.virustotal_results || 0
|
||||
};
|
||||
|
||||
Object.entries(statElements).forEach(([elementId, value]) => {
|
||||
const element = document.getElementById(elementId);
|
||||
if (element) {
|
||||
const currentValue = element.textContent;
|
||||
element.textContent = value;
|
||||
|
||||
if (currentValue !== value.toString()) {
|
||||
this.debug(`Updated ${elementId}: ${currentValue} -> ${value}`);
|
||||
// Add a brief highlight effect when value changes
|
||||
element.style.backgroundColor = '#ff9900';
|
||||
setTimeout(() => {
|
||||
element.style.backgroundColor = '';
|
||||
}, 1000);
|
||||
}
|
||||
} else {
|
||||
this.debug(`ERROR: Element ${elementId} not found!`);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
updateLiveDiscoveries(data) {
|
||||
this.debug('Updating live discoveries with data:', data);
|
||||
|
||||
// Update hostnames list
|
||||
const hostnameList = document.querySelector('#recentHostnames .hostname-list');
|
||||
if (hostnameList && data.hostnames && data.hostnames.length > 0) {
|
||||
// Show last 10 hostnames
|
||||
const recentHostnames = data.hostnames;
|
||||
hostnameList.innerHTML = recentHostnames.map(hostname =>
|
||||
`<span class="discovery-item">${hostname}</span>`
|
||||
).join('');
|
||||
this.debug(`Updated hostname list with ${recentHostnames.length} items`);
|
||||
} else if (hostnameList) {
|
||||
this.debug(`No hostnames to display (${data.hostnames ? data.hostnames.length : 0} total)`);
|
||||
}
|
||||
|
||||
// Update IP addresses list
|
||||
const ipList = document.querySelector('#recentIPs .ip-list');
|
||||
if (ipList && data.ip_addresses && data.ip_addresses.length > 0) {
|
||||
// Show last 10 IPs
|
||||
const recentIPs = data.ip_addresses;
|
||||
ipList.innerHTML = recentIPs.map(ip =>
|
||||
`<span class="discovery-item">${ip}</span>`
|
||||
).join('');
|
||||
this.debug(`Updated IP list with ${recentIPs.length} items`);
|
||||
} else if (ipList) {
|
||||
this.debug(`No IPs to display (${data.ip_addresses ? data.ip_addresses.length : 0} total)`);
|
||||
}
|
||||
|
||||
// Update activity log
|
||||
const activityList = document.querySelector('#activityLog .activity-list');
|
||||
if (activityList && data.latest_discoveries && data.latest_discoveries.length > 0) {
|
||||
const activities = data.latest_discoveries.slice(-5); // Last 5 activities
|
||||
activityList.innerHTML = activities.map(activity => {
|
||||
const time = new Date(activity.timestamp * 1000).toLocaleTimeString();
|
||||
return `<div class="activity-item">[${time}] ${activity.message}</div>`;
|
||||
}).join('');
|
||||
this.debug(`Updated activity log with ${activities.length} items`);
|
||||
} else if (activityList) {
|
||||
this.debug(`No activities to display (${data.latest_discoveries ? data.latest_discoveries.length : 0} total)`);
|
||||
}
|
||||
}
|
||||
|
||||
async loadScanReport() {
|
||||
try {
|
||||
this.debug('Loading scan report...');
|
||||
const response = await fetch(`/api/scan/${this.currentScanId}/report`);
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`HTTP error! status: ${response.status}`);
|
||||
}
|
||||
|
||||
const report = await response.json();
|
||||
|
||||
if (report.error) {
|
||||
throw new Error(report.error);
|
||||
}
|
||||
|
||||
this.currentReport = report;
|
||||
this.debug('Report loaded successfully');
|
||||
this.showResultsSection();
|
||||
this.showReport('text'); // Default to text view
|
||||
|
||||
} catch (error) {
|
||||
console.error('❌ Error loading report:', error);
|
||||
this.showError(`Error loading report: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
stopPolling() {
|
||||
this.debug('Stopping polling intervals...');
|
||||
if (this.pollInterval) {
|
||||
clearInterval(this.pollInterval);
|
||||
this.pollInterval = null;
|
||||
}
|
||||
if (this.liveDataInterval) {
|
||||
clearInterval(this.liveDataInterval);
|
||||
this.liveDataInterval = null;
|
||||
}
|
||||
}
|
||||
|
||||
showProgressSection() {
|
||||
document.getElementById('scanForm').style.display = 'none';
|
||||
document.getElementById('progressSection').style.display = 'block';
|
||||
document.getElementById('resultsSection').style.display = 'none';
|
||||
this.debug('Showing progress section');
|
||||
}
|
||||
|
||||
showResultsSection() {
|
||||
document.getElementById('scanForm').style.display = 'none';
|
||||
document.getElementById('progressSection').style.display = 'block'; // Keep visible
|
||||
document.getElementById('resultsSection').style.display = 'block';
|
||||
|
||||
// Change the title to show it's the final summary
|
||||
const liveSection = document.querySelector('.live-discoveries');
|
||||
if (liveSection) {
|
||||
const title = liveSection.querySelector('h3');
|
||||
if (title) {
|
||||
title.textContent = '📊 Final Discovery Summary';
|
||||
}
|
||||
liveSection.style.display = 'block';
|
||||
}
|
||||
|
||||
// Hide just the progress bar and scan controls
|
||||
const progressBar = document.querySelector('.progress-bar');
|
||||
const progressMessage = document.getElementById('progressMessage');
|
||||
const scanControls = document.querySelector('.scan-controls');
|
||||
|
||||
if (progressBar) progressBar.style.display = 'none';
|
||||
if (progressMessage) progressMessage.style.display = 'none';
|
||||
if (scanControls) scanControls.style.display = 'none';
|
||||
|
||||
this.debug('Showing results section with live discoveries');
|
||||
}
|
||||
|
||||
resetToForm() {
|
||||
this.stopPolling();
|
||||
this.currentScanId = null;
|
||||
this.currentReport = null;
|
||||
|
||||
document.getElementById('scanForm').style.display = 'block';
|
||||
document.getElementById('progressSection').style.display = 'none';
|
||||
document.getElementById('resultsSection').style.display = 'none';
|
||||
|
||||
// Show progress elements again
|
||||
const progressBar = document.querySelector('.progress-bar');
|
||||
const progressMessage = document.getElementById('progressMessage');
|
||||
const scanControls = document.querySelector('.scan-controls');
|
||||
|
||||
if (progressBar) progressBar.style.display = 'block';
|
||||
if (progressMessage) progressMessage.style.display = 'block';
|
||||
if (scanControls) scanControls.style.display = 'block';
|
||||
|
||||
// Hide live discoveries and reset title
|
||||
const liveSection = document.querySelector('.live-discoveries');
|
||||
if (liveSection) {
|
||||
liveSection.style.display = 'none';
|
||||
const title = liveSection.querySelector('h3');
|
||||
if (title) {
|
||||
title.textContent = '🔍 Live Discoveries';
|
||||
}
|
||||
}
|
||||
|
||||
// Clear form
|
||||
document.getElementById('target').value = '';
|
||||
document.getElementById('shodanKey').value = '';
|
||||
document.getElementById('virustotalKey').value = '';
|
||||
document.getElementById('maxDepth').value = '2';
|
||||
|
||||
this.debug('Reset to form view');
|
||||
}
|
||||
|
||||
updateProgress(percentage, message) {
|
||||
const progressFill = document.getElementById('progressFill');
|
||||
const progressMessage = document.getElementById('progressMessage');
|
||||
|
||||
progressFill.style.width = `${percentage || 0}%`;
|
||||
progressMessage.textContent = message || 'Processing...';
|
||||
}
|
||||
|
||||
showError(message) {
|
||||
// Update progress section to show error
|
||||
this.updateProgress(0, `Error: ${message}`);
|
||||
|
||||
// Also alert the user
|
||||
alert(`Error: ${message}`);
|
||||
}
|
||||
|
||||
showReport(type) {
|
||||
if (!this.currentReport) {
|
||||
return;
|
||||
}
|
||||
|
||||
const reportContent = document.getElementById('reportContent');
|
||||
const showJsonBtn = document.getElementById('showJson');
|
||||
const showTextBtn = document.getElementById('showText');
|
||||
|
||||
if (type === 'json') {
|
||||
// Show JSON report
|
||||
try {
|
||||
// The json_report should already be a string from the server
|
||||
let jsonData;
|
||||
if (typeof this.currentReport.json_report === 'string') {
|
||||
jsonData = JSON.parse(this.currentReport.json_report);
|
||||
} else {
|
||||
jsonData = this.currentReport.json_report;
|
||||
}
|
||||
reportContent.textContent = JSON.stringify(jsonData, null, 2);
|
||||
} catch (e) {
|
||||
console.error('Error parsing JSON report:', e);
|
||||
reportContent.textContent = this.currentReport.json_report;
|
||||
}
|
||||
|
||||
showJsonBtn.classList.add('active');
|
||||
showTextBtn.classList.remove('active');
|
||||
} else {
|
||||
// Show text report
|
||||
reportContent.textContent = this.currentReport.text_report;
|
||||
|
||||
showTextBtn.classList.add('active');
|
||||
showJsonBtn.classList.remove('active');
|
||||
}
|
||||
}
|
||||
|
||||
downloadReport(type) {
|
||||
if (!this.currentReport) {
|
||||
return;
|
||||
}
|
||||
|
||||
let content, filename, mimeType;
|
||||
|
||||
if (type === 'json') {
|
||||
content = typeof this.currentReport.json_report === 'string'
|
||||
? this.currentReport.json_report
|
||||
: JSON.stringify(this.currentReport.json_report, null, 2);
|
||||
filename = `recon-report-${this.currentScanId}.json`;
|
||||
mimeType = 'application/json';
|
||||
} else {
|
||||
content = this.currentReport.text_report;
|
||||
filename = `recon-report-${this.currentScanId}.txt`;
|
||||
mimeType = 'text/plain';
|
||||
}
|
||||
|
||||
// Create download link
|
||||
const blob = new Blob([content], { type: mimeType });
|
||||
const url = window.URL.createObjectURL(blob);
|
||||
const a = document.createElement('a');
|
||||
a.href = url;
|
||||
a.download = filename;
|
||||
document.body.appendChild(a);
|
||||
a.click();
|
||||
window.URL.revokeObjectURL(url);
|
||||
document.body.removeChild(a);
|
||||
}
|
||||
}
|
||||
|
||||
// Initialize the application when DOM is loaded
|
||||
document.addEventListener('DOMContentLoaded', () => {
|
||||
console.log('🌐 DNS Reconnaissance Tool initialized with debug mode');
|
||||
new ReconTool();
|
||||
});
439 static/style.css
@@ -1,439 +0,0 @@
/*
|
||||
███████╗██████╗ ███████╗ ██████╗████████╗ ██████╗ ██████╗ ██╗ ██╗███████╗
|
||||
██╔════╝██╔══██╗██╔════╝██╔═══██╗╚══██╔══╝ ██╔═══██╗██╔═══██╗╚██╗██╔╝██╔════╝
|
||||
███████╗██████╔╝█████╗ ██║ ██║ ██║ ██║ ██║██║ ██║ ╚███╔╝ ███████╗
|
||||
╚════██║██╔══██╗██╔══╝ ██║ ██║ ██║ ██║ ██║██║ ██║ ██╔██╗ ╚════██║
|
||||
███████║██║ ██║███████╗╚██████╔╝ ██║ ╚██████╔╝╚██████╔╝██╔╝ ██╗███████║
|
||||
╚══════╝╚═╝ ╚═╝╚══════╝ ╚═════╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝╚══════╝
|
||||
|
||||
TACTICAL THEME - DNS RECONNAISSANCE INTERFACE
|
||||
STYLE OVERRIDE
|
||||
*/
|
||||
|
||||
* {
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
box-sizing: border-box;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: 'Roboto Mono', 'Lucida Console', Monaco, monospace;
|
||||
line-height: 1.6;
|
||||
color: #c7c7c7; /* Light grey for readability */
|
||||
/* Dark, textured background for a gritty feel */
|
||||
background-color: #1a1a1a;
|
||||
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='4' height='4' viewBox='0 0 4 4'%3E%3Cpath fill='%23333333' fill-opacity='0.4' d='M1 3h1v1H1V3zm2-2h1v1H3V1z'%3E%3C/path%3E%3C/svg%3E");
|
||||
min-height: 100vh;
|
||||
}
|
||||
|
||||
.container {
|
||||
max-width: 1200px;
|
||||
margin: 0 auto;
|
||||
padding: 20px;
|
||||
}
|
||||
|
||||
header {
|
||||
text-align: center;
|
||||
color: #e0e0e0;
|
||||
margin-bottom: 40px;
|
||||
border-bottom: 1px solid #444;
|
||||
padding-bottom: 20px;
|
||||
}
|
||||
|
||||
header h1 {
|
||||
font-family: 'Special Elite', 'Courier New', monospace; /* Stencil / Typewriter font */
|
||||
font-size: 2.8rem;
|
||||
color: #00ff41; /* Night-vision green */
|
||||
text-shadow: 0 0 5px rgba(0, 255, 65, 0.5);
|
||||
margin-bottom: 10px;
|
||||
letter-spacing: 2px;
|
||||
}
|
||||
|
||||
header p {
|
||||
font-size: 1.1rem;
|
||||
color: #a0a0a0;
|
||||
}
|
||||
|
||||
.scan-form, .progress-section, .results-section {
|
||||
background: #2a2a2a; /* Dark charcoal */
|
||||
border-radius: 4px; /* Sharper edges */
|
||||
border: 1px solid #444;
|
||||
box-shadow: inset 0 0 15px rgba(0,0,0,0.5);
|
||||
padding: 30px;
|
||||
margin-bottom: 25px;
|
||||
}
|
||||
|
||||
.scan-form h2, .progress-section h2, .results-section h2 {
|
||||
margin-bottom: 20px;
|
||||
color: #e0e0e0;
|
||||
border-bottom: 1px solid #555;
|
||||
padding-bottom: 10px;
|
||||
text-transform: uppercase; /* Military style */
|
||||
letter-spacing: 1px;
|
||||
}
|
||||
|
||||
.form-group {
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
|
||||
.form-group label {
|
||||
display: block;
|
||||
margin-bottom: 8px;
|
||||
font-weight: 600;
|
||||
color: #b0b0b0;
|
||||
text-transform: uppercase;
|
||||
font-size: 0.9rem;
|
||||
}
|
||||
|
||||
.form-group input, .form-group select {
|
||||
width: 100%;
|
||||
padding: 12px;
|
||||
background: #1a1a1a;
|
||||
border: 1px solid #555;
|
||||
border-radius: 2px;
|
||||
font-size: 16px;
|
||||
color: #00ff41; /* Green text for input fields */
|
||||
font-family: 'Roboto Mono', monospace;
|
||||
transition: all 0.2s ease-in-out;
|
||||
}
|
||||
|
||||
.form-group input:focus, .form-group select:focus {
|
||||
outline: none;
|
||||
border-color: #ff9900; /* Amber focus color */
|
||||
box-shadow: 0 0 5px rgba(255, 153, 0, 0.5);
|
||||
}
|
||||
|
||||
.api-keys {
|
||||
background: rgba(0,0,0,0.3);
|
||||
padding: 20px;
|
||||
border-radius: 4px;
|
||||
border: 1px solid #444;
|
||||
margin: 20px 0;
|
||||
}
|
||||
|
||||
.api-keys h3 {
|
||||
margin-bottom: 15px;
|
||||
color: #c7c7c7;
|
||||
}
|
||||
|
||||
.btn-primary, .btn-secondary {
|
||||
padding: 12px 24px;
|
||||
border: 1px solid #666;
|
||||
border-radius: 2px;
|
||||
font-size: 16px;
|
||||
font-weight: 600;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s ease-in-out;
|
||||
margin-right: 10px;
|
||||
margin-bottom: 10px;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 1px;
|
||||
}
|
||||
|
||||
.btn-primary {
|
||||
background: #2c5c34; /* Dark military green */
|
||||
color: #e0e0e0;
|
||||
border-color: #3b7b46;
|
||||
}
|
||||
|
||||
.btn-primary:hover {
|
||||
background: #3b7b46; /* Lighter green on hover */
|
||||
color: #fff;
|
||||
border-color: #4cae5c;
|
||||
}
|
||||
|
||||
.btn-secondary {
|
||||
background: #4a4a4a; /* Dark grey */
|
||||
color: #c7c7c7;
|
||||
border-color: #666;
|
||||
}
|
||||
|
||||
.btn-secondary:hover {
|
||||
background: #5a5a5a;
|
||||
}
|
||||
|
||||
.btn-secondary.active {
|
||||
background: #6a4f2a; /* Amber/Brown for active state */
|
||||
color: #fff;
|
||||
border-color: #ff9900;
|
||||
}
|
||||
|
||||
.progress-bar {
|
||||
width: 100%;
|
||||
height: 20px;
|
||||
background: #1a1a1a;
|
||||
border: 1px solid #555;
|
||||
border-radius: 2px;
|
||||
overflow: hidden;
|
||||
margin-bottom: 15px;
|
||||
padding: 2px;
|
||||
}
|
||||
|
||||
.progress-fill {
|
||||
height: 100%;
|
||||
background: #ff9900; /* Solid amber progress fill */
|
||||
width: 0%;
|
||||
transition: width 0.3s ease;
|
||||
border-radius: 0;
|
||||
}
|
||||
|
||||
#progressMessage {
|
||||
font-weight: 500;
|
||||
color: #a0a0a0;
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
|
||||
.scan-controls {
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.results-controls {
|
||||
margin-bottom: 20px;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.report-container {
|
||||
background: #0a0a0a; /* Near-black terminal background */
|
||||
border-radius: 4px;
|
||||
border: 1px solid #333;
|
||||
padding: 20px;
|
||||
max-height: 600px;
|
||||
overflow-y: auto;
|
||||
box-shadow: inset 0 0 10px #000;
|
||||
}
|
||||
|
||||
#reportContent {
|
||||
color: #00ff41; /* Classic terminal green */
|
||||
font-family: 'Courier New', monospace;
|
||||
font-size: 14px;
|
||||
line-height: 1.4;
|
||||
white-space: pre-wrap;
|
||||
word-wrap: break-word;
|
||||
}
|
||||
|
||||
.hostname-list, .ip-list {
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
gap: 5px;
|
||||
}
|
||||
|
||||
.discovery-item {
|
||||
background: #2a2a2a;
|
||||
color: #00ff41;
|
||||
padding: 2px 6px;
|
||||
border-radius: 2px;
|
||||
font-family: 'Courier New', monospace;
|
||||
font-size: 0.8rem;
|
||||
border: 1px solid #444;
|
||||
}
|
||||
|
||||
.activity-list {
|
||||
max-height: 150px;
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
.activity-item {
|
||||
color: #a0a0a0;
|
||||
font-family: 'Courier New', monospace;
|
||||
font-size: 0.8rem;
|
||||
padding: 2px 0;
|
||||
    border-bottom: 1px solid #333;
}

.activity-item:last-child {
    border-bottom: none;
}

/* Live Discoveries Base Styling */
.live-discoveries {
    background: rgba(0, 20, 0, 0.6);
    border: 1px solid #00ff41;
    border-radius: 4px;
    padding: 20px;
    margin-top: 20px;
}

.live-discoveries h3 {
    color: #00ff41;
    margin-bottom: 15px;
    text-transform: uppercase;
    letter-spacing: 1px;
}

/* Enhanced styling for live discoveries when shown in results view */
.results-section .live-discoveries {
    background: rgba(0, 40, 0, 0.8);
    border: 2px solid #00ff41;
    border-radius: 4px;
    padding: 20px;
    margin-bottom: 25px;
    box-shadow: 0 0 10px rgba(0, 255, 65, 0.3);
}

.results-section .live-discoveries h3 {
    color: #00ff41;
    text-shadow: 0 0 3px rgba(0, 255, 65, 0.5);
}

/* Ensure the progress section flows nicely when showing both progress and results */
.progress-section.with-results {
    margin-bottom: 0;
    border-bottom: none;
}

.results-section.with-live-data {
    border-top: 1px solid #444;
    padding-top: 20px;
}

/* Better spacing for the combined view */
.progress-section + .results-section {
    margin-top: 0;
}

/* Hide specific progress elements while keeping the section visible */
.progress-section .progress-bar.hidden,
.progress-section #progressMessage.hidden,
.progress-section .scan-controls.hidden {
    display: none !important;
}

.stats-grid {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
    gap: 15px;
    margin-bottom: 20px;
}

.stat-item {
    display: flex;
    justify-content: space-between;
    align-items: center;
    padding: 8px 12px;
    background: rgba(0, 0, 0, 0.5);
    border: 1px solid #333;
    border-radius: 2px;
}

.stat-label {
    color: #a0a0a0;
    font-size: 0.9rem;
}

.stat-value {
    color: #00ff41;
    font-weight: bold;
    font-family: 'Courier New', monospace;
    transition: background-color 0.3s ease;
}

/* Animation for final stats highlight */
@keyframes finalHighlight {
    0% { background-color: #ff9900; }
    100% { background-color: transparent; }
}

.stat-value.final {
    animation: finalHighlight 2s ease-in-out;
}

.discoveries-list {
    margin-top: 20px;
}

.discoveries-list h4 {
    color: #ff9900;
    margin-bottom: 15px;
    border-bottom: 1px solid #444;
    padding-bottom: 5px;
}

.discovery-section {
    margin-bottom: 15px;
    padding: 10px;
    background: rgba(0, 0, 0, 0.3);
    border: 1px solid #333;
    border-radius: 2px;
}

.discovery-section strong {
    color: #c7c7c7;
    display: block;
    margin-bottom: 8px;
    font-size: 0.9rem;
}

/* Tactical loading spinner */
.loading {
    display: inline-block;
    width: 20px;
    height: 20px;
    border: 3px solid rgba(199, 199, 199, 0.3);
    border-radius: 50%;
    border-top-color: #00ff41; /* Night-vision green spinner */
    animation: spin 1s linear infinite;
}

@keyframes spin {
    to { transform: rotate(360deg); }
}

/* Responsive design adjustments */
@media (max-width: 768px) {
    .container {
        padding: 10px;
    }

    header h1 {
        font-size: 2.2rem;
    }

    .scan-form, .progress-section, .results-section {
        padding: 20px;
    }

    .btn-primary, .btn-secondary {
        width: 100%;
        margin-right: 0;
    }

    .results-controls {
        display: flex;
        flex-wrap: wrap;
        justify-content: center;
    }

    .results-controls button {
        flex: 1;
        min-width: 120px;
    }

    .stats-grid {
        grid-template-columns: repeat(2, 1fr);
        gap: 10px;
    }

    .stat-item {
        padding: 6px 8px;
    }

    .stat-label, .stat-value {
        font-size: 0.8rem;
    }

    .hostname-list, .ip-list {
        flex-direction: column;
        align-items: flex-start;
    }

    /* Responsive adjustments for the combined view */
    .results-section .live-discoveries {
        padding: 15px;
        margin-bottom: 15px;
    }

    .results-section .live-discoveries .stats-grid {
        grid-template-columns: repeat(2, 1fr);
        gap: 10px;
    }
}
@@ -3,78 +3,248 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>DNS Reconnaissance Tool</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
<title>DNSRecon - Infrastructure Reconnaissance</title>
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}">
<script src="https://cdnjs.cloudflare.com/ajax/libs/vis/4.21.0/vis.min.js"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/vis/4.21.0/vis.min.css" rel="stylesheet" type="text/css">
<link href="https://fonts.googleapis.com/css2?family=Roboto+Mono:wght@300;400;500;700&family=Special+Elite&display=swap" rel="stylesheet">
</head>
<body>
<div class="container">
<header>
<h1>🔍 DNS Reconnaissance Tool</h1>
<p>Comprehensive domain and IP intelligence gathering</p>
<header class="header">
<div class="header-content">
<div class="logo">
<span class="logo-icon">[DNS]</span>
<span class="logo-text">RECON</span>
</div>
<div class="status-indicator">
<span id="connection-status" class="status-dot"></span>
<span class="status-text">System Online</span>
</div>
</div>
</header>

<div class="scan-form" id="scanForm">
<h2>Start New Scan</h2>

<div class="form-group">
<label for="target">Target (domain.com or hostname):</label>
<input type="text" id="target" placeholder="example.com or example" required>
</div>

<div class="form-group">
<label for="maxDepth">Max Recursion Depth:</label>
<select id="maxDepth">
<option value="1">1</option>
<option value="2" selected>2</option>
<option value="3">3</option>
<option value="4">4</option>
<option value="5">5</option>
</select>
</div>

<div class="api-keys">
<h3>Optional API Keys</h3>
<div class="form-group">
<label for="shodanKey">Shodan API Key:</label>
<input type="password" id="shodanKey" placeholder="Optional - for port scanning data">
<main class="main-content">
<section class="control-panel">
<div class="panel-header">
<h2>Target Configuration</h2>
</div>

<div class="form-group">
<label for="virustotalKey">VirusTotal API Key:</label>
<input type="password" id="virustotalKey" placeholder="Optional - for security analysis">
<div class="form-container">
<div class="input-group">
<label for="target-domain">Target Domain</label>
<input type="text" id="target-domain" placeholder="example.com" autocomplete="off">
</div>

<div class="input-group">
<label for="max-depth">Recursion Depth</label>
<select id="max-depth">
<option value="1">Depth 1 - Direct relationships</option>
<option value="2" selected>Depth 2 - Recommended</option>
<option value="3">Depth 3 - Extended analysis</option>
<option value="4">Depth 4 - Deep reconnaissance</option>
<option value="5">Depth 5 - Maximum depth</option>
</select>
</div>

<div class="button-group">
<button id="start-scan" class="btn btn-primary">
<span class="btn-icon">[RUN]</span>
<span>Start Reconnaissance</span>
</button>
<button id="add-to-graph" class="btn btn-primary">
<span class="btn-icon">[ADD]</span>
<span>Add to Graph</span>
</button>
<button id="stop-scan" class="btn btn-secondary" disabled>
<span class="btn-icon">[STOP]</span>
<span>Terminate Scan</span>
</button>
<button id="export-results" class="btn btn-secondary">
<span class="btn-icon">[EXPORT]</span>
<span>Download Results</span>
</button>
<button id="configure-api-keys" class="btn btn-secondary">
<span class="btn-icon">[API]</span>
<span>Configure API Keys</span>
</button>
</div>
</div>
</div>
</section>

<button id="startScan" class="btn-primary">Start Reconnaissance</button>
</div>
<section class="status-panel">
<div class="panel-header">
<h2>Reconnaissance Status</h2>
</div>

<div class="progress-section" id="progressSection" style="display: none;">
<h2>Scan Progress</h2>
<div class="progress-bar">
<div class="progress-fill" id="progressFill"></div>
<div class="status-content">
<div class="status-row">
<span class="status-label">Current Status:</span>
<span id="scan-status" class="status-value">Idle</span>
</div>
<div class="status-row">
<span class="status-label">Target:</span>
<span id="target-display" class="status-value">None</span>
</div>
<div class="status-row">
<span class="status-label">Depth:</span>
<span id="depth-display" class="status-value">0/0</span>
</div>
<div class="status-row">
<span class="status-label">Progress:</span>
<span id="progress-display" class="status-value">0%</span>
</div>
<div class="status-row">
<span class="status-label">Indicators:</span>
<span id="indicators-display" class="status-value">0</span>
</div>
<div class="status-row">
<span class="status-label">Relationships:</span>
<span id="relationships-display" class="status-value">0</span>
</div>
</div>

<div class="progress-bar">
<div id="progress-fill" class="progress-fill"></div>
</div>
</section>

<section class="visualization-panel">
<div class="panel-header">
<h2>Infrastructure Map</h2>
<div class="view-controls">
<div class="filter-group">
<label for="node-type-filter">Node Type:</label>
<select id="node-type-filter">
<option value="all">All</option>
<option value="domain">Domain</option>
<option value="ip">IP</option>
<option value="asn">ASN</option>
<option value="correlation_object">Correlation Object</option>
<option value="large_entity">Large Entity</option>
</select>
</div>
<div class="filter-group">
<label for="confidence-filter">Min Confidence:</label>
<input type="range" id="confidence-filter" min="0" max="1" step="0.1" value="0">
<span id="confidence-value">0</span>
</div>
</div>
</div>

<div id="network-graph" class="graph-container">
<div class="graph-placeholder">
<div class="placeholder-content">
<div class="placeholder-icon">[○]</div>
<div class="placeholder-text">Infrastructure map will appear here</div>
<div class="placeholder-subtext">Start a reconnaissance scan to visualize relationships</div>
</div>
</div>
</div>

<div class="legend">
<div class="legend-item">
<div class="legend-color" style="background-color: #00ff41;"></div>
<span>Domains</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #ff9900;"></div>
<span>IP Addresses</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #c7c7c7;"></div>
<span>Domain (invalid cert)</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #9d4edd;"></div>
<span>Correlation Objects</span>
</div>
<div class="legend-item">
<div class="legend-edge high-confidence"></div>
<span>High Confidence</span>
</div>
<div class="legend-item">
<div class="legend-edge medium-confidence"></div>
<span>Medium Confidence</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #ff6b6b;"></div>
<span>Large Entity</span>
</div>
</div>
</section>

<section class="provider-panel">
<div class="panel-header">
<h2>Data Providers</h2>
</div>

<div id="provider-list" class="provider-list">
</div>
</section>
</main>

<footer class="footer">
<div class="footer-content">
<span>v0.0.0rc</span>
<span class="footer-separator">|</span>
<span>Passive Infrastructure Reconnaissance</span>
<span class="footer-separator">|</span>
<span id="session-id">Session: Loading...</span>
</div>
<p id="progressMessage">Initializing...</p>
<div class="scan-controls">
<button id="newScan" class="btn-secondary">New Scan</button>
</footer>

<div id="node-modal" class="modal">
<div class="modal-content">
<div class="modal-header">
<h3 id="modal-title">Node Details</h3>
<button id="modal-close" class="modal-close">[×]</button>
</div>
<div class="modal-body">
<div id="modal-details">
</div>
</div>
</div>
</div>

<div class="results-section" id="resultsSection" style="display: none;">
<h2>Reconnaissance Results</h2>

<div class="results-controls">
<button id="showJson" class="btn-secondary">Show JSON</button>
<button id="showText" class="btn-secondary active">Show Text Report</button>
<button id="downloadJson" class="btn-secondary">Download JSON</button>
<button id="downloadText" class="btn-secondary">Download Text</button>
</div>

<div class="report-container">
<pre id="reportContent"></pre>
<div id="api-key-modal" class="modal">
<div class="modal-content">
<div class="modal-header">
<h3>Configure API Keys</h3>
<button id="api-key-modal-close" class="modal-close">[×]</button>
</div>
<div class="modal-body">
<p class="modal-description">
Enter your API keys for enhanced data providers. Keys are stored in memory for the current session only and are never saved to disk.
</p>
<div id="api-key-inputs">
</div>
<div class="button-group" style="flex-direction: row; justify-content: flex-end;">
<button id="reset-api-keys" class="btn btn-secondary">
<span>Reset</span>
</button>
<button id="save-api-keys" class="btn btn-primary">
<span>Save Keys</span>
</button>
</div>
</div>
</div>
</div>
</div>

<script src="{{ url_for('static', filename='script.js') }}"></script>
<script>
function copyToClipboard(elementId) {
    const element = document.getElementById(elementId);
    const textToCopy = element.innerText;
    navigator.clipboard.writeText(textToCopy).then(() => {
        // Optional: Show a success message
        console.log('Copied to clipboard');
    }).catch(err => {
        console.error('Failed to copy: ', err);
    });
}
</script>
<script src="{{ url_for('static', filename='js/graph.js') }}"></script>
<script src="{{ url_for('static', filename='js/main.js') }}"></script>
</body>
</html>
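The api-key-modal copy above states that keys are held in memory for the current session only and are never written to disk. As a rough illustration of that claim, the following is a minimal sketch of session-scoped, in-memory key storage; the class and method names are assumptions for illustration only, not the project's actual implementation.

```python
# Illustrative sketch only: per-session, in-memory API key storage.
# Keys live in a plain dict inside the running process and are never persisted.
from typing import Dict, Optional


class SessionKeyStore:
    """Holds provider API keys per session, in process memory only."""

    def __init__(self) -> None:
        # session_id -> {provider_name: api_key}
        self._keys: Dict[str, Dict[str, str]] = {}

    def set_key(self, session_id: str, provider: str, api_key: str) -> None:
        self._keys.setdefault(session_id, {})[provider] = api_key

    def get_key(self, session_id: str, provider: str) -> Optional[str]:
        return self._keys.get(session_id, {}).get(provider)

    def reset(self, session_id: str) -> None:
        # Dropping the entry is all that is needed; nothing was written to disk.
        self._keys.pop(session_id, None)
```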
1440  tlds_cache.txt
File diff suppressed because it is too large
0   utils/__init__.py   Normal file
50  utils/helpers.py    Normal file
@@ -0,0 +1,50 @@
def _is_valid_domain(domain: str) -> bool:
    """
    Basic domain validation.

    Args:
        domain: Domain string to validate

    Returns:
        True if domain appears valid
    """
    if not domain or len(domain) > 253:
        return False

    # Check for valid characters and structure
    parts = domain.split('.')
    if len(parts) < 2:
        return False

    for part in parts:
        if not part or len(part) > 63:
            return False
        if not part.replace('-', '').replace('_', '').isalnum():
            return False

    return True


def _is_valid_ip(ip: str) -> bool:
    """
    Basic IP address validation.

    Args:
        ip: IP address string to validate

    Returns:
        True if IP appears valid
    """
    try:
        parts = ip.split('.')
        if len(parts) != 4:
            return False

        for part in parts:
            num = int(part)
            if not 0 <= num <= 255:
                return False

        return True

    except (ValueError, AttributeError):
        return False
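Both validators above are deliberately basic string checks rather than full RFC validation. A short usage sketch follows, assuming the package layout shown above (`utils/__init__.py` alongside `utils/helpers.py`) so the helpers import as `utils.helpers`; the candidate list is purely illustrative.

```python
# Illustrative usage of the helpers above; run from the repository root.
from utils.helpers import _is_valid_domain, _is_valid_ip

candidates = ["example.com", "sub.example.org", "10.0.0.1", "not a hostname", "999.999.1.1"]

for target in candidates:
    if _is_valid_ip(target):
        print(f"{target}: treat as IP address")
    elif _is_valid_domain(target):
        print(f"{target}: treat as domain")
    else:
        print(f"{target}: rejected by basic validation")
```

Note the edge case visible in the last candidate: an out-of-range dotted quad such as 999.999.1.1 fails `_is_valid_ip` but still passes the looser character checks in `_is_valid_domain`, so it falls through to the domain branch in this sketch.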