2 Commits

| Author | SHA1 | Message | Date |
| :--- | :--- | :--- | :--- |
| overcuriousity | 4378146d0c | it | 2025-09-14 13:14:02 +02:00 |
| overcuriousity | b26002eff9 | fix race condition | 2025-09-14 01:40:17 +02:00 |
26 changed files with 4265 additions and 5606 deletions

.env.example

@@ -1,34 +0,0 @@
# ===============================================
# DNSRecon Environment Variables
# ===============================================
# Copy this file to .env and fill in your values.
# --- API Keys ---
# Add your Shodan API key for the Shodan provider to be enabled.
SHODAN_API_KEY=
# --- Flask & Session Settings ---
# A strong, random secret key is crucial for session security.
FLASK_SECRET_KEY=your-very-secret-and-random-key-here
FLASK_HOST=127.0.0.1
FLASK_PORT=5000
FLASK_DEBUG=True
# How long a user's session in the browser lasts (in hours).
FLASK_PERMANENT_SESSION_LIFETIME_HOURS=2
# How long inactive scanner data is stored in Redis (in minutes).
SESSION_TIMEOUT_MINUTES=60
# --- Application Core Settings ---
# The default number of levels to recurse when scanning.
DEFAULT_RECURSION_DEPTH=2
# Default timeout for provider API requests in seconds.
DEFAULT_TIMEOUT=30
# The number of concurrent provider requests to make.
MAX_CONCURRENT_REQUESTS=5
# The number of results from a provider that triggers the "large entity" grouping.
LARGE_ENTITY_THRESHOLD=100
# The number of times to retry a target if a provider fails.
MAX_RETRIES_PER_TARGET=8
# How long cached provider responses are stored (in hours).
CACHE_EXPIRY_HOURS=12

.gitignore

@@ -169,4 +169,4 @@ cython_debug/
#.idea/
dump.rdb
cache/
.vscode

README.md

@@ -4,32 +4,28 @@ DNSRecon is an interactive, passive reconnaissance tool designed to map adversar
**Current Status: Phase 2 Implementation**
- ✅ Core infrastructure and graph engine
- ✅ Multi-provider support (crt.sh, DNS, Shodan)
- ✅ Session-based multi-user support
- ✅ Real-time web interface with interactive visualization
- ✅ Forensic logging system and JSON export
## Features
- **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
- **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
- **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
- **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
- **Session Management**: Supports concurrent user sessions with isolated scanner instances.
## Installation
### Prerequisites
- Python 3.8 or higher
- A modern web browser with JavaScript enabled
- (Recommended) A Linux host for running the application and the optional DNS cache.
### 1\. Clone the Project
@@ -48,50 +44,156 @@ source venv/bin/activate
pip install -r requirements.txt
```
The `requirements.txt` file contains the following dependencies:

- Flask>=2.3.3
- networkx>=3.1
- requests>=2.31.0
- python-dateutil>=2.8.2
- Werkzeug>=2.3.7
- urllib3>=2.0.0
- dnspython>=2.4.2
- gunicorn
- redis
- python-dotenv

## Configuration

DNSRecon is configured using a `.env` file. You can copy the provided example file and edit it to suit your needs:

```bash
cp .env.example .env
```

The following environment variables are available for configuration:
| Variable | Description | Default |
| :--- | :--- | :--- |
| `SHODAN_API_KEY` | Your Shodan API key. | |
| `FLASK_SECRET_KEY`| A strong, random secret key for session security. | `your-very-secret-and-random-key-here` |
| `FLASK_HOST` | The host address for the Flask application. | `127.0.0.1` |
| `FLASK_PORT` | The port for the Flask application. | `5000` |
| `FLASK_DEBUG` | Enable or disable Flask's debug mode. | `True` |
| `FLASK_PERMANENT_SESSION_LIFETIME_HOURS`| How long a user's session in the browser lasts (in hours). | `2` |
| `SESSION_TIMEOUT_MINUTES` | How long inactive scanner data is stored in Redis (in minutes). | `60` |
| `DEFAULT_RECURSION_DEPTH` | The default number of levels to recurse when scanning. | `2` |
| `DEFAULT_TIMEOUT` | Default timeout for provider API requests in seconds. | `30` |
| `MAX_CONCURRENT_REQUESTS`| The number of concurrent provider requests to make. | `5` |
| `LARGE_ENTITY_THRESHOLD`| The number of results from a provider that triggers the "large entity" grouping. | `100` |
| `MAX_RETRIES_PER_TARGET`| The number of times to retry a target if a provider fails. | `8` |
| `CACHE_EXPIRY_HOURS`| How long cached provider responses are stored (in hours). | `12` |
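
These variables are read at startup via `python-dotenv` and `os.getenv`, mirroring the pattern in `config.py` in this changeset; a minimal standalone sketch:

```python
# Minimal sketch: how .env values reach the application (see config.py in this diff)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
timeout = int(os.getenv('DEFAULT_TIMEOUT', '30'))
shodan_key = os.getenv('SHODAN_API_KEY')  # None if unset; the Shodan provider then stays disabled
```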
### 3\. (Optional but Recommended) Set up a Local DNS Caching Resolver

Running a local DNS caching resolver can significantly speed up DNS queries and reduce your network footprint. Here's how to set up `unbound` on a Debian-based Linux distribution (like Ubuntu).

**a. Install Unbound:**

```bash
sudo apt update
sudo apt install unbound -y
```

**b. Configure Unbound:**

Create a new configuration file for DNSRecon:

```bash
sudo nano /etc/unbound/unbound.conf.d/dnsrecon.conf
```

Add the following content to the file:

```
server:
    # Listen on localhost for all users
    interface: 127.0.0.1
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.0/8 allow
    # Enable prefetching of popular items
    prefetch: yes
```
**c. Restart Unbound and set it as the default resolver:**
```bash
sudo systemctl restart unbound
sudo systemctl enable unbound
```
To use this resolver for your system, you may need to update your network settings to point to `127.0.0.1` as your DNS server.
**d. Update DNSProvider to use the local resolver:**
In `dnsrecon/providers/dns_provider.py`, you can explicitly set the resolver's nameservers in the `__init__` method:
```python
# dnsrecon/providers/dns_provider.py
class DNSProvider(BaseProvider):
    def __init__(self, session_config=None):
        """Initialize DNS provider with session-specific configuration."""
        super().__init__(...)
        # Configure DNS resolver
        self.resolver = dns.resolver.Resolver()
        self.resolver.nameservers = ['127.0.0.1']  # Use local caching resolver
        self.resolver.timeout = 5
        self.resolver.lifetime = 10
```
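
To confirm the local resolver is answering before wiring it into DNSRecon, a quick standalone check with `dnspython` (a sketch; `example.com` is just a placeholder):

```python
# Quick sanity check that the local resolver answers
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
resolver.nameservers = ['127.0.0.1']               # the local unbound instance
resolver.timeout = 5
resolver.lifetime = 10

for rdata in resolver.resolve('example.com', 'A'):
    print(rdata.address)
```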
## Usage (Development)
### 1\. Start the Application
```bash
python app.py
```
### 2\. Open Your Browser
Navigate to `http://127.0.0.1:5000`.
### 3\. Basic Reconnaissance Workflow
1. **Enter Target Domain**: Input a domain like `example.com`.
2. **Select Recursion Depth**: Depth 2 is recommended for most investigations.
3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin.
4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered.
5. **Analyze and Export**: Interact with the graph and download the results when the scan is complete.
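
The workflow above can also be scripted against the same REST API the web UI uses; a rough sketch with `requests`, using the endpoints visible in `app.py`. Note that the field name differs between the two revisions in this diff (`target` vs `target_domain`), so adjust accordingly:

```python
import requests

BASE = 'http://127.0.0.1:5000'
s = requests.Session()  # keeps the Flask session cookie, so all calls share one scanner

start = s.post(f'{BASE}/api/scan/start', json={
    'target': 'example.com',   # one revision in this diff expects 'target_domain' instead
    'max_depth': 2,
    'clear_graph': True,
})
print(start.json())

print(s.get(f'{BASE}/api/scan/status').json())  # poll this while the scan runs
graph = s.get(f'{BASE}/api/graph').json()       # current graph data for the session
```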
## Production Deployment
To deploy DNSRecon in a production environment, follow these steps:
### 1\. Use a Production WSGI Server
Do not use the built-in Flask development server for production. Use a WSGI server like **Gunicorn**:
```bash
pip install gunicorn
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
```
### 2\. Configure Environment Variables
Set the following environment variables for a secure and configurable deployment:
```bash
# Generate a strong, random secret key
export SECRET_KEY='your-super-secret-and-random-key'
# Set Flask to production mode
export FLASK_ENV='production'
export FLASK_DEBUG=False
# API keys (optional, but recommended for full functionality)
export SHODAN_API_KEY="your_shodan_key"
```
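
A quick way to generate such a key (a one-off snippet; copy the output into the environment variable):

```python
# Prints a 64-character hex string suitable for the secret key
import secrets
print(secrets.token_hex(32))
```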
### 3\. Use a Reverse Proxy
Set up a reverse proxy like **Nginx** to sit in front of the Gunicorn server. This provides several benefits, including:
- **TLS/SSL Termination**: Securely handle HTTPS traffic.
- **Load Balancing**: Distribute traffic across multiple application instances.
- **Serving Static Files**: Efficiently serve CSS and JavaScript files.
**Example Nginx Configuration:**
```nginx
server {
    listen 80;
    server_name your_domain.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name your_domain.com;

    # SSL cert configuration
    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        alias /path/to/your/dnsrecon/static;
        expires 30d;
    }
}
```
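
Behind a proxy like this, Flask only sees `127.0.0.1` as the client. Nothing in this diff adds it, but if the application should honor the forwarded headers (the session fingerprinting described below uses the client IP), Werkzeug's ProxyFix middleware is the usual approach; a sketch assuming the `app` object from `app.py`:

```python
# app.py (sketch): trust one layer of X-Forwarded-* headers set by Nginx
from werkzeug.middleware.proxy_fix import ProxyFix

app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)
```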
## Autostart with systemd
To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.
@@ -143,18 +245,12 @@ You can check the status of the service at any time with:
sudo systemctl status dnsrecon.service
```
-----
## Security Considerations
- **API Keys**: API keys are stored in memory for the duration of a user session and are not written to disk.
- **Rate Limiting**: DNSRecon includes built-in rate limiting to be respectful to data sources.
- **Local Use**: The application is designed for local or trusted network use and does not have built-in authentication. **Do not expose it directly to the internet without proper security controls.**
## License
This project is licensed under the terms of the **BSD-3-Clause** license.
Copyright (c) 2025 mstoeck3.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This project is licensed under the terms of the license agreement found in the `LICENSE` file.

app.py

@@ -1,8 +1,6 @@
# dnsrecon-reduced/app.py
"""
Flask application entry point for DNSRecon web interface.
Provides REST API endpoints and serves the web interface with user session support.
Enhanced with user session management and task-based completion model.
"""
import json
@@ -11,44 +9,81 @@ from flask import Flask, render_template, request, jsonify, send_file, session
from datetime import datetime, timezone, timedelta
import io
from core.session_manager import session_manager
from core.session_manager import session_manager, UserIdentifier
from config import config
from core.graph_manager import NodeType
from utils.helpers import is_valid_target
app = Flask(__name__)
# Use centralized configuration for Flask settings
app.config['SECRET_KEY'] = config.flask_secret_key
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=config.flask_permanent_session_lifetime_hours)
app.config['SECRET_KEY'] = 'dnsrecon-dev-key-change-in-production'
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=2) # 2 hour session lifetime
def get_user_scanner():
"""
Retrieves the scanner for the current session, or creates a new
session and scanner if one doesn't exist.
Enhanced user scanner retrieval with user identification and session consolidation.
Implements single session per user with seamless consolidation.
"""
# Get current Flask session info for debugging
current_flask_session_id = session.get('dnsrecon_session_id')
print("=== ENHANCED GET_USER_SCANNER ===")
# Try to get existing session
if current_flask_session_id:
existing_scanner = session_manager.get_session(current_flask_session_id)
if existing_scanner:
return current_flask_session_id, existing_scanner
try:
# Extract user identification from request
client_ip, user_agent = UserIdentifier.extract_request_info(request)
user_fingerprint = UserIdentifier.generate_user_fingerprint(client_ip, user_agent)
# Create new session if none exists
print("Creating new session as none was found...")
new_session_id = session_manager.create_session()
new_scanner = session_manager.get_session(new_session_id)
print(f"User fingerprint: {user_fingerprint}")
print(f"Client IP: {client_ip}")
print(f"User Agent: {user_agent[:50]}...")
if not new_scanner:
raise Exception("Failed to create new scanner session")
# Get current Flask session info for debugging
current_flask_session_id = session.get('dnsrecon_session_id')
print(f"Flask session ID: {current_flask_session_id}")
# Store in Flask session
session['dnsrecon_session_id'] = new_session_id
session.permanent = True
# Try to get existing session first
if current_flask_session_id:
existing_scanner = session_manager.get_session(current_flask_session_id)
if existing_scanner:
# Verify session belongs to current user
session_info = session_manager.get_session_info(current_flask_session_id)
if session_info.get('user_fingerprint') == user_fingerprint:
print(f"Found valid existing session {current_flask_session_id} for user {user_fingerprint}")
existing_scanner.session_id = current_flask_session_id
return current_flask_session_id, existing_scanner
else:
print(f"Session {current_flask_session_id} belongs to different user, will create new session")
else:
print(f"Session {current_flask_session_id} not found in Redis, will create new session")
# Create or replace user session (this handles consolidation automatically)
new_session_id = session_manager.create_or_replace_user_session(client_ip, user_agent)
new_scanner = session_manager.get_session(new_session_id)
if not new_scanner:
print(f"ERROR: Failed to retrieve newly created session {new_session_id}")
raise Exception("Failed to create new scanner session")
# Store in Flask session for browser persistence
session['dnsrecon_session_id'] = new_session_id
session.permanent = True
# Ensure session ID is set on scanner
new_scanner.session_id = new_session_id
# Get session info for user feedback
session_info = session_manager.get_session_info(new_session_id)
print(f"Session created/consolidated successfully")
print(f" - Session ID: {new_session_id}")
print(f" - User: {user_fingerprint}")
print(f" - Scanner status: {new_scanner.status}")
print(f" - Session age: {session_info.get('session_age_minutes', 0)} minutes")
return new_session_id, new_scanner
except Exception as e:
print(f"ERROR: Exception in get_user_scanner: {e}")
traceback.print_exc()
raise
return new_session_id, new_scanner
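
`UserIdentifier` is imported from `core/session_manager.py`, which is not shown in this diff. Purely as an illustration of the idea (not the project's actual implementation), a per-user fingerprint derived from client IP and User-Agent might look like:

```python
import hashlib

def generate_user_fingerprint(client_ip: str, user_agent: str) -> str:
    """Illustrative only: reduce IP + User-Agent to a short, stable identifier."""
    digest = hashlib.sha256(f"{client_ip}|{user_agent}".encode("utf-8")).hexdigest()
    return digest[:16]
```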
@app.route('/')
def index():
@@ -59,71 +94,110 @@ def index():
@app.route('/api/scan/start', methods=['POST'])
def start_scan():
"""
Start a new reconnaissance scan. Creates a new isolated scanner if
clear_graph is true, otherwise adds to the existing one.
Start a new reconnaissance scan with enhanced user session management.
"""
print("=== API: /api/scan/start called ===")
try:
print("Getting JSON data from request...")
data = request.get_json()
if not data or 'target' not in data:
return jsonify({'success': False, 'error': 'Missing target in request'}), 400
print(f"Request data: {data}")
target = data['target'].strip()
if not data or 'target_domain' not in data:
print("ERROR: Missing target_domain in request")
return jsonify({
'success': False,
'error': 'Missing target_domain in request'
}), 400
target_domain = data['target_domain'].strip()
max_depth = data.get('max_depth', config.default_recursion_depth)
clear_graph = data.get('clear_graph', True)
force_rescan_target = data.get('force_rescan_target', None) # **FIX**: Get the new parameter
print(f"Parsed - target: '{target}', max_depth: {max_depth}, clear_graph: {clear_graph}, force_rescan: {force_rescan_target}")
print(f"Parsed - target_domain: '{target_domain}', max_depth: {max_depth}, clear_graph: {clear_graph}")
# Validation
if not target:
return jsonify({'success': False, 'error': 'Target cannot be empty'}), 400
if not is_valid_target(target):
return jsonify({'success': False, 'error': 'Invalid target format. Please enter a valid domain or IP address.'}), 400
if not isinstance(max_depth, int) or not 1 <= max_depth <= 5:
return jsonify({'success': False, 'error': 'Max depth must be an integer between 1 and 5'}), 400
if not target_domain:
print("ERROR: Target domain cannot be empty")
return jsonify({
'success': False,
'error': 'Target domain cannot be empty'
}), 400
user_session_id, scanner = None, None
if not isinstance(max_depth, int) or max_depth < 1 or max_depth > 5:
print(f"ERROR: Invalid max_depth: {max_depth}")
return jsonify({
'success': False,
'error': 'Max depth must be an integer between 1 and 5'
}), 400
if clear_graph:
print("Clear graph requested: Creating a new, isolated scanner session.")
old_session_id = session.get('dnsrecon_session_id')
if old_session_id:
session_manager.terminate_session(old_session_id)
print("Validation passed, getting user scanner...")
user_session_id = session_manager.create_session()
session['dnsrecon_session_id'] = user_session_id
session.permanent = True
scanner = session_manager.get_session(user_session_id)
else:
print("Adding to existing graph: Reusing the current scanner session.")
user_session_id, scanner = get_user_scanner()
# Get user-specific scanner with enhanced session management
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'Failed to get or create a scanner instance.'}), 500
# Ensure session ID is properly set
if not scanner.session_id:
scanner.session_id = user_session_id
print(f"Using scanner {id(scanner)} in session {user_session_id}")
print(f"Using session: {user_session_id}")
print(f"Scanner object ID: {id(scanner)}")
success = scanner.start_scan(target, max_depth, clear_graph=clear_graph, force_rescan_target=force_rescan_target) # **FIX**: Pass the new parameter
# Start scan
print(f"Calling start_scan on scanner {id(scanner)}...")
success = scanner.start_scan(target_domain, max_depth, clear_graph=clear_graph)
# Immediately update session state regardless of success
session_manager.update_session_scanner(user_session_id, scanner)
if success:
scan_session_id = scanner.logger.session_id
print(f"Scan started successfully with scan session ID: {scan_session_id}")
# Get session info for user feedback
session_info = session_manager.get_session_info(user_session_id)
return jsonify({
'success': True,
'message': 'Scan started successfully',
'scan_id': scanner.logger.session_id,
'scan_id': scan_session_id,
'user_session_id': user_session_id,
'scanner_status': scanner.status,
'session_info': {
'user_fingerprint': session_info.get('user_fingerprint', 'unknown'),
'session_age_minutes': session_info.get('session_age_minutes', 0),
'consolidated': session_info.get('session_age_minutes', 0) > 0
},
'debug_info': {
'scanner_object_id': id(scanner),
'scanner_status': scanner.status
}
})
else:
print("ERROR: Scanner returned False")
# Provide more detailed error information
error_details = {
'scanner_status': scanner.status,
'scanner_object_id': id(scanner),
'session_id': user_session_id,
'providers_count': len(scanner.providers) if hasattr(scanner, 'providers') else 0
}
return jsonify({
'success': False,
'error': f'Failed to start scan (scanner status: {scanner.status})',
'debug_info': error_details
}), 409
except Exception as e:
print(f"ERROR: Exception in start_scan endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}'
}), 500
@app.route('/api/scan/stop', methods=['POST'])
def stop_scan():
@@ -178,7 +252,7 @@ def stop_scan():
@app.route('/api/scan/status', methods=['GET'])
def get_scan_status():
"""Get current scan status with error handling."""
"""Get current scan status with enhanced session information."""
try:
# Get user-specific scanner
user_session_id, scanner = get_user_scanner()
@@ -209,6 +283,15 @@ def get_scan_status():
status = scanner.get_scan_status()
status['user_session_id'] = user_session_id
# Add enhanced session information
session_info = session_manager.get_session_info(user_session_id)
status['session_info'] = {
'user_fingerprint': session_info.get('user_fingerprint', 'unknown'),
'session_age_minutes': session_info.get('session_age_minutes', 0),
'client_ip': session_info.get('client_ip', 'unknown'),
'last_activity': session_info.get('last_activity')
}
# Additional debug info
status['debug_info'] = {
'scanner_object_id': id(scanner),
@@ -237,7 +320,6 @@ def get_scan_status():
}), 500
@app.route('/api/graph', methods=['GET'])
def get_graph_data():
"""Get current graph data with error handling."""
@@ -282,109 +364,6 @@ def get_graph_data():
}
}), 500
@app.route('/api/graph/large-entity/extract', methods=['POST'])
def extract_from_large_entity():
"""Extract a node from a large entity, making it a standalone node."""
try:
data = request.get_json()
large_entity_id = data.get('large_entity_id')
node_id = data.get('node_id')
if not large_entity_id or not node_id:
return jsonify({'success': False, 'error': 'Missing required parameters'}), 400
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
success = scanner.extract_node_from_large_entity(large_entity_id, node_id)
if success:
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({'success': True, 'message': f'Node {node_id} extracted successfully.'})
else:
return jsonify({'success': False, 'error': f'Failed to extract node {node_id}.'}), 500
except Exception as e:
print(f"ERROR: Exception in extract_from_large_entity endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/graph/node/<node_id>', methods=['DELETE'])
def delete_graph_node(node_id):
"""Delete a node from the graph for the current user session."""
try:
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
success = scanner.graph.remove_node(node_id)
if success:
# Persist the change
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({'success': True, 'message': f'Node {node_id} deleted successfully.'})
else:
return jsonify({'success': False, 'error': f'Node {node_id} not found in graph.'}), 404
except Exception as e:
print(f"ERROR: Exception in delete_graph_node endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/graph/revert', methods=['POST'])
def revert_graph_action():
"""Reverts a graph action, such as re-adding a deleted node."""
try:
data = request.get_json()
if not data or 'type' not in data or 'data' not in data:
return jsonify({'success': False, 'error': 'Invalid revert request format'}), 400
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
action_type = data['type']
action_data = data['data']
if action_type == 'delete':
# Re-add the node
node_to_add = action_data.get('node')
if node_to_add:
scanner.graph.add_node(
node_id=node_to_add['id'],
node_type=NodeType(node_to_add['type']),
attributes=node_to_add.get('attributes'),
description=node_to_add.get('description'),
metadata=node_to_add.get('metadata')
)
# Re-add the edges
edges_to_add = action_data.get('edges', [])
for edge in edges_to_add:
# Add edge only if both nodes exist to prevent errors
if scanner.graph.graph.has_node(edge['from']) and scanner.graph.graph.has_node(edge['to']):
scanner.graph.add_edge(
source_id=edge['from'],
target_id=edge['to'],
relationship_type=edge['metadata']['relationship_type'],
confidence_score=edge['metadata']['confidence_score'],
source_provider=edge['metadata']['source_provider'],
raw_data=edge.get('raw_data', {})
)
# Persist the change
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({'success': True, 'message': 'Delete action reverted successfully.'})
return jsonify({'success': False, 'error': f'Unknown revert action type: {action_type}'}), 400
except Exception as e:
print(f"ERROR: Exception in revert_graph_action endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/export', methods=['GET'])
def export_results():
@@ -396,17 +375,22 @@ def export_results():
# Get complete results
results = scanner.export_results()
# Add session information to export
# Add enhanced session information to export
session_info = session_manager.get_session_info(user_session_id)
results['export_metadata'] = {
'user_session_id': user_session_id,
'user_fingerprint': session_info.get('user_fingerprint', 'unknown'),
'client_ip': session_info.get('client_ip', 'unknown'),
'session_age_minutes': session_info.get('session_age_minutes', 0),
'export_timestamp': datetime.now(timezone.utc).isoformat(),
'export_type': 'user_session_results'
}
# Create filename with timestamp
# Create filename with user fingerprint
timestamp = datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')
target = scanner.current_target or 'unknown'
filename = f"dnsrecon_{target}_{timestamp}_{user_session_id[:8]}.json"
user_fp = session_info.get('user_fingerprint', 'unknown')[:8]
filename = f"dnsrecon_{target}_{timestamp}_{user_fp}.json"
# Create in-memory file
json_data = json.dumps(results, indent=2, ensure_ascii=False)
@@ -431,19 +415,12 @@ def export_results():
@app.route('/api/providers', methods=['GET'])
def get_providers():
"""Get information about available providers for the user session."""
print("=== API: /api/providers called ===")
try:
# Get user-specific scanner
user_session_id, scanner = get_user_scanner()
if scanner:
# Updated debug print to be consistent with the new progress bar logic
completed_tasks = scanner.indicators_completed
total_tasks = scanner.total_tasks_ever_enqueued
print(f"DEBUG: Task Progress - Completed: {completed_tasks}, Total Enqueued: {total_tasks}")
else:
print("DEBUG: No active scanner session found.")
provider_info = scanner.get_provider_info()
return jsonify({
@@ -518,6 +495,122 @@ def set_api_keys():
'error': f'Internal server error: {str(e)}'
}), 500
@app.route('/api/session/info', methods=['GET'])
def get_session_info():
"""Get enhanced information about the current user session."""
try:
user_session_id, scanner = get_user_scanner()
session_info = session_manager.get_session_info(user_session_id)
return jsonify({
'success': True,
'session_info': session_info
})
except Exception as e:
print(f"ERROR: Exception in get_session_info endpoint: {e}")
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}'
}), 500
@app.route('/api/session/terminate', methods=['POST'])
def terminate_session():
"""Terminate the current user session."""
try:
user_session_id = session.get('dnsrecon_session_id')
if user_session_id:
success = session_manager.terminate_session(user_session_id)
# Clear Flask session
session.pop('dnsrecon_session_id', None)
return jsonify({
'success': success,
'message': 'Session terminated' if success else 'Session not found'
})
else:
return jsonify({
'success': False,
'error': 'No active session to terminate'
}), 400
except Exception as e:
print(f"ERROR: Exception in terminate_session endpoint: {e}")
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}'
}), 500
@app.route('/api/admin/sessions', methods=['GET'])
def list_sessions():
"""Admin endpoint to list all active sessions with enhanced information."""
try:
sessions = session_manager.list_active_sessions()
stats = session_manager.get_statistics()
return jsonify({
'success': True,
'sessions': sessions,
'statistics': stats
})
except Exception as e:
print(f"ERROR: Exception in list_sessions endpoint: {e}")
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}'
}), 500
@app.route('/api/health', methods=['GET'])
def health_check():
"""Health check endpoint with enhanced session statistics."""
try:
# Get session stats
session_stats = session_manager.get_statistics()
return jsonify({
'success': True,
'status': 'healthy',
'timestamp': datetime.now(timezone.utc).isoformat(),
'version': '2.0.0-enhanced',
'phase': 'enhanced_architecture',
'features': {
'multi_provider': True,
'concurrent_processing': True,
'real_time_updates': True,
'api_key_management': True,
'visualization': True,
'retry_logic': True,
'user_sessions': True,
'session_isolation': True,
'global_provider_caching': True,
'single_session_per_user': True,
'session_consolidation': True,
'task_completion_model': True
},
'session_statistics': session_stats,
'cache_info': {
'global_provider_cache': True,
'cache_location': '.cache/<provider_name>/',
'cache_expiry_hours': 12
}
})
except Exception as e:
print(f"ERROR: Exception in health_check endpoint: {e}")
return jsonify({
'success': False,
'error': f'Health check failed: {str(e)}'
}), 500
@app.errorhandler(404)
def not_found(error):
"""Handle 404 errors."""
@@ -539,7 +632,7 @@ def internal_error(error):
if __name__ == '__main__':
print("Starting DNSRecon Flask application with user session support...")
print("Starting DNSRecon Flask application with enhanced user session support...")
# Load configuration from environment
config.load_from_env()

config.py

@@ -1,5 +1,3 @@
# dnsrecon-reduced/config.py
"""
Configuration management for DNSRecon tool.
Handles API key storage, rate limiting, and default settings.
@@ -7,97 +5,110 @@ Handles API key storage, rate limiting, and default settings.
import os
from typing import Dict, Optional
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
class Config:
"""Configuration manager for DNSRecon application."""
def __init__(self):
"""Initialize configuration with default values."""
self.api_keys: Dict[str, Optional[str]] = {}
self.api_keys: Dict[str, Optional[str]] = {
'shodan': None
}
# --- General Settings ---
# Default settings
self.default_recursion_depth = 2
self.default_timeout = 30
self.default_timeout = 10
self.max_concurrent_requests = 5
self.large_entity_threshold = 100
self.max_retries_per_target = 8
self.cache_expiry_hours = 12
# --- Provider Caching Settings ---
self.cache_timeout_hours = 6 # Provider-specific cache timeout
# --- Rate Limiting (requests per minute) ---
# Rate limiting settings (requests per minute)
self.rate_limits = {
'crtsh': 5,
'shodan': 60,
'dns': 100
'crtsh': 60, # Free service, be respectful
'shodan': 60, # API dependent
'dns': 100 # Local DNS queries
}
# --- Provider Settings ---
# Provider settings
self.enabled_providers = {
'crtsh': True,
'dns': True,
'shodan': False
'crtsh': True, # Always enabled (free)
'dns': True, # Always enabled (free)
'shodan': False # Requires API key
}
# --- Logging ---
# Logging configuration
self.log_level = 'INFO'
self.log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
# --- Flask & Session Settings ---
# Flask configuration
self.flask_host = '127.0.0.1'
self.flask_port = 5000
self.flask_debug = True
self.flask_secret_key = 'default-secret-key-change-me'
self.flask_permanent_session_lifetime_hours = 2
self.session_timeout_minutes = 60
# Load environment variables to override defaults
self.load_from_env()
def set_api_key(self, provider: str, api_key: str) -> bool:
"""
Set API key for a provider.
def load_from_env(self):
"""Load configuration from environment variables."""
self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))
Args:
provider: Provider name (shodan, etc)
api_key: API key string
# Override settings from environment
self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', self.default_recursion_depth))
self.default_timeout = int(os.getenv('DEFAULT_TIMEOUT', self.default_timeout))
self.max_concurrent_requests = int(os.getenv('MAX_CONCURRENT_REQUESTS', self.max_concurrent_requests))
self.large_entity_threshold = int(os.getenv('LARGE_ENTITY_THRESHOLD', self.large_entity_threshold))
self.max_retries_per_target = int(os.getenv('MAX_RETRIES_PER_TARGET', self.max_retries_per_target))
self.cache_expiry_hours = int(os.getenv('CACHE_EXPIRY_HOURS', self.cache_expiry_hours))
self.cache_timeout_hours = int(os.getenv('CACHE_TIMEOUT_HOURS', self.cache_timeout_hours))
# Override Flask and session settings
self.flask_host = os.getenv('FLASK_HOST', self.flask_host)
self.flask_port = int(os.getenv('FLASK_PORT', self.flask_port))
self.flask_debug = os.getenv('FLASK_DEBUG', str(self.flask_debug)).lower() == 'true'
self.flask_secret_key = os.getenv('FLASK_SECRET_KEY', self.flask_secret_key)
self.flask_permanent_session_lifetime_hours = int(os.getenv('FLASK_PERMANENT_SESSION_LIFETIME_HOURS', self.flask_permanent_session_lifetime_hours))
self.session_timeout_minutes = int(os.getenv('SESSION_TIMEOUT_MINUTES', self.session_timeout_minutes))
def set_api_key(self, provider: str, api_key: Optional[str]) -> bool:
"""Set API key for a provider."""
self.api_keys[provider] = api_key
if api_key:
self.enabled_providers[provider] = True
return True
Returns:
bool: True if key was set successfully
"""
if provider in self.api_keys:
self.api_keys[provider] = api_key
self.enabled_providers[provider] = True if api_key else False
return True
return False
def get_api_key(self, provider: str) -> Optional[str]:
"""Get API key for a provider."""
"""
Get API key for a provider.
Args:
provider: Provider name
Returns:
API key or None if not set
"""
return self.api_keys.get(provider)
def is_provider_enabled(self, provider: str) -> bool:
"""Check if a provider is enabled."""
"""
Check if a provider is enabled.
Args:
provider: Provider name
Returns:
bool: True if provider is enabled
"""
return self.enabled_providers.get(provider, False)
def get_rate_limit(self, provider: str) -> int:
"""Get rate limit for a provider."""
"""
Get rate limit for a provider.
Args:
provider: Provider name
Returns:
Rate limit in requests per minute
"""
return self.rate_limits.get(provider, 60)
def load_from_env(self):
"""Load configuration from environment variables."""
if os.getenv('SHODAN_API_KEY'):
self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))
# Override default settings from environment
self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
self.flask_debug = os.getenv('FLASK_DEBUG', 'True').lower() == 'true'
self.default_timeout = 30
self.max_concurrent_requests = 5
# Global configuration instance
config = Config()
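
For reference, a short sketch of how the global `config` instance is typically used (the key value is a placeholder):

```python
from config import config

config.set_api_key('shodan', 'YOUR_SHODAN_API_KEY')  # placeholder key; enables the provider
print(config.is_provider_enabled('shodan'))
print(config.get_rate_limit('crtsh'))  # requests per minute
print(config.get_api_key('shodan'))
```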

core/__init__.py

@@ -8,6 +8,7 @@ from .scanner import Scanner, ScanStatus
from .logger import ForensicLogger, get_forensic_logger, new_session
from .session_manager import session_manager
from .session_config import SessionConfig, create_session_config
from .task_manager import TaskManager, TaskType, ReconTask
__all__ = [
'GraphManager',
@@ -19,7 +20,10 @@ __all__ = [
'new_session',
'session_manager',
'SessionConfig',
'create_session_config'
'create_session_config',
'TaskManager',
'TaskType',
'ReconTask'
]
__version__ = "1.0.0-phase2"

core/graph_manager.py

@@ -1,10 +1,6 @@
# dnsrecon-reduced/core/graph_manager.py
"""
Graph data model for DNSRecon using NetworkX.
Manages in-memory graph storage with confidence scoring and forensic metadata.
Now fully compatible with the unified ProviderResult data model.
UPDATED: Fixed certificate styling and correlation edge labeling.
"""
import re
from datetime import datetime, timezone
@@ -30,7 +26,6 @@ class GraphManager:
"""
Thread-safe graph manager for DNSRecon infrastructure mapping.
Uses NetworkX for in-memory graph storage with confidence scoring.
Compatible with unified ProviderResult data model.
"""
def __init__(self):
@@ -41,7 +36,6 @@ class GraphManager:
self.correlation_index = {}
# Compile regex for date filtering for efficiency
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
self.EXCLUDED_KEYS = ['confidence', 'provider', 'timestamp', 'type','crtsh_cert_validity_period_days']
def __getstate__(self):
"""Prepare GraphManager for pickling, excluding compiled regex."""
@@ -56,115 +50,173 @@ class GraphManager:
self.__dict__.update(state)
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
def process_correlations_for_node(self, node_id: str):
"""
UPDATED: Process correlations for a given node with enhanced tracking.
Now properly tracks which attribute/provider created each correlation.
"""
if not self.graph.has_node(node_id):
def _update_correlation_index(self, node_id: str, data: Any, path: List[str] = None):
"""Recursively traverse metadata and add hashable values to the index."""
if path is None:
path = []
if isinstance(data, dict):
for key, value in data.items():
self._update_correlation_index(node_id, value, path + [key])
elif isinstance(data, list):
for i, item in enumerate(data):
self._update_correlation_index(node_id, item, path + [f"[{i}]"])
else:
self._add_to_correlation_index(node_id, data, ".".join(path))
def _add_to_correlation_index(self, node_id: str, value: Any, path_str: str):
"""Add a hashable value to the correlation index, filtering out noise."""
if not isinstance(value, (str, int, float, bool)) or value is None:
return
node_attributes = self.graph.nodes[node_id].get('attributes', [])
# Ignore certain paths that contain noisy, non-unique identifiers
if any(keyword in path_str.lower() for keyword in ['count', 'total', 'timestamp', 'date']):
return
# Process each attribute for potential correlations
for attr in node_attributes:
attr_name = attr.get('name')
attr_value = attr.get('value')
attr_provider = attr.get('provider', 'unknown')
# Filter out common low-entropy values and date-like strings
if isinstance(value, str):
# FIXED: Prevent correlation on date/time strings.
if self.date_pattern.match(value):
return
if len(value) < 4 or value.lower() in ['true', 'false', 'unknown', 'none', 'crt.sh']:
return
elif isinstance(value, int) and abs(value) < 9999:
return # Ignore small integers
elif isinstance(value, bool):
return # Ignore boolean values
# Skip excluded attributes and invalid values
if attr_name in self.EXCLUDED_KEYS or not isinstance(attr_value, (str, int, float, bool)) or attr_value is None:
continue
# Add the valuable correlation data to the index
if value not in self.correlation_index:
self.correlation_index[value] = {}
if node_id not in self.correlation_index[value]:
self.correlation_index[value][node_id] = []
if path_str not in self.correlation_index[value][node_id]:
self.correlation_index[value][node_id].append(path_str)
if isinstance(attr_value, bool):
continue
def _check_for_correlations(self, new_node_id: str, data: Any, path: List[str] = None) -> List[Dict]:
"""Recursively traverse metadata to find correlations with existing data."""
if path is None:
path = []
if isinstance(attr_value, str) and (len(attr_value) < 4 or self.date_pattern.match(attr_value)):
continue
all_correlations = []
if isinstance(data, dict):
for key, value in data.items():
if key == 'source': # Avoid correlating on the provider name
continue
all_correlations.extend(self._check_for_correlations(new_node_id, value, path + [key]))
elif isinstance(data, list):
for i, item in enumerate(data):
all_correlations.extend(self._check_for_correlations(new_node_id, item, path + [f"[{i}]"]))
else:
value = data
if value in self.correlation_index:
existing_nodes_with_paths = self.correlation_index[value]
unique_nodes = set(existing_nodes_with_paths.keys())
unique_nodes.add(new_node_id)
# Initialize correlation tracking for this value
if attr_value not in self.correlation_index:
self.correlation_index[attr_value] = {
'nodes': set(),
'sources': [] # Track which provider/attribute combinations contributed
}
if len(unique_nodes) < 2:
return all_correlations # Correlation must involve at least two distinct nodes
# Add this node and source information
self.correlation_index[attr_value]['nodes'].add(node_id)
new_source = {'node_id': new_node_id, 'path': ".".join(path)}
all_sources = [new_source]
for node_id, paths in existing_nodes_with_paths.items():
for p_str in paths:
all_sources.append({'node_id': node_id, 'path': p_str})
# Track the source of this correlation value
source_info = {
'node_id': node_id,
'provider': attr_provider,
'attribute': attr_name,
'path': f"{attr_provider}_{attr_name}"
}
all_correlations.append({
'value': value,
'sources': all_sources,
'nodes': list(unique_nodes)
})
return all_correlations
# Add source if not already present (avoid duplicates)
existing_sources = [s for s in self.correlation_index[attr_value]['sources']
if s['node_id'] == node_id and s['path'] == source_info['path']]
if not existing_sources:
self.correlation_index[attr_value]['sources'].append(source_info)
def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[Dict[str, Any]] = None,
description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
"""Add a node to the graph, update attributes, and process correlations."""
is_new_node = not self.graph.has_node(node_id)
if is_new_node:
self.graph.add_node(node_id, type=node_type.value,
added_timestamp=datetime.now(timezone.utc).isoformat(),
attributes=attributes or {},
description=description,
metadata=metadata or {})
else:
# Safely merge new attributes into existing attributes
if attributes:
existing_attributes = self.graph.nodes[node_id].get('attributes', {})
existing_attributes.update(attributes)
self.graph.nodes[node_id]['attributes'] = existing_attributes
if description:
self.graph.nodes[node_id]['description'] = description
if metadata:
existing_metadata = self.graph.nodes[node_id].get('metadata', {})
existing_metadata.update(metadata)
self.graph.nodes[node_id]['metadata'] = existing_metadata
# Create correlation node if we have multiple nodes with this value
if len(self.correlation_index[attr_value]['nodes']) > 1:
self._create_enhanced_correlation_node_and_edges(attr_value, self.correlation_index[attr_value])
if attributes and node_type != NodeType.CORRELATION_OBJECT:
correlations = self._check_for_correlations(node_id, attributes)
for corr in correlations:
value = corr['value']
def _create_enhanced_correlation_node_and_edges(self, value, correlation_data):
# STEP 1: Substring check against all existing nodes
if self._correlation_value_matches_existing_node(value):
# Skip creating correlation node - would be redundant
continue
# STEP 2: Filter out node pairs that already have direct edges
eligible_nodes = self._filter_nodes_without_direct_edges(set(corr['nodes']))
if len(eligible_nodes) < 2:
# Need at least 2 nodes to create a correlation
continue
# STEP 3: Check for existing correlation node with same connection pattern
correlation_nodes_with_pattern = self._find_correlation_nodes_with_same_pattern(eligible_nodes)
if correlation_nodes_with_pattern:
# STEP 4: Merge with existing correlation node
target_correlation_node = correlation_nodes_with_pattern[0]
self._merge_correlation_values(target_correlation_node, value, corr)
else:
# STEP 5: Create new correlation node for eligible nodes only
correlation_node_id = f"corr_{abs(hash(str(sorted(eligible_nodes))))}"
self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT,
metadata={'values': [value], 'sources': corr['sources'],
'correlated_nodes': list(eligible_nodes)})
# Create edges from eligible nodes to this correlation node
for c_node_id in eligible_nodes:
if self.graph.has_node(c_node_id):
attribute = corr['sources'][0]['path'].split('.')[-1]
relationship_type = f"c_{attribute}"
self.add_edge(c_node_id, correlation_node_id, relationship_type, confidence_score=0.9)
self._update_correlation_index(node_id, attributes)
self.last_modified = datetime.now(timezone.utc).isoformat()
return is_new_node
def _filter_nodes_without_direct_edges(self, node_set: set) -> set:
"""
UPDATED: Create correlation node and edges with detailed provider tracking.
Filter out nodes that already have direct edges between them.
Returns set of nodes that should be included in correlation.
"""
correlation_node_id = f"corr_{hash(str(value)) & 0x7FFFFFFF}"
nodes = correlation_data['nodes']
sources = correlation_data['sources']
nodes_list = list(node_set)
eligible_nodes = set(node_set) # Start with all nodes
# Create or update correlation node
if not self.graph.has_node(correlation_node_id):
# Determine the most common provider/attribute combination
provider_counts = {}
for source in sources:
key = f"{source['provider']}_{source['attribute']}"
provider_counts[key] = provider_counts.get(key, 0) + 1
# Check all pairs of nodes
for i in range(len(nodes_list)):
for j in range(i + 1, len(nodes_list)):
node_a = nodes_list[i]
node_b = nodes_list[j]
# Use the most common provider/attribute as the primary label
primary_source = max(provider_counts.items(), key=lambda x: x[1])[0] if provider_counts else "unknown_correlation"
metadata = {
'value': value,
'correlated_nodes': list(nodes),
'sources': sources,
'primary_source': primary_source,
'correlation_count': len(nodes)
}
self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT, metadata=metadata)
print(f"Created correlation node {correlation_node_id} for value '{value}' with {len(nodes)} nodes")
# Create edges from each node to the correlation node
for source in sources:
node_id = source['node_id']
provider = source['provider']
attribute = source['attribute']
if self.graph.has_node(node_id) and not self.graph.has_edge(node_id, correlation_node_id):
# Format relationship label as "corr_provider_attribute"
relationship_label = f"corr_{provider}_{attribute}"
self.add_edge(
source_id=node_id,
target_id=correlation_node_id,
relationship_type=relationship_label,
confidence_score=0.9,
source_provider=provider,
raw_data={
'correlation_value': value,
'original_attribute': attribute,
'correlation_type': 'attribute_matching'
}
)
print(f"Added correlation edge: {node_id} -> {correlation_node_id} ({relationship_label})")
# Check if direct edge exists in either direction
if self._has_direct_edge_bidirectional(node_a, node_b):
# Remove both nodes from eligible set since they're already connected
eligible_nodes.discard(node_a)
eligible_nodes.discard(node_b)
return eligible_nodes
def _has_direct_edge_bidirectional(self, node_a: str, node_b: str) -> bool:
"""
@@ -238,7 +290,7 @@ class GraphManager:
# Create set of unique sources based on (node_id, path) tuples
source_set = set()
for source in existing_sources + new_sources:
source_tuple = (source['node_id'], source.get('path', ''))
source_tuple = (source['node_id'], source['path'])
source_set.add(source_tuple)
# Convert back to list of dictionaries
@@ -261,47 +313,6 @@ class GraphManager:
f"across {node_count} nodes"
)
def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[List[Dict[str, Any]]] = None,
description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
"""
Add a node to the graph, update attributes, and process correlations.
Now compatible with unified data model - attributes are dictionaries from converted StandardAttribute objects.
"""
is_new_node = not self.graph.has_node(node_id)
if is_new_node:
self.graph.add_node(node_id, type=node_type.value,
added_timestamp=datetime.now(timezone.utc).isoformat(),
attributes=attributes or [], # Store as a list from the start
description=description,
metadata=metadata or {})
else:
# Safely merge new attributes into the existing list of attributes
if attributes:
existing_attributes = self.graph.nodes[node_id].get('attributes', [])
# Handle cases where old data might still be in dictionary format
if not isinstance(existing_attributes, list):
existing_attributes = []
# Create a set of existing attribute names for efficient duplicate checking
existing_attr_names = {attr['name'] for attr in existing_attributes}
for new_attr in attributes:
if new_attr['name'] not in existing_attr_names:
existing_attributes.append(new_attr)
existing_attr_names.add(new_attr['name'])
self.graph.nodes[node_id]['attributes'] = existing_attributes
if description:
self.graph.nodes[node_id]['description'] = description
if metadata:
existing_metadata = self.graph.nodes[node_id].get('metadata', {})
existing_metadata.update(metadata)
self.graph.nodes[node_id]['metadata'] = existing_metadata
self.last_modified = datetime.now(timezone.utc).isoformat()
return is_new_node
def add_edge(self, source_id: str, target_id: str, relationship_type: str,
confidence_score: float = 0.5, source_provider: str = "unknown",
raw_data: Optional[Dict[str, Any]] = None) -> bool:
@@ -334,63 +345,6 @@ class GraphManager:
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
def extract_node_from_large_entity(self, large_entity_id: str, node_id_to_extract: str) -> bool:
"""
Removes a node from a large entity's internal lists and updates its count.
This prepares the large entity for the node's promotion to a regular node.
"""
if not self.graph.has_node(large_entity_id):
return False
node_data = self.graph.nodes[large_entity_id]
attributes = node_data.get('attributes', {})
# Remove from the list of member nodes
if 'nodes' in attributes and node_id_to_extract in attributes['nodes']:
attributes['nodes'].remove(node_id_to_extract)
# Update the count
attributes['count'] = len(attributes['nodes'])
else:
# This can happen if the node was already extracted, which is not an error.
print(f"Warning: Node {node_id_to_extract} not found in the 'nodes' list of {large_entity_id}.")
return True # Proceed as if successful
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
def remove_node(self, node_id: str) -> bool:
"""Remove a node and its connected edges from the graph."""
if not self.graph.has_node(node_id):
return False
# Remove node from the graph (NetworkX handles removing connected edges)
self.graph.remove_node(node_id)
# Clean up the correlation index
keys_to_delete = []
for value, data in self.correlation_index.items():
if isinstance(data, dict) and 'nodes' in data:
# Updated correlation structure
if node_id in data['nodes']:
data['nodes'].discard(node_id)
# Remove sources for this node
data['sources'] = [s for s in data['sources'] if s['node_id'] != node_id]
if not data['nodes']: # If no other nodes are associated, remove it
keys_to_delete.append(value)
else:
# Legacy correlation structure (fallback)
if isinstance(data, set) and node_id in data:
data.discard(node_id)
if not data:
keys_to_delete.append(value)
for key in keys_to_delete:
if key in self.correlation_index:
del self.correlation_index[key]
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
def get_node_count(self) -> int:
"""Get total number of nodes in the graph."""
return self.graph.number_of_nodes()
@@ -415,58 +369,19 @@ class GraphManager:
if d.get('confidence_score', 0) >= min_confidence]
def get_graph_data(self) -> Dict[str, Any]:
"""
Export graph data formatted for frontend visualization.
UPDATED: Fixed certificate validity styling logic for unified data model.
"""
"""Export graph data formatted for frontend visualization."""
nodes = []
for node_id, attrs in self.graph.nodes(data=True):
node_data = {'id': node_id, 'label': node_id, 'type': attrs.get('type', 'unknown'),
'attributes': attrs.get('attributes', []), # Ensure attributes is a list
'attributes': attrs.get('attributes', {}),
'description': attrs.get('description', ''),
'metadata': attrs.get('metadata', {}),
'added_timestamp': attrs.get('added_timestamp')}
# UPDATED: Fixed certificate validity styling logic
# Customize node appearance based on type and attributes
node_type = node_data['type']
attributes_list = node_data['attributes']
if node_type == 'domain' and isinstance(attributes_list, list):
# Check for certificate-related attributes
has_certificates = False
has_valid_certificates = False
has_expired_certificates = False
for attr in attributes_list:
attr_name = attr.get('name', '').lower()
attr_provider = attr.get('provider', '').lower()
attr_value = attr.get('value')
# Look for certificate attributes from crt.sh provider
if attr_provider == 'crtsh' or 'cert' in attr_name:
has_certificates = True
# Check certificate validity
if attr_name == 'cert_is_currently_valid':
if attr_value is True:
has_valid_certificates = True
elif attr_value is False:
has_expired_certificates = True
# Also check for certificate expiry indicators
elif 'expires_soon' in attr_name and attr_value is True:
has_expired_certificates = True
elif 'expired' in attr_name and attr_value is True:
has_expired_certificates = True
# Apply styling based on certificate status
if has_expired_certificates and not has_valid_certificates:
# Red for expired/invalid certificates
node_data['color'] = {'background': '#ff6b6b', 'border': '#cc5555'}
elif not has_certificates:
# Grey for domains with no certificates
node_data['color'] = {'background': '#c7c7c7', 'border': '#999999'}
# Default green styling is handled by the frontend for domains with valid certificates
attributes = node_data['attributes']
if node_type == 'domain' and attributes.get('certificates', {}).get('has_valid_cert') is False:
node_data['color'] = {'background': '#c7c7c7', 'border': '#999'} # Gray for invalid cert
# Add incoming and outgoing edges to node data
if self.graph.has_node(node_id):
@@ -497,7 +412,7 @@ class GraphManager:
'last_modified': self.last_modified,
'total_nodes': self.get_node_count(),
'total_edges': self.get_edge_count(),
'graph_format': 'dnsrecon_v1_unified_model'
'graph_format': 'dnsrecon_v1_nodeling'
},
'graph': graph_data,
'statistics': self.get_statistics()
@@ -506,14 +421,10 @@ class GraphManager:
def _get_confidence_distribution(self) -> Dict[str, int]:
"""Get distribution of edge confidence scores."""
distribution = {'high': 0, 'medium': 0, 'low': 0}
for _, _, data in self.graph.edges(data=True):
confidence = data.get('confidence_score', 0)
if confidence >= 0.8:
distribution['high'] += 1
elif confidence >= 0.6:
distribution['medium'] += 1
else:
distribution['low'] += 1
for _, _, confidence in self.graph.edges(data='confidence_score', default=0):
if confidence >= 0.8: distribution['high'] += 1
elif confidence >= 0.6: distribution['medium'] += 1
else: distribution['low'] += 1
return distribution
def get_statistics(self) -> Dict[str, Any]:
@@ -528,10 +439,9 @@ class GraphManager:
# Calculate distributions
for node_type in NodeType:
stats['node_type_distribution'][node_type.value] = self.get_nodes_by_type(node_type).__len__()
for _, _, data in self.graph.edges(data=True):
rel_type = data.get('relationship_type', 'unknown')
for _, _, rel_type in self.graph.edges(data='relationship_type', default='unknown'):
stats['relationship_type_distribution'][rel_type] = stats['relationship_type_distribution'].get(rel_type, 0) + 1
provider = data.get('source_provider', 'unknown')
for _, _, provider in self.graph.edges(data='source_provider', default='unknown'):
stats['provider_distribution'][provider] = stats['provider_distribution'].get(provider, 0) + 1
return stats

core/logger.py

@@ -42,7 +42,7 @@ class ForensicLogger:
Maintains detailed audit trail of all reconnaissance activities.
"""
def __init__(self, session_id: str = ""):
def __init__(self, session_id: str = None):
"""
Initialize forensic logger.
@@ -50,7 +50,7 @@ class ForensicLogger:
session_id: Unique identifier for this reconnaissance session
"""
self.session_id = session_id or self._generate_session_id()
self.lock = threading.Lock()
#self.lock = threading.Lock()
# Initialize audit trail storage
self.api_requests: List[APIRequest] = []
@@ -86,8 +86,6 @@ class ForensicLogger:
# Remove the unpickleable 'logger' attribute
if 'logger' in state:
del state['logger']
if 'lock' in state:
del state['lock']
return state
def __setstate__(self, state):
@@ -103,7 +101,6 @@ class ForensicLogger:
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
self.logger.addHandler(console_handler)
self.lock = threading.Lock()
def _generate_session_id(self) -> str:
"""Generate unique session identifier."""
@@ -206,6 +203,8 @@ class ForensicLogger:
self.session_metadata['target_domains'] = list(self.session_metadata['target_domains'])
self.logger.info(f"Scan Complete - Session: {self.session_id}")
self.logger.info(f"Total API Requests: {self.session_metadata['total_requests']}")
self.logger.info(f"Total Relationships: {self.session_metadata['total_relationships']}")
def export_audit_trail(self) -> Dict[str, Any]:
"""

View File

@@ -1,106 +0,0 @@
# dnsrecon-reduced/core/provider_result.py
"""
Unified data model for DNSRecon passive reconnaissance.
Standardizes the data structure across all providers to ensure consistent processing.
"""
from typing import Any, Optional, List, Dict
from dataclasses import dataclass, field
from datetime import datetime, timezone
@dataclass
class StandardAttribute:
"""A unified data structure for a single piece of information about a node."""
target_node: str
name: str
value: Any
type: str
provider: str
confidence: float
timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
metadata: Optional[Dict[str, Any]] = field(default_factory=dict)
def __post_init__(self):
"""Validate the attribute after initialization."""
if not isinstance(self.confidence, (int, float)) or not 0.0 <= self.confidence <= 1.0:
raise ValueError(f"Confidence must be between 0.0 and 1.0, got {self.confidence}")
@dataclass
class Relationship:
"""A unified data structure for a directional link between two nodes."""
source_node: str
target_node: str
relationship_type: str
confidence: float
provider: str
timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
raw_data: Optional[Dict[str, Any]] = field(default_factory=dict)
def __post_init__(self):
"""Validate the relationship after initialization."""
if not isinstance(self.confidence, (int, float)) or not 0.0 <= self.confidence <= 1.0:
raise ValueError(f"Confidence must be between 0.0 and 1.0, got {self.confidence}")
@dataclass
class ProviderResult:
"""A container for all data returned by a provider from a single query."""
attributes: List[StandardAttribute] = field(default_factory=list)
relationships: List[Relationship] = field(default_factory=list)
def add_attribute(self, target_node: str, name: str, value: Any, attr_type: str,
provider: str, confidence: float = 0.8,
metadata: Optional[Dict[str, Any]] = None) -> None:
"""Helper method to add an attribute to the result."""
self.attributes.append(StandardAttribute(
target_node=target_node,
name=name,
value=value,
type=attr_type,
provider=provider,
confidence=confidence,
metadata=metadata or {}
))
def add_relationship(self, source_node: str, target_node: str, relationship_type: str,
provider: str, confidence: float = 0.8,
raw_data: Optional[Dict[str, Any]] = None) -> None:
"""Helper method to add a relationship to the result."""
self.relationships.append(Relationship(
source_node=source_node,
target_node=target_node,
relationship_type=relationship_type,
confidence=confidence,
provider=provider,
raw_data=raw_data or {}
))
def get_discovered_nodes(self) -> set:
"""Get all unique node identifiers discovered in this result."""
nodes = set()
# Add nodes from relationships
for rel in self.relationships:
nodes.add(rel.source_node)
nodes.add(rel.target_node)
# Add nodes from attributes
for attr in self.attributes:
nodes.add(attr.target_node)
return nodes
def get_relationship_count(self) -> int:
"""Get the total number of relationships in this result."""
return len(self.relationships)
def get_attribute_count(self) -> int:
"""Get the total number of attributes in this result."""
return len(self.attributes)
def is_large_entity(self, threshold: int) -> bool:
"""Check if this result qualifies as a large entity based on relationship count."""
return self.get_relationship_count() > threshold
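Although this commit deletes the file, a short hypothetical usage sketch of the ProviderResult API it defined may help when reviewing the providers that used to return it (all node names and values below are illustrative):

```python
result = ProviderResult()
result.add_relationship(
    source_node="example.com",
    target_node="93.184.216.34",
    relationship_type="dns_a_record",
    provider="dns",
    confidence=0.9,
)
result.add_attribute(
    target_node="example.com",
    name="cert_is_currently_valid",
    value=True,
    attr_type="certificate",
    provider="crtsh",
)

print(result.get_discovered_nodes())          # {'example.com', '93.184.216.34'}
print(result.get_relationship_count())        # 1
print(result.is_large_entity(threshold=100))  # False
```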

View File

@@ -1,28 +0,0 @@
# dnsrecon-reduced/core/rate_limiter.py
import time
class GlobalRateLimiter:
def __init__(self, redis_client):
self.redis = redis_client
def is_rate_limited(self, key, limit, period):
"""
Check if a key is rate-limited.
"""
now = time.time()
key = f"rate_limit:{key}"
# Remove old timestamps
self.redis.zremrangebyscore(key, 0, now - period)
# Check the count
count = self.redis.zcard(key)
if count >= limit:
return True
# Add new timestamp
self.redis.zadd(key, {now: now})
self.redis.expire(key, period)
return False
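The removed GlobalRateLimiter implemented a Redis-backed sliding window: old timestamps are trimmed with ZREMRANGEBYSCORE, the remaining set size is compared against the limit, and the current timestamp is added. A hedged usage sketch, assuming a locally reachable Redis instance and made-up limits:

```python
import redis

r = redis.StrictRedis()
limiter = GlobalRateLimiter(r)

# Allow at most 60 crt.sh calls per rolling 60-second window.
if limiter.is_rate_limited("crtsh", limit=60, period=60):
    print("throttled - try again later")
else:
    print("ok to send the request")
```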

File diff suppressed because it is too large

View File

@@ -1,20 +1,372 @@
"""
Per-session configuration management for DNSRecon.
Provides isolated configuration instances for each user session.
Enhanced per-session configuration management for DNSRecon.
Provides isolated configuration instances for each user session while supporting global caching.
"""
from config import Config
import os
from typing import Any, Dict, Optional
class SessionConfig(Config):
class SessionConfig:
"""
Session-specific configuration that inherits from global config
but maintains isolated API keys and provider settings.
Enhanced session-specific configuration that inherits from global config
but maintains isolated API keys and provider settings while supporting global caching.
"""
def __init__(self):
"""Initialize session config with global defaults."""
super().__init__()
"""Initialize enhanced session config with global cache support."""
# Copy all attributes from global config
self.api_keys: Dict[str, Optional[str]] = {
'shodan': None
}
def create_session_config() -> 'SessionConfig':
"""Create a new session configuration instance."""
return SessionConfig()
# Default settings (copied from global config)
self.default_recursion_depth = 2
self.default_timeout = 30
self.max_concurrent_requests = 5
self.large_entity_threshold = 100
# Enhanced rate limiting settings (per session)
self.rate_limits = {
'crtsh': 60,
'shodan': 60,
'dns': 100
}
# Enhanced provider settings (per session)
self.enabled_providers = {
'crtsh': True,
'dns': True,
'shodan': False
}
# Task-based execution settings
self.task_retry_settings = {
'max_retries': 3,
'base_backoff_seconds': 1.0,
'max_backoff_seconds': 60.0,
'retry_on_rate_limit': True,
'retry_on_connection_error': True,
'retry_on_timeout': True
}
# Cache settings (global across all sessions)
self.cache_settings = {
'enabled': True,
'expiry_hours': 12,
'cache_base_dir': '.cache',
'per_provider_directories': True,
'thread_safe_operations': True
}
# Logging configuration
self.log_level = 'INFO'
self.log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
# Flask configuration (shared)
self.flask_host = '127.0.0.1'
self.flask_port = 5000
self.flask_debug = True
# Session isolation settings
self.session_isolation = {
'enforce_single_session_per_user': True,
'consolidate_session_data_on_replacement': True,
'user_fingerprinting_enabled': True,
'session_timeout_minutes': 60
}
# Circuit breaker settings for provider reliability
self.circuit_breaker = {
'enabled': True,
'failure_threshold': 5, # Failures before opening circuit
'recovery_timeout_seconds': 300, # 5 minutes before trying again
'half_open_max_calls': 3 # Test calls when recovering
}
def set_api_key(self, provider: str, api_key: str) -> bool:
"""
Set API key for a provider in this session.
Args:
provider: Provider name (shodan, etc)
api_key: API key string (empty string to clear)
Returns:
bool: True if key was set successfully
"""
if provider in self.api_keys:
# Handle clearing of API keys
if api_key and api_key.strip():
self.api_keys[provider] = api_key.strip()
self.enabled_providers[provider] = True
else:
self.api_keys[provider] = None
self.enabled_providers[provider] = False
return True
return False
def get_api_key(self, provider: str) -> Optional[str]:
"""
Get API key for a provider in this session.
Args:
provider: Provider name
Returns:
API key or None if not set
"""
return self.api_keys.get(provider)
def is_provider_enabled(self, provider: str) -> bool:
"""
Check if a provider is enabled in this session.
Args:
provider: Provider name
Returns:
bool: True if provider is enabled
"""
return self.enabled_providers.get(provider, False)
def get_rate_limit(self, provider: str) -> int:
"""
Get rate limit for a provider in this session.
Args:
provider: Provider name
Returns:
Rate limit in requests per minute
"""
return self.rate_limits.get(provider, 60)
def get_task_retry_config(self) -> Dict[str, Any]:
"""
Get task retry configuration for this session.
Returns:
Dictionary with retry settings
"""
return self.task_retry_settings.copy()
def get_cache_config(self) -> Dict[str, Any]:
"""
Get cache configuration (global settings).
Returns:
Dictionary with cache settings
"""
return self.cache_settings.copy()
def is_circuit_breaker_enabled(self) -> bool:
"""Check if circuit breaker is enabled for provider reliability."""
return self.circuit_breaker.get('enabled', True)
def get_circuit_breaker_config(self) -> Dict[str, Any]:
"""Get circuit breaker configuration."""
return self.circuit_breaker.copy()
def update_provider_settings(self, provider_updates: Dict[str, Dict[str, Any]]) -> bool:
"""
Update provider-specific settings in bulk.
Args:
provider_updates: Dictionary of provider -> settings updates
Returns:
bool: True if updates were applied successfully
"""
try:
for provider_name, updates in provider_updates.items():
# Update rate limits
if 'rate_limit' in updates:
self.rate_limits[provider_name] = updates['rate_limit']
# Update enabled status
if 'enabled' in updates:
self.enabled_providers[provider_name] = updates['enabled']
# Update API key
if 'api_key' in updates:
self.set_api_key(provider_name, updates['api_key'])
return True
except Exception as e:
print(f"Error updating provider settings: {e}")
return False
def validate_configuration(self) -> Dict[str, Any]:
"""
Validate the current configuration and return validation results.
Returns:
Dictionary with validation results and any issues found
"""
validation_result = {
'valid': True,
'warnings': [],
'errors': [],
'provider_status': {}
}
# Validate provider configurations
for provider_name, enabled in self.enabled_providers.items():
provider_status = {
'enabled': enabled,
'has_api_key': bool(self.api_keys.get(provider_name)),
'rate_limit': self.rate_limits.get(provider_name, 60)
}
# Check for potential issues
if enabled and provider_name in ['shodan'] and not provider_status['has_api_key']:
validation_result['warnings'].append(
f"Provider '{provider_name}' is enabled but missing API key"
)
validation_result['provider_status'][provider_name] = provider_status
# Validate task settings
if self.task_retry_settings['max_retries'] > 10:
validation_result['warnings'].append(
f"High retry count ({self.task_retry_settings['max_retries']}) may cause long delays"
)
# Validate concurrent settings
if self.max_concurrent_requests > 10:
validation_result['warnings'].append(
f"High concurrency ({self.max_concurrent_requests}) may overwhelm providers"
)
# Validate cache settings
if not os.path.exists(self.cache_settings['cache_base_dir']):
try:
os.makedirs(self.cache_settings['cache_base_dir'], exist_ok=True)
except Exception as e:
validation_result['errors'].append(f"Cannot create cache directory: {e}")
validation_result['valid'] = False
return validation_result
def load_from_env(self):
"""Load configuration from environment variables with enhanced validation."""
# Load API keys from environment
if os.getenv('SHODAN_API_KEY') and not self.api_keys['shodan']:
self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))
print("Loaded Shodan API key from environment")
# Override default settings from environment
self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
self.default_timeout = int(os.getenv('DEFAULT_TIMEOUT', '30'))
self.max_concurrent_requests = int(os.getenv('MAX_CONCURRENT_REQUESTS', '5'))
# Load task retry settings from environment
if os.getenv('TASK_MAX_RETRIES'):
self.task_retry_settings['max_retries'] = int(os.getenv('TASK_MAX_RETRIES'))
if os.getenv('TASK_BASE_BACKOFF'):
self.task_retry_settings['base_backoff_seconds'] = float(os.getenv('TASK_BASE_BACKOFF'))
# Load cache settings from environment
if os.getenv('CACHE_EXPIRY_HOURS'):
self.cache_settings['expiry_hours'] = int(os.getenv('CACHE_EXPIRY_HOURS'))
if os.getenv('CACHE_DISABLED'):
self.cache_settings['enabled'] = os.getenv('CACHE_DISABLED').lower() != 'true'
# Load circuit breaker settings
if os.getenv('CIRCUIT_BREAKER_DISABLED'):
self.circuit_breaker['enabled'] = os.getenv('CIRCUIT_BREAKER_DISABLED').lower() != 'true'
# Flask settings
self.flask_debug = os.getenv('FLASK_DEBUG', 'True').lower() == 'true'
print("Enhanced configuration loaded from environment")
def export_config_summary(self) -> Dict[str, Any]:
"""
Export a summary of the current configuration for debugging/logging.
Returns:
Dictionary with configuration summary (API keys redacted)
"""
return {
'providers': {
provider: {
'enabled': self.enabled_providers.get(provider, False),
'has_api_key': bool(self.api_keys.get(provider)),
'rate_limit': self.rate_limits.get(provider, 60)
}
for provider in self.enabled_providers.keys()
},
'task_settings': {
'max_retries': self.task_retry_settings['max_retries'],
'max_concurrent_requests': self.max_concurrent_requests,
'large_entity_threshold': self.large_entity_threshold
},
'cache_settings': {
'enabled': self.cache_settings['enabled'],
'expiry_hours': self.cache_settings['expiry_hours'],
'base_directory': self.cache_settings['cache_base_dir']
},
'session_settings': {
'isolation_enabled': self.session_isolation['enforce_single_session_per_user'],
'consolidation_enabled': self.session_isolation['consolidate_session_data_on_replacement'],
'timeout_minutes': self.session_isolation['session_timeout_minutes']
},
'circuit_breaker': {
'enabled': self.circuit_breaker['enabled'],
'failure_threshold': self.circuit_breaker['failure_threshold'],
'recovery_timeout': self.circuit_breaker['recovery_timeout_seconds']
}
}
def create_session_config() -> SessionConfig:
"""
Create a new enhanced session configuration instance.
Returns:
Configured SessionConfig instance
"""
session_config = SessionConfig()
session_config.load_from_env()
# Validate configuration and log any issues
validation = session_config.validate_configuration()
if validation['warnings']:
print("Configuration warnings:")
for warning in validation['warnings']:
print(f" WARNING: {warning}")
if validation['errors']:
print("Configuration errors:")
for error in validation['errors']:
print(f" ERROR: {error}")
if not validation['valid']:
raise ValueError("Configuration validation failed - see errors above")
print(f"Enhanced session configuration created successfully")
return session_config
def create_test_config() -> SessionConfig:
"""
Create a test configuration with safe defaults for testing.
Returns:
Test-safe SessionConfig instance
"""
test_config = SessionConfig()
# Override settings for testing
test_config.max_concurrent_requests = 2
test_config.task_retry_settings['max_retries'] = 1
test_config.task_retry_settings['base_backoff_seconds'] = 0.1
test_config.cache_settings['expiry_hours'] = 1
test_config.session_isolation['session_timeout_minutes'] = 10
print("Test configuration created")
return test_config
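A hedged usage sketch for the enhanced session configuration above (the API key is a placeholder, not a real credential):

```python
session_config = create_session_config()             # loads env vars and validates
session_config.set_api_key("shodan", "PLACEHOLDER")  # also enables the shodan provider

assert session_config.is_provider_enabled("shodan")
print(session_config.get_rate_limit("crtsh"))        # 60 requests/minute by default
print(session_config.export_config_summary()["providers"]["shodan"]["has_api_key"])  # True
```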

View File

@@ -5,37 +5,153 @@ import time
import uuid
import redis
import pickle
from typing import Dict, Optional, Any
import hashlib
from typing import Dict, Optional, Any, List, Tuple
from core.scanner import Scanner
from config import config
class UserIdentifier:
"""Handles user identification for session management."""
@staticmethod
def generate_user_fingerprint(client_ip: str, user_agent: str) -> str:
"""
Generate a unique fingerprint for a user based on IP and User-Agent.
Args:
client_ip: Client IP address
user_agent: User-Agent header value
Returns:
Unique user fingerprint hash
"""
# Create deterministic user identifier
user_data = f"{client_ip}:{user_agent[:100]}" # Limit UA to 100 chars
fingerprint = hashlib.sha256(user_data.encode()).hexdigest()[:16] # 16 char fingerprint
return f"user_{fingerprint}"
@staticmethod
def extract_request_info(request) -> Tuple[str, str]:
"""
Extract client IP and User-Agent from Flask request.
Args:
request: Flask request object
Returns:
Tuple of (client_ip, user_agent)
"""
# Handle proxy headers for real IP
client_ip = request.headers.get('X-Forwarded-For', '').split(',')[0].strip()
if not client_ip:
client_ip = request.headers.get('X-Real-IP', '')
if not client_ip:
client_ip = request.remote_addr or 'unknown'
user_agent = request.headers.get('User-Agent', 'unknown')
return client_ip, user_agent
class SessionConsolidator:
"""Handles consolidation of session data when replacing sessions."""
@staticmethod
def consolidate_scanner_data(old_scanner: 'Scanner', new_scanner: 'Scanner') -> 'Scanner':
"""
Consolidate useful data from old scanner into new scanner.
Args:
old_scanner: Scanner from terminated session
new_scanner: New scanner instance
Returns:
Enhanced new scanner with consolidated data
"""
try:
# Consolidate graph data if old scanner has valuable data
if old_scanner and hasattr(old_scanner, 'graph') and old_scanner.graph:
old_stats = old_scanner.graph.get_statistics()
if old_stats['basic_metrics']['total_nodes'] > 0:
print(f"Consolidating graph data: {old_stats['basic_metrics']['total_nodes']} nodes, {old_stats['basic_metrics']['total_edges']} edges")
# Transfer nodes and edges to new scanner's graph
for node_id, node_data in old_scanner.graph.graph.nodes(data=True):
# Add node to new graph with all attributes
new_scanner.graph.graph.add_node(node_id, **node_data)
for source, target, edge_data in old_scanner.graph.graph.edges(data=True):
# Add edge to new graph with all attributes
new_scanner.graph.graph.add_edge(source, target, **edge_data)
# Update correlation index
if hasattr(old_scanner.graph, 'correlation_index'):
new_scanner.graph.correlation_index = old_scanner.graph.correlation_index.copy()
# Update timestamps
new_scanner.graph.creation_time = old_scanner.graph.creation_time
new_scanner.graph.last_modified = old_scanner.graph.last_modified
# Consolidate provider statistics
if old_scanner and hasattr(old_scanner, 'providers') and old_scanner.providers:
for old_provider in old_scanner.providers:
# Find matching provider in new scanner
matching_new_provider = None
for new_provider in new_scanner.providers:
if new_provider.get_name() == old_provider.get_name():
matching_new_provider = new_provider
break
if matching_new_provider:
# Transfer cumulative statistics
matching_new_provider.total_requests += old_provider.total_requests
matching_new_provider.successful_requests += old_provider.successful_requests
matching_new_provider.failed_requests += old_provider.failed_requests
matching_new_provider.total_relationships_found += old_provider.total_relationships_found
# Transfer cache statistics if available
if hasattr(old_provider, 'cache_hits'):
matching_new_provider.cache_hits += getattr(old_provider, 'cache_hits', 0)
matching_new_provider.cache_misses += getattr(old_provider, 'cache_misses', 0)
print(f"Consolidated {old_provider.get_name()} provider stats: {old_provider.total_requests} requests")
return new_scanner
except Exception as e:
print(f"Warning: Error during session consolidation: {e}")
return new_scanner
class SessionManager:
"""
Manages multiple scanner instances for concurrent user sessions using Redis.
Manages single scanner session per user using Redis with user identification.
Enforces one active session per user for consistent state management.
"""
def __init__(self, session_timeout_minutes: int = 0):
def __init__(self, session_timeout_minutes: int = 60):
"""
Initialize session manager with a Redis backend.
Initialize session manager with Redis backend and user tracking.
"""
if session_timeout_minutes is None:
session_timeout_minutes = config.session_timeout_minutes
self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
self.session_timeout = session_timeout_minutes * 60 # Convert to seconds
self.lock = threading.Lock() # Lock for local operations, Redis handles atomic ops
self.lock = threading.Lock()
# User identification helper
self.user_identifier = UserIdentifier()
self.consolidator = SessionConsolidator()
# Start cleanup thread
self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
self.cleanup_thread.start()
print(f"SessionManager initialized with Redis backend and {session_timeout_minutes}min timeout")
print(f"SessionManager initialized with Redis backend, user tracking, and {session_timeout_minutes}min timeout")
def __getstate__(self):
"""Prepare SessionManager for pickling."""
state = self.__dict__.copy()
# Exclude unpickleable attributes - Redis client and threading objects
# Exclude unpickleable attributes
unpicklable_attrs = ['lock', 'cleanup_thread', 'redis_client']
for attr in unpicklable_attrs:
if attr in state:
@@ -53,67 +169,108 @@ class SessionManager:
self.cleanup_thread.start()
def _get_session_key(self, session_id: str) -> str:
"""Generates the Redis key for a session."""
"""Generate Redis key for a session."""
return f"dnsrecon:session:{session_id}"
def _get_user_session_key(self, user_fingerprint: str) -> str:
"""Generate Redis key for user -> session mapping."""
return f"dnsrecon:user:{user_fingerprint}"
def _get_stop_signal_key(self, session_id: str) -> str:
"""Generates the Redis key for a session's stop signal."""
"""Generate Redis key for session stop signal."""
return f"dnsrecon:stop:{session_id}"
def create_session(self) -> str:
def create_or_replace_user_session(self, client_ip: str, user_agent: str) -> str:
"""
Create a new user session and store it in Redis.
Create new session for user, replacing any existing session.
Consolidates data from previous session if it exists.
Args:
client_ip: Client IP address
user_agent: User-Agent header
Returns:
New session ID
"""
session_id = str(uuid.uuid4())
print(f"=== CREATING SESSION {session_id} IN REDIS ===")
user_fingerprint = self.user_identifier.generate_user_fingerprint(client_ip, user_agent)
new_session_id = str(uuid.uuid4())
print(f"=== CREATING/REPLACING SESSION FOR USER {user_fingerprint} ===")
try:
# Check for existing user session
existing_session_id = self._get_user_current_session(user_fingerprint)
old_scanner = None
if existing_session_id:
print(f"Found existing session {existing_session_id} for user {user_fingerprint}")
# Get old scanner data for consolidation
old_scanner = self.get_session(existing_session_id)
# Terminate old session
self._terminate_session_internal(existing_session_id, cleanup_user_mapping=False)
print(f"Terminated old session {existing_session_id}")
# Create new session config and scanner
from core.session_config import create_session_config
session_config = create_session_config()
scanner_instance = Scanner(session_config=session_config)
new_scanner = Scanner(session_config=session_config)
# Set the session ID on the scanner for cross-process stop signal management
scanner_instance.session_id = session_id
# Set session ID on scanner for cross-process operations
new_scanner.session_id = new_session_id
# Consolidate data from old session if available
if old_scanner:
new_scanner = self.consolidator.consolidate_scanner_data(old_scanner, new_scanner)
print(f"Consolidated data from previous session")
# Create session data
session_data = {
'scanner': scanner_instance,
'scanner': new_scanner,
'config': session_config,
'created_at': time.time(),
'last_activity': time.time(),
'status': 'active'
'status': 'active',
'user_fingerprint': user_fingerprint,
'client_ip': client_ip,
'user_agent': user_agent[:200] # Truncate for storage
}
# Serialize the entire session data dictionary using pickle
# Store session in Redis
session_key = self._get_session_key(new_session_id)
serialized_data = pickle.dumps(session_data)
# Store in Redis
session_key = self._get_session_key(session_id)
self.redis_client.setex(session_key, self.session_timeout, serialized_data)
# Initialize stop signal as False
stop_key = self._get_stop_signal_key(session_id)
# Update user -> session mapping
user_session_key = self._get_user_session_key(user_fingerprint)
self.redis_client.setex(user_session_key, self.session_timeout, new_session_id.encode('utf-8'))
# Initialize stop signal
stop_key = self._get_stop_signal_key(new_session_id)
self.redis_client.setex(stop_key, self.session_timeout, b'0')
print(f"Session {session_id} stored in Redis with stop signal initialized")
return session_id
print(f"Created new session {new_session_id} for user {user_fingerprint}")
return new_session_id
except Exception as e:
print(f"ERROR: Failed to create session {session_id}: {e}")
print(f"ERROR: Failed to create session for user {user_fingerprint}: {e}")
raise
def _get_user_current_session(self, user_fingerprint: str) -> Optional[str]:
"""Get current session ID for a user."""
try:
user_session_key = self._get_user_session_key(user_fingerprint)
session_id_bytes = self.redis_client.get(user_session_key)
if session_id_bytes:
return session_id_bytes.decode('utf-8')
return None
except Exception as e:
print(f"Error getting user session: {e}")
return None
def set_stop_signal(self, session_id: str) -> bool:
"""
Set the stop signal for a session (cross-process safe).
Args:
session_id: Session identifier
Returns:
bool: True if signal was set successfully
"""
"""Set stop signal for session (cross-process safe)."""
try:
stop_key = self._get_stop_signal_key(session_id)
# Set stop signal to '1' with the same TTL as the session
self.redis_client.setex(stop_key, self.session_timeout, b'1')
print(f"Stop signal set for session {session_id}")
return True
@@ -122,15 +279,7 @@ class SessionManager:
return False
def is_stop_requested(self, session_id: str) -> bool:
"""
Check if stop is requested for a session (cross-process safe).
Args:
session_id: Session identifier
Returns:
bool: True if stop is requested
"""
"""Check if stop is requested for session (cross-process safe)."""
try:
stop_key = self._get_stop_signal_key(session_id)
value = self.redis_client.get(stop_key)
@@ -140,15 +289,7 @@ class SessionManager:
return False
def clear_stop_signal(self, session_id: str) -> bool:
"""
Clear the stop signal for a session.
Args:
session_id: Session identifier
Returns:
bool: True if signal was cleared successfully
"""
"""Clear stop signal for session."""
try:
stop_key = self._get_stop_signal_key(session_id)
self.redis_client.setex(stop_key, self.session_timeout, b'0')
@@ -159,13 +300,13 @@ class SessionManager:
return False
def _get_session_data(self, session_id: str) -> Optional[Dict[str, Any]]:
"""Retrieves and deserializes session data from Redis."""
"""Retrieve and deserialize session data from Redis."""
try:
session_key = self._get_session_key(session_id)
serialized_data = self.redis_client.get(session_key)
if serialized_data:
session_data = pickle.loads(serialized_data)
# Ensure the scanner has the correct session ID for stop signal checking
# Ensure scanner has correct session ID
if 'scanner' in session_data and session_data['scanner']:
session_data['scanner'].session_id = session_id
return session_data
@@ -175,37 +316,32 @@ class SessionManager:
return None
def _save_session_data(self, session_id: str, session_data: Dict[str, Any]) -> bool:
"""
Serializes and saves session data back to Redis with updated TTL.
Returns:
bool: True if save was successful
"""
"""Serialize and save session data to Redis with updated TTL."""
try:
session_key = self._get_session_key(session_id)
serialized_data = pickle.dumps(session_data)
result = self.redis_client.setex(session_key, self.session_timeout, serialized_data)
# Also refresh user mapping TTL if available
if 'user_fingerprint' in session_data:
user_session_key = self._get_user_session_key(session_data['user_fingerprint'])
self.redis_client.setex(user_session_key, self.session_timeout, session_id.encode('utf-8'))
return result
except Exception as e:
print(f"ERROR: Failed to save session data for {session_id}: {e}")
return False
def update_session_scanner(self, session_id: str, scanner: 'Scanner') -> bool:
"""
Updates just the scanner object in a session with immediate persistence.
Returns:
bool: True if update was successful
"""
"""Update scanner object in session with immediate persistence."""
try:
session_data = self._get_session_data(session_id)
if session_data:
# Ensure scanner has the session ID
# Ensure scanner has session ID
scanner.session_id = session_id
session_data['scanner'] = scanner
session_data['last_activity'] = time.time()
# Immediately save to Redis for GUI updates
success = self._save_session_data(session_id, session_data)
if success:
print(f"Scanner state updated for session {session_id} (status: {scanner.status})")
@@ -220,16 +356,7 @@ class SessionManager:
return False
def update_scanner_status(self, session_id: str, status: str) -> bool:
"""
Quickly update just the scanner status for immediate GUI feedback.
Args:
session_id: Session identifier
status: New scanner status
Returns:
bool: True if update was successful
"""
"""Quickly update scanner status for immediate GUI feedback."""
try:
session_data = self._get_session_data(session_id)
if session_data and 'scanner' in session_data:
@@ -248,9 +375,7 @@ class SessionManager:
return False
def get_session(self, session_id: str) -> Optional[Scanner]:
"""
Get scanner instance for a session from Redis with session ID management.
"""
"""Get scanner instance for session with session ID management."""
if not session_id:
return None
@@ -265,21 +390,13 @@ class SessionManager:
scanner = session_data.get('scanner')
if scanner:
# Ensure the scanner can check the Redis-based stop signal
# Ensure scanner can check Redis-based stop signal
scanner.session_id = session_id
return scanner
def get_session_status_only(self, session_id: str) -> Optional[str]:
"""
Get just the scanner status without full session retrieval (for performance).
Args:
session_id: Session identifier
Returns:
Scanner status string or None if not found
"""
"""Get scanner status without full session retrieval (for performance)."""
try:
session_data = self._get_session_data(session_id)
if session_data and 'scanner' in session_data:
@@ -290,16 +407,18 @@ class SessionManager:
return None
def terminate_session(self, session_id: str) -> bool:
"""
Terminate a specific session in Redis with reliable stop signal and immediate status update.
"""
"""Terminate specific session with reliable stop signal and immediate status update."""
return self._terminate_session_internal(session_id, cleanup_user_mapping=True)
def _terminate_session_internal(self, session_id: str, cleanup_user_mapping: bool = True) -> bool:
"""Internal session termination with configurable user mapping cleanup."""
print(f"=== TERMINATING SESSION {session_id} ===")
try:
# First, set the stop signal
# Set stop signal first
self.set_stop_signal(session_id)
# Update scanner status to stopped immediately for GUI feedback
# Update scanner status immediately for GUI feedback
self.update_scanner_status(session_id, 'stopped')
session_data = self._get_session_data(session_id)
@@ -310,16 +429,19 @@ class SessionManager:
scanner = session_data.get('scanner')
if scanner and scanner.status == 'running':
print(f"Stopping scan for session: {session_id}")
# The scanner will check the Redis stop signal
scanner.stop_scan()
# Update the scanner state immediately
self.update_session_scanner(session_id, scanner)
# Wait a moment for graceful shutdown
# Wait for graceful shutdown
time.sleep(0.5)
# Delete session data and stop signal from Redis
# Clean up user mapping if requested
if cleanup_user_mapping and 'user_fingerprint' in session_data:
user_session_key = self._get_user_session_key(session_data['user_fingerprint'])
self.redis_client.delete(user_session_key)
print(f"Cleaned up user mapping for {session_data['user_fingerprint']}")
# Delete session data and stop signal
session_key = self._get_session_key(session_id)
stop_key = self._get_stop_signal_key(session_id)
self.redis_client.delete(session_key)
@@ -333,35 +455,72 @@ class SessionManager:
return False
def _cleanup_loop(self) -> None:
"""
Background thread to cleanup inactive sessions and orphaned stop signals.
"""
"""Background thread to cleanup inactive sessions and orphaned signals."""
while True:
try:
# Clean up orphaned stop signals
stop_keys = self.redis_client.keys("dnsrecon:stop:*")
for stop_key in stop_keys:
# Extract session ID from stop key
session_id = stop_key.decode('utf-8').split(':')[-1]
session_key = self._get_session_key(session_id)
# If session doesn't exist but stop signal does, clean it up
if not self.redis_client.exists(session_key):
self.redis_client.delete(stop_key)
print(f"Cleaned up orphaned stop signal for session {session_id}")
# Clean up orphaned user mappings
user_keys = self.redis_client.keys("dnsrecon:user:*")
for user_key in user_keys:
session_id_bytes = self.redis_client.get(user_key)
if session_id_bytes:
session_id = session_id_bytes.decode('utf-8')
session_key = self._get_session_key(session_id)
if not self.redis_client.exists(session_key):
self.redis_client.delete(user_key)
print(f"Cleaned up orphaned user mapping for session {session_id}")
except Exception as e:
print(f"Error in cleanup loop: {e}")
time.sleep(300) # Sleep for 5 minutes
def list_active_sessions(self) -> List[Dict[str, Any]]:
"""List all active sessions for admin purposes."""
try:
session_keys = self.redis_client.keys("dnsrecon:session:*")
sessions = []
for session_key in session_keys:
session_id = session_key.decode('utf-8').split(':')[-1]
session_data = self._get_session_data(session_id)
if session_data:
scanner = session_data.get('scanner')
sessions.append({
'session_id': session_id,
'user_fingerprint': session_data.get('user_fingerprint', 'unknown'),
'client_ip': session_data.get('client_ip', 'unknown'),
'created_at': session_data.get('created_at'),
'last_activity': session_data.get('last_activity'),
'scanner_status': scanner.status if scanner else 'unknown',
'current_target': scanner.current_target if scanner else None
})
return sessions
except Exception as e:
print(f"ERROR: Failed to list active sessions: {e}")
return []
def get_statistics(self) -> Dict[str, Any]:
"""Get session manager statistics."""
try:
session_keys = self.redis_client.keys("dnsrecon:session:*")
user_keys = self.redis_client.keys("dnsrecon:user:*")
stop_keys = self.redis_client.keys("dnsrecon:stop:*")
active_sessions = len(session_keys)
unique_users = len(user_keys)
running_scans = 0
for session_key in session_keys:
@@ -372,16 +531,46 @@ class SessionManager:
return {
'total_active_sessions': active_sessions,
'unique_users': unique_users,
'running_scans': running_scans,
'total_stop_signals': len(stop_keys)
'total_stop_signals': len(stop_keys),
'average_sessions_per_user': round(active_sessions / unique_users, 2) if unique_users > 0 else 0
}
except Exception as e:
print(f"ERROR: Failed to get statistics: {e}")
return {
'total_active_sessions': 0,
'unique_users': 0,
'running_scans': 0,
'total_stop_signals': 0
'total_stop_signals': 0,
'average_sessions_per_user': 0
}
def get_session_info(self, session_id: str) -> Dict[str, Any]:
"""Get detailed information about a specific session."""
try:
session_data = self._get_session_data(session_id)
if not session_data:
return {'error': 'Session not found'}
scanner = session_data.get('scanner')
return {
'session_id': session_id,
'user_fingerprint': session_data.get('user_fingerprint', 'unknown'),
'client_ip': session_data.get('client_ip', 'unknown'),
'user_agent': session_data.get('user_agent', 'unknown'),
'created_at': session_data.get('created_at'),
'last_activity': session_data.get('last_activity'),
'status': session_data.get('status'),
'scanner_status': scanner.status if scanner else 'unknown',
'current_target': scanner.current_target if scanner else None,
'session_age_minutes': round((time.time() - session_data.get('created_at', time.time())) / 60, 1)
}
except Exception as e:
print(f"ERROR: Failed to get session info for {session_id}: {e}")
return {'error': f'Failed to get session info: {str(e)}'}
# Global session manager instance
session_manager = SessionManager(session_timeout_minutes=60)
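A minimal sketch of the user-fingerprinting flow and the Redis key layout introduced above, assuming a reachable Redis instance; the IP address and User-Agent are fabricated:

```python
fp = UserIdentifier.generate_user_fingerprint("203.0.113.7", "Mozilla/5.0 (X11; Linux)")
# -> "user_" followed by 16 hex characters; deterministic for the same IP/UA pair

# Keys the manager maintains per user/session:
#   dnsrecon:session:<uuid>  -> pickled {scanner, config, user_fingerprint, ...}
#   dnsrecon:user:<fp>       -> current session uuid for that fingerprint
#   dnsrecon:stop:<uuid>     -> b'0' / b'1' cross-process stop signal
session_id = session_manager.create_or_replace_user_session("203.0.113.7", "Mozilla/5.0 (X11; Linux)")
scanner = session_manager.get_session(session_id)
```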

564
core/task_manager.py Normal file
View File

@@ -0,0 +1,564 @@
# dnsrecon/core/task_manager.py
import threading
import time
import uuid
from enum import Enum
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Any, Set
from datetime import datetime, timezone, timedelta
from collections import deque
from utils.helpers import _is_valid_ip, _is_valid_domain
class TaskStatus(Enum):
"""Enumeration of task execution statuses."""
PENDING = "pending"
RUNNING = "running"
SUCCEEDED = "succeeded"
FAILED_RETRYING = "failed_retrying"
FAILED_PERMANENT = "failed_permanent"
CANCELLED = "cancelled"
class TaskType(Enum):
"""Enumeration of task types for provider queries."""
DOMAIN_QUERY = "domain_query"
IP_QUERY = "ip_query"
GRAPH_UPDATE = "graph_update"
@dataclass
class TaskResult:
"""Result of a task execution."""
success: bool
data: Optional[Any] = None
error: Optional[str] = None
metadata: Dict[str, Any] = field(default_factory=dict)
@dataclass
class ReconTask:
"""Represents a single reconnaissance task with retry logic."""
task_id: str
task_type: TaskType
target: str
provider_name: str
depth: int
status: TaskStatus = TaskStatus.PENDING
created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
# Retry configuration
max_retries: int = 3
current_retry: int = 0
base_backoff_seconds: float = 1.0
max_backoff_seconds: float = 60.0
# Execution tracking
last_attempt_at: Optional[datetime] = None
next_retry_at: Optional[datetime] = None
execution_history: List[Dict[str, Any]] = field(default_factory=list)
# Results
result: Optional[TaskResult] = None
def __post_init__(self):
"""Initialize additional fields after creation."""
if not self.task_id:
self.task_id = str(uuid.uuid4())[:8]
def calculate_next_retry_time(self) -> Optional[datetime]:
"""Calculate the next retry time with exponential backoff and jitter; returns None when retries are exhausted."""
if self.current_retry >= self.max_retries:
return None
# Exponential backoff with jitter
backoff_time = min(
self.max_backoff_seconds,
self.base_backoff_seconds * (2 ** self.current_retry)
)
# Add deterministic jitter (roughly ±12.5%, derived from the task ID hash)
jitter = backoff_time * 0.25 * (0.5 - hash(self.task_id) % 1000 / 1000.0)
final_backoff = max(self.base_backoff_seconds, backoff_time + jitter)
return datetime.now(timezone.utc) + timedelta(seconds=final_backoff)
def should_retry(self) -> bool:
"""Determine if task should be retried based on status and retry count."""
if self.status != TaskStatus.FAILED_RETRYING:
return False
if self.current_retry >= self.max_retries:
return False
if self.next_retry_at and datetime.now(timezone.utc) < self.next_retry_at:
return False
return True
def mark_failed(self, error: str, metadata: Dict[str, Any] = None):
"""Mark task as failed and prepare for retry or permanent failure."""
self.current_retry += 1
self.last_attempt_at = datetime.now(timezone.utc)
# Record execution history
execution_record = {
'attempt': self.current_retry,
'timestamp': self.last_attempt_at.isoformat(),
'error': error,
'metadata': metadata or {}
}
self.execution_history.append(execution_record)
if self.current_retry >= self.max_retries:
self.status = TaskStatus.FAILED_PERMANENT
self.result = TaskResult(success=False, error=f"Permanent failure after {self.max_retries} attempts: {error}")
else:
self.status = TaskStatus.FAILED_RETRYING
self.next_retry_at = self.calculate_next_retry_time()
def mark_succeeded(self, data: Any = None, metadata: Dict[str, Any] = None):
"""Mark task as successfully completed."""
self.status = TaskStatus.SUCCEEDED
self.last_attempt_at = datetime.now(timezone.utc)
self.result = TaskResult(success=True, data=data, metadata=metadata or {})
# Record successful execution
execution_record = {
'attempt': self.current_retry + 1,
'timestamp': self.last_attempt_at.isoformat(),
'success': True,
'metadata': metadata or {}
}
self.execution_history.append(execution_record)
def get_summary(self) -> Dict[str, Any]:
"""Get task summary for progress reporting."""
return {
'task_id': self.task_id,
'task_type': self.task_type.value,
'target': self.target,
'provider': self.provider_name,
'status': self.status.value,
'current_retry': self.current_retry,
'max_retries': self.max_retries,
'created_at': self.created_at.isoformat(),
'last_attempt_at': self.last_attempt_at.isoformat() if self.last_attempt_at else None,
'next_retry_at': self.next_retry_at.isoformat() if self.next_retry_at else None,
'total_attempts': len(self.execution_history),
'has_result': self.result is not None
}
class TaskQueue:
"""Thread-safe task queue with retry logic and priority handling."""
def __init__(self, max_concurrent_tasks: int = 5):
"""Initialize task queue."""
self.max_concurrent_tasks = max_concurrent_tasks
self.tasks: Dict[str, ReconTask] = {}
self.pending_queue = deque()
self.retry_queue = deque()
self.running_tasks: Set[str] = set()
self._lock = threading.Lock()
self._stop_event = threading.Event()
def __getstate__(self):
"""Prepare TaskQueue for pickling by excluding unpicklable objects."""
state = self.__dict__.copy()
# Exclude the unpickleable '_lock' and '_stop_event' attributes
if '_lock' in state:
del state['_lock']
if '_stop_event' in state:
del state['_stop_event']
return state
def __setstate__(self, state):
"""Restore TaskQueue after unpickling by reconstructing threading objects."""
self.__dict__.update(state)
# Re-initialize the '_lock' and '_stop_event' attributes
self._lock = threading.Lock()
self._stop_event = threading.Event()
def add_task(self, task: ReconTask) -> str:
"""Add task to queue."""
with self._lock:
self.tasks[task.task_id] = task
self.pending_queue.append(task.task_id)
print(f"Added task {task.task_id}: {task.provider_name} query for {task.target}")
return task.task_id
def get_next_ready_task(self) -> Optional[ReconTask]:
"""Get next task ready for execution."""
with self._lock:
# Check if we have room for more concurrent tasks
if len(self.running_tasks) >= self.max_concurrent_tasks:
return None
# First priority: retry queue (tasks ready for retry)
while self.retry_queue:
task_id = self.retry_queue.popleft()
if task_id in self.tasks:
task = self.tasks[task_id]
if task.should_retry():
task.status = TaskStatus.RUNNING
self.running_tasks.add(task_id)
print(f"Retrying task {task_id} (attempt {task.current_retry + 1})")
return task
# Second priority: pending queue (new tasks)
while self.pending_queue:
task_id = self.pending_queue.popleft()
if task_id in self.tasks:
task = self.tasks[task_id]
if task.status == TaskStatus.PENDING:
task.status = TaskStatus.RUNNING
self.running_tasks.add(task_id)
print(f"Starting task {task_id}")
return task
return None
def complete_task(self, task_id: str, success: bool, data: Any = None,
error: str = None, metadata: Dict[str, Any] = None):
"""Mark task as completed (success or failure)."""
with self._lock:
if task_id not in self.tasks:
return
task = self.tasks[task_id]
self.running_tasks.discard(task_id)
if success:
task.mark_succeeded(data=data, metadata=metadata)
print(f"Task {task_id} succeeded")
else:
task.mark_failed(error or "Unknown error", metadata=metadata)
if task.status == TaskStatus.FAILED_RETRYING:
self.retry_queue.append(task_id)
print(f"Task {task_id} failed, scheduled for retry at {task.next_retry_at}")
else:
print(f"Task {task_id} permanently failed after {task.current_retry} attempts")
def cancel_all_tasks(self):
"""Cancel all pending and running tasks."""
with self._lock:
self._stop_event.set()
for task in self.tasks.values():
if task.status in [TaskStatus.PENDING, TaskStatus.RUNNING, TaskStatus.FAILED_RETRYING]:
task.status = TaskStatus.CANCELLED
self.pending_queue.clear()
self.retry_queue.clear()
self.running_tasks.clear()
print("All tasks cancelled")
def is_complete(self) -> bool:
"""Check if all tasks are complete (succeeded, permanently failed, or cancelled)."""
with self._lock:
for task in self.tasks.values():
if task.status in [TaskStatus.PENDING, TaskStatus.RUNNING, TaskStatus.FAILED_RETRYING]:
return False
return True
def get_statistics(self) -> Dict[str, Any]:
"""Get queue statistics."""
with self._lock:
stats = {
'total_tasks': len(self.tasks),
'pending': len(self.pending_queue),
'running': len(self.running_tasks),
'retry_queue': len(self.retry_queue),
'succeeded': 0,
'failed_permanent': 0,
'cancelled': 0,
'failed_retrying': 0
}
for task in self.tasks.values():
if task.status == TaskStatus.SUCCEEDED:
stats['succeeded'] += 1
elif task.status == TaskStatus.FAILED_PERMANENT:
stats['failed_permanent'] += 1
elif task.status == TaskStatus.CANCELLED:
stats['cancelled'] += 1
elif task.status == TaskStatus.FAILED_RETRYING:
stats['failed_retrying'] += 1
stats['completion_rate'] = (stats['succeeded'] / stats['total_tasks'] * 100) if stats['total_tasks'] > 0 else 0
stats['is_complete'] = self.is_complete()
return stats
def get_task_summaries(self) -> List[Dict[str, Any]]:
"""Get summaries of all tasks for detailed progress reporting."""
with self._lock:
return [task.get_summary() for task in self.tasks.values()]
def get_failed_tasks(self) -> List[ReconTask]:
"""Get all permanently failed tasks for analysis."""
with self._lock:
return [task for task in self.tasks.values() if task.status == TaskStatus.FAILED_PERMANENT]
class TaskExecutor:
"""Executes reconnaissance tasks using providers."""
def __init__(self, providers: List, graph_manager, logger):
"""Initialize task executor."""
self.providers = {provider.get_name(): provider for provider in providers}
self.graph = graph_manager
self.logger = logger
def execute_task(self, task: ReconTask) -> TaskResult:
"""
Execute a single reconnaissance task.
Args:
task: Task to execute
Returns:
TaskResult with success/failure information
"""
try:
print(f"Executing task {task.task_id}: {task.provider_name} query for {task.target}")
provider = self.providers.get(task.provider_name)
if not provider:
return TaskResult(
success=False,
error=f"Provider {task.provider_name} not available"
)
if not provider.is_available():
return TaskResult(
success=False,
error=f"Provider {task.provider_name} is not available (missing API key or configuration)"
)
# Execute provider query based on task type
if task.task_type == TaskType.DOMAIN_QUERY:
if not _is_valid_domain(task.target):
return TaskResult(success=False, error=f"Invalid domain: {task.target}")
relationships = provider.query_domain(task.target)
elif task.task_type == TaskType.IP_QUERY:
if not _is_valid_ip(task.target):
return TaskResult(success=False, error=f"Invalid IP: {task.target}")
relationships = provider.query_ip(task.target)
else:
return TaskResult(success=False, error=f"Unsupported task type: {task.task_type}")
# Process results and update graph
new_targets = set()
relationships_added = 0
for source, target, rel_type, confidence, raw_data in relationships:
# Add nodes to graph
from core.graph_manager import NodeType
if _is_valid_ip(target):
self.graph.add_node(target, NodeType.IP)
new_targets.add(target)
elif target.startswith('AS') and target[2:].isdigit():
self.graph.add_node(target, NodeType.ASN)
elif _is_valid_domain(target):
self.graph.add_node(target, NodeType.DOMAIN)
new_targets.add(target)
# Add edge to graph
if self.graph.add_edge(source, target, rel_type, confidence, task.provider_name, raw_data):
relationships_added += 1
# Log forensic information
self.logger.logger.info(
f"Task {task.task_id} completed: {len(relationships)} relationships found, "
f"{relationships_added} added to graph, {len(new_targets)} new targets"
)
return TaskResult(
success=True,
data={
'relationships': relationships,
'new_targets': list(new_targets),
'relationships_added': relationships_added
},
metadata={
'provider': task.provider_name,
'target': task.target,
'depth': task.depth,
'execution_time': datetime.now(timezone.utc).isoformat()
}
)
except Exception as e:
error_msg = f"Task execution failed: {str(e)}"
print(f"ERROR: {error_msg} for task {task.task_id}")
self.logger.logger.error(error_msg)
return TaskResult(
success=False,
error=error_msg,
metadata={
'provider': task.provider_name,
'target': task.target,
'exception_type': type(e).__name__
}
)
class TaskManager:
"""High-level task management for reconnaissance scans."""
def __init__(self, providers: List, graph_manager, logger, max_concurrent_tasks: int = 5):
"""Initialize task manager."""
self.task_queue = TaskQueue(max_concurrent_tasks)
self.task_executor = TaskExecutor(providers, graph_manager, logger)
self.logger = logger
# Execution control
self._stop_event = threading.Event()
self._execution_threads: List[threading.Thread] = []
self._is_running = False
def create_provider_tasks(self, target: str, depth: int, providers: List) -> List[str]:
"""
Create tasks for querying all eligible providers for a target.
Args:
target: Domain or IP to query
depth: Current recursion depth
providers: List of available providers
Returns:
List of created task IDs
"""
task_ids = []
is_ip = _is_valid_ip(target)
target_key = 'ips' if is_ip else 'domains'
task_type = TaskType.IP_QUERY if is_ip else TaskType.DOMAIN_QUERY
for provider in providers:
if provider.get_eligibility().get(target_key) and provider.is_available():
task = ReconTask(
task_id=str(uuid.uuid4())[:8],
task_type=task_type,
target=target,
provider_name=provider.get_name(),
depth=depth,
max_retries=3 # Configure retries per task type/provider
)
task_id = self.task_queue.add_task(task)
task_ids.append(task_id)
return task_ids
def start_execution(self, max_workers: int = 3):
"""Start task execution with specified number of worker threads."""
if self._is_running:
print("Task execution already running")
return
self._is_running = True
self._stop_event.clear()
print(f"Starting task execution with {max_workers} workers")
for i in range(max_workers):
worker_thread = threading.Thread(
target=self._worker_loop,
name=f"TaskWorker-{i+1}",
daemon=True
)
worker_thread.start()
self._execution_threads.append(worker_thread)
def stop_execution(self):
"""Stop task execution and cancel all tasks."""
print("Stopping task execution")
self._stop_event.set()
self.task_queue.cancel_all_tasks()
self._is_running = False
# Wait for worker threads to finish
for thread in self._execution_threads:
thread.join(timeout=5.0)
self._execution_threads.clear()
print("Task execution stopped")
def _worker_loop(self):
"""Worker thread loop for executing tasks."""
thread_name = threading.current_thread().name
print(f"{thread_name} started")
while not self._stop_event.is_set():
try:
# Get next task to execute
task = self.task_queue.get_next_ready_task()
if task is None:
# No tasks ready, check if we should exit
if self.task_queue.is_complete() or self._stop_event.is_set():
break
time.sleep(0.1) # Brief sleep before checking again
continue
# Execute the task
result = self.task_executor.execute_task(task)
# Complete the task in queue
self.task_queue.complete_task(
task.task_id,
success=result.success,
data=result.data,
error=result.error,
metadata=result.metadata
)
except Exception as e:
print(f"ERROR: Worker {thread_name} encountered error: {e}")
# Continue running even if individual task fails
continue
print(f"{thread_name} finished")
def wait_for_completion(self, timeout_seconds: int = 300) -> bool:
"""
Wait for all tasks to complete.
Args:
timeout_seconds: Maximum time to wait
Returns:
True if all tasks completed, False if timeout
"""
start_time = time.time()
while time.time() - start_time < timeout_seconds:
if self.task_queue.is_complete():
return True
if self._stop_event.is_set():
return False
time.sleep(1.0) # Check every second
print(f"Timeout waiting for task completion after {timeout_seconds} seconds")
return False
def get_progress_report(self) -> Dict[str, Any]:
"""Get detailed progress report for UI updates."""
stats = self.task_queue.get_statistics()
failed_tasks = self.task_queue.get_failed_tasks()
return {
'statistics': stats,
'failed_tasks': [task.get_summary() for task in failed_tasks],
'is_running': self._is_running,
'worker_count': len(self._execution_threads),
'detailed_tasks': self.task_queue.get_task_summaries() if stats['total_tasks'] < 50 else [] # Limit detail for performance
}
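For clarity, a worked example (jitter ignored) of the retry delays ReconTask.calculate_next_retry_time() produces with the defaults defined above; note that mark_failed() increments current_retry before the delay is computed:

```python
base, cap = 1.0, 60.0  # base_backoff_seconds, max_backoff_seconds; max_retries = 3
for current_retry in (1, 2):  # retries still allowed after the 1st and 2nd failures
    print(current_retry, min(cap, base * (2 ** current_retry)))
# 1 2.0  -> second attempt runs roughly 2 seconds after the first failure
# 2 4.0  -> third attempt runs roughly 4 seconds after the second failure
# A third failure makes current_retry == max_retries, so the task goes FAILED_PERMANENT.
```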

BIN
dump.rdb Normal file

Binary file not shown.

View File

@@ -3,15 +3,14 @@ Data provider modules for DNSRecon.
Contains implementations for various reconnaissance data sources.
"""
from .base_provider import BaseProvider
from .base_provider import BaseProvider, RateLimiter
from .crtsh_provider import CrtShProvider
from .dns_provider import DNSProvider
from .shodan_provider import ShodanProvider
from core.rate_limiter import GlobalRateLimiter
__all__ = [
'BaseProvider',
'GlobalRateLimiter',
'RateLimiter',
'CrtShProvider',
'DNSProvider',
'ShodanProvider'

View File

@@ -3,23 +3,175 @@
import time
import requests
import threading
import os
import json
import hashlib
from abc import ABC, abstractmethod
from typing import Dict, Any, Optional
from typing import List, Dict, Any, Optional, Tuple
from datetime import datetime, timezone
from core.logger import get_forensic_logger
from core.rate_limiter import GlobalRateLimiter
from core.provider_result import ProviderResult
class RateLimiter:
"""Thread-safe rate limiter for API calls."""
def __init__(self, requests_per_minute: int):
"""
Initialize rate limiter.
Args:
requests_per_minute: Maximum requests allowed per minute
"""
self.requests_per_minute = requests_per_minute
self.min_interval = 60.0 / requests_per_minute
self.last_request_time = 0
self._lock = threading.Lock()
def __getstate__(self):
"""RateLimiter is fully picklable, return full state."""
state = self.__dict__.copy()
# Exclude unpickleable lock
if '_lock' in state:
del state['_lock']
return state
def __setstate__(self, state):
"""Restore RateLimiter state."""
self.__dict__.update(state)
self._lock = threading.Lock()
def wait_if_needed(self) -> None:
"""Wait if necessary to respect rate limits."""
with self._lock:
current_time = time.time()
time_since_last = current_time - self.last_request_time
if time_since_last < self.min_interval:
sleep_time = self.min_interval - time_since_last
time.sleep(sleep_time)
self.last_request_time = time.time()
class ProviderCache:
"""Thread-safe global cache for provider queries."""
def __init__(self, provider_name: str, cache_expiry_hours: int = 12):
"""
Initialize provider-specific cache.
Args:
provider_name: Name of the provider for cache directory
cache_expiry_hours: Cache expiry time in hours
"""
self.provider_name = provider_name
self.cache_expiry = cache_expiry_hours * 3600 # Convert to seconds
self.cache_dir = os.path.join('.cache', provider_name)
self._lock = threading.Lock()
# Ensure cache directory exists with thread-safe creation
os.makedirs(self.cache_dir, exist_ok=True)
def _generate_cache_key(self, method: str, url: str, params: Optional[Dict[str, Any]]) -> str:
"""Generate unique cache key for request."""
cache_data = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
return hashlib.md5(cache_data.encode()).hexdigest() + ".json"
def get_cached_response(self, method: str, url: str, params: Optional[Dict[str, Any]]) -> Optional[requests.Response]:
"""
Retrieve cached response if available and not expired.
Returns:
Cached Response object or None if cache miss/expired
"""
cache_key = self._generate_cache_key(method, url, params)
cache_path = os.path.join(self.cache_dir, cache_key)
with self._lock:
if not os.path.exists(cache_path):
return None
# Check if cache is expired
cache_age = time.time() - os.path.getmtime(cache_path)
if cache_age >= self.cache_expiry:
try:
os.remove(cache_path)
except OSError:
pass # File might have been removed by another thread
return None
try:
with open(cache_path, 'r', encoding='utf-8') as f:
cached_data = json.load(f)
# Reconstruct Response object
response = requests.Response()
response.status_code = cached_data['status_code']
response._content = cached_data['content'].encode('utf-8')
response.headers.update(cached_data['headers'])
return response
except (json.JSONDecodeError, KeyError, IOError) as e:
# Cache file corrupted, remove it
try:
os.remove(cache_path)
except OSError:
pass
return None
def cache_response(self, method: str, url: str, params: Optional[Dict[str, Any]],
response: requests.Response) -> bool:
"""
Cache successful response to disk.
Returns:
True if cached successfully, False otherwise
"""
if response.status_code != 200:
return False
cache_key = self._generate_cache_key(method, url, params)
cache_path = os.path.join(self.cache_dir, cache_key)
with self._lock:
try:
cache_data = {
'status_code': response.status_code,
'content': response.text,
'headers': dict(response.headers),
'cached_at': datetime.now(timezone.utc).isoformat()
}
# Write to temporary file first, then rename for atomic operation
temp_path = cache_path + '.tmp'
with open(temp_path, 'w', encoding='utf-8') as f:
json.dump(cache_data, f)
# Atomic rename to prevent partial cache files
os.rename(temp_path, cache_path)
return True
except (IOError, OSError) as e:
# Clean up temp file if it exists
try:
if os.path.exists(temp_path):
os.remove(temp_path)
except OSError:
pass
return False
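
The cache rests on two ideas: a deterministic key derived from the method, URL and sorted parameters, and a write-to-temp-then-rename step so readers never observe a half-written file. A small sketch of both, with illustrative helper names:

```python
import hashlib
import json
import os
from typing import Any, Dict, Optional

def cache_key(method: str, url: str, params: Optional[Dict[str, Any]]) -> str:
    # Same key scheme as above: hash of "METHOD:url:sorted-params".
    blob = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
    return hashlib.md5(blob.encode()).hexdigest() + ".json"

def atomic_write(path: str, payload: Dict[str, Any]) -> None:
    tmp = path + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(payload, f)
    os.rename(tmp, path)  # atomic on POSIX: readers see the old or new file, never a partial one

print(cache_key("GET", "https://crt.sh/", {"q": "example.com", "output": "json"}))
```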
class BaseProvider(ABC):
"""
Abstract base class for all DNSRecon data providers.
Now supports session-specific configuration and returns standardized ProviderResult objects.
Now supports global provider-specific caching and session-specific configuration.
"""
def __init__(self, name: str, rate_limit: int = 60, timeout: int = 30, session_config=None):
"""
Initialize base provider with session-specific configuration.
Initialize base provider with global caching and session-specific configuration.
Args:
name: Provider name for logging
@@ -36,28 +188,35 @@ class BaseProvider(ABC):
# Fallback to global config for backwards compatibility
from config import config as global_config
self.config = global_config
actual_rate_limit = rate_limit
actual_timeout = timeout
self.name = name
self.rate_limiter = RateLimiter(actual_rate_limit)
self.timeout = actual_timeout
self._local = threading.local()
self.logger = get_forensic_logger()
self._stop_event = None
# GLOBAL provider-specific caching (not session-based)
self.cache = ProviderCache(name, cache_expiry_hours=12)
# Statistics (per provider instance)
self.total_requests = 0
self.successful_requests = 0
self.failed_requests = 0
self.total_relationships_found = 0
self.cache_hits = 0
self.cache_misses = 0
print(f"Initialized {name} provider with global cache and session config (rate: {actual_rate_limit}/min)")
def __getstate__(self):
"""Prepare BaseProvider for pickling by excluding unpicklable objects."""
state = self.__dict__.copy()
# Exclude the unpickleable '_local' attribute and stop event
unpicklable_attrs = ['_local', '_stop_event']
for attr in unpicklable_attrs:
if attr in state:
del state[attr]
state['_local'] = None
state['_stop_event'] = None
return state
def __setstate__(self, state):
@@ -72,7 +231,7 @@ class BaseProvider(ABC):
if not hasattr(self._local, 'session'):
self._local.session = requests.Session()
self._local.session.headers.update({
'User-Agent': 'DNSRecon/1.0 (Passive Reconnaissance Tool)'
'User-Agent': 'DNSRecon/2.0 (Passive Reconnaissance Tool)'
})
return self._local.session
@@ -102,7 +261,7 @@ class BaseProvider(ABC):
pass
@abstractmethod
def query_domain(self, domain: str) -> ProviderResult:
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query the provider for information about a domain.
@@ -110,12 +269,12 @@ class BaseProvider(ABC):
domain: Domain to investigate
Returns:
ProviderResult containing standardized attributes and relationships
List of tuples: (source_node, target_node, relationship_type, confidence, raw_data)
"""
pass
@abstractmethod
def query_ip(self, ip: str) -> ProviderResult:
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query the provider for information about an IP address.
@@ -123,84 +282,160 @@ class BaseProvider(ABC):
ip: IP address to investigate
Returns:
ProviderResult containing standardized attributes and relationships
List of tuples: (source_node, target_node, relationship_type, confidence, raw_data)
"""
pass
def make_request(self, url: str, method: str = "GET",
params: Optional[Dict[str, Any]] = None,
headers: Optional[Dict[str, str]] = None,
target_indicator: str = "") -> Optional[requests.Response]:
target_indicator: str = "",
max_retries: int = 3) -> Optional[requests.Response]:
"""
Make a rate-limited HTTP request.
Make a rate-limited HTTP request with global caching and aggressive stop signal handling.
"""
# Check for cancellation before starting
if self._is_stop_requested():
print(f"Request cancelled before start: {url}")
return None
start_time = time.time()
response = None
error = None
# Check global cache first
cached_response = self.cache.get_cached_response(method, url, params)
if cached_response is not None:
print(f"Cache hit for {self.name}: {url}")
self.cache_hits += 1
return cached_response
try:
self.total_requests += 1
self.cache_misses += 1
request_headers = dict(self.session.headers).copy()
if headers:
request_headers.update(headers)
# Determine effective max_retries based on stop signal
effective_max_retries = 0 if self._is_stop_requested() else max_retries
last_exception = None
print(f"Making {method} request to: {url}")
for attempt in range(effective_max_retries + 1):
# Check for cancellation before each attempt
if self._is_stop_requested():
print(f"Request cancelled during attempt {attempt + 1}: {url}")
return None
if method.upper() == "GET":
response = self.session.get(
url,
params=params,
headers=request_headers,
timeout=self.timeout
# Apply rate limiting with cancellation awareness
if not self._wait_with_cancellation_check():
print(f"Request cancelled during rate limiting: {url}")
return None
# Final check before making HTTP request
if self._is_stop_requested():
print(f"Request cancelled before HTTP call: {url}")
return None
start_time = time.time()
response = None
error = None
try:
self.total_requests += 1
# Prepare request
request_headers = self.session.headers.copy()
if headers:
request_headers.update(headers)
print(f"Making {method} request to: {url} (attempt {attempt + 1})")
# Use shorter timeout if termination is requested
request_timeout = 2 if self._is_stop_requested() else self.timeout
# Make request
if method.upper() == "GET":
response = self.session.get(
url,
params=params,
headers=request_headers,
timeout=request_timeout
)
elif method.upper() == "POST":
response = self.session.post(
url,
json=params,
headers=request_headers,
timeout=request_timeout
)
else:
raise ValueError(f"Unsupported HTTP method: {method}")
print(f"Response status: {response.status_code}")
response.raise_for_status()
self.successful_requests += 1
# Success - log, cache, and return
duration_ms = (time.time() - start_time) * 1000
self.logger.log_api_request(
provider=self.name,
url=url,
method=method.upper(),
status_code=response.status_code,
response_size=len(response.content),
duration_ms=duration_ms,
error=None,
target_indicator=target_indicator
)
elif method.upper() == "POST":
response = self.session.post(
url,
json=params,
headers=request_headers,
timeout=self.timeout
)
else:
raise ValueError(f"Unsupported HTTP method: {method}")
print(f"Response status: {response.status_code}")
response.raise_for_status()
self.successful_requests += 1
# Cache the successful response globally
self.cache.cache_response(method, url, params, response)
return response
duration_ms = (time.time() - start_time) * 1000
self.logger.log_api_request(
provider=self.name,
url=url,
method=method.upper(),
status_code=response.status_code,
response_size=len(response.content),
duration_ms=duration_ms,
error=None,
target_indicator=target_indicator
)
except requests.exceptions.RequestException as e:
error = str(e)
self.failed_requests += 1
print(f"Request failed (attempt {attempt + 1}): {error}")
last_exception = e
return response
# Immediately abort retries if stop requested
if self._is_stop_requested():
print(f"Stop requested - aborting retries for: {url}")
break
except requests.exceptions.RequestException as e:
error = str(e)
self.failed_requests += 1
duration_ms = (time.time() - start_time) * 1000
self.logger.log_api_request(
provider=self.name,
url=url,
method=method.upper(),
status_code=response.status_code if response else None,
response_size=len(response.content) if response else None,
duration_ms=duration_ms,
error=error,
target_indicator=target_indicator
)
raise e
# Check if we should retry
if attempt < effective_max_retries and self._should_retry(e):
# Exponential backoff with jitter for 429 errors
if isinstance(e, requests.exceptions.HTTPError) and e.response and e.response.status_code == 429:
backoff_time = min(60, 10 * (2 ** attempt))
print(f"Rate limit hit. Retrying in {backoff_time} seconds...")
else:
backoff_time = min(2.0, (2 ** attempt) * 0.5)
print(f"Retrying in {backoff_time} seconds...")
if not self._sleep_with_cancellation_check(backoff_time):
print(f"Stop requested during backoff - aborting: {url}")
return None
continue
else:
break
except Exception as e:
error = f"Unexpected error: {str(e)}"
self.failed_requests += 1
print(f"Unexpected error: {error}")
last_exception = e
break
# All attempts failed - log and return None
duration_ms = (time.time() - start_time) * 1000
self.logger.log_api_request(
provider=self.name,
url=url,
method=method.upper(),
status_code=response.status_code if response else None,
response_size=len(response.content) if response else None,
duration_ms=duration_ms,
error=error,
target_indicator=target_indicator
)
if error and last_exception:
raise last_exception
return None
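
The retry loop above uses two backoff curves: a slow one for HTTP 429 responses and a fast one for other transient failures. A small sketch of the schedule as it appears in the diff (attempt counted from 0):

```python
def backoff_seconds(attempt: int, rate_limited: bool) -> float:
    """Mirror of the backoff arithmetic shown above (illustrative helper)."""
    if rate_limited:                        # HTTP 429: 10s, 20s, 40s, capped at 60s
        return min(60, 10 * (2 ** attempt))
    return min(2.0, (2 ** attempt) * 0.5)   # other retryable errors: 0.5s, 1s, then capped at 2s

for attempt in range(4):
    print(attempt,
          backoff_seconds(attempt, rate_limited=True),
          backoff_seconds(attempt, rate_limited=False))
```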
def _is_stop_requested(self) -> bool:
"""
@@ -210,6 +445,43 @@ class BaseProvider(ABC):
return True
return False
def _wait_with_cancellation_check(self) -> bool:
"""
Wait for rate limiting while aggressively checking for cancellation.
Returns False if cancelled during wait.
"""
current_time = time.time()
time_since_last = current_time - self.rate_limiter.last_request_time
if time_since_last < self.rate_limiter.min_interval:
sleep_time = self.rate_limiter.min_interval - time_since_last
if not self._sleep_with_cancellation_check(sleep_time):
return False
self.rate_limiter.last_request_time = time.time()
return True
def _sleep_with_cancellation_check(self, sleep_time: float) -> bool:
"""
Sleep for the specified time while aggressively checking for cancellation.
Args:
sleep_time: Time to sleep in seconds
Returns:
bool: True if sleep completed, False if cancelled
"""
sleep_start = time.time()
check_interval = 0.05 # Check every 50ms for aggressive responsiveness
while time.time() - sleep_start < sleep_time:
if self._is_stop_requested():
return False
remaining_time = sleep_time - (time.time() - sleep_start)
time.sleep(min(check_interval, remaining_time))
return True
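
The same pattern can be demonstrated in isolation, with a threading.Event standing in for the provider's stop signal. A simplified sketch, not the project's API:

```python
import threading
import time

def sleep_unless_stopped(stop: threading.Event, seconds: float, step: float = 0.05) -> bool:
    """Sleep in ~50 ms slices so a stop request interrupts almost immediately."""
    deadline = time.time() + seconds
    while time.time() < deadline:
        if stop.is_set():
            return False                                     # cancelled mid-sleep
        time.sleep(min(step, max(0.0, deadline - time.time())))
    return True                                              # slept the full duration

stop_event = threading.Event()
threading.Timer(0.2, stop_event.set).start()                 # request cancellation after 200 ms
print("completed:", sleep_unless_stopped(stop_event, 1.0))   # expected: False
```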
def set_stop_event(self, stop_event: threading.Event) -> None:
"""
Set the stop event for this provider to enable cancellation.
@@ -219,6 +491,28 @@ class BaseProvider(ABC):
"""
self._stop_event = stop_event
def _should_retry(self, exception: requests.exceptions.RequestException) -> bool:
"""
Determine if a request should be retried based on the exception.
Args:
exception: The request exception that occurred
Returns:
True if the request should be retried
"""
# Retry on connection errors and timeouts
if isinstance(exception, (requests.exceptions.ConnectionError,
requests.exceptions.Timeout)):
return True
if isinstance(exception, requests.exceptions.HTTPError):
if hasattr(exception, 'response') and exception.response:
# Retry on server errors (5xx) AND on rate-limiting errors (429)
return exception.response.status_code >= 500 or exception.response.status_code == 429
return False
def log_relationship_discovery(self, source_node: str, target_node: str,
relationship_type: str,
confidence_score: float,
@@ -249,7 +543,7 @@ class BaseProvider(ABC):
def get_statistics(self) -> Dict[str, Any]:
"""
Get provider statistics.
Get provider statistics including cache performance.
Returns:
Dictionary containing provider performance metrics
@@ -261,5 +555,8 @@ class BaseProvider(ABC):
'failed_requests': self.failed_requests,
'success_rate': (self.successful_requests / self.total_requests * 100) if self.total_requests > 0 else 0,
'relationships_found': self.total_relationships_found,
'rate_limit': self.config.get_rate_limit(self.name)
'rate_limit': self.rate_limiter.requests_per_minute,
'cache_hits': self.cache_hits,
'cache_misses': self.cache_misses,
'cache_hit_rate': (self.cache_hits / (self.cache_hits + self.cache_misses) * 100) if (self.cache_hits + self.cache_misses) > 0 else 0
}


@@ -1,25 +1,27 @@
# dnsrecon/providers/crtsh_provider.py
"""
Certificate Transparency provider using crt.sh.
Discovers domain relationships through certificate SAN analysis with comprehensive certificate tracking.
Stores certificates as metadata on domain nodes rather than creating certificate nodes.
"""
import json
import re
from pathlib import Path
from typing import List, Dict, Any, Set
from typing import List, Dict, Any, Tuple, Set
from urllib.parse import quote
from datetime import datetime, timezone
import requests
from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_domain
class CrtShProvider(BaseProvider):
"""
Provider for querying crt.sh certificate transparency database.
Now returns standardized ProviderResult objects with caching support.
Now uses session-specific configuration and caching.
"""
def __init__(self, name=None, session_config=None):
def __init__(self, session_config=None):
"""Initialize CrtSh provider with session-specific configuration."""
super().__init__(
name="crtsh",
@@ -30,13 +32,6 @@ class CrtShProvider(BaseProvider):
self.base_url = "https://crt.sh/"
self._stop_event = None
# Initialize cache directory
self.cache_dir = Path('cache') / 'crtsh'
self.cache_dir.mkdir(parents=True, exist_ok=True)
# Compile regex for date filtering for efficiency
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
def get_name(self) -> str:
"""Return the provider name."""
return "crtsh"
@@ -54,302 +49,94 @@ class CrtShProvider(BaseProvider):
return {'domains': True, 'ips': False}
def is_available(self) -> bool:
"""Check if the provider is configured to be used."""
"""
Check if the provider is configured to be used.
This method is intentionally simple and does not perform a network request
to avoid blocking application startup.
"""
return True
def _get_cache_file_path(self, domain: str) -> Path:
"""Generate cache file path for a domain."""
safe_domain = domain.replace('.', '_').replace('/', '_').replace('\\', '_')
return self.cache_dir / f"{safe_domain}.json"
def _get_cache_status(self, cache_file_path: Path) -> str:
def _parse_certificate_date(self, date_string: str) -> datetime:
"""
Check cache status for a domain.
Returns: 'not_found', 'fresh', or 'stale'
"""
if not cache_file_path.exists():
return "not_found"
try:
with open(cache_file_path, 'r') as f:
cache_data = json.load(f)
last_query_str = cache_data.get("last_upstream_query")
if not last_query_str:
return "stale"
last_query = datetime.fromisoformat(last_query_str.replace('Z', '+00:00'))
hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600
cache_timeout = self.config.cache_timeout_hours
if hours_since_query < cache_timeout:
return "fresh"
else:
return "stale"
except (json.JSONDecodeError, ValueError, KeyError) as e:
self.logger.logger.warning(f"Invalid cache file format for {cache_file_path}: {e}")
return "stale"
def query_domain(self, domain: str) -> ProviderResult:
"""
Query crt.sh for certificates containing the domain with caching support.
Parse certificate date from crt.sh format.
Args:
domain: Domain to investigate
date_string: Date string from crt.sh API
Returns:
ProviderResult containing discovered relationships and attributes
Parsed datetime object in UTC
"""
if not _is_valid_domain(domain):
return ProviderResult()
if self._stop_event and self._stop_event.is_set():
return ProviderResult()
cache_file = self._get_cache_file_path(domain)
cache_status = self._get_cache_status(cache_file)
processed_certificates = []
result = ProviderResult()
if not date_string:
raise ValueError("Empty date string")
try:
if cache_status == "fresh":
result = self._load_from_cache(cache_file)
self.logger.logger.info(f"Using cached crt.sh data for {domain}")
else: # "stale" or "not_found"
raw_certificates = self._query_crtsh_api(domain)
if self._stop_event and self._stop_event.is_set():
return ProviderResult()
# Process raw data into the application's expected format
current_processed_certs = [self._extract_certificate_metadata(cert) for cert in raw_certificates]
if cache_status == "stale":
# Load existing and append new processed certs
existing_result = self._load_from_cache(cache_file)
result = self._merge_results(existing_result, current_processed_certs, domain)
self.logger.logger.info(f"Refreshed and merged cache for {domain}")
else: # "not_found"
# Create new result from processed certs
result = self._process_certificates_to_result(domain, raw_certificates)
self.logger.logger.info(f"Created fresh result for {domain} ({result.get_relationship_count()} relationships)")
# Save the result to cache
self._save_result_to_cache(cache_file, result, domain)
except requests.exceptions.RequestException as e:
self.logger.logger.error(f"API query failed for {domain}: {e}")
if cache_status != "not_found":
result = self._load_from_cache(cache_file)
self.logger.logger.warning(f"Using stale cache for {domain} due to API failure.")
# Handle various possible formats from crt.sh
if date_string.endswith('Z'):
return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
elif '+' in date_string or date_string.endswith('UTC'):
# Handle timezone-aware strings
date_string = date_string.replace('UTC', '').strip()
if '+' in date_string:
date_string = date_string.split('+')[0]
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
else:
raise e # Re-raise if there's no cache to fall back on
return result
def query_ip(self, ip: str) -> ProviderResult:
"""
Query crt.sh for certificates containing the IP address.
Note: crt.sh doesn't typically index by IP, so this returns empty results.
Args:
ip: IP address to investigate
Returns:
Empty ProviderResult (crt.sh doesn't support IP-based certificate queries effectively)
"""
return ProviderResult()
def _load_from_cache(self, cache_file_path: Path) -> ProviderResult:
"""Load processed crt.sh data from a cache file."""
try:
with open(cache_file_path, 'r') as f:
cache_content = json.load(f)
result = ProviderResult()
# Reconstruct relationships
for rel_data in cache_content.get("relationships", []):
result.add_relationship(
source_node=rel_data["source_node"],
target_node=rel_data["target_node"],
relationship_type=rel_data["relationship_type"],
provider=rel_data["provider"],
confidence=rel_data["confidence"],
raw_data=rel_data.get("raw_data", {})
)
# Reconstruct attributes
for attr_data in cache_content.get("attributes", []):
result.add_attribute(
target_node=attr_data["target_node"],
name=attr_data["name"],
value=attr_data["value"],
attr_type=attr_data["type"],
provider=attr_data["provider"],
confidence=attr_data["confidence"],
metadata=attr_data.get("metadata", {})
)
return result
except (json.JSONDecodeError, FileNotFoundError, KeyError) as e:
self.logger.logger.error(f"Failed to load cached certificates from {cache_file_path}: {e}")
return ProviderResult()
def _save_result_to_cache(self, cache_file_path: Path, result: ProviderResult, domain: str) -> None:
"""Save processed crt.sh result to a cache file."""
try:
cache_data = {
"domain": domain,
"last_upstream_query": datetime.now(timezone.utc).isoformat(),
"relationships": [
{
"source_node": rel.source_node,
"target_node": rel.target_node,
"relationship_type": rel.relationship_type,
"confidence": rel.confidence,
"provider": rel.provider,
"raw_data": rel.raw_data
} for rel in result.relationships
],
"attributes": [
{
"target_node": attr.target_node,
"name": attr.name,
"value": attr.value,
"type": attr.type,
"provider": attr.provider,
"confidence": attr.confidence,
"metadata": attr.metadata
} for attr in result.attributes
]
}
cache_file_path.parent.mkdir(parents=True, exist_ok=True)
with open(cache_file_path, 'w') as f:
json.dump(cache_data, f, separators=(',', ':'), default=str)
# Assume UTC if no timezone specified
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
except Exception as e:
self.logger.logger.warning(f"Failed to save cache file for {domain}: {e}")
# Fallback: try parsing without timezone info and assume UTC
try:
return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
except Exception:
raise ValueError(f"Unable to parse date: {date_string}") from e
def _merge_results(self, existing_result: ProviderResult, new_certificates: List[Dict[str, Any]], domain: str) -> ProviderResult:
"""Merge new certificate data with existing cached result."""
# Create a fresh result from the new certificates
new_result = self._process_certificates_to_result(domain, new_certificates)
# Simple merge strategy: combine all relationships and attributes
# In practice, you might want more sophisticated deduplication
merged_result = ProviderResult()
# Add existing relationships and attributes
merged_result.relationships.extend(existing_result.relationships)
merged_result.attributes.extend(existing_result.attributes)
# Add new relationships and attributes
merged_result.relationships.extend(new_result.relationships)
merged_result.attributes.extend(new_result.attributes)
return merged_result
def _query_crtsh_api(self, domain: str) -> List[Dict[str, Any]]:
"""Query crt.sh API for raw certificate data."""
url = f"{self.base_url}?q={quote(domain)}&output=json"
response = self.make_request(url, target_indicator=domain)
if not response or response.status_code != 200:
raise requests.exceptions.RequestException(f"crt.sh API returned status {response.status_code if response else 'None'}")
certificates = response.json()
if not certificates:
return []
return certificates
def _process_certificates_to_result(self, domain: str, certificates: List[Dict[str, Any]]) -> ProviderResult:
def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
"""
Process certificates to create ProviderResult with relationships and attributes.
Check if a certificate is currently valid based on its expiry date.
Args:
cert_data: Certificate data from crt.sh
Returns:
True if certificate is currently valid (not expired)
"""
result = ProviderResult()
try:
not_after_str = cert_data.get('not_after')
if not not_after_str:
return False
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh processing cancelled before processing for domain: {domain}")
return result
not_after_date = self._parse_certificate_date(not_after_str)
not_before_str = cert_data.get('not_before')
all_discovered_domains = set()
now = datetime.now(timezone.utc)
for i, cert_data in enumerate(certificates):
if i % 5 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh processing cancelled at certificate {i} for domain: {domain}")
break
# Check if certificate is within valid date range
is_not_expired = not_after_date > now
cert_domains = self._extract_domains_from_certificate(cert_data)
all_discovered_domains.update(cert_domains)
if not_before_str:
not_before_date = self._parse_certificate_date(not_before_str)
is_not_before_valid = not_before_date <= now
return is_not_expired and is_not_before_valid
for cert_domain in cert_domains:
if not _is_valid_domain(cert_domain):
continue
return is_not_expired
for key, value in self._extract_certificate_metadata(cert_data).items():
if value is not None:
result.add_attribute(
target_node=cert_domain,
name=f"cert_{key}",
value=value,
attr_type='certificate_data',
provider=self.name,
confidence=0.9
)
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled before relationship creation for domain: {domain}")
return result
for i, discovered_domain in enumerate(all_discovered_domains):
if discovered_domain == domain:
continue
if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh relationship creation cancelled for domain: {domain}")
break
if not _is_valid_domain(discovered_domain):
continue
confidence = self._calculate_domain_relationship_confidence(
domain, discovered_domain, [], all_discovered_domains
)
result.add_relationship(
source_node=domain,
target_node=discovered_domain,
relationship_type='san_certificate',
provider=self.name,
confidence=confidence,
raw_data={'relationship_type': 'certificate_discovery'}
)
self.log_relationship_discovery(
source_node=domain,
target_node=discovered_domain,
relationship_type='san_certificate',
confidence_score=confidence,
raw_data={'relationship_type': 'certificate_discovery'},
discovery_method="certificate_transparency_analysis"
)
return result
except Exception as e:
self.logger.logger.debug(f"Certificate validity check failed: {e}")
return False
def _extract_certificate_metadata(self, cert_data: Dict[str, Any]) -> Dict[str, Any]:
"""Extract comprehensive metadata from certificate data."""
raw_issuer_name = cert_data.get('issuer_name', '')
parsed_issuer_name = self._parse_issuer_organization(raw_issuer_name)
"""
Extract comprehensive metadata from certificate data.
Args:
cert_data: Raw certificate data from crt.sh
Returns:
Comprehensive certificate metadata dictionary
"""
metadata = {
'certificate_id': cert_data.get('id'),
'serial_number': cert_data.get('serial_number'),
'issuer_name': parsed_issuer_name,
'issuer_name': cert_data.get('issuer_name'),
'issuer_ca_id': cert_data.get('issuer_ca_id'),
'common_name': cert_data.get('common_name'),
'not_before': cert_data.get('not_before'),
@@ -367,6 +154,7 @@ class CrtShProvider(BaseProvider):
metadata['is_currently_valid'] = self._is_cert_valid(cert_data)
metadata['expires_soon'] = (not_after - datetime.now(timezone.utc)).days <= 30
# Add human-readable dates
metadata['not_before'] = not_before.strftime('%Y-%m-%d %H:%M:%S UTC')
metadata['not_after'] = not_after.strftime('%Y-%m-%d %H:%M:%S UTC')
@@ -377,144 +165,178 @@ class CrtShProvider(BaseProvider):
return metadata
def _parse_issuer_organization(self, issuer_dn: str) -> str:
"""Parse the issuer Distinguished Name to extract just the organization name."""
if not issuer_dn:
return issuer_dn
try:
components = [comp.strip() for comp in issuer_dn.split(',')]
for component in components:
if component.startswith('O='):
org_name = component[2:].strip()
if org_name.startswith('"') and org_name.endswith('"'):
org_name = org_name[1:-1]
return org_name
return issuer_dn
except Exception as e:
self.logger.logger.debug(f"Failed to parse issuer DN '{issuer_dn}': {e}")
return issuer_dn
def _parse_certificate_date(self, date_string: str) -> datetime:
"""Parse certificate date from crt.sh format."""
if not date_string:
raise ValueError("Empty date string")
try:
if date_string.endswith('Z'):
return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
elif '+' in date_string or date_string.endswith('UTC'):
date_string = date_string.replace('UTC', '').strip()
if '+' in date_string:
date_string = date_string.split('+')[0]
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
else:
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
except Exception as e:
try:
return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
except Exception:
raise ValueError(f"Unable to parse date: {date_string}") from e
def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
"""Check if a certificate is currently valid based on its expiry date."""
try:
not_after_str = cert_data.get('not_after')
if not not_after_str:
return False
not_after_date = self._parse_certificate_date(not_after_str)
not_before_str = cert_data.get('not_before')
now = datetime.now(timezone.utc)
is_not_expired = not_after_date > now
if not_before_str:
not_before_date = self._parse_certificate_date(not_before_str)
is_not_before_valid = not_before_date <= now
return is_not_expired and is_not_before_valid
return is_not_expired
except Exception as e:
self.logger.logger.debug(f"Certificate validity check failed: {e}")
return False
def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
"""Extract all domains from certificate data."""
domains = set()
# Extract from common name
common_name = cert_data.get('common_name', '')
if common_name:
cleaned_cn = self._clean_domain_name(common_name)
if cleaned_cn:
domains.update(cleaned_cn)
# Extract from name_value field (contains SANs)
name_value = cert_data.get('name_value', '')
if name_value:
for line in name_value.split('\n'):
cleaned_domains = self._clean_domain_name(line.strip())
if cleaned_domains:
domains.update(cleaned_domains)
return domains
def _clean_domain_name(self, domain_name: str) -> List[str]:
"""Clean and normalize domain name from certificate data."""
if not domain_name:
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query crt.sh for certificates containing the domain.
"""
if not _is_valid_domain(domain):
return []
domain = domain_name.strip().lower()
# Check for cancellation before starting
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled before start for domain: {domain}")
return []
if domain.startswith(('http://', 'https://')):
domain = domain.split('://', 1)[1]
relationships = []
if '/' in domain:
domain = domain.split('/', 1)[0]
try:
# Query crt.sh for certificates
url = f"{self.base_url}?q={quote(domain)}&output=json"
response = self.make_request(url, target_indicator=domain, max_retries=3)
if ':' in domain and not domain.count(':') > 1:
domain = domain.split(':', 1)[0]
if not response or response.status_code != 200:
return []
cleaned_domains = []
if domain.startswith('*.'):
cleaned_domains.append(domain)
cleaned_domains.append(domain[2:])
else:
cleaned_domains.append(domain)
# Check for cancellation after request
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled after request for domain: {domain}")
return []
final_domains = []
for d in cleaned_domains:
d = re.sub(r'[^\w\-\.]', '', d)
if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-')):
final_domains.append(d)
certificates = response.json()
return [d for d in final_domains if _is_valid_domain(d)]
if not certificates:
return []
# Check for cancellation before processing
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled before processing for domain: {domain}")
return []
# Aggregate certificate data by domain
domain_certificates = {}
all_discovered_domains = set()
# Process certificates with cancellation checking
for i, cert_data in enumerate(certificates):
# Check for cancellation every 5 certificates instead of 10 for faster response
if i % 5 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh processing cancelled at certificate {i} for domain: {domain}")
break
cert_metadata = self._extract_certificate_metadata(cert_data)
cert_domains = self._extract_domains_from_certificate(cert_data)
# Add all domains from this certificate to our tracking
for cert_domain in cert_domains:
# Additional stop check during domain processing
if i % 20 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh domain processing cancelled for domain: {domain}")
break
if not _is_valid_domain(cert_domain):
continue
all_discovered_domains.add(cert_domain)
# Initialize domain certificate list if needed
if cert_domain not in domain_certificates:
domain_certificates[cert_domain] = []
# Add this certificate to the domain's certificate list
domain_certificates[cert_domain].append(cert_metadata)
# Final cancellation check before creating relationships
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled before relationship creation for domain: {domain}")
return []
# Create relationships from query domain to ALL discovered domains with stop checking
for i, discovered_domain in enumerate(all_discovered_domains):
if discovered_domain == domain:
continue # Skip self-relationships
# Check for cancellation every 10 relationships
if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh relationship creation cancelled for domain: {domain}")
break
if not _is_valid_domain(discovered_domain):
continue
# Get certificates for both domains
query_domain_certs = domain_certificates.get(domain, [])
discovered_domain_certs = domain_certificates.get(discovered_domain, [])
# Find shared certificates (for metadata purposes)
shared_certificates = self._find_shared_certificates(query_domain_certs, discovered_domain_certs)
# Calculate confidence based on relationship type and shared certificates
confidence = self._calculate_domain_relationship_confidence(
domain, discovered_domain, shared_certificates, all_discovered_domains
)
# Create comprehensive raw data for the relationship
relationship_raw_data = {
'relationship_type': 'certificate_discovery',
'shared_certificates': shared_certificates,
'total_shared_certs': len(shared_certificates),
'discovery_context': self._determine_relationship_context(discovered_domain, domain),
'domain_certificates': {
domain: self._summarize_certificates(query_domain_certs),
discovered_domain: self._summarize_certificates(discovered_domain_certs)
}
}
# Create domain -> domain relationship
relationships.append((
domain,
discovered_domain,
'san_certificate',
confidence,
relationship_raw_data
))
# Log the relationship discovery
self.log_relationship_discovery(
source_node=domain,
target_node=discovered_domain,
relationship_type='san_certificate',
confidence_score=confidence,
raw_data=relationship_raw_data,
discovery_method="certificate_transparency_analysis"
)
except json.JSONDecodeError as e:
self.logger.logger.error(f"Failed to parse JSON response from crt.sh: {e}")
except requests.exceptions.RequestException as e:
self.logger.logger.error(f"HTTP request to crt.sh failed: {e}")
return relationships
def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Find certificates that are shared between two domain certificate lists."""
"""
Find certificates that are shared between two domain certificate lists.
Args:
certs1: First domain's certificates
certs2: Second domain's certificates
Returns:
List of shared certificate metadata
"""
shared = []
cert1_ids = set()
for cert in certs1:
cert_id = cert.get('certificate_id')
if cert_id and isinstance(cert_id, (int, str, float, bool, tuple)):
cert1_ids.add(cert_id)
# Create a set of certificate IDs from the first list for quick lookup
cert1_ids = {cert.get('certificate_id') for cert in certs1 if cert.get('certificate_id')}
# Find certificates in the second list that match
for cert in certs2:
cert_id = cert.get('certificate_id')
if cert_id and isinstance(cert_id, (int, str, float, bool, tuple)):
if cert_id in cert1_ids:
shared.append(cert)
if cert.get('certificate_id') in cert1_ids:
shared.append(cert)
return shared
def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
"""Create a summary of certificates for a domain."""
"""
Create a summary of certificates for a domain.
Args:
certificates: List of certificate metadata
Returns:
Summary dictionary with aggregate statistics
"""
if not certificates:
return {
'total_certificates': 0,
@@ -523,14 +345,14 @@ class CrtShProvider(BaseProvider):
'expires_soon_count': 0,
'unique_issuers': [],
'latest_certificate': None,
'has_valid_cert': False,
'certificate_details': []
'has_valid_cert': False
}
valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
expired_count = len(certificates) - valid_count
expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))
# Get unique issuers
unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))
# Find the most recent certificate
@@ -547,13 +369,6 @@ class CrtShProvider(BaseProvider):
except Exception:
continue
# Sort certificates by date for better display (newest first)
sorted_certificates = sorted(
certificates,
key=lambda c: self._get_certificate_sort_date(c),
reverse=True
)
return {
'total_certificates': len(certificates),
'valid_certificates': valid_count,
@@ -562,40 +377,37 @@ class CrtShProvider(BaseProvider):
'unique_issuers': unique_issuers,
'latest_certificate': latest_cert,
'has_valid_cert': valid_count > 0,
'certificate_details': sorted_certificates
'certificate_details': certificates # Full details for forensic analysis
}
def _get_certificate_sort_date(self, cert: Dict[str, Any]) -> datetime:
"""Get a sortable date from certificate data for chronological ordering."""
try:
if cert.get('not_before'):
return self._parse_certificate_date(cert['not_before'])
if cert.get('entry_timestamp'):
return self._parse_certificate_date(cert['entry_timestamp'])
return datetime(1970, 1, 1, tzinfo=timezone.utc)
except Exception:
return datetime(1970, 1, 1, tzinfo=timezone.utc)
def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str,
shared_certificates: List[Dict[str, Any]],
all_discovered_domains: Set[str]) -> float:
"""Calculate confidence score for domain relationship based on various factors."""
"""
Calculate confidence score for domain relationship based on various factors.
Args:
domain1: Source domain (query domain)
domain2: Target domain (discovered domain)
shared_certificates: List of shared certificate metadata
all_discovered_domains: All domains discovered in this query
Returns:
Confidence score between 0.0 and 1.0
"""
base_confidence = 0.9
# Adjust confidence based on domain relationship context
relationship_context = self._determine_relationship_context(domain2, domain1)
if relationship_context == 'exact_match':
context_bonus = 0.0
context_bonus = 0.0 # This shouldn't happen, but just in case
elif relationship_context == 'subdomain':
context_bonus = 0.1
context_bonus = 0.1 # High confidence for subdomains
elif relationship_context == 'parent_domain':
context_bonus = 0.05
context_bonus = 0.05 # Medium confidence for parent domains
else:
context_bonus = 0.0
context_bonus = 0.0 # Related domains get base confidence
# Adjust confidence based on shared certificates
if shared_certificates:
@@ -607,16 +419,18 @@ class CrtShProvider(BaseProvider):
else:
shared_bonus = 0.02
# Additional bonus for valid shared certificates
valid_shared = sum(1 for cert in shared_certificates if cert.get('is_currently_valid'))
if valid_shared > 0:
validity_bonus = 0.05
else:
validity_bonus = 0.0
else:
# Even without shared certificates, domains found in the same query have some relationship
shared_bonus = 0.0
validity_bonus = 0.0
# Adjust confidence based on certificate issuer reputation
# Adjust confidence based on certificate issuer reputation (if shared certificates exist)
issuer_bonus = 0.0
if shared_certificates:
for cert in shared_certificates:
@@ -625,11 +439,21 @@ class CrtShProvider(BaseProvider):
issuer_bonus = max(issuer_bonus, 0.03)
break
# Calculate final confidence
final_confidence = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
return max(0.1, min(1.0, final_confidence))
return max(0.1, min(1.0, final_confidence)) # Clamp between 0.1 and 1.0
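
A worked example of the scoring, using the values visible in the diff (base 0.9, subdomain bonus 0.1, the smallest shared-certificate bonus 0.02, validity bonus 0.05); the exact tier thresholds for the shared-certificate bonus are cut off by the hunk, so treat them as assumptions.

```python
base_confidence = 0.9
context_bonus   = 0.1    # discovered domain is a subdomain of the query domain
shared_bonus    = 0.02   # smallest shared-certificate tier shown in the diff
validity_bonus  = 0.05   # at least one shared certificate is still valid
issuer_bonus    = 0.0

final = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
print(max(0.1, min(1.0, final)))   # 1.07 before clamping -> 1.0
```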
def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
"""Determine the context of the relationship between certificate domain and query domain."""
"""
Determine the context of the relationship between certificate domain and query domain.
Args:
cert_domain: Domain found in certificate
query_domain: Original query domain
Returns:
String describing the relationship context
"""
if cert_domain == query_domain:
return 'exact_match'
elif cert_domain.endswith(f'.{query_domain}'):
@@ -638,3 +462,87 @@ class CrtShProvider(BaseProvider):
return 'parent_domain'
else:
return 'related_domain'
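
The context test itself reduces to suffix comparisons. A standalone sketch; the parent_domain branch is partly hidden by the hunk header above, so the symmetric endswith check here is an assumption:

```python
def relationship_context(cert_domain: str, query_domain: str) -> str:
    if cert_domain == query_domain:
        return "exact_match"
    if cert_domain.endswith(f".{query_domain}"):
        return "subdomain"
    if query_domain.endswith(f".{cert_domain}"):   # assumed parent_domain test
        return "parent_domain"
    return "related_domain"

print(relationship_context("mail.example.com", "example.com"))   # subdomain
print(relationship_context("example.com", "mail.example.com"))   # parent_domain
print(relationship_context("example.org", "example.com"))        # related_domain
```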
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query crt.sh for certificates containing the IP address.
Note: crt.sh doesn't typically index by IP, so this returns empty results.
Args:
ip: IP address to investigate
Returns:
Empty list (crt.sh doesn't support IP-based certificate queries effectively)
"""
# crt.sh doesn't effectively support IP-based certificate queries
return []
def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
"""
Extract all domains from certificate data.
Args:
cert_data: Certificate data from crt.sh API
Returns:
Set of unique domain names found in the certificate
"""
domains = set()
# Extract from common name
common_name = cert_data.get('common_name', '')
if common_name:
cleaned_cn = self._clean_domain_name(common_name)
if cleaned_cn:
domains.update(cleaned_cn)
# Extract from name_value field (contains SANs)
name_value = cert_data.get('name_value', '')
if name_value:
# Split by newlines and clean each domain
for line in name_value.split('\n'):
cleaned_domains = self._clean_domain_name(line.strip())
if cleaned_domains:
domains.update(cleaned_domains)
return domains
def _clean_domain_name(self, domain_name: str) -> List[str]:
"""
Clean and normalize domain name from certificate data.
Now returns a list to handle wildcards correctly.
"""
if not domain_name:
return []
domain = domain_name.strip().lower()
# Remove protocol if present
if domain.startswith(('http://', 'https://')):
domain = domain.split('://', 1)[1]
# Remove path if present
if '/' in domain:
domain = domain.split('/', 1)[0]
# Remove port if present
if ':' in domain and not domain.count(':') > 1: # Avoid breaking IPv6
domain = domain.split(':', 1)[0]
# Handle wildcard domains
cleaned_domains = []
if domain.startswith('*.'):
# Add both the wildcard and the base domain
cleaned_domains.append(domain)
cleaned_domains.append(domain[2:])
else:
cleaned_domains.append(domain)
# Remove any remaining invalid characters and validate
final_domains = []
for d in cleaned_domains:
d = re.sub(r'[^\w\-\.]', '', d)
if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-')):
final_domains.append(d)
return [d for d in final_domains if _is_valid_domain(d)]
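
A condensed, standalone version of the SAN-cleaning rules above (lower-case, strip scheme, path and port, expand a leading wildcard, then apply the character filter); the final _is_valid_domain validation is omitted, so the output is only an approximation of the provider's behaviour.

```python
import re
from typing import List

def clean_domain_name(raw: str) -> List[str]:
    domain = raw.strip().lower()
    if domain.startswith(("http://", "https://")):
        domain = domain.split("://", 1)[1]
    domain = domain.split("/", 1)[0]                      # drop any path
    if ":" in domain and domain.count(":") == 1:          # drop a port, keep IPv6 intact
        domain = domain.split(":", 1)[0]
    candidates = [domain, domain[2:]] if domain.startswith("*.") else [domain]
    cleaned = [re.sub(r"[^\w\-\.]", "", d) for d in candidates]
    return [d for d in cleaned
            if d and not d.startswith((".", "-")) and not d.endswith((".", "-"))]

print(clean_domain_name("https://Sub.Example.COM:8443/login"))  # ['sub.example.com']
print(clean_domain_name("*.example.com"))                       # ['example.com'] after the '*' is stripped
```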


@@ -1,19 +1,19 @@
# dnsrecon/providers/dns_provider.py
from dns import resolver, reversename
from typing import Dict
import dns.resolver
import dns.reversename
from typing import List, Dict, Any, Tuple
from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_ip, _is_valid_domain
class DNSProvider(BaseProvider):
"""
Provider for standard DNS resolution and reverse DNS lookups.
Now returns standardized ProviderResult objects.
Now uses session-specific configuration.
"""
def __init__(self, name=None, session_config=None):
def __init__(self, session_config=None):
"""Initialize DNS provider with session-specific configuration."""
super().__init__(
name="dns",
@@ -23,9 +23,10 @@ class DNSProvider(BaseProvider):
)
# Configure DNS resolver
self.resolver = resolver.Resolver()
self.resolver = dns.resolver.Resolver()
self.resolver.timeout = 5
self.resolver.lifetime = 10
#self.resolver.nameservers = ['127.0.0.1']
def get_name(self) -> str:
"""Return the provider name."""
@@ -47,35 +48,28 @@ class DNSProvider(BaseProvider):
"""DNS is always available - no API key required."""
return True
def query_domain(self, domain: str) -> ProviderResult:
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query DNS records for the domain to discover relationships and attributes.
Query DNS records for the domain to discover relationships.
Args:
domain: Domain to investigate
Returns:
ProviderResult containing discovered relationships and attributes
List of relationships discovered from DNS analysis
"""
if not _is_valid_domain(domain):
return ProviderResult()
return []
result = ProviderResult()
relationships = []
# Query all record types
for record_type in ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SOA', 'TXT', 'SRV', 'CAA']:
try:
self._query_record(domain, record_type, result)
except resolver.NoAnswer:
# This is not an error, just a confirmation that the record doesn't exist.
self.logger.logger.debug(f"No {record_type} record found for {domain}")
except Exception as e:
self.failed_requests += 1
self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
relationships.extend(self._query_record(domain, record_type))
return result
return relationships
def query_ip(self, ip: str) -> ProviderResult:
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query reverse DNS for the IP address.
@@ -83,17 +77,17 @@ class DNSProvider(BaseProvider):
ip: IP address to investigate
Returns:
ProviderResult containing discovered relationships and attributes
List of relationships discovered from reverse DNS
"""
if not _is_valid_ip(ip):
return ProviderResult()
return []
result = ProviderResult()
relationships = []
try:
# Perform reverse DNS lookup
self.total_requests += 1
reverse_name = reversename.from_address(ip)
reverse_name = dns.reversename.from_address(ip)
response = self.resolver.resolve(reverse_name, 'PTR')
self.successful_requests += 1
@@ -101,74 +95,46 @@ class DNSProvider(BaseProvider):
hostname = str(ptr_record).rstrip('.')
if _is_valid_domain(hostname):
# Add the relationship
result.add_relationship(
source_node=ip,
target_node=hostname,
relationship_type='ptr_record',
provider=self.name,
confidence=0.8,
raw_data={
'query_type': 'PTR',
'ip_address': ip,
'hostname': hostname,
'ttl': response.ttl
}
)
raw_data = {
'query_type': 'PTR',
'ip_address': ip,
'hostname': hostname,
'ttl': response.ttl
}
# Add PTR record as attribute to the IP
result.add_attribute(
target_node=ip,
name='ptr_record',
value=hostname,
attr_type='dns_record',
provider=self.name,
confidence=0.8,
metadata={'ttl': response.ttl}
)
relationships.append((
ip,
hostname,
'ptr_record',
0.8,
raw_data
))
# Log the relationship discovery
self.log_relationship_discovery(
source_node=ip,
target_node=hostname,
relationship_type='ptr_record',
confidence_score=0.8,
raw_data={
'query_type': 'PTR',
'ip_address': ip,
'hostname': hostname,
'ttl': response.ttl
},
raw_data=raw_data,
discovery_method="reverse_dns_lookup"
)
except resolver.NXDOMAIN:
self.failed_requests += 1
self.logger.logger.debug(f"Reverse DNS lookup failed for {ip}: NXDOMAIN")
except Exception as e:
self.failed_requests += 1
self.logger.logger.debug(f"Reverse DNS lookup failed for {ip}: {e}")
# Re-raise the exception so the scanner can handle the failure
raise e
return result
return relationships
def _query_record(self, domain: str, record_type: str, result: ProviderResult) -> None:
def _query_record(self, domain: str, record_type: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query a specific type of DNS record for the domain and add results to ProviderResult.
Args:
domain: Domain to query
record_type: DNS record type (A, AAAA, CNAME, etc.)
result: ProviderResult to populate
Query a specific type of DNS record for the domain.
"""
relationships = []
try:
self.total_requests += 1
response = self.resolver.resolve(domain, record_type)
self.successful_requests += 1
dns_records = []
for record in response:
target = ""
if record_type in ['A', 'AAAA']:
@@ -180,16 +146,12 @@ class DNSProvider(BaseProvider):
elif record_type == 'SOA':
target = str(record.mname).rstrip('.')
elif record_type in ['TXT']:
# TXT records are treated as attributes, not relationships
txt_value = str(record).strip('"')
dns_records.append(f"TXT: {txt_value}")
# TXT records are treated as metadata, not relationships.
continue
elif record_type == 'SRV':
target = str(record.target).rstrip('.')
elif record_type == 'CAA':
caa_value = f"{record.flags} {record.tag.decode('utf-8')} \"{record.value.decode('utf-8')}\""
dns_records.append(f"CAA: {caa_value}")
continue
target = f"{record.flags} {record.tag.decode('utf-8')} \"{record.value.decode('utf-8')}\""
else:
target = str(record)
@@ -201,22 +163,16 @@ class DNSProvider(BaseProvider):
'ttl': response.ttl
}
relationship_type = f"{record_type.lower()}_record"
confidence = 0.8 # Standard confidence for DNS records
confidence = 0.8 # Default confidence for DNS records
# Add relationship
result.add_relationship(
source_node=domain,
target_node=target,
relationship_type=relationship_type,
provider=self.name,
confidence=confidence,
raw_data=raw_data
)
relationships.append((
domain,
target,
relationship_type,
confidence,
raw_data
))
# Add DNS record as attribute to the source domain
dns_records.append(f"{record_type}: {target}")
# Log relationship discovery
self.log_relationship_discovery(
source_node=domain,
target_node=target,
@@ -226,20 +182,8 @@ class DNSProvider(BaseProvider):
discovery_method=f"dns_{record_type.lower()}_record"
)
# Add DNS records as a consolidated attribute
if dns_records:
result.add_attribute(
target_node=domain,
name='dns_records',
value=dns_records,
attr_type='dns_record_list',
provider=self.name,
confidence=0.8,
metadata={'record_types': [record_type]}
)
except Exception as e:
self.failed_requests += 1
self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
# Re-raise the exception so the scanner can handle it
raise e
return relationships
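
The record walk can be reproduced with dnspython directly. A minimal sketch (performs live DNS lookups; record types trimmed for brevity):

```python
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.timeout = 5
resolver.lifetime = 10

for rtype in ("A", "MX", "NS", "TXT"):
    try:
        answer = resolver.resolve("example.com", rtype)
        for record in answer:
            print(f"{rtype}: {str(record).rstrip('.')}  (ttl={answer.rrset.ttl})")
    except dns.resolver.NoAnswer:
        pass  # the record type simply is not present for this name
```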


@@ -1,23 +1,21 @@
# dnsrecon/providers/shodan_provider.py
"""
Shodan provider for DNSRecon.
Discovers IP relationships and infrastructure context through Shodan API.
"""
import json
from pathlib import Path
from typing import Dict, Any
from datetime import datetime, timezone
import requests
from typing import List, Dict, Any, Tuple
from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_ip, _is_valid_domain
class ShodanProvider(BaseProvider):
"""
Provider for querying Shodan API for IP address information.
Now returns standardized ProviderResult objects with caching support.
Provider for querying Shodan API for IP address and hostname information.
Now uses session-specific API keys.
"""
def __init__(self, name=None, session_config=None):
def __init__(self, session_config=None):
"""Initialize Shodan provider with session-specific configuration."""
super().__init__(
name="shodan",
@@ -28,10 +26,6 @@ class ShodanProvider(BaseProvider):
self.base_url = "https://api.shodan.io"
self.api_key = self.config.get_api_key('shodan')
# Initialize cache directory
self.cache_dir = Path('cache') / 'shodan'
self.cache_dir.mkdir(parents=True, exist_ok=True)
def is_available(self) -> bool:
"""Check if Shodan provider is available (has valid API key in this session)."""
return self.api_key is not None and len(self.api_key.strip()) > 0
@@ -42,7 +36,7 @@ class ShodanProvider(BaseProvider):
def get_display_name(self) -> str:
"""Return the provider display name for the UI."""
return "Shodan"
return "shodan"
def requires_api_key(self) -> bool:
"""Return True if the provider requires an API key."""
@@ -50,234 +44,267 @@ class ShodanProvider(BaseProvider):
def get_eligibility(self) -> Dict[str, bool]:
"""Return a dictionary indicating if the provider can query domains and/or IPs."""
return {'domains': False, 'ips': True}
return {'domains': True, 'ips': True}
def _get_cache_file_path(self, ip: str) -> Path:
"""Generate cache file path for an IP address."""
safe_ip = ip.replace('.', '_').replace(':', '_')
return self.cache_dir / f"{safe_ip}.json"
def _get_cache_status(self, cache_file_path: Path) -> str:
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Check cache status for an IP.
Returns: 'not_found', 'fresh', or 'stale'
"""
if not cache_file_path.exists():
return "not_found"
try:
with open(cache_file_path, 'r') as f:
cache_data = json.load(f)
last_query_str = cache_data.get("last_upstream_query")
if not last_query_str:
return "stale"
last_query = datetime.fromisoformat(last_query_str.replace('Z', '+00:00'))
hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600
cache_timeout = self.config.cache_timeout_hours
if hours_since_query < cache_timeout:
return "fresh"
else:
return "stale"
except (json.JSONDecodeError, ValueError, KeyError):
return "stale"
def query_domain(self, domain: str) -> ProviderResult:
"""
Domain queries are no longer supported for the Shodan provider.
Query Shodan for information about a domain.
Uses Shodan's hostname search to find associated IPs.
Args:
domain: Domain to investigate
Returns:
Empty ProviderResult
List of relationships discovered from Shodan data
"""
return ProviderResult()
if not _is_valid_domain(domain) or not self.is_available():
return []
def query_ip(self, ip: str) -> ProviderResult:
relationships = []
try:
# Search for hostname in Shodan
search_query = f"hostname:{domain}"
url = f"{self.base_url}/shodan/host/search"
params = {
'key': self.api_key,
'query': search_query,
'minify': True # Get minimal data to reduce bandwidth
}
response = self.make_request(url, method="GET", params=params, target_indicator=domain)
if not response or response.status_code != 200:
return []
data = response.json()
if 'matches' not in data:
return []
# Process search results
for match in data['matches']:
ip_address = match.get('ip_str')
hostnames = match.get('hostnames', [])
if ip_address and domain in hostnames:
raw_data = {
'ip_address': ip_address,
'hostnames': hostnames,
'country': match.get('location', {}).get('country_name', ''),
'city': match.get('location', {}).get('city', ''),
'isp': match.get('isp', ''),
'org': match.get('org', ''),
'ports': match.get('ports', []),
'last_update': match.get('last_update', '')
}
relationships.append((
domain,
ip_address,
'a_record', # Domain resolves to IP
0.8,
raw_data
))
self.log_relationship_discovery(
source_node=domain,
target_node=ip_address,
relationship_type='a_record',
confidence_score=0.8,
raw_data=raw_data,
discovery_method="shodan_hostname_search"
)
# Also create relationships to other hostnames on the same IP
for hostname in hostnames:
if hostname != domain and _is_valid_domain(hostname):
hostname_raw_data = {
'shared_ip': ip_address,
'all_hostnames': hostnames,
'discovery_context': 'shared_hosting'
}
relationships.append((
domain,
hostname,
'passive_dns', # Shared hosting relationship
0.6, # Lower confidence for shared hosting
hostname_raw_data
))
self.log_relationship_discovery(
source_node=domain,
target_node=hostname,
relationship_type='passive_dns',
confidence_score=0.6,
raw_data=hostname_raw_data,
discovery_method="shodan_shared_hosting"
)
except json.JSONDecodeError as e:
self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
return relationships
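
A minimal sketch of the hostname search request described above. The endpoint and parameter names follow Shodan's documented /shodan/host/search API; the key value is a placeholder.

```python
import requests

SHODAN_API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder

params = {
    "key": SHODAN_API_KEY,
    "query": "hostname:example.com",
    "minify": True,
}
response = requests.get("https://api.shodan.io/shodan/host/search", params=params, timeout=30)
response.raise_for_status()

for match in response.json().get("matches", []):
    print(match.get("ip_str"), match.get("hostnames", []))
```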
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query Shodan for information about an IP address, with caching of processed data.
Query Shodan for information about an IP address.
Args:
ip: IP address to investigate
Returns:
ProviderResult containing discovered relationships and attributes
List of relationships discovered from Shodan IP data
"""
if not _is_valid_ip(ip) or not self.is_available():
return ProviderResult()
return []
cache_file = self._get_cache_file_path(ip)
cache_status = self._get_cache_status(cache_file)
result = ProviderResult()
relationships = []
try:
if cache_status == "fresh":
result = self._load_from_cache(cache_file)
self.logger.logger.info(f"Using cached Shodan data for {ip}")
else: # "stale" or "not_found"
url = f"{self.base_url}/shodan/host/{ip}"
params = {'key': self.api_key}
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
# Query Shodan host information
url = f"{self.base_url}/shodan/host/{ip}"
params = {'key': self.api_key}
if response and response.status_code == 200:
data = response.json()
# Process the data into ProviderResult BEFORE caching
result = self._process_shodan_data(ip, data)
self._save_to_cache(cache_file, result, data) # Save both result and raw data
elif cache_status == "stale":
# If API fails on a stale cache, use the old data
result = self._load_from_cache(cache_file)
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
except requests.exceptions.RequestException as e:
self.logger.logger.error(f"Shodan API query failed for {ip}: {e}")
if cache_status == "stale":
result = self._load_from_cache(cache_file)
if not response or response.status_code != 200:
return []
return result
data = response.json()
def _load_from_cache(self, cache_file_path: Path) -> ProviderResult:
"""Load processed Shodan data from a cache file."""
try:
with open(cache_file_path, 'r') as f:
cache_content = json.load(f)
# Extract hostname relationships
hostnames = data.get('hostnames', [])
for hostname in hostnames:
if _is_valid_domain(hostname):
raw_data = {
'ip_address': ip,
'hostname': hostname,
'country': data.get('country_name', ''),
'city': data.get('city', ''),
'isp': data.get('isp', ''),
'org': data.get('org', ''),
'asn': data.get('asn', ''),
'ports': data.get('ports', []),
'last_update': data.get('last_update', ''),
'os': data.get('os', '')
}
result = ProviderResult()
relationships.append((
ip,
hostname,
'a_record', # IP resolves to hostname
0.8,
raw_data
))
# Reconstruct relationships
for rel_data in cache_content.get("relationships", []):
result.add_relationship(
source_node=rel_data["source_node"],
target_node=rel_data["target_node"],
relationship_type=rel_data["relationship_type"],
provider=rel_data["provider"],
confidence=rel_data["confidence"],
raw_data=rel_data.get("raw_data", {})
)
# Reconstruct attributes
for attr_data in cache_content.get("attributes", []):
result.add_attribute(
target_node=attr_data["target_node"],
name=attr_data["name"],
value=attr_data["value"],
attr_type=attr_data["type"],
provider=attr_data["provider"],
confidence=attr_data["confidence"],
metadata=attr_data.get("metadata", {})
)
return result
except (json.JSONDecodeError, FileNotFoundError, KeyError):
return ProviderResult()
def _save_to_cache(self, cache_file_path: Path, result: ProviderResult, raw_data: Dict[str, Any]) -> None:
"""Save processed Shodan data to a cache file."""
try:
cache_data = {
"last_upstream_query": datetime.now(timezone.utc).isoformat(),
"raw_data": raw_data, # Preserve original for forensic purposes
"relationships": [
{
"source_node": rel.source_node,
"target_node": rel.target_node,
"relationship_type": rel.relationship_type,
"confidence": rel.confidence,
"provider": rel.provider,
"raw_data": rel.raw_data
} for rel in result.relationships
],
"attributes": [
{
"target_node": attr.target_node,
"name": attr.name,
"value": attr.value,
"type": attr.type,
"provider": attr.provider,
"confidence": attr.confidence,
"metadata": attr.metadata
} for attr in result.attributes
]
}
with open(cache_file_path, 'w') as f:
json.dump(cache_data, f, separators=(',', ':'), default=str)
except Exception as e:
self.logger.logger.warning(f"Failed to save Shodan cache for {cache_file_path.name}: {e}")
def _process_shodan_data(self, ip: str, data: Dict[str, Any]) -> ProviderResult:
"""
Process Shodan data to extract relationships and attributes.
Args:
ip: IP address queried
data: Raw Shodan response data
Returns:
ProviderResult with relationships and attributes
"""
result = ProviderResult()
for key, value in data.items():
if key == 'hostnames':
for hostname in value:
if _is_valid_domain(hostname):
result.add_relationship(
source_node=ip,
target_node=hostname,
relationship_type='a_record',
provider=self.name,
confidence=0.8,
raw_data=data
)
self.log_relationship_discovery(
source_node=ip,
target_node=hostname,
relationship_type='a_record',
confidence_score=0.8,
raw_data=data,
discovery_method="shodan_host_lookup"
)
elif key == 'asn':
asn_name = f"AS{value[2:]}" if isinstance(value, str) and value.startswith('AS') else f"AS{value}"
result.add_relationship(
source_node=ip,
target_node=asn_name,
relationship_type='asn_membership',
provider=self.name,
confidence=0.7,
raw_data=data
)
self.log_relationship_discovery(
source_node=ip,
target_node=asn_name,
relationship_type='asn_membership',
confidence_score=0.7,
raw_data=data,
discovery_method="shodan_asn_lookup"
)
elif key == 'ports':
for port in value:
result.add_attribute(
target_node=ip,
name='open_port',
value=port,
attr_type='network_info',
provider=self.name,
confidence=0.9
)
elif isinstance(value, (str, int, float, bool)) and value is not None:
result.add_attribute(
target_node=ip,
name=f"shodan_{key}",
value=value,
attr_type='shodan_info',
provider=self.name,
confidence=0.9
)
return result
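As a rough illustration of the mapping performed by _process_shodan_data(), a truncated, made-up Shodan host record would yield the following graph elements:

```python
shodan_host = {  # hypothetical, heavily truncated Shodan response
    "hostnames": ["mail.example.com"],
    "asn": "AS64500",
    "ports": [25, 443],
    "org": "Example Org",
}
# result = provider._process_shodan_data("203.0.113.10", shodan_host)
# Based on the branches above, result would contain:
#   a_record        203.0.113.10 -> mail.example.com  (confidence 0.8)
#   asn_membership  203.0.113.10 -> AS64500           (confidence 0.7)
#   open_port attributes for 25 and 443, plus a shodan_org attribute "Example Org"
```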
def search_by_organization(self, org_name: str) -> List[Dict[str, Any]]:
"""
Search Shodan for hosts belonging to a specific organization.
Args:
org_name: Organization name to search for
Returns:
List of host information dictionaries
"""
if not self.is_available():
return []
try:
search_query = f"org:\"{org_name}\""
url = f"{self.base_url}/shodan/host/search"
params = {
'key': self.api_key,
'query': search_query,
'minify': True
}
response = self.make_request(url, method="GET", params=params, target_indicator=org_name)
if response and response.status_code == 200:
data = response.json()
return data.get('matches', [])
except Exception as e:
self.logger.logger.error(f"Error searching Shodan by organization {org_name}: {e}")
return []
def get_host_services(self, ip: str) -> List[Dict[str, Any]]:
"""
Get service information for a specific IP address.
Args:
ip: IP address to query
Returns:
List of service information dictionaries
"""
if not _is_valid_ip(ip) or not self.is_available():
return []
try:
url = f"{self.base_url}/shodan/host/{ip}"
params = {'key': self.api_key}
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
if response and response.status_code == 200:
data = response.json()
return data.get('data', []) # Service banners
except Exception as e:
self.logger.logger.error(f"Error getting Shodan services for IP {ip}: {e}")
return []
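Both convenience helpers assume a configured API key. A hedged usage sketch follows; the class name, constructor, and exact shape of the minified matches are assumptions based on the code above, so it is kept as comments rather than presented as the project's actual entry point.

```python
# provider = ShodanProvider(...)  # construction not shown in this diff
# hosts = provider.search_by_organization("Example Org")
# for host in hosts:  # minified matches, e.g. {"ip_str": "203.0.113.10", ...}
#     services = provider.get_host_services(host.get("ip_str", ""))
#     print(host.get("ip_str"), [banner.get("port") for banner in services])
```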

View File

@@ -7,4 +7,3 @@ urllib3>=2.0.0
dnspython>=2.4.2
gunicorn
redis
python-dotenv

File diff suppressed because it is too large

View File

@@ -1,58 +1,7 @@
/**
* Graph visualization module for DNSRecon
* Handles network graph rendering using vis.js with proper large entity node hiding
* UPDATED: Now compatible with a strictly flat, unified data model for attributes.
* Handles network graph rendering using vis.js
*/
const contextMenuCSS = `
.graph-context-menu {
position: fixed;
z-index: 1000;
background: linear-gradient(135deg, #2a2a2a 0%, #1e1e1e 100%);
border: 1px solid #444;
border-radius: 6px;
box-shadow: 0 8px 25px rgba(0,0,0,0.6);
display: none;
font-family: 'Roboto Mono', monospace;
font-size: 0.9rem;
color: #c7c7c7;
min-width: 180px;
overflow: hidden;
}
.graph-context-menu ul {
list-style: none;
padding: 0.5rem 0;
margin: 0;
}
.graph-context-menu ul li {
padding: 0.75rem 1rem;
cursor: pointer;
transition: all 0.2s ease;
display: flex;
align-items: center;
gap: 0.5rem;
}
.graph-context-menu ul li:hover {
background: linear-gradient(135deg, #3a3a3a 0%, #2e2e2e 100%);
color: #00ff41;
}
.graph-context-menu .menu-icon {
font-size: 0.9rem;
width: 1.2rem;
text-align: center;
}
.graph-context-menu ul li:first-child {
border-top: none;
}
.graph-context-menu ul li:last-child {
border-bottom: none;
}
`;
class GraphManager {
constructor(containerId) {
@@ -63,13 +12,6 @@ class GraphManager {
this.isInitialized = false;
this.currentLayout = 'physics';
this.nodeInfoPopup = null;
this.contextMenu = null;
this.history = [];
this.filterPanel = null;
this.initialTargetIds = new Set();
// Track large entity members for proper hiding
this.largeEntityMembers = new Set();
this.isScanning = false;
this.options = {
nodes: {
@@ -173,14 +115,8 @@ class GraphManager {
randomSeed: 2
}
};
if (typeof document !== 'undefined') {
const style = document.createElement('style');
style.textContent = contextMenuCSS;
document.head.appendChild(style);
}
this.createNodeInfoPopup();
this.createContextMenu();
document.body.addEventListener('click', () => this.hideContextMenu());
}
/**
@@ -193,30 +129,6 @@ class GraphManager {
document.body.appendChild(this.nodeInfoPopup);
}
/**
* Create context menu
*/
createContextMenu() {
// Remove existing context menu if it exists
const existing = document.getElementById('graph-context-menu');
if (existing) {
existing.remove();
}
this.contextMenu = document.createElement('div');
this.contextMenu.id = 'graph-context-menu';
this.contextMenu.className = 'graph-context-menu';
this.contextMenu.style.display = 'none';
// Prevent body click listener from firing when clicking the menu itself
this.contextMenu.addEventListener('click', (event) => {
event.stopPropagation();
});
document.body.appendChild(this.contextMenu);
console.log('Context menu created and added to body');
}
/**
* Initialize the network graph
*/
@@ -243,7 +155,6 @@ class GraphManager {
// Add graph controls
this.addGraphControls();
this.addFilterPanel();
console.log('Graph initialized successfully');
} catch (error) {
@@ -262,8 +173,6 @@ class GraphManager {
<button class="graph-control-btn" id="graph-fit" title="Fit to Screen">[FIT]</button>
<button class="graph-control-btn" id="graph-physics" title="Toggle Physics">[PHYSICS]</button>
<button class="graph-control-btn" id="graph-cluster" title="Cluster Nodes">[CLUSTER]</button>
<button class="graph-control-btn" id="graph-unhide" title="Unhide All">[UNHIDE]</button>
<button class="graph-control-btn" id="graph-revert" title="Revert Last Action">[REVERT]</button>
`;
this.container.appendChild(controlsContainer);
@@ -272,14 +181,6 @@ class GraphManager {
document.getElementById('graph-fit').addEventListener('click', () => this.fitView());
document.getElementById('graph-physics').addEventListener('click', () => this.togglePhysics());
document.getElementById('graph-cluster').addEventListener('click', () => this.toggleClustering());
document.getElementById('graph-unhide').addEventListener('click', () => this.unhideAll());
document.getElementById('graph-revert').addEventListener('click', () => this.revertLastAction());
}
addFilterPanel() {
this.filterPanel = document.createElement('div');
this.filterPanel.className = 'graph-filter-panel';
this.container.appendChild(this.filterPanel);
}
/**
@@ -288,31 +189,8 @@ class GraphManager {
setupNetworkEvents() {
if (!this.network) return;
// FIXED: Right-click context menu
this.container.addEventListener('contextmenu', (event) => {
event.preventDefault();
console.log('Right-click detected at:', event.offsetX, event.offsetY);
// Get coordinates relative to the canvas
const pointer = {
x: event.offsetX,
y: event.offsetY
};
const nodeId = this.network.getNodeAt(pointer);
console.log('Node at pointer:', nodeId);
if (nodeId) {
// Pass the original client event for positioning
this.showContextMenu(nodeId, event);
} else {
this.hideContextMenu();
}
});
// Node click event with details
this.network.on('click', (params) => {
this.hideContextMenu();
if (params.nodes.length > 0) {
const nodeId = params.nodes[0];
if (this.network.isCluster(nodeId)) {
@@ -338,6 +216,14 @@ class GraphManager {
}
});
// FIX: Comment out the problematic context menu handler
this.network.on('oncontext', (params) => {
params.event.preventDefault();
// if (params.nodes.length > 0) {
// this.showNodeContextMenu(params.pointer.DOM, params.nodes[0]);
// }
});
// Stabilization events with progress
this.network.on('stabilizationProgress', (params) => {
const progress = params.iterations / params.total;
@@ -353,13 +239,6 @@ class GraphManager {
console.log('Selected nodes:', params.nodes);
console.log('Selected edges:', params.edges);
});
// Click away to hide context menu
document.addEventListener('click', (e) => {
if (!this.contextMenu.contains(e.target)) {
this.hideContextMenu();
}
});
}
/**
@@ -377,32 +256,21 @@ class GraphManager {
this.initialize();
}
this.largeEntityMembers.clear();
const largeEntityMap = new Map();
graphData.nodes.forEach(node => {
if (node.type === 'large_entity' && node.attributes) {
// UPDATED: Handle unified data model - look for 'nodes' attribute in the attributes list
const nodesAttribute = this.findAttributeByName(node.attributes, 'nodes');
if (nodesAttribute && Array.isArray(nodesAttribute.value)) {
nodesAttribute.value.forEach(nodeId => {
largeEntityMap.set(nodeId, node.id);
this.largeEntityMembers.add(nodeId);
});
}
if (node.type === 'large_entity' && node.attributes && Array.isArray(node.attributes.nodes)) {
node.attributes.nodes.forEach(nodeId => {
largeEntityMap.set(nodeId, node.id);
});
}
});
const filteredNodes = graphData.nodes.filter(node => {
// Only include nodes that are NOT members of large entities, but always include the container itself
return !this.largeEntityMembers.has(node.id) || node.type === 'large_entity';
});
console.log(`Filtered ${graphData.nodes.length - filteredNodes.length} large entity member nodes from visualization`);
// Process only the filtered nodes
const processedNodes = filteredNodes.map(node => {
return this.processNode(node);
const processedNodes = graphData.nodes.map(node => {
const processed = this.processNode(node);
if (largeEntityMap.has(node.id)) {
processed.hidden = true;
}
return processed;
});
const mergedEdges = {};
@@ -447,10 +315,6 @@ class GraphManager {
this.nodes.update(processedNodes);
this.edges.update(processedEdges);
// After data is loaded, apply filters
this.updateFilterControls();
this.applyAllFilters();
// Highlight new additions briefly
if (newNodes.length > 0 || newEdges.length > 0) {
setTimeout(() => this.highlightNewElements(newNodes, newEdges), 100);
@@ -462,8 +326,6 @@ class GraphManager {
}
console.log(`Graph updated: ${processedNodes.length} nodes, ${processedEdges.length} edges (${newNodes.length} new nodes, ${newEdges.length} new edges)`);
console.log(`Large entity members hidden: ${this.largeEntityMembers.size}`);
} catch (error) {
console.error('Failed to update graph:', error);
this.showError('Failed to update visualization');
@@ -471,21 +333,8 @@ class GraphManager {
}
/**
* UPDATED: Helper method to find an attribute by name in the standardized attributes list
* @param {Array} attributes - List of StandardAttribute objects
* @param {string} name - Attribute name to find
* @returns {Object|null} The attribute object if found, null otherwise
*/
findAttributeByName(attributes, name) {
if (!Array.isArray(attributes)) {
return null;
}
return attributes.find(attr => attr.name === name) || null;
}
/**
* UPDATED: Process node data with styling and metadata for the flat data model
* @param {Object} node - Raw node data with standardized attributes
* Process node data with styling and metadata
* @param {Object} node - Raw node data
* @returns {Object} Processed node data
*/
processNode(node) {
@@ -496,7 +345,7 @@ class GraphManager {
size: this.getNodeSize(node.type),
borderColor: this.getNodeBorderColor(node.type),
shape: this.getNodeShape(node.type),
attributes: node.attributes || [], // Keep as standardized attributes list
attributes: node.attributes || {},
description: node.description || '',
metadata: node.metadata || {},
type: node.type,
@@ -509,6 +358,13 @@ class GraphManager {
processedNode.borderWidth = Math.max(2, Math.floor(node.confidence * 5));
}
// Style based on certificate validity
if (node.type === 'domain') {
if (node.attributes && node.attributes.certificates && node.attributes.certificates.has_valid_cert === false) {
processedNode.color = { background: '#888888', border: '#666666' };
}
}
// Handle merged correlation objects (similar to large entities)
if (node.type === 'correlation_object') {
const metadata = node.metadata || {};
@@ -524,7 +380,7 @@ class GraphManager {
// Single correlation value
const value = Array.isArray(values) && values.length > 0 ? values[0] : (metadata.value || 'Unknown');
const displayValue = typeof value === 'string' && value.length > 20 ? value.substring(0, 17) + '...' : value;
processedNode.label = `${displayValue}`;
processedNode.label = `Corr: ${displayValue}`;
processedNode.title = `Correlation: ${value}`;
}
}
@@ -556,6 +412,8 @@ class GraphManager {
}
};
return processedEdge;
}
@@ -602,6 +460,7 @@ class GraphManager {
return colors[nodeType] || '#ffffff';
}
/**
* Get node border color based on type
* @param {string} nodeType - Node type
@@ -991,9 +850,6 @@ class GraphManager {
clear() {
this.nodes.clear();
this.edges.clear();
this.history = [];
this.largeEntityMembers.clear(); // Clear large entity tracking
this.clearInitialTargets();
// Show placeholder
const placeholder = this.container.querySelector('.graph-placeholder');
@@ -1014,577 +870,59 @@ class GraphManager {
}
}
/**
* Analyze which nodes remain reachable from the initial scan targets.
* @param {Set} excludedNodeIds - Node IDs to exclude from analysis (for simulation)
* @param {Set} excludedEdgeTypes - Edge types to exclude from traversal
* @param {Set} excludedNodeTypes - Node types to exclude from traversal
* @returns {Object} Analysis results with reachable/unreachable nodes
*/
analyzeGraphReachability(excludedNodeIds = new Set(), excludedEdgeTypes = new Set(), excludedNodeTypes = new Set()) {
console.log("Performing comprehensive reachability analysis...");
const analysis = {
reachableNodes: new Set(),
unreachableNodes: new Set(),
isolatedClusters: [],
affectedNodes: new Set()
};
if (this.nodes.length === 0) return analysis;
// Build adjacency list excluding specified elements
const adjacencyList = {};
this.nodes.getIds().forEach(id => {
if (!excludedNodeIds.has(id)) {
adjacencyList[id] = [];
}
});
this.edges.forEach(edge => {
const edgeType = edge.metadata?.relationship_type || '';
if (!excludedEdgeTypes.has(edgeType) &&
!excludedNodeIds.has(edge.from) &&
!excludedNodeIds.has(edge.to)) {
if (adjacencyList[edge.from]) {
adjacencyList[edge.from].push(edge.to);
}
}
});
// BFS traversal from initial targets
const traversalQueue = [];
// Start from initial targets that aren't excluded
this.initialTargetIds.forEach(rootId => {
if (!excludedNodeIds.has(rootId)) {
const node = this.nodes.get(rootId);
if (node && !excludedNodeTypes.has(node.type)) {
if (!analysis.reachableNodes.has(rootId)) {
traversalQueue.push(rootId);
analysis.reachableNodes.add(rootId);
}
}
}
});
// BFS to find all reachable nodes
let queueIndex = 0;
while (queueIndex < traversalQueue.length) {
const currentNode = traversalQueue[queueIndex++];
for (const neighbor of (adjacencyList[currentNode] || [])) {
if (!analysis.reachableNodes.has(neighbor)) {
const node = this.nodes.get(neighbor);
if (node && !excludedNodeTypes.has(node.type)) {
analysis.reachableNodes.add(neighbor);
traversalQueue.push(neighbor);
}
}
}
}
// Identify unreachable nodes (maintaining forensic integrity)
Object.keys(adjacencyList).forEach(nodeId => {
if (!analysis.reachableNodes.has(nodeId)) {
analysis.unreachableNodes.add(nodeId);
}
});
// Find isolated clusters among unreachable nodes
analysis.isolatedClusters = this.findIsolatedClusters(
Array.from(analysis.unreachableNodes),
adjacencyList
);
console.log(`Reachability analysis complete:`, {
reachable: analysis.reachableNodes.size,
unreachable: analysis.unreachableNodes.size,
clusters: analysis.isolatedClusters.length
});
return analysis;
}
/**
* Find isolated clusters within a set of nodes
* Used for forensic analysis to identify disconnected subgraphs
*/
findIsolatedClusters(nodeIds, adjacencyList) {
const visited = new Set();
const clusters = [];
for (const nodeId of nodeIds) {
if (!visited.has(nodeId)) {
const cluster = [];
const stack = [nodeId];
while (stack.length > 0) {
const current = stack.pop();
if (!visited.has(current)) {
visited.add(current);
cluster.push(current);
// Add unvisited neighbors within the unreachable set
for (const neighbor of (adjacencyList[current] || [])) {
if (nodeIds.includes(neighbor) && !visited.has(neighbor)) {
stack.push(neighbor);
}
}
}
}
if (cluster.length > 0) {
clusters.push(cluster);
}
}
}
return clusters;
}
/**
* ENHANCED: Get comprehensive graph statistics with forensic information
* Updates the existing getStatistics() method
* Get network statistics
* @returns {Object} Statistics object
*/
getStatistics() {
const basicStats = {
return {
nodeCount: this.nodes.length,
edgeCount: this.edges.length,
largeEntityMembersHidden: this.largeEntityMembers.size
//isStabilized: this.network ? this.network.isStabilized() : false
};
// Add forensic statistics
const visibleNodes = this.nodes.get({ filter: node => !node.hidden });
const hiddenNodes = this.nodes.get({ filter: node => node.hidden });
return {
...basicStats,
forensicStatistics: {
visibleNodes: visibleNodes.length,
hiddenNodes: hiddenNodes.length,
initialTargets: this.initialTargetIds.size,
integrityStatus: visibleNodes.length > 0 && this.initialTargetIds.size > 0 ? 'INTACT' : 'COMPROMISED'
}
};
}
addInitialTarget(targetId) {
this.initialTargetIds.add(targetId);
console.log("Initial targets:", this.initialTargetIds);
}
clearInitialTargets() {
this.initialTargetIds.clear();
console.log("Initial targets cleared.");
}
updateFilterControls() {
if (!this.filterPanel) return;
const nodeTypes = new Set(this.nodes.get().map(n => n.type));
const edgeTypes = new Set(this.edges.get().map(e => e.metadata.relationship_type));
// Wrap both columns in a single container with vertical layout
let filterHTML = '<div class="filter-container">';
// Nodes section
filterHTML += '<div class="filter-column"><h4>Nodes</h4><div class="checkbox-group">';
nodeTypes.forEach(type => {
const label = type === 'correlation_object' ? 'latent correlations' : type;
const isChecked = type !== 'correlation_object';
filterHTML += `<label><input type="checkbox" data-filter-type="node" value="${type}" ${isChecked ? 'checked' : ''}> ${label}</label>`;
});
filterHTML += '</div></div>';
// Edges section
filterHTML += '<div class="filter-column"><h4>Edges</h4><div class="checkbox-group">';
edgeTypes.forEach(type => {
filterHTML += `<label><input type="checkbox" data-filter-type="edge" value="${type}" checked> ${type}</label>`;
});
filterHTML += '</div></div>';
filterHTML += '</div>'; // Close filter-container
this.filterPanel.innerHTML = filterHTML;
this.filterPanel.querySelectorAll('input[type="checkbox"]').forEach(checkbox => {
checkbox.addEventListener('change', () => this.applyAllFilters());
});
}
/**
* ENHANCED: Apply filters using consolidated reachability analysis
* Replaces the existing applyAllFilters() method
* Apply filters to the graph
* @param {string} nodeType - The type of node to show ('all' for no filter)
* @param {number} minConfidence - The minimum confidence score for edges to be visible
*/
applyAllFilters() {
console.log("Applying filters with enhanced reachability analysis...");
if (this.nodes.length === 0) return;
applyFilters(nodeType, minConfidence) {
console.log(`Applying filters: nodeType=${nodeType}, minConfidence=${minConfidence}`);
// Get filter criteria from UI
const excludedNodeTypes = new Set();
this.filterPanel?.querySelectorAll('input[data-filter-type="node"]:not(:checked)').forEach(cb => {
excludedNodeTypes.add(cb.value);
});
const nodeUpdates = [];
const edgeUpdates = [];
const excludedEdgeTypes = new Set();
this.filterPanel?.querySelectorAll('input[data-filter-type="edge"]:not(:checked)').forEach(cb => {
excludedEdgeTypes.add(cb.value);
});
const allNodes = this.nodes.get({ returnType: 'Object' });
const allEdges = this.edges.get();
// Perform comprehensive analysis
const analysis = this.analyzeGraphReachability(new Set(), excludedEdgeTypes, excludedNodeTypes);
// Apply visibility updates
const nodeUpdates = this.nodes.map(node => ({
id: node.id,
hidden: !analysis.reachableNodes.has(node.id)
}));
const edgeUpdates = this.edges.map(edge => ({
id: edge.id,
hidden: excludedEdgeTypes.has(edge.metadata?.relationship_type || '') ||
!analysis.reachableNodes.has(edge.from) ||
!analysis.reachableNodes.has(edge.to)
}));
// Determine which nodes are visible based on the nodeType filter
for (const nodeId in allNodes) {
const node = allNodes[nodeId];
const isVisible = (nodeType === 'all' || node.type === nodeType);
nodeUpdates.push({ id: nodeId, hidden: !isVisible });
}
// Update nodes first to determine edge visibility
this.nodes.update(nodeUpdates);
// Determine which edges are visible based on confidence and connected nodes
for (const edge of allEdges) {
const sourceNode = this.nodes.get(edge.from);
const targetNode = this.nodes.get(edge.to);
const confidence = edge.metadata ? edge.metadata.confidence_score : 0;
const isVisible = confidence >= minConfidence &&
sourceNode && !sourceNode.hidden &&
targetNode && !targetNode.hidden;
edgeUpdates.push({ id: edge.id, hidden: !isVisible });
}
this.edges.update(edgeUpdates);
console.log(`Enhanced filters applied. Visible nodes: ${analysis.reachableNodes.size}`);
console.log('Filters applied.');
}
/**
* ENHANCED: Hide node with forensic integrity using reachability analysis
* Replaces the existing hideNodeAndOrphans() method
*/
hideNodeWithReachabilityAnalysis(nodeId) {
console.log(`Hiding node ${nodeId} with reachability analysis...`);
// Simulate hiding this node and analyze impact
const excludedNodes = new Set([nodeId]);
const analysis = this.analyzeGraphReachability(excludedNodes);
// Nodes that will become unreachable (should be hidden)
const nodesToHide = [nodeId, ...Array.from(analysis.unreachableNodes)];
// Store history for potential revert
const historyData = {
nodeIds: nodesToHide,
operation: 'hide_with_reachability',
timestamp: Date.now()
};
// Apply hiding with forensic documentation
const updates = nodesToHide.map(id => ({
id: id,
hidden: true,
forensicNote: `Hidden due to reachability analysis from ${nodeId}`
}));
this.nodes.update(updates);
this.addToHistory('hide', historyData);
console.log(`Forensic hide operation: ${nodesToHide.length} nodes hidden`, {
originalTarget: nodeId,
cascadeNodes: nodesToHide.length - 1,
isolatedClusters: analysis.isolatedClusters.length
});
return {
hiddenNodes: nodesToHide,
isolatedClusters: analysis.isolatedClusters
};
}
/**
* ENHANCED: Delete node with forensic integrity using reachability analysis
* Replaces the existing deleteNodeAndOrphans() method
*/
async deleteNodeWithReachabilityAnalysis(nodeId) {
console.log(`Deleting node ${nodeId} with reachability analysis...`);
// Simulate deletion and analyze impact
const excludedNodes = new Set([nodeId]);
const analysis = this.analyzeGraphReachability(excludedNodes);
// Nodes that will become unreachable (should be deleted)
const nodesToDelete = [nodeId, ...Array.from(analysis.unreachableNodes)];
// Collect forensic data before deletion
const historyData = {
nodes: nodesToDelete.map(id => this.nodes.get(id)).filter(Boolean),
edges: [],
operation: 'delete_with_reachability',
timestamp: Date.now(),
forensicAnalysis: {
originalTarget: nodeId,
cascadeNodes: nodesToDelete.length - 1,
isolatedClusters: analysis.isolatedClusters.length,
clusterSizes: analysis.isolatedClusters.map(cluster => cluster.length)
}
};
// Collect affected edges
nodesToDelete.forEach(id => {
const connectedEdgeIds = this.network.getConnectedEdges(id);
const edges = this.edges.get(connectedEdgeIds);
historyData.edges.push(...edges);
});
// Remove duplicates from edges
historyData.edges = Array.from(new Map(historyData.edges.map(e => [e.id, e])).values());
// Perform backend deletion with forensic logging
let operationFailed = false;
for (const targetNodeId of nodesToDelete) {
try {
const response = await fetch(`/api/graph/node/${targetNodeId}`, {
method: 'DELETE',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
forensicContext: {
operation: 'reachability_cascade_delete',
originalTarget: nodeId,
analysisTimestamp: historyData.timestamp
}
})
});
const result = await response.json();
if (!result.success) {
console.error(`Backend deletion failed for node ${targetNodeId}:`, result.error);
operationFailed = true;
break;
}
console.log(`Node ${targetNodeId} deleted from backend with forensic context`);
this.nodes.remove({ id: targetNodeId });
} catch (error) {
console.error(`API error during deletion of node ${targetNodeId}:`, error);
operationFailed = true;
break;
}
}
// Handle operation results
if (!operationFailed) {
this.addToHistory('delete', historyData);
console.log(`Forensic delete operation completed:`, historyData.forensicAnalysis);
return {
success: true,
deletedNodes: nodesToDelete,
forensicAnalysis: historyData.forensicAnalysis
};
} else {
// Revert UI changes if backend operations failed - use update instead of add
console.log("Reverting UI changes due to backend failure");
this.nodes.update(historyData.nodes);
this.edges.update(historyData.edges);
return {
success: false,
error: "Backend deletion failed, UI reverted"
};
}
}
/**
* Show context menu for a node
* @param {string} nodeId - The ID of the node
* @param {Event} event - The contextmenu event
*/
showContextMenu(nodeId, event) {
console.log('Showing context menu for node:', nodeId);
const node = this.nodes.get(nodeId);
// Create menu items
let menuItems = `
<ul>
<li data-action="focus" data-node-id="${nodeId}">
<span class="menu-icon">🎯</span>
<span>Focus on Node</span>
</li>
`;
// Add "Iterate Scan" option only for domain or IP nodes
if (node && (node.type === 'domain' || node.type === 'ip')) {
const disabled = this.isScanning ? 'disabled' : ''; // Check if scanning
const title = this.isScanning ? 'A scan is already in progress' : 'Iterate Scan (Add to Graph)'; // Add a title for disabled state
menuItems += `
<li data-action="iterate" data-node-id="${nodeId}" ${disabled} title="${title}">
<span class="menu-icon"></span>
<span>Iterate Scan (Add to Graph)</span>
</li>
`;
}
menuItems += `
<li data-action="hide" data-node-id="${nodeId}">
<span class="menu-icon">👁️‍🗨️</span>
<span>Hide Node</span>
</li>
<li data-action="delete" data-node-id="${nodeId}">
<span class="menu-icon">🗑️</span>
<span>Delete Node</span>
</li>
<li data-action="details" data-node-id="${nodeId}">
<span class="menu-icon"></span>
<span>Show Details</span>
</li>
</ul>
`;
this.contextMenu.innerHTML = menuItems;
// Position the menu
this.contextMenu.style.left = `${event.clientX}px`;
this.contextMenu.style.top = `${event.clientY}px`;
this.contextMenu.style.display = 'block';
// Ensure menu stays within viewport
const rect = this.contextMenu.getBoundingClientRect();
if (rect.right > window.innerWidth) {
this.contextMenu.style.left = `${event.clientX - rect.width}px`;
}
if (rect.bottom > window.innerHeight) {
this.contextMenu.style.top = `${event.clientY - rect.height}px`;
}
// Add event listeners to menu items
this.contextMenu.querySelectorAll('li').forEach(item => {
item.addEventListener('click', (e) => {
if (e.currentTarget.hasAttribute('disabled')) { // Prevent action if disabled
e.stopPropagation();
return;
}
e.stopPropagation();
const action = e.currentTarget.dataset.action;
const nodeId = e.currentTarget.dataset.nodeId;
console.log('Context menu action:', action, 'for node:', nodeId);
this.performContextMenuAction(action, nodeId);
this.hideContextMenu();
});
});
}
/**
* Hide the context menu
*/
hideContextMenu() {
if (this.contextMenu) {
this.contextMenu.style.display = 'none';
}
}
/**
* UPDATED: Enhanced context menu actions using new methods
* Updates the existing performContextMenuAction() method
*/
performContextMenuAction(action, nodeId) {
console.log('Performing enhanced action:', action, 'on node:', nodeId);
switch (action) {
case 'focus':
this.focusOnNode(nodeId);
break;
case 'iterate':
const event = new CustomEvent('iterateScan', {
detail: { nodeId }
});
document.dispatchEvent(event);
break;
case 'hide':
// Use enhanced method with reachability analysis
this.hideNodeWithReachabilityAnalysis(nodeId);
break;
case 'delete':
// Use enhanced method with reachability analysis
this.deleteNodeWithReachabilityAnalysis(nodeId);
break;
case 'details':
const node = this.nodes.get(nodeId);
if (node) {
this.showNodeDetails(node);
}
break;
default:
console.warn('Unknown action:', action);
}
}
/**
* Add an operation to the history stack
* @param {string} type - The type of operation ('hide', 'delete')
* @param {Object} data - The data needed to revert the operation
*/
addToHistory(type, data) {
this.history.push({ type, data });
}
/**
* Revert the last action
*/
async revertLastAction() {
const lastAction = this.history.pop();
if (!lastAction) {
console.log('No actions to revert.');
return;
}
switch (lastAction.type) {
case 'hide':
// Revert hiding nodes by un-hiding them
const updates = lastAction.data.nodeIds.map(id => ({ id: id, hidden: false }));
this.nodes.update(updates);
break;
case 'delete':
try {
const response = await fetch('/api/graph/revert', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(lastAction)
});
const result = await response.json();
if (result.success) {
console.log('Delete action reverted successfully on backend.');
// Re-add all nodes and edges from the history to the local view - use update instead of add
this.nodes.update(lastAction.data.nodes);
this.edges.update(lastAction.data.edges);
} else {
console.error('Failed to revert delete action on backend:', result.error);
// Push the action back onto the history stack if the API call failed
this.history.push(lastAction);
}
} catch (error) {
console.error('Error during revert API call:', error);
this.history.push(lastAction);
}
break;
}
}
/**
* Unhide all hidden nodes
*/
unhideAll() {
const allNodes = this.nodes.get({
filter: (node) => node.hidden === true
});
const updates = allNodes.map(node => ({ id: node.id, hidden: false }));
this.nodes.update(updates);
}
}
// Export for use in main.js

File diff suppressed because it is too large

View File

@@ -32,8 +32,19 @@
<div class="form-container">
<div class="input-group">
<label for="target-input">Target Domain or IP</label>
<input type="text" id="target-input" placeholder="example.com or 8.8.8.8" autocomplete="off">
<label for="target-domain">Target Domain</label>
<input type="text" id="target-domain" placeholder="example.com" autocomplete="off">
</div>
<div class="input-group">
<label for="max-depth">Recursion Depth</label>
<select id="max-depth">
<option value="1">Depth 1 - Direct relationships</option>
<option value="2" selected>Depth 2 - Recommended</option>
<option value="3">Depth 3 - Extended analysis</option>
<option value="4">Depth 4 - Deep reconnaissance</option>
<option value="5">Depth 5 - Maximum depth</option>
</select>
</div>
<div class="button-group">
@@ -53,9 +64,9 @@
<span class="btn-icon">[EXPORT]</span>
<span>Download Results</span>
</button>
<button id="configure-settings" class="btn btn-secondary">
<button id="configure-api-keys" class="btn btn-secondary">
<span class="btn-icon">[API]</span>
<span>Settings</span>
<span>Configure API Keys</span>
</button>
</div>
</div>
@@ -79,36 +90,46 @@
<span class="status-label">Depth:</span>
<span id="depth-display" class="status-value">0/0</span>
</div>
<div class="status-row">
<span class="status-label">Progress:</span>
<span id="progress-display" class="status-value">0%</span>
</div>
<div class="status-row">
<span class="status-label">Indicators:</span>
<span id="indicators-display" class="status-value">0</span>
</div>
<div class="status-row">
<span class="status-label">Relationships:</span>
<span id="relationships-display" class="status-value">0</span>
</div>
</div>
<div class="progress-container">
<div class="progress-info">
<span id="progress-label">Progress:</span>
<span id="progress-compact">0/0</span>
</div>
<div class="progress-bar">
<div id="progress-fill" class="progress-fill"></div>
</div>
<div class="progress-placeholder">
<span class="status-label">
⚠️ <strong>Important:</strong> Scanning large public services (e.g., Google, Cloudflare, AWS) is
<strong>discouraged</strong> because upstream providers (most notably crt.sh) enforce strict rate limits.
<br><br>
Our task scheduler operates on a <strong>priority-based queue</strong>:
short, targeted tasks such as DNS lookups are processed first, while resource-intensive requests (e.g., crt.sh)
are <strong>automatically deprioritized</strong> and may be processed later.
</span>
</div>
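The notice above describes the scheduler's behaviour rather than its implementation, which is not part of this diff. A minimal, assumption-laden sketch of a priority-based queue with that property (provider names and priority values are illustrative):

```python
import heapq
import itertools

PROVIDER_PRIORITY = {"dns": 0, "shodan": 1, "crtsh": 2}  # assumed ordering: cheap tasks first
_order = itertools.count()  # tie-breaker so equal priorities stay FIFO
_queue = []

def enqueue(provider: str, target: str) -> None:
    heapq.heappush(_queue, (PROVIDER_PRIORITY.get(provider, 99), next(_order), provider, target))

def next_task():
    _, _, provider, target = heapq.heappop(_queue)
    return provider, target

enqueue("crtsh", "example.com")
enqueue("dns", "example.com")
assert next_task() == ("dns", "example.com")  # DNS is dequeued before crt.sh
```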
<div class="progress-bar">
<div id="progress-fill" class="progress-fill"></div>
</div>
</section>
<section class="visualization-panel">
<div class="panel-header">
<h2>Infrastructure Map</h2>
<div class="view-controls">
<div class="filter-group">
<label for="node-type-filter">Node Type:</label>
<select id="node-type-filter">
<option value="all">All</option>
<option value="domain">Domain</option>
<option value="ip">IP</option>
<option value="asn">ASN</option>
<option value="correlation_object">Correlation Object</option>
<option value="large_entity">Large Entity</option>
</select>
</div>
<div class="filter-group">
<label for="confidence-filter">Min Confidence:</label>
<input type="range" id="confidence-filter" min="0" max="1" step="0.1" value="0">
<span id="confidence-value">0</span>
</div>
</div>
</div>
<div id="network-graph" class="graph-container">
@@ -186,28 +207,16 @@
</div>
</div>
<div id="settings-modal" class="modal">
<div id="api-key-modal" class="modal">
<div class="modal-content">
<div class="modal-header">
<h3>Settings</h3>
<button id="settings-modal-close" class="modal-close">[×]</button>
<h3>Configure API Keys</h3>
<button id="api-key-modal-close" class="modal-close">[×]</button>
</div>
<div class="modal-body">
<p class="modal-description">
Configure scan settings and API keys. Keys are stored in memory for the current session only.
Only provide API keys you don't use for anything else. Don't enter an API key if you don't trust me (best practice would be that you don't).
Enter your API keys for enhanced data providers. Keys are stored in memory for the current session only and are never saved to disk.
</p>
<br>
<div class="input-group">
<label for="max-depth">Recursion Depth</label>
<select id="max-depth">
<option value="1">Depth 1 - Direct relationships</option>
<option value="2" selected>Depth 2 - Recommended</option>
<option value="3">Depth 3 - Extended analysis</option>
<option value="4">Depth 4 - Deep reconnaissance</option>
<option value="5">Depth 5 - Maximum depth</option>
</select>
</div>
<div id="api-key-inputs">
</div>
<div class="button-group" style="flex-direction: row; justify-content: flex-end;">
@@ -215,7 +224,7 @@
<span>Reset</span>
</button>
<button id="save-api-keys" class="btn btn-primary">
<span>Save API-Keys</span>
<span>Save Keys</span>
</button>
</div>
</div>

View File

@@ -48,15 +48,3 @@ def _is_valid_ip(ip: str) -> bool:
except (ValueError, AttributeError):
return False
def is_valid_target(target: str) -> bool:
"""
Checks if the target is a valid domain or IP address.
Args:
target: The target string to validate.
Returns:
True if the target is a valid domain or IP, False otherwise.
"""
return _is_valid_domain(target) or _is_valid_ip(target)
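For reference, the removed helper simply OR-combines the two validators. Assuming those behave as their names suggest, usage looks like the commented sketch below (illustrative targets only):

```python
# assert is_valid_target("example.com")        # valid domain
# assert is_valid_target("8.8.8.8")            # valid IPv4 address
# assert not is_valid_target("not a target!")  # rejected by both validators
```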