34 Commits

Author SHA1 Message Date
overcuriousity
c4e6a8998a iteration on ws implementation 2025-09-20 16:52:05 +02:00
overcuriousity
75a595c9cb try to implement websockets 2025-09-20 14:17:17 +02:00
3ee23c9d05 Merge pull request 'remove-large-entity-temporarily' (#3) from remove-large-entity-temporarily into main
Reviewed-on: mstoeck3/dnsrecon#3
2025-09-19 12:29:26 +00:00
overcuriousity
8d402ab4b1 postgres 2025-09-19 14:28:37 +02:00
overcuriousity
7472e6f416 fixes to hint for incomplete data 2025-09-19 12:35:28 +02:00
overcuriousity
eabb532557 almost fixed 2025-09-19 01:10:07 +02:00
overcuriousity
0a6d12de9a large entity recreation 2025-09-19 00:38:26 +02:00
overcuriousity
332805709d remove 2025-09-18 23:44:24 +02:00
overcuriousity
1558731c1c attempt fix large entity 2025-09-18 23:22:49 +02:00
overcuriousity
95cebbf935 bug fixes, improvements 2025-09-18 22:39:12 +02:00
overcuriousity
4c48917993 fixes for scheduler 2025-09-18 21:32:26 +02:00
overcuriousity
9d9afa6a08 fixes 2025-09-18 21:04:29 +02:00
overcuriousity
12f834bb65 correlation engine 2025-09-18 20:51:13 +02:00
overcuriousity
cbfd40ee98 adjustments to shodan & export manager 2025-09-18 19:22:58 +02:00
overcuriousity
d4081e1a32 export manager modularized 2025-09-18 17:42:39 +02:00
overcuriousity
15227b392d readme file & some ux improvements 2025-09-18 00:24:35 +02:00
overcuriousity
fdc26dcf15 executive summary 2025-09-18 00:13:37 +02:00
140ef54674 Merge pull request 'data-model' (#2) from data-model into main
Reviewed-on: mstoeck3/dnsrecon#2
2025-09-17 21:56:17 +00:00
overcuriousity
aae459446c update requirements, fix some bugs 2025-09-17 23:55:41 +02:00
overcuriousity
98e1b2280b new node types 2025-09-17 22:42:08 +02:00
overcuriousity
cd14198452 smaller css 2025-09-17 22:39:26 +02:00
overcuriousity
284660ab8c new node types 2025-09-17 22:09:39 +02:00
overcuriousity
ecfb27e02a new scheduling, removed many debug prints 2025-09-17 21:47:03 +02:00
overcuriousity
39b4242200 fix cli last task started 2025-09-17 21:35:54 +02:00
overcuriousity
a56755320c initial targets managed in backend 2025-09-17 21:29:18 +02:00
overcuriousity
b985f1e5f0 potential bugfix for the right click hiding 2025-09-17 21:15:52 +02:00
overcuriousity
8ae4fdbf80 UX improvements 2025-09-17 21:12:11 +02:00
overcuriousity
d0ee415f0d enhancements 2025-09-17 19:42:14 +02:00
overcuriousity
173c3dcf92 some adjustments for clarity 2025-09-17 17:10:11 +02:00
overcuriousity
ec755b17ad remove many unnecessary debug print, improve large entity handling 2025-09-17 13:31:35 +02:00
overcuriousity
469c133f1b fix session handling 2025-09-17 11:18:06 +02:00
overcuriousity
f775c61731 iterating on fixes 2025-09-17 11:08:50 +02:00
overcuriousity
b984189e08 scheduler fixes 2025-09-17 00:31:12 +02:00
overcuriousity
f2db739fa1 attempt to fix some logic 2025-09-17 00:05:48 +02:00
23 changed files with 5985 additions and 3771 deletions

@@ -25,10 +25,10 @@ DEFAULT_RECURSION_DEPTH=2
# Default timeout for provider API requests in seconds.
DEFAULT_TIMEOUT=30
# The number of concurrent provider requests to make.
MAX_CONCURRENT_REQUESTS=5
MAX_CONCURRENT_REQUESTS=1
# The number of results from a provider that triggers the "large entity" grouping.
LARGE_ENTITY_THRESHOLD=100
# The number of times to retry a target if a provider fails.
MAX_RETRIES_PER_TARGET=8
# How long cached provider responses are stored (in hours).
CACHE_EXPIRY_HOURS=12
CACHE_TIMEOUT_HOURS=12
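These values are consumed at startup through environment variables. A minimal sketch of how such settings are typically read, assuming `python-dotenv` (which is in the dependency list) populates the environment:

```python
import os

from dotenv import load_dotenv

# Populate os.environ from a .env file in the working directory.
load_dotenv()

# Fall back to the documented defaults when a variable is unset.
max_concurrent = int(os.getenv("MAX_CONCURRENT_REQUESTS", "1"))
cache_timeout_hours = int(os.getenv("CACHE_TIMEOUT_HOURS", "12"))
```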

README.md

@@ -1,14 +1,16 @@
# DNSRecon - Passive Infrastructure Reconnaissance Tool
DNSRecon is an interactive, passive reconnaissance tool designed to map adversary infrastructure. It operates on a "free-by-default" model, ensuring core functionality without subscriptions, while allowing power users to enhance its capabilities with paid API keys.
DNSRecon is an interactive, passive reconnaissance tool designed to map adversary infrastructure. It operates on a "free-by-default" model, ensuring core functionality without subscriptions, while allowing power users to enhance its capabilities with paid API keys. It is aimed at cybersecurity researchers, pentesters, and administrators who want to understand the public footprint of a target domain.
**Current Status: Phase 2 Implementation**
**Repo Link:** [https://git.cc24.dev/mstoeck3/dnsrecon](https://git.cc24.dev/mstoeck3/dnsrecon)
* ✅ Core infrastructure and graph engine
* ✅ Multi-provider support (crt.sh, DNS, Shodan)
* ✅ Session-based multi-user support
* ✅ Real-time web interface with interactive visualization
* ✅ Forensic logging system and JSON export
-----
## Concept and Philosophy
The core philosophy of DNSRecon is to provide a comprehensive and accurate map of a target's infrastructure using only **passive data sources** by default. This means that, out of the box, DNSRecon will not send any traffic to the target's servers. Instead, it queries public and historical data sources to build a picture of the target's online presence. This approach is ideal for researchers and pentesters who want to gather intelligence without alerting the target, and for administrators who want to see what information about their own infrastructure is publicly available.
For power users who require more in-depth information, DNSRecon can be configured to use API keys for services like Shodan, which provides a wealth of information about internet-connected devices. However, this is an optional feature, and the core functionality of the tool will always remain free and passive.
-----
@@ -20,21 +22,45 @@ DNSRecon is an interactive, passive reconnaissance tool designed to map adversar
* **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
* **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
* **Session Management**: Supports concurrent user sessions with isolated scanner instances.
* **Extensible Provider Architecture**: Easily add new data sources to expand the tool's capabilities.
* **Web-Based UI**: An intuitive and interactive web interface for managing scans and visualizing results.
-----
## Installation
## Technical Architecture
DNSRecon is a web-based application built with a modern technology stack:
* **Backend**: The backend is a **Flask** application that provides a REST API for the frontend and manages the scanning process.
* **Scanning Engine**: The core scanning engine is a multi-threaded Python application that uses a provider-based architecture to query different data sources.
* **Session Management**: **Redis** is used for session management, allowing for concurrent user sessions with isolated scanner instances.
* **Data Storage**: The application uses an in-memory graph to store and analyze the relationships between different pieces of information. The graph is built using the **NetworkX** library.
* **Frontend**: The frontend is a single-page application that uses JavaScript to interact with the backend API and visualize the graph.
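Conceptually, each indicator becomes a node and each discovered relationship a directed, confidence-weighted edge. A minimal sketch of the model, assuming attribute names similar to those used in `core/graph_manager.py`:

```python
import networkx as nx

# Directed graph: nodes are indicators, edges are discovered relationships.
g = nx.DiGraph()
g.add_node("example.com", type="domain")
g.add_node("93.184.216.34", type="ip")

# Edges carry a relationship type, a confidence score, and the provider
# that discovered them.
g.add_edge(
    "example.com", "93.184.216.34",
    relationship_type="dns_a_record",
    confidence_score=0.8,
    source_provider="dns",
)
```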
-----
## Data Sources
DNSRecon queries the following data sources:
* **DNS**: Standard DNS lookups (A, AAAA, CNAME, MX, NS, SOA, TXT).
* **crt.sh**: A search interface over certificate transparency logs, providing information about issued SSL/TLS certificates.
* **Shodan**: A search engine for internet-connected devices (requires an API key).
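For example, the DNS lookups can be reproduced with `dnspython`; this is a sketch of the general approach, not the tool's exact code:

```python
import dns.resolver

# Query a few record types for a target domain via the system resolver.
for rtype in ("A", "AAAA", "CNAME", "MX", "NS", "TXT"):
    try:
        answers = dns.resolver.resolve("example.com", rtype)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        continue
    for rdata in answers:
        print(rtype, rdata.to_text())
```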
-----
## Installation and Setup
### Prerequisites
* Python 3.8 or higher
* A modern web browser with JavaScript enabled
* (Recommended) A Linux host for running the application and the optional DNS cache.
* A Linux host for running the application
### 1. Clone the Project
```bash
git clone https://github.com/your-repo/dnsrecon.git
git clone https://git.cc24.dev/mstoeck3/dnsrecon
cd dnsrecon
```
@@ -50,20 +76,18 @@ pip install -r requirements.txt
The `requirements.txt` file contains the following dependencies:
* Flask>=2.3.3
* networkx>=3.1
* requests>=2.31.0
* python-dateutil>=2.8.2
* Werkzeug>=2.3.7
* urllib3>=2.0.0
* dnspython>=2.4.2
* Flask
* networkx
* requests
* python-dateutil
* Werkzeug
* urllib3
* dnspython
* gunicorn
* redis
* python-dotenv
-----
## Configuration
### 3. Configure the Application
DNSRecon is configured using a `.env` file. You can copy the provided example file and edit it to suit your needs:
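For example, assuming the example file ships as `.env.example` (the exact filename may differ):

```bash
cp .env.example .env
# Edit the copy to set API keys and tuning parameters.
nano .env
```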
@@ -91,6 +115,22 @@ The following environment variables are available for configuration:
-----
## Running the Application
For development, you can run the application using the following command:
```bash
python app.py
```
For production, it is recommended to use a more robust server, such as Gunicorn:
```bash
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
```
-----
## Systemd Service
To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.
@@ -145,6 +185,65 @@ sudo systemctl status dnsrecon.service
-----
## Updating the Application
To update the application, first pull the latest changes from the git repository, then wipe the Redis database and the local cache so that no stale session or provider data is carried over.
### 1. Update the Code
```bash
git pull
```
### 2. Wipe the Redis Database
```bash
redis-cli FLUSHALL
```
### 3. Wipe the Local Cache
```bash
rm -rf cache/*
```
### 4. Restart the Service
```bash
sudo systemctl restart dnsrecon.service
```
-----
## Extensibility
DNSRecon is designed to be extensible, and adding new providers is a straightforward process. To add a new provider, create a new Python file in the `providers` directory that inherits from the `BaseProvider` class and implements the following methods (a minimal skeleton is sketched after the list):
* `get_name()`: Return the name of the provider.
* `get_display_name()`: Return a display-friendly name for the provider.
* `requires_api_key()`: Return `True` if the provider requires an API key.
* `get_eligibility()`: Return a dictionary indicating whether the provider can query domains and/or IPs.
* `is_available()`: Return `True` if the provider is available (e.g., if an API key is configured).
* `query_domain(domain)`: Query the provider for information about a domain.
* `query_ip(ip)`: Query the provider for information about an IP address.
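A minimal provider skeleton illustrating this interface (the import path and return shapes are assumptions; consult the `BaseProvider` class in the `providers` directory for the authoritative signatures):

```python
# Hypothetical import path for the base class described above.
from providers.base_provider import BaseProvider


class ExampleProvider(BaseProvider):
    """A sketch of a passive data-source plugin."""

    def get_name(self):
        return "example"

    def get_display_name(self):
        return "Example Provider"

    def requires_api_key(self):
        return False

    def get_eligibility(self):
        # Which target kinds this provider can handle.
        return {"domains": True, "ips": False}

    def is_available(self):
        # No API key required, so the provider is always usable.
        return True

    def query_domain(self, domain):
        # Return the relationships/attributes discovered for the domain.
        return []

    def query_ip(self, ip):
        # This example provider does not handle IP addresses.
        return []
```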
-----
## Unique Capabilities and Limitations
### Unique Capabilities
* **Graph-Based Analysis**: The use of a graph-based data model allows for a more intuitive and powerful analysis of the relationships between different pieces of information.
* **Real-Time Visualization**: The real-time visualization of the graph provides immediate feedback and allows for a more interactive and engaging analysis experience.
* **Session Management**: The session management feature allows multiple users to use the application concurrently without interfering with each other's work.
### Limitations
* **Passive-Only by Default**: While the passive-only approach is a key feature of the tool, it also means that the information it can gather is limited to what is publicly available.
* **No Active Scanning**: The tool does not perform any active scanning, such as port scanning or vulnerability scanning.
-----
## License
This project is licensed under the terms of the **BSD-3-Clause** license.

app.py

@@ -3,53 +3,60 @@
"""
Flask application entry point for DNSRecon web interface.
Provides REST API endpoints and serves the web interface with user session support.
FIXED: Enhanced WebSocket integration with proper connection management.
"""
import json
import traceback
from flask import Flask, render_template, request, jsonify, send_file, session
from datetime import datetime, timezone, timedelta
import io
import os
from core.session_manager import session_manager
from flask_socketio import SocketIO
from config import config
from core.graph_manager import NodeType
from utils.helpers import is_valid_target
from utils.export_manager import export_manager
from decimal import Decimal
app = Flask(__name__)
# Use centralized configuration for Flask settings
socketio = SocketIO(app, cors_allowed_origins="*")
app.config['SECRET_KEY'] = config.flask_secret_key
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=config.flask_permanent_session_lifetime_hours)
def get_user_scanner():
"""
Retrieves the scanner for the current session, or creates a new
session and scanner if one doesn't exist.
FIXED: Retrieves the scanner for the current session with proper socketio management.
"""
# Get current Flask session info for debugging
current_flask_session_id = session.get('dnsrecon_session_id')
# Try to get existing session
if current_flask_session_id:
existing_scanner = session_manager.get_session(current_flask_session_id)
if existing_scanner:
# FIXED: Ensure socketio is properly maintained
existing_scanner.socketio = socketio
print(f"✓ Retrieved existing scanner for session {current_flask_session_id[:8]}... with socketio restored")
return current_flask_session_id, existing_scanner
# Create new session if none exists
print("Creating new session as none was found...")
new_session_id = session_manager.create_session()
# FIXED: Register socketio connection when creating new session
new_session_id = session_manager.create_session(socketio)
new_scanner = session_manager.get_session(new_session_id)
if not new_scanner:
raise Exception("Failed to create new scanner session")
# Store in Flask session
# FIXED: Ensure new scanner has socketio reference and register the connection
new_scanner.socketio = socketio
session_manager.register_socketio_connection(new_session_id, socketio)
session['dnsrecon_session_id'] = new_session_id
session.permanent = True
print(f"✓ Created new scanner for session {new_session_id[:8]}... with socketio registered")
return new_session_id, new_scanner
@app.route('/')
def index():
"""Serve the main web interface."""
@@ -59,11 +66,8 @@ def index():
@app.route('/api/scan/start', methods=['POST'])
def start_scan():
"""
Start a new reconnaissance scan. Creates a new isolated scanner if
clear_graph is true, otherwise adds to the existing one.
FIXED: Starts a new reconnaissance scan with proper socketio management.
"""
print("=== API: /api/scan/start called ===")
try:
data = request.get_json()
if not data or 'target' not in data:
@@ -72,47 +76,36 @@ def start_scan():
target = data['target'].strip()
max_depth = data.get('max_depth', config.default_recursion_depth)
clear_graph = data.get('clear_graph', True)
force_rescan_target = data.get('force_rescan_target', None) # **FIX**: Get the new parameter
force_rescan_target = data.get('force_rescan_target', None)
print(f"Parsed - target: '{target}', max_depth: {max_depth}, clear_graph: {clear_graph}, force_rescan: {force_rescan_target}")
# Validation
if not target:
return jsonify({'success': False, 'error': 'Target cannot be empty'}), 400
if not is_valid_target(target):
return jsonify({'success': False, 'error': 'Invalid target format. Please enter a valid domain or IP address.'}), 400
return jsonify({'success': False, 'error': 'Invalid target format.'}), 400
if not isinstance(max_depth, int) or not 1 <= max_depth <= 5:
return jsonify({'success': False, 'error': 'Max depth must be an integer between 1 and 5'}), 400
user_session_id, scanner = None, None
if clear_graph:
print("Clear graph requested: Creating a new, isolated scanner session.")
old_session_id = session.get('dnsrecon_session_id')
if old_session_id:
session_manager.terminate_session(old_session_id)
user_session_id = session_manager.create_session()
session['dnsrecon_session_id'] = user_session_id
session.permanent = True
scanner = session_manager.get_session(user_session_id)
else:
print("Adding to existing graph: Reusing the current scanner session.")
user_session_id, scanner = get_user_scanner()
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'Failed to get or create a scanner instance.'}), 500
return jsonify({'success': False, 'error': 'Failed to get scanner instance.'}), 500
print(f"Using scanner {id(scanner)} in session {user_session_id}")
# FIXED: Ensure scanner has socketio reference and is registered
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
print(f"🚀 Starting scan for {target} with socketio enabled and registered")
success = scanner.start_scan(target, max_depth, clear_graph=clear_graph, force_rescan_target=force_rescan_target) # **FIX**: Pass the new parameter
success = scanner.start_scan(target, max_depth, clear_graph=clear_graph, force_rescan_target=force_rescan_target)
if success:
# Update session with socketio-enabled scanner
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({
'success': True,
'message': 'Scan started successfully',
'message': 'Reconnaissance scan started successfully',
'scan_id': scanner.logger.session_id,
'user_session_id': user_session_id,
'user_session_id': user_session_id
})
else:
return jsonify({
@@ -121,170 +114,152 @@ def start_scan():
}), 409
except Exception as e:
print(f"ERROR: Exception in start_scan endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/scan/stop', methods=['POST'])
def stop_scan():
"""Stop the current scan with immediate GUI feedback."""
print("=== API: /api/scan/stop called ===")
"""Stop the current scan."""
try:
# Get user-specific scanner
user_session_id, scanner = get_user_scanner()
print(f"Stopping scan for session: {user_session_id}")
if not scanner:
return jsonify({
'success': False,
'error': 'No scanner found for session'
}), 404
return jsonify({'success': False, 'error': 'No scanner found for session'}), 404
# Ensure session ID is set
if not scanner.session_id:
scanner.session_id = user_session_id
# Use the stop mechanism
success = scanner.stop_scan()
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
# Also set the Redis stop signal directly for extra reliability
scanner.stop_scan()
session_manager.set_stop_signal(user_session_id)
# Force immediate status update
session_manager.update_scanner_status(user_session_id, 'stopped')
# Update the full scanner state
session_manager.update_session_scanner(user_session_id, scanner)
print(f"Stop scan completed. Success: {success}, Scanner status: {scanner.status}")
return jsonify({
'success': True,
'message': 'Scan stop requested - termination initiated',
'user_session_id': user_session_id,
'scanner_status': scanner.status,
'stop_method': 'cross_process'
'message': 'Scan stop requested',
'user_session_id': user_session_id
})
except Exception as e:
print(f"ERROR: Exception in stop_scan endpoint: {e}")
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}'
}), 500
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/scan/status', methods=['GET'])
@socketio.on('connect')
def handle_connect():
"""
FIXED: Handle WebSocket connection with proper session management.
"""
print(f'✓ WebSocket client connected: {request.sid}')
# Try to restore existing session connection
current_flask_session_id = session.get('dnsrecon_session_id')
if current_flask_session_id:
# Register this socketio connection for the existing session
session_manager.register_socketio_connection(current_flask_session_id, socketio)
print(f'✓ Registered WebSocket for existing session: {current_flask_session_id[:8]}...')
# Immediately send current status to new connection
get_scan_status()
@socketio.on('disconnect')
def handle_disconnect():
"""
FIXED: Handle WebSocket disconnection gracefully.
"""
print(f'✗ WebSocket client disconnected: {request.sid}')
# Note: We don't immediately remove the socketio connection from session_manager
# because the user might reconnect. The cleanup will happen during session cleanup.
@socketio.on('get_status')
def get_scan_status():
"""Get current scan status with error handling."""
"""
FIXED: Get current scan status and emit real-time update with proper error handling.
"""
try:
# Get user-specific scanner
user_session_id, scanner = get_user_scanner()
if not scanner:
# Return default idle status if no scanner
return jsonify({
'success': True,
'status': {
'status': 'idle',
'target_domain': None,
'current_depth': 0,
'max_depth': 0,
'current_indicator': '',
'total_indicators_found': 0,
'indicators_processed': 0,
'progress_percentage': 0.0,
'enabled_providers': [],
'graph_statistics': {},
'user_session_id': user_session_id
}
})
status = {
'status': 'idle',
'target_domain': None,
'current_depth': 0,
'max_depth': 0,
'progress_percentage': 0.0,
'user_session_id': user_session_id,
'graph': {'nodes': [], 'edges': [], 'statistics': {'node_count': 0, 'edge_count': 0}}
}
print(f"📡 Emitting idle status for session {user_session_id[:8] if user_session_id else 'none'}...")
else:
if not scanner.session_id:
scanner.session_id = user_session_id
# FIXED: Ensure scanner has socketio reference for future updates
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
status = scanner.get_scan_status()
status['user_session_id'] = user_session_id
print(f"📡 Emitting status update: {status['status']} - "
f"Nodes: {len(status.get('graph', {}).get('nodes', []))}, "
f"Edges: {len(status.get('graph', {}).get('edges', []))}")
# Update session with socketio-enabled scanner
session_manager.update_session_scanner(user_session_id, scanner)
# Ensure session ID is set
if not scanner.session_id:
scanner.session_id = user_session_id
status = scanner.get_scan_status()
status['user_session_id'] = user_session_id
# Additional debug info
status['debug_info'] = {
'scanner_object_id': id(scanner),
'session_id_set': bool(scanner.session_id),
'has_scan_thread': bool(scanner.scan_thread and scanner.scan_thread.is_alive())
}
return jsonify({
'success': True,
'status': status
})
socketio.emit('scan_update', status)
except Exception as e:
print(f"ERROR: Exception in get_scan_status endpoint: {e}")
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}',
'fallback_status': {
'status': 'error',
'target_domain': None,
'current_depth': 0,
'max_depth': 0,
'progress_percentage': 0.0
}
}), 500
error_status = {
'status': 'error',
'message': 'Failed to get status',
'graph': {'nodes': [], 'edges': [], 'statistics': {'node_count': 0, 'edge_count': 0}}
}
print(f"⚠️ Error getting status, emitting error status")
socketio.emit('scan_update', error_status)
@app.route('/api/graph', methods=['GET'])
def get_graph_data():
"""Get current graph data with error handling."""
"""Get current graph data."""
try:
# Get user-specific scanner
user_session_id, scanner = get_user_scanner()
if not scanner:
# Return empty graph if no scanner
return jsonify({
'success': True,
'graph': {
'nodes': [],
'edges': [],
'statistics': {
'node_count': 0,
'edge_count': 0,
'creation_time': datetime.now(timezone.utc).isoformat(),
'last_modified': datetime.now(timezone.utc).isoformat()
}
},
'user_session_id': user_session_id
})
empty_graph = {
'nodes': [], 'edges': [],
'statistics': {'node_count': 0, 'edge_count': 0}
}
graph_data = scanner.get_graph_data()
return jsonify({
'success': True,
'graph': graph_data,
'user_session_id': user_session_id
})
if not scanner:
return jsonify({'success': True, 'graph': empty_graph, 'user_session_id': user_session_id})
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
graph_data = scanner.get_graph_data() or empty_graph
return jsonify({'success': True, 'graph': graph_data, 'user_session_id': user_session_id})
except Exception as e:
print(f"ERROR: Exception in get_graph_data endpoint: {e}")
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}',
'fallback_graph': {
'nodes': [],
'edges': [],
'statistics': {'node_count': 0, 'edge_count': 0}
}
'success': False, 'error': f'Internal server error: {str(e)}',
'fallback_graph': {'nodes': [], 'edges': [], 'statistics': {}}
}), 500
@app.route('/api/graph/large-entity/extract', methods=['POST'])
def extract_from_large_entity():
"""Extract a node from a large entity, making it a standalone node."""
"""Extract a node from a large entity."""
try:
data = request.get_json()
large_entity_id = data.get('large_entity_id')
@@ -297,6 +272,10 @@ def extract_from_large_entity():
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
success = scanner.extract_node_from_large_entity(large_entity_id, node_id)
if success:
@@ -306,29 +285,30 @@ def extract_from_large_entity():
return jsonify({'success': False, 'error': f'Failed to extract node {node_id}.'}), 500
except Exception as e:
print(f"ERROR: Exception in extract_from_large_entity endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/graph/node/<node_id>', methods=['DELETE'])
def delete_graph_node(node_id):
"""Delete a node from the graph for the current user session."""
"""Delete a node from the graph."""
try:
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
success = scanner.graph.remove_node(node_id)
if success:
# Persist the change
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({'success': True, 'message': f'Node {node_id} deleted successfully.'})
else:
return jsonify({'success': False, 'error': f'Node {node_id} not found in graph.'}), 404
return jsonify({'success': False, 'error': f'Node {node_id} not found.'}), 404
except Exception as e:
print(f"ERROR: Exception in delete_graph_node endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@@ -345,11 +325,14 @@ def revert_graph_action():
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
action_type = data['type']
action_data = data['data']
if action_type == 'delete':
# Re-add the node
node_to_add = action_data.get('node')
if node_to_add:
scanner.graph.add_node(
@@ -360,131 +343,168 @@ def revert_graph_action():
metadata=node_to_add.get('metadata')
)
# Re-add the edges
edges_to_add = action_data.get('edges', [])
for edge in edges_to_add:
# Add edge only if both nodes exist to prevent errors
if scanner.graph.graph.has_node(edge['from']) and scanner.graph.graph.has_node(edge['to']):
scanner.graph.add_edge(
source_id=edge['from'],
target_id=edge['to'],
source_id=edge['from'], target_id=edge['to'],
relationship_type=edge['metadata']['relationship_type'],
confidence_score=edge['metadata']['confidence_score'],
source_provider=edge['metadata']['source_provider'],
raw_data=edge.get('raw_data', {})
)
# Persist the change
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({'success': True, 'message': 'Delete action reverted successfully.'})
return jsonify({'success': False, 'error': f'Unknown revert action type: {action_type}'}), 400
except Exception as e:
print(f"ERROR: Exception in revert_graph_action endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/export', methods=['GET'])
def export_results():
"""Export complete scan results as downloadable JSON for the user session."""
"""Export scan results as a JSON file with improved error handling."""
try:
# Get user-specific scanner
user_session_id, scanner = get_user_scanner()
# Get complete results
results = scanner.export_results()
if not scanner:
return jsonify({'success': False, 'error': 'No active scanner session found'}), 404
# Add session information to export
results['export_metadata'] = {
'user_session_id': user_session_id,
'export_timestamp': datetime.now(timezone.utc).isoformat(),
'export_type': 'user_session_results'
}
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
# Create filename with timestamp
timestamp = datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')
target = scanner.current_target or 'unknown'
filename = f"dnsrecon_{target}_{timestamp}_{user_session_id[:8]}.json"
# Get export data using the new export manager
try:
results = export_manager.export_scan_results(scanner)
except Exception as e:
return jsonify({'success': False, 'error': f'Failed to gather export data: {str(e)}'}), 500
# Create in-memory file
json_data = json.dumps(results, indent=2, ensure_ascii=False)
# Add user session metadata
results['export_metadata']['user_session_id'] = user_session_id
results['export_metadata']['forensic_integrity'] = 'maintained'
# Generate filename
filename = export_manager.generate_filename(
target=scanner.current_target or 'unknown',
export_type='json'
)
# Serialize with export manager
try:
json_data = export_manager.serialize_to_json(results)
except Exception as e:
return jsonify({
'success': False,
'error': f'JSON serialization failed: {str(e)}'
}), 500
# Create file object
file_obj = io.BytesIO(json_data.encode('utf-8'))
return send_file(
file_obj,
as_attachment=True,
download_name=filename,
mimetype='application/json'
)
except Exception as e:
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Export failed: {str(e)}',
'error_type': type(e).__name__
}), 500
@app.route('/api/export/targets', methods=['GET'])
def export_targets():
"""Export all discovered targets as a TXT file."""
try:
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'No active scanner session found'}), 404
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
# Use export manager for targets export
targets_txt = export_manager.export_targets_list(scanner)
# Generate filename using export manager
filename = export_manager.generate_filename(
target=scanner.current_target or 'unknown',
export_type='targets'
)
file_obj = io.BytesIO(targets_txt.encode('utf-8'))
return send_file(
file_obj,
as_attachment=True,
download_name=filename,
mimetype='application/json'
mimetype='text/plain'
)
except Exception as e:
print(f"ERROR: Exception in export_results endpoint: {e}")
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Export failed: {str(e)}'
}), 500
return jsonify({'success': False, 'error': f'Export failed: {str(e)}'}), 500
@app.route('/api/providers', methods=['GET'])
def get_providers():
"""Get information about available providers for the user session."""
@app.route('/api/export/summary', methods=['GET'])
def export_summary():
"""Export an executive summary as a TXT file."""
try:
# Get user-specific scanner
user_session_id, scanner = get_user_scanner()
if scanner:
# Updated debug print to be consistent with the new progress bar logic
completed_tasks = scanner.indicators_completed
total_tasks = scanner.total_tasks_ever_enqueued
print(f"DEBUG: Task Progress - Completed: {completed_tasks}, Total Enqueued: {total_tasks}")
else:
print("DEBUG: No active scanner session found.")
if not scanner:
return jsonify({'success': False, 'error': 'No active scanner session found'}), 404
provider_info = scanner.get_provider_info()
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
# Use export manager for summary generation
summary_txt = export_manager.generate_executive_summary(scanner)
# Generate filename using export manager
filename = export_manager.generate_filename(
target=scanner.current_target or 'unknown',
export_type='summary'
)
return jsonify({
'success': True,
'providers': provider_info,
'user_session_id': user_session_id
})
file_obj = io.BytesIO(summary_txt.encode('utf-8'))
return send_file(
file_obj,
as_attachment=True,
download_name=filename,
mimetype='text/plain'
)
except Exception as e:
print(f"ERROR: Exception in get_providers endpoint: {e}")
traceback.print_exc()
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}'
}), 500
return jsonify({'success': False, 'error': f'Export failed: {str(e)}'}), 500
@app.route('/api/config/api-keys', methods=['POST'])
def set_api_keys():
"""
Set API keys for providers for the user session only.
"""
"""Set API keys for the current session."""
try:
data = request.get_json()
if data is None:
return jsonify({
'success': False,
'error': 'No API keys provided'
}), 400
return jsonify({'success': False, 'error': 'No API keys provided'}), 400
# Get user-specific scanner and config
user_session_id, scanner = get_user_scanner()
session_config = scanner.config
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
updated_providers = []
# Iterate over the API keys provided in the request data
for provider_name, api_key in data.items():
# This allows us to both set and clear keys. The config
# handles enabling/disabling based on whether the key is empty.
api_key_value = str(api_key or '').strip()
success = session_config.set_api_key(provider_name.lower(), api_key_value)
@@ -492,63 +512,146 @@ def set_api_keys():
updated_providers.append(provider_name)
if updated_providers:
# Reinitialize scanner providers to apply the new keys
scanner._initialize_providers()
# Persist the updated scanner object back to the user's session
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({
'success': True,
'message': f'API keys updated for session {user_session_id}: {", ".join(updated_providers)}',
'updated_providers': updated_providers,
'message': f'API keys updated for: {", ".join(updated_providers)}',
'user_session_id': user_session_id
})
else:
return jsonify({
'success': False,
'error': 'No valid API keys were provided or provider names were incorrect.'
}), 400
return jsonify({'success': False, 'error': 'No valid API keys were provided.'}), 400
except Exception as e:
print(f"ERROR: Exception in set_api_keys endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/providers', methods=['GET'])
def get_providers():
"""Get enhanced information about available providers including API key sources."""
try:
user_session_id, scanner = get_user_scanner()
base_provider_info = scanner.get_provider_info()
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
# Enhance provider info with API key source information
enhanced_provider_info = {}
for provider_name, info in base_provider_info.items():
enhanced_info = dict(info) # Copy base info
if info['requires_api_key']:
# Determine API key source and configuration status
api_key = scanner.config.get_api_key(provider_name)
backend_api_key = os.getenv(f'{provider_name.upper()}_API_KEY')
if backend_api_key:
# API key configured via backend/environment
enhanced_info.update({
'api_key_configured': True,
'api_key_source': 'backend',
'api_key_help': f'API key configured via environment variable {provider_name.upper()}_API_KEY'
})
elif api_key:
# API key configured via web interface
enhanced_info.update({
'api_key_configured': True,
'api_key_source': 'frontend',
'api_key_help': f'API key set via web interface (session-only)'
})
else:
# No API key configured
enhanced_info.update({
'api_key_configured': False,
'api_key_source': None,
'api_key_help': f'Requires API key to enable {info["display_name"]} integration'
})
else:
# Provider doesn't require API key
enhanced_info.update({
'api_key_configured': True, # Always "configured" for non-API providers
'api_key_source': None,
'api_key_help': None
})
enhanced_provider_info[provider_name] = enhanced_info
return jsonify({
'success': False,
'error': f'Internal server error: {str(e)}'
}), 500
'success': True,
'providers': enhanced_provider_info,
'user_session_id': user_session_id
})
except Exception as e:
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/config/providers', methods=['POST'])
def configure_providers():
"""Configure provider settings (enable/disable)."""
try:
data = request.get_json()
if data is None:
return jsonify({'success': False, 'error': 'No provider settings provided'}), 400
user_session_id, scanner = get_user_scanner()
session_config = scanner.config
# FIXED: Ensure scanner has socketio reference
scanner.socketio = socketio
session_manager.register_socketio_connection(user_session_id, socketio)
updated_providers = []
for provider_name, settings in data.items():
provider_name_clean = provider_name.lower().strip()
if 'enabled' in settings:
# Update the enabled state in session config
session_config.enabled_providers[provider_name_clean] = settings['enabled']
updated_providers.append(provider_name_clean)
if updated_providers:
# Reinitialize providers with new settings
scanner._initialize_providers()
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({
'success': True,
'message': f'Provider settings updated for: {", ".join(updated_providers)}',
'user_session_id': user_session_id
})
else:
return jsonify({'success': False, 'error': 'No valid provider settings were provided.'}), 400
except Exception as e:
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.errorhandler(404)
def not_found(error):
"""Handle 404 errors."""
return jsonify({
'success': False,
'error': 'Endpoint not found'
}), 404
return jsonify({'success': False, 'error': 'Endpoint not found'}), 404
@app.errorhandler(500)
def internal_error(error):
"""Handle 500 errors."""
print(f"ERROR: 500 Internal Server Error: {error}")
traceback.print_exc()
return jsonify({
'success': False,
'error': 'Internal server error'
}), 500
return jsonify({'success': False, 'error': 'Internal server error'}), 500
if __name__ == '__main__':
print("Starting DNSRecon Flask application with user session support...")
# Load configuration from environment
config.load_from_env()
# Start Flask application
print(f"Starting server on {config.flask_host}:{config.flask_port}")
app.run(
host=config.flask_host,
port=config.flask_port,
debug=config.flask_debug,
threaded=True
)
print("🚀 Starting DNSRecon with enhanced WebSocket support...")
print(f" Host: {config.flask_host}")
print(f" Port: {config.flask_port}")
print(f" Debug: {config.flask_debug}")
print(" WebSocket: Enhanced connection management enabled")
socketio.run(app, host=config.flask_host, port=config.flask_port, debug=config.flask_debug)

config.py

@@ -21,11 +21,10 @@ class Config:
# --- General Settings ---
self.default_recursion_depth = 2
self.default_timeout = 30
self.max_concurrent_requests = 5
self.default_timeout = 60
self.max_concurrent_requests = 1
self.large_entity_threshold = 100
self.max_retries_per_target = 8
self.cache_expiry_hours = 12
# --- Provider Caching Settings ---
self.cache_timeout_hours = 6 # Provider-specific cache timeout
@@ -34,14 +33,16 @@ class Config:
self.rate_limits = {
'crtsh': 5,
'shodan': 60,
'dns': 100
'dns': 100,
'correlation': 0 # Set to 0 to make sure correlations run last
}
# --- Provider Settings ---
self.enabled_providers = {
'crtsh': True,
'dns': True,
'shodan': False
'shodan': False,
'correlation': True # Enable the new provider by default
}
# --- Logging ---
@@ -69,7 +70,6 @@ class Config:
self.max_concurrent_requests = int(os.getenv('MAX_CONCURRENT_REQUESTS', self.max_concurrent_requests))
self.large_entity_threshold = int(os.getenv('LARGE_ENTITY_THRESHOLD', self.large_entity_threshold))
self.max_retries_per_target = int(os.getenv('MAX_RETRIES_PER_TARGET', self.max_retries_per_target))
self.cache_expiry_hours = int(os.getenv('CACHE_EXPIRY_HOURS', self.cache_expiry_hours))
self.cache_timeout_hours = int(os.getenv('CACHE_TIMEOUT_HOURS', self.cache_timeout_hours))
# Override Flask and session settings
@@ -87,6 +87,60 @@ class Config:
self.enabled_providers[provider] = True
return True
def set_provider_enabled(self, provider: str, enabled: bool) -> bool:
"""
Set provider enabled status for the session.
Args:
provider: Provider name
enabled: Whether the provider should be enabled
Returns:
True if the setting was applied successfully
"""
provider_key = provider.lower()
self.enabled_providers[provider_key] = enabled
return True
def get_provider_enabled(self, provider: str) -> bool:
"""
Get provider enabled status.
Args:
provider: Provider name
Returns:
True if the provider is enabled
"""
provider_key = provider.lower()
return self.enabled_providers.get(provider_key, True) # Default to enabled
def bulk_set_provider_settings(self, provider_settings: dict) -> dict:
"""
Set multiple provider settings at once.
Args:
provider_settings: Dict of provider_name -> {'enabled': bool, ...}
Returns:
Dict with results for each provider
"""
results = {}
for provider_name, settings in provider_settings.items():
provider_key = provider_name.lower()
try:
if 'enabled' in settings:
self.enabled_providers[provider_key] = settings['enabled']
results[provider_key] = {'success': True, 'enabled': settings['enabled']}
else:
results[provider_key] = {'success': False, 'error': 'No enabled setting provided'}
except Exception as e:
results[provider_key] = {'success': False, 'error': str(e)}
return results
def get_api_key(self, provider: str) -> Optional[str]:
"""Get API key for a provider."""
return self.api_keys.get(provider)

core/graph_manager.py

@@ -4,7 +4,7 @@
Graph data model for DNSRecon using NetworkX.
Manages in-memory graph storage with confidence scoring and forensic metadata.
Now fully compatible with the unified ProviderResult data model.
UPDATED: Fixed certificate styling and correlation edge labeling.
FIXED: Added proper pickle support to prevent weakref serialization errors.
"""
import re
from datetime import datetime, timezone
@@ -18,7 +18,8 @@ class NodeType(Enum):
"""Enumeration of supported node types."""
DOMAIN = "domain"
IP = "ip"
ASN = "asn"
ISP = "isp"
CA = "ca"
LARGE_ENTITY = "large_entity"
CORRELATION_OBJECT = "correlation_object"
@@ -31,6 +32,7 @@ class GraphManager:
Thread-safe graph manager for DNSRecon infrastructure mapping.
Uses NetworkX for in-memory graph storage with confidence scoring.
Compatible with unified ProviderResult data model.
FIXED: Added proper pickle support to handle NetworkX graph serialization.
"""
def __init__(self):
@@ -38,228 +40,57 @@ class GraphManager:
self.graph = nx.DiGraph()
self.creation_time = datetime.now(timezone.utc).isoformat()
self.last_modified = self.creation_time
self.correlation_index = {}
# Compile regex for date filtering for efficiency
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
self.EXCLUDED_KEYS = ['confidence', 'provider', 'timestamp', 'type','crtsh_cert_validity_period_days']
def __getstate__(self):
"""Prepare GraphManager for pickling, excluding compiled regex."""
"""Prepare GraphManager for pickling by converting NetworkX graph to serializable format."""
state = self.__dict__.copy()
# Compiled regex patterns are not always picklable
if 'date_pattern' in state:
del state['date_pattern']
# Convert NetworkX graph to a serializable format
if hasattr(self, 'graph') and self.graph:
# Extract all nodes with their data
nodes_data = {}
for node_id, attrs in self.graph.nodes(data=True):
nodes_data[node_id] = dict(attrs)
# Extract all edges with their data
edges_data = []
for source, target, attrs in self.graph.edges(data=True):
edges_data.append({
'source': source,
'target': target,
'attributes': dict(attrs)
})
# Replace the NetworkX graph with serializable data
state['_graph_nodes'] = nodes_data
state['_graph_edges'] = edges_data
del state['graph']
return state
def __setstate__(self, state):
"""Restore GraphManager state and recompile regex."""
"""Restore GraphManager after unpickling by reconstructing NetworkX graph."""
# Restore basic attributes
self.__dict__.update(state)
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
def process_correlations_for_node(self, node_id: str):
"""
UPDATED: Process correlations for a given node with enhanced tracking.
Now properly tracks which attribute/provider created each correlation.
"""
if not self.graph.has_node(node_id):
return
node_attributes = self.graph.nodes[node_id].get('attributes', [])
# Process each attribute for potential correlations
for attr in node_attributes:
attr_name = attr.get('name')
attr_value = attr.get('value')
attr_provider = attr.get('provider', 'unknown')
# Skip excluded attributes and invalid values
if attr_name in self.EXCLUDED_KEYS or not isinstance(attr_value, (str, int, float, bool)) or attr_value is None:
continue
if isinstance(attr_value, bool):
continue
if isinstance(attr_value, str) and (len(attr_value) < 4 or self.date_pattern.match(attr_value)):
continue
# Initialize correlation tracking for this value
if attr_value not in self.correlation_index:
self.correlation_index[attr_value] = {
'nodes': set(),
'sources': [] # Track which provider/attribute combinations contributed
}
# Add this node and source information
self.correlation_index[attr_value]['nodes'].add(node_id)
# Track the source of this correlation value
source_info = {
'node_id': node_id,
'provider': attr_provider,
'attribute': attr_name,
'path': f"{attr_provider}_{attr_name}"
}
# Add source if not already present (avoid duplicates)
existing_sources = [s for s in self.correlation_index[attr_value]['sources']
if s['node_id'] == node_id and s['path'] == source_info['path']]
if not existing_sources:
self.correlation_index[attr_value]['sources'].append(source_info)
# Create correlation node if we have multiple nodes with this value
if len(self.correlation_index[attr_value]['nodes']) > 1:
self._create_enhanced_correlation_node_and_edges(attr_value, self.correlation_index[attr_value])
def _create_enhanced_correlation_node_and_edges(self, value, correlation_data):
"""
UPDATED: Create correlation node and edges with detailed provider tracking.
"""
correlation_node_id = f"corr_{hash(str(value)) & 0x7FFFFFFF}"
nodes = correlation_data['nodes']
sources = correlation_data['sources']
# Reconstruct NetworkX graph from serializable data
self.graph = nx.DiGraph()
# Create or update correlation node
if not self.graph.has_node(correlation_node_id):
# Determine the most common provider/attribute combination
provider_counts = {}
for source in sources:
key = f"{source['provider']}_{source['attribute']}"
provider_counts[key] = provider_counts.get(key, 0) + 1
# Use the most common provider/attribute as the primary label
primary_source = max(provider_counts.items(), key=lambda x: x[1])[0] if provider_counts else "unknown_correlation"
metadata = {
'value': value,
'correlated_nodes': list(nodes),
'sources': sources,
'primary_source': primary_source,
'correlation_count': len(nodes)
}
self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT, metadata=metadata)
print(f"Created correlation node {correlation_node_id} for value '{value}' with {len(nodes)} nodes")
# Create edges from each node to the correlation node
for source in sources:
node_id = source['node_id']
provider = source['provider']
attribute = source['attribute']
if self.graph.has_node(node_id) and not self.graph.has_edge(node_id, correlation_node_id):
# Format relationship label as "corr_provider_attribute"
relationship_label = f"corr_{provider}_{attribute}"
self.add_edge(
source_id=node_id,
target_id=correlation_node_id,
relationship_type=relationship_label,
confidence_score=0.9,
source_provider=provider,
raw_data={
'correlation_value': value,
'original_attribute': attribute,
'correlation_type': 'attribute_matching'
}
# Restore nodes
if hasattr(self, '_graph_nodes'):
for node_id, attrs in self._graph_nodes.items():
self.graph.add_node(node_id, **attrs)
del self._graph_nodes
# Restore edges
if hasattr(self, '_graph_edges'):
for edge_data in self._graph_edges:
self.graph.add_edge(
edge_data['source'],
edge_data['target'],
**edge_data['attributes']
)
print(f"Added correlation edge: {node_id} -> {correlation_node_id} ({relationship_label})")
def _has_direct_edge_bidirectional(self, node_a: str, node_b: str) -> bool:
"""
Check if there's a direct edge between two nodes in either direction.
Returns True if node_a→node_b OR node_b→node_a exists.
"""
return (self.graph.has_edge(node_a, node_b) or
self.graph.has_edge(node_b, node_a))
def _correlation_value_matches_existing_node(self, correlation_value: str) -> bool:
"""
Check if correlation value contains any existing node ID as substring.
Returns True if match found (correlation node should NOT be created).
"""
correlation_str = str(correlation_value).lower()
# Check against all existing nodes
for existing_node_id in self.graph.nodes():
if existing_node_id.lower() in correlation_str:
return True
return False
def _find_correlation_nodes_with_same_pattern(self, node_set: set) -> List[str]:
"""
Find existing correlation nodes that have the exact same pattern of connected nodes.
Returns list of correlation node IDs with matching patterns.
"""
correlation_nodes = self.get_nodes_by_type(NodeType.CORRELATION_OBJECT)
matching_nodes = []
for corr_node_id in correlation_nodes:
# Get all nodes connected to this correlation node
connected_nodes = set()
# Add all predecessors (nodes pointing TO the correlation node)
connected_nodes.update(self.graph.predecessors(corr_node_id))
# Add all successors (nodes pointed TO by the correlation node)
connected_nodes.update(self.graph.successors(corr_node_id))
# Check if the pattern matches exactly
if connected_nodes == node_set:
matching_nodes.append(corr_node_id)
return matching_nodes
def _merge_correlation_values(self, target_node_id: str, new_value: Any, corr_data: Dict) -> None:
"""
Merge a new correlation value into an existing correlation node.
Uses same logic as large entity merging.
"""
if not self.graph.has_node(target_node_id):
return
target_metadata = self.graph.nodes[target_node_id]['metadata']
# Get existing values (ensure it's a list)
existing_values = target_metadata.get('values', [])
if not isinstance(existing_values, list):
existing_values = [existing_values]
# Add new value if not already present
if new_value not in existing_values:
existing_values.append(new_value)
# Merge sources
existing_sources = target_metadata.get('sources', [])
new_sources = corr_data.get('sources', [])
# Create set of unique sources based on (node_id, path) tuples
source_set = set()
for source in existing_sources + new_sources:
source_tuple = (source['node_id'], source.get('path', ''))
source_set.add(source_tuple)
# Convert back to list of dictionaries
merged_sources = [{'node_id': nid, 'path': path} for nid, path in source_set]
# Update metadata
target_metadata.update({
'values': existing_values,
'sources': merged_sources,
'correlated_nodes': list(set(target_metadata.get('correlated_nodes', []) + corr_data.get('nodes', []))),
'merge_count': len(existing_values),
'last_merge_timestamp': datetime.now(timezone.utc).isoformat()
})
# Update description to reflect merged nature
value_count = len(existing_values)
node_count = len(target_metadata['correlated_nodes'])
self.graph.nodes[target_node_id]['description'] = (
f"Correlation container with {value_count} merged values "
f"across {node_count} nodes"
)
del self._graph_edges
def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[List[Dict[str, Any]]] = None,
description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
@@ -303,18 +134,18 @@ class GraphManager:
return is_new_node
def add_edge(self, source_id: str, target_id: str, relationship_type: str,
confidence_score: float = 0.5, source_provider: str = "unknown",
raw_data: Optional[Dict[str, Any]] = None) -> bool:
"""Add or update an edge between two nodes, ensuring nodes exist."""
confidence_score: float = 0.5, source_provider: str = "unknown",
raw_data: Optional[Dict[str, Any]] = None) -> bool:
"""
UPDATED: Add or update an edge between two nodes with raw relationship labels.
"""
if not self.graph.has_node(source_id) or not self.graph.has_node(target_id):
return False
new_confidence = confidence_score
if relationship_type.startswith("c_"):
edge_label = relationship_type
else:
edge_label = f"{source_provider}_{relationship_type}"
# UPDATED: Use raw relationship type - no formatting
edge_label = relationship_type
if self.graph.has_edge(source_id, target_id):
# If edge exists, update confidence if the new score is higher.
@@ -324,7 +155,7 @@ class GraphManager:
self.graph.edges[source_id, target_id]['updated_by'] = source_provider
return False
# Add a new edge with all attributes.
# Add a new edge with raw attributes
self.graph.add_edge(source_id, target_id,
relationship_type=edge_label,
confidence_score=new_confidence,
@@ -333,30 +164,6 @@ class GraphManager:
raw_data=raw_data or {})
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
def extract_node_from_large_entity(self, large_entity_id: str, node_id_to_extract: str) -> bool:
"""
Removes a node from a large entity's internal lists and updates its count.
This prepares the large entity for the node's promotion to a regular node.
"""
if not self.graph.has_node(large_entity_id):
return False
node_data = self.graph.nodes[large_entity_id]
attributes = node_data.get('attributes', {})
# Remove from the list of member nodes
if 'nodes' in attributes and node_id_to_extract in attributes['nodes']:
attributes['nodes'].remove(node_id_to_extract)
# Update the count
attributes['count'] = len(attributes['nodes'])
else:
# This can happen if the node was already extracted, which is not an error.
print(f"Warning: Node {node_id_to_extract} not found in the 'nodes' list of {large_entity_id}.")
return True # Proceed as if successful
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
def remove_node(self, node_id: str) -> bool:
"""Remove a node and its connected edges from the graph."""
@@ -365,29 +172,7 @@ class GraphManager:
# Remove node from the graph (NetworkX handles removing connected edges)
self.graph.remove_node(node_id)
# Clean up the correlation index
keys_to_delete = []
for value, data in self.correlation_index.items():
if isinstance(data, dict) and 'nodes' in data:
# Updated correlation structure
if node_id in data['nodes']:
data['nodes'].discard(node_id)
# Remove sources for this node
data['sources'] = [s for s in data['sources'] if s['node_id'] != node_id]
if not data['nodes']: # If no other nodes are associated, remove it
keys_to_delete.append(value)
else:
# Legacy correlation structure (fallback)
if isinstance(data, set) and node_id in data:
data.discard(node_id)
if not data:
keys_to_delete.append(value)
for key in keys_to_delete:
if key in self.correlation_index:
del self.correlation_index[key]
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
@@ -403,12 +188,6 @@ class GraphManager:
"""Get all nodes of a specific type."""
return [n for n, d in self.graph.nodes(data=True) if d.get('type') == node_type.value]
def get_neighbors(self, node_id: str) -> List[str]:
"""Get all unique neighbors (predecessors and successors) for a node."""
if not self.graph.has_node(node_id):
return []
return list(set(self.graph.predecessors(node_id)) | set(self.graph.successors(node_id)))
def get_high_confidence_edges(self, min_confidence: float = 0.8) -> List[Tuple[str, str, Dict]]:
"""Get edges with confidence score above a given threshold."""
return [(u, v, d) for u, v, d in self.graph.edges(data=True)
@@ -417,95 +196,57 @@ class GraphManager:
def get_graph_data(self) -> Dict[str, Any]:
"""
Export graph data formatted for frontend visualization.
UPDATED: Fixed certificate validity styling logic for unified data model.
SIMPLIFIED: No certificate styling - frontend handles all visual styling.
"""
nodes = []
for node_id, attrs in self.graph.nodes(data=True):
node_data = {'id': node_id, 'label': node_id, 'type': attrs.get('type', 'unknown'),
'attributes': attrs.get('attributes', []), # Ensure attributes is a list
'description': attrs.get('description', ''),
'metadata': attrs.get('metadata', {}),
'added_timestamp': attrs.get('added_timestamp')}
# UPDATED: Fixed certificate validity styling logic
node_type = node_data['type']
attributes_list = node_data['attributes']
if node_type == 'domain' and isinstance(attributes_list, list):
# Check for certificate-related attributes
has_certificates = False
has_valid_certificates = False
has_expired_certificates = False
for attr in attributes_list:
attr_name = attr.get('name', '').lower()
attr_provider = attr.get('provider', '').lower()
attr_value = attr.get('value')
# Look for certificate attributes from crt.sh provider
if attr_provider == 'crtsh' or 'cert' in attr_name:
has_certificates = True
# Check certificate validity
if attr_name == 'cert_is_currently_valid':
if attr_value is True:
has_valid_certificates = True
elif attr_value is False:
has_expired_certificates = True
# Also check for certificate expiry indicators
elif 'expires_soon' in attr_name and attr_value is True:
has_expired_certificates = True
elif 'expired' in attr_name and attr_value is True:
has_expired_certificates = True
# Apply styling based on certificate status
if has_expired_certificates and not has_valid_certificates:
# Red for expired/invalid certificates
node_data['color'] = {'background': '#ff6b6b', 'border': '#cc5555'}
elif not has_certificates:
# Grey for domains with no certificates
node_data['color'] = {'background': '#c7c7c7', 'border': '#999999'}
# Default green styling is handled by the frontend for domains with valid certificates
node_data = {
'id': node_id,
'label': node_id,
'type': attrs.get('type', 'unknown'),
'attributes': attrs.get('attributes', []), # Raw attributes list
'description': attrs.get('description', ''),
'metadata': attrs.get('metadata', {}),
'added_timestamp': attrs.get('added_timestamp'),
'max_depth_reached': attrs.get('metadata', {}).get('max_depth_reached', False)
}
# Add incoming and outgoing edges to node data
if self.graph.has_node(node_id):
node_data['incoming_edges'] = [{'from': u, 'data': d} for u, _, d in self.graph.in_edges(node_id, data=True)]
node_data['outgoing_edges'] = [{'to': v, 'data': d} for _, v, d in self.graph.out_edges(node_id, data=True)]
node_data['incoming_edges'] = [
{'from': u, 'data': d} for u, _, d in self.graph.in_edges(node_id, data=True)
]
node_data['outgoing_edges'] = [
{'to': v, 'data': d} for _, v, d in self.graph.out_edges(node_id, data=True)
]
nodes.append(node_data)
edges = []
for source, target, attrs in self.graph.edges(data=True):
edges.append({'from': source, 'to': target,
'label': attrs.get('relationship_type', ''),
'confidence_score': attrs.get('confidence_score', 0),
'source_provider': attrs.get('source_provider', ''),
'discovery_timestamp': attrs.get('discovery_timestamp')})
edges.append({
'from': source,
'to': target,
'label': attrs.get('relationship_type', ''),
'confidence_score': attrs.get('confidence_score', 0),
'source_provider': attrs.get('source_provider', ''),
'discovery_timestamp': attrs.get('discovery_timestamp')
})
return {
'nodes': nodes, 'edges': edges,
'nodes': nodes,
'edges': edges,
'statistics': self.get_statistics()['basic_metrics']
}
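    # --- Editor's note (illustrative values, not from the diff) ---
    # The payload produced above has this shape:
    # {
    #   'nodes': [{'id': 'example.com', 'label': 'example.com', 'type': 'domain',
    #              'attributes': [], 'description': '', 'metadata': {},
    #              'added_timestamp': '2025-09-20T12:00:00+00:00',
    #              'max_depth_reached': False,
    #              'incoming_edges': [], 'outgoing_edges': [...]}],
    #   'edges': [{'from': 'example.com', 'to': 'sub.example.com',
    #              'label': 'crtsh_san_certificate', 'confidence_score': 0.9,
    #              'source_provider': 'crtsh', 'discovery_timestamp': '...'}],
    #   'statistics': {'total_nodes': 2, 'total_edges': 1,
    #                  'creation_time': '...', 'last_modified': '...'}
    # }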
def export_json(self) -> Dict[str, Any]:
"""Export complete graph data as a JSON-serializable dictionary."""
graph_data = nx.node_link_data(self.graph) # Use NetworkX's built-in robust serializer
return {
'export_metadata': {
'export_timestamp': datetime.now(timezone.utc).isoformat(),
'graph_creation_time': self.creation_time,
'last_modified': self.last_modified,
'total_nodes': self.get_node_count(),
'total_edges': self.get_edge_count(),
'graph_format': 'dnsrecon_v1_unified_model'
},
'graph': graph_data,
'statistics': self.get_statistics()
}
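    # --- Editor's sketch (illustrative, not part of this diff) ---
    # Because export_json() serializes the graph via nx.node_link_data, the graph
    # portion can be rehydrated with NetworkX's matching loader; directedness is
    # restored from the payload itself.
    import networkx as nx

    def rehydrate(export: dict) -> "nx.Graph":
        return nx.node_link_graph(export['graph'])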
def _get_confidence_distribution(self) -> Dict[str, int]:
"""Get distribution of edge confidence scores."""
"""Get distribution of edge confidence scores with empty graph handling."""
distribution = {'high': 0, 'medium': 0, 'low': 0}
# FIXED: Handle empty graph case
if self.get_edge_count() == 0:
return distribution
for _, _, data in self.graph.edges(data=True):
confidence = data.get('confidence_score', 0)
if confidence >= 0.8:
@@ -517,27 +258,46 @@ class GraphManager:
return distribution
def get_statistics(self) -> Dict[str, Any]:
"""Get comprehensive statistics about the graph."""
stats = {'basic_metrics': {'total_nodes': self.get_node_count(),
'total_edges': self.get_edge_count(),
'creation_time': self.creation_time,
'last_modified': self.last_modified},
'node_type_distribution': {}, 'relationship_type_distribution': {},
'confidence_distribution': self._get_confidence_distribution(),
'provider_distribution': {}}
# Calculate distributions
for node_type in NodeType:
stats['node_type_distribution'][node_type.value] = self.get_nodes_by_type(node_type).__len__()
for _, _, data in self.graph.edges(data=True):
rel_type = data.get('relationship_type', 'unknown')
stats['relationship_type_distribution'][rel_type] = stats['relationship_type_distribution'].get(rel_type, 0) + 1
provider = data.get('source_provider', 'unknown')
stats['provider_distribution'][provider] = stats['provider_distribution'].get(provider, 0) + 1
"""Get comprehensive statistics about the graph with proper empty graph handling."""
# FIXED: Handle empty graph case properly
node_count = self.get_node_count()
edge_count = self.get_edge_count()
stats = {
'basic_metrics': {
'total_nodes': node_count,
'total_edges': edge_count,
'creation_time': self.creation_time,
'last_modified': self.last_modified
},
'node_type_distribution': {},
'relationship_type_distribution': {},
'confidence_distribution': self._get_confidence_distribution(),
'provider_distribution': {}
}
# FIXED: Only calculate distributions if we have data
if node_count > 0:
# Calculate node type distributions
for node_type in NodeType:
count = len(self.get_nodes_by_type(node_type))
if count > 0: # Only include types that exist
stats['node_type_distribution'][node_type.value] = count
if edge_count > 0:
# Calculate edge distributions
for _, _, data in self.graph.edges(data=True):
rel_type = data.get('relationship_type', 'unknown')
stats['relationship_type_distribution'][rel_type] = stats['relationship_type_distribution'].get(rel_type, 0) + 1
provider = data.get('source_provider', 'unknown')
stats['provider_distribution'][provider] = stats['provider_distribution'].get(provider, 0) + 1
return stats
def clear(self) -> None:
"""Clear all nodes, edges, and indices from the graph."""
"""Clear all nodes and edges from the graph."""
self.graph.clear()
self.correlation_index.clear()
self.creation_time = datetime.now(timezone.utc).isoformat()
self.last_modified = self.creation_time

View File

@@ -40,6 +40,7 @@ class ForensicLogger:
"""
Thread-safe forensic logging system for DNSRecon.
Maintains detailed audit trail of all reconnaissance activities.
FIXED: Enhanced pickle support to prevent weakref issues in logging handlers.
"""
def __init__(self, session_id: str = ""):
@@ -65,45 +66,74 @@ class ForensicLogger:
'target_domains': set()
}
# Configure standard logger
# Configure standard logger with simple setup to avoid weakrefs
self.logger = logging.getLogger(f'dnsrecon.{self.session_id}')
self.logger.setLevel(logging.INFO)
# Create formatter for structured logging
# Create minimal formatter
formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
# Add console handler if not already present
# Add console handler only if not already present (avoid duplicate handlers)
if not self.logger.handlers:
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
self.logger.addHandler(console_handler)
def __getstate__(self):
"""Prepare ForensicLogger for pickling by excluding unpicklable objects."""
"""
FIXED: Prepare ForensicLogger for pickling by excluding problematic objects.
"""
state = self.__dict__.copy()
# Remove the unpickleable 'logger' attribute
if 'logger' in state:
del state['logger']
if 'lock' in state:
del state['lock']
# Remove potentially unpickleable attributes that may contain weakrefs
unpicklable_attrs = ['logger', 'lock']
for attr in unpicklable_attrs:
if attr in state:
del state[attr]
# Convert sets to lists for JSON serialization compatibility
if 'session_metadata' in state:
metadata = state['session_metadata'].copy()
if 'providers_used' in metadata and isinstance(metadata['providers_used'], set):
metadata['providers_used'] = list(metadata['providers_used'])
if 'target_domains' in metadata and isinstance(metadata['target_domains'], set):
metadata['target_domains'] = list(metadata['target_domains'])
state['session_metadata'] = metadata
return state
def __setstate__(self, state):
"""Restore ForensicLogger after unpickling by reconstructing logger."""
"""
FIXED: Restore ForensicLogger after unpickling by reconstructing components.
"""
self.__dict__.update(state)
# Re-initialize the 'logger' attribute
# Re-initialize threading lock
self.lock = threading.Lock()
# Re-initialize logger with minimal setup
self.logger = logging.getLogger(f'dnsrecon.{self.session_id}')
self.logger.setLevel(logging.INFO)
formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
# Only add handler if not already present
if not self.logger.handlers:
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
self.logger.addHandler(console_handler)
self.lock = threading.Lock()
# Convert lists back to sets if needed
if 'session_metadata' in self.__dict__:
metadata = self.session_metadata
if 'providers_used' in metadata and isinstance(metadata['providers_used'], list):
metadata['providers_used'] = set(metadata['providers_used'])
if 'target_domains' in metadata and isinstance(metadata['target_domains'], list):
metadata['target_domains'] = set(metadata['target_domains'])
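    # --- Editor's sketch (illustrative, not part of this diff) ---
    # The __getstate__/__setstate__ pattern used above, reduced to its essentials:
    # drop objects that cannot cross a pickle boundary, rebuild them on load.
    import logging
    import pickle
    import threading

    class PickleSafe:
        def __init__(self):
            self.lock = threading.Lock()
            self.logger = logging.getLogger('demo')
            self.data = {'providers_used': set()}

        def __getstate__(self):
            state = self.__dict__.copy()
            for attr in ('lock', 'logger'):  # unpicklable / weakref-bearing
                state.pop(attr, None)
            return state

        def __setstate__(self, state):
            self.__dict__.update(state)
            self.lock = threading.Lock()      # rebuild fresh primitives
            self.logger = logging.getLogger('demo')

    obj = pickle.loads(pickle.dumps(PickleSafe()))  # round-trips cleanly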
def _generate_session_id(self) -> str:
"""Generate unique session identifier."""
@@ -143,18 +173,23 @@ class ForensicLogger:
discovery_context=discovery_context
)
self.api_requests.append(api_request)
self.session_metadata['total_requests'] += 1
self.session_metadata['providers_used'].add(provider)
with self.lock:
self.api_requests.append(api_request)
self.session_metadata['total_requests'] += 1
self.session_metadata['providers_used'].add(provider)
if target_indicator:
self.session_metadata['target_domains'].add(target_indicator)
if target_indicator:
self.session_metadata['target_domains'].add(target_indicator)
# Log to standard logger
if error:
self.logger.error(f"API Request Failed - {provider}: {url} - {error}")
else:
self.logger.info(f"API Request - {provider}: {url} - Status: {status_code}")
# Log to standard logger with error handling
try:
if error:
self.logger.error(f"API Request Failed - {provider}: {url}")
else:
self.logger.info(f"API Request - {provider}: {url} - Status: {status_code}")
except Exception:
# If logging fails, continue without breaking the application
pass
def log_relationship_discovery(self, source_node: str, target_node: str,
relationship_type: str, confidence_score: float,
@@ -183,29 +218,44 @@ class ForensicLogger:
discovery_method=discovery_method
)
self.relationships.append(relationship)
self.session_metadata['total_relationships'] += 1
with self.lock:
self.relationships.append(relationship)
self.session_metadata['total_relationships'] += 1
self.logger.info(
f"Relationship Discovered - {source_node} -> {target_node} "
f"({relationship_type}) - Confidence: {confidence_score:.2f} - Provider: {provider}"
)
# Log to standard logger with error handling
try:
self.logger.info(
f"Relationship Discovered - {source_node} -> {target_node} "
f"({relationship_type}) - Confidence: {confidence_score:.2f} - Provider: {provider}"
)
except Exception:
# If logging fails, continue without breaking the application
pass
def log_scan_start(self, target_domain: str, recursion_depth: int,
enabled_providers: List[str]) -> None:
"""Log the start of a reconnaissance scan."""
self.logger.info(f"Scan Started - Target: {target_domain}, Depth: {recursion_depth}")
self.logger.info(f"Enabled Providers: {', '.join(enabled_providers)}")
self.session_metadata['target_domains'].add(target_domain)
try:
self.logger.info(f"Scan Started - Target: {target_domain}, Depth: {recursion_depth}")
self.logger.info(f"Enabled Providers: {', '.join(enabled_providers)}")
with self.lock:
self.session_metadata['target_domains'].add(target_domain)
except Exception:
pass
def log_scan_complete(self) -> None:
"""Log the completion of a reconnaissance scan."""
self.session_metadata['end_time'] = datetime.now(timezone.utc).isoformat()
self.session_metadata['providers_used'] = list(self.session_metadata['providers_used'])
self.session_metadata['target_domains'] = list(self.session_metadata['target_domains'])
with self.lock:
self.session_metadata['end_time'] = datetime.now(timezone.utc).isoformat()
# Convert sets to lists for serialization
self.session_metadata['providers_used'] = list(self.session_metadata['providers_used'])
self.session_metadata['target_domains'] = list(self.session_metadata['target_domains'])
self.logger.info(f"Scan Complete - Session: {self.session_id}")
try:
self.logger.info(f"Scan Complete - Session: {self.session_id}")
except Exception:
pass
def export_audit_trail(self) -> Dict[str, Any]:
"""
@@ -214,12 +264,13 @@ class ForensicLogger:
Returns:
Dictionary containing complete session audit trail
"""
return {
'session_metadata': self.session_metadata.copy(),
'api_requests': [asdict(req) for req in self.api_requests],
'relationships': [asdict(rel) for rel in self.relationships],
'export_timestamp': datetime.now(timezone.utc).isoformat()
}
with self.lock:
return {
'session_metadata': self.session_metadata.copy(),
'api_requests': [asdict(req) for req in self.api_requests],
'relationships': [asdict(rel) for rel in self.relationships],
'export_timestamp': datetime.now(timezone.utc).isoformat()
}
def get_forensic_summary(self) -> Dict[str, Any]:
"""
@@ -229,7 +280,13 @@ class ForensicLogger:
Dictionary containing summary statistics
"""
provider_stats = {}
for provider in self.session_metadata['providers_used']:
# Ensure providers_used is a set for iteration
providers_used = self.session_metadata['providers_used']
if isinstance(providers_used, list):
providers_used = set(providers_used)
for provider in providers_used:
provider_requests = [req for req in self.api_requests if req.provider == provider]
provider_relationships = [rel for rel in self.relationships if rel.provider == provider]

View File

@@ -101,6 +101,7 @@ class ProviderResult:
"""Get the total number of attributes in this result."""
return len(self.attributes)
def is_large_entity(self, threshold: int) -> bool:
"""Check if this result qualifies as a large entity based on relationship count."""
return self.get_relationship_count() > threshold
##TODO
#def is_large_entity(self, threshold: int) -> bool:
# """Check if this result qualifies as a large entity based on relationship count."""
# return self.get_relationship_count() > threshold
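    # --- Editor's sketch (illustrative, not part of this diff) ---
    # The intended call site for the (now commented-out) threshold check above;
    # the caller name and flow are assumptions.
    def should_group(result: 'ProviderResult', threshold: int) -> bool:
        # Oversized results are collapsed into a single "large entity" node instead
        # of flooding the graph with individual relationships.
        return result.get_relationship_count() > threshold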

File diff suppressed because it is too large.

View File

@@ -6,13 +6,15 @@ import uuid
import redis
import pickle
from typing import Dict, Optional, Any
import copy
from core.scanner import Scanner
from config import config
class SessionManager:
"""
Manages multiple scanner instances for concurrent user sessions using Redis.
FIXED: Manages multiple scanner instances for concurrent user sessions using Redis.
Enhanced to properly maintain WebSocket connections throughout scan lifecycle.
"""
def __init__(self, session_timeout_minutes: int = 0):
@@ -24,7 +26,13 @@ class SessionManager:
self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
self.session_timeout = session_timeout_minutes * 60 # Convert to seconds
self.lock = threading.Lock() # Lock for local operations, Redis handles atomic ops
self.lock = threading.Lock()
# FIXED: Add a creation lock to prevent race conditions
self.creation_lock = threading.Lock()
# Track active socketio connections per session
self.active_socketio_connections = {}
# Start cleanup thread
self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
@@ -36,7 +44,7 @@ class SessionManager:
"""Prepare SessionManager for pickling."""
state = self.__dict__.copy()
# Exclude unpickleable attributes - Redis client and threading objects
unpicklable_attrs = ['lock', 'cleanup_thread', 'redis_client']
unpicklable_attrs = ['lock', 'cleanup_thread', 'redis_client', 'creation_lock', 'active_socketio_connections']
for attr in unpicklable_attrs:
if attr in state:
del state[attr]
@@ -46,9 +54,10 @@ class SessionManager:
"""Restore SessionManager after unpickling."""
self.__dict__.update(state)
# Re-initialize unpickleable attributes
import redis
self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
self.lock = threading.Lock()
self.creation_lock = threading.Lock()
self.active_socketio_connections = {}
self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
self.cleanup_thread.start()
@@ -60,46 +69,111 @@ class SessionManager:
"""Generates the Redis key for a session's stop signal."""
return f"dnsrecon:stop:{session_id}"
def create_session(self) -> str:
def register_socketio_connection(self, session_id: str, socketio) -> None:
"""
Create a new user session and store it in Redis.
FIXED: Register a socketio connection for a session.
This ensures the connection is maintained throughout the session lifecycle.
"""
session_id = str(uuid.uuid4())
print(f"=== CREATING SESSION {session_id} IN REDIS ===")
with self.lock:
self.active_socketio_connections[session_id] = socketio
print(f"Registered socketio connection for session {session_id}")
def get_socketio_connection(self, session_id: str):
"""
FIXED: Get the active socketio connection for a session.
"""
with self.lock:
return self.active_socketio_connections.get(session_id)
def _prepare_scanner_for_storage(self, scanner: Scanner, session_id: str) -> Scanner:
"""
FIXED: Prepare scanner for storage by ensuring proper cleanup of unpicklable objects.
Now preserves socketio connection info for restoration.
"""
# Set the session ID on the scanner for cross-process stop signal management
scanner.session_id = session_id
try:
from core.session_config import create_session_config
session_config = create_session_config()
scanner_instance = Scanner(session_config=session_config)
        # Clear socketio before pickling; the live connection is kept in the
        # session manager's registry and restored when the scanner is loaded.
scanner.socketio = None
# Force cleanup of any threading objects that might cause issues
if hasattr(scanner, 'stop_event'):
scanner.stop_event = None
if hasattr(scanner, 'scan_thread'):
scanner.scan_thread = None
if hasattr(scanner, 'executor'):
scanner.executor = None
if hasattr(scanner, 'status_logger_thread'):
scanner.status_logger_thread = None
if hasattr(scanner, 'status_logger_stop_event'):
scanner.status_logger_stop_event = None
return scanner
def create_session(self, socketio=None) -> str:
"""
FIXED: Create a new user session with enhanced WebSocket management.
"""
# FIXED: Use creation lock to prevent race conditions
with self.creation_lock:
session_id = str(uuid.uuid4())
print(f"=== CREATING SESSION {session_id} IN REDIS ===")
# Set the session ID on the scanner for cross-process stop signal management
scanner_instance.session_id = session_id
# FIXED: Register socketio connection first
if socketio:
self.register_socketio_connection(session_id, socketio)
session_data = {
'scanner': scanner_instance,
'config': session_config,
'created_at': time.time(),
'last_activity': time.time(),
'status': 'active'
}
# Serialize the entire session data dictionary using pickle
serialized_data = pickle.dumps(session_data)
# Store in Redis
session_key = self._get_session_key(session_id)
self.redis_client.setex(session_key, self.session_timeout, serialized_data)
# Initialize stop signal as False
stop_key = self._get_stop_signal_key(session_id)
self.redis_client.setex(stop_key, self.session_timeout, b'0')
print(f"Session {session_id} stored in Redis with stop signal initialized")
return session_id
except Exception as e:
print(f"ERROR: Failed to create session {session_id}: {e}")
raise
try:
from core.session_config import create_session_config
session_config = create_session_config()
# Create scanner WITHOUT socketio to avoid weakref issues
scanner_instance = Scanner(session_config=session_config, socketio=None)
# Prepare scanner for storage (removes problematic objects)
scanner_instance = self._prepare_scanner_for_storage(scanner_instance, session_id)
session_data = {
'scanner': scanner_instance,
'config': session_config,
'created_at': time.time(),
'last_activity': time.time(),
'status': 'active'
}
# Test serialization before storing to catch issues early
try:
test_serialization = pickle.dumps(session_data)
print(f"Session serialization test successful ({len(test_serialization)} bytes)")
except Exception as pickle_error:
print(f"PICKLE TEST FAILED: {pickle_error}")
# Try to identify the problematic object
for key, value in session_data.items():
try:
pickle.dumps(value)
print(f" {key}: OK")
except Exception as item_error:
print(f" {key}: FAILED - {item_error}")
raise pickle_error
# Store in Redis
session_key = self._get_session_key(session_id)
self.redis_client.setex(session_key, self.session_timeout, test_serialization)
# Initialize stop signal as False
stop_key = self._get_stop_signal_key(session_id)
self.redis_client.setex(stop_key, self.session_timeout, b'0')
print(f"Session {session_id} stored in Redis with stop signal initialized")
print(f"Session has {len(scanner_instance.providers)} providers: {[p.get_name() for p in scanner_instance.providers]}")
return session_id
except Exception as e:
print(f"ERROR: Failed to create session {session_id}: {e}")
import traceback
traceback.print_exc()
raise
def set_stop_signal(self, session_id: str) -> bool:
"""
@@ -168,31 +242,63 @@ class SessionManager:
# Ensure the scanner has the correct session ID for stop signal checking
if 'scanner' in session_data and session_data['scanner']:
session_data['scanner'].session_id = session_id
# FIXED: Restore socketio connection from our registry
socketio_conn = self.get_socketio_connection(session_id)
if socketio_conn:
session_data['scanner'].socketio = socketio_conn
print(f"Restored socketio connection for session {session_id}")
else:
print(f"No socketio connection found for session {session_id}")
session_data['scanner'].socketio = None
return session_data
return None
except Exception as e:
print(f"ERROR: Failed to get session data for {session_id}: {e}")
import traceback
traceback.print_exc()
return None
def _save_session_data(self, session_id: str, session_data: Dict[str, Any]) -> bool:
"""
Serializes and saves session data back to Redis with updated TTL.
FIXED: Now preserves socketio connection during storage.
Returns:
bool: True if save was successful
"""
try:
session_key = self._get_session_key(session_id)
serialized_data = pickle.dumps(session_data)
# Create a deep copy to avoid modifying the original scanner object
session_data_to_save = copy.deepcopy(session_data)
# Prepare scanner for storage if it exists
if 'scanner' in session_data_to_save and session_data_to_save['scanner']:
# FIXED: Preserve the original socketio connection before preparing for storage
original_socketio = session_data_to_save['scanner'].socketio
session_data_to_save['scanner'] = self._prepare_scanner_for_storage(
session_data_to_save['scanner'],
session_id
)
# FIXED: If we had a socketio connection, make sure it's registered
if original_socketio and session_id not in self.active_socketio_connections:
self.register_socketio_connection(session_id, original_socketio)
serialized_data = pickle.dumps(session_data_to_save)
result = self.redis_client.setex(session_key, self.session_timeout, serialized_data)
return result
except Exception as e:
print(f"ERROR: Failed to save session data for {session_id}: {e}")
import traceback
traceback.print_exc()
return False
def update_session_scanner(self, session_id: str, scanner: 'Scanner') -> bool:
"""
Updates just the scanner object in a session with immediate persistence.
FIXED: Updates just the scanner object in a session with immediate persistence.
Now maintains socketio connection throughout the update process.
Returns:
bool: True if update was successful
@@ -200,15 +306,28 @@ class SessionManager:
try:
session_data = self._get_session_data(session_id)
if session_data:
# Ensure scanner has the session ID
scanner.session_id = session_id
# FIXED: Preserve socketio connection before preparing for storage
original_socketio = scanner.socketio
# Prepare scanner for storage
scanner = self._prepare_scanner_for_storage(scanner, session_id)
session_data['scanner'] = scanner
session_data['last_activity'] = time.time()
# FIXED: Restore socketio connection after preparation
if original_socketio:
self.register_socketio_connection(session_id, original_socketio)
session_data['scanner'].socketio = original_socketio
# Immediately save to Redis for GUI updates
success = self._save_session_data(session_id, session_data)
if success:
print(f"Scanner state updated for session {session_id} (status: {scanner.status})")
# Only log occasionally to reduce noise
if hasattr(self, '_last_update_log'):
if time.time() - self._last_update_log > 5: # Log every 5 seconds max
self._last_update_log = time.time()
else:
self._last_update_log = time.time()
else:
print(f"WARNING: Failed to save scanner state for session {session_id}")
return success
@@ -217,6 +336,8 @@ class SessionManager:
return False
except Exception as e:
print(f"ERROR: Failed to update scanner for session {session_id}: {e}")
import traceback
traceback.print_exc()
return False
def update_scanner_status(self, session_id: str, status: str) -> bool:
@@ -249,7 +370,7 @@ class SessionManager:
def get_session(self, session_id: str) -> Optional[Scanner]:
"""
Get scanner instance for a session from Redis with session ID management.
FIXED: Get scanner instance for a session from Redis with proper socketio restoration.
"""
if not session_id:
return None
@@ -267,6 +388,15 @@ class SessionManager:
if scanner:
# Ensure the scanner can check the Redis-based stop signal
scanner.session_id = session_id
# FIXED: Restore socketio connection from our registry
socketio_conn = self.get_socketio_connection(session_id)
if socketio_conn:
scanner.socketio = socketio_conn
print(f"✓ Restored socketio connection for session {session_id}")
else:
scanner.socketio = None
print(f"⚠️ No socketio connection found for session {session_id}")
return scanner
@@ -319,6 +449,12 @@ class SessionManager:
# Wait a moment for graceful shutdown
time.sleep(0.5)
# FIXED: Clean up socketio connection
with self.lock:
if session_id in self.active_socketio_connections:
del self.active_socketio_connections[session_id]
print(f"Cleaned up socketio connection for session {session_id}")
# Delete session data and stop signal from Redis
session_key = self._get_session_key(session_id)
stop_key = self._get_stop_signal_key(session_id)
@@ -330,6 +466,8 @@ class SessionManager:
except Exception as e:
print(f"ERROR: Failed to terminate session {session_id}: {e}")
import traceback
traceback.print_exc()
return False
def _cleanup_loop(self) -> None:
@@ -350,6 +488,12 @@ class SessionManager:
self.redis_client.delete(stop_key)
print(f"Cleaned up orphaned stop signal for session {session_id}")
# Also clean up socketio connection
with self.lock:
if session_id in self.active_socketio_connections:
del self.active_socketio_connections[session_id]
print(f"Cleaned up orphaned socketio for session {session_id}")
except Exception as e:
print(f"Error in cleanup loop: {e}")
@@ -373,14 +517,16 @@ class SessionManager:
return {
'total_active_sessions': active_sessions,
'running_scans': running_scans,
'total_stop_signals': len(stop_keys)
'total_stop_signals': len(stop_keys),
'active_socketio_connections': len(self.active_socketio_connections)
}
except Exception as e:
print(f"ERROR: Failed to get statistics: {e}")
return {
'total_active_sessions': 0,
'running_scans': 0,
'total_stop_signals': 0
'total_stop_signals': 0,
'active_socketio_connections': 0
}
# Global session manager instance

View File

@@ -7,14 +7,16 @@ from .base_provider import BaseProvider
from .crtsh_provider import CrtShProvider
from .dns_provider import DNSProvider
from .shodan_provider import ShodanProvider
from .correlation_provider import CorrelationProvider
from core.rate_limiter import GlobalRateLimiter
__all__ = [
'BaseProvider',
'GlobalRateLimiter',
'GlobalRateLimiter',
'CrtShProvider',
'DNSProvider',
'ShodanProvider'
'ShodanProvider',
'CorrelationProvider'
]
__version__ = "0.0.0-rc"

View File

@@ -15,6 +15,7 @@ class BaseProvider(ABC):
"""
Abstract base class for all DNSRecon data providers.
Now supports session-specific configuration and returns standardized ProviderResult objects.
FIXED: Enhanced pickle support to prevent weakref serialization errors.
"""
def __init__(self, name: str, rate_limit: int = 60, timeout: int = 30, session_config=None):
@@ -53,22 +54,57 @@ class BaseProvider(ABC):
def __getstate__(self):
"""Prepare BaseProvider for pickling by excluding unpicklable objects."""
state = self.__dict__.copy()
# Exclude the unpickleable '_local' attribute and stop event
unpicklable_attrs = ['_local', '_stop_event']
# Exclude unpickleable attributes that may contain weakrefs
unpicklable_attrs = [
'_local', # Thread-local storage (contains requests.Session)
'_stop_event', # Threading event
'logger', # Logger may contain weakrefs in handlers
]
for attr in unpicklable_attrs:
if attr in state:
del state[attr]
# Also handle any potential weakrefs in the config object
if 'config' in state and hasattr(state['config'], '__getstate__'):
# If config has its own pickle support, let it handle itself
pass
elif 'config' in state:
# Otherwise, ensure config doesn't contain unpicklable objects
try:
# Test if config can be pickled
import pickle
pickle.dumps(state['config'])
except (TypeError, AttributeError):
# If config can't be pickled, we'll recreate it during unpickling
state['_config_class'] = type(state['config']).__name__
del state['config']
return state
def __setstate__(self, state):
"""Restore BaseProvider after unpickling by reconstructing threading objects."""
self.__dict__.update(state)
# Re-initialize the '_local' attribute and stop event
# Re-initialize unpickleable attributes
self._local = threading.local()
self._stop_event = None
self.logger = get_forensic_logger()
# Recreate config if it was removed during pickling
if not hasattr(self, 'config') and hasattr(self, '_config_class'):
if self._config_class == 'Config':
from config import config as global_config
self.config = global_config
elif self._config_class == 'SessionConfig':
from core.session_config import create_session_config
self.config = create_session_config()
del self._config_class
@property
def session(self):
"""Get or create thread-local requests session."""
if not hasattr(self._local, 'session'):
self._local.session = requests.Session()
self._local.session.headers.update({
@@ -133,6 +169,8 @@ class BaseProvider(ABC):
target_indicator: str = "") -> Optional[requests.Response]:
"""
Make a rate-limited HTTP request.
FIXED: Returns response without automatically raising HTTPError exceptions.
Individual providers should handle status codes appropriately.
"""
if self._is_stop_requested():
print(f"Request cancelled before start: {url}")
@@ -169,8 +207,14 @@ class BaseProvider(ABC):
raise ValueError(f"Unsupported HTTP method: {method}")
print(f"Response status: {response.status_code}")
response.raise_for_status()
self.successful_requests += 1
# FIXED: Don't automatically raise for HTTP error status codes
# Let individual providers handle status codes appropriately
# Only count 2xx responses as successful
if 200 <= response.status_code < 300:
self.successful_requests += 1
else:
self.failed_requests += 1
duration_ms = (time.time() - start_time) * 1000
self.logger.log_api_request(

View File

@@ -0,0 +1,220 @@
# dnsrecon/providers/correlation_provider.py
import re
from typing import Dict, Any, List
from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from core.graph_manager import NodeType, GraphManager
class CorrelationProvider(BaseProvider):
"""
A provider that finds correlations between nodes in the graph.
FIXED: Enhanced pickle support to prevent weakref issues with graph references.
"""
def __init__(self, name: str = "correlation", session_config=None):
"""
Initialize the correlation provider.
"""
super().__init__(name, session_config=session_config)
self.graph: GraphManager | None = None
self.correlation_index = {}
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
self.EXCLUDED_KEYS = [
'cert_source',
'cert_issuer_ca_id',
'cert_common_name',
'cert_validity_period_days',
'cert_issuer_name',
'cert_serial_number',
'cert_entry_timestamp',
'cert_not_before',
'cert_not_after',
'dns_ttl',
'timestamp',
'last_update',
'updated_timestamp',
'discovery_timestamp',
'query_timestamp',
]
def __getstate__(self):
"""
FIXED: Prepare CorrelationProvider for pickling by excluding graph reference.
"""
state = super().__getstate__()
# Remove graph reference to prevent circular dependencies and weakrefs
if 'graph' in state:
del state['graph']
# Also handle correlation_index which might contain complex objects
if 'correlation_index' in state:
# Clear correlation index as it will be rebuilt when needed
state['correlation_index'] = {}
return state
def __setstate__(self, state):
"""
FIXED: Restore CorrelationProvider after unpickling.
"""
super().__setstate__(state)
# Re-initialize graph reference (will be set by scanner)
self.graph = None
# Re-initialize correlation index
self.correlation_index = {}
# Re-compile regex pattern
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
def get_name(self) -> str:
"""Return the provider name."""
return "correlation"
def get_display_name(self) -> str:
"""Return the provider display name for the UI."""
return "Correlation Engine"
def requires_api_key(self) -> bool:
"""Return True if the provider requires an API key."""
return False
def get_eligibility(self) -> Dict[str, bool]:
"""Return a dictionary indicating if the provider can query domains and/or IPs."""
return {'domains': True, 'ips': True}
def is_available(self) -> bool:
"""Check if the provider is available and properly configured."""
return True
def query_domain(self, domain: str) -> ProviderResult:
"""
Query the provider for information about a domain.
"""
return self._find_correlations(domain)
def query_ip(self, ip: str) -> ProviderResult:
"""
Query the provider for information about an IP address.
"""
return self._find_correlations(ip)
def set_graph_manager(self, graph_manager: GraphManager):
"""
Set the graph manager for the provider to use.
"""
self.graph = graph_manager
def _find_correlations(self, node_id: str) -> ProviderResult:
"""
Find correlations for a given node.
FIXED: Added safety checks to prevent issues when graph is None.
"""
result = ProviderResult()
# FIXED: Ensure self.graph is not None before proceeding
if not self.graph or not self.graph.graph.has_node(node_id):
return result
try:
node_attributes = self.graph.graph.nodes[node_id].get('attributes', [])
except Exception as e:
# If there's any issue accessing the graph, return empty result
print(f"Warning: Could not access graph for correlation analysis: {e}")
return result
for attr in node_attributes:
attr_name = attr.get('name')
attr_value = attr.get('value')
attr_provider = attr.get('provider', 'unknown')
should_exclude = (
any(excluded_key in attr_name or attr_name == excluded_key for excluded_key in self.EXCLUDED_KEYS) or
not isinstance(attr_value, (str, int, float, bool)) or
attr_value is None or
isinstance(attr_value, bool) or
(isinstance(attr_value, str) and (
len(attr_value) < 4 or
self.date_pattern.match(attr_value) or
attr_value.lower() in ['unknown', 'none', 'null', 'n/a', 'true', 'false', '0', '1']
)) or
(isinstance(attr_value, (int, float)) and (
attr_value == 0 or
attr_value == 1 or
abs(attr_value) > 1000000
))
)
if should_exclude:
continue
if attr_value not in self.correlation_index:
self.correlation_index[attr_value] = {
'nodes': set(),
'sources': []
}
self.correlation_index[attr_value]['nodes'].add(node_id)
source_info = {
'node_id': node_id,
'provider': attr_provider,
'attribute': attr_name,
'path': f"{attr_provider}_{attr_name}"
}
existing_sources = [s for s in self.correlation_index[attr_value]['sources']
if s['node_id'] == node_id and s['path'] == source_info['path']]
if not existing_sources:
self.correlation_index[attr_value]['sources'].append(source_info)
if len(self.correlation_index[attr_value]['nodes']) > 1:
self._create_correlation_relationships(attr_value, self.correlation_index[attr_value], result)
return result
def _create_correlation_relationships(self, value: Any, correlation_data: Dict[str, Any], result: ProviderResult):
"""
Create correlation relationships and add them to the provider result.
"""
correlation_node_id = f"corr_{hash(str(value)) & 0x7FFFFFFF}"
nodes = correlation_data['nodes']
sources = correlation_data['sources']
# Add the correlation node as an attribute to the result
result.add_attribute(
target_node=correlation_node_id,
name="correlation_value",
value=value,
attr_type=str(type(value)),
provider=self.name,
confidence=0.9,
metadata={
'correlated_nodes': list(nodes),
'sources': sources,
}
)
for source in sources:
node_id = source['node_id']
provider = source['provider']
attribute = source['attribute']
relationship_label = f"corr_{provider}_{attribute}"
# Add the relationship to the result
result.add_relationship(
source_node=node_id,
target_node=correlation_node_id,
relationship_type=relationship_label,
provider=self.name,
confidence=0.9,
raw_data={
'correlation_value': value,
'original_attribute': attribute,
'correlation_type': 'attribute_matching'
}
)
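# --- Editor's sketch (illustrative, not part of this diff) ---
# The core of the correlation technique above, reduced to a standalone function:
# index attribute values to the nodes that carry them, then report any value seen
# on more than one node. Attribute values are assumed to be hashable scalars, as
# the provider's exclusion rules enforce.
from collections import defaultdict

def correlate(nodes: dict) -> dict:
    index = defaultdict(set)  # attribute value -> ids of nodes carrying it
    for node_id, attrs in nodes.items():
        for value in attrs.values():
            index[value].add(node_id)
    # Any value observed on more than one node is a correlation candidate.
    return {value: ids for value, ids in index.items() if len(ids) > 1}

# correlate({'a.com': {'asn': 'AS13335'}, 'b.com': {'asn': 'AS13335'}})
# -> {'AS13335': {'a.com', 'b.com'}}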

View File

@@ -3,7 +3,7 @@
import json
import re
from pathlib import Path
from typing import List, Dict, Any, Set
from typing import List, Dict, Any, Set, Optional
from urllib.parse import quote
from datetime import datetime, timezone
import requests
@@ -11,12 +11,14 @@ import requests
from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_domain
from core.logger import get_forensic_logger
class CrtShProvider(BaseProvider):
"""
Provider for querying crt.sh certificate transparency database.
Now returns standardized ProviderResult objects with caching support.
FIXED: Now properly creates domain and CA nodes instead of large entities.
Returns standardized ProviderResult objects with caching support.
"""
def __init__(self, name=None, session_config=None):
@@ -30,13 +32,13 @@ class CrtShProvider(BaseProvider):
self.base_url = "https://crt.sh/"
self._stop_event = None
# Initialize cache directory
self.cache_dir = Path('cache') / 'crtsh'
self.cache_dir.mkdir(parents=True, exist_ok=True)
# Initialize cache directory (separate from BaseProvider's HTTP cache)
self.domain_cache_dir = Path('cache') / 'crtsh'
self.domain_cache_dir.mkdir(parents=True, exist_ok=True)
# Compile regex for date filtering for efficiency
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
def get_name(self) -> str:
"""Return the provider name."""
return "crtsh"
@@ -60,7 +62,7 @@ class CrtShProvider(BaseProvider):
def _get_cache_file_path(self, domain: str) -> Path:
"""Generate cache file path for a domain."""
safe_domain = domain.replace('.', '_').replace('/', '_').replace('\\', '_')
return self.cache_dir / f"{safe_domain}.json"
return self.domain_cache_dir / f"{safe_domain}.json"
def _get_cache_status(self, cache_file_path: Path) -> str:
"""
@@ -93,7 +95,8 @@ class CrtShProvider(BaseProvider):
def query_domain(self, domain: str) -> ProviderResult:
"""
Query crt.sh for certificates containing the domain with caching support.
FIXED: Query crt.sh for certificates containing the domain.
Now properly creates domain and CA nodes instead of large entities.
Args:
domain: Domain to investigate
@@ -110,43 +113,44 @@ class CrtShProvider(BaseProvider):
cache_file = self._get_cache_file_path(domain)
cache_status = self._get_cache_status(cache_file)
processed_certificates = []
result = ProviderResult()
try:
if cache_status == "fresh":
result = self._load_from_cache(cache_file)
self.logger.logger.info(f"Using cached crt.sh data for {domain}")
if cache_status == "fresh":
result = self._load_from_cache(cache_file)
self.logger.logger.info(f"Using fresh cached crt.sh data for {domain}")
else: # "stale" or "not_found"
# Query the API for the latest certificates
new_raw_certs = self._query_crtsh_api(domain)
else: # "stale" or "not_found"
raw_certificates = self._query_crtsh_api(domain)
if self._stop_event and self._stop_event.is_set():
return ProviderResult()
# Combine with old data if cache is stale
if cache_status == "stale":
old_raw_certs = self._load_raw_data_from_cache(cache_file)
combined_certs = old_raw_certs + new_raw_certs
if self._stop_event and self._stop_event.is_set():
return ProviderResult()
# Deduplicate the combined list
seen_ids = set()
unique_certs = []
for cert in combined_certs:
cert_id = cert.get('id')
if cert_id not in seen_ids:
unique_certs.append(cert)
seen_ids.add(cert_id)
raw_certificates_to_process = unique_certs
self.logger.logger.info(f"Refreshed and merged cache for {domain}. Total unique certs: {len(raw_certificates_to_process)}")
else: # "not_found"
raw_certificates_to_process = new_raw_certs
# FIXED: Process certificates to create proper domain and CA nodes
result = self._process_certificates_to_result_fixed(domain, raw_certificates_to_process)
self.logger.logger.info(f"Created fresh result for {domain} ({result.get_relationship_count()} relationships)")
# Process raw data into the application's expected format
current_processed_certs = [self._extract_certificate_metadata(cert) for cert in raw_certificates]
if cache_status == "stale":
# Load existing and append new processed certs
existing_result = self._load_from_cache(cache_file)
result = self._merge_results(existing_result, current_processed_certs, domain)
self.logger.logger.info(f"Refreshed and merged cache for {domain}")
else: # "not_found"
# Create new result from processed certs
result = self._process_certificates_to_result(domain, raw_certificates)
self.logger.logger.info(f"Created fresh result for {domain} ({result.get_relationship_count()} relationships)")
# Save the result to cache
self._save_result_to_cache(cache_file, result, domain)
except requests.exceptions.RequestException as e:
self.logger.logger.error(f"API query failed for {domain}: {e}")
if cache_status != "not_found":
result = self._load_from_cache(cache_file)
self.logger.logger.warning(f"Using stale cache for {domain} due to API failure.")
else:
raise e # Re-raise if there's no cache to fall back on
# Save the new result and the raw data to the cache
self._save_result_to_cache(cache_file, result, raw_certificates_to_process, domain)
return result
@@ -200,12 +204,22 @@ class CrtShProvider(BaseProvider):
self.logger.logger.error(f"Failed to load cached certificates from {cache_file_path}: {e}")
return ProviderResult()
def _save_result_to_cache(self, cache_file_path: Path, result: ProviderResult, domain: str) -> None:
"""Save processed crt.sh result to a cache file."""
def _load_raw_data_from_cache(self, cache_file_path: Path) -> List[Dict[str, Any]]:
"""Load only the raw certificate data from a cache file."""
try:
with open(cache_file_path, 'r') as f:
cache_content = json.load(f)
return cache_content.get("raw_certificates", [])
except (json.JSONDecodeError, FileNotFoundError):
return []
def _save_result_to_cache(self, cache_file_path: Path, result: ProviderResult, raw_certificates: List[Dict[str, Any]], domain: str) -> None:
"""Save processed crt.sh result and raw data to a cache file."""
try:
cache_data = {
"domain": domain,
"last_upstream_query": datetime.now(timezone.utc).isoformat(),
"raw_certificates": raw_certificates, # Store the raw data for deduplication
"relationships": [
{
"source_node": rel.source_node,
@@ -234,25 +248,6 @@ class CrtShProvider(BaseProvider):
except Exception as e:
self.logger.logger.warning(f"Failed to save cache file for {domain}: {e}")
def _merge_results(self, existing_result: ProviderResult, new_certificates: List[Dict[str, Any]], domain: str) -> ProviderResult:
"""Merge new certificate data with existing cached result."""
# Create a fresh result from the new certificates
new_result = self._process_certificates_to_result(domain, new_certificates)
# Simple merge strategy: combine all relationships and attributes
# In practice, you might want more sophisticated deduplication
merged_result = ProviderResult()
# Add existing relationships and attributes
merged_result.relationships.extend(existing_result.relationships)
merged_result.attributes.extend(existing_result.attributes)
# Add new relationships and attributes
merged_result.relationships.extend(new_result.relationships)
merged_result.attributes.extend(new_result.attributes)
return merged_result
def _query_crtsh_api(self, domain: str) -> List[Dict[str, Any]]:
"""Query crt.sh API for raw certificate data."""
url = f"{self.base_url}?q={quote(domain)}&output=json"
@@ -261,37 +256,73 @@ class CrtShProvider(BaseProvider):
if not response or response.status_code != 200:
raise requests.exceptions.RequestException(f"crt.sh API returned status {response.status_code if response else 'None'}")
certificates = response.json()
try:
certificates = response.json()
except json.JSONDecodeError:
self.logger.logger.error(f"crt.sh returned invalid JSON for {domain}")
return []
if not certificates:
return []
return certificates
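    # --- Editor's sketch (illustrative, not part of this diff) ---
    # The upstream call the method above wraps: crt.sh's JSON endpoint returns one
    # object per certificate log entry for the queried pattern.
    import requests
    from urllib.parse import quote

    def fetch_crtsh(domain: str, timeout: int = 30) -> list:
        url = f"https://crt.sh/?q={quote(domain)}&output=json"
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        # Fields include: id, issuer_name, common_name, name_value,
        # entry_timestamp, not_before, not_after, serial_number.
        return resp.json()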
def _process_certificates_to_result(self, domain: str, certificates: List[Dict[str, Any]]) -> ProviderResult:
def _process_certificates_to_result_fixed(self, query_domain: str, certificates: List[Dict[str, Any]]) -> ProviderResult:
"""
Process certificates to create ProviderResult with relationships and attributes.
FIXED: Process certificates to create proper domain and CA nodes.
Now creates individual domain nodes instead of large entities.
"""
result = ProviderResult()
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh processing cancelled before processing for domain: {domain}")
self.logger.logger.info(f"CrtSh processing cancelled before processing for domain: {query_domain}")
return result
incompleteness_warning = self._check_for_incomplete_data(query_domain, certificates)
if incompleteness_warning:
result.add_attribute(
target_node=query_domain,
name="crtsh_data_warning",
value=incompleteness_warning,
attr_type='metadata',
provider=self.name,
confidence=1.0
)
all_discovered_domains = set()
processed_issuers = set()
for i, cert_data in enumerate(certificates):
if i % 5 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh processing cancelled at certificate {i} for domain: {domain}")
if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
self.logger.logger.info(f"CrtSh processing cancelled at certificate {i} for domain: {query_domain}")
break
# Extract all domains from this certificate
cert_domains = self._extract_domains_from_certificate(cert_data)
all_discovered_domains.update(cert_domains)
# FIXED: Create CA nodes for certificate issuers (not as domain metadata)
issuer_name = self._parse_issuer_organization(cert_data.get('issuer_name', ''))
if issuer_name and issuer_name not in processed_issuers:
# Create relationship from query domain to CA
result.add_relationship(
source_node=query_domain,
target_node=issuer_name,
relationship_type='crtsh_cert_issuer',
provider=self.name,
confidence=0.95,
raw_data={'issuer_dn': cert_data.get('issuer_name', '')}
)
processed_issuers.add(issuer_name)
# Add certificate metadata to each domain in this certificate
cert_metadata = self._extract_certificate_metadata(cert_data)
for cert_domain in cert_domains:
if not _is_valid_domain(cert_domain):
continue
for key, value in self._extract_certificate_metadata(cert_data).items():
# Add certificate attributes to the domain
for key, value in cert_metadata.items():
if value is not None:
result.add_attribute(
target_node=cert_domain,
@@ -299,48 +330,72 @@ class CrtShProvider(BaseProvider):
value=value,
attr_type='certificate_data',
provider=self.name,
confidence=0.9
confidence=0.9,
metadata={'certificate_id': cert_data.get('id')}
)
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled before relationship creation for domain: {domain}")
self.logger.logger.info(f"CrtSh query cancelled before relationship creation for domain: {query_domain}")
return result
for i, discovered_domain in enumerate(all_discovered_domains):
if discovered_domain == domain:
# FIXED: Create selective relationships to avoid large entities
# Only create relationships to domains that are closely related
for discovered_domain in all_discovered_domains:
if discovered_domain == query_domain:
continue
if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh relationship creation cancelled for domain: {domain}")
break
if not _is_valid_domain(discovered_domain):
continue
confidence = self._calculate_domain_relationship_confidence(
domain, discovered_domain, [], all_discovered_domains
)
# FIXED: Only create relationships for domains that share a meaningful connection
# This prevents creating too many relationships that trigger large entity creation
if self._should_create_relationship(query_domain, discovered_domain):
confidence = self._calculate_domain_relationship_confidence(
query_domain, discovered_domain, [], all_discovered_domains
)
result.add_relationship(
source_node=domain,
target_node=discovered_domain,
relationship_type='san_certificate',
provider=self.name,
confidence=confidence,
raw_data={'relationship_type': 'certificate_discovery'}
)
result.add_relationship(
source_node=query_domain,
target_node=discovered_domain,
relationship_type='crtsh_san_certificate',
provider=self.name,
confidence=confidence,
raw_data={'relationship_type': 'certificate_discovery'}
)
self.log_relationship_discovery(
source_node=domain,
target_node=discovered_domain,
relationship_type='san_certificate',
confidence_score=confidence,
raw_data={'relationship_type': 'certificate_discovery'},
discovery_method="certificate_transparency_analysis"
)
self.log_relationship_discovery(
source_node=query_domain,
target_node=discovered_domain,
relationship_type='crtsh_san_certificate',
confidence_score=confidence,
raw_data={'relationship_type': 'certificate_discovery'},
discovery_method="certificate_transparency_analysis"
)
self.logger.logger.info(f"CrtSh processing completed for {query_domain}: {len(all_discovered_domains)} domains, {result.get_relationship_count()} relationships")
return result
def _should_create_relationship(self, source_domain: str, target_domain: str) -> bool:
"""
FIXED: Determine if a relationship should be created between two domains.
This helps avoid creating too many relationships that trigger large entity creation.
"""
# Always create relationships for subdomains
if target_domain.endswith(f'.{source_domain}') or source_domain.endswith(f'.{target_domain}'):
return True
# Create relationships for domains that share a common parent (up to 2 levels)
source_parts = source_domain.split('.')
target_parts = target_domain.split('.')
# Check if they share the same root domain (last 2 parts)
if len(source_parts) >= 2 and len(target_parts) >= 2:
source_root = '.'.join(source_parts[-2:])
target_root = '.'.join(target_parts[-2:])
return source_root == target_root
return False
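    # --- Editor's note (illustrative, not part of this diff) ---
    # The "same root = last two labels" heuristic above misfires on multi-label
    # public suffixes (e.g. every *.co.uk pair would match). A public-suffix-aware
    # variant, assuming the third-party tldextract package (not used in this
    # codebase):
    import tldextract

    def same_registered_domain(a: str, b: str) -> bool:
        ea, eb = tldextract.extract(a), tldextract.extract(b)
        # registered_domain is e.g. 'example.co.uk' for 'www.example.co.uk'
        return bool(ea.registered_domain) and ea.registered_domain == eb.registered_domain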
def _extract_certificate_metadata(self, cert_data: Dict[str, Any]) -> Dict[str, Any]:
"""Extract comprehensive metadata from certificate data."""
raw_issuer_name = cert_data.get('issuer_name', '')
@@ -355,7 +410,7 @@ class CrtShProvider(BaseProvider):
'not_before': cert_data.get('not_before'),
'not_after': cert_data.get('not_after'),
'entry_timestamp': cert_data.get('entry_timestamp'),
'source': 'crt.sh'
'source': 'crtsh'
}
try:
@@ -367,8 +422,9 @@ class CrtShProvider(BaseProvider):
metadata['is_currently_valid'] = self._is_cert_valid(cert_data)
metadata['expires_soon'] = (not_after - datetime.now(timezone.utc)).days <= 30
metadata['not_before'] = not_before.strftime('%Y-%m-%d %H:%M:%S UTC')
metadata['not_after'] = not_after.strftime('%Y-%m-%d %H:%M:%S UTC')
# Keep raw date format or convert to standard format
metadata['not_before'] = not_before.isoformat()
metadata['not_after'] = not_after.isoformat()
except Exception as e:
self.logger.logger.debug(f"Error computing certificate metadata: {e}")
@@ -404,6 +460,8 @@ class CrtShProvider(BaseProvider):
raise ValueError("Empty date string")
try:
if isinstance(date_string, datetime):
return date_string.replace(tzinfo=timezone.utc)
if date_string.endswith('Z'):
return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
elif '+' in date_string or date_string.endswith('UTC'):
@@ -440,7 +498,6 @@ class CrtShProvider(BaseProvider):
return is_not_expired
except Exception as e:
self.logger.logger.debug(f"Certificate validity check failed: {e}")
return False
def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
@@ -495,90 +552,6 @@ class CrtShProvider(BaseProvider):
return [d for d in final_domains if _is_valid_domain(d)]
def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Find certificates that are shared between two domain certificate lists."""
shared = []
cert1_ids = set()
for cert in certs1:
cert_id = cert.get('certificate_id')
if cert_id and isinstance(cert_id, (int, str, float, bool, tuple)):
cert1_ids.add(cert_id)
for cert in certs2:
cert_id = cert.get('certificate_id')
if cert_id and isinstance(cert_id, (int, str, float, bool, tuple)):
if cert_id in cert1_ids:
shared.append(cert)
return shared
def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
"""Create a summary of certificates for a domain."""
if not certificates:
return {
'total_certificates': 0,
'valid_certificates': 0,
'expired_certificates': 0,
'expires_soon_count': 0,
'unique_issuers': [],
'latest_certificate': None,
'has_valid_cert': False,
'certificate_details': []
}
valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
expired_count = len(certificates) - valid_count
expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))
unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))
# Find the most recent certificate
latest_cert = None
latest_date = None
for cert in certificates:
try:
if cert.get('not_before'):
cert_date = self._parse_certificate_date(cert['not_before'])
if latest_date is None or cert_date > latest_date:
latest_date = cert_date
latest_cert = cert
except Exception:
continue
# Sort certificates by date for better display (newest first)
sorted_certificates = sorted(
certificates,
key=lambda c: self._get_certificate_sort_date(c),
reverse=True
)
return {
'total_certificates': len(certificates),
'valid_certificates': valid_count,
'expired_certificates': expired_count,
'expires_soon_count': expires_soon_count,
'unique_issuers': unique_issuers,
'latest_certificate': latest_cert,
'has_valid_cert': valid_count > 0,
'certificate_details': sorted_certificates
}
def _get_certificate_sort_date(self, cert: Dict[str, Any]) -> datetime:
"""Get a sortable date from certificate data for chronological ordering."""
try:
if cert.get('not_before'):
return self._parse_certificate_date(cert['not_before'])
if cert.get('entry_timestamp'):
return self._parse_certificate_date(cert['entry_timestamp'])
return datetime(1970, 1, 1, tzinfo=timezone.utc)
except Exception:
return datetime(1970, 1, 1, tzinfo=timezone.utc)
def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str,
shared_certificates: List[Dict[str, Any]],
all_discovered_domains: Set[str]) -> float:
@@ -597,35 +570,7 @@ class CrtShProvider(BaseProvider):
else:
context_bonus = 0.0
# Adjust confidence based on shared certificates
if shared_certificates:
shared_count = len(shared_certificates)
if shared_count >= 3:
shared_bonus = 0.1
elif shared_count >= 2:
shared_bonus = 0.05
else:
shared_bonus = 0.02
valid_shared = sum(1 for cert in shared_certificates if cert.get('is_currently_valid'))
if valid_shared > 0:
validity_bonus = 0.05
else:
validity_bonus = 0.0
else:
shared_bonus = 0.0
validity_bonus = 0.0
# Adjust confidence based on certificate issuer reputation
issuer_bonus = 0.0
if shared_certificates:
for cert in shared_certificates:
issuer = cert.get('issuer_name', '').lower()
if any(trusted_ca in issuer for trusted_ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
issuer_bonus = max(issuer_bonus, 0.03)
break
final_confidence = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
final_confidence = base_confidence + context_bonus
return max(0.1, min(1.0, final_confidence))
def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
@@ -637,4 +582,30 @@ class CrtShProvider(BaseProvider):
elif query_domain.endswith(f'.{cert_domain}'):
return 'parent_domain'
else:
return 'related_domain'
def _check_for_incomplete_data(self, domain: str, certificates: List[Dict[str, Any]]) -> Optional[str]:
"""
Analyzes the certificate list to heuristically detect if the data from crt.sh is incomplete.
"""
cert_count = len(certificates)
# Heuristic 1: Check if the number of certs hits a known hard limit.
if cert_count >= 10000:
return f"Result likely truncated; received {cert_count} certificates, which may be the maximum limit."
# Heuristic 2: Check if all returned certificates are old.
if cert_count > 1000: # Only apply this for a reasonable number of certs
latest_expiry = None
for cert in certificates:
try:
not_after = self._parse_certificate_date(cert.get('not_after'))
if latest_expiry is None or not_after > latest_expiry:
latest_expiry = not_after
except (ValueError, TypeError):
continue
if latest_expiry and (datetime.now(timezone.utc) - latest_expiry).days > 365:
return f"Incomplete data suspected: The latest certificate expired more than a year ago ({latest_expiry.strftime('%Y-%m-%d')})."
return None
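# Illustrative usage (not part of this diff): the scanner could surface this
# hint as a node attribute so the UI can flag potentially truncated results.
# 'crtsh_data_warning' is a hypothetical attribute name.
#
#     warning = self._check_for_incomplete_data(domain, certificates)
#     if warning:
#         result.add_attribute(target_node=domain, name='crtsh_data_warning',
#                              value=warning, attr_type='metadata',
#                              provider=self.name, confidence=1.0)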

View File

@@ -4,13 +4,14 @@ from dns import resolver, reversename
from typing import Dict
from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_ip, _is_valid_domain
from utils.helpers import _is_valid_ip, _is_valid_domain, get_ip_version
class DNSProvider(BaseProvider):
"""
Provider for standard DNS resolution and reverse DNS lookups.
Now returns standardized ProviderResult objects.
Now returns standardized ProviderResult objects with IPv4 and IPv6 support.
FIXED: Enhanced pickle support to prevent resolver serialization issues.
"""
def __init__(self, name=None, session_config=None):
@@ -27,6 +28,22 @@ class DNSProvider(BaseProvider):
self.resolver.timeout = 5
self.resolver.lifetime = 10
def __getstate__(self):
"""Prepare the object for pickling by excluding resolver."""
state = super().__getstate__()
# Remove the unpickleable 'resolver' attribute
if 'resolver' in state:
del state['resolver']
return state
def __setstate__(self, state):
"""Restore the object after unpickling by reconstructing resolver."""
super().__setstate__(state)
# Re-initialize the 'resolver' attribute
self.resolver = resolver.Resolver()
self.resolver.timeout = 5
self.resolver.lifetime = 10
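# Illustrative check (not part of this diff): excluding the resolver in
# __getstate__ and rebuilding it in __setstate__ lets provider instances
# survive a pickle round-trip, which process-based scheduling relies on:
#
#     import pickle
#     clone = pickle.loads(pickle.dumps(DNSProvider()))
#     assert clone.resolver is not None  # reconstructed in __setstate__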
def get_name(self) -> str:
"""Return the provider name."""
return "dns"
@@ -50,6 +67,7 @@ class DNSProvider(BaseProvider):
def query_domain(self, domain: str) -> ProviderResult:
"""
Query DNS records for the domain to discover relationships and attributes.
FIXED: Now creates separate attributes for each DNS record type.
Args:
domain: Domain to investigate
@@ -62,13 +80,13 @@ class DNSProvider(BaseProvider):
result = ProviderResult()
# Query all record types
# Query all record types - each gets its own attribute
for record_type in ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SOA', 'TXT', 'SRV', 'CAA']:
try:
self._query_record(domain, record_type, result)
except resolver.NoAnswer:
#except resolver.NoAnswer:
# This is not an error, just a confirmation that the record doesn't exist.
self.logger.logger.debug(f"No {record_type} record found for {domain}")
#self.logger.logger.debug(f"No {record_type} record found for {domain}")
except Exception as e:
self.failed_requests += 1
self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
@@ -77,10 +95,10 @@ class DNSProvider(BaseProvider):
def query_ip(self, ip: str) -> ProviderResult:
"""
Query reverse DNS for the IP address.
Query reverse DNS for the IP address (supports both IPv4 and IPv6).
Args:
ip: IP address to investigate
ip: IP address to investigate (IPv4 or IPv6)
Returns:
ProviderResult containing discovered relationships and attributes
@@ -89,59 +107,75 @@ class DNSProvider(BaseProvider):
return ProviderResult()
result = ProviderResult()
ip_version = get_ip_version(ip)
try:
# Perform reverse DNS lookup
# Perform reverse DNS lookup (works for both IPv4 and IPv6)
self.total_requests += 1
reverse_name = reversename.from_address(ip)
response = self.resolver.resolve(reverse_name, 'PTR')
self.successful_requests += 1
ptr_records = []
for ptr_record in response:
hostname = str(ptr_record).rstrip('.')
if _is_valid_domain(hostname):
# NOTE: PTR relationships always use the 'dns_ptr_record' type below;
# the IP version is recorded in raw_data and attribute metadata instead.
# Add the relationship
result.add_relationship(
source_node=ip,
target_node=hostname,
relationship_type='ptr_record',
relationship_type='dns_ptr_record',
provider=self.name,
confidence=0.8,
raw_data={
'query_type': 'PTR',
'ip_address': ip,
'ip_version': ip_version,
'hostname': hostname,
'ttl': response.ttl
}
)
# Add PTR record as attribute to the IP
result.add_attribute(
target_node=ip,
name='ptr_record',
value=hostname,
attr_type='dns_record',
provider=self.name,
confidence=0.8,
metadata={'ttl': response.ttl}
)
# Add to PTR records list
ptr_records.append(f"PTR: {hostname}")
# Log the relationship discovery
self.log_relationship_discovery(
source_node=ip,
target_node=hostname,
relationship_type='ptr_record',
relationship_type='dns_ptr_record',
confidence_score=0.8,
raw_data={
'query_type': 'PTR',
'ip_address': ip,
'ip_version': ip_version,
'hostname': hostname,
'ttl': response.ttl
},
discovery_method="reverse_dns_lookup"
discovery_method=f"reverse_dns_lookup_ipv{ip_version}"
)
# Add PTR records as separate attribute
if ptr_records:
result.add_attribute(
target_node=ip,
name='ptr_records', # Specific name for PTR records
value=ptr_records,
attr_type='dns_record',
provider=self.name,
confidence=0.8,
metadata={'ttl': response.ttl, 'ip_version': ip_version}
)
except resolver.NXDOMAIN:
self.failed_requests += 1
self.logger.logger.debug(f"Reverse DNS lookup failed for {ip}: NXDOMAIN")
@@ -155,12 +189,8 @@ class DNSProvider(BaseProvider):
def _query_record(self, domain: str, record_type: str, result: ProviderResult) -> None:
"""
Query a specific type of DNS record for the domain and add results to ProviderResult.
Args:
domain: Domain to query
record_type: DNS record type (A, AAAA, CNAME, etc.)
result: ProviderResult to populate
FIXED: Query DNS records with unique attribute names for each record type.
Enhanced to better handle IPv6 AAAA records.
"""
try:
self.total_requests += 1
@@ -173,6 +203,10 @@ class DNSProvider(BaseProvider):
target = ""
if record_type in ['A', 'AAAA']:
target = str(record)
# Validate that the IP address is properly formed
if not _is_valid_ip(target):
self.logger.logger.debug(f"Invalid IP address in {record_type} record: {target}")
continue
elif record_type in ['CNAME', 'NS', 'PTR']:
target = str(record.target).rstrip('.')
elif record_type == 'MX':
@@ -180,28 +214,38 @@ class DNSProvider(BaseProvider):
elif record_type == 'SOA':
target = str(record.mname).rstrip('.')
elif record_type in ['TXT']:
# TXT records are treated as attributes, not relationships
# Keep raw TXT record value
txt_value = str(record).strip('"')
dns_records.append(f"TXT: {txt_value}")
dns_records.append(txt_value) # Just the value for TXT
continue
elif record_type == 'SRV':
target = str(record.target).rstrip('.')
elif record_type == 'CAA':
# Keep raw CAA record format
caa_value = f"{record.flags} {record.tag.decode('utf-8')} \"{record.value.decode('utf-8')}\""
dns_records.append(f"CAA: {caa_value}")
dns_records.append(caa_value) # Just the value for CAA
continue
else:
target = str(record)
if target:
# Determine IP version for metadata if this is an IP record
ip_version = None
if record_type in ['A', 'AAAA'] and _is_valid_ip(target):
ip_version = get_ip_version(target)
raw_data = {
'query_type': record_type,
'domain': domain,
'value': target,
'ttl': response.ttl
}
relationship_type = f"{record_type.lower()}_record"
confidence = 0.8 # Standard confidence for DNS records
if ip_version:
raw_data['ip_version'] = ip_version
relationship_type = f"dns_{record_type.lower()}_record"
confidence = 0.8
# Add relationship
result.add_relationship(
@@ -213,33 +257,47 @@ class DNSProvider(BaseProvider):
raw_data=raw_data
)
# Add DNS record as attribute to the source domain
dns_records.append(f"{record_type}: {target}")
# Add target to records list
dns_records.append(target)
# Log relationship discovery
# Log relationship discovery with IP version info
discovery_method = f"dns_{record_type.lower()}_record"
if ip_version:
discovery_method += f"_ipv{ip_version}"
self.log_relationship_discovery(
source_node=domain,
target_node=target,
relationship_type=relationship_type,
confidence_score=confidence,
raw_data=raw_data,
discovery_method=f"dns_{record_type.lower()}_record"
discovery_method=discovery_method
)
# Add DNS records as a consolidated attribute
# FIXED: Create attribute with specific name for each record type
if dns_records:
# Use record type specific attribute name (e.g., 'a_records', 'mx_records', etc.)
attribute_name = f"{record_type.lower()}_records"
metadata = {'record_type': record_type, 'ttl': response.ttl}
# Add IP version info for A/AAAA records
if record_type in ['A', 'AAAA'] and dns_records:
first_ip_version = get_ip_version(dns_records[0])
if first_ip_version:
metadata['ip_version'] = first_ip_version
result.add_attribute(
target_node=domain,
name='dns_records',
name=attribute_name, # UNIQUE name for each record type!
value=dns_records,
attr_type='dns_record_list',
provider=self.name,
confidence=0.8,
metadata={'record_types': [record_type]}
metadata=metadata
)
except Exception as e:
self.failed_requests += 1
self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
# Re-raise the exception so the scanner can handle it
raise e

View File

@@ -8,13 +8,13 @@ import requests
from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_ip, _is_valid_domain
from utils.helpers import _is_valid_ip, _is_valid_domain, get_ip_version, normalize_ip
class ShodanProvider(BaseProvider):
"""
Provider for querying Shodan API for IP address information.
Now returns standardized ProviderResult objects with caching support.
Now returns standardized ProviderResult objects with caching support for IPv4 and IPv6.
"""
def __init__(self, name=None, session_config=None):
@@ -28,13 +28,61 @@ class ShodanProvider(BaseProvider):
self.base_url = "https://api.shodan.io"
self.api_key = self.config.get_api_key('shodan')
# FIXED: Don't fail initialization on connection issues - defer to actual usage
self._connection_tested = False
self._connection_works = False
# Initialize cache directory
self.cache_dir = Path('cache') / 'shodan'
self.cache_dir.mkdir(parents=True, exist_ok=True)
def __getstate__(self):
"""Prepare the object for pickling."""
state = super().__getstate__()
return state
def __setstate__(self, state):
"""Restore the object after unpickling."""
super().__setstate__(state)
def _check_api_connection(self) -> bool:
"""
FIXED: Lazy connection checking - only test when actually needed.
Don't block provider initialization on network issues.
"""
if self._connection_tested:
return self._connection_works
if not self.api_key:
self._connection_tested = True
self._connection_works = False
return False
try:
print(f"Testing Shodan API connection with key: {self.api_key[:8]}...")
response = self.session.get(f"{self.base_url}/api-info?key={self.api_key}", timeout=5)
self._connection_works = response.status_code == 200
print(f"Shodan API test result: {response.status_code} - {'Success' if self._connection_works else 'Failed'}")
except requests.exceptions.RequestException as e:
print(f"Shodan API connection test failed: {e}")
self._connection_works = False
finally:
self._connection_tested = True
return self._connection_works
def is_available(self) -> bool:
"""Check if Shodan provider is available (has valid API key in this session)."""
return self.api_key is not None and len(self.api_key.strip()) > 0
"""
FIXED: Check if Shodan provider is available based on API key presence.
Don't require successful connection test during initialization.
"""
has_api_key = self.api_key is not None and len(self.api_key.strip()) > 0
if not has_api_key:
return False
# FIXED: Only test connection on first actual usage, not during initialization
return True
def get_name(self) -> str:
"""Return the provider name."""
@@ -53,8 +101,19 @@ class ShodanProvider(BaseProvider):
return {'domains': False, 'ips': True}
def _get_cache_file_path(self, ip: str) -> Path:
"""Generate cache file path for an IP address."""
safe_ip = ip.replace('.', '_').replace(':', '_')
"""
Generate cache file path for an IP address (IPv4 or IPv6).
IPv6 addresses contain colons which are replaced with underscores for filesystem safety.
"""
# Normalize the IP address first to ensure consistent caching
normalized_ip = normalize_ip(ip)
if not normalized_ip:
# Fallback for invalid IPs
safe_ip = ip.replace('.', '_').replace(':', '_')
else:
# Replace problematic characters for both IPv4 and IPv6
safe_ip = normalized_ip.replace('.', '_').replace(':', '_')
return self.cache_dir / f"{safe_ip}.json"
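# Example (illustrative), assuming normalize_ip returns the compressed
# canonical form of the address:
#   "2001:0db8::0001" -> "2001_db8__1.json"
#   "8.8.8.8"         -> "8_8_8_8.json"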
def _get_cache_status(self, cache_file_path: Path) -> str:
@@ -87,58 +146,157 @@ class ShodanProvider(BaseProvider):
def query_domain(self, domain: str) -> ProviderResult:
"""
Domain queries are no longer supported for the Shodan provider.
Args:
domain: Domain to investigate
Returns:
Empty ProviderResult
Shodan does not support domain queries. This method returns an empty result.
"""
return ProviderResult()
def query_ip(self, ip: str) -> ProviderResult:
"""
Query Shodan for information about an IP address, with caching of processed data.
Query Shodan for information about an IP address (IPv4 or IPv6), with caching of processed data.
FIXED: Proper 404 handling to prevent unnecessary retries.
Args:
ip: IP address to investigate
ip: IP address to investigate (IPv4 or IPv6)
Returns:
ProviderResult containing discovered relationships and attributes
Raises:
Exception: For temporary failures that should be retried (timeouts, 502/503 errors, connection issues)
"""
if not _is_valid_ip(ip) or not self.is_available():
if not _is_valid_ip(ip):
return ProviderResult()
# Test connection only when actually making requests
if not self._check_api_connection():
print(f"Shodan API not available for {ip} - API key: {'present' if self.api_key else 'missing'}")
return ProviderResult()
cache_file = self._get_cache_file_path(ip)
# Normalize IP address for consistent processing
normalized_ip = normalize_ip(ip)
if not normalized_ip:
return ProviderResult()
cache_file = self._get_cache_file_path(normalized_ip)
cache_status = self._get_cache_status(cache_file)
result = ProviderResult()
if cache_status == "fresh":
self.logger.logger.debug(f"Using fresh cache for Shodan query: {normalized_ip}")
return self._load_from_cache(cache_file)
# Need to query API
self.logger.logger.debug(f"Querying Shodan API for: {normalized_ip}")
url = f"{self.base_url}/shodan/host/{normalized_ip}"
params = {'key': self.api_key}
try:
if cache_status == "fresh":
result = self._load_from_cache(cache_file)
self.logger.logger.info(f"Using cached Shodan data for {ip}")
else: # "stale" or "not_found"
url = f"{self.base_url}/shodan/host/{ip}"
params = {'key': self.api_key}
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
if response and response.status_code == 200:
response = self.make_request(url, method="GET", params=params, target_indicator=normalized_ip)
if not response:
self.logger.logger.warning(f"Shodan API unreachable for {normalized_ip} - network failure")
if cache_status == "stale":
self.logger.logger.info(f"Using stale cache for {normalized_ip} due to network failure")
return self._load_from_cache(cache_file)
else:
# FIXED: Treat network failures as "no information" rather than retryable errors
self.logger.logger.info(f"No Shodan data available for {normalized_ip} due to network failure")
result = ProviderResult() # Empty result
network_failure_data = {'shodan_status': 'network_unreachable', 'error': 'API unreachable'}
self._save_to_cache(cache_file, result, network_failure_data)
return result
# FIXED: Handle different status codes more precisely
if response.status_code == 200:
self.logger.logger.debug(f"Shodan returned data for {normalized_ip}")
try:
data = response.json()
# Process the data into ProviderResult BEFORE caching
result = self._process_shodan_data(ip, data)
self._save_to_cache(cache_file, result, data) # Save both result and raw data
elif cache_status == "stale":
# If API fails on a stale cache, use the old data
result = self._load_from_cache(cache_file)
except requests.exceptions.RequestException as e:
self.logger.logger.error(f"Shodan API query failed for {ip}: {e}")
result = self._process_shodan_data(normalized_ip, data)
self._save_to_cache(cache_file, result, data)
return result
except json.JSONDecodeError as e:
self.logger.logger.error(f"Invalid JSON response from Shodan for {normalized_ip}: {e}")
if cache_status == "stale":
return self._load_from_cache(cache_file)
else:
raise requests.exceptions.RequestException("Invalid JSON response from Shodan - should retry")
elif response.status_code == 404:
# FIXED: 404 = "no information available" - successful but empty result, don't retry
self.logger.logger.debug(f"Shodan has no information for {normalized_ip} (404)")
result = ProviderResult() # Empty but successful result
# Cache the empty result to avoid repeated queries
empty_data = {'shodan_status': 'no_information', 'status_code': 404}
self._save_to_cache(cache_file, result, empty_data)
return result
elif response.status_code in [401, 403]:
# Authentication/authorization errors - permanent failures, don't retry
self.logger.logger.error(f"Shodan API authentication failed for {normalized_ip} (HTTP {response.status_code})")
return ProviderResult() # Empty result, don't retry
elif response.status_code == 429:
# Rate limiting - should be handled by rate limiter, but if we get here, retry
self.logger.logger.warning(f"Shodan API rate limited for {normalized_ip} (HTTP {response.status_code})")
if cache_status == "stale":
self.logger.logger.info(f"Using stale cache for {normalized_ip} due to rate limiting")
return self._load_from_cache(cache_file)
else:
raise requests.exceptions.RequestException(f"Shodan API rate limited (HTTP {response.status_code}) - should retry")
elif response.status_code in [500, 502, 503, 504]:
# Server errors - temporary failures that should be retried
self.logger.logger.warning(f"Shodan API server error for {normalized_ip} (HTTP {response.status_code})")
if cache_status == "stale":
self.logger.logger.info(f"Using stale cache for {normalized_ip} due to server error")
return self._load_from_cache(cache_file)
else:
raise requests.exceptions.RequestException(f"Shodan API server error (HTTP {response.status_code}) - should retry")
else:
# FIXED: Other HTTP status codes - treat as no information available, don't retry
self.logger.logger.info(f"Shodan returned status {response.status_code} for {normalized_ip} - treating as no information")
result = ProviderResult() # Empty result
no_info_data = {'shodan_status': 'no_information', 'status_code': response.status_code}
self._save_to_cache(cache_file, result, no_info_data)
return result
except requests.exceptions.Timeout:
# Timeout errors - should be retried
self.logger.logger.warning(f"Shodan API timeout for {normalized_ip}")
if cache_status == "stale":
result = self._load_from_cache(cache_file)
self.logger.logger.info(f"Using stale cache for {normalized_ip} due to timeout")
return self._load_from_cache(cache_file)
else:
raise # Re-raise timeout for retry
return result
except requests.exceptions.ConnectionError:
# Connection errors - should be retried
self.logger.logger.warning(f"Shodan API connection error for {normalized_ip}")
if cache_status == "stale":
self.logger.logger.info(f"Using stale cache for {normalized_ip} due to connection error")
return self._load_from_cache(cache_file)
else:
raise # Re-raise connection error for retry
except json.JSONDecodeError:
# JSON parsing error - treat as temporary failure
self.logger.logger.error(f"Invalid JSON response from Shodan for {normalized_ip}")
if cache_status == "stale":
self.logger.logger.info(f"Using stale cache for {normalized_ip} due to JSON parsing error")
return self._load_from_cache(cache_file)
else:
raise requests.exceptions.RequestException("Invalid JSON response from Shodan - should retry")
# FIXED: Remove the generic RequestException handler that was causing 404s to retry
# Now only specific exceptions that should be retried are re-raised
except Exception as e:
# FIXED: Unexpected exceptions - log but treat as no information available, don't retry
self.logger.logger.warning(f"Unexpected exception in Shodan query for {normalized_ip}: {e}")
result = ProviderResult() # Empty result
error_data = {'shodan_status': 'error', 'error': str(e)}
self._save_to_cache(cache_file, result, error_data)
return result
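# Status-code handling summary for the branches above (review note):
#   200            -> process response and cache the result
#   404            -> cache an empty result, never retry
#   401 / 403      -> permanent auth failure, empty result, never retry
#   429, 500-504   -> use stale cache if available, else raise for retry
#   timeout / conn -> use stale cache if available, else re-raise for retry
# Caveat: the RequestException raised for 429/5xx originates inside this try
# block, so the generic `except Exception` above catches it and caches an
# error instead of retrying; only Timeout and ConnectionError (re-raised from
# their own handlers) actually propagate to the retry logic.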
def _load_from_cache(self, cache_file_path: Path) -> ProviderResult:
"""Load processed Shodan data from a cache file."""
@@ -211,73 +369,100 @@ class ShodanProvider(BaseProvider):
def _process_shodan_data(self, ip: str, data: Dict[str, Any]) -> ProviderResult:
"""
Process Shodan data to extract relationships and attributes.
Args:
ip: IP address queried
data: Raw Shodan response data
Returns:
ProviderResult with relationships and attributes
VERIFIED: Process Shodan data creating ISP nodes with ASN attributes and proper relationships.
Enhanced to include IP version information for IPv6 addresses.
"""
result = ProviderResult()
# Determine IP version for metadata
ip_version = get_ip_version(ip)
# VERIFIED: Extract ISP information and create proper ISP node with ASN
isp_name = data.get('org')
asn_value = data.get('asn')
if isp_name and asn_value:
# Create relationship from IP to ISP
result.add_relationship(
source_node=ip,
target_node=isp_name,
relationship_type='shodan_isp',
provider=self.name,
confidence=0.9,
raw_data={'asn': asn_value, 'shodan_org': isp_name, 'ip_version': ip_version}
)
# Add ASN as attribute to the ISP node
result.add_attribute(
target_node=isp_name,
name='asn',
value=asn_value,
attr_type='isp_info',
provider=self.name,
confidence=0.9,
metadata={'description': 'Autonomous System Number from Shodan', 'ip_version': ip_version}
)
# Also add organization name as attribute to ISP node for completeness
result.add_attribute(
target_node=isp_name,
name='organization_name',
value=isp_name,
attr_type='isp_info',
provider=self.name,
confidence=0.9,
metadata={'description': 'Organization name from Shodan', 'ip_version': ip_version}
)
# Process hostnames (reverse DNS)
for key, value in data.items():
if key == 'hostnames':
for hostname in value:
if _is_valid_domain(hostname):
# Use appropriate relationship type based on IP version
if ip_version == 6:
relationship_type = 'shodan_aaaa_record'
else:
relationship_type = 'shodan_a_record'
result.add_relationship(
source_node=ip,
target_node=hostname,
relationship_type='a_record',
relationship_type=relationship_type,
provider=self.name,
confidence=0.8,
raw_data=data
raw_data={**data, 'ip_version': ip_version}
)
self.log_relationship_discovery(
source_node=ip,
target_node=hostname,
relationship_type='a_record',
relationship_type=relationship_type,
confidence_score=0.8,
raw_data=data,
discovery_method="shodan_host_lookup"
raw_data={**data, 'ip_version': ip_version},
discovery_method=f"shodan_host_lookup_ipv{ip_version}"
)
elif key == 'asn':
asn_name = f"AS{value[2:]}" if isinstance(value, str) and value.startswith('AS') else f"AS{value}"
result.add_relationship(
source_node=ip,
target_node=asn_name,
relationship_type='asn_membership',
provider=self.name,
confidence=0.7,
raw_data=data
)
self.log_relationship_discovery(
source_node=ip,
target_node=asn_name,
relationship_type='asn_membership',
confidence_score=0.7,
raw_data=data,
discovery_method="shodan_asn_lookup"
)
elif key == 'ports':
# Add open ports as attributes to the IP
for port in value:
result.add_attribute(
target_node=ip,
name='open_port',
name='shodan_open_port',
value=port,
attr_type='network_info',
attr_type='shodan_network_info',
provider=self.name,
confidence=0.9
confidence=0.9,
metadata={'ip_version': ip_version}
)
elif isinstance(value, (str, int, float, bool)) and value is not None:
# Add other Shodan fields as IP attributes (keep raw field names)
result.add_attribute(
target_node=ip,
name=f"shodan_{key}",
name=key, # Raw field name from Shodan API
value=value,
attr_type='shodan_info',
provider=self.name,
confidence=0.9
confidence=0.9,
metadata={'ip_version': ip_version}
)
return result
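# Illustrative result shape (hypothetical values) for an IPv4 host whose
# Shodan record has org="ExampleNet", asn="AS64500",
# hostnames=["a.example.com"], ports=[443]:
#   relationships: ip -> "ExampleNet"    (shodan_isp)
#                  ip -> "a.example.com" (shodan_a_record)
#   attributes:    "ExampleNet": asn="AS64500", organization_name="ExampleNet"
#                  ip: shodan_open_port=443, plus scalar Shodan fields by name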

View File

@@ -1,10 +1,13 @@
Flask>=2.3.3
networkx>=3.1
requests>=2.31.0
python-dateutil>=2.8.2
Werkzeug>=2.3.7
urllib3>=2.0.0
dnspython>=2.4.2
Flask
networkx
requests
python-dateutil
Werkzeug
urllib3
dnspython
gunicorn
redis
python-dotenv
psycopg2-binary
Flask-SocketIO
eventlet

File diff suppressed because it is too large

View File

@@ -1,3 +1,4 @@
// dnsrecon-reduced/static/js/graph.js
/**
* Graph visualization module for DNSRecon
* Handles network graph rendering using vis.js with proper large entity node hiding
@@ -214,7 +215,6 @@ class GraphManager {
});
document.body.appendChild(this.contextMenu);
console.log('Context menu created and added to body');
}
/**
@@ -291,7 +291,6 @@ class GraphManager {
// FIXED: Right-click context menu
this.container.addEventListener('contextmenu', (event) => {
event.preventDefault();
console.log('Right-click detected at:', event.offsetX, event.offsetY);
// Get coordinates relative to the canvas
const pointer = {
@@ -300,7 +299,6 @@ class GraphManager {
};
const nodeId = this.network.getNodeAt(pointer);
console.log('Node at pointer:', nodeId);
if (nodeId) {
// Pass the original client event for positioning
@@ -341,19 +339,12 @@ class GraphManager {
// Stabilization events with progress
this.network.on('stabilizationProgress', (params) => {
const progress = params.iterations / params.total;
this.updateStabilizationProgress(progress);
});
this.network.on('stabilizationIterationsDone', () => {
this.onStabilizationComplete();
});
// Selection events
this.network.on('select', (params) => {
console.log('Selected nodes:', params.nodes);
console.log('Selected edges:', params.edges);
});
// Click away to hide context menu
document.addEventListener('click', (e) => {
if (!this.contextMenu.contains(e.target)) {
@@ -372,97 +363,79 @@ class GraphManager {
}
try {
// Initialize if not already done
if (!this.isInitialized) {
this.initialize();
}
this.largeEntityMembers.clear();
const largeEntityMap = new Map();
this.initialTargetIds = new Set(graphData.initial_targets || []);
const hasData = graphData.nodes.length > 0 || graphData.edges.length > 0;
graphData.nodes.forEach(node => {
if (node.type === 'large_entity' && node.attributes) {
// UPDATED: Handle unified data model - look for 'nodes' attribute in the attributes list
const nodesAttribute = this.findAttributeByName(node.attributes, 'nodes');
if (nodesAttribute && Array.isArray(nodesAttribute.value)) {
nodesAttribute.value.forEach(nodeId => {
largeEntityMap.set(nodeId, node.id);
this.largeEntityMembers.add(nodeId);
});
}
}
});
const placeholder = this.container.querySelector('.graph-placeholder');
if (placeholder) {
placeholder.style.display = hasData ? 'none' : 'flex';
}
if (!hasData) {
this.nodes.clear();
this.edges.clear();
return;
}
const filteredNodes = graphData.nodes.filter(node => {
// Only include nodes that are NOT members of large entities, but always include the container itself
return !this.largeEntityMembers.has(node.id) || node.type === 'large_entity';
});
const nodeMap = new Map(graphData.nodes.map(node => [node.id, node]));
console.log(`Filtered ${graphData.nodes.length - filteredNodes.length} large entity member nodes from visualization`);
// Filter out hidden nodes before processing for rendering
const filteredNodes = graphData.nodes.filter(node =>
!(node.metadata && node.metadata.large_entity_id)
);
// Process only the filtered nodes
const processedNodes = filteredNodes.map(node => {
return this.processNode(node);
});
const mergedEdges = {};
graphData.edges.forEach(edge => {
const fromNode = largeEntityMap.has(edge.from) ? largeEntityMap.get(edge.from) : edge.from;
const toNode = largeEntityMap.has(edge.to) ? largeEntityMap.get(edge.to) : edge.to;
const mergeKey = `${fromNode}-${toNode}-${edge.label}`;
if (!mergedEdges[mergeKey]) {
mergedEdges[mergeKey] = {
...edge,
from: fromNode,
to: toNode,
count: 0,
confidence_score: 0
};
}
mergedEdges[mergeKey].count++;
if (edge.confidence_score > mergedEdges[mergeKey].confidence_score) {
mergedEdges[mergeKey].confidence_score = edge.confidence_score;
}
});
const processedEdges = Object.values(mergedEdges).map(edge => {
const processed = this.processEdge(edge);
if (edge.count > 1) {
processed.label = `${edge.label} (${edge.count})`;
const processedNodes = graphData.nodes.map(node => {
const processed = this.processNode(node);
if (node.metadata && node.metadata.large_entity_id) {
processed.hidden = true;
}
return processed;
});
const processedEdges = graphData.edges.map(edge => {
let fromNode = nodeMap.get(edge.from);
let toNode = nodeMap.get(edge.to);
let fromId = edge.from;
let toId = edge.to;
if (fromNode && fromNode.metadata && fromNode.metadata.large_entity_id) {
fromId = fromNode.metadata.large_entity_id;
}
if (toNode && toNode.metadata && toNode.metadata.large_entity_id) {
toId = toNode.metadata.large_entity_id;
}
// Avoid self-referencing edges from re-routing
if (fromId === toId) {
return null;
}
const reRoutedEdge = { ...edge, from: fromId, to: toId };
return this.processEdge(reRoutedEdge);
}).filter(Boolean); // Remove nulls from self-referencing edges
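// Example (illustrative): if node "a.example.com" has
// metadata.large_entity_id = "le_1", an edge
// {from: "a.example.com", to: "1.2.3.4"} is re-routed to
// {from: "le_1", to: "1.2.3.4"}; edges whose endpoints both collapse into
// the same container would become self-loops and are dropped instead.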
// Update datasets with animation
const existingNodeIds = this.nodes.getIds();
const existingEdgeIds = this.edges.getIds();
// Add new nodes with fade-in animation
const newNodes = processedNodes.filter(node => !existingNodeIds.includes(node.id));
const newEdges = processedEdges.filter(edge => !existingEdgeIds.includes(edge.id));
// Update existing data
this.nodes.update(processedNodes);
this.edges.update(processedEdges);
// After data is loaded, apply filters
this.updateFilterControls();
this.applyAllFilters();
// Highlight new additions briefly
if (newNodes.length > 0 || newEdges.length > 0) {
setTimeout(() => this.highlightNewElements(newNodes, newEdges), 100);
}
// Auto-fit view for small graphs or first update
if (processedNodes.length <= 10 || existingNodeIds.length === 0) {
if (this.nodes.length <= 10 || existingNodeIds.length === 0) {
setTimeout(() => this.fitView(), 800);
}
console.log(`Graph updated: ${processedNodes.length} nodes, ${processedEdges.length} edges (${newNodes.length} new nodes, ${newEdges.length} new edges)`);
console.log(`Large entity members hidden: ${this.largeEntityMembers.size}`);
} catch (error) {
console.error('Failed to update graph:', error);
@@ -470,6 +443,46 @@ class GraphManager {
}
}
analyzeCertificateInfo(attributes) {
let hasCertificates = false;
let hasValidCertificates = false;
let hasExpiredCertificates = false;
for (const attr of attributes) {
const attrName = (attr.name || '').toLowerCase();
const attrProvider = (attr.provider || '').toLowerCase();
const attrValue = attr.value;
// Look for certificate attributes from crtsh provider
if (attrProvider === 'crtsh' || attrName.startsWith('cert_')) {
hasCertificates = true;
// Check certificate validity using raw attribute names
if (attrName === 'cert_is_currently_valid') {
if (attrValue === true) {
hasValidCertificates = true;
} else if (attrValue === false) {
hasExpiredCertificates = true;
}
}
// Check for expiry indicators
else if (attrName === 'cert_expires_soon' && attrValue === true) {
hasExpiredCertificates = true;
}
else if (attrName.includes('expired') && attrValue === true) {
hasExpiredCertificates = true;
}
}
}
return {
hasCertificates,
hasValidCertificates,
hasExpiredCertificates,
hasExpiredOnly: hasExpiredCertificates && !hasValidCertificates
};
}
/**
* UPDATED: Helper method to find an attribute by name in the standardized attributes list
* @param {Array} attributes - List of StandardAttribute objects
@@ -496,7 +509,7 @@ class GraphManager {
size: this.getNodeSize(node.type),
borderColor: this.getNodeBorderColor(node.type),
shape: this.getNodeShape(node.type),
attributes: node.attributes || [], // Keep as standardized attributes list
attributes: node.attributes || [],
description: node.description || '',
metadata: node.metadata || {},
type: node.type,
@@ -504,29 +517,40 @@ class GraphManager {
outgoing_edges: node.outgoing_edges || []
};
if (node.max_depth_reached) {
processedNode.borderColor = '#ff0000'; // Red border for max depth
}
// Add confidence-based styling
if (node.confidence) {
processedNode.borderWidth = Math.max(2, Math.floor(node.confidence * 5));
}
// Handle merged correlation objects (similar to large entities)
if (node.type === 'correlation_object') {
const metadata = node.metadata || {};
const values = metadata.values || [];
const mergeCount = metadata.merge_count || 1;
// FIXED: Certificate-based domain coloring
if (node.type === 'domain' && Array.isArray(node.attributes)) {
const certInfo = this.analyzeCertificateInfo(node.attributes);
if (mergeCount > 1) {
// Display as merged correlation container
processedNode.label = `Correlations (${mergeCount})`;
processedNode.title = `Merged correlation container with ${mergeCount} values: ${values.slice(0, 3).join(', ')}${values.length > 3 ? '...' : ''}`;
processedNode.borderWidth = 3; // Thicker border for merged nodes
} else {
// Single correlation value
const value = Array.isArray(values) && values.length > 0 ? values[0] : (metadata.value || 'Unknown');
const displayValue = typeof value === 'string' && value.length > 20 ? value.substring(0, 17) + '...' : value;
processedNode.label = `${displayValue}`;
processedNode.title = `Correlation: ${value}`;
if (certInfo.hasExpiredOnly) {
// Red for domains with only expired/invalid certificates
processedNode.color = '#ff6b6b';
processedNode.borderColor = '#cc5555';
} else if (!certInfo.hasCertificates) {
// Grey for domains with no certificates
processedNode.color = '#c7c7c7';
processedNode.borderColor = '#999999';
}
// Green for valid certificates (default color)
}
// Handle merged correlation objects
if (node.type === 'correlation_object') {
const correlationValueAttr = this.findAttributeByName(node.attributes, 'correlation_value');
const value = correlationValueAttr ? correlationValueAttr.value : 'Unknown';
const displayValue = typeof value === 'string' && value.length > 20 ? value.substring(0, 17) + '...' : value;
processedNode.label = `${displayValue}`;
processedNode.title = `Correlation: ${value}`;
}
return processedNode;
@@ -540,7 +564,7 @@ class GraphManager {
processEdge(edge) {
const confidence = edge.confidence_score || 0;
const processedEdge = {
id: `${edge.from}-${edge.to}`,
id: `${edge.from}-${edge.to}-${edge.label}`,
from: edge.from,
to: edge.to,
label: this.formatEdgeLabel(edge.label, confidence),
@@ -595,7 +619,8 @@ class GraphManager {
const colors = {
'domain': '#00ff41', // Green
'ip': '#ff9900', // Amber
'asn': '#00aaff', // Blue
'isp': '#00aaff', // Blue
'ca': '#ff6b6b', // Red
'large_entity': '#ff6b6b', // Red for large entities
'correlation_object': '#9620c0ff'
};
@@ -611,7 +636,8 @@ class GraphManager {
const borderColors = {
'domain': '#00aa2e',
'ip': '#cc7700',
'asn': '#0088cc',
'isp': '#0088cc',
'ca': '#cc5555',
'correlation_object': '#c235c9ff'
};
return borderColors[nodeType] || '#666666';
@@ -626,9 +652,10 @@ class GraphManager {
const sizes = {
'domain': 12,
'ip': 14,
'asn': 16,
'isp': 16,
'ca': 16,
'correlation_object': 8,
'large_entity': 5
'large_entity': 25
};
return sizes[nodeType] || 12;
}
@@ -642,9 +669,10 @@ class GraphManager {
const shapes = {
'domain': 'dot',
'ip': 'square',
'asn': 'triangle',
'isp': 'triangle',
'ca': 'diamond',
'correlation_object': 'hexagon',
'large_entity': 'database'
'large_entity': 'dot'
};
return shapes[nodeType] || 'dot';
}
@@ -900,15 +928,6 @@ class GraphManager {
}, 2000);
}
/**
* Update stabilization progress
* @param {number} progress - Progress value (0-1)
*/
updateStabilizationProgress(progress) {
// Could show a progress indicator if needed
console.log(`Graph stabilization: ${(progress * 100).toFixed(1)}%`);
}
/**
* Handle stabilization completion
*/
@@ -992,8 +1011,8 @@ class GraphManager {
this.nodes.clear();
this.edges.clear();
this.history = [];
this.largeEntityMembers.clear(); // Clear large entity tracking
this.clearInitialTargets();
this.largeEntityMembers.clear();
this.initialTargetIds.clear();
// Show placeholder
const placeholder = this.container.querySelector('.graph-placeholder');
@@ -1096,11 +1115,11 @@ class GraphManager {
adjacencyList
);
console.log(`Reachability analysis complete:`, {
/*console.log(`Reachability analysis complete:`, {
reachable: analysis.reachableNodes.size,
unreachable: analysis.unreachableNodes.size,
clusters: analysis.isolatedClusters.length
});
});*/
return analysis;
}
@@ -1150,7 +1169,6 @@ class GraphManager {
const basicStats = {
nodeCount: this.nodes.length,
edgeCount: this.edges.length,
largeEntityMembersHidden: this.largeEntityMembers.size
};
// Add forensic statistics
@@ -1168,16 +1186,6 @@ class GraphManager {
};
}
addInitialTarget(targetId) {
this.initialTargetIds.add(targetId);
console.log("Initial targets:", this.initialTargetIds);
}
clearInitialTargets() {
this.initialTargetIds.clear();
console.log("Initial targets cleared.");
}
updateFilterControls() {
if (!this.filterPanel) return;
const nodeTypes = new Set(this.nodes.get().map(n => n.type));
@@ -1215,7 +1223,6 @@ class GraphManager {
* Replaces the existing applyAllFilters() method
*/
applyAllFilters() {
console.log("Applying filters with enhanced reachability analysis...");
if (this.nodes.length === 0) return;
// Get filter criteria from UI
@@ -1271,23 +1278,11 @@ class GraphManager {
operation: 'hide_with_reachability',
timestamp: Date.now()
};
// Apply hiding with forensic documentation
const updates = nodesToHide.map(id => ({
id: id,
hidden: true,
forensicNote: `Hidden due to reachability analysis from ${nodeId}`
}));
const updates = nodesToHide.map(id => ({ id: id, hidden: true }));
this.nodes.update(updates);
this.addToHistory('hide', historyData);
console.log(`Forensic hide operation: ${nodesToHide.length} nodes hidden`, {
originalTarget: nodeId,
cascadeNodes: nodesToHide.length - 1,
isolatedClusters: analysis.isolatedClusters.length
});
return {
hiddenNodes: nodesToHide,
isolatedClusters: analysis.isolatedClusters
@@ -1370,9 +1365,7 @@ class GraphManager {
// Handle operation results
if (!operationFailed) {
this.addToHistory('delete', historyData);
console.log(`Forensic delete operation completed:`, historyData.forensicAnalysis);
this.addToHistory('delete', historyData);
return {
success: true,
deletedNodes: nodesToDelete,
@@ -1463,7 +1456,6 @@ class GraphManager {
e.stopPropagation();
const action = e.currentTarget.dataset.action;
const nodeId = e.currentTarget.dataset.nodeId;
console.log('Context menu action:', action, 'for node:', nodeId);
this.performContextMenuAction(action, nodeId);
this.hideContextMenu();
});
@@ -1483,9 +1475,7 @@ class GraphManager {
* UPDATED: Enhanced context menu actions using new methods
* Updates the existing performContextMenuAction() method
*/
performContextMenuAction(action, nodeId) {
console.log('Performing enhanced action:', action, 'on node:', nodeId);
performContextMenuAction(action, nodeId) {
switch (action) {
case 'focus':
this.focusOnNode(nodeId);
@@ -1575,14 +1565,43 @@ class GraphManager {
}
/**
* Unhide all hidden nodes
* FIXED: Unhide all hidden nodes, excluding large entity members and disconnected nodes.
* This prevents orphaned large entity members from appearing as free-floating nodes.
*/
unhideAll() {
const allNodes = this.nodes.get({
filter: (node) => node.hidden === true
const allHiddenNodes = this.nodes.get({
filter: (node) => {
// Skip nodes that are part of a large entity
if (node.metadata && node.metadata.large_entity_id) {
return false;
}
// Skip nodes that are not hidden
if (node.hidden !== true) {
return false;
}
// Skip nodes that have no edges (would appear disconnected)
const nodeId = node.id;
const hasIncomingEdges = this.edges.get().some(edge => edge.to === nodeId && !edge.hidden);
const hasOutgoingEdges = this.edges.get().some(edge => edge.from === nodeId && !edge.hidden);
if (!hasIncomingEdges && !hasOutgoingEdges) {
console.log(`Skipping disconnected node ${nodeId} from unhide`);
return false;
}
return true;
}
});
const updates = allNodes.map(node => ({ id: node.id, hidden: false }));
this.nodes.update(updates);
if (allHiddenNodes.length > 0) {
console.log(`Unhiding ${allHiddenNodes.length} nodes with valid connections`);
const updates = allHiddenNodes.map(node => ({ id: node.id, hidden: false }));
this.nodes.update(updates);
} else {
console.log('No eligible nodes to unhide');
}
}
}

File diff suppressed because it is too large

View File

@@ -1,14 +1,19 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>DNSRecon - Infrastructure Reconnaissance</title>
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}">
<script src="https://cdnjs.cloudflare.com/ajax/libs/vis/4.21.0/vis.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.7.2/socket.io.js"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/vis/4.21.0/vis.min.css" rel="stylesheet" type="text/css">
<link href="https://fonts.googleapis.com/css2?family=Roboto+Mono:wght@300;400;500;700&family=Special+Elite&display=swap" rel="stylesheet">
<link
href="https://fonts.googleapis.com/css2?family=Roboto+Mono:wght@300;400;500;700&family=Special+Elite&display=swap"
rel="stylesheet">
</head>
<body>
<div class="container">
<header class="header">
@@ -29,13 +34,13 @@
<div class="panel-header">
<h2>Target Configuration</h2>
</div>
<div class="form-container">
<div class="input-group">
<label for="target-input">Target Domain or IP</label>
<input type="text" id="target-input" placeholder="example.com or 8.8.8.8" autocomplete="off">
</div>
<div class="button-group">
<button id="start-scan" class="btn btn-primary">
<span class="btn-icon">[RUN]</span>
@@ -49,9 +54,9 @@
<span class="btn-icon">[STOP]</span>
<span>Terminate Scan</span>
</button>
<button id="export-results" class="btn btn-secondary">
<button id="export-options" class="btn btn-secondary">
<span class="btn-icon">[EXPORT]</span>
<span>Download Results</span>
<span>Export Options</span>
</button>
<button id="configure-settings" class="btn btn-secondary">
<span class="btn-icon">[API]</span>
@@ -65,7 +70,7 @@
<div class="panel-header">
<h2>Reconnaissance Status</h2>
</div>
<div class="status-content">
<div class="status-row">
<span class="status-label">Current Status:</span>
@@ -84,7 +89,7 @@
<span id="relationships-display" class="status-value">0</span>
</div>
</div>
<div class="progress-container">
<div class="progress-info">
<span id="progress-label">Progress:</span>
@@ -95,11 +100,13 @@
</div>
<div class="progress-placeholder">
<span class="status-label">
⚠️ <strong>Important:</strong> Scanning large public services (e.g., Google, Cloudflare, AWS) is
⚠️ <strong>Important:</strong> Scanning large public services (e.g., Google, Cloudflare,
AWS) is
<strong>discouraged</strong> due to rate limits (e.g., crt.sh).
<br><br>
Our task scheduler operates on a <strong>priority-based queue</strong>:
Short, targeted tasks like DNS are processed first, while resource-intensive requests (e.g., crt.sh)
Short, targeted tasks like DNS are processed first, while resource-intensive requests (e.g.,
crt.sh)
are <strong>automatically deprioritized</strong> and may be processed later.
</span>
</div>
@@ -110,45 +117,47 @@
<div class="panel-header">
<h2>Infrastructure Map</h2>
</div>
<div id="network-graph" class="graph-container">
<div class="graph-placeholder">
<div class="placeholder-content">
<div class="placeholder-icon">[]</div>
<div class="placeholder-text">Infrastructure map will appear here</div>
<div class="placeholder-subtext">Start a reconnaissance scan to visualize relationships</div>
<div class="placeholder-subtext">Start a reconnaissance scan to visualize relationships
</div>
</div>
</div>
</div>
<div class="legend">
<div class="legend-item">
<div class="legend-color" style="background-color: #00ff41;"></div>
<span>Domains</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #c92f2f;"></div>
<span>Domain (no valid cert)</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #c7c7c7;"></div>
<span>Domain (never had cert)</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #ff9900;"></div>
<span>IP Addresses</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #c7c7c7;"></div>
<span>Domain (invalid cert)</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #9d4edd;"></div>
<span>Correlation Objects</span>
</div>
<div class="legend-item">
<div class="legend-edge high-confidence"></div>
<span>High Confidence</span>
</div>
<div class="legend-item">
<div class="legend-edge medium-confidence"></div>
<span>Medium Confidence</span>
<div class="legend-color" style="background-color: #00aaff;"></div>
<span>ISPs</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #ff6b6b;"></div>
<span>Large Entity</span>
<span>Certificate Authorities</span>
</div>
<div class="legend-item">
<div class="legend-color" style="background-color: #9d4edd;"></div>
<span>Correlation Objects</span>
</div>
</div>
</section>
@@ -157,9 +166,9 @@
<div class="panel-header">
<h2>Data Providers</h2>
</div>
<div id="provider-list" class="provider-list">
</div>
</div>
</section>
</main>
@@ -181,7 +190,7 @@
</div>
<div class="modal-body">
<div id="modal-details">
</div>
</div>
</div>
</div>
</div>
@@ -189,53 +198,125 @@
<div id="settings-modal" class="modal">
<div class="modal-content">
<div class="modal-header">
<h3>Settings</h3>
<h3>Scanner Configuration</h3>
<button id="settings-modal-close" class="modal-close">[×]</button>
</div>
<div class="modal-body">
<p class="modal-description">
Configure scan settings and API keys. Keys are stored in memory for the current session only.
Only provide API keys you don't use for anything else. Don't enter an API key if you don't trust me (best practice would be that you don't).
</p>
<br>
<div class="input-group">
<label for="max-depth">Recursion Depth</label>
<select id="max-depth">
<option value="1">Depth 1 - Direct relationships</option>
<option value="2" selected>Depth 2 - Recommended</option>
<option value="3">Depth 3 - Extended analysis</option>
<option value="4">Depth 4 - Deep reconnaissance</option>
<option value="5">Depth 5 - Maximum depth</option>
</select>
</div>
<div id="api-key-inputs">
<div class="modal-details">
<section class="modal-section">
<details open>
<summary>
<span>⚙️ Scan Settings</span>
</summary>
<div class="modal-section-content">
<div class="input-group">
<label for="max-depth">Recursion Depth</label>
<select id="max-depth">
<option value="1">Depth 1 - Direct relationships</option>
<option value="2" selected>Depth 2 - Recommended</option>
<option value="3">Depth 3 - Extended analysis</option>
<option value="4">Depth 4 - Deep reconnaissance</option>
<option value="5">Depth 5 - Maximum depth</option>
</select>
</div>
</div>
</details>
</section>
<section class="modal-section">
<details open>
<summary>
<span>🔧 Provider Configuration</span>
<span class="merge-badge" id="provider-count">0</span>
</summary>
<div class="modal-section-content">
<div id="provider-config-list">
</div>
</div>
</details>
</section>
<section class="modal-section">
<details>
<summary>
<span>🔑 API Keys</span>
<span class="merge-badge" id="api-key-count">0</span>
</summary>
<div class="modal-section-content">
<p class="placeholder-subtext" style="margin-bottom: 1rem;">
⚠️ API keys are stored in memory for the current session only.
Only provide API keys you don't use for anything else.
</p>
<div id="api-key-inputs">
</div>
</div>
</details>
</section>
<div class="button-group" style="margin-top: 1.5rem;">
<button id="save-settings" class="btn btn-primary">
<span class="btn-icon">[SAVE]</span>
<span>Save Configuration</span>
</button>
<button id="reset-settings" class="btn btn-secondary">
<span class="btn-icon">[RESET]</span>
<span>Reset to Defaults</span>
</button>
</div>
<div class="button-group" style="flex-direction: row; justify-content: flex-end;">
<button id="reset-api-keys" class="btn btn-secondary">
<span>Reset</span>
</button>
<button id="save-api-keys" class="btn btn-primary">
<span>Save API-Keys</span>
</button>
</div>
</div>
</div>
</div>
<div id="export-modal" class="modal">
<div class="modal-content">
<div class="modal-header">
<h3>Export Options</h3>
<button id="export-modal-close" class="modal-close">[×]</button>
</div>
<div class="modal-body">
<div class="modal-details">
<section class="modal-section">
<details open>
<summary>
<span>📊 Available Exports</span>
</summary>
<div class="modal-section-content">
<div class="button-group" style="margin-top: 1rem;">
<button id="export-graph-json" class="btn btn-primary">
<span class="btn-icon">[JSON]</span>
<span>Export Graph Data</span>
</button>
<div class="status-row" style="margin-top: 0.5rem;">
<span class="status-label">Complete graph data with forensic audit trail,
provider statistics, and scan metadata in JSON format for analysis and
archival.</span>
</div>
<button id="export-targets-txt" class="btn btn-primary" style="margin-top: 1rem;">
<span class="btn-icon">[TXT]</span>
<span>Export Targets</span>
</button>
<div class="status-row" style="margin-top: 0.5rem;">
<span class="status-label">A simple text file containing all discovered domains and IP addresses.</span>
</div>
<button id="export-executive-summary" class="btn btn-primary" style="margin-top: 1rem;">
<span class="btn-icon">[TXT]</span>
<span>Export Executive Summary</span>
</button>
<div class="status-row" style="margin-top: 0.5rem;">
<span class="status-label">A natural-language summary of the scan findings.</span>
</div>
</div>
</div>
</details>
</section>
</div>
</div>
</div>
</div>
</div>
<script>
function copyToClipboard(elementId) {
const element = document.getElementById(elementId);
const textToCopy = element.innerText;
navigator.clipboard.writeText(textToCopy).then(() => {
// Optional: Show a success message
console.log('Copied to clipboard');
}).catch(err => {
console.error('Failed to copy: ', err);
});
}
</script>
<script src="{{ url_for('static', filename='js/graph.js') }}"></script>
<script src="{{ url_for('static', filename='js/main.js') }}"></script>
</body>
</html>

View File

@@ -0,0 +1,22 @@
# dnsrecon-reduced/utils/__init__.py
"""
Utility modules for DNSRecon.
Contains helper functions, export management, and supporting utilities.
"""
from .helpers import is_valid_target, _is_valid_domain, _is_valid_ip, get_ip_version, normalize_ip
from .export_manager import export_manager, ExportManager, CustomJSONEncoder
__all__ = [
'is_valid_target',
'_is_valid_domain',
'_is_valid_ip',
'get_ip_version',
'normalize_ip',
'export_manager',
'ExportManager',
'CustomJSONEncoder'
]
__version__ = "1.0.0"
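# --- Illustrative sketch (not part of this diff) ---
# get_ip_version and normalize_ip live in utils/helpers.py (not shown here);
# a minimal implementation on top of the standard-library ipaddress module
# could look like:
#
#     import ipaddress
#
#     def get_ip_version(ip: str):
#         """Return 4 or 6 for a valid address, None otherwise."""
#         try:
#             return ipaddress.ip_address(ip.strip()).version
#         except ValueError:
#             return None
#
#     def normalize_ip(ip: str):
#         """Return the canonical (compressed) form, or None if invalid."""
#         try:
#             return str(ipaddress.ip_address(ip.strip()))
#         except ValueError:
#             return None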

849
utils/export_manager.py Normal file
View File

@@ -0,0 +1,849 @@
# dnsrecon-reduced/utils/export_manager.py
"""
Centralized export functionality for DNSRecon.
Handles all data export operations with forensic integrity and proper formatting.
ENHANCED: Professional forensic executive summary generation for court-ready documentation.
"""
import json
from datetime import datetime, timezone
from typing import Dict, Any, List, Optional, Set, Tuple
from decimal import Decimal
from collections import defaultdict, Counter
import networkx as nx
from utils.helpers import _is_valid_domain, _is_valid_ip
class ExportManager:
"""
Centralized manager for all DNSRecon export operations.
Maintains forensic integrity and provides consistent export formats.
ENHANCED: Advanced forensic analysis and professional reporting capabilities.
"""
def __init__(self):
"""Initialize export manager."""
pass
def export_scan_results(self, scanner) -> Dict[str, Any]:
"""
Export complete scan results with forensic metadata.
Args:
scanner: Scanner instance with completed scan data
Returns:
Complete scan results dictionary
"""
graph_data = self.export_graph_json(scanner.graph)
audit_trail = scanner.logger.export_audit_trail()
provider_stats = {}
for provider in scanner.providers:
provider_stats[provider.get_name()] = provider.get_statistics()
results = {
'scan_metadata': {
'target_domain': scanner.current_target,
'max_depth': scanner.max_depth,
'final_status': scanner.status,
'total_indicators_processed': scanner.indicators_processed,
'enabled_providers': list(provider_stats.keys()),
'session_id': scanner.session_id
},
'graph_data': graph_data,
'forensic_audit': audit_trail,
'provider_statistics': provider_stats,
'scan_summary': scanner.logger.get_forensic_summary()
}
# Add export metadata
results['export_metadata'] = {
'export_timestamp': datetime.now(timezone.utc).isoformat(),
'export_version': '1.0.0',
'forensic_integrity': 'maintained'
}
return results
def export_targets_list(self, scanner) -> str:
"""
Export all discovered domains and IPs as a text file.
Args:
scanner: Scanner instance with graph data
Returns:
Newline-separated list of targets
"""
nodes = scanner.graph.get_graph_data().get('nodes', [])
targets = {
node['id'] for node in nodes
if _is_valid_domain(node['id']) or _is_valid_ip(node['id'])
}
return "\n".join(sorted(list(targets)))
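# Example output (illustrative):
#   93.184.216.34
#   example.com
#   www.example.com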
def generate_executive_summary(self, scanner) -> str:
"""
ENHANCED: Generate a comprehensive, court-ready forensic executive summary.
Args:
scanner: Scanner instance with completed scan data
Returns:
Professional forensic summary formatted for investigative use
"""
report = []
now = datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S UTC')
# Get comprehensive data for analysis
graph_data = scanner.graph.get_graph_data()
nodes = graph_data.get('nodes', [])
edges = graph_data.get('edges', [])
audit_trail = scanner.logger.export_audit_trail()
# Perform advanced analysis
infrastructure_analysis = self._analyze_infrastructure_patterns(nodes, edges)
# === HEADER AND METADATA ===
report.extend([
"=" * 80,
"DIGITAL INFRASTRUCTURE RECONNAISSANCE REPORT",
"=" * 80,
"",
f"Report Generated: {now}",
f"Investigation Target: {scanner.current_target}",
f"Analysis Session: {scanner.session_id}",
f"Scan Depth: {scanner.max_depth} levels",
f"Final Status: {scanner.status.upper()}",
""
])
# === EXECUTIVE SUMMARY ===
report.extend([
"EXECUTIVE SUMMARY",
"-" * 40,
"",
f"This report presents the findings of a comprehensive passive reconnaissance analysis "
f"conducted against the target '{scanner.current_target}'. The investigation employed "
f"multiple intelligence sources and discovered {len(nodes)} distinct digital entities "
f"connected through {len(edges)} verified relationships.",
"",
f"The analysis reveals a digital infrastructure comprising {infrastructure_analysis['domains']} "
f"domain names, {infrastructure_analysis['ips']} IP addresses, and {infrastructure_analysis['isps']} "
f"infrastructure service providers. Certificate transparency analysis identified "
f"{infrastructure_analysis['cas']} certificate authorities managing the cryptographic "
f"infrastructure for the investigated entities.",
"",
])
# === METHODOLOGY ===
report.extend([
"INVESTIGATIVE METHODOLOGY",
"-" * 40,
"",
"This analysis employed passive reconnaissance techniques using the following verified data sources:",
""
])
provider_info = {
'dns': 'Standard DNS resolution and reverse DNS lookups',
'crtsh': 'Certificate Transparency database analysis via crt.sh',
'shodan': 'Internet-connected device intelligence via Shodan API'
}
for provider in scanner.providers:
provider_name = provider.get_name()
stats = provider.get_statistics()
description = provider_info.get(provider_name, f'{provider_name} data provider')
report.extend([
f"{provider.get_display_name()}: {description}",
f" - Total Requests: {stats['total_requests']}",
f" - Success Rate: {stats['success_rate']:.1f}%",
f" - Relationships Discovered: {stats['relationships_found']}",
""
])
# === INFRASTRUCTURE ANALYSIS ===
report.extend([
"INFRASTRUCTURE ANALYSIS",
"-" * 40,
""
])
# Domain Analysis
if infrastructure_analysis['domains'] > 0:
report.extend([
f"Domain Name Infrastructure ({infrastructure_analysis['domains']} entities):",
""
])
domain_details = self._get_detailed_domain_analysis(nodes, edges)
for domain_info in domain_details[:10]: # Top 10 domains
report.extend([
f"{domain_info['domain']}",
f" - Type: {domain_info['classification']}",
f" - Connected IPs: {len(domain_info['ips'])}",
f" - Certificate Status: {domain_info['cert_status']}",
f" - Relationship Confidence: {domain_info['avg_confidence']:.2f}",
])
if domain_info['security_notes']:
report.extend([
f" - Security Notes: {', '.join(domain_info['security_notes'])}",
])
report.append("")
# IP Address Analysis
if infrastructure_analysis['ips'] > 0:
report.extend([
f"IP Address Infrastructure ({infrastructure_analysis['ips']} entities):",
""
])
ip_details = self._get_detailed_ip_analysis(nodes, edges)
for ip_info in ip_details[:8]: # Top 8 IPs
report.extend([
f"{ip_info['ip']} ({ip_info['version']})",
f" - Associated Domains: {len(ip_info['domains'])}",
f" - ISP: {ip_info['isp'] or 'Unknown'}",
f" - Geographic Location: {ip_info['location'] or 'Not determined'}",
])
if ip_info['open_ports']:
report.extend([
f" - Exposed Services: {', '.join(map(str, ip_info['open_ports'][:5]))}"
+ (f" (and {len(ip_info['open_ports']) - 5} more)" if len(ip_info['open_ports']) > 5 else ""),
])
report.append("")
# === RELATIONSHIP ANALYSIS ===
report.extend([
"ENTITY RELATIONSHIP ANALYSIS",
"-" * 40,
""
])
# Network topology insights
topology = self._analyze_network_topology(nodes, edges)
report.extend([
f"Network Topology Assessment:",
f"• Central Hubs: {len(topology['hubs'])} entities serve as primary connection points",
f"• Isolated Clusters: {len(topology['clusters'])} distinct groupings identified",
f"• Relationship Density: {topology['density']:.3f} (0=sparse, 1=fully connected)",
f"• Average Path Length: {topology['avg_path_length']:.2f} degrees of separation",
""
])
# Key relationships
key_relationships = self._identify_key_relationships(edges)
if key_relationships:
report.extend([
"Critical Infrastructure Relationships:",
""
])
for rel in key_relationships[:8]: # Top 8 relationships
confidence_desc = self._describe_confidence(rel['confidence'])
report.extend([
f"{rel['source']}{rel['target']}",
f" - Relationship: {self._humanize_relationship_type(rel['type'])}",
f" - Evidence Strength: {confidence_desc} ({rel['confidence']:.2f})",
f" - Discovery Method: {rel['provider']}",
""
])
# === CERTIFICATE ANALYSIS ===
cert_analysis = self._analyze_certificate_infrastructure(nodes)
if cert_analysis['total_certs'] > 0:
report.extend([
"CERTIFICATE INFRASTRUCTURE ANALYSIS",
"-" * 40,
"",
f"Certificate Status Overview:",
f"• Total Certificates Analyzed: {cert_analysis['total_certs']}",
f"• Valid Certificates: {cert_analysis['valid']}",
f"• Expired/Invalid: {cert_analysis['expired']}",
f"• Certificate Authorities: {len(cert_analysis['cas'])}",
""
])
if cert_analysis['cas']:
report.extend([
"Certificate Authority Distribution:",
""
])
for ca, count in cert_analysis['cas'].most_common(5):
report.extend([
f"{ca}: {count} certificate(s)",
])
report.append("")
# === TECHNICAL APPENDIX ===
report.extend([
"TECHNICAL APPENDIX",
"-" * 40,
"",
"Data Quality Assessment:",
f"• Total API Requests: {audit_trail.get('session_metadata', {}).get('total_requests', 0)}",
f"• Data Providers Used: {len(audit_trail.get('session_metadata', {}).get('providers_used', []))}",
f"• Relationship Confidence Distribution:",
])
# Confidence distribution
confidence_dist = self._calculate_confidence_distribution(edges)
for level, count in confidence_dist.items():
percentage = (count / len(edges) * 100) if edges else 0
report.extend([
f" - {level.title()} Confidence (≥{self._get_confidence_threshold(level)}): {count} ({percentage:.1f}%)",
])
report.extend([
"",
"Correlation Analysis:",
f"• Entity Correlations Identified: {len(scanner.graph.correlation_index)}",
f"• Cross-Reference Validation: {self._count_cross_validated_relationships(edges)} relationships verified by multiple sources",
""
])
# === CONCLUSION ===
report.extend([
"CONCLUSION",
"-" * 40,
"",
self._generate_conclusion(scanner.current_target, infrastructure_analysis,
len(edges)),
"",
"This analysis was conducted using passive reconnaissance techniques and represents "
"the digital infrastructure observable through public data sources at the time of investigation. "
"All findings are supported by verifiable technical evidence and documented through "
"a complete audit trail maintained for forensic integrity.",
"",
f"Investigation completed: {now}",
f"Report authenticated by: DNSRecon v{self._get_version()}",
"",
"=" * 80,
"END OF REPORT",
"=" * 80
])
return "\n".join(report)
def _analyze_infrastructure_patterns(self, nodes: List[Dict], edges: List[Dict]) -> Dict[str, Any]:
"""Analyze infrastructure patterns and classify entities."""
analysis = {
'domains': len([n for n in nodes if n['type'] == 'domain']),
'ips': len([n for n in nodes if n['type'] == 'ip']),
'isps': len([n for n in nodes if n['type'] == 'isp']),
'cas': len([n for n in nodes if n['type'] == 'ca']),
'correlations': len([n for n in nodes if n['type'] == 'correlation_object'])
}
return analysis
def _get_detailed_domain_analysis(self, nodes: List[Dict], edges: List[Dict]) -> List[Dict[str, Any]]:
"""Generate detailed analysis for each domain."""
domain_nodes = [n for n in nodes if n['type'] == 'domain']
domain_analysis = []
for domain in domain_nodes:
# Find connected IPs
connected_ips = [e['to'] for e in edges
if e['from'] == domain['id'] and _is_valid_ip(e['to'])]
# Determine classification
classification = "Primary Domain"
if domain['id'].startswith('www.'):
classification = "Web Interface"
elif any(subdomain in domain['id'] for subdomain in ['api.', 'mail.', 'smtp.']):
classification = "Service Endpoint"
elif domain['id'].count('.') > 1:
classification = "Subdomain"
# Certificate status
cert_status = self._determine_certificate_status(domain)
# Security notes
security_notes = []
if cert_status == "Expired/Invalid":
security_notes.append("Certificate validation issues")
if len(connected_ips) == 0:
security_notes.append("No IP resolution found")
if len(connected_ips) > 5:
security_notes.append("Multiple IP endpoints")
# Average confidence
domain_edges = [e for e in edges if e['from'] == domain['id']]
avg_confidence = sum(e['confidence_score'] for e in domain_edges) / len(domain_edges) if domain_edges else 0
domain_analysis.append({
'domain': domain['id'],
'classification': classification,
'ips': connected_ips,
'cert_status': cert_status,
'security_notes': security_notes,
'avg_confidence': avg_confidence
})
# Sort by number of connections (most connected first)
return sorted(domain_analysis, key=lambda x: len(x['ips']), reverse=True)
def _get_detailed_ip_analysis(self, nodes: List[Dict], edges: List[Dict]) -> List[Dict[str, Any]]:
"""Generate detailed analysis for each IP address."""
ip_nodes = [n for n in nodes if n['type'] == 'ip']
ip_analysis = []
for ip in ip_nodes:
# Find connected domains
connected_domains = [e['from'] for e in edges
if e['to'] == ip['id'] and _is_valid_domain(e['from'])]
# Extract metadata from attributes
ip_version = "IPv4"
location = None
isp = None
open_ports = []
for attr in ip.get('attributes', []):
if attr.get('name') == 'country':
location = attr.get('value')
elif attr.get('name') == 'org':
isp = attr.get('value')
elif attr.get('name') == 'shodan_open_port':
open_ports.append(attr.get('value'))
elif 'ipv6' in str(attr.get('metadata', {})).lower():
ip_version = "IPv6"
# Find ISP from relationships
if not isp:
isp_edges = [e for e in edges if e['from'] == ip['id'] and e['label'].endswith('_isp')]
isp = isp_edges[0]['to'] if isp_edges else None
ip_analysis.append({
'ip': ip['id'],
'version': ip_version,
'domains': connected_domains,
'isp': isp,
'location': location,
'open_ports': open_ports
})
# Sort by number of connected domains
return sorted(ip_analysis, key=lambda x: len(x['domains']), reverse=True)
def _analyze_network_topology(self, nodes: List[Dict], edges: List[Dict]) -> Dict[str, Any]:
"""Analyze network topology and identify key structural patterns."""
if not nodes or not edges:
return {'hubs': [], 'clusters': [], 'density': 0, 'avg_path_length': 0}
# Create NetworkX graph
G = nx.DiGraph()
for node in nodes:
G.add_node(node['id'])
for edge in edges:
G.add_edge(edge['from'], edge['to'])
# Convert to undirected for certain analyses
G_undirected = G.to_undirected()
# Identify hubs (nodes with high degree centrality)
centrality = nx.degree_centrality(G_undirected)
hub_threshold = max(centrality.values()) * 0.7 if centrality else 0
hubs = [node for node, cent in centrality.items() if cent >= hub_threshold]
# Find connected components (clusters)
clusters = list(nx.connected_components(G_undirected))
# Calculate density
density = nx.density(G_undirected)
# Calculate average path length (for largest component)
if G_undirected.number_of_nodes() > 1:
largest_cc = max(nx.connected_components(G_undirected), key=len)
subgraph = G_undirected.subgraph(largest_cc)
try:
avg_path_length = nx.average_shortest_path_length(subgraph)
except Exception:  # path length is undefined for degenerate components
avg_path_length = 0
else:
avg_path_length = 0
return {
'hubs': hubs,
'clusters': clusters,
'density': density,
'avg_path_length': avg_path_length
}
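# Worked example (illustrative): for four nodes arranged in a star (one hub, three
# leaves), nx.degree_centrality gives the hub 3/3 = 1.0 and each leaf 1/3 ≈ 0.33,
# so with the 0.7 threshold only the hub is reported; the undirected density is
# 2m / (n(n-1)) = 6 / 12 = 0.5 for the same graph.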
def _identify_key_relationships(self, edges: List[Dict]) -> List[Dict[str, Any]]:
"""Identify the most significant relationships in the infrastructure."""
# Score relationships by confidence and type importance
relationship_importance = {
'dns_a_record': 0.9,
'dns_aaaa_record': 0.9,
'crtsh_cert_issuer': 0.8,
'shodan_isp': 0.8,
'crtsh_san_certificate': 0.7,
'dns_mx_record': 0.7,
'dns_ns_record': 0.7
}
scored_edges = []
for edge in edges:
base_confidence = edge.get('confidence_score', 0)
type_weight = relationship_importance.get(edge.get('label', ''), 0.5)
combined_score = (base_confidence * 0.7) + (type_weight * 0.3)
scored_edges.append({
'source': edge['from'],
'target': edge['to'],
'type': edge.get('label', ''),
'confidence': base_confidence,
'provider': edge.get('source_provider', ''),
'score': combined_score
})
# Return top relationships by score
return sorted(scored_edges, key=lambda x: x['score'], reverse=True)
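# Worked example (illustrative): a dns_a_record edge with confidence 0.9 scores
# (0.9 * 0.7) + (0.9 * 0.3) = 0.90, while an unlisted relationship type with the
# same confidence falls back to the 0.5 weight: (0.9 * 0.7) + (0.5 * 0.3) = 0.78.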
def _analyze_certificate_infrastructure(self, nodes: List[Dict]) -> Dict[str, Any]:
"""Analyze certificate infrastructure across all domains."""
domain_nodes = [n for n in nodes if n['type'] == 'domain']
ca_nodes = [n for n in nodes if n['type'] == 'ca']
valid_certs = 0
expired_certs = 0
total_certs = 0
cas = Counter()
for domain in domain_nodes:
for attr in domain.get('attributes', []):
if attr.get('name') == 'cert_is_currently_valid':
total_certs += 1
if attr.get('value') is True:
valid_certs += 1
else:
expired_certs += 1
elif attr.get('name') == 'cert_issuer_name':
issuer = attr.get('value')
if issuer:
cas[issuer] += 1
return {
'total_certs': total_certs,
'valid': valid_certs,
'expired': expired_certs,
'cas': cas
}
def _has_expired_certificates(self, domain_node: Dict) -> bool:
"""Check if domain has expired certificates."""
for attr in domain_node.get('attributes', []):
if (attr.get('name') == 'cert_is_currently_valid' and
attr.get('value') is False):
return True
return False
def _determine_certificate_status(self, domain_node: Dict) -> str:
"""Determine the certificate status for a domain."""
has_valid = False
has_expired = False
has_any = False
for attr in domain_node.get('attributes', []):
if attr.get('name') == 'cert_is_currently_valid':
has_any = True
if attr.get('value') is True:
has_valid = True
else:
has_expired = True
if not has_any:
return "No Certificate Data"
elif has_valid and not has_expired:
return "Valid"
elif has_expired and not has_valid:
return "Expired/Invalid"
else:
return "Mixed Status"
def _describe_confidence(self, confidence: float) -> str:
"""Convert confidence score to descriptive text."""
if confidence >= 0.9:
return "Very High"
elif confidence >= 0.8:
return "High"
elif confidence >= 0.6:
return "Medium"
elif confidence >= 0.4:
return "Low"
else:
return "Very Low"
def _humanize_relationship_type(self, rel_type: str) -> str:
"""Convert technical relationship types to human-readable descriptions."""
type_map = {
'dns_a_record': 'DNS A Record Resolution',
'dns_aaaa_record': 'DNS AAAA Record (IPv6) Resolution',
'dns_mx_record': 'Email Server (MX) Configuration',
'dns_ns_record': 'Name Server Delegation',
'dns_cname_record': 'DNS Alias (CNAME) Resolution',
'crtsh_cert_issuer': 'SSL Certificate Issuer Relationship',
'crtsh_san_certificate': 'Shared SSL Certificate',
'shodan_isp': 'Internet Service Provider Assignment',
'shodan_a_record': 'IP-to-Domain Resolution (Shodan)',
'dns_ptr_record': 'Reverse DNS Resolution'
}
return type_map.get(rel_type, rel_type.replace('_', ' ').title())
def _calculate_confidence_distribution(self, edges: List[Dict]) -> Dict[str, int]:
"""Calculate confidence score distribution."""
distribution = {'high': 0, 'medium': 0, 'low': 0}
for edge in edges:
confidence = edge.get('confidence_score', 0)
if confidence >= 0.8:
distribution['high'] += 1
elif confidence >= 0.6:
distribution['medium'] += 1
else:
distribution['low'] += 1
return distribution
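# Worked example (illustrative): edges with confidence scores [0.9, 0.7, 0.5]
# yield {'high': 1, 'medium': 1, 'low': 1}; 0.8 and 0.6 are the inclusive lower
# bounds of the high and medium buckets.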
def _get_confidence_threshold(self, level: str) -> str:
"""Get confidence threshold for a level."""
thresholds = {'high': '0.80', 'medium': '0.60', 'low': '0.00'}
return thresholds.get(level, '0.00')
def _count_cross_validated_relationships(self, edges: List[Dict]) -> int:
"""Count relationships verified by multiple providers."""
# Group edges by source-target pair
edge_pairs = defaultdict(list)
for edge in edges:
pair_key = f"{edge['from']}->{edge['to']}"
edge_pairs[pair_key].append(edge.get('source_provider', ''))
# Count pairs with multiple providers
cross_validated = 0
for pair, providers in edge_pairs.items():
if len(set(providers)) > 1: # Multiple unique providers
cross_validated += 1
return cross_validated
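# Worked example (illustrative): if a->b was reported by both 'dns' and 'shodan'
# while c->d was reported by 'dns' twice, only a->b counts as cross-validated,
# because c->d has a single unique provider.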
def _generate_security_recommendations(self, infrastructure_analysis: Dict) -> List[str]:
"""Generate actionable security recommendations."""
recommendations = []
# Check for complex infrastructure
if infrastructure_analysis['ips'] > 10:
recommendations.append(
"Document and validate the necessity of extensive IP address infrastructure"
)
if infrastructure_analysis['correlations'] > 5:
recommendations.append(
"Investigate shared infrastructure components for operational security implications"
)
if not recommendations:
recommendations.append(
"Continue monitoring for changes in the identified digital infrastructure"
)
return recommendations
def _generate_conclusion(self, target: str, infrastructure_analysis: Dict, total_relationships: int) -> str:
"""Generate a professional conclusion for the report."""
conclusion_parts = [
f"The passive reconnaissance analysis of '{target}' has successfully mapped "
f"a digital infrastructure ecosystem consisting of {infrastructure_analysis['domains']} "
f"domain names, {infrastructure_analysis['ips']} IP addresses, and "
f"{total_relationships} verified inter-entity relationships."
]
conclusion_parts.append(
"All findings in this report are based on publicly available information and "
"passive reconnaissance techniques. The analysis maintains full forensic integrity "
"with complete audit trails for all data collection activities."
)
return " ".join(conclusion_parts)
def _count_bidirectional_relationships(self, graph) -> int:
"""Count bidirectional relationships in the graph."""
count = 0
for u, v in graph.edges():
if graph.has_edge(v, u):
count += 1
return count // 2 # Each pair counted twice
def _identify_hub_nodes(self, graph, nodes: List[Dict]) -> List[str]:
"""Identify nodes that serve as major hubs in the network."""
if not graph.nodes():
return []
degree_centrality = nx.degree_centrality(graph.to_undirected())
threshold = max(degree_centrality.values()) * 0.8 if degree_centrality else 0
return [node for node, centrality in degree_centrality.items()
if centrality >= threshold]
def _get_version(self) -> str:
"""Get DNSRecon version for report authentication."""
return "1.0.0-forensic"
def export_graph_json(self, graph_manager) -> Dict[str, Any]:
"""
Export complete graph data as a JSON-serializable dictionary.
Moved from GraphManager to centralize export functionality.
Args:
graph_manager: GraphManager instance with graph data
Returns:
Complete graph data with export metadata
"""
graph_data = nx.node_link_data(graph_manager.graph, edges="edges")
return {
'export_metadata': {
'export_timestamp': datetime.now(timezone.utc).isoformat(),
'graph_creation_time': graph_manager.creation_time,
'last_modified': graph_manager.last_modified,
'total_nodes': graph_manager.get_node_count(),
'total_edges': graph_manager.get_edge_count(),
'graph_format': 'dnsrecon_v1_unified_model'
},
'graph': graph_data,
'statistics': graph_manager.get_statistics()
}
def serialize_to_json(self, data: Dict[str, Any], indent: int = 2) -> str:
"""
Serialize data to JSON with custom handling for non-serializable objects.
Args:
data: Data to serialize
indent: JSON indentation level
Returns:
JSON string representation
"""
try:
return json.dumps(data, indent=indent, cls=CustomJSONEncoder, ensure_ascii=False)
except Exception:
# Fallback to aggressive cleaning
cleaned_data = self._clean_for_json(data)
return json.dumps(cleaned_data, indent=indent, ensure_ascii=False)
def _clean_for_json(self, obj, max_depth: int = 10, current_depth: int = 0) -> Any:
"""
Recursively clean an object to make it JSON serializable.
Handles circular references and problematic object types.
Args:
obj: Object to clean
max_depth: Maximum recursion depth
current_depth: Current recursion depth
Returns:
JSON-serializable object
"""
if current_depth > max_depth:
return f"<max_depth_exceeded_{type(obj).__name__}>"
if obj is None or isinstance(obj, (bool, int, float, str)):
return obj
elif isinstance(obj, datetime):
return obj.isoformat()
elif isinstance(obj, (set, frozenset)):
return list(obj)
elif isinstance(obj, dict):
cleaned = {}
for key, value in obj.items():
try:
# Ensure key is string
clean_key = str(key) if not isinstance(key, str) else key
cleaned[clean_key] = self._clean_for_json(value, max_depth, current_depth + 1)
except Exception:
cleaned[str(key)] = f"<serialization_error_{type(value).__name__}>"
return cleaned
elif isinstance(obj, (list, tuple)):
cleaned = []
for item in obj:
try:
cleaned.append(self._clean_for_json(item, max_depth, current_depth + 1))
except Exception:
cleaned.append(f"<serialization_error_{type(item).__name__}>")
return cleaned
elif hasattr(obj, '__dict__'):
try:
return self._clean_for_json(obj.__dict__, max_depth, current_depth + 1)
except Exception:
return str(obj)
elif hasattr(obj, 'value'):
# For enum-like objects
return obj.value
else:
return str(obj)
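# Usage sketch (illustrative; timestamp shown is an example): sets and datetimes
# are coerced into JSON-friendly forms, and set ordering is not guaranteed.
#   ExportManager()._clean_for_json({'seen': {'a', 'b'}, 'at': datetime.now(timezone.utc)})
#   # -> {'seen': ['a', 'b'], 'at': '2025-09-20T14:30:00+00:00'}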
def generate_filename(self, target: str, export_type: str, timestamp: Optional[datetime] = None) -> str:
"""
Generate standardized filename for exports.
Args:
target: Target domain/IP being scanned
export_type: Type of export (json, txt, summary)
timestamp: Optional timestamp (defaults to now)
Returns:
Formatted filename with forensic naming convention
"""
if timestamp is None:
timestamp = datetime.now(timezone.utc)
timestamp_str = timestamp.strftime('%Y%m%d_%H%M%S')
safe_target = "".join(c for c in target if c.isalnum() or c in ('-', '_', '.')).rstrip()
extension_map = {
'json': 'json',
'txt': 'txt',
'summary': 'txt',
'targets': 'txt'
}
extension = extension_map.get(export_type, 'txt')
return f"dnsrecon_{export_type}_{safe_target}_{timestamp_str}.{extension}"
class CustomJSONEncoder(json.JSONEncoder):
"""Custom JSON encoder to handle non-serializable objects."""
def default(self, obj):
if isinstance(obj, datetime):
return obj.isoformat()
elif isinstance(obj, set):
return list(obj)
elif isinstance(obj, Decimal):
return float(obj)
elif hasattr(obj, '__dict__'):
# For custom objects, try to serialize their dict representation
try:
return obj.__dict__
except Exception:
return str(obj)
elif hasattr(obj, 'value') and hasattr(obj, 'name'):
# For enum objects
return obj.value
else:
# For any other non-serializable object, convert to string
return str(obj)
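# Usage sketch (illustrative): the encoder plugs into the standard json module.
#   json.dumps({'at': datetime.now(timezone.utc), 'tags': {'a'}}, cls=CustomJSONEncoder)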
# Global export manager instance
export_manager = ExportManager()

View File

@@ -1,3 +1,8 @@
# dnsrecon-reduced/utils/helpers.py
import ipaddress
from typing import Union
def _is_valid_domain(domain: str) -> bool:
"""
Basic domain validation.
@@ -26,32 +31,27 @@ def _is_valid_domain(domain: str) -> bool:
def _is_valid_ip(ip: str) -> bool:
"""
IP address validation supporting both IPv4 and IPv6.
Args:
ip: IP address string to validate
Returns:
True if IP appears valid (IPv4 or IPv6)
"""
if not ip:
return False
try:
# This handles both IPv4 and IPv6 validation
ipaddress.ip_address(ip.strip())
return True
except (ValueError, AttributeError):
return False
def is_valid_target(target: str) -> bool:
"""
Checks if the target is a valid domain or IP address (IPv4/IPv6).
Args:
target: The target string to validate.
@@ -59,4 +59,36 @@ def is_valid_target(target: str) -> bool:
Returns:
True if the target is a valid domain or IP, False otherwise.
"""
return _is_valid_domain(target) or _is_valid_ip(target)
def get_ip_version(ip: str) -> Union[int, None]:
"""
Get the IP version (4 or 6) of a valid IP address.
Args:
ip: IP address string
Returns:
4 for IPv4, 6 for IPv6, None if invalid
"""
try:
addr = ipaddress.ip_address(ip.strip())
return addr.version
except (ValueError, AttributeError):
return None
def normalize_ip(ip: str) -> Union[str, None]:
"""
Normalize an IP address to its canonical form.
Args:
ip: IP address string
Returns:
Normalized IP address string, None if invalid
"""
try:
addr = ipaddress.ip_address(ip.strip())
return str(addr)
except (ValueError, AttributeError):
return None
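# Usage sketch (illustrative): get_ip_version("192.0.2.1") returns 4, and
# normalize_ip("2001:0DB8::0001") returns the canonical "2001:db8::1", keeping
# IP node identifiers consistent regardless of how a provider formats them.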