Compare commits


28 Commits

| Author | SHA1 | Message | Date |
| :--- | :--- | :--- | :--- |
| overcuriousity | 0e92ec6e9a | readme | 2025-09-16 01:00:21 +02:00 |
| overcuriousity | baa57bfac2 | update the shodan to use only ip | 2025-09-16 00:57:24 +02:00 |
| overcuriousity | f0f80be955 | finalize pop-out | 2025-09-16 00:32:46 +02:00 |
| overcuriousity | ecc143ddbb | extraction feature works | 2025-09-16 00:08:27 +02:00 |
| overcuriousity | 2c48316477 | extract from node feature | 2025-09-16 00:01:24 +02:00 |
| overcuriousity | fc098aed28 | remove cache | 2025-09-15 22:48:49 +02:00 |
|  | 9285226cbc | Merge pull request 'new-scheduler' (#1) from new-scheduler into main (Reviewed-on: #1) | 2025-09-15 20:45:47 +00:00 |
| overcuriousity | 350055fcec | successfully implemented scheduler | 2025-09-15 22:44:38 +02:00 |
| overcuriousity | 4a5ecf7a37 | new highest-priority-first scheduler | 2025-09-15 22:21:17 +02:00 |
| overcuriousity | 71b2855d01 | fixes to iteration context menu | 2025-09-15 21:37:19 +02:00 |
| overcuriousity | 93a258170a | context menu option | 2025-09-15 21:27:21 +02:00 |
| overcuriousity | e2d4e12057 | also allow ip lookups in scan | 2025-09-15 21:00:57 +02:00 |
| overcuriousity | c076ee028f | main page refinement | 2025-09-15 20:44:45 +02:00 |
| overcuriousity | cbfac0922a | fix node modals | 2025-09-15 20:27:43 +02:00 |
| overcuriousity | 881f7b74e5 | fix graph delete&revert | 2025-09-15 20:20:15 +02:00 |
| overcuriousity | c347581a6c | fix graph trueRoot | 2025-09-15 20:11:40 +02:00 |
| overcuriousity | 30ee21f087 | revert graph.js refactor | 2025-09-15 18:06:11 +02:00 |
| overcuriousity | 2496ca26a5 | small fixes | 2025-09-15 17:52:09 +02:00 |
| overcuriousity | 8aa3c4933e | fix large entity | 2025-09-15 14:12:02 +02:00 |
| overcuriousity | fc326a66c8 | fix large entity | 2025-09-15 13:58:30 +02:00 |
| overcuriousity | 51902e3155 | it | 2025-09-15 13:35:58 +02:00 |
| overcuriousity | a261d706c8 | update style | 2025-09-15 01:10:28 +02:00 |
| overcuriousity | 2410e689b8 | visual enhancements | 2025-09-15 00:25:27 +02:00 |
| overcuriousity | 62470673fe | integrate checkbox filters | 2025-09-14 23:54:27 +02:00 |
| overcuriousity | 2658bd148b | context menu | 2025-09-14 23:42:45 +02:00 |
| overcuriousity | f02381910d | Merge branch 'main' of https://git.cc24.dev/mstoeck3/dnsrecon | 2025-09-14 23:10:16 +02:00 |
| overcuriousity | 674ac59c98 | iteration | 2025-09-14 23:09:38 +02:00 |
|  | 434d1f4803 | dump.rdb deleted | 2025-09-14 20:55:33 +00:00 |
20 changed files with 4622 additions and 1589 deletions

.env.example

@ -29,6 +29,6 @@ MAX_CONCURRENT_REQUESTS=5
# The number of results from a provider that triggers the "large entity" grouping.
LARGE_ENTITY_THRESHOLD=100
# The number of times to retry a target if a provider fails.
MAX_RETRIES_PER_TARGET=3
MAX_RETRIES_PER_TARGET=8
# How long cached provider responses are stored (in hours).
CACHE_EXPIRY_HOURS=12

1
.gitignore vendored

@ -169,3 +169,4 @@ cython_debug/
#.idea/
dump.rdb
cache/

226
README.md

@ -4,28 +4,32 @@ DNSRecon is an interactive, passive reconnaissance tool designed to map adversar
**Current Status: Phase 2 Implementation**
- ✅ Core infrastructure and graph engine
- ✅ Multi-provider support (crt.sh, DNS, Shodan)
- ✅ Session-based multi-user support
- ✅ Real-time web interface with interactive visualization
- ✅ Forensic logging system and JSON export
* ✅ Core infrastructure and graph engine
* ✅ Multi-provider support (crt.sh, DNS, Shodan)
* ✅ Session-based multi-user support
* ✅ Real-time web interface with interactive visualization
* ✅ Forensic logging system and JSON export
-----
## Features
- **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
- **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
- **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
- **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
- **Session Management**: Supports concurrent user sessions with isolated scanner instances.
* **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
* **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
* **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
* **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
* **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
* **Session Management**: Supports concurrent user sessions with isolated scanner instances.
-----
## Installation
### Prerequisites
- Python 3.8 or higher
- A modern web browser with JavaScript enabled
- (Recommended) A Linux host for running the application and the optional DNS cache.
* Python 3.8 or higher
* A modern web browser with JavaScript enabled
* (Recommended) A Linux host for running the application and the optional DNS cache.
### 1\. Clone the Project
@ -44,156 +48,50 @@ source venv/bin/activate
pip install -r requirements.txt
```
### 3\. (Optional but Recommended) Set up a Local DNS Caching Resolver
The `requirements.txt` file contains the following dependencies:
Running a local DNS caching resolver can significantly speed up DNS queries and reduce your network footprint. Here's how to set up `unbound` on a Debian-based Linux distribution (like Ubuntu).
* Flask\>=2.3.3
* networkx\>=3.1
* requests\>=2.31.0
* python-dateutil\>=2.8.2
* Werkzeug\>=2.3.7
* urllib3\>=2.0.0
* dnspython\>=2.4.2
* gunicorn
* redis
* python-dotenv
**a. Install Unbound:**
-----
## Configuration
DNSRecon is configured using a `.env` file. You can copy the provided example file and edit it to suit your needs:
```bash
sudo apt update
sudo apt install unbound -y
cp .env.example .env
```
**b. Configure Unbound:**
Create a new configuration file for DNSRecon:
The following environment variables are available for configuration:
```bash
sudo nano /etc/unbound/unbound.conf.d/dnsrecon.conf
```
| Variable | Description | Default |
| :--- | :--- | :--- |
| `SHODAN_API_KEY` | Your Shodan API key. | |
| `FLASK_SECRET_KEY`| A strong, random secret key for session security. | `your-very-secret-and-random-key-here` |
| `FLASK_HOST` | The host address for the Flask application. | `127.0.0.1` |
| `FLASK_PORT` | The port for the Flask application. | `5000` |
| `FLASK_DEBUG` | Enable or disable Flask's debug mode. | `True` |
| `FLASK_PERMANENT_SESSION_LIFETIME_HOURS`| How long a user's session in the browser lasts (in hours). | `2` |
| `SESSION_TIMEOUT_MINUTES` | How long inactive scanner data is stored in Redis (in minutes). | `60` |
| `DEFAULT_RECURSION_DEPTH` | The default number of levels to recurse when scanning. | `2` |
| `DEFAULT_TIMEOUT` | Default timeout for provider API requests in seconds. | `30` |
| `MAX_CONCURRENT_REQUESTS`| The number of concurrent provider requests to make. | `5` |
| `LARGE_ENTITY_THRESHOLD`| The number of results from a provider that triggers the "large entity" grouping. | `100` |
| `MAX_RETRIES_PER_TARGET`| The number of times to retry a target if a provider fails. | `8` |
| `CACHE_EXPIRY_HOURS`| How long cached provider responses are stored (in hours). | `12` |
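For orientation, here is what a filled-in `.env` might look like using the defaults from the table above (the API key and secret key values are placeholders):

```bash
SHODAN_API_KEY=your_shodan_api_key_here
FLASK_SECRET_KEY=replace-with-a-long-random-string
FLASK_HOST=127.0.0.1
FLASK_PORT=5000
DEFAULT_RECURSION_DEPTH=2
DEFAULT_TIMEOUT=30
MAX_CONCURRENT_REQUESTS=5
LARGE_ENTITY_THRESHOLD=100
MAX_RETRIES_PER_TARGET=8
CACHE_EXPIRY_HOURS=12
```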
Add the following content to the file:
-----
```
server:
# Listen on localhost for all users
interface: 127.0.0.1
access-control: 0.0.0.0/0 refuse
access-control: 127.0.0.0/8 allow
# Enable prefetching of popular items
prefetch: yes
```
**c. Restart Unbound and set it as the default resolver:**
```bash
sudo systemctl restart unbound
sudo systemctl enable unbound
```
To use this resolver for your system, you may need to update your network settings to point to `127.0.0.1` as your DNS server.
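One way to do this, assuming no other service (NetworkManager, systemd-resolved) manages `/etc/resolv.conf` on the host:

```bash
# Point the system resolver at the local Unbound instance
echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
```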
**d. Update DNSProvider to use the local resolver:**
In `dnsrecon/providers/dns_provider.py`, you can explicitly set the resolver's nameservers in the `__init__` method:
```python
# dnsrecon/providers/dns_provider.py
class DNSProvider(BaseProvider):
def __init__(self, session_config=None):
"""Initialize DNS provider with session-specific configuration."""
super().__init__(...)
# Configure DNS resolver
self.resolver = dns.resolver.Resolver()
self.resolver.nameservers = ['127.0.0.1'] # Use local caching resolver
self.resolver.timeout = 5
self.resolver.lifetime = 10
```
## Usage (Development)
### 1\. Start the Application
```bash
python app.py
```
### 2\. Open Your Browser
Navigate to `http://127.0.0.1:5000`.
### 3\. Basic Reconnaissance Workflow
1. **Enter Target Domain**: Input a domain like `example.com`.
2. **Select Recursion Depth**: Depth 2 is recommended for most investigations.
3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin.
4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered.
5. **Analyze and Export**: Interact with the graph and download the results when the scan is complete.
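Step 5 can also be scripted: the `app.py` changes below expose a GET `/api/export` route, so a sketch like the following (host and port assume the default Flask settings) should save the session results as JSON:

```bash
curl http://127.0.0.1:5000/api/export -o dnsrecon-results.json
```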
## Production Deployment
To deploy DNSRecon in a production environment, follow these steps:
### 1\. Use a Production WSGI Server
Do not use the built-in Flask development server for production. Use a WSGI server like **Gunicorn**:
```bash
pip install gunicorn
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
```
### 2\. Configure Environment Variables
Set the following environment variables for a secure and configurable deployment:
```bash
# Generate a strong, random secret key
export SECRET_KEY='your-super-secret-and-random-key'
# Set Flask to production mode
export FLASK_ENV='production'
export FLASK_DEBUG=False
# API keys (optional, but recommended for full functionality)
export SHODAN_API_KEY="your_shodan_key"
```
### 3\. Use a Reverse Proxy
Set up a reverse proxy like **Nginx** to sit in front of the Gunicorn server. This provides several benefits, including:
- **TLS/SSL Termination**: Securely handle HTTPS traffic.
- **Load Balancing**: Distribute traffic across multiple application instances.
- **Serving Static Files**: Efficiently serve CSS and JavaScript files.
**Example Nginx Configuration:**
```nginx
server {
listen 80;
server_name your_domain.com;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name your_domain.com;
# SSL cert configuration
ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /static {
alias /path/to/your/dnsrecon/static;
expires 30d;
}
}
```
## Autostart with systemd
## Systemd Service
To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.
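The unit file itself is collapsed out of this diff; a minimal sketch might look like the following (the install path, user, and virtualenv location are assumptions, not the repository's actual values):

```ini
# /etc/systemd/system/dnsrecon.service -- illustrative sketch only
[Unit]
Description=DNSRecon passive reconnaissance tool
After=network.target

[Service]
User=dnsrecon
WorkingDirectory=/opt/dnsrecon
ExecStart=/opt/dnsrecon/venv/bin/gunicorn --workers 4 --bind 127.0.0.1:5000 app:app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```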
@ -245,12 +143,18 @@ You can check the status of the service at any time with:
sudo systemctl status dnsrecon.service
```
## Security Considerations
- **API Keys**: API keys are stored in memory for the duration of a user session and are not written to disk.
- **Rate Limiting**: DNSRecon includes built-in rate limiting to be respectful to data sources.
- **Local Use**: The application is designed for local or trusted network use and does not have built-in authentication. **Do not expose it directly to the internet without proper security controls.**
-----
## License
This project is licensed under the terms of the license agreement found in the `LICENSE` file.
This project is licensed under the terms of the **BSD-3-Clause** license.
Copyright (c) 2025 mstoeck3.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

126
app.py

@ -13,6 +13,8 @@ import io
from core.session_manager import session_manager
from config import config
from core.graph_manager import NodeType
from utils.helpers import is_valid_target
app = Flask(__name__)
@ -64,18 +66,21 @@ def start_scan():
try:
data = request.get_json()
if not data or 'target_domain' not in data:
return jsonify({'success': False, 'error': 'Missing target_domain in request'}), 400
if not data or 'target' not in data:
return jsonify({'success': False, 'error': 'Missing target in request'}), 400
target_domain = data['target_domain'].strip()
target = data['target'].strip()
max_depth = data.get('max_depth', config.default_recursion_depth)
clear_graph = data.get('clear_graph', True)
force_rescan_target = data.get('force_rescan_target', None) # **FIX**: Get the new parameter
print(f"Parsed - target_domain: '{target_domain}', max_depth: {max_depth}, clear_graph: {clear_graph}")
print(f"Parsed - target: '{target}', max_depth: {max_depth}, clear_graph: {clear_graph}, force_rescan: {force_rescan_target}")
# Validation
if not target_domain:
return jsonify({'success': False, 'error': 'Target domain cannot be empty'}), 400
if not target:
return jsonify({'success': False, 'error': 'Target cannot be empty'}), 400
if not is_valid_target(target):
return jsonify({'success': False, 'error': 'Invalid target format. Please enter a valid domain or IP address.'}), 400
if not isinstance(max_depth, int) or not 1 <= max_depth <= 5:
return jsonify({'success': False, 'error': 'Max depth must be an integer between 1 and 5'}), 400
@ -100,7 +105,7 @@ def start_scan():
print(f"Using scanner {id(scanner)} in session {user_session_id}")
success = scanner.start_scan(target_domain, max_depth, clear_graph=clear_graph)
success = scanner.start_scan(target, max_depth, clear_graph=clear_graph, force_rescan_target=force_rescan_target) # **FIX**: Pass the new parameter
if success:
return jsonify({
@ -277,6 +282,108 @@ def get_graph_data():
}
}), 500
@app.route('/api/graph/large-entity/extract', methods=['POST'])
def extract_from_large_entity():
"""Extract a node from a large entity, making it a standalone node."""
try:
data = request.get_json()
large_entity_id = data.get('large_entity_id')
node_id = data.get('node_id')
if not large_entity_id or not node_id:
return jsonify({'success': False, 'error': 'Missing required parameters'}), 400
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
success = scanner.extract_node_from_large_entity(large_entity_id, node_id)
if success:
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({'success': True, 'message': f'Node {node_id} extracted successfully.'})
else:
return jsonify({'success': False, 'error': f'Failed to extract node {node_id}.'}), 500
except Exception as e:
print(f"ERROR: Exception in extract_from_large_entity endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
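As a quick illustration, the new endpoint can be exercised directly once the app is running (the IDs are placeholders; host and port assume the default Flask settings):

```bash
curl -X POST http://127.0.0.1:5000/api/graph/large-entity/extract \
  -H 'Content-Type: application/json' \
  -d '{"large_entity_id": "large_entity_1", "node_id": "sub.example.com"}'
```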
@app.route('/api/graph/node/<node_id>', methods=['DELETE'])
def delete_graph_node(node_id):
"""Delete a node from the graph for the current user session."""
try:
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
success = scanner.graph.remove_node(node_id)
if success:
# Persist the change
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({'success': True, 'message': f'Node {node_id} deleted successfully.'})
else:
return jsonify({'success': False, 'error': f'Node {node_id} not found in graph.'}), 404
except Exception as e:
print(f"ERROR: Exception in delete_graph_node endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/graph/revert', methods=['POST'])
def revert_graph_action():
"""Reverts a graph action, such as re-adding a deleted node."""
try:
data = request.get_json()
if not data or 'type' not in data or 'data' not in data:
return jsonify({'success': False, 'error': 'Invalid revert request format'}), 400
user_session_id, scanner = get_user_scanner()
if not scanner:
return jsonify({'success': False, 'error': 'No active session found'}), 404
action_type = data['type']
action_data = data['data']
if action_type == 'delete':
# Re-add the node
node_to_add = action_data.get('node')
if node_to_add:
scanner.graph.add_node(
node_id=node_to_add['id'],
node_type=NodeType(node_to_add['type']),
attributes=node_to_add.get('attributes'),
description=node_to_add.get('description'),
metadata=node_to_add.get('metadata')
)
# Re-add the edges
edges_to_add = action_data.get('edges', [])
for edge in edges_to_add:
# Add edge only if both nodes exist to prevent errors
if scanner.graph.graph.has_node(edge['from']) and scanner.graph.graph.has_node(edge['to']):
scanner.graph.add_edge(
source_id=edge['from'],
target_id=edge['to'],
relationship_type=edge['metadata']['relationship_type'],
confidence_score=edge['metadata']['confidence_score'],
source_provider=edge['metadata']['source_provider'],
raw_data=edge.get('raw_data', {})
)
# Persist the change
session_manager.update_session_scanner(user_session_id, scanner)
return jsonify({'success': True, 'message': 'Delete action reverted successfully.'})
return jsonify({'success': False, 'error': f'Unknown revert action type: {action_type}'}), 400
except Exception as e:
print(f"ERROR: Exception in revert_graph_action endpoint: {e}")
traceback.print_exc()
return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
@app.route('/api/export', methods=['GET'])
@ -330,9 +437,10 @@ def get_providers():
user_session_id, scanner = get_user_scanner()
if scanner:
# Updated debug print to be consistent with the new progress bar logic
completed_tasks = scanner.indicators_completed
enqueued_tasks = len(scanner.task_queue)
print(f"DEBUG: Tasks - Completed: {completed_tasks}, Enqueued: {enqueued_tasks}")
total_tasks = scanner.total_tasks_ever_enqueued
print(f"DEBUG: Task Progress - Completed: {completed_tasks}, Total Enqueued: {total_tasks}")
else:
print("DEBUG: No active scanner session found.")

config.py

@ -1,3 +1,5 @@
# dnsrecon-reduced/config.py
"""
Configuration management for DNSRecon tool.
Handles API key storage, rate limiting, and default settings.
@ -19,10 +21,10 @@ class Config:
# --- General Settings ---
self.default_recursion_depth = 2
self.default_timeout = 15
self.default_timeout = 30
self.max_concurrent_requests = 5
self.large_entity_threshold = 100
self.max_retries_per_target = 3
self.max_retries_per_target = 8
self.cache_expiry_hours = 12
# --- Provider Caching Settings ---
@ -30,7 +32,7 @@ class Config:
# --- Rate Limiting (requests per minute) ---
self.rate_limits = {
'crtsh': 30,
'crtsh': 5,
'shodan': 60,
'dns': 100
}

core/graph_manager.py

@ -414,6 +414,53 @@ class GraphManager:
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
def extract_node_from_large_entity(self, large_entity_id: str, node_id_to_extract: str) -> bool:
"""
Removes a node from a large entity's internal lists and updates its count.
This prepares the large entity for the node's promotion to a regular node.
"""
if not self.graph.has_node(large_entity_id):
return False
node_data = self.graph.nodes[large_entity_id]
attributes = node_data.get('attributes', {})
# Remove from the list of member nodes
if 'nodes' in attributes and node_id_to_extract in attributes['nodes']:
attributes['nodes'].remove(node_id_to_extract)
# Update the count
attributes['count'] = len(attributes['nodes'])
else:
# This can happen if the node was already extracted, which is not an error.
print(f"Warning: Node {node_id_to_extract} not found in the 'nodes' list of {large_entity_id}.")
return True # Proceed as if successful
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
def remove_node(self, node_id: str) -> bool:
"""Remove a node and its connected edges from the graph."""
if not self.graph.has_node(node_id):
return False
# Remove node from the graph (NetworkX handles removing connected edges)
self.graph.remove_node(node_id)
# Clean up the correlation index
keys_to_delete = []
for value, nodes in self.correlation_index.items():
if node_id in nodes:
del nodes[node_id]
if not nodes: # If no other nodes are associated with this value, remove it
keys_to_delete.append(value)
for key in keys_to_delete:
if key in self.correlation_index:
del self.correlation_index[key]
self.last_modified = datetime.now(timezone.utc).isoformat()
return True
def get_node_count(self) -> int:
"""Get total number of nodes in the graph."""
return self.graph.number_of_nodes()

core/logger.py

@ -50,7 +50,7 @@ class ForensicLogger:
session_id: Unique identifier for this reconnaissance session
"""
self.session_id = session_id or self._generate_session_id()
#self.lock = threading.Lock()
self.lock = threading.Lock()
# Initialize audit trail storage
self.api_requests: List[APIRequest] = []
@ -86,6 +86,8 @@ class ForensicLogger:
# Remove the unpickleable 'logger' attribute
if 'logger' in state:
del state['logger']
if 'lock' in state:
del state['lock']
return state
def __setstate__(self, state):
@ -101,6 +103,7 @@ class ForensicLogger:
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
self.logger.addHandler(console_handler)
self.lock = threading.Lock()
def _generate_session_id(self) -> str:
"""Generate unique session identifier."""

29
core/rate_limiter.py (new file)

@ -0,0 +1,29 @@
# dnsrecon-reduced/core/rate_limiter.py
import time
import redis
class GlobalRateLimiter:
def __init__(self, redis_client):
self.redis = redis_client
def is_rate_limited(self, key, limit, period):
"""
Check if a key is rate-limited.
"""
now = time.time()
key = f"rate_limit:{key}"
# Remove old timestamps
self.redis.zremrangebyscore(key, 0, now - period)
# Check the count
count = self.redis.zcard(key)
if count >= limit:
return True
# Add new timestamp
self.redis.zadd(key, {now: now})
self.redis.expire(key, period)
return False
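A short usage sketch of this class, mirroring how `core/scanner.py` below constructs and queries it (the 60-second window and the crt.sh limit of 5 requests/minute both come from this changeset):

```python
import redis

from core.rate_limiter import GlobalRateLimiter

limiter = GlobalRateLimiter(redis.StrictRedis(db=0))

# Sliding-window check: has 'crtsh' issued 5 or more requests in the last 60 seconds?
if limiter.is_rate_limited('crtsh', limit=5, period=60):
    print("crtsh is rate-limited; postpone the task")  # the scheduler re-queues it
else:
    print("crtsh may issue another request")  # a timestamp was recorded in Redis
```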

core/scanner.py

@ -1,20 +1,22 @@
# dnsrecon/core/scanner.py
# dnsrecon-reduced/core/scanner.py
import threading
import traceback
import time
import os
import importlib
import redis
from typing import List, Set, Dict, Any, Tuple, Optional
from concurrent.futures import ThreadPoolExecutor, as_completed, CancelledError, Future
from collections import defaultdict, deque
from collections import defaultdict
from queue import PriorityQueue
from datetime import datetime, timezone
from core.graph_manager import GraphManager, NodeType
from core.logger import get_forensic_logger, new_session
from utils.helpers import _is_valid_ip, _is_valid_domain
from providers.base_provider import BaseProvider
from core.rate_limiter import GlobalRateLimiter
class ScanStatus:
"""Enumeration of scan statuses."""
@ -50,7 +52,7 @@ class Scanner:
self.stop_event = threading.Event()
self.scan_thread = None
self.session_id: Optional[str] = None # Will be set by session manager
self.task_queue = deque([])
self.task_queue = PriorityQueue()
self.target_retries = defaultdict(int)
self.scan_failed_due_to_retries = False
@ -63,6 +65,7 @@ class Scanner:
self.indicators_processed = 0
self.indicators_completed = 0
self.tasks_re_enqueued = 0
self.total_tasks_ever_enqueued = 0
self.current_indicator = ""
# Concurrent processing configuration
@ -77,6 +80,9 @@ class Scanner:
print("Initializing forensic logger...")
self.logger = get_forensic_logger()
# Initialize global rate limiter
self.rate_limiter = GlobalRateLimiter(redis.StrictRedis(db=0))
print("Scanner initialization complete")
except Exception as e:
@ -129,7 +135,10 @@ class Scanner:
'stop_event',
'scan_thread',
'executor',
'processing_lock' # **NEW**: Exclude the processing lock
'processing_lock',
'task_queue',
'rate_limiter',
'logger'
]
for attr in unpicklable_attrs:
@ -138,7 +147,6 @@ class Scanner:
# Handle providers separately to ensure they're picklable
if 'providers' in state:
# The providers should be picklable now, but let's ensure clean state
for provider in state['providers']:
if hasattr(provider, '_stop_event'):
provider._stop_event = None
@ -153,9 +161,15 @@ class Scanner:
self.stop_event = threading.Event()
self.scan_thread = None
self.executor = None
self.processing_lock = threading.Lock() # **NEW**: Recreate processing lock
self.processing_lock = threading.Lock()
self.task_queue = PriorityQueue()
self.rate_limiter = GlobalRateLimiter(redis.StrictRedis(db=0))
self.logger = get_forensic_logger()
if not hasattr(self, 'providers') or not self.providers:
print("Providers not found after loading session, re-initializing...")
self._initialize_providers()
# **NEW**: Reset processing tracking
if not hasattr(self, 'currently_processing'):
self.currently_processing = set()
@ -204,11 +218,12 @@ class Scanner:
self._initialize_providers()
print("Session configuration updated")
def start_scan(self, target_domain: str, max_depth: int = 2, clear_graph: bool = True) -> bool:
def start_scan(self, target: str, max_depth: int = 2, clear_graph: bool = True, force_rescan_target: Optional[str] = None) -> bool:
"""Start a new reconnaissance scan with proper cleanup of previous scans."""
print(f"=== STARTING SCAN IN SCANNER {id(self)} ===")
print(f"Session ID: {self.session_id}")
print(f"Initial scanner status: {self.status}")
self.total_tasks_ever_enqueued = 0
# **IMPROVED**: More aggressive cleanup of previous scan
if self.scan_thread and self.scan_thread.is_alive():
@ -221,7 +236,7 @@ class Scanner:
# Clear all processing state
with self.processing_lock:
self.currently_processing.clear()
self.task_queue.clear()
self.task_queue = PriorityQueue()
# Shutdown executor aggressively
if self.executor:
@ -251,7 +266,7 @@ class Scanner:
with self.processing_lock:
self.currently_processing.clear()
self.task_queue.clear()
self.task_queue = PriorityQueue()
self.target_retries.clear()
self.scan_failed_due_to_retries = False
@ -268,7 +283,14 @@ class Scanner:
if clear_graph:
self.graph.clear()
self.current_target = target_domain.lower().strip()
if force_rescan_target and self.graph.graph.has_node(force_rescan_target):
print(f"Forcing rescan of {force_rescan_target}, clearing provider states.")
node_data = self.graph.graph.nodes[force_rescan_target]
if 'metadata' in node_data and 'provider_states' in node_data['metadata']:
node_data['metadata']['provider_states'] = {}
self.current_target = target.lower().strip()
self.max_depth = max_depth
self.current_depth = 0
@ -304,95 +326,119 @@ class Scanner:
self._update_session_state()
return False
def _execute_scan(self, target_domain: str, max_depth: int) -> None:
"""Execute the reconnaissance scan with proper termination handling."""
print(f"_execute_scan started for {target_domain} with depth {max_depth}")
self.executor = ThreadPoolExecutor(max_workers=self.max_workers)
processed_targets = set()
def _get_priority(self, provider_name):
rate_limit = self.config.get_rate_limit(provider_name)
if rate_limit > 90:
return 1 # Highest priority
elif rate_limit > 50:
return 2
else:
return 3 # Lowest priority
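With the per-provider limits set in `config.py` earlier in this changeset (crtsh 5, shodan 60, dns 100 requests per minute), the mapping works out as follows (a worked illustration):

```python
# Lower priority value = dequeued first from the PriorityQueue
# dns:    100 req/min -> rate_limit > 90 -> priority 1 (highest)
# shodan:  60 req/min -> rate_limit > 50 -> priority 2
# crtsh:    5 req/min -> otherwise       -> priority 3 (lowest)
```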
self.task_queue.append((target_domain, 0, False))
def _execute_scan(self, target: str, max_depth: int) -> None:
"""Execute the reconnaissance scan with proper termination handling."""
print(f"_execute_scan started for {target} with depth {max_depth}")
self.executor = ThreadPoolExecutor(max_workers=self.max_workers)
processed_tasks = set()
# Initial task population for the main target
is_ip = _is_valid_ip(target)
initial_providers = self._get_eligible_providers(target, is_ip, False)
for provider in initial_providers:
provider_name = provider.get_name()
self.task_queue.put((self._get_priority(provider_name), (provider_name, target, 0)))
self.total_tasks_ever_enqueued += 1
try:
self.status = ScanStatus.RUNNING
self._update_session_state()
enabled_providers = [provider.get_name() for provider in self.providers]
self.logger.log_scan_start(target_domain, max_depth, enabled_providers)
self.graph.add_node(target_domain, NodeType.DOMAIN)
self._initialize_provider_states(target_domain)
self.logger.log_scan_start(target, max_depth, enabled_providers)
# **IMPROVED**: Better termination checking in main loop
while self.task_queue and not self._is_stop_requested():
# Determine initial node type
node_type = NodeType.IP if is_ip else NodeType.DOMAIN
self.graph.add_node(target, node_type)
self._initialize_provider_states(target)
# Better termination checking in main loop
while not self.task_queue.empty() and not self._is_stop_requested():
try:
target, depth, is_large_entity_member = self.task_queue.popleft()
priority, (provider_name, target_item, depth) = self.task_queue.get()
except IndexError:
# Queue became empty during processing
break
if target in processed_targets:
task_tuple = (provider_name, target_item)
if task_tuple in processed_tasks:
continue
if depth > max_depth:
continue
# **NEW**: Track this target as currently processing
if self.rate_limiter.is_rate_limited(provider_name, self.config.get_rate_limit(provider_name), 60):
self.task_queue.put((priority + 1, (provider_name, target_item, depth))) # Postpone
continue
with self.processing_lock:
if self._is_stop_requested():
print(f"Stop requested before processing {target}")
print(f"Stop requested before processing {target_item}")
break
self.currently_processing.add(target)
self.currently_processing.add(target_item)
try:
self.current_depth = depth
self.current_indicator = target
self.current_indicator = target_item
self._update_session_state()
# **IMPROVED**: More frequent stop checking during processing
if self._is_stop_requested():
print(f"Stop requested during processing setup for {target}")
print(f"Stop requested during processing setup for {target_item}")
break
new_targets, large_entity_members, success = self._query_providers_for_target(target, depth, is_large_entity_member)
provider = next((p for p in self.providers if p.get_name() == provider_name), None)
# **NEW**: Check stop signal after provider queries
if self._is_stop_requested():
print(f"Stop requested after querying providers for {target}")
break
if provider:
new_targets, large_entity_members, success = self._query_single_provider_for_target(provider, target_item, depth)
if not success:
self.target_retries[target] += 1
if self.target_retries[target] <= self.config.max_retries_per_target:
print(f"Re-queueing target {target} (attempt {self.target_retries[target]})")
self.task_queue.append((target, depth, is_large_entity_member))
self.tasks_re_enqueued += 1
if self._is_stop_requested():
print(f"Stop requested after querying providers for {target_item}")
break
if not success:
self.target_retries[task_tuple] += 1
if self.target_retries[task_tuple] <= self.config.max_retries_per_target:
print(f"Re-queueing task {task_tuple} (attempt {self.target_retries[task_tuple]})")
self.task_queue.put((priority, (provider_name, target_item, depth)))
self.tasks_re_enqueued += 1
self.total_tasks_ever_enqueued += 1
else:
print(f"ERROR: Max retries exceeded for task {task_tuple}")
self.scan_failed_due_to_retries = True
self._log_target_processing_error(str(task_tuple), "Max retries exceeded")
else:
print(f"ERROR: Max retries exceeded for target {target}")
self.scan_failed_due_to_retries = True
self._log_target_processing_error(target, "Max retries exceeded")
else:
processed_targets.add(target)
self.indicators_completed += 1
# **NEW**: Only add new targets if not stopped
if not self._is_stop_requested():
for new_target in new_targets:
if new_target not in processed_targets:
self.task_queue.append((new_target, depth + 1, False))
for member in large_entity_members:
if member not in processed_targets:
self.task_queue.append((member, depth, True))
processed_tasks.add(task_tuple)
self.indicators_completed += 1
if not self._is_stop_requested():
all_new_targets = new_targets.union(large_entity_members)
for new_target in all_new_targets:
is_ip_new = _is_valid_ip(new_target)
eligible_providers_new = self._get_eligible_providers(new_target, is_ip_new, False)
for p_new in eligible_providers_new:
p_name_new = p_new.get_name()
if (p_name_new, new_target) not in processed_tasks:
new_depth = depth + 1 if new_target in new_targets else depth
self.task_queue.put((self._get_priority(p_name_new), (p_name_new, new_target, new_depth)))
self.total_tasks_ever_enqueued += 1
finally:
# **NEW**: Always remove from processing set
with self.processing_lock:
self.currently_processing.discard(target)
self.currently_processing.discard(target_item)
# **NEW**: Log termination reason
if self._is_stop_requested():
print("Scan terminated due to stop request")
self.logger.logger.info("Scan terminated by user request")
elif not self.task_queue:
elif self.task_queue.empty():
print("Scan completed - no more targets to process")
self.logger.logger.info("Scan completed - all targets processed")
@ -402,7 +448,6 @@ class Scanner:
self.status = ScanStatus.FAILED
self.logger.logger.error(f"Scan failed: {e}")
finally:
# **NEW**: Clear processing state on exit
with self.processing_lock:
self.currently_processing.clear()
@ -422,18 +467,16 @@ class Scanner:
print("Final scan statistics:")
print(f" - Total nodes: {stats['basic_metrics']['total_nodes']}")
print(f" - Total edges: {stats['basic_metrics']['total_edges']}")
print(f" - Targets processed: {len(processed_targets)}")
print(f" - Tasks processed: {len(processed_tasks)}")
def _query_providers_for_target(self, target: str, depth: int, dns_only: bool = False) -> Tuple[Set[str], Set[str], bool]:
"""Query providers for a single target with enhanced stop checking."""
# **NEW**: Early termination check
def _query_single_provider_for_target(self, provider: BaseProvider, target: str, depth: int) -> Tuple[Set[str], Set[str], bool]:
if self._is_stop_requested():
print(f"Stop requested before querying providers for {target}")
print(f"Stop requested before querying {provider.get_name()} for {target}")
return set(), set(), False
is_ip = _is_valid_ip(target)
target_type = NodeType.IP if is_ip else NodeType.DOMAIN
print(f"Querying providers for {target_type.value}: {target} at depth {depth}")
print(f"Querying {provider.get_name()} for {target_type.value}: {target} at depth {depth}")
self.graph.add_node(target, target_type)
self._initialize_provider_states(target)
@ -441,41 +484,26 @@ class Scanner:
new_targets = set()
large_entity_members = set()
node_attributes = defaultdict(lambda: defaultdict(list))
all_providers_successful = True
provider_successful = True
eligible_providers = self._get_eligible_providers(target, is_ip, dns_only)
if not eligible_providers:
self._log_no_eligible_providers(target, is_ip)
return new_targets, large_entity_members, True
# **IMPROVED**: Check stop signal before each provider
for i, provider in enumerate(eligible_providers):
if self._is_stop_requested():
print(f"Stop requested while querying provider {i+1}/{len(eligible_providers)} for {target}")
all_providers_successful = False
break
try:
provider_results = self._query_single_provider_forensic(provider, target, is_ip, depth)
if provider_results is None:
all_providers_successful = False
elif not self._is_stop_requested():
discovered, is_large_entity = self._process_provider_results_forensic(
target, provider, provider_results, node_attributes, depth
)
if is_large_entity:
large_entity_members.update(discovered)
else:
new_targets.update(discovered)
try:
provider_results = self._query_single_provider_forensic(provider, target, is_ip, depth)
if provider_results is None:
provider_successful = False
elif not self._is_stop_requested():
discovered, is_large_entity = self._process_provider_results(
target, provider, provider_results, node_attributes, depth
)
if is_large_entity:
large_entity_members.update(discovered)
else:
print(f"Stop requested after processing results from {provider.get_name()}")
break
except Exception as e:
all_providers_successful = False
self._log_provider_error(target, provider.get_name(), str(e))
new_targets.update(discovered)
else:
print(f"Stop requested after processing results from {provider.get_name()}")
except Exception as e:
provider_successful = False
self._log_provider_error(target, provider.get_name(), str(e))
# **NEW**: Only update node attributes if not stopped
if not self._is_stop_requested():
for node_id, attributes in node_attributes.items():
if self.graph.graph.has_node(node_id):
@ -483,7 +511,7 @@ class Scanner:
node_type_to_add = NodeType.IP if node_is_ip else NodeType.DOMAIN
self.graph.add_node(node_id, node_type_to_add, attributes=attributes)
return new_targets, large_entity_members, all_providers_successful
return new_targets, large_entity_members, provider_successful
def stop_scan(self) -> bool:
"""Request immediate scan termination with proper cleanup."""
@ -502,8 +530,10 @@ class Scanner:
print(f"Cleared {len(currently_processing_copy)} currently processing targets: {currently_processing_copy}")
# **IMPROVED**: Clear task queue and log what was discarded
discarded_tasks = list(self.task_queue)
self.task_queue.clear()
discarded_tasks = []
while not self.task_queue.empty():
discarded_tasks.append(self.task_queue.get())
self.task_queue = PriorityQueue()
print(f"Discarded {len(discarded_tasks)} pending tasks")
# **IMPROVED**: Aggressively shut down executor
@ -559,11 +589,12 @@ class Scanner:
'indicators_completed': self.indicators_completed,
'tasks_re_enqueued': self.tasks_re_enqueued,
'progress_percentage': self._calculate_progress(),
'total_tasks_ever_enqueued': self.total_tasks_ever_enqueued,
'enabled_providers': [provider.get_name() for provider in self.providers],
'graph_statistics': self.graph.get_statistics(),
'task_queue_size': len(self.task_queue),
'currently_processing_count': currently_processing_count, # **NEW**
'currently_processing': currently_processing_list[:5] # **NEW**: Show first 5 for debugging
'task_queue_size': self.task_queue.qsize(),
'currently_processing_count': currently_processing_count,
'currently_processing': currently_processing_list[:5]
}
except Exception as e:
print(f"ERROR: Exception in get_scan_status: {e}")
@ -680,7 +711,7 @@ class Scanner:
self.logger.logger.info(f"Provider state updated: {target} -> {provider_name} -> {status} ({results_count} results)")
def _process_provider_results_forensic(self, target: str, provider, results: List,
def _process_provider_results(self, target: str, provider, results: List,
node_attributes: Dict, current_depth: int) -> Tuple[Set[str], bool]:
"""Process provider results, returns (discovered_targets, is_large_entity)."""
provider_name = provider.get_name()
@ -710,8 +741,14 @@ class Scanner:
discovery_method=f"{provider_name}_query_depth_{current_depth}"
)
# Collect attributes for the source node
self._collect_node_attributes(source, provider_name, rel_type, rel_target, raw_data, node_attributes[source])
# If the relationship is asn_membership, collect attributes for the target ASN node
if rel_type == 'asn_membership':
self._collect_node_attributes(rel_target, provider_name, rel_type, source, raw_data, node_attributes[rel_target])
if isinstance(rel_target, list):
# If the target is a list, iterate and process each item
for single_target in rel_target:
@ -763,6 +800,7 @@ class Scanner:
elif _is_valid_ip(targets[0]):
node_type = 'ip'
# We still create the nodes so they exist in the graph, they are just not processed for edges yet.
for target in targets:
self.graph.add_node(target, NodeType.DOMAIN if node_type == 'domain' else NodeType.IP)
@ -788,6 +826,73 @@ class Scanner:
return set(targets)
def extract_node_from_large_entity(self, large_entity_id: str, node_id_to_extract: str) -> bool:
"""
Extracts a node from a large entity, re-creates its original edge, and
re-queues it for full scanning.
"""
if not self.graph.graph.has_node(large_entity_id):
print(f"ERROR: Large entity {large_entity_id} not found.")
return False
# 1. Get the original source node that discovered the large entity
predecessors = list(self.graph.graph.predecessors(large_entity_id))
if not predecessors:
print(f"ERROR: No source node found for large entity {large_entity_id}.")
return False
source_node_id = predecessors[0]
# Get the original edge data to replicate it for the extracted node
original_edge_data = self.graph.graph.get_edge_data(source_node_id, large_entity_id)
if not original_edge_data:
print(f"ERROR: Could not find original edge data from {source_node_id} to {large_entity_id}.")
return False
# 2. Modify the graph data structure first
success = self.graph.extract_node_from_large_entity(large_entity_id, node_id_to_extract)
if not success:
print(f"ERROR: Node {node_id_to_extract} could not be removed from {large_entity_id}'s attributes.")
return False
# 3. Create the direct edge from the original source to the newly extracted node
print(f"Re-creating direct edge from {source_node_id} to extracted node {node_id_to_extract}")
self.graph.add_edge(
source_id=source_node_id,
target_id=node_id_to_extract,
relationship_type=original_edge_data.get('relationship_type', 'extracted_from_large_entity'),
confidence_score=original_edge_data.get('confidence_score', 0.85), # Slightly lower confidence
source_provider=original_edge_data.get('source_provider', 'unknown'),
raw_data={'context': f'Extracted from large entity {large_entity_id}'}
)
# 4. Re-queue the extracted node for full processing by all eligible providers
print(f"Re-queueing extracted node {node_id_to_extract} for full reconnaissance...")
is_ip = _is_valid_ip(node_id_to_extract)
current_depth = self.graph.graph.nodes[large_entity_id].get('attributes', {}).get('discovery_depth', 0)
eligible_providers = self._get_eligible_providers(node_id_to_extract, is_ip, False)
for provider in eligible_providers:
provider_name = provider.get_name()
self.task_queue.put((self._get_priority(provider_name), (provider_name, node_id_to_extract, current_depth)))
self.total_tasks_ever_enqueued += 1
# 5. If the scanner is not running, we need to kickstart it to process this one item.
if self.status != ScanStatus.RUNNING:
print("Scanner is idle. Starting a mini-scan to process the extracted node.")
self.status = ScanStatus.RUNNING
self._update_session_state()
if not self.scan_thread or not self.scan_thread.is_alive():
self.scan_thread = threading.Thread(
target=self._execute_scan,
args=(self.current_target, self.max_depth),
daemon=True
)
self.scan_thread.start()
print(f"Successfully extracted and re-queued {node_id_to_extract} from {large_entity_id}.")
return True
def _collect_node_attributes(self, node_id: str, provider_name: str, rel_type: str,
target: str, raw_data: Dict[str, Any], attributes: Dict[str, Any]) -> None:
"""Collect and organize attributes for a node."""
@ -810,18 +915,22 @@ class Scanner:
attributes.setdefault('related_domains_san', []).append(target)
elif provider_name == 'shodan':
# This logic will now apply to the correct node (ASN or IP)
shodan_attributes = attributes.setdefault('shodan', {})
for key, value in raw_data.items():
if key not in shodan_attributes or not shodan_attributes.get(key):
shodan_attributes[key] = value
if _is_valid_ip(node_id):
if 'ports' in raw_data:
attributes['ports'] = raw_data['ports']
if 'os' in raw_data and raw_data['os']:
attributes['os'] = raw_data['os']
if rel_type == "asn_membership":
attributes['asn'] = {
'id': target,
'description': raw_data.get('org', ''),
'isp': raw_data.get('isp', ''),
'country': raw_data.get('country', '')
}
# This is the key change: these attributes are for the target (the ASN),
# not the source (the IP). We will add them to the ASN node later.
pass
record_type_name = rel_type
if record_type_name not in attributes:
@ -848,10 +957,9 @@ class Scanner:
def _calculate_progress(self) -> float:
"""Calculate scan progress percentage based on task completion."""
total_tasks = self.indicators_completed + len(self.task_queue)
if total_tasks == 0:
if self.total_tasks_ever_enqueued == 0:
return 0.0
return min(100.0, (self.indicators_completed / total_tasks) * 100)
return min(100.0, (self.indicators_completed / self.total_tasks_ever_enqueued) * 100)
def get_graph_data(self) -> Dict[str, Any]:
"""Get current graph data for visualization."""

dump.rdb (binary file, not shown)

providers/__init__.py

@ -3,14 +3,15 @@ Data provider modules for DNSRecon.
Contains implementations for various reconnaissance data sources.
"""
from .base_provider import BaseProvider, RateLimiter
from .base_provider import BaseProvider
from .crtsh_provider import CrtShProvider
from .dns_provider import DNSProvider
from .shodan_provider import ShodanProvider
from core.rate_limiter import GlobalRateLimiter
__all__ = [
'BaseProvider',
'RateLimiter',
'GlobalRateLimiter',
'CrtShProvider',
'DNSProvider',
'ShodanProvider'

providers/base_provider.py

@ -7,40 +7,7 @@ from abc import ABC, abstractmethod
from typing import List, Dict, Any, Optional, Tuple
from core.logger import get_forensic_logger
class RateLimiter:
"""Simple rate limiter for API calls."""
def __init__(self, requests_per_minute: int):
"""
Initialize rate limiter.
Args:
requests_per_minute: Maximum requests allowed per minute
"""
self.requests_per_minute = requests_per_minute
self.min_interval = 60.0 / requests_per_minute
self.last_request_time = 0
def __getstate__(self):
"""RateLimiter is fully picklable, return full state."""
return self.__dict__.copy()
def __setstate__(self, state):
"""Restore RateLimiter state."""
self.__dict__.update(state)
def wait_if_needed(self) -> None:
"""Wait if necessary to respect rate limits."""
current_time = time.time()
time_since_last = current_time - self.last_request_time
if time_since_last < self.min_interval:
sleep_time = self.min_interval - time_since_last
time.sleep(sleep_time)
self.last_request_time = time.time()
from core.rate_limiter import GlobalRateLimiter
class BaseProvider(ABC):
@ -68,11 +35,9 @@ class BaseProvider(ABC):
# Fallback to global config for backwards compatibility
from config import config as global_config
self.config = global_config
actual_rate_limit = rate_limit
actual_timeout = timeout
self.name = name
self.rate_limiter = RateLimiter(actual_rate_limit)
self.timeout = actual_timeout
self._local = threading.local()
self.logger = get_forensic_logger()
@ -172,8 +137,6 @@ class BaseProvider(ABC):
print(f"Request cancelled before start: {url}")
return None
self.rate_limiter.wait_if_needed()
start_time = time.time()
response = None
error = None
@ -297,5 +260,5 @@ class BaseProvider(ABC):
'failed_requests': self.failed_requests,
'success_rate': (self.successful_requests / self.total_requests * 100) if self.total_requests > 0 else 0,
'relationships_found': self.total_relationships_found,
'rate_limit': self.rate_limiter.requests_per_minute
'rate_limit': self.config.get_rate_limit(self.name)
}

providers/crtsh_provider.py

@ -5,46 +5,32 @@ import re
import os
from pathlib import Path
from typing import List, Dict, Any, Tuple, Set
from urllib.parse import quote
from datetime import datetime, timezone
# New dependency required for this provider
try:
import psycopg2
import psycopg2.extras
PSYCOPG2_AVAILABLE = True
except ImportError:
PSYCOPG2_AVAILABLE = False
import requests
from .base_provider import BaseProvider
from utils.helpers import _is_valid_domain
# We use requests only to raise the same exception type for compatibility with core retry logic
import requests
class CrtShProvider(BaseProvider):
"""
Provider for querying crt.sh certificate transparency database via its public PostgreSQL endpoint.
This version is designed to be a drop-in, high-performance replacement for the API-based provider.
It preserves the same caching and data processing logic.
Provider for querying crt.sh certificate transparency database.
Now uses session-specific configuration and caching with accumulative behavior.
"""
def __init__(self, name=None, session_config=None):
"""Initialize CrtShDB provider with session-specific configuration."""
"""Initialize CrtSh provider with session-specific configuration."""
super().__init__(
name="crtsh",
rate_limit=0, # No rate limit for direct DB access
timeout=60, # Increased timeout for potentially long DB queries
rate_limit=60,
timeout=15,
session_config=session_config
)
# Database connection details
self.db_host = "crt.sh"
self.db_port = 5432
self.db_name = "certwatch"
self.db_user = "guest"
self.base_url = "https://crt.sh/"
self._stop_event = None
# Initialize cache directory (same as original provider)
# Initialize cache directory
self.cache_dir = Path('cache') / 'crtsh'
self.cache_dir.mkdir(parents=True, exist_ok=True)
@ -54,7 +40,7 @@ class CrtShProvider(BaseProvider):
def get_display_name(self) -> str:
"""Return the provider display name for the UI."""
return "crt.sh (DB)"
return "crt.sh"
def requires_api_key(self) -> bool:
"""Return True if the provider requires an API key."""
@ -66,161 +52,23 @@ class CrtShProvider(BaseProvider):
def is_available(self) -> bool:
"""
Check if the provider can be used. Requires the psycopg2 library.
Check if the provider is configured to be used.
This method is intentionally simple and does not perform a network request
to avoid blocking application startup.
"""
if not PSYCOPG2_AVAILABLE:
self.logger.logger.warning("psycopg2 library not found. CrtShDBProvider is unavailable. "
"Please run 'pip install psycopg2-binary'.")
return False
return True
def _query_crtsh(self, domain: str) -> List[Dict[str, Any]]:
"""
Query the crt.sh PostgreSQL database for raw certificate data.
Raises exceptions for DB/network errors to allow core logic to retry.
"""
conn = None
certificates = []
# SQL Query to find all certificate IDs related to the domain (including subdomains),
# then retrieve comprehensive details for each certificate, mimicking the JSON API structure.
sql_query = """
WITH certificates_of_interest AS (
SELECT DISTINCT ci.certificate_id
FROM certificate_identity ci
WHERE ci.name_value ILIKE %(domain_wildcard)s OR ci.name_value = %(domain)s
)
SELECT
c.id,
c.serial_number,
c.not_before,
c.not_after,
(SELECT min(entry_timestamp) FROM ct_log_entry cle WHERE cle.certificate_id = c.id) as entry_timestamp,
ca.id as issuer_ca_id,
ca.name as issuer_name,
(SELECT array_to_string(array_agg(DISTINCT ci.name_value), E'\n') FROM certificate_identity ci WHERE ci.certificate_id = c.id) as name_value,
(SELECT name_value FROM certificate_identity ci WHERE ci.certificate_id = c.id AND ci.name_type = 'commonName' LIMIT 1) as common_name
FROM
certificate c
JOIN ca ON c.issuer_ca_id = ca.id
WHERE c.id IN (SELECT certificate_id FROM certificates_of_interest);
"""
try:
conn = psycopg2.connect(
dbname=self.db_name,
user=self.db_user,
host=self.db_host,
port=self.db_port,
connect_timeout=self.timeout
)
with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cursor:
cursor.execute(sql_query, {'domain': domain, 'domain_wildcard': f'%.{domain}'})
results = cursor.fetchall()
certificates = [dict(row) for row in results]
self.logger.logger.info(f"crt.sh DB query for '{domain}' returned {len(certificates)} certificates.")
except psycopg2.Error as e:
self.logger.logger.error(f"PostgreSQL query failed for {domain}: {e}")
# Raise a RequestException to be compatible with the existing retry logic in the core application
raise requests.exceptions.RequestException(f"PostgreSQL query failed: {e}") from e
finally:
if conn:
conn.close()
return certificates
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query crt.sh for certificates containing the domain with caching support.
Properly raises exceptions for network errors to allow core logic retries.
"""
if not _is_valid_domain(domain):
return []
if self._stop_event and self._stop_event.is_set():
return []
cache_file = self._get_cache_file_path(domain)
cache_status = self._get_cache_status(cache_file)
certificates = []
try:
if cache_status == "fresh":
certificates = self._load_cached_certificates(cache_file)
self.logger.logger.info(f"Using cached data for {domain} ({len(certificates)} certificates)")
elif cache_status == "not_found":
# Fresh query from DB, create new cache
certificates = self._query_crtsh(domain)
if certificates:
self._create_cache_file(cache_file, domain, self._serialize_certs_for_cache(certificates))
else:
self.logger.logger.info(f"No certificates found for {domain}, not caching")
elif cache_status == "stale":
try:
new_certificates = self._query_crtsh(domain)
if new_certificates:
certificates = self._append_to_cache(cache_file, self._serialize_certs_for_cache(new_certificates))
else:
certificates = self._load_cached_certificates(cache_file)
except requests.exceptions.RequestException:
certificates = self._load_cached_certificates(cache_file)
if certificates:
self.logger.logger.warning(f"DB query failed for {domain}, using stale cache data.")
else:
raise
except requests.exceptions.RequestException as e:
# Re-raise so core logic can retry
self.logger.logger.error(f"DB query failed for {domain}: {e}")
raise e
except json.JSONDecodeError as e:
# JSON parsing errors from cache should also be handled
self.logger.logger.error(f"Failed to parse JSON from cache for {domain}: {e}")
raise e
if self._stop_event and self._stop_event.is_set():
return []
if not certificates:
return []
return self._process_certificates_to_relationships(domain, certificates)
def _serialize_certs_for_cache(self, certificates: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
Serialize certificate data for JSON caching, converting datetime objects to ISO strings.
"""
serialized_certs = []
for cert in certificates:
serialized_cert = cert.copy()
for key in ['not_before', 'not_after', 'entry_timestamp']:
if isinstance(serialized_cert.get(key), datetime):
# Ensure datetime is timezone-aware before converting
dt_obj = serialized_cert[key]
if dt_obj.tzinfo is None:
dt_obj = dt_obj.replace(tzinfo=timezone.utc)
serialized_cert[key] = dt_obj.isoformat()
serialized_certs.append(serialized_cert)
return serialized_certs
# --- All methods below are copied directly from the original CrtShProvider ---
# They are compatible because _query_crtsh returns data in the same format
# as the original _query_crtsh_api method. A small adjustment is made to
# _parse_certificate_date to handle datetime objects directly from the DB.
def _get_cache_file_path(self, domain: str) -> Path:
"""Generate cache file path for a domain."""
# Sanitize domain for filename safety
safe_domain = domain.replace('.', '_').replace('/', '_').replace('\\', '_')
return self.cache_dir / f"{safe_domain}.json"
def _get_cache_status(self, cache_file_path: Path) -> str:
"""Check cache status for a domain."""
"""
Check cache status for a domain.
Returns: 'not_found', 'fresh', or 'stale'
"""
if not cache_file_path.exists():
return "not_found"
@ -230,7 +78,7 @@ class CrtShProvider(BaseProvider):
last_query_str = cache_data.get("last_upstream_query")
if not last_query_str:
return "stale"
return "stale" # Invalid cache format
last_query = datetime.fromisoformat(last_query_str.replace('Z', '+00:00'))
hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600
@ -255,6 +103,24 @@ class CrtShProvider(BaseProvider):
self.logger.logger.error(f"Failed to load cached certificates from {cache_file_path}: {e}")
return []
def _query_crtsh_api(self, domain: str) -> List[Dict[str, Any]]:
"""
Query crt.sh API for raw certificate data.
Raises exceptions for network errors to allow core logic to retry.
"""
url = f"{self.base_url}?q={quote(domain)}&output=json"
response = self.make_request(url, target_indicator=domain)
if not response or response.status_code != 200:
# This could be a temporary error - raise exception so core can retry
raise requests.exceptions.RequestException(f"crt.sh API returned status {response.status_code if response else 'None'}")
certificates = response.json()
if not certificates:
return []
return certificates
def _create_cache_file(self, cache_file_path: Path, domain: str, certificates: List[Dict[str, Any]]) -> None:
"""Create new cache file with certificates."""
try:
@ -265,20 +131,27 @@ class CrtShProvider(BaseProvider):
"upstream_query_count": 1,
"certificates": certificates
}
cache_file_path.parent.mkdir(parents=True, exist_ok=True)
with open(cache_file_path, 'w') as f:
json.dump(cache_data, f, separators=(',', ':'))
self.logger.logger.info(f"Created cache file for {domain} with {len(certificates)} certificates")
except Exception as e:
self.logger.logger.warning(f"Failed to create cache file for {domain}: {e}")
def _append_to_cache(self, cache_file_path: Path, new_certificates: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Append new certificates to existing cache and return all certificates."""
try:
# Load existing cache
with open(cache_file_path, 'r') as f:
cache_data = json.load(f)
# Track existing certificate IDs to avoid duplicates
existing_ids = {cert.get('id') for cert in cache_data.get('certificates', [])}
# Add only new certificates
added_count = 0
for cert in new_certificates:
cert_id = cert.get('id')
@ -287,141 +160,314 @@ class CrtShProvider(BaseProvider):
existing_ids.add(cert_id)
added_count += 1
# Update metadata
cache_data['last_upstream_query'] = datetime.now(timezone.utc).isoformat()
cache_data['upstream_query_count'] = cache_data.get('upstream_query_count', 0) + 1
# Write updated cache
with open(cache_file_path, 'w') as f:
json.dump(cache_data, f, separators=(',', ':'))
total_certs = len(cache_data['certificates'])
self.logger.logger.info(f"Appended {added_count} new certificates to cache. Total: {total_certs}")
return cache_data['certificates']
except Exception as e:
self.logger.logger.warning(f"Failed to append to cache: {e}")
return new_certificates
return new_certificates # Fallback to new certificates only
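# Illustrative sketch of the de-duplication step above: only certificates whose
# 'id' is not already cached get appended (hypothetical sample data).
cached = [{'id': 1}, {'id': 2}]
incoming = [{'id': 2}, {'id': 3}]
existing_ids = {c.get('id') for c in cached}
new_only = [c for c in incoming if c.get('id') not in existing_ids]
assert new_only == [{'id': 3}]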
def _parse_issuer_organization(self, issuer_dn: str) -> str:
"""Parse the issuer Distinguished Name to extract just the organization name."""
if not issuer_dn: return issuer_dn
"""
Parse the issuer Distinguished Name to extract just the organization name.
Args:
issuer_dn: Full issuer DN string (e.g., "C=US, O=Let's Encrypt, CN=R11")
Returns:
Organization name (e.g., "Let's Encrypt") or original string if parsing fails
"""
if not issuer_dn:
return issuer_dn
try:
# Split by comma and look for O= component
components = [comp.strip() for comp in issuer_dn.split(',')]
for component in components:
if component.startswith('O='):
# Extract the value after O=
org_name = component[2:].strip()
# Remove quotes if present
if org_name.startswith('"') and org_name.endswith('"'):
org_name = org_name[1:-1]
return org_name
# If no O= component found, return the original string
return issuer_dn
except Exception as e:
self.logger.logger.debug(f"Failed to parse issuer DN '{issuer_dn}': {e}")
return issuer_dn
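# A small sketch of the O= extraction performed above, assuming a comma-separated
# DN without escaped commas (input string is hypothetical):
issuer_dn = "C=US, O=Let's Encrypt, CN=R11"
org = next((c.strip()[2:] for c in issuer_dn.split(',') if c.strip().startswith('O=')), issuer_dn)
assert org == "Let's Encrypt"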
def _parse_certificate_date(self, date_input: Any) -> datetime:
def _parse_certificate_date(self, date_string: str) -> datetime:
"""
Parse certificate date from various formats (string from cache, datetime from DB).
"""
if isinstance(date_input, datetime):
# If it's already a datetime object from the DB, just ensure it's UTC
if date_input.tzinfo is None:
return date_input.replace(tzinfo=timezone.utc)
return date_input
Parse certificate date from crt.sh format.
date_string = str(date_input)
Args:
date_string: Date string from crt.sh API
Returns:
Parsed datetime object in UTC
"""
if not date_string:
raise ValueError("Empty date string")
try:
if 'Z' in date_string:
return datetime.fromisoformat(date_string.replace('Z', '+00:00'))
# Handle standard ISO format with or without timezone
dt = datetime.fromisoformat(date_string)
if dt.tzinfo is None:
return dt.replace(tzinfo=timezone.utc)
return dt
except ValueError as e:
# Handle various possible formats from crt.sh
if date_string.endswith('Z'):
return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
elif '+' in date_string or date_string.endswith('UTC'):
# Handle timezone-aware strings
date_string = date_string.replace('UTC', '').strip()
if '+' in date_string:
date_string = date_string.split('+')[0]
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
else:
# Assume UTC if no timezone specified
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
except Exception as e:
# Fallback: try parsing without timezone info and assume UTC
try:
# Fallback for other formats
return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
except Exception:
raise ValueError(f"Unable to parse date: {date_string}") from e
def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
"""Check if a certificate is currently valid based on its expiry date."""
"""
Check if a certificate is currently valid based on its expiry date.
Args:
cert_data: Certificate data from crt.sh
Returns:
True if certificate is currently valid (not expired)
"""
try:
not_after_str = cert_data.get('not_after')
if not not_after_str: return False
if not not_after_str:
return False
not_after_date = self._parse_certificate_date(not_after_str)
not_before_str = cert_data.get('not_before')
now = datetime.now(timezone.utc)
# Check if certificate is within valid date range
is_not_expired = not_after_date > now
if not_before_str:
not_before_date = self._parse_certificate_date(not_before_str)
is_not_before_valid = not_before_date <= now
return is_not_expired and is_not_before_valid
return is_not_expired
except Exception as e:
self.logger.logger.debug(f"Certificate validity check failed: {e}")
return False
def _extract_certificate_metadata(self, cert_data: Dict[str, Any]) -> Dict[str, Any]:
# This method works as-is.
"""
Extract comprehensive metadata from certificate data.
Args:
cert_data: Raw certificate data from crt.sh
Returns:
Comprehensive certificate metadata dictionary
"""
# Parse the issuer name to get just the organization
raw_issuer_name = cert_data.get('issuer_name', '')
parsed_issuer_name = self._parse_issuer_organization(raw_issuer_name)
metadata = {
'certificate_id': cert_data.get('id'),
'serial_number': cert_data.get('serial_number'),
'issuer_name': parsed_issuer_name,
'issuer_name': parsed_issuer_name, # Use parsed organization name
#'issuer_name_full': raw_issuer_name,  # deliberately left out, because it's not useful in most cases
'issuer_ca_id': cert_data.get('issuer_ca_id'),
'common_name': cert_data.get('common_name'),
'not_before': cert_data.get('not_before'),
'not_after': cert_data.get('not_after'),
'entry_timestamp': cert_data.get('entry_timestamp'),
'source': 'crt.sh (DB)'
'source': 'crt.sh'
}
try:
if metadata['not_before'] and metadata['not_after']:
not_before = self._parse_certificate_date(metadata['not_before'])
not_after = self._parse_certificate_date(metadata['not_after'])
metadata['validity_period_days'] = (not_after - not_before).days
metadata['is_currently_valid'] = self._is_cert_valid(cert_data)
metadata['expires_soon'] = (not_after - datetime.now(timezone.utc)).days <= 30
# Add human-readable dates
metadata['not_before'] = not_before.strftime('%Y-%m-%d %H:%M:%S UTC')
metadata['not_after'] = not_after.strftime('%Y-%m-%d %H:%M:%S UTC')
except Exception as e:
self.logger.logger.debug(f"Error computing certificate metadata: {e}")
metadata['is_currently_valid'] = False
metadata['expires_soon'] = False
return metadata
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query crt.sh for certificates containing the domain with caching support.
Properly raises exceptions for network errors to allow core logic retries.
"""
if not _is_valid_domain(domain):
return []
# Check for cancellation before starting
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled before start for domain: {domain}")
return []
# === CACHING LOGIC ===
cache_file = self._get_cache_file_path(domain)
cache_status = self._get_cache_status(cache_file)
certificates = []
try:
if cache_status == "fresh":
# Use cached data
certificates = self._load_cached_certificates(cache_file)
self.logger.logger.info(f"Using cached data for {domain} ({len(certificates)} certificates)")
elif cache_status == "not_found":
# Fresh query, create new cache
certificates = self._query_crtsh_api(domain)
if certificates: # Only cache if we got results
self._create_cache_file(cache_file, domain, certificates)
self.logger.logger.info(f"Cached fresh data for {domain} ({len(certificates)} certificates)")
else:
self.logger.logger.info(f"No certificates found for {domain}, not caching")
elif cache_status == "stale":
# Append query, update existing cache
try:
new_certificates = self._query_crtsh_api(domain)
if new_certificates:
certificates = self._append_to_cache(cache_file, new_certificates)
self.logger.logger.info(f"Refreshed and appended cache for {domain}")
else:
# Use existing cache if API returns no results
certificates = self._load_cached_certificates(cache_file)
self.logger.logger.info(f"API returned no new results, using existing cache for {domain}")
except requests.exceptions.RequestException:
# If API call fails for stale cache, use cached data and re-raise for retry logic
certificates = self._load_cached_certificates(cache_file)
if certificates:
self.logger.logger.warning(f"API call failed for {domain}, using stale cache data ({len(certificates)} certificates)")
# Don't re-raise here, just use cached data
else:
# No cached data and API failed - re-raise for retry
raise
except requests.exceptions.RequestException as e:
# Network/API errors should be re-raised so core logic can retry
self.logger.logger.error(f"API query failed for {domain}: {e}")
raise e
except json.JSONDecodeError as e:
# JSON parsing errors should also be raised for retry
self.logger.logger.error(f"Failed to parse JSON response from crt.sh for {domain}: {e}")
raise e
# Check for cancellation after cache operations
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled after cache operations for domain: {domain}")
return []
if not certificates:
return []
return self._process_certificates_to_relationships(domain, certificates)
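# Condensed sketch of the cache decision implemented above: "fresh" reads the
# cache, "not_found" queries upstream and creates a cache file, "stale" queries
# upstream and appends, falling back to the stale cache if the upstream call
# fails. The callables are placeholders for the provider methods.
def resolve_certificates(status, load_cache, query_api, create_cache, append_cache):
    if status == "fresh":
        return load_cache()
    if status == "not_found":
        certs = query_api()
        if certs:
            create_cache(certs)
        return certs
    try:
        new_certs = query_api()          # status == "stale"
        return append_cache(new_certs) if new_certs else load_cache()
    except Exception:
        return load_cache()              # fall back to stale cached data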
def _process_certificates_to_relationships(self, domain: str, certificates: List[Dict[str, Any]]) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
# This method works as-is.
"""
Process certificates to relationships using existing logic.
This method contains the original processing logic from query_domain.
"""
relationships = []
if self._stop_event and self._stop_event.is_set(): return []
# Check for cancellation before processing
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh processing cancelled before processing for domain: {domain}")
return []
# Aggregate certificate data by domain
domain_certificates = {}
all_discovered_domains = set()
# Process certificates with cancellation checking
for i, cert_data in enumerate(certificates):
if i % 5 == 0 and self._stop_event and self._stop_event.is_set(): break
# Check for cancellation every 5 certificates for faster response
if i % 5 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh processing cancelled at certificate {i} for domain: {domain}")
break
cert_metadata = self._extract_certificate_metadata(cert_data)
cert_domains = self._extract_domains_from_certificate(cert_data)
# Add all domains from this certificate to our tracking
all_discovered_domains.update(cert_domains)
for cert_domain in cert_domains:
if not _is_valid_domain(cert_domain): continue
if not _is_valid_domain(cert_domain):
continue
# Initialize domain certificate list if needed
if cert_domain not in domain_certificates:
domain_certificates[cert_domain] = []
# Add this certificate to the domain's certificate list
domain_certificates[cert_domain].append(cert_metadata)
if self._stop_event and self._stop_event.is_set(): return []
# Final cancellation check before creating relationships
if self._stop_event and self._stop_event.is_set():
print(f"CrtSh query cancelled before relationship creation for domain: {domain}")
return []
# Create relationships from query domain to ALL discovered domains with stop checking
for i, discovered_domain in enumerate(all_discovered_domains):
if discovered_domain == domain: continue
if i % 10 == 0 and self._stop_event and self._stop_event.is_set(): break
if not _is_valid_domain(discovered_domain): continue
if discovered_domain == domain:
continue # Skip self-relationships
# Check for cancellation every 10 relationships
if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
print(f"CrtSh relationship creation cancelled for domain: {domain}")
break
if not _is_valid_domain(discovered_domain):
continue
# Get certificates for both domains
query_domain_certs = domain_certificates.get(domain, [])
discovered_domain_certs = domain_certificates.get(discovered_domain, [])
# Find shared certificates (for metadata purposes)
shared_certificates = self._find_shared_certificates(query_domain_certs, discovered_domain_certs)
# Calculate confidence based on relationship type and shared certificates
confidence = self._calculate_domain_relationship_confidence(
domain, discovered_domain, shared_certificates, all_discovered_domains
)
# Create comprehensive raw data for the relationship
relationship_raw_data = {
'relationship_type': 'certificate_discovery',
'shared_certificates': shared_certificates,
@ -432,82 +478,309 @@ class CrtShProvider(BaseProvider):
discovered_domain: self._summarize_certificates(discovered_domain_certs)
}
}
# Create domain -> domain relationship
relationships.append((
domain, discovered_domain, 'san_certificate', confidence, relationship_raw_data
domain,
discovered_domain,
'san_certificate',
confidence,
relationship_raw_data
))
# Log the relationship discovery
self.log_relationship_discovery(
source_node=domain, target_node=discovered_domain, relationship_type='san_certificate',
confidence_score=confidence, raw_data=relationship_raw_data,
source_node=domain,
target_node=discovered_domain,
relationship_type='san_certificate',
confidence_score=confidence,
raw_data=relationship_raw_data,
discovery_method="certificate_transparency_analysis"
)
return relationships
# --- All remaining helper methods are identical to the original and fully compatible ---
# They are included here for completeness.
def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
cert1_ids = {cert.get('certificate_id') for cert in certs1 if cert.get('certificate_id')}
return [cert for cert in certs2 if cert.get('certificate_id') in cert1_ids]
"""
Find certificates that are shared between two domain certificate lists.
Args:
certs1: First domain's certificates
certs2: Second domain's certificates
Returns:
List of shared certificate metadata
"""
shared = []
# Create a set of certificate IDs from the first list for quick lookup
# <<< FIX: Added robust type checking to handle potentially malformed API data
cert1_ids = set()
for cert in certs1:
cert_id = cert.get('certificate_id')
# Ensure the ID is not None and is a hashable type before adding to the set
if cert_id and isinstance(cert_id, (int, str, float, bool, tuple)):
cert1_ids.add(cert_id)
# Find certificates in the second list that match
for cert in certs2:
cert_id = cert.get('certificate_id')
if cert_id and isinstance(cert_id, (int, str, float, bool, tuple)):
if cert_id in cert1_ids:
shared.append(cert)
return shared
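# Sketch of the shared-certificate lookup above, keeping only hashable IDs
# (hypothetical sample data):
certs1 = [{'certificate_id': 101}, {'certificate_id': None}]
certs2 = [{'certificate_id': 101}, {'certificate_id': 202}]
ids1 = {c['certificate_id'] for c in certs1 if isinstance(c.get('certificate_id'), (int, str))}
shared = [c for c in certs2 if c.get('certificate_id') in ids1]
assert shared == [{'certificate_id': 101}]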
def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
if not certificates: return {'total_certificates': 0, 'valid_certificates': 0, 'expired_certificates': 0, 'expires_soon_count': 0, 'unique_issuers': [], 'latest_certificate': None, 'has_valid_cert': False}
"""
Create a summary of certificates for a domain.
Args:
certificates: List of certificate metadata
Returns:
Summary dictionary with aggregate statistics
"""
if not certificates:
return {
'total_certificates': 0,
'valid_certificates': 0,
'expired_certificates': 0,
'expires_soon_count': 0,
'unique_issuers': [],
'latest_certificate': None,
'has_valid_cert': False,
'certificate_details': [] # Always include empty list
}
valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
expired_count = len(certificates) - valid_count
expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))
# Get unique issuers (using parsed organization names)
unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))
latest_cert, latest_date = None, None
# Find the most recent certificate
latest_cert = None
latest_date = None
for cert in certificates:
try:
if cert.get('not_before'):
cert_date = self._parse_certificate_date(cert['not_before'])
if latest_date is None or cert_date > latest_date:
latest_date, latest_cert = cert_date, cert
except Exception: continue
return {'total_certificates': len(certificates), 'valid_certificates': valid_count, 'expired_certificates': len(certificates) - valid_count, 'expires_soon_count': expires_soon_count, 'unique_issuers': unique_issuers, 'latest_certificate': latest_cert, 'has_valid_cert': valid_count > 0, 'certificate_details': certificates}
latest_date = cert_date
latest_cert = cert
except Exception:
continue
def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str, shared_certificates: List[Dict[str, Any]], all_discovered_domains: Set[str]) -> float:
base_confidence, context_bonus, shared_bonus, validity_bonus, issuer_bonus = 0.9, 0.0, 0.0, 0.0, 0.0
# Sort certificates by date for better display (newest first)
sorted_certificates = sorted(
certificates,
key=lambda c: self._get_certificate_sort_date(c),
reverse=True
)
return {
'total_certificates': len(certificates),
'valid_certificates': valid_count,
'expired_certificates': expired_count,
'expires_soon_count': expires_soon_count,
'unique_issuers': unique_issuers,
'latest_certificate': latest_cert,
'has_valid_cert': valid_count > 0,
'certificate_details': sorted_certificates # Include full certificate details
}
def _get_certificate_sort_date(self, cert: Dict[str, Any]) -> datetime:
"""
Get a sortable date from certificate data for chronological ordering.
Args:
cert: Certificate metadata dictionary
Returns:
Datetime object for sorting (falls back to epoch if parsing fails)
"""
try:
# Try not_before first (issue date)
if cert.get('not_before'):
return self._parse_certificate_date(cert['not_before'])
# Fall back to entry_timestamp if available
if cert.get('entry_timestamp'):
return self._parse_certificate_date(cert['entry_timestamp'])
# Last resort - return a very old date for certificates without dates
return datetime(1970, 1, 1, tzinfo=timezone.utc)
except Exception:
# If all parsing fails, return epoch
return datetime(1970, 1, 1, tzinfo=timezone.utc)
def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str,
shared_certificates: List[Dict[str, Any]],
all_discovered_domains: Set[str]) -> float:
"""
Calculate confidence score for domain relationship based on various factors.
Args:
domain1: Source domain (query domain)
domain2: Target domain (discovered domain)
shared_certificates: List of shared certificate metadata
all_discovered_domains: All domains discovered in this query
Returns:
Confidence score between 0.0 and 1.0
"""
base_confidence = 0.9
# Adjust confidence based on domain relationship context
relationship_context = self._determine_relationship_context(domain2, domain1)
if relationship_context == 'subdomain': context_bonus = 0.1
elif relationship_context == 'parent_domain': context_bonus = 0.05
if relationship_context == 'exact_match':
context_bonus = 0.0 # This shouldn't happen, but just in case
elif relationship_context == 'subdomain':
context_bonus = 0.1 # High confidence for subdomains
elif relationship_context == 'parent_domain':
context_bonus = 0.05 # Medium confidence for parent domains
else:
context_bonus = 0.0 # Related domains get base confidence
# Adjust confidence based on shared certificates
if shared_certificates:
shared_count = len(shared_certificates)
if shared_count >= 3:
shared_bonus = 0.1
elif shared_count >= 2:
shared_bonus = 0.05
else:
shared_bonus = 0.02
# Additional bonus for valid shared certificates
valid_shared = sum(1 for cert in shared_certificates if cert.get('is_currently_valid'))
if valid_shared > 0:
validity_bonus = 0.05
else:
validity_bonus = 0.0
else:
# Even without shared certificates, domains found in the same query have some relationship
shared_bonus = 0.0
validity_bonus = 0.0
# Adjust confidence based on certificate issuer reputation (if shared certificates exist)
issuer_bonus = 0.0
if shared_certificates:
if len(shared_certificates) >= 3: shared_bonus = 0.1
elif len(shared_certificates) >= 2: shared_bonus = 0.05
else: shared_bonus = 0.02
if any(cert.get('is_currently_valid') for cert in shared_certificates): validity_bonus = 0.05
for cert in shared_certificates:
if any(ca in cert.get('issuer_name', '').lower() for ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
issuer = cert.get('issuer_name', '').lower()
if any(trusted_ca in issuer for trusted_ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
issuer_bonus = max(issuer_bonus, 0.03)
break
return max(0.1, min(1.0, base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus))
# Calculate final confidence
final_confidence = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
return max(0.1, min(1.0, final_confidence)) # Clamp between 0.1 and 1.0
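# Worked example of the additive scoring above, using the bonus values defined
# in the method: a subdomain sharing two currently valid certificates from a
# trusted CA sums to 1.13 and is clamped to 1.0.
base, context, shared, validity, issuer = 0.9, 0.1, 0.05, 0.05, 0.03
confidence = max(0.1, min(1.0, base + context + shared + validity + issuer))
assert confidence == 1.0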
def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
if cert_domain == query_domain: return 'exact_match'
if cert_domain.endswith(f'.{query_domain}'): return 'subdomain'
if query_domain.endswith(f'.{cert_domain}'): return 'parent_domain'
return 'related_domain'
"""
Determine the context of the relationship between certificate domain and query domain.
Args:
cert_domain: Domain found in certificate
query_domain: Original query domain
Returns:
String describing the relationship context
"""
if cert_domain == query_domain:
return 'exact_match'
elif cert_domain.endswith(f'.{query_domain}'):
return 'subdomain'
elif query_domain.endswith(f'.{cert_domain}'):
return 'parent_domain'
else:
return 'related_domain'
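# Quick examples of the relationship contexts returned above (suffix matching
# only; domain names are hypothetical):
def context(cert_domain, query_domain):
    if cert_domain == query_domain: return 'exact_match'
    if cert_domain.endswith(f'.{query_domain}'): return 'subdomain'
    if query_domain.endswith(f'.{cert_domain}'): return 'parent_domain'
    return 'related_domain'

assert context('mail.example.com', 'example.com') == 'subdomain'
assert context('example.com', 'mail.example.com') == 'parent_domain'
assert context('cdn.example.net', 'example.com') == 'related_domain'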
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query crt.sh for certificates containing the IP address.
Note: crt.sh doesn't typically index by IP, so this returns empty results.
Args:
ip: IP address to investigate
Returns:
Empty list (crt.sh doesn't support IP-based certificate queries effectively)
"""
# crt.sh doesn't effectively support IP-based certificate queries
return []
def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
"""
Extract all domains from certificate data.
Args:
cert_data: Certificate data from crt.sh API
Returns:
Set of unique domain names found in the certificate
"""
domains = set()
if cn := cert_data.get('common_name'):
if cleaned := self._clean_domain_name(cn):
domains.update(cleaned)
if nv := cert_data.get('name_value'):
for line in nv.split('\n'):
if cleaned := self._clean_domain_name(line.strip()):
domains.update(cleaned)
# Extract from common name
common_name = cert_data.get('common_name', '')
if common_name:
cleaned_cn = self._clean_domain_name(common_name)
if cleaned_cn:
domains.update(cleaned_cn)
# Extract from name_value field (contains SANs)
name_value = cert_data.get('name_value', '')
if name_value:
# Split by newlines and clean each domain
for line in name_value.split('\n'):
cleaned_domains = self._clean_domain_name(line.strip())
if cleaned_domains:
domains.update(cleaned_domains)
return domains
def _clean_domain_name(self, domain_name: str) -> List[str]:
if not domain_name: return []
domain = domain_name.strip().lower().split('://', 1)[-1].split('/', 1)[0]
if ':' in domain and not domain.count(':') > 1: domain = domain.split(':', 1)[0]
cleaned_domains = [domain, domain[2:]] if domain.startswith('*.') else [domain]
"""
Clean and normalize domain name from certificate data.
Now returns a list to handle wildcards correctly.
"""
if not domain_name:
return []
domain = domain_name.strip().lower()
# Remove protocol if present
if domain.startswith(('http://', 'https://')):
domain = domain.split('://', 1)[1]
# Remove path if present
if '/' in domain:
domain = domain.split('/', 1)[0]
# Remove port if present
if ':' in domain and not domain.count(':') > 1: # Avoid breaking IPv6
domain = domain.split(':', 1)[0]
# Handle wildcard domains
cleaned_domains = []
if domain.startswith('*.'):
# Add both the wildcard and the base domain
cleaned_domains.append(domain)
cleaned_domains.append(domain[2:])
else:
cleaned_domains.append(domain)
# Remove any remaining invalid characters and validate
final_domains = []
for d in cleaned_domains:
d = re.sub(r'[^\w\-\.]', '', d)
if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-')):
final_domains.append(d)
return [d for d in final_domains if _is_valid_domain(d)]
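# Example of the wildcard expansion performed above: a SAN entry of
# "*.example.com" should yield both the wildcard name and its base domain.
entry = "*.example.com"
expanded = [entry, entry[2:]] if entry.startswith('*.') else [entry]
assert expanded == ["*.example.com", "example.com"]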


@ -1,15 +1,20 @@
# dnsrecon/providers/shodan_provider.py
import json
import os
from pathlib import Path
from typing import List, Dict, Any, Tuple
from datetime import datetime, timezone
import requests
from .base_provider import BaseProvider
from utils.helpers import _is_valid_ip, _is_valid_domain
class ShodanProvider(BaseProvider):
"""
Provider for querying Shodan API for IP address and hostname information.
Now uses session-specific API keys.
Provider for querying Shodan API for IP address information.
Now uses session-specific API keys, is limited to IP-only queries, and includes caching.
"""
def __init__(self, name=None, session_config=None):
@ -23,6 +28,10 @@ class ShodanProvider(BaseProvider):
self.base_url = "https://api.shodan.io"
self.api_key = self.config.get_api_key('shodan')
# Initialize cache directory
self.cache_dir = Path('cache') / 'shodan'
self.cache_dir.mkdir(parents=True, exist_ok=True)
def is_available(self) -> bool:
"""Check if Shodan provider is available (has valid API key in this session)."""
return self.api_key is not None and len(self.api_key.strip()) > 0
@ -33,7 +42,7 @@ class ShodanProvider(BaseProvider):
def get_display_name(self) -> str:
"""Return the provider display name for the UI."""
return "shodan"
return "Shodan"
def requires_api_key(self) -> bool:
"""Return True if the provider requires an API key."""
@ -41,267 +50,146 @@ class ShodanProvider(BaseProvider):
def get_eligibility(self) -> Dict[str, bool]:
"""Return a dictionary indicating if the provider can query domains and/or IPs."""
return {'domains': True, 'ips': True}
return {'domains': False, 'ips': True}
def _get_cache_file_path(self, ip: str) -> Path:
"""Generate cache file path for an IP address."""
safe_ip = ip.replace('.', '_').replace(':', '_')
return self.cache_dir / f"{safe_ip}.json"
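# Sketch of the cache-file naming above: dots and colons are replaced so an
# address maps to a stable filename (address and paths are examples only).
from pathlib import Path
safe_ip = "8.8.8.8".replace('.', '_').replace(':', '_')
assert Path('cache') / 'shodan' / f"{safe_ip}.json" == Path('cache/shodan/8_8_8_8.json')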
def _get_cache_status(self, cache_file_path: Path) -> str:
"""
Check cache status for an IP.
Returns: 'not_found', 'fresh', or 'stale'
"""
if not cache_file_path.exists():
return "not_found"
try:
with open(cache_file_path, 'r') as f:
cache_data = json.load(f)
last_query_str = cache_data.get("last_upstream_query")
if not last_query_str:
return "stale"
last_query = datetime.fromisoformat(last_query_str.replace('Z', '+00:00'))
hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600
cache_timeout = self.config.cache_timeout_hours
if hours_since_query < cache_timeout:
return "fresh"
else:
return "stale"
except (json.JSONDecodeError, ValueError, KeyError):
return "stale"
def _load_from_cache(self, cache_file_path: Path) -> Dict[str, Any]:
"""Load Shodan data from a cache file."""
try:
with open(cache_file_path, 'r') as f:
return json.load(f)
except (json.JSONDecodeError, FileNotFoundError):
return {}
def _save_to_cache(self, cache_file_path: Path, data: Dict[str, Any]) -> None:
"""Save Shodan data to a cache file."""
try:
data['last_upstream_query'] = datetime.now(timezone.utc).isoformat()
with open(cache_file_path, 'w') as f:
json.dump(data, f, separators=(',', ':'))
except Exception as e:
self.logger.logger.warning(f"Failed to save Shodan cache for {cache_file_path.name}: {e}")
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query Shodan for information about a domain.
Uses Shodan's hostname search to find associated IPs.
Args:
domain: Domain to investigate
Returns:
List of relationships discovered from Shodan data
Domain queries are no longer supported for the Shodan provider.
"""
if not _is_valid_domain(domain) or not self.is_available():
return []
relationships = []
try:
# Search for hostname in Shodan
search_query = f"hostname:{domain}"
url = f"{self.base_url}/shodan/host/search"
params = {
'key': self.api_key,
'query': search_query,
'minify': True # Get minimal data to reduce bandwidth
}
response = self.make_request(url, method="GET", params=params, target_indicator=domain)
if not response or response.status_code != 200:
return []
data = response.json()
if 'matches' not in data:
return []
# Process search results
for match in data['matches']:
ip_address = match.get('ip_str')
hostnames = match.get('hostnames', [])
if ip_address and domain in hostnames:
raw_data = {
'ip_address': ip_address,
'hostnames': hostnames,
'country': match.get('location', {}).get('country_name', ''),
'city': match.get('location', {}).get('city', ''),
'isp': match.get('isp', ''),
'org': match.get('org', ''),
'ports': match.get('ports', []),
'last_update': match.get('last_update', '')
}
relationships.append((
domain,
ip_address,
'a_record', # Domain resolves to IP
0.8,
raw_data
))
self.log_relationship_discovery(
source_node=domain,
target_node=ip_address,
relationship_type='a_record',
confidence_score=0.8,
raw_data=raw_data,
discovery_method="shodan_hostname_search"
)
# Also create relationships to other hostnames on the same IP
for hostname in hostnames:
if hostname != domain and _is_valid_domain(hostname):
hostname_raw_data = {
'shared_ip': ip_address,
'all_hostnames': hostnames,
'discovery_context': 'shared_hosting'
}
relationships.append((
domain,
hostname,
'passive_dns', # Shared hosting relationship
0.6, # Lower confidence for shared hosting
hostname_raw_data
))
self.log_relationship_discovery(
source_node=domain,
target_node=hostname,
relationship_type='passive_dns',
confidence_score=0.6,
raw_data=hostname_raw_data,
discovery_method="shodan_shared_hosting"
)
except json.JSONDecodeError as e:
self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
return relationships
return []
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Query Shodan for information about an IP address.
Args:
ip: IP address to investigate
Returns:
List of relationships discovered from Shodan IP data
Query Shodan for information about an IP address, with caching.
"""
if not _is_valid_ip(ip) or not self.is_available():
return []
cache_file = self._get_cache_file_path(ip)
cache_status = self._get_cache_status(cache_file)
data = {}
try:
if cache_status == "fresh":
data = self._load_from_cache(cache_file)
self.logger.logger.info(f"Using cached Shodan data for {ip}")
else: # "stale" or "not_found"
url = f"{self.base_url}/shodan/host/{ip}"
params = {'key': self.api_key}
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
if response and response.status_code == 200:
data = response.json()
self._save_to_cache(cache_file, data)
elif cache_status == "stale":
# If API fails on a stale cache, use the old data
data = self._load_from_cache(cache_file)
except requests.exceptions.RequestException as e:
self.logger.logger.error(f"Shodan API query failed for {ip}: {e}")
if cache_status == "stale":
data = self._load_from_cache(cache_file)
if not data:
return []
return self._process_shodan_data(ip, data)
def _process_shodan_data(self, ip: str, data: Dict[str, Any]) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
"""
Process Shodan data to extract relationships.
"""
relationships = []
try:
# Query Shodan host information
url = f"{self.base_url}/shodan/host/{ip}"
params = {'key': self.api_key}
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
if not response or response.status_code != 200:
return []
data = response.json()
# Extract hostname relationships
hostnames = data.get('hostnames', [])
for hostname in hostnames:
if _is_valid_domain(hostname):
raw_data = {
'ip_address': ip,
'hostname': hostname,
'country': data.get('country_name', ''),
'city': data.get('city', ''),
'isp': data.get('isp', ''),
'org': data.get('org', ''),
'asn': data.get('asn', ''),
'ports': data.get('ports', []),
'last_update': data.get('last_update', ''),
'os': data.get('os', '')
}
relationships.append((
ip,
hostname,
'a_record', # IP resolves to hostname
0.8,
raw_data
))
self.log_relationship_discovery(
source_node=ip,
target_node=hostname,
relationship_type='a_record',
confidence_score=0.8,
raw_data=raw_data,
discovery_method="shodan_host_lookup"
)
# Extract ASN relationship if available
asn = data.get('asn')
if asn:
# Ensure the ASN starts with "AS"
if isinstance(asn, str) and asn.startswith('AS'):
asn_name = asn
asn_number = asn[2:]
else:
asn_name = f"AS{asn}"
asn_number = str(asn)
asn_raw_data = {
'ip_address': ip,
'asn': asn_number,
'isp': data.get('isp', ''),
'org': data.get('org', '')
}
# Extract hostname relationships
hostnames = data.get('hostnames', [])
for hostname in hostnames:
if _is_valid_domain(hostname):
relationships.append((
ip,
asn_name,
'asn_membership',
0.7,
asn_raw_data
hostname,
'a_record',
0.8,
data
))
self.log_relationship_discovery(
source_node=ip,
target_node=asn_name,
relationship_type='asn_membership',
confidence_score=0.7,
raw_data=asn_raw_data,
discovery_method="shodan_asn_lookup"
target_node=hostname,
relationship_type='a_record',
confidence_score=0.8,
raw_data=data,
discovery_method="shodan_host_lookup"
)
except json.JSONDecodeError as e:
self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
# Extract ASN relationship
asn = data.get('asn')
if asn:
asn_name = f"AS{asn[2:]}" if isinstance(asn, str) and asn.startswith('AS') else f"AS{asn}"
relationships.append((
ip,
asn_name,
'asn_membership',
0.7,
data
))
self.log_relationship_discovery(
source_node=ip,
target_node=asn_name,
relationship_type='asn_membership',
confidence_score=0.7,
raw_data=data,
discovery_method="shodan_asn_lookup"
)
return relationships
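# Example of the ASN normalization above: Shodan may return either "AS15169" or
# a bare number, and both should map to the same "AS<number>" node label
# (the ASN value is a hypothetical example).
def normalize_asn(asn):
    return f"AS{asn[2:]}" if isinstance(asn, str) and asn.startswith('AS') else f"AS{asn}"

assert normalize_asn("AS15169") == "AS15169"
assert normalize_asn(15169) == "AS15169"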
def search_by_organization(self, org_name: str) -> List[Dict[str, Any]]:
"""
Search Shodan for hosts belonging to a specific organization.
Args:
org_name: Organization name to search for
Returns:
List of host information dictionaries
"""
if not self.is_available():
return []
try:
search_query = f"org:\"{org_name}\""
url = f"{self.base_url}/shodan/host/search"
params = {
'key': self.api_key,
'query': search_query,
'minify': True
}
response = self.make_request(url, method="GET", params=params, target_indicator=org_name)
if response and response.status_code == 200:
data = response.json()
return data.get('matches', [])
except Exception as e:
self.logger.logger.error(f"Error searching Shodan by organization {org_name}: {e}")
return []
def get_host_services(self, ip: str) -> List[Dict[str, Any]]:
"""
Get service information for a specific IP address.
Args:
ip: IP address to query
Returns:
List of service information dictionaries
"""
if not _is_valid_ip(ip) or not self.is_available():
return []
try:
url = f"{self.base_url}/shodan/host/{ip}"
params = {'key': self.api_key}
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
if response and response.status_code == 200:
data = response.json()
return data.get('data', []) # Service banners
except Exception as e:
self.logger.logger.error(f"Error getting Shodan services for IP {ip}: {e}")
return []


@ -8,4 +8,3 @@ dnspython>=2.4.2
gunicorn
redis
python-dotenv
psycopg2-binary

File diff suppressed because it is too large


@ -1,7 +1,57 @@
/**
* Graph visualization module for DNSRecon
* Handles network graph rendering using vis.js
* Handles network graph rendering using vis.js with proper large entity node hiding
*/
const contextMenuCSS = `
.graph-context-menu {
position: fixed;
z-index: 1000;
background: linear-gradient(135deg, #2a2a2a 0%, #1e1e1e 100%);
border: 1px solid #444;
border-radius: 6px;
box-shadow: 0 8px 25px rgba(0,0,0,0.6);
display: none;
font-family: 'Roboto Mono', monospace;
font-size: 0.9rem;
color: #c7c7c7;
min-width: 180px;
overflow: hidden;
}
.graph-context-menu ul {
list-style: none;
padding: 0.5rem 0;
margin: 0;
}
.graph-context-menu ul li {
padding: 0.75rem 1rem;
cursor: pointer;
transition: all 0.2s ease;
display: flex;
align-items: center;
gap: 0.5rem;
}
.graph-context-menu ul li:hover {
background: linear-gradient(135deg, #3a3a3a 0%, #2e2e2e 100%);
color: #00ff41;
}
.graph-context-menu .menu-icon {
font-size: 0.9rem;
width: 1.2rem;
text-align: center;
}
.graph-context-menu ul li:first-child {
border-top: none;
}
.graph-context-menu ul li:last-child {
border-bottom: none;
}
`;
class GraphManager {
constructor(containerId) {
@ -12,6 +62,13 @@ class GraphManager {
this.isInitialized = false;
this.currentLayout = 'physics';
this.nodeInfoPopup = null;
this.contextMenu = null;
this.history = [];
this.filterPanel = null;
this.trueRootIds = new Set();
// Track large entity members for proper hiding
this.largeEntityMembers = new Set();
this.isScanning = false;
this.options = {
nodes: {
@ -115,8 +172,14 @@ class GraphManager {
randomSeed: 2
}
};
if (typeof document !== 'undefined') {
const style = document.createElement('style');
style.textContent = contextMenuCSS;
document.head.appendChild(style);
}
this.createNodeInfoPopup();
this.createContextMenu();
document.body.addEventListener('click', () => this.hideContextMenu());
}
/**
@ -129,6 +192,30 @@ class GraphManager {
document.body.appendChild(this.nodeInfoPopup);
}
/**
* Create context menu
*/
createContextMenu() {
// Remove existing context menu if it exists
const existing = document.getElementById('graph-context-menu');
if (existing) {
existing.remove();
}
this.contextMenu = document.createElement('div');
this.contextMenu.id = 'graph-context-menu';
this.contextMenu.className = 'graph-context-menu';
this.contextMenu.style.display = 'none';
// Prevent body click listener from firing when clicking the menu itself
this.contextMenu.addEventListener('click', (event) => {
event.stopPropagation();
});
document.body.appendChild(this.contextMenu);
console.log('Context menu created and added to body');
}
/**
* Initialize the network graph
*/
@ -155,6 +242,7 @@ class GraphManager {
// Add graph controls
this.addGraphControls();
this.addFilterPanel();
console.log('Graph initialized successfully');
} catch (error) {
@ -173,6 +261,8 @@ class GraphManager {
<button class="graph-control-btn" id="graph-fit" title="Fit to Screen">[FIT]</button>
<button class="graph-control-btn" id="graph-physics" title="Toggle Physics">[PHYSICS]</button>
<button class="graph-control-btn" id="graph-cluster" title="Cluster Nodes">[CLUSTER]</button>
<button class="graph-control-btn" id="graph-unhide" title="Unhide All">[UNHIDE]</button>
<button class="graph-control-btn" id="graph-revert" title="Revert Last Action">[REVERT]</button>
`;
this.container.appendChild(controlsContainer);
@ -181,6 +271,14 @@ class GraphManager {
document.getElementById('graph-fit').addEventListener('click', () => this.fitView());
document.getElementById('graph-physics').addEventListener('click', () => this.togglePhysics());
document.getElementById('graph-cluster').addEventListener('click', () => this.toggleClustering());
document.getElementById('graph-unhide').addEventListener('click', () => this.unhideAll());
document.getElementById('graph-revert').addEventListener('click', () => this.revertLastAction());
}
addFilterPanel() {
this.filterPanel = document.createElement('div');
this.filterPanel.className = 'graph-filter-panel';
this.container.appendChild(this.filterPanel);
}
/**
@ -189,8 +287,31 @@ class GraphManager {
setupNetworkEvents() {
if (!this.network) return;
// FIXED: Right-click context menu
this.container.addEventListener('contextmenu', (event) => {
event.preventDefault();
console.log('Right-click detected at:', event.offsetX, event.offsetY);
// Get coordinates relative to the canvas
const pointer = {
x: event.offsetX,
y: event.offsetY
};
const nodeId = this.network.getNodeAt(pointer);
console.log('Node at pointer:', nodeId);
if (nodeId) {
// Pass the original client event for positioning
this.showContextMenu(nodeId, event);
} else {
this.hideContextMenu();
}
});
// Node click event with details
this.network.on('click', (params) => {
this.hideContextMenu();
if (params.nodes.length > 0) {
const nodeId = params.nodes[0];
if (this.network.isCluster(nodeId)) {
@ -216,10 +337,6 @@ class GraphManager {
}
});
this.network.on('oncontext', (params) => {
params.event.preventDefault();
});
// Stabilization events with progress
this.network.on('stabilizationProgress', (params) => {
const progress = params.iterations / params.total;
@ -235,6 +352,13 @@ class GraphManager {
console.log('Selected nodes:', params.nodes);
console.log('Selected edges:', params.edges);
});
// Click away to hide context menu
document.addEventListener('click', (e) => {
if (!this.contextMenu.contains(e.target)) {
this.hideContextMenu();
}
});
}
/**
@ -252,21 +376,28 @@ class GraphManager {
this.initialize();
}
this.largeEntityMembers.clear();
const largeEntityMap = new Map();
graphData.nodes.forEach(node => {
if (node.type === 'large_entity' && node.attributes && Array.isArray(node.attributes.nodes)) {
node.attributes.nodes.forEach(nodeId => {
largeEntityMap.set(nodeId, node.id);
this.largeEntityMembers.add(nodeId);
});
}
});
const processedNodes = graphData.nodes.map(node => {
const processed = this.processNode(node);
if (largeEntityMap.has(node.id)) {
processed.hidden = true;
}
return processed;
const filteredNodes = graphData.nodes.filter(node => {
// Only include nodes that are NOT members of large entities, but always include the container itself
return !this.largeEntityMembers.has(node.id) || node.type === 'large_entity';
});
console.log(`Filtered ${graphData.nodes.length - filteredNodes.length} large entity member nodes from visualization`);
// Process only the filtered nodes
const processedNodes = filteredNodes.map(node => {
return this.processNode(node);
});
const mergedEdges = {};
@ -311,6 +442,11 @@ class GraphManager {
this.nodes.update(processedNodes);
this.edges.update(processedEdges);
// After data is loaded, compute roots and apply filters
this.computeTrueRoots();
this.updateFilterControls();
this.applyAllFilters();
// Highlight new additions briefly
if (newNodes.length > 0 || newEdges.length > 0) {
setTimeout(() => this.highlightNewElements(newNodes, newEdges), 100);
@ -322,6 +458,8 @@ class GraphManager {
}
console.log(`Graph updated: ${processedNodes.length} nodes, ${processedEdges.length} edges (${newNodes.length} new nodes, ${newEdges.length} new edges)`);
console.log(`Large entity members hidden: ${this.largeEntityMembers.size}`);
} catch (error) {
console.error('Failed to update graph:', error);
this.showError('Failed to update visualization');
@ -408,8 +546,6 @@ class GraphManager {
}
};
return processedEdge;
}
@ -456,7 +592,6 @@ class GraphManager {
return colors[nodeType] || '#ffffff';
}
/**
* Get node border color based on type
* @param {string} nodeType - Node type
@ -846,6 +981,8 @@ class GraphManager {
clear() {
this.nodes.clear();
this.edges.clear();
this.history = [];
this.largeEntityMembers.clear(); // Clear large entity tracking
// Show placeholder
const placeholder = this.container.querySelector('.graph-placeholder');
@ -866,59 +1003,590 @@ class GraphManager {
}
}
/**
 * Analyze which nodes remain reachable from the true roots when certain nodes,
 * edge types, or node types are excluded.
 * @param {Set} excludedNodeIds - Node IDs to exclude from analysis (for simulation)
* @param {Set} excludedEdgeTypes - Edge types to exclude from traversal
* @param {Set} excludedNodeTypes - Node types to exclude from traversal
* @returns {Object} Analysis results with reachable/unreachable nodes
*/
analyzeGraphReachability(excludedNodeIds = new Set(), excludedEdgeTypes = new Set(), excludedNodeTypes = new Set()) {
console.log("Performing comprehensive reachability analysis...");
const analysis = {
reachableNodes: new Set(),
unreachableNodes: new Set(),
isolatedClusters: [],
affectedNodes: new Set()
};
if (this.nodes.length === 0) return analysis;
// Build adjacency list excluding specified elements
const adjacencyList = {};
this.nodes.getIds().forEach(id => {
if (!excludedNodeIds.has(id)) {
adjacencyList[id] = [];
}
});
this.edges.forEach(edge => {
const edgeType = edge.metadata?.relationship_type || '';
if (!excludedEdgeTypes.has(edgeType) &&
!excludedNodeIds.has(edge.from) &&
!excludedNodeIds.has(edge.to)) {
if (adjacencyList[edge.from]) {
adjacencyList[edge.from].push(edge.to);
}
}
});
// BFS traversal from true roots
const traversalQueue = [];
// Start from true roots that aren't excluded
this.trueRootIds.forEach(rootId => {
if (!excludedNodeIds.has(rootId)) {
const node = this.nodes.get(rootId);
if (node && !excludedNodeTypes.has(node.type)) {
if (!analysis.reachableNodes.has(rootId)) {
traversalQueue.push(rootId);
analysis.reachableNodes.add(rootId);
}
}
}
});
// BFS to find all reachable nodes
let queueIndex = 0;
while (queueIndex < traversalQueue.length) {
const currentNode = traversalQueue[queueIndex++];
for (const neighbor of (adjacencyList[currentNode] || [])) {
if (!analysis.reachableNodes.has(neighbor)) {
const node = this.nodes.get(neighbor);
if (node && !excludedNodeTypes.has(node.type)) {
analysis.reachableNodes.add(neighbor);
traversalQueue.push(neighbor);
}
}
}
}
// Identify unreachable nodes (maintaining forensic integrity)
Object.keys(adjacencyList).forEach(nodeId => {
if (!analysis.reachableNodes.has(nodeId)) {
analysis.unreachableNodes.add(nodeId);
}
});
// Find isolated clusters among unreachable nodes
analysis.isolatedClusters = this.findIsolatedClusters(
Array.from(analysis.unreachableNodes),
adjacencyList
);
console.log(`Reachability analysis complete:`, {
reachable: analysis.reachableNodes.size,
unreachable: analysis.unreachableNodes.size,
clusters: analysis.isolatedClusters.length
});
return analysis;
}
/**
* Get network statistics
* @returns {Object} Statistics object
* Find isolated clusters within a set of nodes
* Used for forensic analysis to identify disconnected subgraphs
*/
findIsolatedClusters(nodeIds, adjacencyList) {
const visited = new Set();
const clusters = [];
for (const nodeId of nodeIds) {
if (!visited.has(nodeId)) {
const cluster = [];
const stack = [nodeId];
while (stack.length > 0) {
const current = stack.pop();
if (!visited.has(current)) {
visited.add(current);
cluster.push(current);
// Add unvisited neighbors within the unreachable set
for (const neighbor of (adjacencyList[current] || [])) {
if (nodeIds.includes(neighbor) && !visited.has(neighbor)) {
stack.push(neighbor);
}
}
}
}
if (cluster.length > 0) {
clusters.push(cluster);
}
}
}
return clusters;
}
/**
* ENHANCED: Get comprehensive graph statistics with forensic information
* Updates the existing getStatistics() method
*/
getStatistics() {
return {
const basicStats = {
nodeCount: this.nodes.length,
edgeCount: this.edges.length,
//isStabilized: this.network ? this.network.isStabilized() : false
largeEntityMembersHidden: this.largeEntityMembers.size
};
// Add forensic statistics
const visibleNodes = this.nodes.get({ filter: node => !node.hidden });
const hiddenNodes = this.nodes.get({ filter: node => node.hidden });
return {
...basicStats,
forensicStatistics: {
visibleNodes: visibleNodes.length,
hiddenNodes: hiddenNodes.length,
trueRoots: this.trueRootIds.size,
integrityStatus: visibleNodes.length > 0 && this.trueRootIds.size > 0 ? 'INTACT' : 'COMPROMISED'
}
};
}
computeTrueRoots() {
this.trueRootIds.clear();
const allNodes = this.nodes.get({ returnType: 'Object' });
const allEdges = this.edges.get();
const inDegrees = {};
for (const nodeId in allNodes) {
inDegrees[nodeId] = 0;
}
allEdges.forEach(edge => {
if (inDegrees[edge.to] !== undefined) {
inDegrees[edge.to]++;
}
});
for (const nodeId in allNodes) {
if (inDegrees[nodeId] === 0) {
this.trueRootIds.add(nodeId);
}
}
console.log("Computed true roots:", this.trueRootIds);
}
updateFilterControls() {
if (!this.filterPanel) return;
const nodeTypes = new Set(this.nodes.get().map(n => n.type));
const edgeTypes = new Set(this.edges.get().map(e => e.metadata.relationship_type));
// Wrap both columns in a single container with vertical layout
let filterHTML = '<div class="filter-container">';
// Nodes section
filterHTML += '<div class="filter-column"><h4>Nodes</h4><div class="checkbox-group">';
nodeTypes.forEach(type => {
const label = type === 'correlation_object' ? 'latent correlations' : type;
const isChecked = type !== 'correlation_object';
filterHTML += `<label><input type="checkbox" data-filter-type="node" value="${type}" ${isChecked ? 'checked' : ''}> ${label}</label>`;
});
filterHTML += '</div></div>';
// Edges section
filterHTML += '<div class="filter-column"><h4>Edges</h4><div class="checkbox-group">';
edgeTypes.forEach(type => {
filterHTML += `<label><input type="checkbox" data-filter-type="edge" value="${type}" checked> ${type}</label>`;
});
filterHTML += '</div></div>';
filterHTML += '</div>'; // Close filter-container
this.filterPanel.innerHTML = filterHTML;
this.filterPanel.querySelectorAll('input[type="checkbox"]').forEach(checkbox => {
checkbox.addEventListener('change', () => this.applyAllFilters());
});
}
/**
* ENHANCED: Apply filters using consolidated reachability analysis
* Replaces the existing applyAllFilters() method
*/
applyAllFilters() {
console.log("Applying filters with enhanced reachability analysis...");
if (this.nodes.length === 0) return;
// Get filter criteria from UI
const excludedNodeTypes = new Set();
this.filterPanel?.querySelectorAll('input[data-filter-type="node"]:not(:checked)').forEach(cb => {
excludedNodeTypes.add(cb.value);
});
const excludedEdgeTypes = new Set();
this.filterPanel?.querySelectorAll('input[data-filter-type="edge"]:not(:checked)').forEach(cb => {
excludedEdgeTypes.add(cb.value);
});
// Perform comprehensive analysis
const analysis = this.analyzeGraphReachability(new Set(), excludedEdgeTypes, excludedNodeTypes);
// Apply visibility updates
const nodeUpdates = this.nodes.map(node => ({
id: node.id,
hidden: !analysis.reachableNodes.has(node.id)
}));
const edgeUpdates = this.edges.map(edge => ({
id: edge.id,
hidden: excludedEdgeTypes.has(edge.metadata?.relationship_type || '') ||
!analysis.reachableNodes.has(edge.from) ||
!analysis.reachableNodes.has(edge.to)
}));
this.nodes.update(nodeUpdates);
this.edges.update(edgeUpdates);
console.log(`Enhanced filters applied. Visible nodes: ${analysis.reachableNodes.size}`);
}
/**
* ENHANCED: Hide node with forensic integrity using reachability analysis
* Replaces the existing hideNodeAndOrphans() method
*/
hideNodeWithReachabilityAnalysis(nodeId) {
console.log(`Hiding node ${nodeId} with reachability analysis...`);
// Simulate hiding this node and analyze impact
const excludedNodes = new Set([nodeId]);
const analysis = this.analyzeGraphReachability(excludedNodes);
// Nodes that will become unreachable (should be hidden)
const nodesToHide = [nodeId, ...Array.from(analysis.unreachableNodes)];
// Store history for potential revert
const historyData = {
nodeIds: nodesToHide,
operation: 'hide_with_reachability',
timestamp: Date.now()
};
// Apply hiding with forensic documentation
const updates = nodesToHide.map(id => ({
id: id,
hidden: true,
forensicNote: `Hidden due to reachability analysis from ${nodeId}`
}));
this.nodes.update(updates);
this.addToHistory('hide', historyData);
console.log(`Forensic hide operation: ${nodesToHide.length} nodes hidden`, {
originalTarget: nodeId,
cascadeNodes: nodesToHide.length - 1,
isolatedClusters: analysis.isolatedClusters.length
});
return {
hiddenNodes: nodesToHide,
isolatedClusters: analysis.isolatedClusters
};
}
/**
* Apply filters to the graph
* @param {string} nodeType - The type of node to show ('all' for no filter)
* @param {number} minConfidence - The minimum confidence score for edges to be visible
* ENHANCED: Delete node with forensic integrity using reachability analysis
* Replaces the existing deleteNodeAndOrphans() method
*/
applyFilters(nodeType, minConfidence) {
console.log(`Applying filters: nodeType=${nodeType}, minConfidence=${minConfidence}`);
async deleteNodeWithReachabilityAnalysis(nodeId) {
console.log(`Deleting node ${nodeId} with reachability analysis...`);
const nodeUpdates = [];
const edgeUpdates = [];
// Simulate deletion and analyze impact
const excludedNodes = new Set([nodeId]);
const analysis = this.analyzeGraphReachability(excludedNodes);
const allNodes = this.nodes.get({ returnType: 'Object' });
const allEdges = this.edges.get();
// Nodes that will become unreachable (should be deleted)
const nodesToDelete = [nodeId, ...Array.from(analysis.unreachableNodes)];
// Determine which nodes are visible based on the nodeType filter
for (const nodeId in allNodes) {
const node = allNodes[nodeId];
const isVisible = (nodeType === 'all' || node.type === nodeType);
nodeUpdates.push({ id: nodeId, hidden: !isVisible });
// Collect forensic data before deletion
const historyData = {
nodes: nodesToDelete.map(id => this.nodes.get(id)).filter(Boolean),
edges: [],
operation: 'delete_with_reachability',
timestamp: Date.now(),
forensicAnalysis: {
originalTarget: nodeId,
cascadeNodes: nodesToDelete.length - 1,
isolatedClusters: analysis.isolatedClusters.length,
clusterSizes: analysis.isolatedClusters.map(cluster => cluster.length)
}
};
// Collect affected edges
nodesToDelete.forEach(id => {
const connectedEdgeIds = this.network.getConnectedEdges(id);
const edges = this.edges.get(connectedEdgeIds);
historyData.edges.push(...edges);
});
// Remove duplicates from edges
historyData.edges = Array.from(new Map(historyData.edges.map(e => [e.id, e])).values());
// Perform backend deletion with forensic logging
let operationFailed = false;
for (const targetNodeId of nodesToDelete) {
try {
const response = await fetch(`/api/graph/node/${targetNodeId}`, {
method: 'DELETE',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
forensicContext: {
operation: 'reachability_cascade_delete',
originalTarget: nodeId,
analysisTimestamp: historyData.timestamp
}
})
});
const result = await response.json();
if (!result.success) {
console.error(`Backend deletion failed for node ${targetNodeId}:`, result.error);
operationFailed = true;
break;
}
console.log(`Node ${targetNodeId} deleted from backend with forensic context`);
this.nodes.remove({ id: targetNodeId });
} catch (error) {
console.error(`API error during deletion of node ${targetNodeId}:`, error);
operationFailed = true;
break;
}
}
// Update nodes first to determine edge visibility
this.nodes.update(nodeUpdates);
// Handle operation results
if (!operationFailed) {
this.addToHistory('delete', historyData);
console.log(`Forensic delete operation completed:`, historyData.forensicAnalysis);
// Determine which edges are visible based on confidence and connected nodes
for (const edge of allEdges) {
const sourceNode = this.nodes.get(edge.from);
const targetNode = this.nodes.get(edge.to);
const confidence = edge.metadata ? edge.metadata.confidence_score : 0;
return {
success: true,
deletedNodes: nodesToDelete,
forensicAnalysis: historyData.forensicAnalysis
};
} else {
// Revert UI changes if backend operations failed - use update instead of add
console.log("Reverting UI changes due to backend failure");
this.nodes.update(historyData.nodes);
this.edges.update(historyData.edges);
const isVisible = confidence >= minConfidence &&
sourceNode && !sourceNode.hidden &&
targetNode && !targetNode.hidden;
edgeUpdates.push({ id: edge.id, hidden: !isVisible });
return {
success: false,
error: "Backend deletion failed, UI reverted"
};
}
this.edges.update(edgeUpdates);
console.log('Filters applied.');
}
/**
* Show context menu for a node
* @param {string} nodeId - The ID of the node
* @param {Event} event - The contextmenu event
*/
showContextMenu(nodeId, event) {
console.log('Showing context menu for node:', nodeId);
const node = this.nodes.get(nodeId);
// Create menu items
let menuItems = `
<ul>
<li data-action="focus" data-node-id="${nodeId}">
<span class="menu-icon">🎯</span>
<span>Focus on Node</span>
</li>
`;
// Add "Iterate Scan" option only for domain or IP nodes
if (node && (node.type === 'domain' || node.type === 'ip')) {
const disabled = this.isScanning ? 'disabled' : ''; // Check if scanning
const title = this.isScanning ? 'A scan is already in progress' : 'Iterate Scan (Add to Graph)'; // Add a title for disabled state
menuItems += `
<li data-action="iterate" data-node-id="${nodeId}" ${disabled} title="${title}">
<span class="menu-icon"></span>
<span>Iterate Scan (Add to Graph)</span>
</li>
`;
}
menuItems += `
<li data-action="hide" data-node-id="${nodeId}">
<span class="menu-icon">👁🗨</span>
<span>Hide Node</span>
</li>
<li data-action="delete" data-node-id="${nodeId}">
<span class="menu-icon">🗑</span>
<span>Delete Node</span>
</li>
<li data-action="details" data-node-id="${nodeId}">
<span class="menu-icon"></span>
<span>Show Details</span>
</li>
</ul>
`;
this.contextMenu.innerHTML = menuItems;
// Position the menu
this.contextMenu.style.left = `${event.clientX}px`;
this.contextMenu.style.top = `${event.clientY}px`;
this.contextMenu.style.display = 'block';
// Ensure menu stays within viewport
const rect = this.contextMenu.getBoundingClientRect();
if (rect.right > window.innerWidth) {
this.contextMenu.style.left = `${event.clientX - rect.width}px`;
}
if (rect.bottom > window.innerHeight) {
this.contextMenu.style.top = `${event.clientY - rect.height}px`;
}
// Add event listeners to menu items
this.contextMenu.querySelectorAll('li').forEach(item => {
item.addEventListener('click', (e) => {
if (e.currentTarget.hasAttribute('disabled')) { // Prevent action if disabled
e.stopPropagation();
return;
}
e.stopPropagation();
const action = e.currentTarget.dataset.action;
const nodeId = e.currentTarget.dataset.nodeId;
console.log('Context menu action:', action, 'for node:', nodeId);
this.performContextMenuAction(action, nodeId);
this.hideContextMenu();
});
});
}
/**
* Hide the context menu
*/
hideContextMenu() {
if (this.contextMenu) {
this.contextMenu.style.display = 'none';
}
}
/**
 * Perform a context menu action, routing hide and delete through the
 * reachability-analysis variants
 * @param {string} action - The selected menu action
 * @param {string} nodeId - The ID of the target node
 */
performContextMenuAction(action, nodeId) {
console.log('Performing enhanced action:', action, 'on node:', nodeId);
switch (action) {
case 'focus':
this.focusOnNode(nodeId);
break;
case 'iterate':
const event = new CustomEvent('iterateScan', {
detail: { nodeId }
});
document.dispatchEvent(event);
break;
case 'hide':
// Use enhanced method with reachability analysis
this.hideNodeWithReachabilityAnalysis(nodeId);
break;
case 'delete':
// Use enhanced method with reachability analysis
this.deleteNodeWithReachabilityAnalysis(nodeId);
break;
case 'details':
const node = this.nodes.get(nodeId);
if (node) {
this.showNodeDetails(node);
}
break;
default:
console.warn('Unknown action:', action);
}
}
/**
* Add an operation to the history stack
* @param {string} type - The type of operation ('hide', 'delete')
* @param {Object} data - The data needed to revert the operation
*/
addToHistory(type, data) {
this.history.push({ type, data });
}
/**
* Revert the last action
*/
async revertLastAction() {
const lastAction = this.history.pop();
if (!lastAction) {
console.log('No actions to revert.');
return;
}
switch (lastAction.type) {
case 'hide':
// Revert hiding nodes by un-hiding them
const updates = lastAction.data.nodeIds.map(id => ({ id: id, hidden: false }));
this.nodes.update(updates);
break;
case 'delete':
try {
const response = await fetch('/api/graph/revert', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(lastAction)
});
const result = await response.json();
if (result.success) {
console.log('Delete action reverted successfully on backend.');
// Re-add all nodes and edges from the history to the local view - use update instead of add
this.nodes.update(lastAction.data.nodes);
this.edges.update(lastAction.data.edges);
} else {
console.error('Failed to revert delete action on backend:', result.error);
// Push the action back onto the history stack if the API call failed
this.history.push(lastAction);
}
} catch (error) {
console.error('Error during revert API call:', error);
this.history.push(lastAction);
}
break;
}
}
/**
* Unhide all hidden nodes
*/
unhideAll() {
const allNodes = this.nodes.get({
filter: (node) => node.hidden === true
});
const updates = allNodes.map(node => ({ id: node.id, hidden: false }));
this.nodes.update(updates);
}
}
// Export for use in main.js
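
The backend handlers that graph.js calls above are not shown in this diff; the following is a minimal sketch of the contract they imply, assuming a Flask-style app. The handler names and the in-memory `GRAPH` store are illustrative assumptions, not the project's actual implementation — only the routes (`DELETE /api/graph/node/<id>`, `POST /api/graph/revert`) and the `success`/`error` response shape come from the frontend code.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for the real graph manager.
GRAPH = {"nodes": {}, "edges": {}}


@app.route("/api/graph/node/<node_id>", methods=["DELETE"])
def delete_node(node_id):
    payload = request.get_json(silent=True) or {}
    forensic_context = payload.get("forensicContext", {})
    if node_id not in GRAPH["nodes"]:
        return jsonify({"success": False, "error": f"Unknown node: {node_id}"}), 404
    GRAPH["nodes"].pop(node_id)
    # The forensic context (operation, originalTarget, analysisTimestamp)
    # would be written to the audit log here.
    return jsonify({"success": True, "deleted": node_id,
                    "forensicContext": forensic_context})


@app.route("/api/graph/revert", methods=["POST"])
def revert_last_action():
    action = request.get_json(silent=True) or {}
    if action.get("type") != "delete":
        return jsonify({"success": False, "error": "Unsupported revert type"})
    # Restore the nodes recorded in the frontend's history entry.
    for node in action.get("data", {}).get("nodes", []):
        GRAPH["nodes"][node["id"]] = node
    return jsonify({"success": True})
```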

File diff suppressed because it is too large

View File

@ -32,19 +32,8 @@
<div class="form-container">
<div class="input-group">
<label for="target-domain">Target Domain</label>
<input type="text" id="target-domain" placeholder="example.com" autocomplete="off">
</div>
<div class="input-group">
<label for="max-depth">Recursion Depth</label>
<select id="max-depth">
<option value="1">Depth 1 - Direct relationships</option>
<option value="2" selected>Depth 2 - Recommended</option>
<option value="3">Depth 3 - Extended analysis</option>
<option value="4">Depth 4 - Deep reconnaissance</option>
<option value="5">Depth 5 - Maximum depth</option>
</select>
<label for="target-input">Target Domain or IP</label>
<input type="text" id="target-input" placeholder="example.com or 8.8.8.8" autocomplete="off">
</div>
<div class="button-group">
@ -64,9 +53,9 @@
<span class="btn-icon">[EXPORT]</span>
<span>Download Results</span>
</button>
<button id="configure-api-keys" class="btn btn-secondary">
<button id="configure-settings" class="btn btn-secondary">
<span class="btn-icon">[API]</span>
<span>Configure API Keys</span>
<span>Settings</span>
</button>
</div>
</div>
@ -104,30 +93,22 @@
<div class="progress-bar">
<div id="progress-fill" class="progress-fill"></div>
</div>
<div class="progress-placeholder">
<span class="status-label">
⚠️ <strong>Important:</strong> Scanning large public services (e.g., Google, Cloudflare, AWS) is
<strong>discouraged</strong> due to rate limits (e.g., crt.sh).
<br><br>
Our task scheduler operates on a <strong>priority-based queue</strong>:
Short, targeted tasks like DNS are processed first, while resource-intensive requests (e.g., crt.sh)
are <strong>automatically deprioritized</strong> and may be processed later.
</span>
</div>
</div>
</section>
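
The placeholder text above summarizes the highest-priority-first scheduler: cheap DNS lookups run before heavyweight crt.sh queries. Below is a minimal sketch of such a queue, assuming a heapq-based ordering; the provider names and priority values are illustrative, not the values DNSRecon actually uses.

```python
import heapq
import itertools

# Lower numbers run first; quick DNS lookups outrank heavyweight crt.sh queries.
# These priority values are illustrative only.
PRIORITIES = {"dns": 0, "shodan": 5, "crtsh": 10}


class PriorityTaskQueue:
    """Highest-priority-first scheduler: pop always returns the cheapest pending task."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order within a priority

    def push(self, provider, target):
        priority = PRIORITIES.get(provider, 5)
        heapq.heappush(self._heap, (priority, next(self._counter), provider, target))

    def pop(self):
        _, _, provider, target = heapq.heappop(self._heap)
        return provider, target


queue = PriorityTaskQueue()
queue.push("crtsh", "example.com")   # expensive, automatically deprioritized
queue.push("dns", "example.com")     # cheap, processed first
print(queue.pop())                   # -> ('dns', 'example.com')
```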
<section class="visualization-panel">
<div class="panel-header">
<h2>Infrastructure Map</h2>
<div class="view-controls">
<div class="filter-group">
<label for="node-type-filter">Node Type:</label>
<select id="node-type-filter">
<option value="all">All</option>
<option value="domain">Domain</option>
<option value="ip">IP</option>
<option value="asn">ASN</option>
<option value="correlation_object">Correlation Object</option>
<option value="large_entity">Large Entity</option>
</select>
</div>
<div class="filter-group">
<label for="confidence-filter">Min Confidence:</label>
<input type="range" id="confidence-filter" min="0" max="1" step="0.1" value="0">
<span id="confidence-value">0</span>
</div>
</div>
</div>
<div id="network-graph" class="graph-container">
@ -205,16 +186,28 @@
</div>
</div>
<div id="api-key-modal" class="modal">
<div id="settings-modal" class="modal">
<div class="modal-content">
<div class="modal-header">
<h3>Configure API Keys</h3>
<button id="api-key-modal-close" class="modal-close">[×]</button>
<h3>Settings</h3>
<button id="settings-modal-close" class="modal-close">[×]</button>
</div>
<div class="modal-body">
<p class="modal-description">
Enter your API keys for enhanced data providers. Keys are stored in memory for the current session only and are never saved to disk.
Configure scan settings and API keys. Keys are stored in memory for the current session only.
Only provide API keys you don't use for anything else. Don't enter an API key if you don't trust me (best practice is that you don't).
</p>
<br>
<div class="input-group">
<label for="max-depth">Recursion Depth</label>
<select id="max-depth">
<option value="1">Depth 1 - Direct relationships</option>
<option value="2" selected>Depth 2 - Recommended</option>
<option value="3">Depth 3 - Extended analysis</option>
<option value="4">Depth 4 - Deep reconnaissance</option>
<option value="5">Depth 5 - Maximum depth</option>
</select>
</div>
<div id="api-key-inputs">
</div>
<div class="button-group" style="flex-direction: row; justify-content: flex-end;">
@ -222,7 +215,7 @@
<span>Reset</span>
</button>
<button id="save-api-keys" class="btn btn-primary">
<span>Save Keys</span>
<span>Save API-Keys</span>
</button>
</div>
</div>
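
The modal text states that API keys live only in memory for the current session. A minimal sketch of that idea follows, assuming a session-keyed store; the `SessionKeyStore` class and provider name are assumptions for illustration, not the project's actual implementation.

```python
from typing import Dict, Optional


class SessionKeyStore:
    """Holds API keys per session in memory only; nothing is written to disk."""

    def __init__(self) -> None:
        self._keys: Dict[str, Dict[str, str]] = {}

    def set_key(self, session_id: str, provider: str, api_key: str) -> None:
        self._keys.setdefault(session_id, {})[provider] = api_key

    def get_key(self, session_id: str, provider: str) -> Optional[str]:
        return self._keys.get(session_id, {}).get(provider)

    def clear_session(self, session_id: str) -> None:
        # Called when the session ends, so keys never outlive it.
        self._keys.pop(session_id, None)


store = SessionKeyStore()
store.set_key("session-123", "shodan", "EXAMPLE-KEY")
print(store.get_key("session-123", "shodan"))  # -> EXAMPLE-KEY
```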

View File

@ -48,3 +48,15 @@ def _is_valid_ip(ip: str) -> bool:
    except (ValueError, AttributeError):
        return False


def is_valid_target(target: str) -> bool:
    """
    Checks if the target is a valid domain or IP address.

    Args:
        target: The target string to validate.

    Returns:
        True if the target is a valid domain or IP, False otherwise.
    """
    return _is_valid_domain(target) or _is_valid_ip(target)
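
For illustration, a few calls to the new helper; the import path is an assumption based on this diff, not a confirmed module name.

```python
from utils.helpers import is_valid_target  # module path assumed

print(is_valid_target("example.com"))     # True  - valid domain
print(is_valid_target("8.8.8.8"))         # True  - valid IPv4 address
print(is_valid_target("not a target"))    # False - neither a domain nor an IP
```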