Compare commits

34 Commits: `database_c` ... `47ce7ff883`

Commit SHA1s:
47ce7ff883, 229746e1ec, 733e1da640, 97aa18f788, 15421dd4a5, ad4086b156, 0e92ec6e9a, baa57bfac2, f0f80be955, ecc143ddbb, 2c48316477, fc098aed28, 9285226cbc, 350055fcec, 4a5ecf7a37, 71b2855d01, 93a258170a, e2d4e12057, c076ee028f, cbfac0922a, 881f7b74e5, c347581a6c, 30ee21f087, 2496ca26a5, 8aa3c4933e, fc326a66c8, 51902e3155, a261d706c8, 2410e689b8, 62470673fe, 2658bd148b, f02381910d, 674ac59c98, 434d1f4803
```diff
@@ -29,6 +29,6 @@ MAX_CONCURRENT_REQUESTS=5
 # The number of results from a provider that triggers the "large entity" grouping.
 LARGE_ENTITY_THRESHOLD=100
 # The number of times to retry a target if a provider fails.
-MAX_RETRIES_PER_TARGET=3
+MAX_RETRIES_PER_TARGET=8
 # How long cached provider responses are stored (in hours).
 CACHE_EXPIRY_HOURS=12
```
.gitignore (vendored, 1 line changed)

```diff
@@ -169,3 +169,4 @@ cython_debug/
 #.idea/
 
 dump.rdb
+cache/
```
README.md (226 lines changed)

````diff
@@ -4,28 +4,32 @@ DNSRecon is an interactive, passive reconnaissance tool designed to map adversar
 
 **Current Status: Phase 2 Implementation**
 
-  - ✅ Core infrastructure and graph engine
-  - ✅ Multi-provider support (crt.sh, DNS, Shodan)
-  - ✅ Session-based multi-user support
-  - ✅ Real-time web interface with interactive visualization
-  - ✅ Forensic logging system and JSON export
+  * ✅ Core infrastructure and graph engine
+  * ✅ Multi-provider support (crt.sh, DNS, Shodan)
+  * ✅ Session-based multi-user support
+  * ✅ Real-time web interface with interactive visualization
+  * ✅ Forensic logging system and JSON export
 
 -----
 
 ## Features
 
-  - **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
-  - **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
-  - **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
-  - **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
-  - **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
-  - **Session Management**: Supports concurrent user sessions with isolated scanner instances.
+  * **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
+  * **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
+  * **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
+  * **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
+  * **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
+  * **Session Management**: Supports concurrent user sessions with isolated scanner instances.
 
 -----
 
 ## Installation
 
 ### Prerequisites
 
-  - Python 3.8 or higher
-  - A modern web browser with JavaScript enabled
-  - (Recommended) A Linux host for running the application and the optional DNS cache.
+  * Python 3.8 or higher
+  * A modern web browser with JavaScript enabled
+  * (Recommended) A Linux host for running the application and the optional DNS cache.
 
 ### 1\. Clone the Project
````
````diff
@@ -44,156 +48,50 @@ source venv/bin/activate
 pip install -r requirements.txt
 ```
 
-### 3\. (Optional but Recommended) Set up a Local DNS Caching Resolver
-
-Running a local DNS caching resolver can significantly speed up DNS queries and reduce your network footprint. Here’s how to set up `unbound` on a Debian-based Linux distribution (like Ubuntu).
-
-**a. Install Unbound:**
-
-```bash
-sudo apt update
-sudo apt install unbound -y
-```
-
-**b. Configure Unbound:**
-Create a new configuration file for DNSRecon:
-
-```bash
-sudo nano /etc/unbound/unbound.conf.d/dnsrecon.conf
-```
-
-Add the following content to the file:
-
-```
-server:
-    # Listen on localhost for all users
-    interface: 127.0.0.1
-    access-control: 0.0.0.0/0 refuse
-    access-control: 127.0.0.0/8 allow
-
-    # Enable prefetching of popular items
-    prefetch: yes
-```
-
-**c. Restart Unbound and set it as the default resolver:**
-
-```bash
-sudo systemctl restart unbound
-sudo systemctl enable unbound
-```
-
-To use this resolver for your system, you may need to update your network settings to point to `127.0.0.1` as your DNS server.
-
-**d. Update DNSProvider to use the local resolver:**
-In `dnsrecon/providers/dns_provider.py`, you can explicitly set the resolver's nameservers in the `__init__` method:
-
-```python
-# dnsrecon/providers/dns_provider.py
-
-class DNSProvider(BaseProvider):
-    def __init__(self, session_config=None):
-        """Initialize DNS provider with session-specific configuration."""
-        super().__init__(...)
-
-        # Configure DNS resolver
-        self.resolver = dns.resolver.Resolver()
-        self.resolver.nameservers = ['127.0.0.1']  # Use local caching resolver
-        self.resolver.timeout = 5
-        self.resolver.lifetime = 10
-```
+The `requirements.txt` file contains the following dependencies:
+
+  * Flask\>=2.3.3
+  * networkx\>=3.1
+  * requests\>=2.31.0
+  * python-dateutil\>=2.8.2
+  * Werkzeug\>=2.3.7
+  * urllib3\>=2.0.0
+  * dnspython\>=2.4.2
+  * gunicorn
+  * redis
+  * python-dotenv
+
+-----
+
+## Configuration
+
+DNSRecon is configured using a `.env` file. You can copy the provided example file and edit it to suit your needs:
+
+```bash
+cp .env.example .env
+```
+
+The following environment variables are available for configuration:
+
+| Variable | Description | Default |
+| :--- | :--- | :--- |
+| `SHODAN_API_KEY` | Your Shodan API key. | |
+| `FLASK_SECRET_KEY` | A strong, random secret key for session security. | `your-very-secret-and-random-key-here` |
+| `FLASK_HOST` | The host address for the Flask application. | `127.0.0.1` |
+| `FLASK_PORT` | The port for the Flask application. | `5000` |
+| `FLASK_DEBUG` | Enable or disable Flask's debug mode. | `True` |
+| `FLASK_PERMANENT_SESSION_LIFETIME_HOURS` | How long a user's session in the browser lasts (in hours). | `2` |
+| `SESSION_TIMEOUT_MINUTES` | How long inactive scanner data is stored in Redis (in minutes). | `60` |
+| `DEFAULT_RECURSION_DEPTH` | The default number of levels to recurse when scanning. | `2` |
+| `DEFAULT_TIMEOUT` | Default timeout for provider API requests in seconds. | `30` |
+| `MAX_CONCURRENT_REQUESTS` | The number of concurrent provider requests to make. | `5` |
+| `LARGE_ENTITY_THRESHOLD` | The number of results from a provider that triggers the "large entity" grouping. | `100` |
+| `MAX_RETRIES_PER_TARGET` | The number of times to retry a target if a provider fails. | `8` |
+| `CACHE_EXPIRY_HOURS` | How long cached provider responses are stored (in hours). | `12` |
+
+-----
````
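For orientation, settings like these are typically read at startup via `python-dotenv` (which is in `requirements.txt`); a minimal sketch of that pattern — the variable names come from the table above, everything else (casting, fallbacks) is illustrative, not taken from the repository:

```python
# Hypothetical startup snippet: load .env values with the documented defaults as fallbacks.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory

max_retries = int(os.getenv('MAX_RETRIES_PER_TARGET', 8))
cache_expiry = int(os.getenv('CACHE_EXPIRY_HOURS', 12))
flask_port = int(os.getenv('FLASK_PORT', 5000))

print(f"retries={max_retries}, cache_expiry={cache_expiry}h, port={flask_port}")
```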
## Usage (Development)

### 1\. Start the Application

```bash
python app.py
```

### 2\. Open Your Browser

Navigate to `http://127.0.0.1:5000`.

### 3\. Basic Reconnaissance Workflow

1. **Enter Target Domain**: Input a domain like `example.com`.
2. **Select Recursion Depth**: Depth 2 is recommended for most investigations.
3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin.
4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered.
5. **Analyze and Export**: Interact with the graph and download the results when the scan is complete.

## Production Deployment

To deploy DNSRecon in a production environment, follow these steps:

### 1\. Use a Production WSGI Server

Do not use the built-in Flask development server for production. Use a WSGI server like **Gunicorn**:

```bash
pip install gunicorn
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
```

### 2\. Configure Environment Variables

Set the following environment variables for a secure and configurable deployment:

```bash
# Generate a strong, random secret key
export SECRET_KEY='your-super-secret-and-random-key'

# Set Flask to production mode
export FLASK_ENV='production'
export FLASK_DEBUG=False

# API keys (optional, but recommended for full functionality)
export SHODAN_API_KEY="your_shodan_key"
```

### 3\. Use a Reverse Proxy

Set up a reverse proxy like **Nginx** to sit in front of the Gunicorn server. This provides several benefits, including:

  - **TLS/SSL Termination**: Securely handle HTTPS traffic.
  - **Load Balancing**: Distribute traffic across multiple application instances.
  - **Serving Static Files**: Efficiently serve CSS and JavaScript files.

**Example Nginx Configuration:**

```nginx
server {
    listen 80;
    server_name your_domain.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name your_domain.com;

    # SSL cert configuration
    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        alias /path/to/your/dnsrecon/static;
        expires 30d;
    }
}
```

```diff
-## Autostart with systemd
+## Systemd Service
```

To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.
@@ -245,12 +143,18 @@ You can check the status of the service at any time with:

```bash
sudo systemctl status dnsrecon.service
```
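The unit file itself falls outside the lines shown in this hunk. A minimal sketch of what such a service could look like — the install path, user, and gunicorn bind address are assumptions, not taken from the repository:

```ini
# /etc/systemd/system/dnsrecon.service — illustrative only; paths are hypothetical.
[Unit]
Description=DNSRecon passive reconnaissance service
After=network.target redis-server.service

[Service]
User=dnsrecon
WorkingDirectory=/opt/dnsrecon
ExecStart=/opt/dnsrecon/venv/bin/gunicorn --workers 4 --bind 127.0.0.1:5000 app:app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```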
## Security Considerations

  - **API Keys**: API keys are stored in memory for the duration of a user session and are not written to disk.
  - **Rate Limiting**: DNSRecon includes built-in rate limiting to be respectful to data sources.
  - **Local Use**: The application is designed for local or trusted network use and does not have built-in authentication. **Do not expose it directly to the internet without proper security controls.**

-----

## License

```diff
-This project is licensed under the terms of the license agreement found in the `LICENSE` file.
+This project is licensed under the terms of the **BSD-3-Clause** license.
+
+Copyright (c) 2025 mstoeck3.
+
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```
app.py (126 lines changed)

````diff
@@ -13,6 +13,8 @@ import io
 
 from core.session_manager import session_manager
 from config import config
+from core.graph_manager import NodeType
+from utils.helpers import is_valid_target
 
 
 app = Flask(__name__)
@@ -64,18 +66,21 @@ def start_scan():
 
     try:
         data = request.get_json()
-        if not data or 'target_domain' not in data:
-            return jsonify({'success': False, 'error': 'Missing target_domain in request'}), 400
+        if not data or 'target' not in data:
+            return jsonify({'success': False, 'error': 'Missing target in request'}), 400
 
-        target_domain = data['target_domain'].strip()
+        target = data['target'].strip()
         max_depth = data.get('max_depth', config.default_recursion_depth)
         clear_graph = data.get('clear_graph', True)
+        force_rescan_target = data.get('force_rescan_target', None)  # **FIX**: Get the new parameter
 
-        print(f"Parsed - target_domain: '{target_domain}', max_depth: {max_depth}, clear_graph: {clear_graph}")
+        print(f"Parsed - target: '{target}', max_depth: {max_depth}, clear_graph: {clear_graph}, force_rescan: {force_rescan_target}")
 
         # Validation
-        if not target_domain:
-            return jsonify({'success': False, 'error': 'Target domain cannot be empty'}), 400
+        if not target:
+            return jsonify({'success': False, 'error': 'Target cannot be empty'}), 400
+        if not is_valid_target(target):
+            return jsonify({'success': False, 'error': 'Invalid target format. Please enter a valid domain or IP address.'}), 400
         if not isinstance(max_depth, int) or not 1 <= max_depth <= 5:
            return jsonify({'success': False, 'error': 'Max depth must be an integer between 1 and 5'}), 400
 
@@ -100,7 +105,7 @@ def start_scan():
 
     print(f"Using scanner {id(scanner)} in session {user_session_id}")
 
-    success = scanner.start_scan(target_domain, max_depth, clear_graph=clear_graph)
+    success = scanner.start_scan(target, max_depth, clear_graph=clear_graph, force_rescan_target=force_rescan_target)  # **FIX**: Pass the new parameter
 
     if success:
         return jsonify({
````
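With the rename from `target_domain` to `target`, a client request against this handler might look as follows — a sketch only: the route path and dev-server address are assumptions, since the hunk shows just the handler body:

```python
# Hypothetical client call; the '/api/scan/start' path is assumed, not shown in the diff.
import requests

resp = requests.post("http://127.0.0.1:5000/api/scan/start", json={
    "target": "example.com",        # renamed from 'target_domain'
    "max_depth": 2,
    "clear_graph": True,
    "force_rescan_target": None,    # new optional parameter
})
print(resp.json())
```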
````diff
@@ -277,6 +282,108 @@ def get_graph_data():
             }
         }), 500
 
+@app.route('/api/graph/large-entity/extract', methods=['POST'])
+def extract_from_large_entity():
+    """Extract a node from a large entity, making it a standalone node."""
+    try:
+        data = request.get_json()
+        large_entity_id = data.get('large_entity_id')
+        node_id = data.get('node_id')
+
+        if not large_entity_id or not node_id:
+            return jsonify({'success': False, 'error': 'Missing required parameters'}), 400
+
+        user_session_id, scanner = get_user_scanner()
+        if not scanner:
+            return jsonify({'success': False, 'error': 'No active session found'}), 404
+
+        success = scanner.extract_node_from_large_entity(large_entity_id, node_id)
+
+        if success:
+            session_manager.update_session_scanner(user_session_id, scanner)
+            return jsonify({'success': True, 'message': f'Node {node_id} extracted successfully.'})
+        else:
+            return jsonify({'success': False, 'error': f'Failed to extract node {node_id}.'}), 500
+
+    except Exception as e:
+        print(f"ERROR: Exception in extract_from_large_entity endpoint: {e}")
+        traceback.print_exc()
+        return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
+
+@app.route('/api/graph/node/<node_id>', methods=['DELETE'])
+def delete_graph_node(node_id):
+    """Delete a node from the graph for the current user session."""
+    try:
+        user_session_id, scanner = get_user_scanner()
+        if not scanner:
+            return jsonify({'success': False, 'error': 'No active session found'}), 404
+
+        success = scanner.graph.remove_node(node_id)
+
+        if success:
+            # Persist the change
+            session_manager.update_session_scanner(user_session_id, scanner)
+            return jsonify({'success': True, 'message': f'Node {node_id} deleted successfully.'})
+        else:
+            return jsonify({'success': False, 'error': f'Node {node_id} not found in graph.'}), 404
+
+    except Exception as e:
+        print(f"ERROR: Exception in delete_graph_node endpoint: {e}")
+        traceback.print_exc()
+        return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
+
+
+@app.route('/api/graph/revert', methods=['POST'])
+def revert_graph_action():
+    """Reverts a graph action, such as re-adding a deleted node."""
+    try:
+        data = request.get_json()
+        if not data or 'type' not in data or 'data' not in data:
+            return jsonify({'success': False, 'error': 'Invalid revert request format'}), 400
+
+        user_session_id, scanner = get_user_scanner()
+        if not scanner:
+            return jsonify({'success': False, 'error': 'No active session found'}), 404
+
+        action_type = data['type']
+        action_data = data['data']
+
+        if action_type == 'delete':
+            # Re-add the node
+            node_to_add = action_data.get('node')
+            if node_to_add:
+                scanner.graph.add_node(
+                    node_id=node_to_add['id'],
+                    node_type=NodeType(node_to_add['type']),
+                    attributes=node_to_add.get('attributes'),
+                    description=node_to_add.get('description'),
+                    metadata=node_to_add.get('metadata')
+                )
+
+            # Re-add the edges
+            edges_to_add = action_data.get('edges', [])
+            for edge in edges_to_add:
+                # Add edge only if both nodes exist to prevent errors
+                if scanner.graph.graph.has_node(edge['from']) and scanner.graph.graph.has_node(edge['to']):
+                    scanner.graph.add_edge(
+                        source_id=edge['from'],
+                        target_id=edge['to'],
+                        relationship_type=edge['metadata']['relationship_type'],
+                        confidence_score=edge['metadata']['confidence_score'],
+                        source_provider=edge['metadata']['source_provider'],
+                        raw_data=edge.get('raw_data', {})
+                    )
+
+            # Persist the change
+            session_manager.update_session_scanner(user_session_id, scanner)
+            return jsonify({'success': True, 'message': 'Delete action reverted successfully.'})
+
+        return jsonify({'success': False, 'error': f'Unknown revert action type: {action_type}'}), 400
+
+    except Exception as e:
+        print(f"ERROR: Exception in revert_graph_action endpoint: {e}")
+        traceback.print_exc()
+        return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
+
+
 @app.route('/api/export', methods=['GET'])
````
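These endpoints are session-bound, so they can be exercised with any HTTP client that keeps cookies. A rough sketch with `requests` (which is in `requirements.txt`) — the endpoint paths come from the handlers above, while the server address and the node IDs are assumptions:

```python
# Illustrative client calls for the new graph-editing endpoints.
import requests

base = "http://127.0.0.1:5000"  # assumed dev server address
s = requests.Session()          # keeps the Flask session cookie across calls

# Promote a node out of a "large entity" grouping (IDs are hypothetical).
r = s.post(f"{base}/api/graph/large-entity/extract",
           json={"large_entity_id": "large_entity_example", "node_id": "sub.example.com"})
print(r.json())

# Delete a node (and its edges) from the current session's graph.
r = s.delete(f"{base}/api/graph/node/sub.example.com")
print(r.json())
```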
````diff
@@ -330,9 +437,10 @@ def get_providers():
     user_session_id, scanner = get_user_scanner()
 
     if scanner:
+        # Updated debug print to be consistent with the new progress bar logic
         completed_tasks = scanner.indicators_completed
-        enqueued_tasks = len(scanner.task_queue)
-        print(f"DEBUG: Tasks - Completed: {completed_tasks}, Enqueued: {enqueued_tasks}")
+        total_tasks = scanner.total_tasks_ever_enqueued
+        print(f"DEBUG: Task Progress - Completed: {completed_tasks}, Total Enqueued: {total_tasks}")
     else:
         print("DEBUG: No active scanner session found.")
````
config.py

````diff
@@ -1,3 +1,5 @@
+# dnsrecon-reduced/config.py
+
 """
 Configuration management for DNSRecon tool.
 Handles API key storage, rate limiting, and default settings.
@@ -19,10 +21,10 @@ class Config:
 
         # --- General Settings ---
         self.default_recursion_depth = 2
-        self.default_timeout = 15
+        self.default_timeout = 30
         self.max_concurrent_requests = 5
         self.large_entity_threshold = 100
-        self.max_retries_per_target = 3
+        self.max_retries_per_target = 8
         self.cache_expiry_hours = 12
 
         # --- Provider Caching Settings ---
@@ -30,7 +32,7 @@ class Config:
 
         # --- Rate Limiting (requests per minute) ---
         self.rate_limits = {
-            'crtsh': 30,
+            'crtsh': 5,
            'shodan': 60,
            'dns': 100
        }
````
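Assuming `get_rate_limit` is a plain lookup into `rate_limits` (the accessor itself is not shown in this diff, but the scanner calls it later in this compare), the practical effect of the change is:

```python
# Hypothetical check of the effective limits after this change.
from config import config

print(config.rate_limits['crtsh'])   # 5 requests/minute (was 30)
print(config.get_rate_limit('dns'))  # 100 requests/minute, assuming a simple lookup
```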
core/graph_manager.py

````diff
@@ -1,8 +1,10 @@
-# core/graph_manager.py
+# dnsrecon-reduced/core/graph_manager.py
 
 """
 Graph data model for DNSRecon using NetworkX.
 Manages in-memory graph storage with confidence scoring and forensic metadata.
+Now fully compatible with the unified ProviderResult data model.
+UPDATED: Fixed certificate styling and correlation edge labeling.
 """
 import re
 from datetime import datetime, timezone
@@ -28,6 +30,7 @@ class GraphManager:
     """
     Thread-safe graph manager for DNSRecon infrastructure mapping.
     Uses NetworkX for in-memory graph storage with confidence scoring.
+    Compatible with unified ProviderResult data model.
     """
 
     def __init__(self):
@@ -38,6 +41,7 @@ class GraphManager:
         self.correlation_index = {}
         # Compile regex for date filtering for efficiency
         self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
+        self.EXCLUDED_KEYS = ['confidence', 'provider', 'timestamp', 'type', 'crtsh_cert_validity_period_days']
 
     def __getstate__(self):
         """Prepare GraphManager for pickling, excluding compiled regex."""
````
````diff
@@ -52,240 +56,115 @@ class GraphManager:
         self.__dict__.update(state)
         self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
 
-    def _update_correlation_index(self, node_id: str, data: Any, path: List[str] = [], parent_attr: str = ""):
-        """Recursively traverse metadata and add hashable values to the index with better path tracking."""
-        if path is None:
-            path = []
-
-        if isinstance(data, dict):
-            for key, value in data.items():
-                self._update_correlation_index(node_id, value, path + [key], key)
-        elif isinstance(data, list):
-            for i, item in enumerate(data):
-                # Instead of just using [i], include the parent attribute context
-                list_path_component = f"[{i}]" if not parent_attr else f"{parent_attr}[{i}]"
-                self._update_correlation_index(node_id, item, path + [list_path_component], parent_attr)
-        else:
-            self._add_to_correlation_index(node_id, data, ".".join(path), parent_attr)
-
-    def _add_to_correlation_index(self, node_id: str, value: Any, path_str: str, parent_attr: str = ""):
-        """Add a hashable value to the correlation index, filtering out noise."""
-        if not isinstance(value, (str, int, float, bool)) or value is None:
-            return
-
-        # Ignore certain paths that contain noisy, non-unique identifiers
-        if any(keyword in path_str.lower() for keyword in ['count', 'total', 'timestamp', 'date']):
-            return
-
-        # Filter out common low-entropy values and date-like strings
-        if isinstance(value, str):
-            # FIXED: Prevent correlation on date/time strings.
-            if self.date_pattern.match(value):
-                return
-            if len(value) < 4 or value.lower() in ['true', 'false', 'unknown', 'none', 'crt.sh']:
-                return
-        elif isinstance(value, int) and (abs(value) < 1024 or abs(value) > 65535):
-            return  # Ignore small integers and common port numbers
-        elif isinstance(value, bool):
-            return  # Ignore boolean values
-
-        # Add the valuable correlation data to the index
-        if value not in self.correlation_index:
-            self.correlation_index[value] = {}
-        if node_id not in self.correlation_index[value]:
-            self.correlation_index[value][node_id] = []
-
-        # Store both the full path and the parent attribute for better edge labeling
-        correlation_entry = {
-            'path': path_str,
-            'parent_attr': parent_attr,
-            'meaningful_attr': self._extract_meaningful_attribute(path_str, parent_attr)
-        }
-
-        if correlation_entry not in self.correlation_index[value][node_id]:
-            self.correlation_index[value][node_id].append(correlation_entry)
-
-    def _extract_meaningful_attribute(self, path_str: str, parent_attr: str = "") -> str:
-        """Extract the most meaningful attribute name from a path string."""
-        if not path_str:
-            return "unknown"
-
-        path_parts = path_str.split('.')
-
-        # Look for the last non-array-index part
-        for part in reversed(path_parts):
-            # Skip array indices like [0], [1], etc.
-            if not (part.startswith('[') and part.endswith(']') and part[1:-1].isdigit()):
-                # Clean up compound names like "hostnames[0]" to just "hostnames"
-                clean_part = re.sub(r'\[\d+\]$', '', part)
-                if clean_part:
-                    return clean_part
-
-        # Fallback to parent attribute if available
-        if parent_attr:
-            return parent_attr
-
-        # Last resort - use the first meaningful part
-        for part in path_parts:
-            if not (part.startswith('[') and part.endswith(']') and part[1:-1].isdigit()):
-                clean_part = re.sub(r'\[\d+\]$', '', part)
-                if clean_part:
-                    return clean_part
-
-        return "correlation"
-
-    def _check_for_correlations(self, new_node_id: str, data: Any, path: List[str] = [], parent_attr: str = "") -> List[Dict]:
-        """Recursively traverse metadata to find correlations with existing data."""
-        if path is None:
-            path = []
-
-        all_correlations = []
-        if isinstance(data, dict):
-            for key, value in data.items():
-                if key == 'source':  # Avoid correlating on the provider name
-                    continue
-                all_correlations.extend(self._check_for_correlations(new_node_id, value, path + [key], key))
-        elif isinstance(data, list):
-            for i, item in enumerate(data):
-                list_path_component = f"[{i}]" if not parent_attr else f"{parent_attr}[{i}]"
-                all_correlations.extend(self._check_for_correlations(new_node_id, item, path + [list_path_component], parent_attr))
-        else:
-            value = data
-            if value in self.correlation_index:
-                existing_nodes_with_paths = self.correlation_index[value]
-                unique_nodes = set(existing_nodes_with_paths.keys())
-                unique_nodes.add(new_node_id)
-
-                if len(unique_nodes) < 2:
-                    return all_correlations  # Correlation must involve at least two distinct nodes
-
-                new_source = {
-                    'node_id': new_node_id,
-                    'path': ".".join(path),
-                    'parent_attr': parent_attr,
-                    'meaningful_attr': self._extract_meaningful_attribute(".".join(path), parent_attr)
-                }
-                all_sources = [new_source]
-
-                for node_id, path_entries in existing_nodes_with_paths.items():
-                    for entry in path_entries:
-                        if isinstance(entry, dict):
-                            all_sources.append({
-                                'node_id': node_id,
-                                'path': entry['path'],
-                                'parent_attr': entry.get('parent_attr', ''),
-                                'meaningful_attr': entry.get('meaningful_attr', self._extract_meaningful_attribute(entry['path'], entry.get('parent_attr', '')))
-                            })
-                        else:
-                            # Handle legacy string-only entries
-                            all_sources.append({
-                                'node_id': node_id,
-                                'path': str(entry),
-                                'parent_attr': '',
-                                'meaningful_attr': self._extract_meaningful_attribute(str(entry))
-                            })
-
-                all_correlations.append({
-                    'value': value,
-                    'sources': all_sources,
-                    'nodes': list(unique_nodes)
-                })
-        return all_correlations
-
-    def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[Dict[str, Any]] = None,
-                 description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
-        """Add a node to the graph, update attributes, and process correlations."""
-        is_new_node = not self.graph.has_node(node_id)
-        if is_new_node:
-            self.graph.add_node(node_id, type=node_type.value,
-                                added_timestamp=datetime.now(timezone.utc).isoformat(),
-                                attributes=attributes or {},
-                                description=description,
-                                metadata=metadata or {})
-        else:
-            # Safely merge new attributes into existing attributes
-            if attributes:
-                existing_attributes = self.graph.nodes[node_id].get('attributes', {})
-                existing_attributes.update(attributes)
-                self.graph.nodes[node_id]['attributes'] = existing_attributes
-            if description:
-                self.graph.nodes[node_id]['description'] = description
-            if metadata:
-                existing_metadata = self.graph.nodes[node_id].get('metadata', {})
-                existing_metadata.update(metadata)
-                self.graph.nodes[node_id]['metadata'] = existing_metadata
-
-        if attributes and node_type != NodeType.CORRELATION_OBJECT:
-            correlations = self._check_for_correlations(node_id, attributes)
-            for corr in correlations:
-                value = corr['value']
-
-                # STEP 1: Substring check against all existing nodes
-                if self._correlation_value_matches_existing_node(value):
-                    # Skip creating correlation node - would be redundant
-                    continue
-
-                eligible_nodes = set(corr['nodes'])
-
-                if len(eligible_nodes) < 2:
-                    # Need at least 2 nodes to create a correlation
-                    continue
-
-                # STEP 3: Check for existing correlation node with same connection pattern
-                correlation_nodes_with_pattern = self._find_correlation_nodes_with_same_pattern(eligible_nodes)
-
-                if correlation_nodes_with_pattern:
-                    # STEP 4: Merge with existing correlation node
-                    target_correlation_node = correlation_nodes_with_pattern[0]
-                    self._merge_correlation_values(target_correlation_node, value, corr)
-                else:
-                    # STEP 5: Create new correlation node for eligible nodes only
-                    correlation_node_id = f"corr_{abs(hash(str(sorted(eligible_nodes))))}"
-                    self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT,
-                                  metadata={'values': [value], 'sources': corr['sources'],
-                                            'correlated_nodes': list(eligible_nodes)})
-
-                # Create edges from eligible nodes to this correlation node with better labeling
-                for c_node_id in eligible_nodes:
-                    if self.graph.has_node(c_node_id):
-                        # Find the best attribute name for this node
-                        meaningful_attr = self._find_best_attribute_name_for_node(c_node_id, corr['sources'])
-                        relationship_type = f"c_{meaningful_attr}"
-                        self.add_edge(c_node_id, correlation_node_id, relationship_type, confidence_score=0.9)
-
-            self._update_correlation_index(node_id, attributes)
-
-        self.last_modified = datetime.now(timezone.utc).isoformat()
-        return is_new_node
-
-    def _find_best_attribute_name_for_node(self, node_id: str, sources: List[Dict]) -> str:
-        """Find the best attribute name for a correlation edge by looking at the sources."""
-        node_sources = [s for s in sources if s['node_id'] == node_id]
-
-        if not node_sources:
-            return "correlation"
-
-        # Use the meaningful_attr if available
-        for source in node_sources:
-            meaningful_attr = source.get('meaningful_attr')
-            if meaningful_attr and meaningful_attr != "unknown":
-                return meaningful_attr
-
-        # Fallback to parent_attr
-        for source in node_sources:
-            parent_attr = source.get('parent_attr')
-            if parent_attr:
-                return parent_attr
-
-        # Last resort - extract from path
-        for source in node_sources:
-            path = source.get('path', '')
-            if path:
-                extracted = self._extract_meaningful_attribute(path)
-                if extracted != "unknown":
-                    return extracted
-
-        return "correlation"
-
+    def process_correlations_for_node(self, node_id: str):
+        """
+        UPDATED: Process correlations for a given node with enhanced tracking.
+        Now properly tracks which attribute/provider created each correlation.
+        """
+        if not self.graph.has_node(node_id):
+            return
+
+        node_attributes = self.graph.nodes[node_id].get('attributes', [])
+
+        # Process each attribute for potential correlations
+        for attr in node_attributes:
+            attr_name = attr.get('name')
+            attr_value = attr.get('value')
+            attr_provider = attr.get('provider', 'unknown')
+
+            # Skip excluded attributes and invalid values
+            if attr_name in self.EXCLUDED_KEYS or not isinstance(attr_value, (str, int, float, bool)) or attr_value is None:
+                continue
+
+            if isinstance(attr_value, bool):
+                continue
+
+            if isinstance(attr_value, str) and (len(attr_value) < 4 or self.date_pattern.match(attr_value)):
+                continue
+
+            # Initialize correlation tracking for this value
+            if attr_value not in self.correlation_index:
+                self.correlation_index[attr_value] = {
+                    'nodes': set(),
+                    'sources': []  # Track which provider/attribute combinations contributed
+                }
+
+            # Add this node and source information
+            self.correlation_index[attr_value]['nodes'].add(node_id)
+
+            # Track the source of this correlation value
+            source_info = {
+                'node_id': node_id,
+                'provider': attr_provider,
+                'attribute': attr_name,
+                'path': f"{attr_provider}_{attr_name}"
+            }
+
+            # Add source if not already present (avoid duplicates)
+            existing_sources = [s for s in self.correlation_index[attr_value]['sources']
+                                if s['node_id'] == node_id and s['path'] == source_info['path']]
+            if not existing_sources:
+                self.correlation_index[attr_value]['sources'].append(source_info)
+
+            # Create correlation node if we have multiple nodes with this value
+            if len(self.correlation_index[attr_value]['nodes']) > 1:
+                self._create_enhanced_correlation_node_and_edges(attr_value, self.correlation_index[attr_value])
+
+    def _create_enhanced_correlation_node_and_edges(self, value, correlation_data):
+        """
+        UPDATED: Create correlation node and edges with detailed provider tracking.
+        """
+        correlation_node_id = f"corr_{hash(str(value)) & 0x7FFFFFFF}"
+        nodes = correlation_data['nodes']
+        sources = correlation_data['sources']
+
+        # Create or update correlation node
+        if not self.graph.has_node(correlation_node_id):
+            # Determine the most common provider/attribute combination
+            provider_counts = {}
+            for source in sources:
+                key = f"{source['provider']}_{source['attribute']}"
+                provider_counts[key] = provider_counts.get(key, 0) + 1
+
+            # Use the most common provider/attribute as the primary label
+            primary_source = max(provider_counts.items(), key=lambda x: x[1])[0] if provider_counts else "unknown_correlation"
+
+            metadata = {
+                'value': value,
+                'correlated_nodes': list(nodes),
+                'sources': sources,
+                'primary_source': primary_source,
+                'correlation_count': len(nodes)
+            }
+
+            self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT, metadata=metadata)
+            print(f"Created correlation node {correlation_node_id} for value '{value}' with {len(nodes)} nodes")
+
+        # Create edges from each node to the correlation node
+        for source in sources:
+            node_id = source['node_id']
+            provider = source['provider']
+            attribute = source['attribute']
+
+            if self.graph.has_node(node_id) and not self.graph.has_edge(node_id, correlation_node_id):
+                # Format relationship label as "corr_provider_attribute"
+                relationship_label = f"corr_{provider}_{attribute}"
+
+                self.add_edge(
+                    source_id=node_id,
+                    target_id=correlation_node_id,
+                    relationship_type=relationship_label,
+                    confidence_score=0.9,
+                    source_provider=provider,
+                    raw_data={
+                        'correlation_value': value,
+                        'original_attribute': attribute,
+                        'correlation_type': 'attribute_matching'
+                    }
+                )
+
+                print(f"Added correlation edge: {node_id} -> {correlation_node_id} ({relationship_label})")
+
     def _has_direct_edge_bidirectional(self, node_a: str, node_b: str) -> bool:
         """
````
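Read end to end, the new flow is: providers attach flat attribute lists to nodes, and any sufficiently distinctive value seen on more than one node gets a shared correlation node labeled `corr_<provider>_<attribute>`. An illustrative walk-through, assuming the scanner invokes `process_correlations_for_node` after adding nodes (the attribute name and value here are hypothetical):

```python
# Two domains sharing an attribute value end up joined via a correlation node.
from core.graph_manager import GraphManager, NodeType

gm = GraphManager()
gm.add_node("a.example.com", NodeType.DOMAIN,
            attributes=[{"name": "shodan_host_key", "value": "AAAAC3NzaC1example", "provider": "shodan"}])
gm.add_node("b.example.com", NodeType.DOMAIN,
            attributes=[{"name": "shodan_host_key", "value": "AAAAC3NzaC1example", "provider": "shodan"}])

gm.process_correlations_for_node("a.example.com")  # one node indexed, no correlation yet
gm.process_correlations_for_node("b.example.com")  # second node triggers the correlation node
# -> creates a corr_<hash> node plus 'corr_shodan_shodan_host_key' edges from both domains
```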
````diff
@@ -382,6 +261,47 @@ class GraphManager:
             f"across {node_count} nodes"
         )
 
+    def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[List[Dict[str, Any]]] = None,
+                 description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
+        """
+        Add a node to the graph, update attributes, and process correlations.
+        Now compatible with unified data model - attributes are dictionaries from converted StandardAttribute objects.
+        """
+        is_new_node = not self.graph.has_node(node_id)
+        if is_new_node:
+            self.graph.add_node(node_id, type=node_type.value,
+                                added_timestamp=datetime.now(timezone.utc).isoformat(),
+                                attributes=attributes or [],  # Store as a list from the start
+                                description=description,
+                                metadata=metadata or {})
+        else:
+            # Safely merge new attributes into the existing list of attributes
+            if attributes:
+                existing_attributes = self.graph.nodes[node_id].get('attributes', [])
+
+                # Handle cases where old data might still be in dictionary format
+                if not isinstance(existing_attributes, list):
+                    existing_attributes = []
+
+                # Create a set of existing attribute names for efficient duplicate checking
+                existing_attr_names = {attr['name'] for attr in existing_attributes}
+
+                for new_attr in attributes:
+                    if new_attr['name'] not in existing_attr_names:
+                        existing_attributes.append(new_attr)
+                        existing_attr_names.add(new_attr['name'])
+
+                self.graph.nodes[node_id]['attributes'] = existing_attributes
+            if description:
+                self.graph.nodes[node_id]['description'] = description
+            if metadata:
+                existing_metadata = self.graph.nodes[node_id].get('metadata', {})
+                existing_metadata.update(metadata)
+                self.graph.nodes[node_id]['metadata'] = existing_metadata
+
+        self.last_modified = datetime.now(timezone.utc).isoformat()
+        return is_new_node
+
     def add_edge(self, source_id: str, target_id: str, relationship_type: str,
                  confidence_score: float = 0.5, source_provider: str = "unknown",
                  raw_data: Optional[Dict[str, Any]] = None) -> bool:
````
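Note that attribute lists are merged by `name`, so re-adding a node cannot duplicate an attribute; a small illustrative call with made-up values:

```python
# Hypothetical: the second add_node call only contributes attributes with unseen names.
from core.graph_manager import GraphManager, NodeType

gm = GraphManager()
gm.add_node("example.com", NodeType.DOMAIN,
            attributes=[{"name": "dns_a_record", "value": "93.184.216.34", "provider": "dns"}])
gm.add_node("example.com", NodeType.DOMAIN,
            attributes=[{"name": "dns_a_record", "value": "93.184.216.34", "provider": "dns"},
                        {"name": "cert_issuer_name", "value": "DigiCert Inc", "provider": "crtsh"}])
# -> the node keeps a single 'dns_a_record' entry and gains 'cert_issuer_name'
```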
````diff
@@ -414,6 +334,63 @@ class GraphManager:
         self.last_modified = datetime.now(timezone.utc).isoformat()
         return True
 
+    def extract_node_from_large_entity(self, large_entity_id: str, node_id_to_extract: str) -> bool:
+        """
+        Removes a node from a large entity's internal lists and updates its count.
+        This prepares the large entity for the node's promotion to a regular node.
+        """
+        if not self.graph.has_node(large_entity_id):
+            return False
+
+        node_data = self.graph.nodes[large_entity_id]
+        attributes = node_data.get('attributes', {})
+
+        # Remove from the list of member nodes
+        if 'nodes' in attributes and node_id_to_extract in attributes['nodes']:
+            attributes['nodes'].remove(node_id_to_extract)
+            # Update the count
+            attributes['count'] = len(attributes['nodes'])
+        else:
+            # This can happen if the node was already extracted, which is not an error.
+            print(f"Warning: Node {node_id_to_extract} not found in the 'nodes' list of {large_entity_id}.")
+            return True  # Proceed as if successful
+
+        self.last_modified = datetime.now(timezone.utc).isoformat()
+        return True
+
+    def remove_node(self, node_id: str) -> bool:
+        """Remove a node and its connected edges from the graph."""
+        if not self.graph.has_node(node_id):
+            return False
+
+        # Remove node from the graph (NetworkX handles removing connected edges)
+        self.graph.remove_node(node_id)
+
+        # Clean up the correlation index
+        keys_to_delete = []
+        for value, data in self.correlation_index.items():
+            if isinstance(data, dict) and 'nodes' in data:
+                # Updated correlation structure
+                if node_id in data['nodes']:
+                    data['nodes'].discard(node_id)
+                    # Remove sources for this node
+                    data['sources'] = [s for s in data['sources'] if s['node_id'] != node_id]
+                    if not data['nodes']:  # If no other nodes are associated, remove it
+                        keys_to_delete.append(value)
+            else:
+                # Legacy correlation structure (fallback)
+                if isinstance(data, set) and node_id in data:
+                    data.discard(node_id)
+                    if not data:
+                        keys_to_delete.append(value)
+
+        for key in keys_to_delete:
+            if key in self.correlation_index:
+                del self.correlation_index[key]
+
+        self.last_modified = datetime.now(timezone.utc).isoformat()
+        return True
+
     def get_node_count(self) -> int:
         """Get total number of nodes in the graph."""
         return self.graph.number_of_nodes()
````
````diff
@@ -438,19 +415,58 @@ class GraphManager:
                 if d.get('confidence_score', 0) >= min_confidence]
 
     def get_graph_data(self) -> Dict[str, Any]:
-        """Export graph data formatted for frontend visualization."""
+        """
+        Export graph data formatted for frontend visualization.
+        UPDATED: Fixed certificate validity styling logic for unified data model.
+        """
         nodes = []
         for node_id, attrs in self.graph.nodes(data=True):
             node_data = {'id': node_id, 'label': node_id, 'type': attrs.get('type', 'unknown'),
-                         'attributes': attrs.get('attributes', {}),
+                         'attributes': attrs.get('attributes', []),  # Ensure attributes is a list
                          'description': attrs.get('description', ''),
                          'metadata': attrs.get('metadata', {}),
                          'added_timestamp': attrs.get('added_timestamp')}
-            # Customize node appearance based on type and attributes
+
+            # UPDATED: Fixed certificate validity styling logic
             node_type = node_data['type']
-            attributes = node_data['attributes']
-            if node_type == 'domain' and attributes.get('certificates', {}).get('has_valid_cert') is False:
-                node_data['color'] = {'background': '#c7c7c7', 'border': '#999'}  # Gray for invalid cert
+            attributes_list = node_data['attributes']
+
+            if node_type == 'domain' and isinstance(attributes_list, list):
+                # Check for certificate-related attributes
+                has_certificates = False
+                has_valid_certificates = False
+                has_expired_certificates = False
+
+                for attr in attributes_list:
+                    attr_name = attr.get('name', '').lower()
+                    attr_provider = attr.get('provider', '').lower()
+                    attr_value = attr.get('value')
+
+                    # Look for certificate attributes from crt.sh provider
+                    if attr_provider == 'crtsh' or 'cert' in attr_name:
+                        has_certificates = True
+
+                        # Check certificate validity
+                        if attr_name == 'cert_is_currently_valid':
+                            if attr_value is True:
+                                has_valid_certificates = True
+                            elif attr_value is False:
+                                has_expired_certificates = True
+
+                        # Also check for certificate expiry indicators
+                        elif 'expires_soon' in attr_name and attr_value is True:
+                            has_expired_certificates = True
+                        elif 'expired' in attr_name and attr_value is True:
+                            has_expired_certificates = True
+
+                # Apply styling based on certificate status
+                if has_expired_certificates and not has_valid_certificates:
+                    # Red for expired/invalid certificates
+                    node_data['color'] = {'background': '#ff6b6b', 'border': '#cc5555'}
+                elif not has_certificates:
+                    # Grey for domains with no certificates
+                    node_data['color'] = {'background': '#c7c7c7', 'border': '#999999'}
+                # Default green styling is handled by the frontend for domains with valid certificates
 
             # Add incoming and outgoing edges to node data
             if self.graph.has_node(node_id):
@@ -481,7 +497,7 @@ class GraphManager:
                 'last_modified': self.last_modified,
                 'total_nodes': self.get_node_count(),
                 'total_edges': self.get_edge_count(),
-                'graph_format': 'dnsrecon_v1_nodeling'
+                'graph_format': 'dnsrecon_v1_unified_model'
             },
             'graph': graph_data,
             'statistics': self.get_statistics()
````
````diff
@@ -50,7 +50,7 @@ class ForensicLogger:
             session_id: Unique identifier for this reconnaissance session
         """
         self.session_id = session_id or self._generate_session_id()
-        #self.lock = threading.Lock()
+        self.lock = threading.Lock()
 
         # Initialize audit trail storage
         self.api_requests: List[APIRequest] = []
@@ -86,6 +86,8 @@ class ForensicLogger:
         # Remove the unpickleable 'logger' attribute
         if 'logger' in state:
             del state['logger']
+        if 'lock' in state:
+            del state['lock']
         return state
 
     def __setstate__(self, state):
@@ -101,6 +103,7 @@ class ForensicLogger:
         console_handler = logging.StreamHandler()
         console_handler.setFormatter(formatter)
         self.logger.addHandler(console_handler)
+        self.lock = threading.Lock()
 
     def _generate_session_id(self) -> str:
         """Generate unique session identifier."""
````
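The pickling changes are forced by CPython itself: thread locks cannot be serialized, so the lock must be dropped in `__getstate__` and recreated in `__setstate__`. A two-line demonstration:

```python
import pickle
import threading

# Raises TypeError: cannot pickle '_thread.lock' object
pickle.dumps(threading.Lock())
```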
core/provider_result.py (new file, 106 lines)
```python
# dnsrecon-reduced/core/provider_result.py

"""
Unified data model for DNSRecon passive reconnaissance.
Standardizes the data structure across all providers to ensure consistent processing.
"""

from typing import Any, Optional, List, Dict
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class StandardAttribute:
    """A unified data structure for a single piece of information about a node."""
    target_node: str
    name: str
    value: Any
    type: str
    provider: str
    confidence: float
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: Optional[Dict[str, Any]] = field(default_factory=dict)

    def __post_init__(self):
        """Validate the attribute after initialization."""
        if not isinstance(self.confidence, (int, float)) or not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"Confidence must be between 0.0 and 1.0, got {self.confidence}")


@dataclass
class Relationship:
    """A unified data structure for a directional link between two nodes."""
    source_node: str
    target_node: str
    relationship_type: str
    confidence: float
    provider: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    raw_data: Optional[Dict[str, Any]] = field(default_factory=dict)

    def __post_init__(self):
        """Validate the relationship after initialization."""
        if not isinstance(self.confidence, (int, float)) or not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"Confidence must be between 0.0 and 1.0, got {self.confidence}")


@dataclass
class ProviderResult:
    """A container for all data returned by a provider from a single query."""
    attributes: List[StandardAttribute] = field(default_factory=list)
    relationships: List[Relationship] = field(default_factory=list)

    def add_attribute(self, target_node: str, name: str, value: Any, attr_type: str,
                      provider: str, confidence: float = 0.8,
                      metadata: Optional[Dict[str, Any]] = None) -> None:
        """Helper method to add an attribute to the result."""
        self.attributes.append(StandardAttribute(
            target_node=target_node,
            name=name,
            value=value,
            type=attr_type,
            provider=provider,
            confidence=confidence,
            metadata=metadata or {}
        ))

    def add_relationship(self, source_node: str, target_node: str, relationship_type: str,
                         provider: str, confidence: float = 0.8,
                         raw_data: Optional[Dict[str, Any]] = None) -> None:
        """Helper method to add a relationship to the result."""
        self.relationships.append(Relationship(
            source_node=source_node,
            target_node=target_node,
            relationship_type=relationship_type,
            confidence=confidence,
            provider=provider,
            raw_data=raw_data or {}
        ))

    def get_discovered_nodes(self) -> set:
        """Get all unique node identifiers discovered in this result."""
        nodes = set()

        # Add nodes from relationships
        for rel in self.relationships:
            nodes.add(rel.source_node)
            nodes.add(rel.target_node)

        # Add nodes from attributes
        for attr in self.attributes:
            nodes.add(attr.target_node)

        return nodes

    def get_relationship_count(self) -> int:
        """Get the total number of relationships in this result."""
        return len(self.relationships)

    def get_attribute_count(self) -> int:
        """Get the total number of attributes in this result."""
        return len(self.attributes)

    def is_large_entity(self, threshold: int) -> bool:
        """Check if this result qualifies as a large entity based on relationship count."""
        return self.get_relationship_count() > threshold
```
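A short usage sketch of this model, with made-up node names and values:

```python
# Build a result the way a provider would, then inspect it.
from core.provider_result import ProviderResult

result = ProviderResult()
result.add_relationship(source_node="example.com", target_node="93.184.216.34",
                        relationship_type="dns_a_record", provider="dns", confidence=0.9)
result.add_attribute(target_node="example.com", name="cert_is_currently_valid",
                     value=True, attr_type="bool", provider="crtsh")

print(result.get_discovered_nodes())          # {'example.com', '93.184.216.34'}
print(result.get_relationship_count())        # 1
print(result.is_large_entity(threshold=100))  # False
```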
core/rate_limiter.py (new file, 28 lines)
```python
# dnsrecon-reduced/core/rate_limiter.py

import time

class GlobalRateLimiter:
    def __init__(self, redis_client):
        self.redis = redis_client

    def is_rate_limited(self, key, limit, period):
        """
        Check if a key is rate-limited.
        """
        now = time.time()
        key = f"rate_limit:{key}"

        # Remove old timestamps
        self.redis.zremrangebyscore(key, 0, now - period)

        # Check the count
        count = self.redis.zcard(key)
        if count >= limit:
            return True

        # Add new timestamp
        self.redis.zadd(key, {now: now})
        self.redis.expire(key, period)

        return False
```
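The limiter implements a sliding window over a Redis sorted set: entries older than `period` are pruned, the remaining cardinality is compared against `limit`, and the current timestamp is recorded only if the call is allowed. A usage sketch, assuming a local Redis on the default port:

```python
# Hypothetical direct use of the sliding-window limiter.
import redis
from core.rate_limiter import GlobalRateLimiter

limiter = GlobalRateLimiter(redis.StrictRedis(db=0))

# Allow at most 5 calls per 60 seconds for the crtsh provider,
# matching the rate_limits entry in config.py.
for i in range(7):
    blocked = limiter.is_rate_limited("crtsh", limit=5, period=60)
    print(i, "blocked" if blocked else "allowed")  # the last two calls come back blocked
```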
682
core/scanner.py
682
core/scanner.py
@@ -1,20 +1,22 @@
|
||||
# dnsrecon/core/scanner.py
|
||||
# dnsrecon-reduced/core/scanner.py
|
||||
|
||||
import threading
|
||||
import traceback
|
||||
import time
|
||||
import os
|
||||
import importlib
|
||||
import redis
|
||||
from typing import List, Set, Dict, Any, Tuple, Optional
|
||||
from concurrent.futures import ThreadPoolExecutor, as_completed, CancelledError, Future
|
||||
from collections import defaultdict, deque
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
from collections import defaultdict
|
||||
from queue import PriorityQueue
|
||||
from datetime import datetime, timezone
|
||||
|
||||
from core.graph_manager import GraphManager, NodeType
|
||||
from core.logger import get_forensic_logger, new_session
|
||||
from core.provider_result import ProviderResult
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain
|
||||
from providers.base_provider import BaseProvider
|
||||
|
||||
from core.rate_limiter import GlobalRateLimiter
|
||||
|
||||
class ScanStatus:
|
||||
"""Enumeration of scan statuses."""
|
||||
@@ -28,6 +30,7 @@ class ScanStatus:
|
||||
class Scanner:
|
||||
"""
|
||||
Main scanning orchestrator for DNSRecon passive reconnaissance.
|
||||
Now provider-agnostic, consuming standardized ProviderResult objects.
|
||||
"""
|
||||
|
||||
def __init__(self, session_config=None):
|
||||
@@ -50,7 +53,7 @@ class Scanner:
|
||||
self.stop_event = threading.Event()
|
||||
self.scan_thread = None
|
||||
self.session_id: Optional[str] = None # Will be set by session manager
|
||||
self.task_queue = deque([])
|
||||
self.task_queue = PriorityQueue()
|
||||
self.target_retries = defaultdict(int)
|
||||
self.scan_failed_due_to_retries = False
|
||||
|
||||
@@ -63,6 +66,7 @@ class Scanner:
|
||||
self.indicators_processed = 0
|
||||
self.indicators_completed = 0
|
||||
self.tasks_re_enqueued = 0
|
||||
self.total_tasks_ever_enqueued = 0
|
||||
self.current_indicator = ""
|
||||
|
||||
# Concurrent processing configuration
|
||||
@@ -77,6 +81,9 @@ class Scanner:
|
||||
print("Initializing forensic logger...")
|
||||
self.logger = get_forensic_logger()
|
||||
|
||||
# Initialize global rate limiter
|
||||
self.rate_limiter = GlobalRateLimiter(redis.StrictRedis(db=0))
|
||||
|
||||
print("Scanner initialization complete")
|
||||
|
||||
except Exception as e:
|
||||
@@ -129,7 +136,10 @@ class Scanner:
|
||||
'stop_event',
|
||||
'scan_thread',
|
||||
'executor',
|
||||
'processing_lock' # **NEW**: Exclude the processing lock
|
||||
'processing_lock',
|
||||
'task_queue',
|
||||
'rate_limiter',
|
||||
'logger'
|
||||
]
|
||||
|
||||
for attr in unpicklable_attrs:
|
||||
@@ -138,7 +148,6 @@ class Scanner:
|
||||
|
||||
# Handle providers separately to ensure they're picklable
|
||||
if 'providers' in state:
|
||||
# The providers should be picklable now, but let's ensure clean state
|
||||
for provider in state['providers']:
|
||||
if hasattr(provider, '_stop_event'):
|
||||
provider._stop_event = None
|
||||
@@ -153,9 +162,15 @@ class Scanner:
|
||||
self.stop_event = threading.Event()
|
||||
self.scan_thread = None
|
||||
self.executor = None
|
||||
self.processing_lock = threading.Lock() # **NEW**: Recreate processing lock
|
||||
self.processing_lock = threading.Lock()
|
||||
self.task_queue = PriorityQueue()
|
||||
self.rate_limiter = GlobalRateLimiter(redis.StrictRedis(db=0))
|
||||
self.logger = get_forensic_logger()
|
||||
|
||||
if not hasattr(self, 'providers') or not self.providers:
|
||||
print("Providers not found after loading session, re-initializing...")
|
||||
self._initialize_providers()
|
||||
|
||||
# **NEW**: Reset processing tracking
|
||||
if not hasattr(self, 'currently_processing'):
|
||||
self.currently_processing = set()
|
||||
|
||||
@@ -204,11 +219,12 @@ class Scanner:
            self._initialize_providers()
            print("Session configuration updated")

    def start_scan(self, target_domain: str, max_depth: int = 2, clear_graph: bool = True) -> bool:
    def start_scan(self, target: str, max_depth: int = 2, clear_graph: bool = True, force_rescan_target: Optional[str] = None) -> bool:
        """Start a new reconnaissance scan with proper cleanup of previous scans."""
        print(f"=== STARTING SCAN IN SCANNER {id(self)} ===")
        print(f"Session ID: {self.session_id}")
        print(f"Initial scanner status: {self.status}")
        self.total_tasks_ever_enqueued = 0

        # **IMPROVED**: More aggressive cleanup of previous scan
        if self.scan_thread and self.scan_thread.is_alive():
@@ -221,7 +237,7 @@ class Scanner:
            # Clear all processing state
            with self.processing_lock:
                self.currently_processing.clear()
            self.task_queue.clear()
            self.task_queue = PriorityQueue()

            # Shutdown executor aggressively
            if self.executor:
@@ -251,7 +267,7 @@ class Scanner:
        with self.processing_lock:
            self.currently_processing.clear()

        self.task_queue.clear()
        self.task_queue = PriorityQueue()
        self.target_retries.clear()
        self.scan_failed_due_to_retries = False

@@ -268,7 +284,14 @@ class Scanner:

        if clear_graph:
            self.graph.clear()
        self.current_target = target_domain.lower().strip()

        if force_rescan_target and self.graph.graph.has_node(force_rescan_target):
            print(f"Forcing rescan of {force_rescan_target}, clearing provider states.")
            node_data = self.graph.graph.nodes[force_rescan_target]
            if 'metadata' in node_data and 'provider_states' in node_data['metadata']:
                node_data['metadata']['provider_states'] = {}

        self.current_target = target.lower().strip()
        self.max_depth = max_depth
        self.current_depth = 0

@@ -304,95 +327,119 @@ class Scanner:
            self._update_session_state()
            return False

    def _execute_scan(self, target_domain: str, max_depth: int) -> None:
        """Execute the reconnaissance scan with proper termination handling."""
        print(f"_execute_scan started for {target_domain} with depth {max_depth}")
        self.executor = ThreadPoolExecutor(max_workers=self.max_workers)
        processed_targets = set()
    def _get_priority(self, provider_name):
        rate_limit = self.config.get_rate_limit(provider_name)
        if rate_limit > 90:
            return 1  # Highest priority
        elif rate_limit > 50:
            return 2
        else:
            return 3  # Lowest priority

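Since `PriorityQueue` orders entries by tuple comparison, the integer returned by `_get_priority` decides which provider's tasks are dequeued first: providers with generous rate limits run before heavily throttled ones. A runnable sketch of that ordering, with a hypothetical rate-limit map standing in for `config.get_rate_limit`:

```python
from queue import PriorityQueue

# Hypothetical rate limits (requests/minute); stands in for config.get_rate_limit.
RATE_LIMITS = {"dns": 100, "crtsh": 60, "shodan": 30}

def get_priority(provider_name: str) -> int:
    rate_limit = RATE_LIMITS.get(provider_name, 0)
    if rate_limit > 90:
        return 1   # fast providers are scheduled first
    elif rate_limit > 50:
        return 2
    return 3       # heavily rate-limited providers go last

queue = PriorityQueue()
for provider in ("shodan", "dns", "crtsh"):
    queue.put((get_priority(provider), (provider, "example.com", 0)))

while not queue.empty():
    priority, task = queue.get()
    print(priority, task)  # dns (1), then crtsh (2), then shodan (3)
```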
        self.task_queue.append((target_domain, 0, False))
    def _execute_scan(self, target: str, max_depth: int) -> None:
        """Execute the reconnaissance scan with proper termination handling."""
        print(f"_execute_scan started for {target} with depth {max_depth}")
        self.executor = ThreadPoolExecutor(max_workers=self.max_workers)
        processed_tasks = set()

        # Initial task population for the main target
        is_ip = _is_valid_ip(target)
        initial_providers = self._get_eligible_providers(target, is_ip, False)
        for provider in initial_providers:
            provider_name = provider.get_name()
            self.task_queue.put((self._get_priority(provider_name), (provider_name, target, 0)))
            self.total_tasks_ever_enqueued += 1

        try:
            self.status = ScanStatus.RUNNING
            self._update_session_state()

            enabled_providers = [provider.get_name() for provider in self.providers]
            self.logger.log_scan_start(target_domain, max_depth, enabled_providers)
            self.graph.add_node(target_domain, NodeType.DOMAIN)
            self._initialize_provider_states(target_domain)
            self.logger.log_scan_start(target, max_depth, enabled_providers)

            # **IMPROVED**: Better termination checking in main loop
            while self.task_queue and not self._is_stop_requested():
            # Determine initial node type
            node_type = NodeType.IP if is_ip else NodeType.DOMAIN
            self.graph.add_node(target, node_type)

            self._initialize_provider_states(target)

            # Better termination checking in main loop
            while not self.task_queue.empty() and not self._is_stop_requested():
                try:
                    target, depth, is_large_entity_member = self.task_queue.popleft()
                    priority, (provider_name, target_item, depth) = self.task_queue.get()
                except IndexError:
                    # Queue became empty during processing
                    break

                if target in processed_targets:
                task_tuple = (provider_name, target_item)
                if task_tuple in processed_tasks:
                    continue

                if depth > max_depth:
                    continue

                # **NEW**: Track this target as currently processing
                if self.rate_limiter.is_rate_limited(provider_name, self.config.get_rate_limit(provider_name), 60):
                    self.task_queue.put((priority + 1, (provider_name, target_item, depth)))  # Postpone
                    continue

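`core/rate_limiter.py` itself is not part of this comparison; the diff only shows the call site `is_rate_limited(provider_name, limit, 60)`. A Redis-backed sliding-window check with that signature could look roughly like the following; this is an illustrative sketch under stated assumptions, not the project's actual implementation:

```python
import time
import redis

class GlobalRateLimiter:
    """Sketch of a sliding-window limiter matching the call
    is_rate_limited(name, limit, window_seconds) seen in the scanner."""

    def __init__(self, redis_client: redis.StrictRedis):
        self.redis = redis_client

    def is_rate_limited(self, name: str, limit: int, window_seconds: int) -> bool:
        if limit <= 0:
            return False  # assumption: zero/negative limit means "unlimited"
        key = f"rate:{name}"
        now = time.time()
        pipe = self.redis.pipeline()
        pipe.zremrangebyscore(key, 0, now - window_seconds)  # drop requests outside the window
        pipe.zcard(key)                                      # count what remains
        _, count = pipe.execute()
        if count >= limit:
            return True
        self.redis.zadd(key, {str(now): now})                # record this request
        self.redis.expire(key, window_seconds)
        return False
```

Because the limiter state lives in Redis rather than per provider instance, all sessions share one budget per provider, which is what "global" buys over the removed per-instance `RateLimiter` (see the `base_provider.py` hunks further down).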
                with self.processing_lock:
                    if self._is_stop_requested():
                        print(f"Stop requested before processing {target}")
                        print(f"Stop requested before processing {target_item}")
                        break
                    self.currently_processing.add(target)
                    self.currently_processing.add(target_item)

                try:
                    self.current_depth = depth
                    self.current_indicator = target
                    self.current_indicator = target_item
                    self._update_session_state()

                    # **IMPROVED**: More frequent stop checking during processing
                    if self._is_stop_requested():
                        print(f"Stop requested during processing setup for {target}")
                        print(f"Stop requested during processing setup for {target_item}")
                        break

                    new_targets, large_entity_members, success = self._query_providers_for_target(target, depth, is_large_entity_member)
                    provider = next((p for p in self.providers if p.get_name() == provider_name), None)

                    # **NEW**: Check stop signal after provider queries
                    if self._is_stop_requested():
                        print(f"Stop requested after querying providers for {target}")
                        break
                    if provider:
                        new_targets, large_entity_members, success = self._query_single_provider_for_target(provider, target_item, depth)

                    if not success:
                        self.target_retries[target] += 1
                        if self.target_retries[target] <= self.config.max_retries_per_target:
                            print(f"Re-queueing target {target} (attempt {self.target_retries[target]})")
                            self.task_queue.append((target, depth, is_large_entity_member))
                            self.tasks_re_enqueued += 1
                        if self._is_stop_requested():
                            print(f"Stop requested after querying providers for {target_item}")
                            break

                        if not success:
                            self.target_retries[task_tuple] += 1
                            if self.target_retries[task_tuple] <= self.config.max_retries_per_target:
                                print(f"Re-queueing task {task_tuple} (attempt {self.target_retries[task_tuple]})")
                                self.task_queue.put((priority, (provider_name, target_item, depth)))
                                self.tasks_re_enqueued += 1
                                self.total_tasks_ever_enqueued += 1
                            else:
                                print(f"ERROR: Max retries exceeded for task {task_tuple}")
                                self.scan_failed_due_to_retries = True
                                self._log_target_processing_error(str(task_tuple), "Max retries exceeded")
                        else:
                            print(f"ERROR: Max retries exceeded for target {target}")
                            self.scan_failed_due_to_retries = True
                            self._log_target_processing_error(target, "Max retries exceeded")
                        else:
                            processed_targets.add(target)
                            self.indicators_completed += 1

                            # **NEW**: Only add new targets if not stopped
                            if not self._is_stop_requested():
                                for new_target in new_targets:
                                    if new_target not in processed_targets:
                                        self.task_queue.append((new_target, depth + 1, False))

                                for member in large_entity_members:
                                    if member not in processed_targets:
                                        self.task_queue.append((member, depth, True))
                            processed_tasks.add(task_tuple)
                            self.indicators_completed += 1

                            if not self._is_stop_requested():
                                all_new_targets = new_targets.union(large_entity_members)
                                for new_target in all_new_targets:
                                    is_ip_new = _is_valid_ip(new_target)
                                    eligible_providers_new = self._get_eligible_providers(new_target, is_ip_new, False)
                                    for p_new in eligible_providers_new:
                                        p_name_new = p_new.get_name()
                                        if (p_name_new, new_target) not in processed_tasks:
                                            new_depth = depth + 1 if new_target in new_targets else depth
                                            self.task_queue.put((self._get_priority(p_name_new), (p_name_new, new_target, new_depth)))
                                            self.total_tasks_ever_enqueued += 1
                finally:
                    # **NEW**: Always remove from processing set
                    with self.processing_lock:
                        self.currently_processing.discard(target)
                        self.currently_processing.discard(target_item)

            # **NEW**: Log termination reason
            if self._is_stop_requested():
                print("Scan terminated due to stop request")
                self.logger.logger.info("Scan terminated by user request")
            elif not self.task_queue:
            elif self.task_queue.empty():
                print("Scan completed - no more targets to process")
                self.logger.logger.info("Scan completed - all targets processed")

@@ -402,7 +449,6 @@ class Scanner:
            self.status = ScanStatus.FAILED
            self.logger.logger.error(f"Scan failed: {e}")
        finally:
            # **NEW**: Clear processing state on exit
            with self.processing_lock:
                self.currently_processing.clear()

@@ -422,68 +468,221 @@ class Scanner:
        print("Final scan statistics:")
        print(f"  - Total nodes: {stats['basic_metrics']['total_nodes']}")
        print(f"  - Total edges: {stats['basic_metrics']['total_edges']}")
        print(f"  - Targets processed: {len(processed_targets)}")
        print(f"  - Tasks processed: {len(processed_tasks)}")

    def _query_providers_for_target(self, target: str, depth: int, dns_only: bool = False) -> Tuple[Set[str], Set[str], bool]:
        """Query providers for a single target with enhanced stop checking."""
        # **NEW**: Early termination check
    def _query_single_provider_for_target(self, provider: BaseProvider, target: str, depth: int) -> Tuple[Set[str], Set[str], bool]:
        """
        Query a single provider and process the unified ProviderResult.
        Now provider-agnostic - handles any provider that returns ProviderResult.
        """
        if self._is_stop_requested():
            print(f"Stop requested before querying providers for {target}")
            print(f"Stop requested before querying {provider.get_name()} for {target}")
            return set(), set(), False

        is_ip = _is_valid_ip(target)
        target_type = NodeType.IP if is_ip else NodeType.DOMAIN
        print(f"Querying providers for {target_type.value}: {target} at depth {depth}")
        print(f"Querying {provider.get_name()} for {target_type.value}: {target} at depth {depth}")

        # Ensure target node exists in graph
        self.graph.add_node(target, target_type)
        self._initialize_provider_states(target)

        new_targets = set()
        large_entity_members = set()
        node_attributes = defaultdict(lambda: defaultdict(list))
        all_providers_successful = True
        provider_successful = True

        eligible_providers = self._get_eligible_providers(target, is_ip, dns_only)
        try:
            # Query provider - now returns unified ProviderResult
            provider_result = self._query_single_provider_unified(provider, target, is_ip, depth)

        if not eligible_providers:
            self._log_no_eligible_providers(target, is_ip)
            return new_targets, large_entity_members, True
            if provider_result is None:
                provider_successful = False
            elif not self._is_stop_requested():
                # Process the unified result
                discovered, is_large_entity = self._process_provider_result_unified(
                    target, provider, provider_result, depth
                )
                if is_large_entity:
                    large_entity_members.update(discovered)
                else:
                    new_targets.update(discovered)
                self.graph.process_correlations_for_node(target)
            else:
                print(f"Stop requested after processing results from {provider.get_name()}")
        except Exception as e:
            provider_successful = False
            self._log_provider_error(target, provider.get_name(), str(e))

        return new_targets, large_entity_members, provider_successful

    def _query_single_provider_unified(self, provider: BaseProvider, target: str, is_ip: bool, current_depth: int) -> Optional[ProviderResult]:
        """
        Query a single provider with stop signal checking, now returns ProviderResult.
        """
        provider_name = provider.get_name()
        start_time = datetime.now(timezone.utc)

        if self._is_stop_requested():
            print(f"Stop requested before querying {provider_name} for {target}")
            return None

        print(f"Querying {provider_name} for {target}")

        self.logger.logger.info(f"Attempting {provider_name} query for {target} at depth {current_depth}")

        try:
            # Query the provider - returns unified ProviderResult
            if is_ip:
                result = provider.query_ip(target)
            else:
                result = provider.query_domain(target)

            # **IMPROVED**: Check stop signal before each provider
            for i, provider in enumerate(eligible_providers):
            if self._is_stop_requested():
                print(f"Stop requested while querying provider {i+1}/{len(eligible_providers)} for {target}")
                all_providers_successful = False
                print(f"Stop requested after querying {provider_name} for {target}")
                return None

            # Update provider state with relationship count (more meaningful than raw result count)
            relationship_count = result.get_relationship_count() if result else 0
            self._update_provider_state(target, provider_name, 'success', relationship_count, None, start_time)

            print(f"✓ {provider_name} returned {relationship_count} relationships for {target}")
            return result

        except Exception as e:
            self._update_provider_state(target, provider_name, 'failed', 0, str(e), start_time)
            print(f"✗ {provider_name} failed for {target}: {e}")
            return None

    def _process_provider_result_unified(self, target: str, provider: BaseProvider,
                                         provider_result: ProviderResult, current_depth: int) -> Tuple[Set[str], bool]:
        """
        Process a unified ProviderResult object to update the graph.
        Returns (discovered_targets, is_large_entity).
        """
        provider_name = provider.get_name()
        discovered_targets = set()

        if self._is_stop_requested():
            print(f"Stop requested before processing results from {provider_name} for {target}")
            return discovered_targets, False

        # Check for large entity based on relationship count
        if provider_result.get_relationship_count() > self.config.large_entity_threshold:
            print(f"Large entity detected: {provider_name} returned {provider_result.get_relationship_count()} relationships for {target}")
            members = self._create_large_entity_from_provider_result(target, provider_name, provider_result, current_depth)
            return members, True

        # Process relationships
        for i, relationship in enumerate(provider_result.relationships):
            if i % 5 == 0 and self._is_stop_requested():  # Check periodically for stop
                print(f"Stop requested while processing relationships from {provider_name} for {target}")
                break

            try:
                provider_results = self._query_single_provider_forensic(provider, target, is_ip, depth)
                if provider_results is None:
                    all_providers_successful = False
                elif not self._is_stop_requested():
                    discovered, is_large_entity = self._process_provider_results_forensic(
                        target, provider, provider_results, node_attributes, depth
                    )
                    if is_large_entity:
                        large_entity_members.update(discovered)
                    else:
                        new_targets.update(discovered)
            # Add nodes for relationship endpoints
            source_node = relationship.source_node
            target_node = relationship.target_node

            # Determine node types
            source_type = NodeType.IP if _is_valid_ip(source_node) else NodeType.DOMAIN
            if target_node.startswith('AS') and target_node[2:].isdigit():
                target_type = NodeType.ASN
            elif _is_valid_ip(target_node):
                target_type = NodeType.IP
            else:
                target_type = NodeType.DOMAIN

            # Add nodes to graph
            self.graph.add_node(source_node, source_type)
            self.graph.add_node(target_node, target_type)

            # Add edge to graph
            if self.graph.add_edge(
                source_node, target_node,
                relationship.relationship_type,
                relationship.confidence,
                provider_name,
                relationship.raw_data
            ):
                print(f"Added relationship: {source_node} -> {target_node} ({relationship.relationship_type})")

            # Track discovered targets for further processing
            if _is_valid_domain(target_node) or _is_valid_ip(target_node):
                discovered_targets.add(target_node)

        # Process attributes, preserving them as a list of objects
        attributes_by_node = defaultdict(list)
        for attribute in provider_result.attributes:
            # Convert the StandardAttribute object to a dictionary that the frontend can use
            attr_dict = {
                "name": attribute.name,
                "value": attribute.value,
                "type": attribute.type,
                "provider": attribute.provider,
                "confidence": attribute.confidence,
                "metadata": attribute.metadata
            }
            attributes_by_node[attribute.target_node].append(attr_dict)

        # Add attributes to nodes
        for node_id, node_attributes_list in attributes_by_node.items():
            if self.graph.graph.has_node(node_id):
                # Determine node type
                if _is_valid_ip(node_id):
                    node_type = NodeType.IP
                elif node_id.startswith('AS') and node_id[2:].isdigit():
                    node_type = NodeType.ASN
                else:
                    print(f"Stop requested after processing results from {provider.get_name()}")
                    break
            except Exception as e:
                all_providers_successful = False
                self._log_provider_error(target, provider.get_name(), str(e))
                    node_type = NodeType.DOMAIN

        # **NEW**: Only update node attributes if not stopped
        if not self._is_stop_requested():
            for node_id, attributes in node_attributes.items():
                if self.graph.graph.has_node(node_id):
                    node_is_ip = _is_valid_ip(node_id)
                    node_type_to_add = NodeType.IP if node_is_ip else NodeType.DOMAIN
                    self.graph.add_node(node_id, node_type_to_add, attributes=attributes)
                # Add node with the list of attributes
                self.graph.add_node(node_id, node_type, attributes=node_attributes_list)

        return new_targets, large_entity_members, all_providers_successful
        return discovered_targets, False

    def _create_large_entity_from_provider_result(self, source: str, provider_name: str,
                                                  provider_result: ProviderResult, current_depth: int) -> Set[str]:
        """
        Create a large entity node from a ProviderResult and return the members for DNS processing.
        """
        entity_id = f"large_entity_{provider_name}_{hash(source) & 0x7FFFFFFF}"

        # Extract target nodes from relationships
        targets = [rel.target_node for rel in provider_result.relationships]
        node_type = 'unknown'

        if targets:
            if _is_valid_domain(targets[0]):
                node_type = 'domain'
            elif _is_valid_ip(targets[0]):
                node_type = 'ip'

        # Create nodes in graph (they exist but are grouped)
        for target in targets:
            target_node_type = NodeType.DOMAIN if node_type == 'domain' else NodeType.IP
            self.graph.add_node(target, target_node_type)

        attributes = {
            'count': len(targets),
            'nodes': targets,
            'node_type': node_type,
            'source_provider': provider_name,
            'discovery_depth': current_depth,
            'threshold_exceeded': self.config.large_entity_threshold,
        }
        description = f'Large entity created due to {len(targets)} relationships from {provider_name}'

        self.graph.add_node(entity_id, NodeType.LARGE_ENTITY, attributes=attributes, description=description)

        # Create edge from source to large entity
        if provider_result.relationships:
            rel_type = provider_result.relationships[0].relationship_type
            self.graph.add_edge(source, entity_id, rel_type, 0.9, provider_name,
                                {'large_entity_info': f'Contains {len(targets)} {node_type}s'})

        self.logger.logger.warning(f"Large entity created: {entity_id} contains {len(targets)} targets from {provider_name}")
        print(f"Created large entity {entity_id} for {len(targets)} {node_type}s from {provider_name}")

        return set(targets)

    def stop_scan(self) -> bool:
        """Request immediate scan termination with proper cleanup."""
@@ -502,8 +701,10 @@ class Scanner:
            print(f"Cleared {len(currently_processing_copy)} currently processing targets: {currently_processing_copy}")

            # **IMPROVED**: Clear task queue and log what was discarded
            discarded_tasks = list(self.task_queue)
            self.task_queue.clear()
            discarded_tasks = []
            while not self.task_queue.empty():
                discarded_tasks.append(self.task_queue.get())
            self.task_queue = PriorityQueue()
            print(f"Discarded {len(discarded_tasks)} pending tasks")

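The drain loop above is a direct consequence of the queue swap: unlike the old deque, `queue.PriorityQueue` is not iterable and has no `clear()`, so the only way to inspect and discard pending work is to pull entries off with `get()`. A minimal demonstration:

```python
from queue import PriorityQueue

q = PriorityQueue()
for item in [(2, "b"), (1, "a")]:
    q.put(item)

# PriorityQueue has no clear() and cannot be passed to list(),
# so discarding pending work means draining it:
discarded = []
while not q.empty():
    discarded.append(q.get())
print(discarded)  # [(1, 'a'), (2, 'b')]
```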
            # **IMPROVED**: Aggressively shut down executor
@@ -528,6 +729,73 @@ class Scanner:
            traceback.print_exc()
            return False

    def extract_node_from_large_entity(self, large_entity_id: str, node_id_to_extract: str) -> bool:
        """
        Extracts a node from a large entity, re-creates its original edge, and
        re-queues it for full scanning.
        """
        if not self.graph.graph.has_node(large_entity_id):
            print(f"ERROR: Large entity {large_entity_id} not found.")
            return False

        # 1. Get the original source node that discovered the large entity
        predecessors = list(self.graph.graph.predecessors(large_entity_id))
        if not predecessors:
            print(f"ERROR: No source node found for large entity {large_entity_id}.")
            return False
        source_node_id = predecessors[0]

        # Get the original edge data to replicate it for the extracted node
        original_edge_data = self.graph.graph.get_edge_data(source_node_id, large_entity_id)
        if not original_edge_data:
            print(f"ERROR: Could not find original edge data from {source_node_id} to {large_entity_id}.")
            return False

        # 2. Modify the graph data structure first
        success = self.graph.extract_node_from_large_entity(large_entity_id, node_id_to_extract)
        if not success:
            print(f"ERROR: Node {node_id_to_extract} could not be removed from {large_entity_id}'s attributes.")
            return False

        # 3. Create the direct edge from the original source to the newly extracted node
        print(f"Re-creating direct edge from {source_node_id} to extracted node {node_id_to_extract}")
        self.graph.add_edge(
            source_id=source_node_id,
            target_id=node_id_to_extract,
            relationship_type=original_edge_data.get('relationship_type', 'extracted_from_large_entity'),
            confidence_score=original_edge_data.get('confidence_score', 0.85),  # Slightly lower confidence
            source_provider=original_edge_data.get('source_provider', 'unknown'),
            raw_data={'context': f'Extracted from large entity {large_entity_id}'}
        )

        # 4. Re-queue the extracted node for full processing by all eligible providers
        print(f"Re-queueing extracted node {node_id_to_extract} for full reconnaissance...")
        is_ip = _is_valid_ip(node_id_to_extract)
        current_depth = self.graph.graph.nodes[large_entity_id].get('attributes', {}).get('discovery_depth', 0)

        eligible_providers = self._get_eligible_providers(node_id_to_extract, is_ip, False)
        for provider in eligible_providers:
            provider_name = provider.get_name()
            self.task_queue.put((self._get_priority(provider_name), (provider_name, node_id_to_extract, current_depth)))
            self.total_tasks_ever_enqueued += 1

        # 5. If the scanner is not running, we need to kickstart it to process this one item.
        if self.status != ScanStatus.RUNNING:
            print("Scanner is idle. Starting a mini-scan to process the extracted node.")
            self.status = ScanStatus.RUNNING
            self._update_session_state()

            if not self.scan_thread or not self.scan_thread.is_alive():
                self.scan_thread = threading.Thread(
                    target=self._execute_scan,
                    args=(self.current_target, self.max_depth),
                    daemon=True
                )
                self.scan_thread.start()

        print(f"Successfully extracted and re-queued {node_id_to_extract} from {large_entity_id}.")
        return True

    def _update_session_state(self) -> None:
        """
        Update the scanner state in Redis for GUI updates.
@@ -559,11 +827,12 @@ class Scanner:
                'indicators_completed': self.indicators_completed,
                'tasks_re_enqueued': self.tasks_re_enqueued,
                'progress_percentage': self._calculate_progress(),
                'total_tasks_ever_enqueued': self.total_tasks_ever_enqueued,
                'enabled_providers': [provider.get_name() for provider in self.providers],
                'graph_statistics': self.graph.get_statistics(),
                'task_queue_size': len(self.task_queue),
                'currently_processing_count': currently_processing_count,  # **NEW**
                'currently_processing': currently_processing_list[:5]  # **NEW**: Show first 5 for debugging
                'task_queue_size': self.task_queue.qsize(),
                'currently_processing_count': currently_processing_count,
                'currently_processing': currently_processing_list[:5]
            }
        except Exception as e:
            print(f"ERROR: Exception in get_scan_status: {e}")
@@ -625,39 +894,6 @@ class Scanner:
        provider_state = provider_states.get(provider_name)
        return provider_state is not None and provider_state.get('status') == 'success'

    def _query_single_provider_forensic(self, provider, target: str, is_ip: bool, current_depth: int) -> Optional[List]:
        """Query a single provider with stop signal checking."""
        provider_name = provider.get_name()
        start_time = datetime.now(timezone.utc)

        if self._is_stop_requested():
            print(f"Stop requested before querying {provider_name} for {target}")
            return None

        print(f"Querying {provider_name} for {target}")

        self.logger.logger.info(f"Attempting {provider_name} query for {target} at depth {current_depth}")

        try:
            if is_ip:
                results = provider.query_ip(target)
            else:
                results = provider.query_domain(target)

            if self._is_stop_requested():
                print(f"Stop requested after querying {provider_name} for {target}")
                return None

            self._update_provider_state(target, provider_name, 'success', len(results), None, start_time)

            print(f"✓ {provider_name} returned {len(results)} results for {target}")
            return results

        except Exception as e:
            self._update_provider_state(target, provider_name, 'failed', 0, str(e), start_time)
            print(f"✗ {provider_name} failed for {target}: {e}")
            return None

    def _update_provider_state(self, target: str, provider_name: str, status: str,
                               results_count: int, error: Optional[str], start_time: datetime) -> None:
        """Update provider state in node metadata for forensic tracking."""
@@ -680,159 +916,6 @@ class Scanner:

        self.logger.logger.info(f"Provider state updated: {target} -> {provider_name} -> {status} ({results_count} results)")

    def _process_provider_results_forensic(self, target: str, provider, results: List,
                                           node_attributes: Dict, current_depth: int) -> Tuple[Set[str], bool]:
        """Process provider results, returns (discovered_targets, is_large_entity)."""
        provider_name = provider.get_name()
        discovered_targets = set()

        if self._is_stop_requested():
            print(f"Stop requested before processing results from {provider_name} for {target}")
            return discovered_targets, False

        if len(results) > self.config.large_entity_threshold:
            print(f"Large entity detected: {provider_name} returned {len(results)} results for {target}")
            members = self._create_large_entity(target, provider_name, results, current_depth)
            return members, True

        for i, (source, rel_target, rel_type, confidence, raw_data) in enumerate(results):
            if i % 5 == 0 and self._is_stop_requested():  # Check more frequently
                print(f"Stop requested while processing results from {provider_name} for {target}")
                break

            self.logger.log_relationship_discovery(
                source_node=source,
                target_node=rel_target,
                relationship_type=rel_type,
                confidence_score=confidence,
                provider=provider_name,
                raw_data=raw_data,
                discovery_method=f"{provider_name}_query_depth_{current_depth}"
            )

            self._collect_node_attributes(source, provider_name, rel_type, rel_target, raw_data, node_attributes[source])

            if isinstance(rel_target, list):
                # If the target is a list, iterate and process each item
                for single_target in rel_target:
                    if _is_valid_ip(single_target):
                        self.graph.add_node(single_target, NodeType.IP)
                        if self.graph.add_edge(source, single_target, rel_type, confidence, provider_name, raw_data):
                            print(f"Added IP relationship: {source} -> {single_target} ({rel_type})")
                        discovered_targets.add(single_target)
                    elif _is_valid_domain(single_target):
                        self.graph.add_node(single_target, NodeType.DOMAIN)
                        if self.graph.add_edge(source, single_target, rel_type, confidence, provider_name, raw_data):
                            print(f"Added domain relationship: {source} -> {single_target} ({rel_type})")
                        discovered_targets.add(single_target)
                        self._collect_node_attributes(single_target, provider_name, rel_type, source, raw_data, node_attributes[single_target])

            elif _is_valid_ip(rel_target):
                self.graph.add_node(rel_target, NodeType.IP)
                if self.graph.add_edge(source, rel_target, rel_type, confidence, provider_name, raw_data):
                    print(f"Added IP relationship: {source} -> {rel_target} ({rel_type})")
                discovered_targets.add(rel_target)

            elif rel_target.startswith('AS') and rel_target[2:].isdigit():
                self.graph.add_node(rel_target, NodeType.ASN)
                if self.graph.add_edge(source, rel_target, rel_type, confidence, provider_name, raw_data):
                    print(f"Added ASN relationship: {source} -> {rel_target} ({rel_type})")

            elif _is_valid_domain(rel_target):
                self.graph.add_node(rel_target, NodeType.DOMAIN)
                if self.graph.add_edge(source, rel_target, rel_type, confidence, provider_name, raw_data):
                    print(f"Added domain relationship: {source} -> {rel_target} ({rel_type})")
                discovered_targets.add(rel_target)
                self._collect_node_attributes(rel_target, provider_name, rel_type, source, raw_data, node_attributes[rel_target])

            else:
                self._collect_node_attributes(source, provider_name, rel_type, rel_target, raw_data, node_attributes[source])

        return discovered_targets, False

    def _create_large_entity(self, source: str, provider_name: str, results: List, current_depth: int) -> Set[str]:
        """Create a large entity node and returns the members for DNS processing."""
        entity_id = f"large_entity_{provider_name}_{hash(source) & 0x7FFFFFFF}"

        targets = [rel[1] for rel in results if len(rel) > 1]
        node_type = 'unknown'

        if targets:
            if _is_valid_domain(targets[0]):
                node_type = 'domain'
            elif _is_valid_ip(targets[0]):
                node_type = 'ip'

        for target in targets:
            self.graph.add_node(target, NodeType.DOMAIN if node_type == 'domain' else NodeType.IP)

        attributes = {
            'count': len(targets),
            'nodes': targets,
            'node_type': node_type,
            'source_provider': provider_name,
            'discovery_depth': current_depth,
            'threshold_exceeded': self.config.large_entity_threshold,
        }
        description = f'Large entity created due to {len(targets)} results from {provider_name}'

        self.graph.add_node(entity_id, NodeType.LARGE_ENTITY, attributes=attributes, description=description)

        if results:
            rel_type = results[0][2]
            self.graph.add_edge(source, entity_id, rel_type, 0.9, provider_name,
                                {'large_entity_info': f'Contains {len(targets)} {node_type}s'})

        self.logger.logger.warning(f"Large entity created: {entity_id} contains {len(targets)} targets from {provider_name}")
        print(f"Created large entity {entity_id} for {len(targets)} {node_type}s from {provider_name}")

        return set(targets)

    def _collect_node_attributes(self, node_id: str, provider_name: str, rel_type: str,
                                 target: str, raw_data: Dict[str, Any], attributes: Dict[str, Any]) -> None:
        """Collect and organize attributes for a node."""
        self.logger.logger.debug(f"Collecting attributes for {node_id} from {provider_name}: {rel_type}")

        if provider_name == 'dns':
            record_type = raw_data.get('query_type', 'UNKNOWN')
            value = raw_data.get('value', target)
            dns_entry = f"{record_type}: {value}"
            if dns_entry not in attributes.get('dns_records', []):
                attributes.setdefault('dns_records', []).append(dns_entry)

        elif provider_name == 'crtsh':
            if rel_type == "san_certificate":
                domain_certs = raw_data.get('domain_certificates', {})
                if node_id in domain_certs:
                    cert_summary = domain_certs[node_id]
                    attributes['certificates'] = cert_summary
                    if target not in attributes.get('related_domains_san', []):
                        attributes.setdefault('related_domains_san', []).append(target)

        elif provider_name == 'shodan':
            shodan_attributes = attributes.setdefault('shodan', {})
            for key, value in raw_data.items():
                if key not in shodan_attributes or not shodan_attributes.get(key):
                    shodan_attributes[key] = value

            if rel_type == "asn_membership":
                attributes['asn'] = {
                    'id': target,
                    'description': raw_data.get('org', ''),
                    'isp': raw_data.get('isp', ''),
                    'country': raw_data.get('country', '')
                }

        record_type_name = rel_type
        if record_type_name not in attributes:
            attributes[record_type_name] = []

        if isinstance(target, list):
            attributes[record_type_name].extend(target)
        else:
            if target not in attributes[record_type_name]:
                attributes[record_type_name].append(target)

    def _log_target_processing_error(self, target: str, error: str) -> None:
        """Log target processing errors for forensic trail."""
        self.logger.logger.error(f"Target processing failed for {target}: {error}")
@@ -848,10 +931,9 @@ class Scanner:

    def _calculate_progress(self) -> float:
        """Calculate scan progress percentage based on task completion."""
        total_tasks = self.indicators_completed + len(self.task_queue)
        if total_tasks == 0:
        if self.total_tasks_ever_enqueued == 0:
            return 0.0
        return min(100.0, (self.indicators_completed / total_tasks) * 100)
        return min(100.0, (self.indicators_completed / self.total_tasks_ever_enqueued) * 100)

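The progress change is worth spelling out: the old denominator, completed tasks plus current queue size, shrinks as the queue drains and grows whenever a task is re-enqueued, so the percentage could jump backwards mid-scan. Dividing by every task ever enqueued keeps the bar monotonic. A small worked example of the new formula:

```python
def calculate_progress(indicators_completed: int, total_tasks_ever_enqueued: int) -> float:
    # Mirrors the new _calculate_progress: monotonic, capped at 100.
    if total_tasks_ever_enqueued == 0:
        return 0.0
    return min(100.0, (indicators_completed / total_tasks_ever_enqueued) * 100)

print(calculate_progress(45, 60))  # 75.0
print(calculate_progress(0, 0))    # 0.0 (no division by zero before any task is queued)
```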
    def get_graph_data(self) -> Dict[str, Any]:
        """Get current graph data for visualization."""

@@ -5,15 +5,11 @@ import time
import uuid
import redis
import pickle
from typing import Dict, Optional, Any, List
from typing import Dict, Optional, Any

from core.scanner import Scanner
from config import config

# WARNING: Using pickle can be a security risk if the data source is not trusted.
# In this case, we are only serializing/deserializing our own trusted Scanner objects,
# which is generally safe. Do not unpickle data from untrusted sources.

class SessionManager:
    """
    Manages multiple scanner instances for concurrent user sessions using Redis.

@@ -3,14 +3,15 @@ Data provider modules for DNSRecon.
Contains implementations for various reconnaissance data sources.
"""

from .base_provider import BaseProvider, RateLimiter
from .base_provider import BaseProvider
from .crtsh_provider import CrtShProvider
from .dns_provider import DNSProvider
from .shodan_provider import ShodanProvider
from core.rate_limiter import GlobalRateLimiter

__all__ = [
    'BaseProvider',
    'RateLimiter',
    'GlobalRateLimiter',
    'CrtShProvider',
    'DNSProvider',
    'ShodanProvider'

@@ -4,49 +4,17 @@ import time
import requests
import threading
from abc import ABC, abstractmethod
from typing import List, Dict, Any, Optional, Tuple
from typing import Dict, Any, Optional

from core.logger import get_forensic_logger


class RateLimiter:
    """Simple rate limiter for API calls."""

    def __init__(self, requests_per_minute: int):
        """
        Initialize rate limiter.

        Args:
            requests_per_minute: Maximum requests allowed per minute
        """
        self.requests_per_minute = requests_per_minute
        self.min_interval = 60.0 / requests_per_minute
        self.last_request_time = 0

    def __getstate__(self):
        """RateLimiter is fully picklable, return full state."""
        return self.__dict__.copy()

    def __setstate__(self, state):
        """Restore RateLimiter state."""
        self.__dict__.update(state)

    def wait_if_needed(self) -> None:
        """Wait if necessary to respect rate limits."""
        current_time = time.time()
        time_since_last = current_time - self.last_request_time

        if time_since_last < self.min_interval:
            sleep_time = self.min_interval - time_since_last
            time.sleep(sleep_time)

        self.last_request_time = time.time()
from core.rate_limiter import GlobalRateLimiter
from core.provider_result import ProviderResult


class BaseProvider(ABC):
    """
    Abstract base class for all DNSRecon data providers.
    Now supports session-specific configuration.
    Now supports session-specific configuration and returns standardized ProviderResult objects.
    """

    def __init__(self, name: str, rate_limit: int = 60, timeout: int = 30, session_config=None):
@@ -68,11 +36,9 @@ class BaseProvider(ABC):
            # Fallback to global config for backwards compatibility
            from config import config as global_config
            self.config = global_config
            actual_rate_limit = rate_limit
            actual_timeout = timeout

        self.name = name
        self.rate_limiter = RateLimiter(actual_rate_limit)
        self.timeout = actual_timeout
        self._local = threading.local()
        self.logger = get_forensic_logger()
@@ -136,7 +102,7 @@ class BaseProvider(ABC):
        pass

    @abstractmethod
    def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
    def query_domain(self, domain: str) -> ProviderResult:
        """
        Query the provider for information about a domain.

@@ -144,12 +110,12 @@ class BaseProvider(ABC):
            domain: Domain to investigate

        Returns:
            List of tuples: (source_node, target_node, relationship_type, confidence, raw_data)
            ProviderResult containing standardized attributes and relationships
        """
        pass

    @abstractmethod
    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
    def query_ip(self, ip: str) -> ProviderResult:
        """
        Query the provider for information about an IP address.

@@ -157,7 +123,7 @@ class BaseProvider(ABC):
            ip: IP address to investigate

        Returns:
            List of tuples: (source_node, target_node, relationship_type, confidence, raw_data)
            ProviderResult containing standardized attributes and relationships
        """
        pass

@@ -172,8 +138,6 @@ class BaseProvider(ABC):
            print(f"Request cancelled before start: {url}")
            return None

        self.rate_limiter.wait_if_needed()

        start_time = time.time()
        response = None
        error = None
@@ -297,5 +261,5 @@ class BaseProvider(ABC):
            'failed_requests': self.failed_requests,
            'success_rate': (self.successful_requests / self.total_requests * 100) if self.total_requests > 0 else 0,
            'relationships_found': self.total_relationships_found,
            'rate_limit': self.rate_limiter.requests_per_minute
            'rate_limit': self.config.get_rate_limit(self.name)
        }
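`core/provider_result.py` is outside this comparison, but its API is visible from the call sites throughout the diff (`add_relationship`, `add_attribute`, `get_relationship_count`, and the `.relationships` / `.attributes` lists). It presumably looks something like this sketch; field names are inferred, not confirmed:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Relationship:
    source_node: str
    target_node: str
    relationship_type: str
    confidence: float
    provider: str
    raw_data: Dict[str, Any]

@dataclass
class StandardAttribute:
    target_node: str
    name: str
    value: Any
    type: str
    provider: str
    confidence: float
    metadata: Dict[str, Any]

@dataclass
class ProviderResult:
    relationships: List[Relationship] = field(default_factory=list)
    attributes: List[StandardAttribute] = field(default_factory=list)

    def add_relationship(self, source_node, target_node, relationship_type,
                         provider, confidence, raw_data=None):
        self.relationships.append(Relationship(
            source_node, target_node, relationship_type,
            confidence, provider, raw_data or {}))

    def add_attribute(self, target_node, name, value, attr_type,
                      provider, confidence, metadata=None):
        self.attributes.append(StandardAttribute(
            target_node, name, value, attr_type,
            provider, confidence, metadata or {}))

    def get_relationship_count(self) -> int:
        return len(self.relationships)
```

Whatever its exact shape, this object is the contract that lets the scanner treat every provider identically: providers no longer return lists of tuples with provider-specific semantics.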
@@ -2,59 +2,48 @@

import json
import re
import os
from pathlib import Path
from typing import List, Dict, Any, Tuple, Set
from typing import List, Dict, Any, Set
from urllib.parse import quote
from datetime import datetime, timezone

# New dependency required for this provider
try:
    import psycopg2
    import psycopg2.extras
    PSYCOPG2_AVAILABLE = True
except ImportError:
    PSYCOPG2_AVAILABLE = False
import requests

from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_domain

# We use requests only to raise the same exception type for compatibility with core retry logic
import requests


class CrtShProvider(BaseProvider):
    """
    Provider for querying crt.sh certificate transparency database via its public PostgreSQL endpoint.
    This version is designed to be a drop-in, high-performance replacement for the API-based provider.
    It preserves the same caching and data processing logic.
    Provider for querying crt.sh certificate transparency database.
    Now returns standardized ProviderResult objects with caching support.
    """

    def __init__(self, name=None, session_config=None):
        """Initialize CrtShDB provider with session-specific configuration."""
        """Initialize CrtSh provider with session-specific configuration."""
        super().__init__(
            name="crtsh",
            rate_limit=0,  # No rate limit for direct DB access
            timeout=60,  # Increased timeout for potentially long DB queries
            rate_limit=60,
            timeout=15,
            session_config=session_config
        )
        # Database connection details
        self.db_host = "crt.sh"
        self.db_port = 5432
        self.db_name = "certwatch"
        self.db_user = "guest"
        self.base_url = "https://crt.sh/"
        self._stop_event = None

        # Initialize cache directory (same as original provider)
        # Initialize cache directory
        self.cache_dir = Path('cache') / 'crtsh'
        self.cache_dir.mkdir(parents=True, exist_ok=True)

        # Compile regex for date filtering for efficiency
        self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')

    def get_name(self) -> str:
        """Return the provider name."""
        return "crtsh"

    def get_display_name(self) -> str:
        """Return the provider display name for the UI."""
        return "crt.sh (DB)"
        return "crt.sh"

    def requires_api_key(self) -> bool:
        """Return True if the provider requires an API key."""
@@ -65,162 +54,19 @@ class CrtShProvider(BaseProvider):
        return {'domains': True, 'ips': False}

    def is_available(self) -> bool:
        """
        Check if the provider can be used. Requires the psycopg2 library.
        """
        if not PSYCOPG2_AVAILABLE:
            self.logger.logger.warning("psycopg2 library not found. CrtShDBProvider is unavailable. "
                                       "Please run 'pip install psycopg2-binary'.")
            return False
        """Check if the provider is configured to be used."""
        return True

    def _query_crtsh(self, domain: str) -> List[Dict[str, Any]]:
        """
        Query the crt.sh PostgreSQL database for raw certificate data.
        Raises exceptions for DB/network errors to allow core logic to retry.
        """
        conn = None
        certificates = []

        # SQL Query to find all certificate IDs related to the domain (including subdomains),
        # then retrieve comprehensive details for each certificate, mimicking the JSON API structure.
        sql_query = """
            WITH certificates_of_interest AS (
                SELECT DISTINCT ci.certificate_id
                FROM certificate_identity ci
                WHERE ci.name_value ILIKE %(domain_wildcard)s OR ci.name_value = %(domain)s
            )
            SELECT
                c.id,
                c.serial_number,
                c.not_before,
                c.not_after,
                (SELECT min(entry_timestamp) FROM ct_log_entry cle WHERE cle.certificate_id = c.id) as entry_timestamp,
                ca.id as issuer_ca_id,
                ca.name as issuer_name,
                (SELECT array_to_string(array_agg(DISTINCT ci.name_value), E'\n') FROM certificate_identity ci WHERE ci.certificate_id = c.id) as name_value,
                (SELECT name_value FROM certificate_identity ci WHERE ci.certificate_id = c.id AND ci.name_type = 'commonName' LIMIT 1) as common_name
            FROM
                certificate c
            JOIN ca ON c.issuer_ca_id = ca.id
            WHERE c.id IN (SELECT certificate_id FROM certificates_of_interest);
        """

        try:
            conn = psycopg2.connect(
                dbname=self.db_name,
                user=self.db_user,
                host=self.db_host,
                port=self.db_port,
                connect_timeout=self.timeout
            )

            with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cursor:
                cursor.execute(sql_query, {'domain': domain, 'domain_wildcard': f'%.{domain}'})
                results = cursor.fetchall()
                certificates = [dict(row) for row in results]

            self.logger.logger.info(f"crt.sh DB query for '{domain}' returned {len(certificates)} certificates.")

        except psycopg2.Error as e:
            self.logger.logger.error(f"PostgreSQL query failed for {domain}: {e}")
            # Raise a RequestException to be compatible with the existing retry logic in the core application
            raise requests.exceptions.RequestException(f"PostgreSQL query failed: {e}") from e
        finally:
            if conn:
                conn.close()

        return certificates

    def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
        """
        Query crt.sh for certificates containing the domain with caching support.
        Properly raises exceptions for network errors to allow core logic retries.
        """
        if not _is_valid_domain(domain):
            return []

        if self._stop_event and self._stop_event.is_set():
            return []

        cache_file = self._get_cache_file_path(domain)
        cache_status = self._get_cache_status(cache_file)

        certificates = []

        try:
            if cache_status == "fresh":
                certificates = self._load_cached_certificates(cache_file)
                self.logger.logger.info(f"Using cached data for {domain} ({len(certificates)} certificates)")

            elif cache_status == "not_found":
                # Fresh query from DB, create new cache
                certificates = self._query_crtsh(domain)
                if certificates:
                    self._create_cache_file(cache_file, domain, self._serialize_certs_for_cache(certificates))
                else:
                    self.logger.logger.info(f"No certificates found for {domain}, not caching")

            elif cache_status == "stale":
                try:
                    new_certificates = self._query_crtsh(domain)
                    if new_certificates:
                        certificates = self._append_to_cache(cache_file, self._serialize_certs_for_cache(new_certificates))
                    else:
                        certificates = self._load_cached_certificates(cache_file)
                except requests.exceptions.RequestException:
                    certificates = self._load_cached_certificates(cache_file)
                    if certificates:
                        self.logger.logger.warning(f"DB query failed for {domain}, using stale cache data.")
                    else:
                        raise

        except requests.exceptions.RequestException as e:
            # Re-raise so core logic can retry
            self.logger.logger.error(f"DB query failed for {domain}: {e}")
            raise e
        except json.JSONDecodeError as e:
            # JSON parsing errors from cache should also be handled
            self.logger.logger.error(f"Failed to parse JSON from cache for {domain}: {e}")
            raise e

        if self._stop_event and self._stop_event.is_set():
            return []

        if not certificates:
            return []

        return self._process_certificates_to_relationships(domain, certificates)

    def _serialize_certs_for_cache(self, certificates: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """
        Serialize certificate data for JSON caching, converting datetime objects to ISO strings.
        """
        serialized_certs = []
        for cert in certificates:
            serialized_cert = cert.copy()
            for key in ['not_before', 'not_after', 'entry_timestamp']:
                if isinstance(serialized_cert.get(key), datetime):
                    # Ensure datetime is timezone-aware before converting
                    dt_obj = serialized_cert[key]
                    if dt_obj.tzinfo is None:
                        dt_obj = dt_obj.replace(tzinfo=timezone.utc)
                    serialized_cert[key] = dt_obj.isoformat()
            serialized_certs.append(serialized_cert)
        return serialized_certs

    # --- All methods below are copied directly from the original CrtShProvider ---
    # They are compatible because _query_crtsh returns data in the same format
    # as the original _query_crtsh_api method. A small adjustment is made to
    # _parse_certificate_date to handle datetime objects directly from the DB.

    def _get_cache_file_path(self, domain: str) -> Path:
        """Generate cache file path for a domain."""
        safe_domain = domain.replace('.', '_').replace('/', '_').replace('\\', '_')
        return self.cache_dir / f"{safe_domain}.json"

    def _get_cache_status(self, cache_file_path: Path) -> str:
        """Check cache status for a domain."""
        """
        Check cache status for a domain.
        Returns: 'not_found', 'fresh', or 'stale'
        """
        if not cache_file_path.exists():
            return "not_found"

@@ -245,130 +91,261 @@ class CrtShProvider(BaseProvider):
            self.logger.logger.warning(f"Invalid cache file format for {cache_file_path}: {e}")
            return "stale"

    def _load_cached_certificates(self, cache_file_path: Path) -> List[Dict[str, Any]]:
        """Load certificates from cache file."""
    def query_domain(self, domain: str) -> ProviderResult:
        """
        Query crt.sh for certificates containing the domain with caching support.

        Args:
            domain: Domain to investigate

        Returns:
            ProviderResult containing discovered relationships and attributes
        """
        if not _is_valid_domain(domain):
            return ProviderResult()

        if self._stop_event and self._stop_event.is_set():
            return ProviderResult()

        cache_file = self._get_cache_file_path(domain)
        cache_status = self._get_cache_status(cache_file)

        processed_certificates = []
        result = ProviderResult()

        try:
            if cache_status == "fresh":
                result = self._load_from_cache(cache_file)
                self.logger.logger.info(f"Using cached crt.sh data for {domain}")

            else:  # "stale" or "not_found"
                raw_certificates = self._query_crtsh_api(domain)

                if self._stop_event and self._stop_event.is_set():
                    return ProviderResult()

                # Process raw data into the application's expected format
                current_processed_certs = [self._extract_certificate_metadata(cert) for cert in raw_certificates]

                if cache_status == "stale":
                    # Load existing and append new processed certs
                    existing_result = self._load_from_cache(cache_file)
                    result = self._merge_results(existing_result, current_processed_certs, domain)
                    self.logger.logger.info(f"Refreshed and merged cache for {domain}")
                else:  # "not_found"
                    # Create new result from processed certs
                    result = self._process_certificates_to_result(domain, raw_certificates)
                    self.logger.logger.info(f"Created fresh result for {domain} ({result.get_relationship_count()} relationships)")

                # Save the result to cache
                self._save_result_to_cache(cache_file, result, domain)

        except requests.exceptions.RequestException as e:
            self.logger.logger.error(f"API query failed for {domain}: {e}")
            if cache_status != "not_found":
                result = self._load_from_cache(cache_file)
                self.logger.logger.warning(f"Using stale cache for {domain} due to API failure.")
            else:
                raise e  # Re-raise if there's no cache to fall back on

        return result

    def query_ip(self, ip: str) -> ProviderResult:
        """
        Query crt.sh for certificates containing the IP address.
        Note: crt.sh doesn't typically index by IP, so this returns empty results.

        Args:
            ip: IP address to investigate

        Returns:
            Empty ProviderResult (crt.sh doesn't support IP-based certificate queries effectively)
        """
        return ProviderResult()

    def _load_from_cache(self, cache_file_path: Path) -> ProviderResult:
        """Load processed crt.sh data from a cache file."""
        try:
            with open(cache_file_path, 'r') as f:
                cache_data = json.load(f)
                return cache_data.get('certificates', [])
                cache_content = json.load(f)

            result = ProviderResult()

            # Reconstruct relationships
            for rel_data in cache_content.get("relationships", []):
                result.add_relationship(
                    source_node=rel_data["source_node"],
                    target_node=rel_data["target_node"],
                    relationship_type=rel_data["relationship_type"],
                    provider=rel_data["provider"],
                    confidence=rel_data["confidence"],
                    raw_data=rel_data.get("raw_data", {})
                )

            # Reconstruct attributes
            for attr_data in cache_content.get("attributes", []):
                result.add_attribute(
                    target_node=attr_data["target_node"],
                    name=attr_data["name"],
                    value=attr_data["value"],
                    attr_type=attr_data["type"],
                    provider=attr_data["provider"],
                    confidence=attr_data["confidence"],
                    metadata=attr_data.get("metadata", {})
                )

            return result

        except (json.JSONDecodeError, FileNotFoundError, KeyError) as e:
            self.logger.logger.error(f"Failed to load cached certificates from {cache_file_path}: {e}")
            return []
            return ProviderResult()

    def _create_cache_file(self, cache_file_path: Path, domain: str, certificates: List[Dict[str, Any]]) -> None:
        """Create new cache file with certificates."""
    def _save_result_to_cache(self, cache_file_path: Path, result: ProviderResult, domain: str) -> None:
        """Save processed crt.sh result to a cache file."""
        try:
            cache_data = {
                "domain": domain,
                "first_cached": datetime.now(timezone.utc).isoformat(),
                "last_upstream_query": datetime.now(timezone.utc).isoformat(),
                "upstream_query_count": 1,
                "certificates": certificates
                "relationships": [
                    {
                        "source_node": rel.source_node,
                        "target_node": rel.target_node,
                        "relationship_type": rel.relationship_type,
                        "confidence": rel.confidence,
                        "provider": rel.provider,
                        "raw_data": rel.raw_data
                    } for rel in result.relationships
                ],
                "attributes": [
                    {
                        "target_node": attr.target_node,
                        "name": attr.name,
                        "value": attr.value,
                        "type": attr.type,
                        "provider": attr.provider,
                        "confidence": attr.confidence,
                        "metadata": attr.metadata
                    } for attr in result.attributes
                ]
            }
            cache_file_path.parent.mkdir(parents=True, exist_ok=True)
            with open(cache_file_path, 'w') as f:
                json.dump(cache_data, f, separators=(',', ':'))
                self.logger.logger.info(f"Created cache file for {domain} with {len(certificates)} certificates")
                json.dump(cache_data, f, separators=(',', ':'), default=str)
        except Exception as e:
            self.logger.logger.warning(f"Failed to create cache file for {domain}: {e}")
            self.logger.logger.warning(f"Failed to save cache file for {domain}: {e}")

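For reference, the cache file written by the new `_save_result_to_cache` would look roughly like this (values are illustrative; `cert_issuer_name` is a hypothetical attribute name, not one confirmed by the diff):

```python
# Illustrative shape of the on-disk crt.sh cache after the rework.
cache_data = {
    "domain": "example.com",
    "last_upstream_query": "2025-01-01T12:00:00+00:00",
    "relationships": [{
        "source_node": "example.com",
        "target_node": "www.example.com",
        "relationship_type": "san_certificate",
        "confidence": 0.9,
        "provider": "crtsh",
        "raw_data": {},
    }],
    "attributes": [{
        "target_node": "example.com",
        "name": "cert_issuer_name",      # hypothetical example attribute
        "value": "Let's Encrypt",
        "type": "certificate_data",
        "provider": "crtsh",
        "confidence": 0.9,
        "metadata": {},
    }],
}
```

Note the design shift: the cache now stores the processed `ProviderResult` rather than raw certificates, so cache hits skip certificate parsing entirely.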
def _append_to_cache(self, cache_file_path: Path, new_certificates: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
||||
"""Append new certificates to existing cache and return all certificates."""
|
||||
try:
|
||||
with open(cache_file_path, 'r') as f:
|
||||
cache_data = json.load(f)
|
||||
def _merge_results(self, existing_result: ProviderResult, new_certificates: List[Dict[str, Any]], domain: str) -> ProviderResult:
|
||||
"""Merge new certificate data with existing cached result."""
|
||||
# Create a fresh result from the new certificates
|
||||
new_result = self._process_certificates_to_result(domain, new_certificates)
|
||||
|
||||
existing_ids = {cert.get('id') for cert in cache_data.get('certificates', [])}
|
||||
added_count = 0
|
||||
for cert in new_certificates:
|
||||
cert_id = cert.get('id')
|
||||
if cert_id and cert_id not in existing_ids:
|
||||
cache_data['certificates'].append(cert)
|
||||
existing_ids.add(cert_id)
|
||||
added_count += 1
|
||||
# Simple merge strategy: combine all relationships and attributes
|
||||
# In practice, you might want more sophisticated deduplication
|
||||
merged_result = ProviderResult()
|
||||
|
||||
cache_data['last_upstream_query'] = datetime.now(timezone.utc).isoformat()
|
||||
cache_data['upstream_query_count'] = cache_data.get('upstream_query_count', 0) + 1
|
||||
# Add existing relationships and attributes
|
||||
merged_result.relationships.extend(existing_result.relationships)
|
||||
merged_result.attributes.extend(existing_result.attributes)
|
||||
|
||||
with open(cache_file_path, 'w') as f:
|
||||
json.dump(cache_data, f, separators=(',', ':'))
|
||||
# Add new relationships and attributes
|
||||
merged_result.relationships.extend(new_result.relationships)
|
||||
merged_result.attributes.extend(new_result.attributes)
|
||||
|
||||
total_certs = len(cache_data['certificates'])
|
||||
self.logger.logger.info(f"Appended {added_count} new certificates to cache. Total: {total_certs}")
|
||||
return cache_data['certificates']
|
||||
except Exception as e:
|
||||
self.logger.logger.warning(f"Failed to append to cache: {e}")
|
||||
return new_certificates
|
||||
return merged_result
|
||||
|
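The comment in the merge above explicitly leaves deduplication open. One possible stricter merge, shown as a hypothetical standalone helper rather than anything present in the codebase, could key on the identifying tuple of each relationship and attribute:

```python
# Hypothetical deduplicating merge; the key tuples are chosen here for illustration only.
def merge_results_deduplicated(existing: ProviderResult, new: ProviderResult) -> ProviderResult:
    merged = ProviderResult()
    seen_rels = set()
    for rel in existing.relationships + new.relationships:
        key = (rel.source_node, rel.target_node, rel.relationship_type, rel.provider)
        if key not in seen_rels:
            seen_rels.add(key)
            merged.relationships.append(rel)
    seen_attrs = set()
    for attr in existing.attributes + new.attributes:
        key = (attr.target_node, attr.name, attr.provider, str(attr.value))
        if key not in seen_attrs:
            seen_attrs.add(key)
            merged.attributes.append(attr)
    return merged
```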
    def _parse_issuer_organization(self, issuer_dn: str) -> str:
        """Parse the issuer Distinguished Name to extract just the organization name."""
        if not issuer_dn: return issuer_dn
        try:
            components = [comp.strip() for comp in issuer_dn.split(',')]
            for component in components:
                if component.startswith('O='):
                    org_name = component[2:].strip()
                    if org_name.startswith('"') and org_name.endswith('"'):
                        org_name = org_name[1:-1]
                    return org_name
            return issuer_dn
        except Exception as e:
            self.logger.logger.debug(f"Failed to parse issuer DN '{issuer_dn}': {e}")
            return issuer_dn

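Assuming the method behaves as written, a typical issuer DN reduces to just the organization:

```python
# Illustrative inputs and outputs for _parse_issuer_organization (values invented).
provider._parse_issuer_organization('C=US, O=DigiCert Inc, CN=DigiCert TLS RSA SHA256 2020 CA1')
# -> 'DigiCert Inc'
provider._parse_issuer_organization('CN=R3')
# -> 'CN=R3'  (no O= component, so the DN is returned unchanged)
```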
    def _query_crtsh_api(self, domain: str) -> List[Dict[str, Any]]:
        """Query crt.sh API for raw certificate data."""
        url = f"{self.base_url}?q={quote(domain)}&output=json"
        response = self.make_request(url, target_indicator=domain)

    def _parse_certificate_date(self, date_input: Any) -> datetime:
        if not response or response.status_code != 200:
            raise requests.exceptions.RequestException(f"crt.sh API returned status {response.status_code if response else 'None'}")

        certificates = response.json()
        if not certificates:
            return []

        return certificates

    def _process_certificates_to_result(self, domain: str, certificates: List[Dict[str, Any]]) -> ProviderResult:
        """
        Parse certificate date from various formats (string from cache, datetime from DB).
        Process certificates to create ProviderResult with relationships and attributes.
        """
        if isinstance(date_input, datetime):
            # If it's already a datetime object from the DB, just ensure it's UTC
            if date_input.tzinfo is None:
                return date_input.replace(tzinfo=timezone.utc)
            return date_input
        result = ProviderResult()

        date_string = str(date_input)
        if not date_string:
            raise ValueError("Empty date string")
        if self._stop_event and self._stop_event.is_set():
            print(f"CrtSh processing cancelled before processing for domain: {domain}")
            return result

        try:
            if 'Z' in date_string:
                return datetime.fromisoformat(date_string.replace('Z', '+00:00'))
            # Handle standard ISO format with or without timezone
            dt = datetime.fromisoformat(date_string)
            if dt.tzinfo is None:
                return dt.replace(tzinfo=timezone.utc)
            return dt
        except ValueError as e:
            try:
                # Fallback for other formats
                return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
            except Exception:
                raise ValueError(f"Unable to parse date: {date_string}") from e
        all_discovered_domains = set()

    def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
        """Check if a certificate is currently valid based on its expiry date."""
        try:
            not_after_str = cert_data.get('not_after')
            if not not_after_str: return False
        for i, cert_data in enumerate(certificates):
            if i % 5 == 0 and self._stop_event and self._stop_event.is_set():
                print(f"CrtSh processing cancelled at certificate {i} for domain: {domain}")
                break

            not_after_date = self._parse_certificate_date(not_after_str)
            not_before_str = cert_data.get('not_before')
            now = datetime.now(timezone.utc)
            is_not_expired = not_after_date > now
            cert_domains = self._extract_domains_from_certificate(cert_data)
            all_discovered_domains.update(cert_domains)

            if not_before_str:
                not_before_date = self._parse_certificate_date(not_before_str)
                is_not_before_valid = not_before_date <= now
                return is_not_expired and is_not_before_valid
            return is_not_expired
        except Exception as e:
            self.logger.logger.debug(f"Certificate validity check failed: {e}")
            return False
            for cert_domain in cert_domains:
                if not _is_valid_domain(cert_domain):
                    continue

                for key, value in self._extract_certificate_metadata(cert_data).items():
                    if value is not None:
                        result.add_attribute(
                            target_node=cert_domain,
                            name=f"cert_{key}",
                            value=value,
                            attr_type='certificate_data',
                            provider=self.name,
                            confidence=0.9
                        )

        if self._stop_event and self._stop_event.is_set():
            print(f"CrtSh query cancelled before relationship creation for domain: {domain}")
            return result

        for i, discovered_domain in enumerate(all_discovered_domains):
            if discovered_domain == domain:
                continue

            if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
                print(f"CrtSh relationship creation cancelled for domain: {domain}")
                break

            if not _is_valid_domain(discovered_domain):
                continue

            confidence = self._calculate_domain_relationship_confidence(
                domain, discovered_domain, [], all_discovered_domains
            )

            result.add_relationship(
                source_node=domain,
                target_node=discovered_domain,
                relationship_type='san_certificate',
                provider=self.name,
                confidence=confidence,
                raw_data={'relationship_type': 'certificate_discovery'}
            )

            self.log_relationship_discovery(
                source_node=domain,
                target_node=discovered_domain,
                relationship_type='san_certificate',
                confidence_score=confidence,
                raw_data={'relationship_type': 'certificate_discovery'},
                discovery_method="certificate_transparency_analysis"
            )

        return result

    def _extract_certificate_metadata(self, cert_data: Dict[str, Any]) -> Dict[str, Any]:
        # This method works as-is.
        """Extract comprehensive metadata from certificate data."""
        raw_issuer_name = cert_data.get('issuer_name', '')
        parsed_issuer_name = self._parse_issuer_organization(raw_issuer_name)

        metadata = {
            'certificate_id': cert_data.get('id'),
            'serial_number': cert_data.get('serial_number'),
@@ -378,136 +355,286 @@ class CrtShProvider(BaseProvider):
            'not_before': cert_data.get('not_before'),
            'not_after': cert_data.get('not_after'),
            'entry_timestamp': cert_data.get('entry_timestamp'),
            'source': 'crt.sh (DB)'
            'source': 'crt.sh'
        }

        try:
            if metadata['not_before'] and metadata['not_after']:
                not_before = self._parse_certificate_date(metadata['not_before'])
                not_after = self._parse_certificate_date(metadata['not_after'])

                metadata['validity_period_days'] = (not_after - not_before).days
                metadata['is_currently_valid'] = self._is_cert_valid(cert_data)
                metadata['expires_soon'] = (not_after - datetime.now(timezone.utc)).days <= 30

                metadata['not_before'] = not_before.strftime('%Y-%m-%d %H:%M:%S UTC')
                metadata['not_after'] = not_after.strftime('%Y-%m-%d %H:%M:%S UTC')

        except Exception as e:
            self.logger.logger.debug(f"Error computing certificate metadata: {e}")
            metadata['is_currently_valid'] = False
            metadata['expires_soon'] = False

        return metadata

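For a typical crt.sh record, the metadata produced above would look roughly like this (all values invented for illustration):

```python
# Illustrative shape of the dict returned by _extract_certificate_metadata.
{
    'certificate_id': 1234567890,
    'serial_number': '04ab...',
    'not_before': '2024-01-01 00:00:00 UTC',   # reformatted by strftime above
    'not_after': '2024-03-31 00:00:00 UTC',
    'entry_timestamp': '2024-01-01T00:05:00',
    'source': 'crt.sh',
    'validity_period_days': 90,
    'is_currently_valid': False,
    'expires_soon': False,
}
```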
    def _process_certificates_to_relationships(self, domain: str, certificates: List[Dict[str, Any]]) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
        # This method works as-is.
        relationships = []
        if self._stop_event and self._stop_event.is_set(): return []
        domain_certificates = {}
        all_discovered_domains = set()
        for i, cert_data in enumerate(certificates):
            if i % 5 == 0 and self._stop_event and self._stop_event.is_set(): break
            cert_metadata = self._extract_certificate_metadata(cert_data)
            cert_domains = self._extract_domains_from_certificate(cert_data)
            all_discovered_domains.update(cert_domains)
            for cert_domain in cert_domains:
                if not _is_valid_domain(cert_domain): continue
                if cert_domain not in domain_certificates:
                    domain_certificates[cert_domain] = []
                domain_certificates[cert_domain].append(cert_metadata)
        if self._stop_event and self._stop_event.is_set(): return []
        for i, discovered_domain in enumerate(all_discovered_domains):
            if discovered_domain == domain: continue
            if i % 10 == 0 and self._stop_event and self._stop_event.is_set(): break
            if not _is_valid_domain(discovered_domain): continue
            query_domain_certs = domain_certificates.get(domain, [])
            discovered_domain_certs = domain_certificates.get(discovered_domain, [])
            shared_certificates = self._find_shared_certificates(query_domain_certs, discovered_domain_certs)
            confidence = self._calculate_domain_relationship_confidence(
                domain, discovered_domain, shared_certificates, all_discovered_domains
            )
            relationship_raw_data = {
                'relationship_type': 'certificate_discovery',
                'shared_certificates': shared_certificates,
                'total_shared_certs': len(shared_certificates),
                'discovery_context': self._determine_relationship_context(discovered_domain, domain),
                'domain_certificates': {
                    domain: self._summarize_certificates(query_domain_certs),
                    discovered_domain: self._summarize_certificates(discovered_domain_certs)
                }
            }
            relationships.append((
                domain, discovered_domain, 'san_certificate', confidence, relationship_raw_data
            ))
            self.log_relationship_discovery(
                source_node=domain, target_node=discovered_domain, relationship_type='san_certificate',
                confidence_score=confidence, raw_data=relationship_raw_data,
                discovery_method="certificate_transparency_analysis"
            )
        return relationships
    def _parse_issuer_organization(self, issuer_dn: str) -> str:
        """Parse the issuer Distinguished Name to extract just the organization name."""
        if not issuer_dn:
            return issuer_dn

    # --- All remaining helper methods are identical to the original and fully compatible ---
    # They are included here for completeness.
        try:
            components = [comp.strip() for comp in issuer_dn.split(',')]

    def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        cert1_ids = {cert.get('certificate_id') for cert in certs1 if cert.get('certificate_id')}
        return [cert for cert in certs2 if cert.get('certificate_id') in cert1_ids]
            for component in components:
                if component.startswith('O='):
                    org_name = component[2:].strip()
                    if org_name.startswith('"') and org_name.endswith('"'):
                        org_name = org_name[1:-1]
                    return org_name

    def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
        if not certificates: return {'total_certificates': 0, 'valid_certificates': 0, 'expired_certificates': 0, 'expires_soon_count': 0, 'unique_issuers': [], 'latest_certificate': None, 'has_valid_cert': False}
        valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
        expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))
        unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))
        latest_cert, latest_date = None, None
        for cert in certificates:
            return issuer_dn

        except Exception as e:
            self.logger.logger.debug(f"Failed to parse issuer DN '{issuer_dn}': {e}")
            return issuer_dn

    def _parse_certificate_date(self, date_string: str) -> datetime:
        """Parse certificate date from crt.sh format."""
        if not date_string:
            raise ValueError("Empty date string")

        try:
            if date_string.endswith('Z'):
                return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
            elif '+' in date_string or date_string.endswith('UTC'):
                date_string = date_string.replace('UTC', '').strip()
                if '+' in date_string:
                    date_string = date_string.split('+')[0]
                return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
            else:
                return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
        except Exception as e:
            try:
                if cert.get('not_before'):
                    cert_date = self._parse_certificate_date(cert['not_before'])
                    if latest_date is None or cert_date > latest_date:
                        latest_date, latest_cert = cert_date, cert
            except Exception: continue
        return {'total_certificates': len(certificates), 'valid_certificates': valid_count, 'expired_certificates': len(certificates) - valid_count, 'expires_soon_count': expires_soon_count, 'unique_issuers': unique_issuers, 'latest_certificate': latest_cert, 'has_valid_cert': valid_count > 0, 'certificate_details': certificates}
                return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
            except Exception:
                raise ValueError(f"Unable to parse date: {date_string}") from e

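Assuming `_parse_certificate_date` behaves as written, the common crt.sh timestamp variants normalize to timezone-aware UTC datetimes:

```python
# Illustrative normalization of crt.sh timestamp variants.
provider._parse_certificate_date("2024-01-01T00:00:00Z")
# -> datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
provider._parse_certificate_date("2024-01-01T00:00:00")
# -> datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
provider._parse_certificate_date("")
# raises ValueError("Empty date string")
```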
    def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str, shared_certificates: List[Dict[str, Any]], all_discovered_domains: Set[str]) -> float:
        base_confidence, context_bonus, shared_bonus, validity_bonus, issuer_bonus = 0.9, 0.0, 0.0, 0.0, 0.0
        relationship_context = self._determine_relationship_context(domain2, domain1)
        if relationship_context == 'subdomain': context_bonus = 0.1
        elif relationship_context == 'parent_domain': context_bonus = 0.05
        if shared_certificates:
            if len(shared_certificates) >= 3: shared_bonus = 0.1
            elif len(shared_certificates) >= 2: shared_bonus = 0.05
            else: shared_bonus = 0.02
            if any(cert.get('is_currently_valid') for cert in shared_certificates): validity_bonus = 0.05
            for cert in shared_certificates:
                if any(ca in cert.get('issuer_name', '').lower() for ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
                    issuer_bonus = max(issuer_bonus, 0.03)
                    break
        return max(0.1, min(1.0, base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus))
    def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
        """Check if a certificate is currently valid based on its expiry date."""
        try:
            not_after_str = cert_data.get('not_after')
            if not not_after_str:
                return False

    def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
        if cert_domain == query_domain: return 'exact_match'
        if cert_domain.endswith(f'.{query_domain}'): return 'subdomain'
        if query_domain.endswith(f'.{cert_domain}'): return 'parent_domain'
        return 'related_domain'
            not_after_date = self._parse_certificate_date(not_after_str)
            not_before_str = cert_data.get('not_before')

    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
        return []
            now = datetime.now(timezone.utc)
            is_not_expired = not_after_date > now

            if not_before_str:
                not_before_date = self._parse_certificate_date(not_before_str)
                is_not_before_valid = not_before_date <= now
                return is_not_expired and is_not_before_valid

            return is_not_expired

        except Exception as e:
            self.logger.logger.debug(f"Certificate validity check failed: {e}")
            return False

    def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
        """Extract all domains from certificate data."""
        domains = set()
        if cn := cert_data.get('common_name'):
            if cleaned := self._clean_domain_name(cn):
                domains.update(cleaned)
        if nv := cert_data.get('name_value'):
            for line in nv.split('\n'):
                if cleaned := self._clean_domain_name(line.strip()):
                    domains.update(cleaned)

        # Extract from common name
        common_name = cert_data.get('common_name', '')
        if common_name:
            cleaned_cn = self._clean_domain_name(common_name)
            if cleaned_cn:
                domains.update(cleaned_cn)

        # Extract from name_value field (contains SANs)
        name_value = cert_data.get('name_value', '')
        if name_value:
            for line in name_value.split('\n'):
                cleaned_domains = self._clean_domain_name(line.strip())
                if cleaned_domains:
                    domains.update(cleaned_domains)

        return domains

    def _clean_domain_name(self, domain_name: str) -> List[str]:
        if not domain_name: return []
        domain = domain_name.strip().lower().split('://', 1)[-1].split('/', 1)[0]
        if ':' in domain and not domain.count(':') > 1: domain = domain.split(':', 1)[0]
        cleaned_domains = [domain, domain[2:]] if domain.startswith('*.') else [domain]
        """Clean and normalize domain name from certificate data."""
        if not domain_name:
            return []

        domain = domain_name.strip().lower()

        if domain.startswith(('http://', 'https://')):
            domain = domain.split('://', 1)[1]

        if '/' in domain:
            domain = domain.split('/', 1)[0]

        if ':' in domain and not domain.count(':') > 1:
            domain = domain.split(':', 1)[0]

        cleaned_domains = []
        if domain.startswith('*.'):
            cleaned_domains.append(domain)
            cleaned_domains.append(domain[2:])
        else:
            cleaned_domains.append(domain)

        final_domains = []
        for d in cleaned_domains:
            d = re.sub(r'[^\w\-\.]', '', d)
            if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-')):
                final_domains.append(d)

        return [d for d in final_domains if _is_valid_domain(d)]

    def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Find certificates that are shared between two domain certificate lists."""
        shared = []

        cert1_ids = set()
        for cert in certs1:
            cert_id = cert.get('certificate_id')
            if cert_id and isinstance(cert_id, (int, str, float, bool, tuple)):
                cert1_ids.add(cert_id)

        for cert in certs2:
            cert_id = cert.get('certificate_id')
            if cert_id and isinstance(cert_id, (int, str, float, bool, tuple)):
                if cert_id in cert1_ids:
                    shared.append(cert)

        return shared

    def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Create a summary of certificates for a domain."""
        if not certificates:
            return {
                'total_certificates': 0,
                'valid_certificates': 0,
                'expired_certificates': 0,
                'expires_soon_count': 0,
                'unique_issuers': [],
                'latest_certificate': None,
                'has_valid_cert': False,
                'certificate_details': []
            }

        valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
        expired_count = len(certificates) - valid_count
        expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))

        unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))

        # Find the most recent certificate
        latest_cert = None
        latest_date = None

        for cert in certificates:
            try:
                if cert.get('not_before'):
                    cert_date = self._parse_certificate_date(cert['not_before'])
                    if latest_date is None or cert_date > latest_date:
                        latest_date = cert_date
                        latest_cert = cert
            except Exception:
                continue

        # Sort certificates by date for better display (newest first)
        sorted_certificates = sorted(
            certificates,
            key=lambda c: self._get_certificate_sort_date(c),
            reverse=True
        )

        return {
            'total_certificates': len(certificates),
            'valid_certificates': valid_count,
            'expired_certificates': expired_count,
            'expires_soon_count': expires_soon_count,
            'unique_issuers': unique_issuers,
            'latest_certificate': latest_cert,
            'has_valid_cert': valid_count > 0,
            'certificate_details': sorted_certificates
        }

    def _get_certificate_sort_date(self, cert: Dict[str, Any]) -> datetime:
        """Get a sortable date from certificate data for chronological ordering."""
        try:
            if cert.get('not_before'):
                return self._parse_certificate_date(cert['not_before'])

            if cert.get('entry_timestamp'):
                return self._parse_certificate_date(cert['entry_timestamp'])

            return datetime(1970, 1, 1, tzinfo=timezone.utc)

        except Exception:
            return datetime(1970, 1, 1, tzinfo=timezone.utc)

    def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str,
                                                  shared_certificates: List[Dict[str, Any]],
                                                  all_discovered_domains: Set[str]) -> float:
        """Calculate confidence score for domain relationship based on various factors."""
        base_confidence = 0.9

        # Adjust confidence based on domain relationship context
        relationship_context = self._determine_relationship_context(domain2, domain1)

        if relationship_context == 'exact_match':
            context_bonus = 0.0
        elif relationship_context == 'subdomain':
            context_bonus = 0.1
        elif relationship_context == 'parent_domain':
            context_bonus = 0.05
        else:
            context_bonus = 0.0

        # Adjust confidence based on shared certificates
        if shared_certificates:
            shared_count = len(shared_certificates)
            if shared_count >= 3:
                shared_bonus = 0.1
            elif shared_count >= 2:
                shared_bonus = 0.05
            else:
                shared_bonus = 0.02

            valid_shared = sum(1 for cert in shared_certificates if cert.get('is_currently_valid'))
            if valid_shared > 0:
                validity_bonus = 0.05
            else:
                validity_bonus = 0.0
        else:
            shared_bonus = 0.0
            validity_bonus = 0.0

        # Adjust confidence based on certificate issuer reputation
        issuer_bonus = 0.0
        if shared_certificates:
            for cert in shared_certificates:
                issuer = cert.get('issuer_name', '').lower()
                if any(trusted_ca in issuer for trusted_ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
                    issuer_bonus = max(issuer_bonus, 0.03)
                    break

        final_confidence = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
        return max(0.1, min(1.0, final_confidence))

    def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
        """Determine the context of the relationship between certificate domain and query domain."""
        if cert_domain == query_domain:
            return 'exact_match'
        elif cert_domain.endswith(f'.{query_domain}'):
            return 'subdomain'
        elif query_domain.endswith(f'.{cert_domain}'):
            return 'parent_domain'
        else:
            return 'related_domain'
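A worked example of the scoring above; the numbers follow directly from the bonuses in the code:

```python
# Worked example: sub.example.com discovered from example.com's certificates.
# context = 'subdomain'            -> context_bonus  = 0.10
# two shared certificates          -> shared_bonus   = 0.05
# at least one currently valid     -> validity_bonus = 0.05
# issued by Let's Encrypt          -> issuer_bonus   = 0.03
# confidence = min(1.0, 0.9 + 0.10 + 0.05 + 0.05 + 0.03) = 1.0  (clamped at 1.0)
```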
@@ -1,15 +1,16 @@
# dnsrecon/providers/dns_provider.py

from dns import resolver, reversename
from typing import List, Dict, Any, Tuple
from typing import Dict
from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_ip, _is_valid_domain


class DNSProvider(BaseProvider):
    """
    Provider for standard DNS resolution and reverse DNS lookups.
    Now uses session-specific configuration.
    Now returns standardized ProviderResult objects.
    """

    def __init__(self, name=None, session_config=None):
@@ -25,7 +26,6 @@ class DNSProvider(BaseProvider):
        self.resolver = resolver.Resolver()
        self.resolver.timeout = 5
        self.resolver.lifetime = 10
        #self.resolver.nameservers = ['127.0.0.1']

    def get_name(self) -> str:
        """Return the provider name."""
@@ -47,31 +47,35 @@ class DNSProvider(BaseProvider):
        """DNS is always available - no API key required."""
        return True

    def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
    def query_domain(self, domain: str) -> ProviderResult:
        """
        Query DNS records for the domain to discover relationships.
        ...
        Query DNS records for the domain to discover relationships and attributes.

        Args:
            domain: Domain to investigate

        Returns:
            ProviderResult containing discovered relationships and attributes
        """
        if not _is_valid_domain(domain):
            return []
            return ProviderResult()

        relationships = []
        result = ProviderResult()

        # Query all record types
        for record_type in ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SOA', 'TXT', 'SRV', 'CAA']:
            try:
                relationships.extend(self._query_record(domain, record_type))
                self._query_record(domain, record_type, result)
            except resolver.NoAnswer:
                # This is not an error, just a confirmation that the record doesn't exist.
                self.logger.logger.debug(f"No {record_type} record found for {domain}")
            except Exception as e:
                self.failed_requests += 1
                self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
                # Optionally, you might want to re-raise other, more serious exceptions.

        return relationships
        return result

    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
    def query_ip(self, ip: str) -> ProviderResult:
        """
        Query reverse DNS for the IP address.

@@ -79,12 +83,12 @@ class DNSProvider(BaseProvider):
            ip: IP address to investigate

        Returns:
            List of relationships discovered from reverse DNS
            ProviderResult containing discovered relationships and attributes
        """
        if not _is_valid_ip(ip):
            return []
            return ProviderResult()

        relationships = []
        result = ProviderResult()

        try:
            # Perform reverse DNS lookup
@@ -97,27 +101,44 @@ class DNSProvider(BaseProvider):
                hostname = str(ptr_record).rstrip('.')

                if _is_valid_domain(hostname):
                    raw_data = {
                        'query_type': 'PTR',
                        'ip_address': ip,
                        'hostname': hostname,
                        'ttl': response.ttl
                    }
                    # Add the relationship
                    result.add_relationship(
                        source_node=ip,
                        target_node=hostname,
                        relationship_type='ptr_record',
                        provider=self.name,
                        confidence=0.8,
                        raw_data={
                            'query_type': 'PTR',
                            'ip_address': ip,
                            'hostname': hostname,
                            'ttl': response.ttl
                        }
                    )

                    relationships.append((
                        ip,
                        hostname,
                        'ptr_record',
                        0.8,
                        raw_data
                    ))
                    # Add PTR record as attribute to the IP
                    result.add_attribute(
                        target_node=ip,
                        name='ptr_record',
                        value=hostname,
                        attr_type='dns_record',
                        provider=self.name,
                        confidence=0.8,
                        metadata={'ttl': response.ttl}
                    )

                    # Log the relationship discovery
                    self.log_relationship_discovery(
                        source_node=ip,
                        target_node=hostname,
                        relationship_type='ptr_record',
                        confidence_score=0.8,
                        raw_data=raw_data,
                        raw_data={
                            'query_type': 'PTR',
                            'ip_address': ip,
                            'hostname': hostname,
                            'ttl': response.ttl
                        },
                        discovery_method="reverse_dns_lookup"
                    )

@@ -130,18 +151,24 @@ class DNSProvider(BaseProvider):
            # Re-raise the exception so the scanner can handle the failure
            raise e

        return relationships
        return result

    def _query_record(self, domain: str, record_type: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
    def _query_record(self, domain: str, record_type: str, result: ProviderResult) -> None:
        """
        Query a specific type of DNS record for the domain.
        Query a specific type of DNS record for the domain and add results to ProviderResult.

        Args:
            domain: Domain to query
            record_type: DNS record type (A, AAAA, CNAME, etc.)
            result: ProviderResult to populate
        """
        relationships = []
        try:
            self.total_requests += 1
            response = self.resolver.resolve(domain, record_type)
            self.successful_requests += 1

            dns_records = []

            for record in response:
                target = ""
                if record_type in ['A', 'AAAA']:
@@ -153,12 +180,16 @@ class DNSProvider(BaseProvider):
                elif record_type == 'SOA':
                    target = str(record.mname).rstrip('.')
                elif record_type in ['TXT']:
                    # TXT records are treated as metadata, not relationships.
                    # TXT records are treated as attributes, not relationships
                    txt_value = str(record).strip('"')
                    dns_records.append(f"TXT: {txt_value}")
                    continue
                elif record_type == 'SRV':
                    target = str(record.target).rstrip('.')
                elif record_type == 'CAA':
                    target = f"{record.flags} {record.tag.decode('utf-8')} \"{record.value.decode('utf-8')}\""
                    caa_value = f"{record.flags} {record.tag.decode('utf-8')} \"{record.value.decode('utf-8')}\""
                    dns_records.append(f"CAA: {caa_value}")
                    continue
                else:
                    target = str(record)

@@ -170,16 +201,22 @@ class DNSProvider(BaseProvider):
                    'ttl': response.ttl
                }
                relationship_type = f"{record_type.lower()}_record"
                confidence = 0.8  # Default confidence for DNS records
                confidence = 0.8  # Standard confidence for DNS records

                relationships.append((
                    domain,
                    target,
                    relationship_type,
                    confidence,
                    raw_data
                ))
                # Add relationship
                result.add_relationship(
                    source_node=domain,
                    target_node=target,
                    relationship_type=relationship_type,
                    provider=self.name,
                    confidence=confidence,
                    raw_data=raw_data
                )

                # Add DNS record as attribute to the source domain
                dns_records.append(f"{record_type}: {target}")

                # Log relationship discovery
                self.log_relationship_discovery(
                    source_node=domain,
                    target_node=target,
@@ -189,10 +226,20 @@ class DNSProvider(BaseProvider):
                    discovery_method=f"dns_{record_type.lower()}_record"
                )

            # Add DNS records as a consolidated attribute
            if dns_records:
                result.add_attribute(
                    target_node=domain,
                    name='dns_records',
                    value=dns_records,
                    attr_type='dns_record_list',
                    provider=self.name,
                    confidence=0.8,
                    metadata={'record_types': [record_type]}
                )

        except Exception as e:
            self.failed_requests += 1
            self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
            # Re-raise the exception so the scanner can handle it
            raise e

        return relationships
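A minimal sketch of driving this provider by hand (in DNSRecon these calls are normally made by the scanner; the constructor arguments shown are assumptions based on the signature above):

```python
# Hypothetical direct use of DNSProvider outside the scanner loop.
provider = DNSProvider(name="dns", session_config=None)

result = provider.query_domain("example.com")
for rel in result.relationships:
    print(rel.source_node, f"--{rel.relationship_type}->", rel.target_node, rel.confidence)

reverse = provider.query_ip("93.184.216.34")
print(len(reverse.relationships), "PTR relationships found")
```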
@@ -1,15 +1,20 @@
# dnsrecon/providers/shodan_provider.py

import json
from typing import List, Dict, Any, Tuple
from pathlib import Path
from typing import Dict, Any
from datetime import datetime, timezone
import requests

from .base_provider import BaseProvider
from core.provider_result import ProviderResult
from utils.helpers import _is_valid_ip, _is_valid_domain


class ShodanProvider(BaseProvider):
    """
    Provider for querying Shodan API for IP address and hostname information.
    Now uses session-specific API keys.
    Provider for querying Shodan API for IP address information.
    Now returns standardized ProviderResult objects with caching support.
    """

    def __init__(self, name=None, session_config=None):
@@ -23,6 +28,10 @@ class ShodanProvider(BaseProvider):
        self.base_url = "https://api.shodan.io"
        self.api_key = self.config.get_api_key('shodan')

        # Initialize cache directory
        self.cache_dir = Path('cache') / 'shodan'
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def is_available(self) -> bool:
        """Check if Shodan provider is available (has valid API key in this session)."""
        return self.api_key is not None and len(self.api_key.strip()) > 0
@@ -33,7 +42,7 @@ class ShodanProvider(BaseProvider):

    def get_display_name(self) -> str:
        """Return the provider display name for the UI."""
        return "shodan"
        return "Shodan"

    def requires_api_key(self) -> bool:
        """Return True if the provider requires an API key."""
@@ -41,267 +50,234 @@ class ShodanProvider(BaseProvider):

    def get_eligibility(self) -> Dict[str, bool]:
        """Return a dictionary indicating if the provider can query domains and/or IPs."""
        return {'domains': True, 'ips': True}
        return {'domains': False, 'ips': True}

    def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
    def _get_cache_file_path(self, ip: str) -> Path:
        """Generate cache file path for an IP address."""
        safe_ip = ip.replace('.', '_').replace(':', '_')
        return self.cache_dir / f"{safe_ip}.json"

    def _get_cache_status(self, cache_file_path: Path) -> str:
        """
        Query Shodan for information about a domain.
        Uses Shodan's hostname search to find associated IPs.
        Check cache status for an IP.
        Returns: 'not_found', 'fresh', or 'stale'
        """
        if not cache_file_path.exists():
            return "not_found"

        try:
            with open(cache_file_path, 'r') as f:
                cache_data = json.load(f)

            last_query_str = cache_data.get("last_upstream_query")
            if not last_query_str:
                return "stale"

            last_query = datetime.fromisoformat(last_query_str.replace('Z', '+00:00'))
            hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600

            cache_timeout = self.config.cache_timeout_hours
            if hours_since_query < cache_timeout:
                return "fresh"
            else:
                return "stale"

        except (json.JSONDecodeError, ValueError, KeyError):
            return "stale"
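The freshness decision above reduces to a simple elapsed-hours comparison. A standalone illustration, with the 12-hour timeout as a placeholder for whatever `self.config.cache_timeout_hours` holds in a given session:

```python
# Standalone illustration of the freshness check used by _get_cache_status.
from datetime import datetime, timedelta, timezone

cache_timeout_hours = 12  # illustrative; configured per session in DNSRecon
last_query = datetime.now(timezone.utc) - timedelta(hours=13)
hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600
status = "fresh" if hours_since_query < cache_timeout_hours else "stale"
print(status)  # -> "stale"
```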
def query_domain(self, domain: str) -> ProviderResult:
|
||||
"""
|
||||
Domain queries are no longer supported for the Shodan provider.
|
||||
|
||||
Args:
|
||||
domain: Domain to investigate
|
||||
|
||||
Returns:
|
||||
List of relationships discovered from Shodan data
|
||||
Empty ProviderResult
|
||||
"""
|
||||
if not _is_valid_domain(domain) or not self.is_available():
|
||||
return []
|
||||
return ProviderResult()
|
||||
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
# Search for hostname in Shodan
|
||||
search_query = f"hostname:{domain}"
|
||||
url = f"{self.base_url}/shodan/host/search"
|
||||
params = {
|
||||
'key': self.api_key,
|
||||
'query': search_query,
|
||||
'minify': True # Get minimal data to reduce bandwidth
|
||||
}
|
||||
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=domain)
|
||||
|
||||
if not response or response.status_code != 200:
|
||||
return []
|
||||
|
||||
data = response.json()
|
||||
|
||||
if 'matches' not in data:
|
||||
return []
|
||||
|
||||
# Process search results
|
||||
for match in data['matches']:
|
||||
ip_address = match.get('ip_str')
|
||||
hostnames = match.get('hostnames', [])
|
||||
|
||||
if ip_address and domain in hostnames:
|
||||
raw_data = {
|
||||
'ip_address': ip_address,
|
||||
'hostnames': hostnames,
|
||||
'country': match.get('location', {}).get('country_name', ''),
|
||||
'city': match.get('location', {}).get('city', ''),
|
||||
'isp': match.get('isp', ''),
|
||||
'org': match.get('org', ''),
|
||||
'ports': match.get('ports', []),
|
||||
'last_update': match.get('last_update', '')
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
domain,
|
||||
ip_address,
|
||||
'a_record', # Domain resolves to IP
|
||||
0.8,
|
||||
raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=ip_address,
|
||||
relationship_type='a_record',
|
||||
confidence_score=0.8,
|
||||
raw_data=raw_data,
|
||||
discovery_method="shodan_hostname_search"
|
||||
)
|
||||
|
||||
# Also create relationships to other hostnames on the same IP
|
||||
for hostname in hostnames:
|
||||
if hostname != domain and _is_valid_domain(hostname):
|
||||
hostname_raw_data = {
|
||||
'shared_ip': ip_address,
|
||||
'all_hostnames': hostnames,
|
||||
'discovery_context': 'shared_hosting'
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
domain,
|
||||
hostname,
|
||||
'passive_dns', # Shared hosting relationship
|
||||
0.6, # Lower confidence for shared hosting
|
||||
hostname_raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=hostname,
|
||||
relationship_type='passive_dns',
|
||||
confidence_score=0.6,
|
||||
raw_data=hostname_raw_data,
|
||||
discovery_method="shodan_shared_hosting"
|
||||
)
|
||||
|
||||
except json.JSONDecodeError as e:
|
||||
self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
|
||||
|
||||
return relationships
|
||||
|
    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
    def query_ip(self, ip: str) -> ProviderResult:
        """
        Query Shodan for information about an IP address.
        Query Shodan for information about an IP address, with caching of processed data.

        Args:
            ip: IP address to investigate

        Returns:
            List of relationships discovered from Shodan IP data
            ProviderResult containing discovered relationships and attributes
        """
        if not _is_valid_ip(ip) or not self.is_available():
            return []
            return ProviderResult()

        relationships = []
        cache_file = self._get_cache_file_path(ip)
        cache_status = self._get_cache_status(cache_file)

        result = ProviderResult()

        try:
            # Query Shodan host information
            url = f"{self.base_url}/shodan/host/{ip}"
            params = {'key': self.api_key}
            if cache_status == "fresh":
                result = self._load_from_cache(cache_file)
                self.logger.logger.info(f"Using cached Shodan data for {ip}")
            else:  # "stale" or "not_found"
                url = f"{self.base_url}/shodan/host/{ip}"
                params = {'key': self.api_key}
                response = self.make_request(url, method="GET", params=params, target_indicator=ip)

            response = self.make_request(url, method="GET", params=params, target_indicator=ip)
                if response and response.status_code == 200:
                    data = response.json()
                    # Process the data into ProviderResult BEFORE caching
                    result = self._process_shodan_data(ip, data)
                    self._save_to_cache(cache_file, result, data)  # Save both result and raw data
                elif cache_status == "stale":
                    # If API fails on a stale cache, use the old data
                    result = self._load_from_cache(cache_file)

            if not response or response.status_code != 200:
                return []
        except requests.exceptions.RequestException as e:
            self.logger.logger.error(f"Shodan API query failed for {ip}: {e}")
            if cache_status == "stale":
                result = self._load_from_cache(cache_file)

            data = response.json()
        return result

            # Extract hostname relationships
            hostnames = data.get('hostnames', [])
            for hostname in hostnames:
                if _is_valid_domain(hostname):
                    raw_data = {
                        'ip_address': ip,
                        'hostname': hostname,
                        'country': data.get('country_name', ''),
                        'city': data.get('city', ''),
                        'isp': data.get('isp', ''),
                        'org': data.get('org', ''),
                        'asn': data.get('asn', ''),
                        'ports': data.get('ports', []),
                        'last_update': data.get('last_update', ''),
                        'os': data.get('os', '')
                    }
    def _load_from_cache(self, cache_file_path: Path) -> ProviderResult:
        """Load processed Shodan data from a cache file."""
        try:
            with open(cache_file_path, 'r') as f:
                cache_content = json.load(f)

                    relationships.append((
                        ip,
                        hostname,
                        'a_record',  # IP resolves to hostname
                        0.8,
                        raw_data
                    ))
            result = ProviderResult()

                    self.log_relationship_discovery(
                        source_node=ip,
                        target_node=hostname,
                        relationship_type='a_record',
                        confidence_score=0.8,
                        raw_data=raw_data,
                        discovery_method="shodan_host_lookup"
                    )
            # Reconstruct relationships
            for rel_data in cache_content.get("relationships", []):
                result.add_relationship(
                    source_node=rel_data["source_node"],
                    target_node=rel_data["target_node"],
                    relationship_type=rel_data["relationship_type"],
                    provider=rel_data["provider"],
                    confidence=rel_data["confidence"],
                    raw_data=rel_data.get("raw_data", {})
                )

            # Extract ASN relationship if available
            asn = data.get('asn')
            if asn:
                # Ensure the ASN starts with "AS"
                if isinstance(asn, str) and asn.startswith('AS'):
                    asn_name = asn
                    asn_number = asn[2:]
                else:
                    asn_name = f"AS{asn}"
                    asn_number = str(asn)
            # Reconstruct attributes
            for attr_data in cache_content.get("attributes", []):
                result.add_attribute(
                    target_node=attr_data["target_node"],
                    name=attr_data["name"],
                    value=attr_data["value"],
                    attr_type=attr_data["type"],
                    provider=attr_data["provider"],
                    confidence=attr_data["confidence"],
                    metadata=attr_data.get("metadata", {})
                )

                asn_raw_data = {
                    'ip_address': ip,
                    'asn': asn_number,
                    'isp': data.get('isp', ''),
                    'org': data.get('org', '')
                }
            return result

                relationships.append((
                    ip,
                    asn_name,
                    'asn_membership',
                    0.7,
                    asn_raw_data
                ))
        except (json.JSONDecodeError, FileNotFoundError, KeyError):
            return ProviderResult()

|
||||
"""Save processed Shodan data to a cache file."""
|
||||
try:
|
||||
cache_data = {
|
||||
"last_upstream_query": datetime.now(timezone.utc).isoformat(),
|
||||
"raw_data": raw_data, # Preserve original for forensic purposes
|
||||
"relationships": [
|
||||
{
|
||||
"source_node": rel.source_node,
|
||||
"target_node": rel.target_node,
|
||||
"relationship_type": rel.relationship_type,
|
||||
"confidence": rel.confidence,
|
||||
"provider": rel.provider,
|
||||
"raw_data": rel.raw_data
|
||||
} for rel in result.relationships
|
||||
],
|
||||
"attributes": [
|
||||
{
|
||||
"target_node": attr.target_node,
|
||||
"name": attr.name,
|
||||
"value": attr.value,
|
||||
"type": attr.type,
|
||||
"provider": attr.provider,
|
||||
"confidence": attr.confidence,
|
||||
"metadata": attr.metadata
|
||||
} for attr in result.attributes
|
||||
]
|
||||
}
|
||||
with open(cache_file_path, 'w') as f:
|
||||
json.dump(cache_data, f, separators=(',', ':'), default=str)
|
||||
except Exception as e:
|
||||
self.logger.logger.warning(f"Failed to save Shodan cache for {cache_file_path.name}: {e}")
|
||||
|
||||
    def _process_shodan_data(self, ip: str, data: Dict[str, Any]) -> ProviderResult:
        """
        Process Shodan data to extract relationships and attributes.

        Args:
            ip: IP address queried
            data: Raw Shodan response data

        Returns:
            ProviderResult with relationships and attributes
        """
        result = ProviderResult()

        for key, value in data.items():
            if key == 'hostnames':
                for hostname in value:
                    if _is_valid_domain(hostname):
                        result.add_relationship(
                            source_node=ip,
                            target_node=hostname,
                            relationship_type='a_record',
                            provider=self.name,
                            confidence=0.8,
                            raw_data=data
                        )
                        self.log_relationship_discovery(
                            source_node=ip,
                            target_node=hostname,
                            relationship_type='a_record',
                            confidence_score=0.8,
                            raw_data=data,
                            discovery_method="shodan_host_lookup"
                        )
            elif key == 'asn':
                asn_name = f"AS{value[2:]}" if isinstance(value, str) and value.startswith('AS') else f"AS{value}"
                result.add_relationship(
                    source_node=ip,
                    target_node=asn_name,
                    relationship_type='asn_membership',
                    provider=self.name,
                    confidence=0.7,
                    raw_data=data
                )
                self.log_relationship_discovery(
                    source_node=ip,
                    target_node=asn_name,
                    relationship_type='asn_membership',
                    confidence_score=0.7,
                    raw_data=asn_raw_data,
                    raw_data=data,
                    discovery_method="shodan_asn_lookup"
                )
            elif key == 'ports':
                for port in value:
                    result.add_attribute(
                        target_node=ip,
                        name='open_port',
                        value=port,
                        attr_type='network_info',
                        provider=self.name,
                        confidence=0.9
                    )
            elif isinstance(value, (str, int, float, bool)) and value is not None:
                result.add_attribute(
                    target_node=ip,
                    name=f"shodan_{key}",
                    value=value,
                    attr_type='shodan_info',
                    provider=self.name,
                    confidence=0.9
                )

        except json.JSONDecodeError as e:
            self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")

        return relationships

    def search_by_organization(self, org_name: str) -> List[Dict[str, Any]]:
        """
        Search Shodan for hosts belonging to a specific organization.

        Args:
            org_name: Organization name to search for

        Returns:
            List of host information dictionaries
        """
        if not self.is_available():
            return []

        try:
            search_query = f"org:\"{org_name}\""
            url = f"{self.base_url}/shodan/host/search"
            params = {
                'key': self.api_key,
                'query': search_query,
                'minify': True
            }

            response = self.make_request(url, method="GET", params=params, target_indicator=org_name)

            if response and response.status_code == 200:
                data = response.json()
                return data.get('matches', [])

        except Exception as e:
            self.logger.logger.error(f"Error searching Shodan by organization {org_name}: {e}")

        return []

    def get_host_services(self, ip: str) -> List[Dict[str, Any]]:
        """
        Get service information for a specific IP address.

        Args:
            ip: IP address to query

        Returns:
            List of service information dictionaries
        """
        if not _is_valid_ip(ip) or not self.is_available():
            return []

        try:
            url = f"{self.base_url}/shodan/host/{ip}"
            params = {'key': self.api_key}

            response = self.make_request(url, method="GET", params=params, target_indicator=ip)

            if response and response.status_code == 200:
                data = response.json()
                return data.get('data', [])  # Service banners

        except Exception as e:
            self.logger.logger.error(f"Error getting Shodan services for IP {ip}: {e}")

        return []
        return result
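To make the mapping above concrete, here is an invented, heavily trimmed Shodan host response and what `_process_shodan_data` would derive from it:

```python
# Invented sample response; only fields handled above are shown.
# 'provider' is assumed to be a configured ShodanProvider instance.
sample = {
    "hostnames": ["mail.example.com"],
    "asn": "AS15169",
    "ports": [25, 443],
    "org": "Example Org",
    "os": "Linux",
}
result = provider._process_shodan_data("203.0.113.10", sample)
# -> one 'a_record' relationship to mail.example.com (confidence 0.8)
# -> one 'asn_membership' relationship to AS15169 (confidence 0.7)
# -> 'open_port' attributes for 25 and 443, plus shodan_org / shodan_os attributes
```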
@@ -8,4 +8,3 @@ dnspython>=2.4.2
gunicorn
redis
python-dotenv
psycopg2-binary
static/css/main.css (1884 lines changed): file diff suppressed because it is too large.
@@ -1,7 +1,58 @@
|
||||
/**
|
||||
* Graph visualization module for DNSRecon
|
||||
* Handles network graph rendering using vis.js
|
||||
* Handles network graph rendering using vis.js with proper large entity node hiding
|
||||
* UPDATED: Now compatible with a strictly flat, unified data model for attributes.
|
||||
*/
|
||||
const contextMenuCSS = `
|
||||
.graph-context-menu {
|
||||
position: fixed;
|
||||
z-index: 1000;
|
||||
background: linear-gradient(135deg, #2a2a2a 0%, #1e1e1e 100%);
|
||||
border: 1px solid #444;
|
||||
border-radius: 6px;
|
||||
box-shadow: 0 8px 25px rgba(0,0,0,0.6);
|
||||
display: none;
|
||||
font-family: 'Roboto Mono', monospace;
|
||||
font-size: 0.9rem;
|
||||
color: #c7c7c7;
|
||||
min-width: 180px;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.graph-context-menu ul {
|
||||
list-style: none;
|
||||
padding: 0.5rem 0;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.graph-context-menu ul li {
|
||||
padding: 0.75rem 1rem;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s ease;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 0.5rem;
|
||||
}
|
||||
|
||||
.graph-context-menu ul li:hover {
|
||||
background: linear-gradient(135deg, #3a3a3a 0%, #2e2e2e 100%);
|
||||
color: #00ff41;
|
||||
}
|
||||
|
||||
.graph-context-menu .menu-icon {
|
||||
font-size: 0.9rem;
|
||||
width: 1.2rem;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.graph-context-menu ul li:first-child {
|
||||
border-top: none;
|
||||
}
|
||||
|
||||
.graph-context-menu ul li:last-child {
|
||||
border-bottom: none;
|
||||
}
|
||||
`;
|
||||
|
||||
class GraphManager {
|
||||
constructor(containerId) {
|
||||
@@ -12,6 +63,13 @@ class GraphManager {
|
||||
this.isInitialized = false;
|
||||
this.currentLayout = 'physics';
|
||||
this.nodeInfoPopup = null;
|
||||
this.contextMenu = null;
|
||||
this.history = [];
|
||||
this.filterPanel = null;
|
||||
this.initialTargetIds = new Set();
|
||||
// Track large entity members for proper hiding
|
||||
this.largeEntityMembers = new Set();
|
||||
this.isScanning = false;
|
||||
|
||||
this.options = {
|
||||
nodes: {
|
||||
@@ -115,8 +173,14 @@ class GraphManager {
|
||||
randomSeed: 2
|
||||
}
|
||||
};
|
||||
|
||||
if (typeof document !== 'undefined') {
|
||||
const style = document.createElement('style');
|
||||
style.textContent = contextMenuCSS;
|
||||
document.head.appendChild(style);
|
||||
}
|
||||
this.createNodeInfoPopup();
|
||||
this.createContextMenu();
|
||||
document.body.addEventListener('click', () => this.hideContextMenu());
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -129,6 +193,30 @@ class GraphManager {
|
||||
document.body.appendChild(this.nodeInfoPopup);
|
||||
}
|
||||
|
||||
/**
|
||||
* Create context menu
|
||||
*/
|
||||
createContextMenu() {
|
||||
// Remove existing context menu if it exists
const existing = document.getElementById('graph-context-menu');
if (existing) {
    existing.remove();
}

this.contextMenu = document.createElement('div');
this.contextMenu.id = 'graph-context-menu';
this.contextMenu.className = 'graph-context-menu';
this.contextMenu.style.display = 'none';

// Prevent body click listener from firing when clicking the menu itself
this.contextMenu.addEventListener('click', (event) => {
    event.stopPropagation();
});

document.body.appendChild(this.contextMenu);
console.log('Context menu created and added to body');
}

/**
 * Initialize the network graph
 */
@@ -155,6 +243,7 @@ class GraphManager {

// Add graph controls
this.addGraphControls();
this.addFilterPanel();

console.log('Graph initialized successfully');
} catch (error) {
@@ -173,6 +262,8 @@ class GraphManager {
<button class="graph-control-btn" id="graph-fit" title="Fit to Screen">[FIT]</button>
<button class="graph-control-btn" id="graph-physics" title="Toggle Physics">[PHYSICS]</button>
<button class="graph-control-btn" id="graph-cluster" title="Cluster Nodes">[CLUSTER]</button>
<button class="graph-control-btn" id="graph-unhide" title="Unhide All">[UNHIDE]</button>
<button class="graph-control-btn" id="graph-revert" title="Revert Last Action">[REVERT]</button>
`;

this.container.appendChild(controlsContainer);
@@ -181,6 +272,14 @@ class GraphManager {
document.getElementById('graph-fit').addEventListener('click', () => this.fitView());
document.getElementById('graph-physics').addEventListener('click', () => this.togglePhysics());
document.getElementById('graph-cluster').addEventListener('click', () => this.toggleClustering());
document.getElementById('graph-unhide').addEventListener('click', () => this.unhideAll());
document.getElementById('graph-revert').addEventListener('click', () => this.revertLastAction());
}

addFilterPanel() {
    this.filterPanel = document.createElement('div');
    this.filterPanel.className = 'graph-filter-panel';
    this.container.appendChild(this.filterPanel);
}

/**
@@ -189,8 +288,31 @@ class GraphManager {
setupNetworkEvents() {
    if (!this.network) return;

    // FIXED: Right-click context menu
    this.container.addEventListener('contextmenu', (event) => {
        event.preventDefault();
        console.log('Right-click detected at:', event.offsetX, event.offsetY);

        // Get coordinates relative to the canvas
        const pointer = {
            x: event.offsetX,
            y: event.offsetY
        };

        const nodeId = this.network.getNodeAt(pointer);
        console.log('Node at pointer:', nodeId);

        if (nodeId) {
            // Pass the original client event for positioning
            this.showContextMenu(nodeId, event);
        } else {
            this.hideContextMenu();
        }
    });

    // Node click event with details
    this.network.on('click', (params) => {
        this.hideContextMenu();
        if (params.nodes.length > 0) {
            const nodeId = params.nodes[0];
            if (this.network.isCluster(nodeId)) {
@@ -216,10 +338,6 @@ class GraphManager {
        }
    });

    this.network.on('oncontext', (params) => {
        params.event.preventDefault();
    });

    // Stabilization events with progress
    this.network.on('stabilizationProgress', (params) => {
        const progress = params.iterations / params.total;
@@ -235,6 +353,13 @@ class GraphManager {
        console.log('Selected nodes:', params.nodes);
        console.log('Selected edges:', params.edges);
    });

    // Click away to hide context menu
    document.addEventListener('click', (e) => {
        if (!this.contextMenu.contains(e.target)) {
            this.hideContextMenu();
        }
    });
}
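A note on the two coordinate spaces in the contextmenu handler above: vis-network's getNodeAt() expects coordinates relative to the graph container, while the menu itself is later positioned in viewport coordinates. A minimal sketch of the distinction (assumes `event` is a MouseEvent fired on the graph container):

// Container-relative point, used for hit-testing with network.getNodeAt().
const canvasPoint = { x: event.offsetX, y: event.offsetY };
// Viewport point, used for positioning the menu via style.left/style.top.
const screenPoint = { x: event.clientX, y: event.clientY };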
/**
@@ -252,21 +377,32 @@ class GraphManager {
    this.initialize();
}

this.largeEntityMembers.clear();
const largeEntityMap = new Map();

graphData.nodes.forEach(node => {
    if (node.type === 'large_entity' && node.attributes && Array.isArray(node.attributes.nodes)) {
        node.attributes.nodes.forEach(nodeId => {
            largeEntityMap.set(nodeId, node.id);
        });
    if (node.type === 'large_entity' && node.attributes) {
        // UPDATED: Handle unified data model - look for 'nodes' attribute in the attributes list
        const nodesAttribute = this.findAttributeByName(node.attributes, 'nodes');
        if (nodesAttribute && Array.isArray(nodesAttribute.value)) {
            nodesAttribute.value.forEach(nodeId => {
                largeEntityMap.set(nodeId, node.id);
                this.largeEntityMembers.add(nodeId);
            });
        }
    }
});

const processedNodes = graphData.nodes.map(node => {
    const processed = this.processNode(node);
    if (largeEntityMap.has(node.id)) {
        processed.hidden = true;
    }
    return processed;
const filteredNodes = graphData.nodes.filter(node => {
    // Only include nodes that are NOT members of large entities, but always include the container itself
    return !this.largeEntityMembers.has(node.id) || node.type === 'large_entity';
});

console.log(`Filtered ${graphData.nodes.length - filteredNodes.length} large entity member nodes from visualization`);

// Process only the filtered nodes
const processedNodes = filteredNodes.map(node => {
    return this.processNode(node);
});

const mergedEdges = {};
@@ -311,6 +447,10 @@ class GraphManager {
this.nodes.update(processedNodes);
this.edges.update(processedEdges);

// After data is loaded, apply filters
this.updateFilterControls();
this.applyAllFilters();

// Highlight new additions briefly
if (newNodes.length > 0 || newEdges.length > 0) {
    setTimeout(() => this.highlightNewElements(newNodes, newEdges), 100);
@@ -322,6 +462,8 @@ class GraphManager {
}

console.log(`Graph updated: ${processedNodes.length} nodes, ${processedEdges.length} edges (${newNodes.length} new nodes, ${newEdges.length} new edges)`);
console.log(`Large entity members hidden: ${this.largeEntityMembers.size}`);

} catch (error) {
    console.error('Failed to update graph:', error);
    this.showError('Failed to update visualization');
@@ -329,8 +471,21 @@ class GraphManager {
}

/**
 * Process node data with styling and metadata
 * @param {Object} node - Raw node data
 * UPDATED: Helper method to find an attribute by name in the standardized attributes list
 * @param {Array} attributes - List of StandardAttribute objects
 * @param {string} name - Attribute name to find
 * @returns {Object|null} The attribute object if found, null otherwise
 */
findAttributeByName(attributes, name) {
    if (!Array.isArray(attributes)) {
        return null;
    }
    return attributes.find(attr => attr.name === name) || null;
}
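For reference, a minimal sketch of the lookup this helper performs; the attribute shape ({ name, value }) is inferred from the usage above and from the 'nodes' handling in updateGraph:

// Hypothetical StandardAttribute list as delivered by the unified data model.
const attributes = [
    { name: 'nodes', value: ['a.example.com', 'b.example.com'] },
    { name: 'source', value: 'crt.sh' }
];

// Equivalent of findAttributeByName(attributes, 'nodes').
const nodesAttribute = attributes.find(attr => attr.name === 'nodes') || null;
console.log(nodesAttribute.value.length); // 2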
/**
 * UPDATED: Process node data with styling and metadata for the flat data model
 * @param {Object} node - Raw node data with standardized attributes
 * @returns {Object} Processed node data
 */
processNode(node) {
@@ -341,7 +496,7 @@ class GraphManager {
    size: this.getNodeSize(node.type),
    borderColor: this.getNodeBorderColor(node.type),
    shape: this.getNodeShape(node.type),
    attributes: node.attributes || {},
    attributes: node.attributes || [], // Keep as standardized attributes list
    description: node.description || '',
    metadata: node.metadata || {},
    type: node.type,
@@ -354,13 +509,6 @@ class GraphManager {
    processedNode.borderWidth = Math.max(2, Math.floor(node.confidence * 5));
}

// Style based on certificate validity
if (node.type === 'domain') {
    if (node.attributes && node.attributes.certificates && node.attributes.certificates.has_valid_cert === false) {
        processedNode.color = { background: '#888888', border: '#666666' };
    }
}

// Handle merged correlation objects (similar to large entities)
if (node.type === 'correlation_object') {
    const metadata = node.metadata || {};
@@ -408,8 +556,6 @@ class GraphManager {
    }
};


return processedEdge;
}

@@ -456,7 +602,6 @@ class GraphManager {
    return colors[nodeType] || '#ffffff';
}


/**
 * Get node border color based on type
 * @param {string} nodeType - Node type
@@ -846,6 +991,9 @@ class GraphManager {
clear() {
    this.nodes.clear();
    this.edges.clear();
    this.history = [];
    this.largeEntityMembers.clear(); // Clear large entity tracking
    this.clearInitialTargets();

    // Show placeholder
    const placeholder = this.container.querySelector('.graph-placeholder');
@@ -866,59 +1014,577 @@ class GraphManager {
    }
}

/**
 * @param {Set} excludedNodeIds - Node IDs to exclude from analysis (for simulation)
 * @param {Set} excludedEdgeTypes - Edge types to exclude from traversal
 * @param {Set} excludedNodeTypes - Node types to exclude from traversal
 * @returns {Object} Analysis results with reachable/unreachable nodes
 */
analyzeGraphReachability(excludedNodeIds = new Set(), excludedEdgeTypes = new Set(), excludedNodeTypes = new Set()) {
    console.log("Performing comprehensive reachability analysis...");

    const analysis = {
        reachableNodes: new Set(),
        unreachableNodes: new Set(),
        isolatedClusters: [],
        affectedNodes: new Set()
    };

    if (this.nodes.length === 0) return analysis;

    // Build adjacency list excluding specified elements
    const adjacencyList = {};
    this.nodes.getIds().forEach(id => {
        if (!excludedNodeIds.has(id)) {
            adjacencyList[id] = [];
        }
    });

    this.edges.forEach(edge => {
        const edgeType = edge.metadata?.relationship_type || '';
        if (!excludedEdgeTypes.has(edgeType) &&
            !excludedNodeIds.has(edge.from) &&
            !excludedNodeIds.has(edge.to)) {

            if (adjacencyList[edge.from]) {
                adjacencyList[edge.from].push(edge.to);
            }
        }
    });

    // BFS traversal from initial targets
    const traversalQueue = [];

    // Start from initial targets that aren't excluded
    this.initialTargetIds.forEach(rootId => {
        if (!excludedNodeIds.has(rootId)) {
            const node = this.nodes.get(rootId);
            if (node && !excludedNodeTypes.has(node.type)) {
                if (!analysis.reachableNodes.has(rootId)) {
                    traversalQueue.push(rootId);
                    analysis.reachableNodes.add(rootId);
                }
            }
        }
    });

    // BFS to find all reachable nodes
    let queueIndex = 0;
    while (queueIndex < traversalQueue.length) {
        const currentNode = traversalQueue[queueIndex++];

        for (const neighbor of (adjacencyList[currentNode] || [])) {
            if (!analysis.reachableNodes.has(neighbor)) {
                const node = this.nodes.get(neighbor);
                if (node && !excludedNodeTypes.has(node.type)) {
                    analysis.reachableNodes.add(neighbor);
                    traversalQueue.push(neighbor);
                }
            }
        }
    }

    // Identify unreachable nodes (maintaining forensic integrity)
    Object.keys(adjacencyList).forEach(nodeId => {
        if (!analysis.reachableNodes.has(nodeId)) {
            analysis.unreachableNodes.add(nodeId);
        }
    });

    // Find isolated clusters among unreachable nodes
    analysis.isolatedClusters = this.findIsolatedClusters(
        Array.from(analysis.unreachableNodes),
        adjacencyList
    );

    console.log(`Reachability analysis complete:`, {
        reachable: analysis.reachableNodes.size,
        unreachable: analysis.unreachableNodes.size,
        clusters: analysis.isolatedClusters.length
    });

    return analysis;
}
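Stripped of the exclusion bookkeeping, the traversal above is a plain BFS over the filtered adjacency list. A self-contained sketch of the same logic (standalone data, not tied to the class):

// Standalone BFS mirroring the reachability pass: anything not reached
// from the initial targets ends up in the unreachable set.
function reachableFrom(roots, adjacencyList) {
    const reachable = new Set(roots);
    const queue = [...roots];
    while (queue.length > 0) {
        const current = queue.shift();
        for (const neighbor of (adjacencyList[current] || [])) {
            if (!reachable.has(neighbor)) {
                reachable.add(neighbor);
                queue.push(neighbor);
            }
        }
    }
    return reachable;
}

const adjacency = { 'example.com': ['93.184.216.34'], '93.184.216.34': [] };
console.log(reachableFrom(['example.com'], adjacency));
// Set(2) { 'example.com', '93.184.216.34' }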
/**
 * Get network statistics
 * @returns {Object} Statistics object
 * Find isolated clusters within a set of nodes
 * Used for forensic analysis to identify disconnected subgraphs
 */
findIsolatedClusters(nodeIds, adjacencyList) {
    const visited = new Set();
    const clusters = [];

    for (const nodeId of nodeIds) {
        if (!visited.has(nodeId)) {
            const cluster = [];
            const stack = [nodeId];

            while (stack.length > 0) {
                const current = stack.pop();
                if (!visited.has(current)) {
                    visited.add(current);
                    cluster.push(current);

                    // Add unvisited neighbors within the unreachable set
                    for (const neighbor of (adjacencyList[current] || [])) {
                        if (nodeIds.includes(neighbor) && !visited.has(neighbor)) {
                            stack.push(neighbor);
                        }
                    }
                }
            }

            if (cluster.length > 0) {
                clusters.push(cluster);
            }
        }
    }

    return clusters;
}

/**
 * ENHANCED: Get comprehensive graph statistics with forensic information
 * Updates the existing getStatistics() method
 */
getStatistics() {
    return {
    const basicStats = {
        nodeCount: this.nodes.length,
        edgeCount: this.edges.length,
        //isStabilized: this.network ? this.network.isStabilized() : false
        largeEntityMembersHidden: this.largeEntityMembers.size
    };

    // Add forensic statistics
    const visibleNodes = this.nodes.get({ filter: node => !node.hidden });
    const hiddenNodes = this.nodes.get({ filter: node => node.hidden });

    return {
        ...basicStats,
        forensicStatistics: {
            visibleNodes: visibleNodes.length,
            hiddenNodes: hiddenNodes.length,
            initialTargets: this.initialTargetIds.size,
            integrityStatus: visibleNodes.length > 0 && this.initialTargetIds.size > 0 ? 'INTACT' : 'COMPROMISED'
        }
    };
}
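The merged statistics object then has roughly the following shape; the values below are illustrative, while the keys come from the method above:

// Illustrative getStatistics() result for a small scan.
const exampleStats = {
    nodeCount: 42,
    edgeCount: 57,
    largeEntityMembersHidden: 12,
    forensicStatistics: {
        visibleNodes: 30,
        hiddenNodes: 12,
        initialTargets: 1,
        integrityStatus: 'INTACT'
    }
};
console.log(exampleStats.forensicStatistics.integrityStatus); // 'INTACT'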
addInitialTarget(targetId) {
    this.initialTargetIds.add(targetId);
    console.log("Initial targets:", this.initialTargetIds);
}

clearInitialTargets() {
    this.initialTargetIds.clear();
    console.log("Initial targets cleared.");
}

updateFilterControls() {
    if (!this.filterPanel) return;
    const nodeTypes = new Set(this.nodes.get().map(n => n.type));
    const edgeTypes = new Set(this.edges.get().map(e => e.metadata.relationship_type));

    // Wrap both columns in a single container with vertical layout
    let filterHTML = '<div class="filter-container">';

    // Nodes section
    filterHTML += '<div class="filter-column"><h4>Nodes</h4><div class="checkbox-group">';
    nodeTypes.forEach(type => {
        const label = type === 'correlation_object' ? 'latent correlations' : type;
        const isChecked = type !== 'correlation_object';
        filterHTML += `<label><input type="checkbox" data-filter-type="node" value="${type}" ${isChecked ? 'checked' : ''}> ${label}</label>`;
    });
    filterHTML += '</div></div>';

    // Edges section
    filterHTML += '<div class="filter-column"><h4>Edges</h4><div class="checkbox-group">';
    edgeTypes.forEach(type => {
        filterHTML += `<label><input type="checkbox" data-filter-type="edge" value="${type}" checked> ${type}</label>`;
    });
    filterHTML += '</div></div>';

    filterHTML += '</div>'; // Close filter-container
    this.filterPanel.innerHTML = filterHTML;

    this.filterPanel.querySelectorAll('input[type="checkbox"]').forEach(checkbox => {
        checkbox.addEventListener('change', () => this.applyAllFilters());
    });
}
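A standalone rendering of the same checkbox markup, for reference (the node types here are illustrative; the real method derives them from the vis DataSet):

const nodeTypes = new Set(['domain', 'ip', 'correlation_object']);
let html = '<div class="filter-column"><h4>Nodes</h4><div class="checkbox-group">';
nodeTypes.forEach(type => {
    const label = type === 'correlation_object' ? 'latent correlations' : type;
    const isChecked = type !== 'correlation_object';
    html += `<label><input type="checkbox" data-filter-type="node" value="${type}" ${isChecked ? 'checked' : ''}> ${label}</label>`;
});
html += '</div></div>';
console.log(html); // correlation objects start unchecked, everything else checked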
/**
 * ENHANCED: Apply filters using consolidated reachability analysis
 * Replaces the existing applyAllFilters() method
 */
applyAllFilters() {
    console.log("Applying filters with enhanced reachability analysis...");
    if (this.nodes.length === 0) return;

    // Get filter criteria from UI
    const excludedNodeTypes = new Set();
    this.filterPanel?.querySelectorAll('input[data-filter-type="node"]:not(:checked)').forEach(cb => {
        excludedNodeTypes.add(cb.value);
    });

    const excludedEdgeTypes = new Set();
    this.filterPanel?.querySelectorAll('input[data-filter-type="edge"]:not(:checked)').forEach(cb => {
        excludedEdgeTypes.add(cb.value);
    });

    // Perform comprehensive analysis
    const analysis = this.analyzeGraphReachability(new Set(), excludedEdgeTypes, excludedNodeTypes);

    // Apply visibility updates
    const nodeUpdates = this.nodes.map(node => ({
        id: node.id,
        hidden: !analysis.reachableNodes.has(node.id)
    }));

    const edgeUpdates = this.edges.map(edge => ({
        id: edge.id,
        hidden: excludedEdgeTypes.has(edge.metadata?.relationship_type || '') ||
            !analysis.reachableNodes.has(edge.from) ||
            !analysis.reachableNodes.has(edge.to)
    }));

    this.nodes.update(nodeUpdates);
    this.edges.update(edgeUpdates);

    console.log(`Enhanced filters applied. Visible nodes: ${analysis.reachableNodes.size}`);
}

/**
 * ENHANCED: Hide node with forensic integrity using reachability analysis
 * Replaces the existing hideNodeAndOrphans() method
 */
hideNodeWithReachabilityAnalysis(nodeId) {
    console.log(`Hiding node ${nodeId} with reachability analysis...`);

    // Simulate hiding this node and analyze impact
    const excludedNodes = new Set([nodeId]);
    const analysis = this.analyzeGraphReachability(excludedNodes);

    // Nodes that will become unreachable (should be hidden)
    const nodesToHide = [nodeId, ...Array.from(analysis.unreachableNodes)];

    // Store history for potential revert
    const historyData = {
        nodeIds: nodesToHide,
        operation: 'hide_with_reachability',
        timestamp: Date.now()
    };

    // Apply hiding with forensic documentation
    const updates = nodesToHide.map(id => ({
        id: id,
        hidden: true,
        forensicNote: `Hidden due to reachability analysis from ${nodeId}`
    }));

    this.nodes.update(updates);
    this.addToHistory('hide', historyData);

    console.log(`Forensic hide operation: ${nodesToHide.length} nodes hidden`, {
        originalTarget: nodeId,
        cascadeNodes: nodesToHide.length - 1,
        isolatedClusters: analysis.isolatedClusters.length
    });

    return {
        hiddenNodes: nodesToHide,
        isolatedClusters: analysis.isolatedClusters
    };
}
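Hypothetical call site (assumes a constructed GraphManager instance named graphManager; the return shape comes from the method above):

const result = graphManager.hideNodeWithReachabilityAnalysis('example.com');
console.log(`${result.hiddenNodes.length} nodes hidden,`,
    `${result.isolatedClusters.length} isolated clusters detected`);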
/**
 * Apply filters to the graph
 * @param {string} nodeType - The type of node to show ('all' for no filter)
 * @param {number} minConfidence - The minimum confidence score for edges to be visible
 * ENHANCED: Delete node with forensic integrity using reachability analysis
 * Replaces the existing deleteNodeAndOrphans() method
 */
applyFilters(nodeType, minConfidence) {
    console.log(`Applying filters: nodeType=${nodeType}, minConfidence=${minConfidence}`);
async deleteNodeWithReachabilityAnalysis(nodeId) {
    console.log(`Deleting node ${nodeId} with reachability analysis...`);

    const nodeUpdates = [];
    const edgeUpdates = [];
    // Simulate deletion and analyze impact
    const excludedNodes = new Set([nodeId]);
    const analysis = this.analyzeGraphReachability(excludedNodes);

    const allNodes = this.nodes.get({ returnType: 'Object' });
    const allEdges = this.edges.get();
    // Nodes that will become unreachable (should be deleted)
    const nodesToDelete = [nodeId, ...Array.from(analysis.unreachableNodes)];

    // Determine which nodes are visible based on the nodeType filter
    for (const nodeId in allNodes) {
        const node = allNodes[nodeId];
        const isVisible = (nodeType === 'all' || node.type === nodeType);
        nodeUpdates.push({ id: nodeId, hidden: !isVisible });
    // Collect forensic data before deletion
    const historyData = {
        nodes: nodesToDelete.map(id => this.nodes.get(id)).filter(Boolean),
        edges: [],
        operation: 'delete_with_reachability',
        timestamp: Date.now(),
        forensicAnalysis: {
            originalTarget: nodeId,
            cascadeNodes: nodesToDelete.length - 1,
            isolatedClusters: analysis.isolatedClusters.length,
            clusterSizes: analysis.isolatedClusters.map(cluster => cluster.length)
        }
    };

    // Collect affected edges
    nodesToDelete.forEach(id => {
        const connectedEdgeIds = this.network.getConnectedEdges(id);
        const edges = this.edges.get(connectedEdgeIds);
        historyData.edges.push(...edges);
    });

    // Remove duplicates from edges
    historyData.edges = Array.from(new Map(historyData.edges.map(e => [e.id, e])).values());

    // Perform backend deletion with forensic logging
    let operationFailed = false;

    for (const targetNodeId of nodesToDelete) {
        try {
            const response = await fetch(`/api/graph/node/${targetNodeId}`, {
                method: 'DELETE',
                headers: {
                    'Content-Type': 'application/json',
                },
                body: JSON.stringify({
                    forensicContext: {
                        operation: 'reachability_cascade_delete',
                        originalTarget: nodeId,
                        analysisTimestamp: historyData.timestamp
                    }
                })
            });

            const result = await response.json();
            if (!result.success) {
                console.error(`Backend deletion failed for node ${targetNodeId}:`, result.error);
                operationFailed = true;
                break;
            }

            console.log(`Node ${targetNodeId} deleted from backend with forensic context`);
            this.nodes.remove({ id: targetNodeId });

        } catch (error) {
            console.error(`API error during deletion of node ${targetNodeId}:`, error);
            operationFailed = true;
            break;
        }
    }

    // Update nodes first to determine edge visibility
    this.nodes.update(nodeUpdates);
    // Handle operation results
    if (!operationFailed) {
        this.addToHistory('delete', historyData);
        console.log(`Forensic delete operation completed:`, historyData.forensicAnalysis);

    // Determine which edges are visible based on confidence and connected nodes
    for (const edge of allEdges) {
        const sourceNode = this.nodes.get(edge.from);
        const targetNode = this.nodes.get(edge.to);
        const confidence = edge.metadata ? edge.metadata.confidence_score : 0;
        return {
            success: true,
            deletedNodes: nodesToDelete,
            forensicAnalysis: historyData.forensicAnalysis
        };
    } else {
        // Revert UI changes if backend operations failed - use update instead of add
        console.log("Reverting UI changes due to backend failure");
        this.nodes.update(historyData.nodes);
        this.edges.update(historyData.edges);

        const isVisible = confidence >= minConfidence &&
            sourceNode && !sourceNode.hidden &&
            targetNode && !targetNode.hidden;

        edgeUpdates.push({ id: edge.id, hidden: !isVisible });
        return {
            success: false,
            error: "Backend deletion failed, UI reverted"
        };
    }

    this.edges.update(edgeUpdates);

    console.log('Filters applied.');
}
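Hypothetical call site for the cascade delete; the method is async because every cascade member is confirmed by the backend before it is removed locally:

// Sketch only: wiring inside an async UI handler is assumed.
async function onDeleteNode(graphManager, nodeId) {
    const outcome = await graphManager.deleteNodeWithReachabilityAnalysis(nodeId);
    if (outcome.success) {
        console.log(`Deleted ${outcome.deletedNodes.length} nodes (cascade included)`);
    } else {
        console.error(outcome.error); // the method already reverted the UI
    }
}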
/**
 * Show context menu for a node
 * @param {string} nodeId - The ID of the node
 * @param {Event} event - The contextmenu event
 */
showContextMenu(nodeId, event) {
    console.log('Showing context menu for node:', nodeId);
    const node = this.nodes.get(nodeId);

    // Create menu items
    let menuItems = `
        <ul>
            <li data-action="focus" data-node-id="${nodeId}">
                <span class="menu-icon">🎯</span>
                <span>Focus on Node</span>
            </li>
    `;

    // Add "Iterate Scan" option only for domain or IP nodes
    if (node && (node.type === 'domain' || node.type === 'ip')) {
        const disabled = this.isScanning ? 'disabled' : ''; // Check if scanning
        const title = this.isScanning ? 'A scan is already in progress' : 'Iterate Scan (Add to Graph)'; // Add a title for disabled state
        menuItems += `
            <li data-action="iterate" data-node-id="${nodeId}" ${disabled} title="${title}">
                <span class="menu-icon">➕</span>
                <span>Iterate Scan (Add to Graph)</span>
            </li>
        `;
    }

    menuItems += `
            <li data-action="hide" data-node-id="${nodeId}">
                <span class="menu-icon">👁️‍🗨️</span>
                <span>Hide Node</span>
            </li>
            <li data-action="delete" data-node-id="${nodeId}">
                <span class="menu-icon">🗑️</span>
                <span>Delete Node</span>
            </li>
            <li data-action="details" data-node-id="${nodeId}">
                <span class="menu-icon">ℹ️</span>
                <span>Show Details</span>
            </li>
        </ul>
    `;

    this.contextMenu.innerHTML = menuItems;

    // Position the menu
    this.contextMenu.style.left = `${event.clientX}px`;
    this.contextMenu.style.top = `${event.clientY}px`;
    this.contextMenu.style.display = 'block';

    // Ensure menu stays within viewport
    const rect = this.contextMenu.getBoundingClientRect();
    if (rect.right > window.innerWidth) {
        this.contextMenu.style.left = `${event.clientX - rect.width}px`;
    }
    if (rect.bottom > window.innerHeight) {
        this.contextMenu.style.top = `${event.clientY - rect.height}px`;
    }

    // Add event listeners to menu items
    this.contextMenu.querySelectorAll('li').forEach(item => {
        item.addEventListener('click', (e) => {
            if (e.currentTarget.hasAttribute('disabled')) { // Prevent action if disabled
                e.stopPropagation();
                return;
            }
            e.stopPropagation();
            const action = e.currentTarget.dataset.action;
            const nodeId = e.currentTarget.dataset.nodeId;
            console.log('Context menu action:', action, 'for node:', nodeId);
            this.performContextMenuAction(action, nodeId);
            this.hideContextMenu();
        });
    });
}

/**
 * Hide the context menu
 */
hideContextMenu() {
    if (this.contextMenu) {
        this.contextMenu.style.display = 'none';
    }
}
/**
 * UPDATED: Enhanced context menu actions using new methods
 * Updates the existing performContextMenuAction() method
 */
performContextMenuAction(action, nodeId) {
    console.log('Performing enhanced action:', action, 'on node:', nodeId);

    switch (action) {
        case 'focus':
            this.focusOnNode(nodeId);
            break;

        case 'iterate':
            const event = new CustomEvent('iterateScan', {
                detail: { nodeId }
            });
            document.dispatchEvent(event);
            break;

        case 'hide':
            // Use enhanced method with reachability analysis
            this.hideNodeWithReachabilityAnalysis(nodeId);
            break;

        case 'delete':
            // Use enhanced method with reachability analysis
            this.deleteNodeWithReachabilityAnalysis(nodeId);
            break;

        case 'details':
            const node = this.nodes.get(nodeId);
            if (node) {
                this.showNodeDetails(node);
            }
            break;

        default:
            console.warn('Unknown action:', action);
    }
}
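The 'iterate' branch talks to the rest of the application through a DOM CustomEvent rather than a direct call. A hypothetical consumer (the real handler lives in main.js, whose diff is suppressed below):

// Sketch of a listener for the 'iterateScan' event dispatched above.
document.addEventListener('iterateScan', (e) => {
    const { nodeId } = e.detail;
    console.log('Queueing iterative scan for', nodeId);
    // e.g. submit nodeId as a new scan target to the backend
});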
/**
 * Add an operation to the history stack
 * @param {string} type - The type of operation ('hide', 'delete')
 * @param {Object} data - The data needed to revert the operation
 */
addToHistory(type, data) {
    this.history.push({ type, data });
}
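For reference, the entries pushed onto this stack look roughly like this; the shapes are taken from the hide and delete methods above, and the concrete values are illustrative:

const hideEntry = {
    type: 'hide',
    data: {
        nodeIds: ['example.com', 'mail.example.com'],
        operation: 'hide_with_reachability',
        timestamp: Date.now()
    }
};
// 'delete' entries additionally carry the full node/edge objects and a
// forensicAnalysis block, so revertLastAction() can restore them after
// a successful POST to /api/graph/revert.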
/**
 * Revert the last action
 */
async revertLastAction() {
    const lastAction = this.history.pop();
    if (!lastAction) {
        console.log('No actions to revert.');
        return;
    }

    switch (lastAction.type) {
        case 'hide':
            // Revert hiding nodes by un-hiding them
            const updates = lastAction.data.nodeIds.map(id => ({ id: id, hidden: false }));
            this.nodes.update(updates);
            break;
        case 'delete':
            try {
                const response = await fetch('/api/graph/revert', {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json',
                    },
                    body: JSON.stringify(lastAction)
                });
                const result = await response.json();

                if (result.success) {
                    console.log('Delete action reverted successfully on backend.');
                    // Re-add all nodes and edges from the history to the local view - use update instead of add
                    this.nodes.update(lastAction.data.nodes);
                    this.edges.update(lastAction.data.edges);
                } else {
                    console.error('Failed to revert delete action on backend:', result.error);
                    // Push the action back onto the history stack if the API call failed
                    this.history.push(lastAction);
                }
            } catch (error) {
                console.error('Error during revert API call:', error);
                this.history.push(lastAction);
            }
            break;
    }
}

/**
 * Unhide all hidden nodes
 */
unhideAll() {
    const allNodes = this.nodes.get({
        filter: (node) => node.hidden === true
    });
    const updates = allNodes.map(node => ({ id: node.id, hidden: false }));
    this.nodes.update(updates);
}

}

// Export for use in main.js
static/js/main.js: 1062 lines changed (file diff suppressed because it is too large).
@@ -32,19 +32,8 @@

<div class="form-container">
    <div class="input-group">
        <label for="target-domain">Target Domain</label>
        <input type="text" id="target-domain" placeholder="example.com" autocomplete="off">
    </div>

    <div class="input-group">
        <label for="max-depth">Recursion Depth</label>
        <select id="max-depth">
            <option value="1">Depth 1 - Direct relationships</option>
            <option value="2" selected>Depth 2 - Recommended</option>
            <option value="3">Depth 3 - Extended analysis</option>
            <option value="4">Depth 4 - Deep reconnaissance</option>
            <option value="5">Depth 5 - Maximum depth</option>
        </select>
        <label for="target-input">Target Domain or IP</label>
        <input type="text" id="target-input" placeholder="example.com or 8.8.8.8" autocomplete="off">
    </div>

    <div class="button-group">
@@ -64,9 +53,9 @@
        <span class="btn-icon">[EXPORT]</span>
        <span>Download Results</span>
    </button>
    <button id="configure-api-keys" class="btn btn-secondary">
    <button id="configure-settings" class="btn btn-secondary">
        <span class="btn-icon">[API]</span>
        <span>Configure API Keys</span>
        <span>Settings</span>
    </button>
</div>
</div>
@@ -104,30 +93,22 @@
<div class="progress-bar">
    <div id="progress-fill" class="progress-fill"></div>
</div>
<div class="progress-placeholder">
    <span class="status-label">
        ⚠️ <strong>Important:</strong> Scanning large public services (e.g., Google, Cloudflare, AWS) is
        <strong>discouraged</strong> due to rate limits (e.g., crt.sh).
        <br><br>
        Our task scheduler operates on a <strong>priority-based queue</strong>:
        Short, targeted tasks like DNS are processed first, while resource-intensive requests (e.g., crt.sh)
        are <strong>automatically deprioritized</strong> and may be processed later.
    </span>
</div>
</div>
</section>
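A minimal sketch of the scheduling behavior described in the note above; the task names and priority values are illustrative only, not the project's actual scheduler implementation:

// Lower number = higher priority; cheap DNS tasks outrank crt.sh queries.
const taskQueue = [];
function enqueue(task, priority) {
    taskQueue.push({ task, priority });
    taskQueue.sort((a, b) => a.priority - b.priority); // fine for small queues
}
enqueue('crtsh:example.com', 10);
enqueue('dns:example.com', 1);
console.log(taskQueue.map(t => t.task));
// ['dns:example.com', 'crtsh:example.com']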
<section class="visualization-panel">
    <div class="panel-header">
        <h2>Infrastructure Map</h2>
        <div class="view-controls">
            <div class="filter-group">
                <label for="node-type-filter">Node Type:</label>
                <select id="node-type-filter">
                    <option value="all">All</option>
                    <option value="domain">Domain</option>
                    <option value="ip">IP</option>
                    <option value="asn">ASN</option>
                    <option value="correlation_object">Correlation Object</option>
                    <option value="large_entity">Large Entity</option>
                </select>
            </div>
            <div class="filter-group">
                <label for="confidence-filter">Min Confidence:</label>
                <input type="range" id="confidence-filter" min="0" max="1" step="0.1" value="0">
                <span id="confidence-value">0</span>
            </div>
        </div>
    </div>

    <div id="network-graph" class="graph-container">
@@ -205,16 +186,28 @@
    </div>
</div>
<div id="api-key-modal" class="modal">
|
||||
<div id="settings-modal" class="modal">
|
||||
<div class="modal-content">
|
||||
<div class="modal-header">
|
||||
<h3>Configure API Keys</h3>
|
||||
<button id="api-key-modal-close" class="modal-close">[×]</button>
|
||||
<h3>Settings</h3>
|
||||
<button id="settings-modal-close" class="modal-close">[×]</button>
|
||||
</div>
|
||||
<div class="modal-body">
|
||||
<p class="modal-description">
|
||||
Enter your API keys for enhanced data providers. Keys are stored in memory for the current session only and are never saved to disk.
|
||||
Configure scan settings and API keys. Keys are stored in memory for the current session only.
|
||||
Only provide API-keys you dont use for anything else. Don´t enter an API-key if you don´t trust me (best practice would that you don´t).
|
||||
</p>
|
||||
<br>
|
||||
<div class="input-group">
|
||||
<label for="max-depth">Recursion Depth</label>
|
||||
<select id="max-depth">
|
||||
<option value="1">Depth 1 - Direct relationships</option>
|
||||
<option value="2" selected>Depth 2 - Recommended</option>
|
||||
<option value="3">Depth 3 - Extended analysis</option>
|
||||
<option value="4">Depth 4 - Deep reconnaissance</option>
|
||||
<option value="5">Depth 5 - Maximum depth</option>
|
||||
</select>
|
||||
</div>
|
||||
<div id="api-key-inputs">
|
||||
</div>
|
||||
<div class="button-group" style="flex-direction: row; justify-content: flex-end;">
|
||||
@@ -222,7 +215,7 @@
|
||||
<span>Reset</span>
|
||||
</button>
|
||||
<button id="save-api-keys" class="btn btn-primary">
|
||||
<span>Save Keys</span>
|
||||
<span>Save API-Keys</span>
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
@@ -48,3 +48,15 @@ def _is_valid_ip(ip: str) -> bool:

    except (ValueError, AttributeError):
        return False

def is_valid_target(target: str) -> bool:
    """
    Checks if the target is a valid domain or IP address.

    Args:
        target: The target string to validate.

    Returns:
        True if the target is a valid domain or IP, False otherwise.
    """
    return _is_valid_domain(target) or _is_valid_ip(target)