Compare commits

2 Commits: 140ef54674 ... try-fix

| Author | SHA1 | Date |
| --- | --- | --- |
|  | 4378146d0c |  |
|  | b26002eff9 |  |
.env.example (34 lines changed)

@@ -1,34 +0,0 @@
# ===============================================
# DNSRecon Environment Variables
# ===============================================
# Copy this file to .env and fill in your values.

# --- API Keys ---
# Add your Shodan API key for the Shodan provider to be enabled.
SHODAN_API_KEY=

# --- Flask & Session Settings ---
# A strong, random secret key is crucial for session security.
FLASK_SECRET_KEY=your-very-secret-and-random-key-here
FLASK_HOST=127.0.0.1
FLASK_PORT=5000
FLASK_DEBUG=True
# How long a user's session in the browser lasts (in hours).
FLASK_PERMANENT_SESSION_LIFETIME_HOURS=2
# How long inactive scanner data is stored in Redis (in minutes).
SESSION_TIMEOUT_MINUTES=60

# --- Application Core Settings ---
# The default number of levels to recurse when scanning.
DEFAULT_RECURSION_DEPTH=2
# Default timeout for provider API requests in seconds.
DEFAULT_TIMEOUT=30
# The number of concurrent provider requests to make.
MAX_CONCURRENT_REQUESTS=1
# The number of results from a provider that triggers the "large entity" grouping.
LARGE_ENTITY_THRESHOLD=100
# The number of times to retry a target if a provider fails.
MAX_RETRIES_PER_TARGET=8
# How long cached provider responses are stored (in hours).
CACHE_TIMEOUT_HOURS=12
.gitignore (vendored, 2 lines changed)

@@ -169,4 +169,4 @@ cython_debug/
#.idea/

dump.rdb
cache/
.vscode
README.md (226 lines changed)

@@ -4,32 +4,28 @@ DNSRecon is an interactive, passive reconnaissance tool designed to map adversar
**Current Status: Phase 2 Implementation**

* ✅ Core infrastructure and graph engine
* ✅ Multi-provider support (crt.sh, DNS, Shodan)
* ✅ Session-based multi-user support
* ✅ Real-time web interface with interactive visualization
* ✅ Forensic logging system and JSON export

-----

- ✅ Core infrastructure and graph engine
- ✅ Multi-provider support (crt.sh, DNS, Shodan)
- ✅ Session-based multi-user support
- ✅ Real-time web interface with interactive visualization
- ✅ Forensic logging system and JSON export

## Features

* **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
* **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
* **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
* **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
* **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
* **Session Management**: Supports concurrent user sessions with isolated scanner instances.

-----

- **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
- **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
- **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
- **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
- **Session Management**: Supports concurrent user sessions with isolated scanner instances.

## Installation

### Prerequisites

* Python 3.8 or higher
* A modern web browser with JavaScript enabled
* (Recommended) A Linux host for running the application and the optional DNS cache.
- Python 3.8 or higher
- A modern web browser with JavaScript enabled
- (Recommended) A Linux host for running the application and the optional DNS cache.

### 1\. Clone the Project

@@ -48,50 +44,156 @@ source venv/bin/activate
pip install -r requirements.txt
```

The `requirements.txt` file contains the following dependencies:
### 3\. (Optional but Recommended) Set up a Local DNS Caching Resolver

* Flask\>=2.3.3
* networkx\>=3.1
* requests\>=2.31.0
* python-dateutil\>=2.8.2
* Werkzeug\>=2.3.7
* urllib3\>=2.0.0
* dnspython\>=2.4.2
* gunicorn
* redis
* python-dotenv
Running a local DNS caching resolver can significantly speed up DNS queries and reduce your network footprint. Here’s how to set up `unbound` on a Debian-based Linux distribution (like Ubuntu).

-----

## Configuration

DNSRecon is configured using a `.env` file. You can copy the provided example file and edit it to suit your needs:
**a. Install Unbound:**

```bash
cp .env.example .env
sudo apt update
sudo apt install unbound -y
```

The following environment variables are available for configuration:
**b. Configure Unbound:**
Create a new configuration file for DNSRecon:

| Variable | Description | Default |
| :--- | :--- | :--- |
| `SHODAN_API_KEY` | Your Shodan API key. | |
| `FLASK_SECRET_KEY`| A strong, random secret key for session security. | `your-very-secret-and-random-key-here` |
| `FLASK_HOST` | The host address for the Flask application. | `127.0.0.1` |
| `FLASK_PORT` | The port for the Flask application. | `5000` |
| `FLASK_DEBUG` | Enable or disable Flask's debug mode. | `True` |
| `FLASK_PERMANENT_SESSION_LIFETIME_HOURS`| How long a user's session in the browser lasts (in hours). | `2` |
| `SESSION_TIMEOUT_MINUTES` | How long inactive scanner data is stored in Redis (in minutes). | `60` |
| `DEFAULT_RECURSION_DEPTH` | The default number of levels to recurse when scanning. | `2` |
| `DEFAULT_TIMEOUT` | Default timeout for provider API requests in seconds. | `30` |
| `MAX_CONCURRENT_REQUESTS`| The number of concurrent provider requests to make. | `5` |
| `LARGE_ENTITY_THRESHOLD`| The number of results from a provider that triggers the "large entity" grouping. | `100` |
| `MAX_RETRIES_PER_TARGET`| The number of times to retry a target if a provider fails. | `8` |
| `CACHE_EXPIRY_HOURS`| How long cached provider responses are stored (in hours). | `12` |
```bash
sudo nano /etc/unbound/unbound.conf.d/dnsrecon.conf
```

-----
Add the following content to the file:

## Systemd Service
```
server:
    # Listen on localhost for all users
    interface: 127.0.0.1
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.0/8 allow

    # Enable prefetching of popular items
    prefetch: yes
```

**c. Restart Unbound and set it as the default resolver:**

```bash
sudo systemctl restart unbound
sudo systemctl enable unbound
```

To use this resolver for your system, you may need to update your network settings to point to `127.0.0.1` as your DNS server.

**d. Update DNSProvider to use the local resolver:**
In `dnsrecon/providers/dns_provider.py`, you can explicitly set the resolver's nameservers in the `__init__` method:

```python
# dnsrecon/providers/dns_provider.py

class DNSProvider(BaseProvider):
    def __init__(self, session_config=None):
        """Initialize DNS provider with session-specific configuration."""
        super().__init__(...)

        # Configure DNS resolver
        self.resolver = dns.resolver.Resolver()
        self.resolver.nameservers = ['127.0.0.1']  # Use local caching resolver
        self.resolver.timeout = 5
        self.resolver.lifetime = 10
```

## Usage (Development)

### 1\. Start the Application

```bash
python app.py
```

### 2\. Open Your Browser

Navigate to `http://127.0.0.1:5000`.

### 3\. Basic Reconnaissance Workflow

1. **Enter Target Domain**: Input a domain like `example.com`.
2. **Select Recursion Depth**: Depth 2 is recommended for most investigations.
3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin.
4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered.
5. **Analyze and Export**: Interact with the graph and download the results when the scan is complete.

## Production Deployment

To deploy DNSRecon in a production environment, follow these steps:

### 1\. Use a Production WSGI Server

Do not use the built-in Flask development server for production. Use a WSGI server like **Gunicorn**:

```bash
pip install gunicorn
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
```

### 2\. Configure Environment Variables

Set the following environment variables for a secure and configurable deployment:

```bash
# Generate a strong, random secret key
export SECRET_KEY='your-super-secret-and-random-key'

# Set Flask to production mode
export FLASK_ENV='production'
export FLASK_DEBUG=False

# API keys (optional, but recommended for full functionality)
export SHODAN_API_KEY="your_shodan_key"
```
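One convenient way to produce a strong value for `SECRET_KEY` / `FLASK_SECRET_KEY` (not prescribed by the project; any cryptographically random string works) is Python's standard `secrets` module:

```python
# Prints a 64-character hex string suitable for use as the Flask secret key.
import secrets
print(secrets.token_hex(32))
```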
### 3\. Use a Reverse Proxy

Set up a reverse proxy like **Nginx** to sit in front of the Gunicorn server. This provides several benefits, including:

- **TLS/SSL Termination**: Securely handle HTTPS traffic.
- **Load Balancing**: Distribute traffic across multiple application instances.
- **Serving Static Files**: Efficiently serve CSS and JavaScript files.

**Example Nginx Configuration:**

```nginx
server {
    listen 80;
    server_name your_domain.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name your_domain.com;

    # SSL cert configuration
    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        alias /path/to/your/dnsrecon/static;
        expires 30d;
    }
}
```

## Autostart with systemd

To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.
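The unit file itself falls outside the hunks shown below. As a rough sketch only (the install path, user, and Gunicorn invocation are assumptions for illustration, not the project's actual unit), such a service might look like:

```
# /etc/systemd/system/dnsrecon.service (hypothetical example)
[Unit]
Description=DNSRecon passive reconnaissance web application
After=network.target redis-server.service

[Service]
User=dnsrecon
WorkingDirectory=/opt/dnsrecon
EnvironmentFile=/opt/dnsrecon/.env
ExecStart=/opt/dnsrecon/venv/bin/gunicorn --workers 4 --bind 127.0.0.1:5000 app:app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, `sudo systemctl daemon-reload && sudo systemctl enable --now dnsrecon.service` would register and start it.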
@@ -143,18 +245,12 @@ You can check the status of the service at any time with:
sudo systemctl status dnsrecon.service
```

-----

## Security Considerations

- **API Keys**: API keys are stored in memory for the duration of a user session and are not written to disk.
- **Rate Limiting**: DNSRecon includes built-in rate limiting to be respectful to data sources.
- **Local Use**: The application is designed for local or trusted network use and does not have built-in authentication. **Do not expose it directly to the internet without proper security controls.**

## License

This project is licensed under the terms of the **BSD-3-Clause** license.

Copyright (c) 2025 mstoeck3.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This project is licensed under the terms of the license agreement found in the `LICENSE` file.
config.py (171 lines changed)

@@ -1,5 +1,3 @@
# dnsrecon-reduced/config.py

"""
Configuration management for DNSRecon tool.
Handles API key storage, rate limiting, and default settings.

@@ -7,149 +5,110 @@ Handles API key storage, rate limiting, and default settings.

import os
from typing import Dict, Optional
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

class Config:
    """Configuration manager for DNSRecon application."""

    def __init__(self):
        """Initialize configuration with default values."""
        self.api_keys: Dict[str, Optional[str]] = {}
        self.api_keys: Dict[str, Optional[str]] = {
            'shodan': None
        }

        # --- General Settings ---
        # Default settings
        self.default_recursion_depth = 2
        self.default_timeout = 60
        self.max_concurrent_requests = 1
        self.default_timeout = 10
        self.max_concurrent_requests = 5
        self.large_entity_threshold = 100
        self.max_retries_per_target = 8

        # --- Provider Caching Settings ---
        self.cache_timeout_hours = 6  # Provider-specific cache timeout

        # --- Rate Limiting (requests per minute) ---
        # Rate limiting settings (requests per minute)
        self.rate_limits = {
            'crtsh': 5,
            'shodan': 60,
            'dns': 100
            'crtsh': 60,   # Free service, be respectful
            'shodan': 60,  # API dependent
            'dns': 100     # Local DNS queries
        }

        # --- Provider Settings ---
        # Provider settings
        self.enabled_providers = {
            'crtsh': True,
            'dns': True,
            'shodan': False
            'crtsh': True,   # Always enabled (free)
            'dns': True,     # Always enabled (free)
            'shodan': False  # Requires API key
        }

        # --- Logging ---
        # Logging configuration
        self.log_level = 'INFO'
        self.log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

        # --- Flask & Session Settings ---
        # Flask configuration
        self.flask_host = '127.0.0.1'
        self.flask_port = 5000
        self.flask_debug = True
        self.flask_secret_key = 'default-secret-key-change-me'
        self.flask_permanent_session_lifetime_hours = 2
        self.session_timeout_minutes = 60

        # Load environment variables to override defaults
        self.load_from_env()

    def load_from_env(self):
        """Load configuration from environment variables."""
        self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))

        # Override settings from environment
        self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', self.default_recursion_depth))
        self.default_timeout = int(os.getenv('DEFAULT_TIMEOUT', self.default_timeout))
        self.max_concurrent_requests = int(os.getenv('MAX_CONCURRENT_REQUESTS', self.max_concurrent_requests))
        self.large_entity_threshold = int(os.getenv('LARGE_ENTITY_THRESHOLD', self.large_entity_threshold))
        self.max_retries_per_target = int(os.getenv('MAX_RETRIES_PER_TARGET', self.max_retries_per_target))
        self.cache_timeout_hours = int(os.getenv('CACHE_TIMEOUT_HOURS', self.cache_timeout_hours))

        # Override Flask and session settings
        self.flask_host = os.getenv('FLASK_HOST', self.flask_host)
        self.flask_port = int(os.getenv('FLASK_PORT', self.flask_port))
        self.flask_debug = os.getenv('FLASK_DEBUG', str(self.flask_debug)).lower() == 'true'
        self.flask_secret_key = os.getenv('FLASK_SECRET_KEY', self.flask_secret_key)
        self.flask_permanent_session_lifetime_hours = int(os.getenv('FLASK_PERMANENT_SESSION_LIFETIME_HOURS', self.flask_permanent_session_lifetime_hours))
        self.session_timeout_minutes = int(os.getenv('SESSION_TIMEOUT_MINUTES', self.session_timeout_minutes))

    def set_api_key(self, provider: str, api_key: Optional[str]) -> bool:
        """Set API key for a provider."""
        self.api_keys[provider] = api_key
        if api_key:
            self.enabled_providers[provider] = True
        return True

    def set_provider_enabled(self, provider: str, enabled: bool) -> bool:
    def set_api_key(self, provider: str, api_key: str) -> bool:
        """
        Set provider enabled status for the session.
        Set API key for a provider.

        Args:
            provider: Provider name
            enabled: Whether the provider should be enabled
            provider: Provider name (shodan, etc)
            api_key: API key string

        Returns:
            True if the setting was applied successfully
            bool: True if key was set successfully
        """
        provider_key = provider.lower()
        self.enabled_providers[provider_key] = enabled
        return True

    def get_provider_enabled(self, provider: str) -> bool:
        """
        Get provider enabled status.

        Args:
            provider: Provider name

        Returns:
            True if the provider is enabled
        """
        provider_key = provider.lower()
        return self.enabled_providers.get(provider_key, True)  # Default to enabled

    def bulk_set_provider_settings(self, provider_settings: dict) -> dict:
        """
        Set multiple provider settings at once.

        Args:
            provider_settings: Dict of provider_name -> {'enabled': bool, ...}

        Returns:
            Dict with results for each provider
        """
        results = {}

        for provider_name, settings in provider_settings.items():
            provider_key = provider_name.lower()

            try:
                if 'enabled' in settings:
                    self.enabled_providers[provider_key] = settings['enabled']
                    results[provider_key] = {'success': True, 'enabled': settings['enabled']}
                else:
                    results[provider_key] = {'success': False, 'error': 'No enabled setting provided'}
            except Exception as e:
                results[provider_key] = {'success': False, 'error': str(e)}

        return results
        if provider in self.api_keys:
            self.api_keys[provider] = api_key
            self.enabled_providers[provider] = True if api_key else False
            return True
        return False

    def get_api_key(self, provider: str) -> Optional[str]:
        """Get API key for a provider."""
        """
        Get API key for a provider.

        Args:
            provider: Provider name

        Returns:
            API key or None if not set
        """
        return self.api_keys.get(provider)

    def is_provider_enabled(self, provider: str) -> bool:
        """Check if a provider is enabled."""
        """
        Check if a provider is enabled.

        Args:
            provider: Provider name

        Returns:
            bool: True if provider is enabled
        """
        return self.enabled_providers.get(provider, False)

    def get_rate_limit(self, provider: str) -> int:
        """Get rate limit for a provider."""
        """
        Get rate limit for a provider.

        Args:
            provider: Provider name

        Returns:
            Rate limit in requests per minute
        """
        return self.rate_limits.get(provider, 60)

    def load_from_env(self):
        """Load configuration from environment variables."""
        if os.getenv('SHODAN_API_KEY'):
            self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))

        # Override default settings from environment
        self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
        self.flask_debug = os.getenv('FLASK_DEBUG', 'True').lower() == 'true'
        self.default_timeout = 30
        self.max_concurrent_requests = 5


# Global configuration instance
config = Config()
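As a quick illustration of how the environment overrides in `load_from_env` are consumed, a minimal sketch (assuming `config.py` is importable from the project root; the environment values below are examples only) is:

```python
# Minimal usage sketch for the Config class above.
import os
os.environ['DEFAULT_RECURSION_DEPTH'] = '3'   # would normally come from .env
os.environ['FLASK_DEBUG'] = 'False'

from config import config  # the module-level instance calls load_from_env() on creation

print(config.default_recursion_depth)         # expected: 3, if the override is honored
print(config.get_rate_limit('dns'))           # expected: 100, per the defaults above
print(config.is_provider_enabled('shodan'))   # expected: False unless SHODAN_API_KEY is set
```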
core/__init__.py

@@ -8,6 +8,7 @@ from .scanner import Scanner, ScanStatus
from .logger import ForensicLogger, get_forensic_logger, new_session
from .session_manager import session_manager
from .session_config import SessionConfig, create_session_config
from .task_manager import TaskManager, TaskType, ReconTask

__all__ = [
    'GraphManager',
@@ -19,7 +20,10 @@ __all__ = [
    'new_session',
    'session_manager',
    'SessionConfig',
    'create_session_config'
    'create_session_config',
    'TaskManager',
    'TaskType',
    'ReconTask'
]

__version__ = "1.0.0-phase2"
core/graph_manager.py

@@ -1,10 +1,6 @@
# dnsrecon-reduced/core/graph_manager.py

"""
Graph data model for DNSRecon using NetworkX.
Manages in-memory graph storage with confidence scoring and forensic metadata.
Now fully compatible with the unified ProviderResult data model.
UPDATED: Fixed correlation exclusion keys to match actual attribute names.
"""
import re
from datetime import datetime, timezone

@@ -18,8 +14,7 @@ class NodeType(Enum):
    """Enumeration of supported node types."""
    DOMAIN = "domain"
    IP = "ip"
    ISP = "isp"
    CA = "ca"
    ASN = "asn"
    LARGE_ENTITY = "large_entity"
    CORRELATION_OBJECT = "correlation_object"

@@ -31,7 +26,6 @@ class GraphManager:
    """
    Thread-safe graph manager for DNSRecon infrastructure mapping.
    Uses NetworkX for in-memory graph storage with confidence scoring.
    Compatible with unified ProviderResult data model.
    """

    def __init__(self):
@@ -42,31 +36,6 @@ class GraphManager:
        self.correlation_index = {}
        # Compile regex for date filtering for efficiency
        self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')

        # FIXED: Exclude cert_issuer_name since we already create proper CA relationships
        self.EXCLUDED_KEYS = [
            # Certificate metadata that creates noise or has dedicated node types
            'cert_source',  # Always 'crtsh' for crtsh provider
            'cert_common_name',
            'cert_validity_period_days',  # Numerical, not useful for correlation
            'cert_issuer_name',  # FIXED: Has dedicated CA nodes, don't correlate
            #'cert_certificate_id',  # Unique per certificate
            #'cert_serial_number',  # Unique per certificate
            'cert_entry_timestamp',  # Timestamp, filtered by date regex anyway
            'cert_not_before',  # Date, filtered by date regex anyway
            'cert_not_after',  # Date, filtered by date regex anyway
            # DNS metadata that creates noise
            'dns_ttl',  # TTL values are not meaningful for correlation
            # Shodan metadata that might create noise
            'timestamp',  # Generic timestamp fields
            'last_update',  # Generic timestamp fields
            #'org',  # Too generic, causes false correlations
            #'isp',  # Too generic, causes false correlations
            # Generic noisy attributes
            'updated_timestamp',  # Any timestamp field
            'discovery_timestamp',  # Any timestamp field
            'query_timestamp',  # Any timestamp field
        ]

    def __getstate__(self):
        """Prepare GraphManager for pickling, excluding compiled regex."""
@@ -81,138 +50,178 @@ class GraphManager:
        self.__dict__.update(state)
        self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')

    def process_correlations_for_node(self, node_id: str):
        """
        UPDATED: Process correlations for a given node with enhanced tracking.
        Now properly tracks which attribute/provider created each correlation.
        """
        if not self.graph.has_node(node_id):
    def _update_correlation_index(self, node_id: str, data: Any, path: List[str] = None):
        """Recursively traverse metadata and add hashable values to the index."""
        if path is None:
            path = []

        if isinstance(data, dict):
            for key, value in data.items():
                self._update_correlation_index(node_id, value, path + [key])
        elif isinstance(data, list):
            for i, item in enumerate(data):
                self._update_correlation_index(node_id, item, path + [f"[{i}]"])
        else:
            self._add_to_correlation_index(node_id, data, ".".join(path))

    def _add_to_correlation_index(self, node_id: str, value: Any, path_str: str):
        """Add a hashable value to the correlation index, filtering out noise."""
        if not isinstance(value, (str, int, float, bool)) or value is None:
            return

        node_attributes = self.graph.nodes[node_id].get('attributes', [])

        # Process each attribute for potential correlations
        for attr in node_attributes:
            attr_name = attr.get('name')
            attr_value = attr.get('value')
            attr_provider = attr.get('provider', 'unknown')
        # Ignore certain paths that contain noisy, non-unique identifiers
        if any(keyword in path_str.lower() for keyword in ['count', 'total', 'timestamp', 'date']):
            return

            # IMPROVED: More comprehensive exclusion logic
            should_exclude = (
                # Check against excluded keys (exact match or substring)
                any(excluded_key in attr_name or attr_name == excluded_key for excluded_key in self.EXCLUDED_KEYS) or
                # Invalid value types
                not isinstance(attr_value, (str, int, float, bool)) or
                attr_value is None or
                # Boolean values are not useful for correlation
                isinstance(attr_value, bool) or
                # String values that are too short or are dates
                (isinstance(attr_value, str) and (
                    len(attr_value) < 4 or
                    self.date_pattern.match(attr_value) or
                    # Exclude common generic values that create noise
                    attr_value.lower() in ['unknown', 'none', 'null', 'n/a', 'true', 'false', '0', '1']
                )) or
                # Numerical values that are likely to be unique identifiers
                (isinstance(attr_value, (int, float)) and (
                    attr_value == 0 or  # Zero values are not meaningful
                    attr_value == 1 or  # One values are too common
                    abs(attr_value) > 1000000  # Very large numbers are likely IDs
                ))
            )
        # Filter out common low-entropy values and date-like strings
        if isinstance(value, str):
            # FIXED: Prevent correlation on date/time strings.
            if self.date_pattern.match(value):
                return
            if len(value) < 4 or value.lower() in ['true', 'false', 'unknown', 'none', 'crt.sh']:
                return
        elif isinstance(value, int) and abs(value) < 9999:
            return  # Ignore small integers
        elif isinstance(value, bool):
            return  # Ignore boolean values

            if should_exclude:
                continue
        # Add the valuable correlation data to the index
        if value not in self.correlation_index:
            self.correlation_index[value] = {}
        if node_id not in self.correlation_index[value]:
            self.correlation_index[value][node_id] = []
        if path_str not in self.correlation_index[value][node_id]:
            self.correlation_index[value][node_id].append(path_str)

            # Initialize correlation tracking for this value
            if attr_value not in self.correlation_index:
                self.correlation_index[attr_value] = {
                    'nodes': set(),
                    'sources': []  # Track which provider/attribute combinations contributed
                }
    def _check_for_correlations(self, new_node_id: str, data: Any, path: List[str] = None) -> List[Dict]:
        """Recursively traverse metadata to find correlations with existing data."""
        if path is None:
            path = []

            # Add this node and source information
            self.correlation_index[attr_value]['nodes'].add(node_id)

            # Track the source of this correlation value
            source_info = {
                'node_id': node_id,
                'provider': attr_provider,
                'attribute': attr_name,
                'path': f"{attr_provider}_{attr_name}"
            }

            # Add source if not already present (avoid duplicates)
            existing_sources = [s for s in self.correlation_index[attr_value]['sources']
                                if s['node_id'] == node_id and s['path'] == source_info['path']]
            if not existing_sources:
                self.correlation_index[attr_value]['sources'].append(source_info)
        all_correlations = []
        if isinstance(data, dict):
            for key, value in data.items():
                if key == 'source':  # Avoid correlating on the provider name
                    continue
                all_correlations.extend(self._check_for_correlations(new_node_id, value, path + [key]))
        elif isinstance(data, list):
            for i, item in enumerate(data):
                all_correlations.extend(self._check_for_correlations(new_node_id, item, path + [f"[{i}]"]))
        else:
            value = data
            if value in self.correlation_index:
                existing_nodes_with_paths = self.correlation_index[value]
                unique_nodes = set(existing_nodes_with_paths.keys())
                unique_nodes.add(new_node_id)

            # Create correlation node if we have multiple nodes with this value
            if len(self.correlation_index[attr_value]['nodes']) > 1:
                self._create_enhanced_correlation_node_and_edges(attr_value, self.correlation_index[attr_value])
                if len(unique_nodes) < 2:
                    return all_correlations  # Correlation must involve at least two distinct nodes

    def _create_enhanced_correlation_node_and_edges(self, value, correlation_data):
        """
        UPDATED: Create correlation node and edges with raw provider data (no formatting).
        """
        correlation_node_id = f"corr_{hash(str(value)) & 0x7FFFFFFF}"
        nodes = correlation_data['nodes']
        sources = correlation_data['sources']

        # Create or update correlation node
        if not self.graph.has_node(correlation_node_id):
            # Use raw provider/attribute data - no formatting
            provider_counts = {}
            for source in sources:
                # Keep original provider and attribute names
                key = f"{source['provider']}_{source['attribute']}"
                provider_counts[key] = provider_counts.get(key, 0) + 1

            # Use the most common provider/attribute as the primary label (raw)
            primary_source = max(provider_counts.items(), key=lambda x: x[1])[0] if provider_counts else "unknown_correlation"

            metadata = {
                'value': value,
                'correlated_nodes': list(nodes),
                'sources': sources,
                'primary_source': primary_source,
                'correlation_count': len(nodes)
            }

            self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT, metadata=metadata)
            #print(f"Created correlation node {correlation_node_id} for value '{value}' with {len(nodes)} nodes")
                new_source = {'node_id': new_node_id, 'path': ".".join(path)}
                all_sources = [new_source]
                for node_id, paths in existing_nodes_with_paths.items():
                    for p_str in paths:
                        all_sources.append({'node_id': node_id, 'path': p_str})

        # Create edges from each node to the correlation node
        for source in sources:
            node_id = source['node_id']
            provider = source['provider']
            attribute = source['attribute']

            if self.graph.has_node(node_id) and not self.graph.has_edge(node_id, correlation_node_id):
                # Format relationship label as "corr_provider_attribute"
                relationship_label = f"corr_{provider}_{attribute}"
                all_correlations.append({
                    'value': value,
                    'sources': all_sources,
                    'nodes': list(unique_nodes)
                })
        return all_correlations

    def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[Dict[str, Any]] = None,
                 description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
        """Add a node to the graph, update attributes, and process correlations."""
        is_new_node = not self.graph.has_node(node_id)
        if is_new_node:
            self.graph.add_node(node_id, type=node_type.value,
                                added_timestamp=datetime.now(timezone.utc).isoformat(),
                                attributes=attributes or {},
                                description=description,
                                metadata=metadata or {})
        else:
            # Safely merge new attributes into existing attributes
            if attributes:
                existing_attributes = self.graph.nodes[node_id].get('attributes', {})
                existing_attributes.update(attributes)
                self.graph.nodes[node_id]['attributes'] = existing_attributes
            if description:
                self.graph.nodes[node_id]['description'] = description
            if metadata:
                existing_metadata = self.graph.nodes[node_id].get('metadata', {})
                existing_metadata.update(metadata)
                self.graph.nodes[node_id]['metadata'] = existing_metadata

        if attributes and node_type != NodeType.CORRELATION_OBJECT:
            correlations = self._check_for_correlations(node_id, attributes)
            for corr in correlations:
                value = corr['value']

                self.add_edge(
                    source_id=node_id,
                    target_id=correlation_node_id,
                    relationship_type=relationship_label,
                    confidence_score=0.9,
                    source_provider=provider,
                    raw_data={
                        'correlation_value': value,
                        'original_attribute': attribute,
                        'correlation_type': 'attribute_matching'
                    }
                )
                # STEP 1: Substring check against all existing nodes
                if self._correlation_value_matches_existing_node(value):
                    # Skip creating correlation node - would be redundant
                    continue

                #print(f"Added correlation edge: {node_id} -> {correlation_node_id} ({relationship_label})")
                # STEP 2: Filter out node pairs that already have direct edges
                eligible_nodes = self._filter_nodes_without_direct_edges(set(corr['nodes']))

                if len(eligible_nodes) < 2:
                    # Need at least 2 nodes to create a correlation
                    continue

                # STEP 3: Check for existing correlation node with same connection pattern
                correlation_nodes_with_pattern = self._find_correlation_nodes_with_same_pattern(eligible_nodes)

                if correlation_nodes_with_pattern:
                    # STEP 4: Merge with existing correlation node
                    target_correlation_node = correlation_nodes_with_pattern[0]
                    self._merge_correlation_values(target_correlation_node, value, corr)
                else:
                    # STEP 5: Create new correlation node for eligible nodes only
                    correlation_node_id = f"corr_{abs(hash(str(sorted(eligible_nodes))))}"
                    self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT,
                                  metadata={'values': [value], 'sources': corr['sources'],
                                            'correlated_nodes': list(eligible_nodes)})

                    # Create edges from eligible nodes to this correlation node
                    for c_node_id in eligible_nodes:
                        if self.graph.has_node(c_node_id):
                            attribute = corr['sources'][0]['path'].split('.')[-1]
                            relationship_type = f"c_{attribute}"
                            self.add_edge(c_node_id, correlation_node_id, relationship_type, confidence_score=0.9)

            self._update_correlation_index(node_id, attributes)

        self.last_modified = datetime.now(timezone.utc).isoformat()
        return is_new_node

    def _filter_nodes_without_direct_edges(self, node_set: set) -> set:
        """
        Filter out nodes that already have direct edges between them.
        Returns set of nodes that should be included in correlation.
        """
        nodes_list = list(node_set)
        eligible_nodes = set(node_set)  # Start with all nodes

        # Check all pairs of nodes
        for i in range(len(nodes_list)):
            for j in range(i + 1, len(nodes_list)):
                node_a = nodes_list[i]
                node_b = nodes_list[j]

                # Check if direct edge exists in either direction
                if self._has_direct_edge_bidirectional(node_a, node_b):
                    # Remove both nodes from eligible set since they're already connected
                    eligible_nodes.discard(node_a)
                    eligible_nodes.discard(node_b)

        return eligible_nodes

    def _has_direct_edge_bidirectional(self, node_a: str, node_b: str) -> bool:
        """
        Check if there's a direct edge between two nodes in either direction.
        Returns True if node_aâ†'node_b OR node_bâ†'node_a exists.
        Returns True if node_a→node_b OR node_b→node_a exists.
        """
        return (self.graph.has_edge(node_a, node_b) or
                self.graph.has_edge(node_b, node_a))
@@ -281,7 +290,7 @@
        # Create set of unique sources based on (node_id, path) tuples
        source_set = set()
        for source in existing_sources + new_sources:
            source_tuple = (source['node_id'], source.get('path', ''))
            source_tuple = (source['node_id'], source['path'])
            source_set.add(source_tuple)

        # Convert back to list of dictionaries
@@ -304,60 +313,19 @@
            f"across {node_count} nodes"
        )

    def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[List[Dict[str, Any]]] = None,
                 description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
        """
        Add a node to the graph, update attributes, and process correlations.
        Now compatible with unified data model - attributes are dictionaries from converted StandardAttribute objects.
        """
        is_new_node = not self.graph.has_node(node_id)
        if is_new_node:
            self.graph.add_node(node_id, type=node_type.value,
                                added_timestamp=datetime.now(timezone.utc).isoformat(),
                                attributes=attributes or [],  # Store as a list from the start
                                description=description,
                                metadata=metadata or {})
        else:
            # Safely merge new attributes into the existing list of attributes
            if attributes:
                existing_attributes = self.graph.nodes[node_id].get('attributes', [])

                # Handle cases where old data might still be in dictionary format
                if not isinstance(existing_attributes, list):
                    existing_attributes = []

                # Create a set of existing attribute names for efficient duplicate checking
                existing_attr_names = {attr['name'] for attr in existing_attributes}

                for new_attr in attributes:
                    if new_attr['name'] not in existing_attr_names:
                        existing_attributes.append(new_attr)
                        existing_attr_names.add(new_attr['name'])

                self.graph.nodes[node_id]['attributes'] = existing_attributes
            if description:
                self.graph.nodes[node_id]['description'] = description
            if metadata:
                existing_metadata = self.graph.nodes[node_id].get('metadata', {})
                existing_metadata.update(metadata)
                self.graph.nodes[node_id]['metadata'] = existing_metadata

        self.last_modified = datetime.now(timezone.utc).isoformat()
        return is_new_node

    def add_edge(self, source_id: str, target_id: str, relationship_type: str,
                 confidence_score: float = 0.5, source_provider: str = "unknown",
                 raw_data: Optional[Dict[str, Any]] = None) -> bool:
        """
        UPDATED: Add or update an edge between two nodes with raw relationship labels.
        """
                 confidence_score: float = 0.5, source_provider: str = "unknown",
                 raw_data: Optional[Dict[str, Any]] = None) -> bool:
        """Add or update an edge between two nodes, ensuring nodes exist."""
        if not self.graph.has_node(source_id) or not self.graph.has_node(target_id):
            return False

        new_confidence = confidence_score

        # UPDATED: Use raw relationship type - no formatting
        edge_label = relationship_type
        if relationship_type.startswith("c_"):
            edge_label = relationship_type
        else:
            edge_label = f"{source_provider}_{relationship_type}"

        if self.graph.has_edge(source_id, target_id):
            # If edge exists, update confidence if the new score is higher.
@@ -367,7 +335,7 @@ class GraphManager:
            self.graph.edges[source_id, target_id]['updated_by'] = source_provider
            return False

        # Add a new edge with raw attributes
        # Add a new edge with all attributes.
        self.graph.add_edge(source_id, target_id,
                            relationship_type=edge_label,
                            confidence_score=new_confidence,
@@ -377,69 +345,6 @@ class GraphManager:
        self.last_modified = datetime.now(timezone.utc).isoformat()
        return True

    def extract_node_from_large_entity(self, large_entity_id: str, node_id_to_extract: str) -> bool:
        """
        Removes a node from a large entity's internal lists and updates its count.
        This prepares the large entity for the node's promotion to a regular node.
        """
        if not self.graph.has_node(large_entity_id):
            return False

        node_data = self.graph.nodes[large_entity_id]
        attributes = node_data.get('attributes', [])

        # Find the 'nodes' attribute dictionary in the list
        nodes_attr = next((attr for attr in attributes if attr.get('name') == 'nodes'), None)

        # Remove from the list of member nodes
        if nodes_attr and 'value' in nodes_attr and isinstance(nodes_attr['value'], list) and node_id_to_extract in nodes_attr['value']:
            nodes_attr['value'].remove(node_id_to_extract)

            # Find the 'count' attribute and update it
            count_attr = next((attr for attr in attributes if attr.get('name') == 'count'), None)
            if count_attr:
                count_attr['value'] = len(nodes_attr['value'])
        else:
            # This can happen if the node was already extracted, which is not an error.
            print(f"Warning: Node {node_id_to_extract} not found in the 'nodes' list of {large_entity_id}.")
            return True  # Proceed as if successful

        self.last_modified = datetime.now(timezone.utc).isoformat()
        return True

    def remove_node(self, node_id: str) -> bool:
        """Remove a node and its connected edges from the graph."""
        if not self.graph.has_node(node_id):
            return False

        # Remove node from the graph (NetworkX handles removing connected edges)
        self.graph.remove_node(node_id)

        # Clean up the correlation index
        keys_to_delete = []
        for value, data in self.correlation_index.items():
            if isinstance(data, dict) and 'nodes' in data:
                # Updated correlation structure
                if node_id in data['nodes']:
                    data['nodes'].discard(node_id)
                    # Remove sources for this node
                    data['sources'] = [s for s in data['sources'] if s['node_id'] != node_id]
                    if not data['nodes']:  # If no other nodes are associated, remove it
                        keys_to_delete.append(value)
            else:
                # Legacy correlation structure (fallback)
                if isinstance(data, set) and node_id in data:
                    data.discard(node_id)
                    if not data:
                        keys_to_delete.append(value)

        for key in keys_to_delete:
            if key in self.correlation_index:
                del self.correlation_index[key]

        self.last_modified = datetime.now(timezone.utc).isoformat()
        return True

    def get_node_count(self) -> int:
        """Get total number of nodes in the graph."""
        return self.graph.number_of_nodes()
@@ -452,59 +357,54 @@
        """Get all nodes of a specific type."""
        return [n for n, d in self.graph.nodes(data=True) if d.get('type') == node_type.value]

    def get_neighbors(self, node_id: str) -> List[str]:
        """Get all unique neighbors (predecessors and successors) for a node."""
        if not self.graph.has_node(node_id):
            return []
        return list(set(self.graph.predecessors(node_id)) | set(self.graph.successors(node_id)))

    def get_high_confidence_edges(self, min_confidence: float = 0.8) -> List[Tuple[str, str, Dict]]:
        """Get edges with confidence score above a given threshold."""
        return [(u, v, d) for u, v, d in self.graph.edges(data=True)
                if d.get('confidence_score', 0) >= min_confidence]

    def get_graph_data(self) -> Dict[str, Any]:
        """
        Export graph data formatted for frontend visualization.
        SIMPLIFIED: No certificate styling - frontend handles all visual styling.
        """
        """Export graph data formatted for frontend visualization."""
        nodes = []
        for node_id, attrs in self.graph.nodes(data=True):
            node_data = {
                'id': node_id,
                'label': node_id,
                'type': attrs.get('type', 'unknown'),
                'attributes': attrs.get('attributes', []),  # Raw attributes list
                'description': attrs.get('description', ''),
                'metadata': attrs.get('metadata', {}),
                'added_timestamp': attrs.get('added_timestamp')
            }
            node_data = {'id': node_id, 'label': node_id, 'type': attrs.get('type', 'unknown'),
                         'attributes': attrs.get('attributes', {}),
                         'description': attrs.get('description', ''),
                         'metadata': attrs.get('metadata', {}),
                         'added_timestamp': attrs.get('added_timestamp')}
            # Customize node appearance based on type and attributes
            node_type = node_data['type']
            attributes = node_data['attributes']
            if node_type == 'domain' and attributes.get('certificates', {}).get('has_valid_cert') is False:
                node_data['color'] = {'background': '#c7c7c7', 'border': '#999'}  # Gray for invalid cert

            # Add incoming and outgoing edges to node data
            if self.graph.has_node(node_id):
                node_data['incoming_edges'] = [
                    {'from': u, 'data': d} for u, _, d in self.graph.in_edges(node_id, data=True)
                ]
                node_data['outgoing_edges'] = [
                    {'to': v, 'data': d} for _, v, d in self.graph.out_edges(node_id, data=True)
                ]
                node_data['incoming_edges'] = [{'from': u, 'data': d} for u, _, d in self.graph.in_edges(node_id, data=True)]
                node_data['outgoing_edges'] = [{'to': v, 'data': d} for _, v, d in self.graph.out_edges(node_id, data=True)]

            nodes.append(node_data)

        edges = []
        for source, target, attrs in self.graph.edges(data=True):
            edges.append({
                'from': source,
                'to': target,
                'label': attrs.get('relationship_type', ''),
                'confidence_score': attrs.get('confidence_score', 0),
                'source_provider': attrs.get('source_provider', ''),
                'discovery_timestamp': attrs.get('discovery_timestamp')
            })

            edges.append({'from': source, 'to': target,
                          'label': attrs.get('relationship_type', ''),
                          'confidence_score': attrs.get('confidence_score', 0),
                          'source_provider': attrs.get('source_provider', ''),
                          'discovery_timestamp': attrs.get('discovery_timestamp')})
        return {
            'nodes': nodes,
            'edges': edges,
            'nodes': nodes, 'edges': edges,
            'statistics': self.get_statistics()['basic_metrics']
        }

    def export_json(self) -> Dict[str, Any]:
        """Export complete graph data as a JSON-serializable dictionary."""
        graph_data = nx.node_link_data(self.graph, edges="edges")
        graph_data = nx.node_link_data(self.graph)  # Use NetworkX's built-in robust serializer
        return {
            'export_metadata': {
                'export_timestamp': datetime.now(timezone.utc).isoformat(),
@@ -512,67 +412,37 @@
                'last_modified': self.last_modified,
                'total_nodes': self.get_node_count(),
                'total_edges': self.get_edge_count(),
                'graph_format': 'dnsrecon_v1_unified_model'
                'graph_format': 'dnsrecon_v1_nodeling'
            },
            'graph': graph_data,
            'statistics': self.get_statistics()
        }

    def _get_confidence_distribution(self) -> Dict[str, int]:
        """Get distribution of edge confidence scores with empty graph handling."""
        """Get distribution of edge confidence scores."""
        distribution = {'high': 0, 'medium': 0, 'low': 0}

        # FIXED: Handle empty graph case
        if self.get_edge_count() == 0:
            return distribution

        for _, _, data in self.graph.edges(data=True):
            confidence = data.get('confidence_score', 0)
            if confidence >= 0.8:
                distribution['high'] += 1
            elif confidence >= 0.6:
                distribution['medium'] += 1
            else:
                distribution['low'] += 1
        for _, _, confidence in self.graph.edges(data='confidence_score', default=0):
            if confidence >= 0.8: distribution['high'] += 1
            elif confidence >= 0.6: distribution['medium'] += 1
            else: distribution['low'] += 1
        return distribution

    def get_statistics(self) -> Dict[str, Any]:
        """Get comprehensive statistics about the graph with proper empty graph handling."""

        # FIXED: Handle empty graph case properly
        node_count = self.get_node_count()
        edge_count = self.get_edge_count()

        stats = {
            'basic_metrics': {
                'total_nodes': node_count,
                'total_edges': edge_count,
                'creation_time': self.creation_time,
                'last_modified': self.last_modified
            },
            'node_type_distribution': {},
            'relationship_type_distribution': {},
            'confidence_distribution': self._get_confidence_distribution(),
            'provider_distribution': {}
        }

        # FIXED: Only calculate distributions if we have data
        if node_count > 0:
            # Calculate node type distributions
            for node_type in NodeType:
                count = len(self.get_nodes_by_type(node_type))
                if count > 0:  # Only include types that exist
                    stats['node_type_distribution'][node_type.value] = count

        if edge_count > 0:
            # Calculate edge distributions
            for _, _, data in self.graph.edges(data=True):
                rel_type = data.get('relationship_type', 'unknown')
                stats['relationship_type_distribution'][rel_type] = stats['relationship_type_distribution'].get(rel_type, 0) + 1

                provider = data.get('source_provider', 'unknown')
                stats['provider_distribution'][provider] = stats['provider_distribution'].get(provider, 0) + 1

        """Get comprehensive statistics about the graph."""
        stats = {'basic_metrics': {'total_nodes': self.get_node_count(),
                                   'total_edges': self.get_edge_count(),
                                   'creation_time': self.creation_time,
                                   'last_modified': self.last_modified},
                 'node_type_distribution': {}, 'relationship_type_distribution': {},
                 'confidence_distribution': self._get_confidence_distribution(),
                 'provider_distribution': {}}
        # Calculate distributions
        for node_type in NodeType:
            stats['node_type_distribution'][node_type.value] = self.get_nodes_by_type(node_type).__len__()
        for _, _, rel_type in self.graph.edges(data='relationship_type', default='unknown'):
            stats['relationship_type_distribution'][rel_type] = stats['relationship_type_distribution'].get(rel_type, 0) + 1
        for _, _, provider in self.graph.edges(data='source_provider', default='unknown'):
            stats['provider_distribution'][provider] = stats['provider_distribution'].get(provider, 0) + 1
        return stats

    def clear(self) -> None:
core/logger.py

@@ -42,7 +42,7 @@ class ForensicLogger:
    Maintains detailed audit trail of all reconnaissance activities.
    """

    def __init__(self, session_id: str = ""):
    def __init__(self, session_id: str = None):
        """
        Initialize forensic logger.

@@ -50,7 +50,7 @@
            session_id: Unique identifier for this reconnaissance session
        """
        self.session_id = session_id or self._generate_session_id()
        self.lock = threading.Lock()
        #self.lock = threading.Lock()

        # Initialize audit trail storage
        self.api_requests: List[APIRequest] = []
@@ -86,8 +86,6 @@ class ForensicLogger:
        # Remove the unpickleable 'logger' attribute
        if 'logger' in state:
            del state['logger']
        if 'lock' in state:
            del state['lock']
        return state

    def __setstate__(self, state):
@@ -103,7 +101,6 @@ class ForensicLogger:
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(formatter)
        self.logger.addHandler(console_handler)
        self.lock = threading.Lock()

    def _generate_session_id(self) -> str:
        """Generate unique session identifier."""
@@ -152,7 +149,7 @@ class ForensicLogger:

        # Log to standard logger
        if error:
            self.logger.error(f"API Request Failed.")
            self.logger.error(f"API Request Failed - {provider}: {url} - {error}")
        else:
            self.logger.info(f"API Request - {provider}: {url} - Status: {status_code}")

@@ -197,7 +194,7 @@ class ForensicLogger:
        self.logger.info(f"Scan Started - Target: {target_domain}, Depth: {recursion_depth}")
        self.logger.info(f"Enabled Providers: {', '.join(enabled_providers)}")

        self.session_metadata['target_domains'].update(target_domain)
        self.session_metadata['target_domains'].add(target_domain)

    def log_scan_complete(self) -> None:
        """Log the completion of a reconnaissance scan."""
@@ -206,6 +203,8 @@ class ForensicLogger:
        self.session_metadata['target_domains'] = list(self.session_metadata['target_domains'])

        self.logger.info(f"Scan Complete - Session: {self.session_id}")
        self.logger.info(f"Total API Requests: {self.session_metadata['total_requests']}")
        self.logger.info(f"Total Relationships: {self.session_metadata['total_relationships']}")

    def export_audit_trail(self) -> Dict[str, Any]:
        """
@@ -1,107 +0,0 @@
# dnsrecon-reduced/core/provider_result.py

"""
Unified data model for DNSRecon passive reconnaissance.
Standardizes the data structure across all providers to ensure consistent processing.
"""

from typing import Any, Optional, List, Dict
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class StandardAttribute:
    """A unified data structure for a single piece of information about a node."""
    target_node: str
    name: str
    value: Any
    type: str
    provider: str
    confidence: float
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: Optional[Dict[str, Any]] = field(default_factory=dict)

    def __post_init__(self):
        """Validate the attribute after initialization."""
        if not isinstance(self.confidence, (int, float)) or not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"Confidence must be between 0.0 and 1.0, got {self.confidence}")


@dataclass
class Relationship:
    """A unified data structure for a directional link between two nodes."""
    source_node: str
    target_node: str
    relationship_type: str
    confidence: float
    provider: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    raw_data: Optional[Dict[str, Any]] = field(default_factory=dict)

    def __post_init__(self):
        """Validate the relationship after initialization."""
        if not isinstance(self.confidence, (int, float)) or not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"Confidence must be between 0.0 and 1.0, got {self.confidence}")


@dataclass
class ProviderResult:
    """A container for all data returned by a provider from a single query."""
    attributes: List[StandardAttribute] = field(default_factory=list)
    relationships: List[Relationship] = field(default_factory=list)

    def add_attribute(self, target_node: str, name: str, value: Any, attr_type: str,
                      provider: str, confidence: float = 0.8,
                      metadata: Optional[Dict[str, Any]] = None) -> None:
        """Helper method to add an attribute to the result."""
        self.attributes.append(StandardAttribute(
            target_node=target_node,
            name=name,
            value=value,
            type=attr_type,
            provider=provider,
            confidence=confidence,
            metadata=metadata or {}
        ))

    def add_relationship(self, source_node: str, target_node: str, relationship_type: str,
                         provider: str, confidence: float = 0.8,
                         raw_data: Optional[Dict[str, Any]] = None) -> None:
        """Helper method to add a relationship to the result."""
        self.relationships.append(Relationship(
            source_node=source_node,
            target_node=target_node,
            relationship_type=relationship_type,
            confidence=confidence,
            provider=provider,
            raw_data=raw_data or {}
        ))

    def get_discovered_nodes(self) -> set:
        """Get all unique node identifiers discovered in this result."""
        nodes = set()

        # Add nodes from relationships
        for rel in self.relationships:
            nodes.add(rel.source_node)
            nodes.add(rel.target_node)

        # Add nodes from attributes
        for attr in self.attributes:
            nodes.add(attr.target_node)

        return nodes

    def get_relationship_count(self) -> int:
        """Get the total number of relationships in this result."""
        return len(self.relationships)

    def get_attribute_count(self) -> int:
        """Get the total number of attributes in this result."""
        return len(self.attributes)

    ##TODO
    #def is_large_entity(self, threshold: int) -> bool:
    #    """Check if this result qualifies as a large entity based on relationship count."""
    #    return self.get_relationship_count() > threshold
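For orientation, a minimal usage sketch of the result container this commit removes; the node names, values, and confidences below are illustrative only and not taken from the repository:

# Hypothetical caller - not part of the repository.
result = ProviderResult()
result.add_relationship(source_node="example.com", target_node="93.184.216.34",
                        relationship_type="dns_a_record", provider="dns", confidence=0.9)
result.add_attribute(target_node="example.com", name="cert_issuer", value="Example CA",
                     attr_type="string", provider="crtsh", confidence=0.7)
print(result.get_discovered_nodes())    # {'example.com', '93.184.216.34'}
print(result.get_relationship_count())  # 1
print(result.get_attribute_count())     # 1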
@@ -1,28 +0,0 @@
# dnsrecon-reduced/core/rate_limiter.py

import time


class GlobalRateLimiter:
    def __init__(self, redis_client):
        self.redis = redis_client

    def is_rate_limited(self, key, limit, period):
        """
        Check if a key is rate-limited.
        """
        now = time.time()
        key = f"rate_limit:{key}"

        # Remove old timestamps
        self.redis.zremrangebyscore(key, 0, now - period)

        # Check the count
        count = self.redis.zcard(key)
        if count >= limit:
            return True

        # Add new timestamp
        self.redis.zadd(key, {now: now})
        self.redis.expire(key, period)

        return False
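The removed limiter implements a sliding-window check on a Redis sorted set: request timestamps are the scores, entries older than the window are trimmed, and the remaining cardinality is compared with the limit. A hedged usage sketch (the provider name and the 60-requests-per-minute budget are illustrative):

# Hypothetical caller - not part of the repository.
import redis

limiter = GlobalRateLimiter(redis.StrictRedis())
if limiter.is_rate_limited("crtsh", limit=60, period=60):
    print("crtsh budget exhausted for this window, backing off")
else:
    print("request allowed")

Note that the timestamp doubles as the sorted-set member, so two calls landing on the exact same float timestamp would collapse into a single entry; a unique member (for example a UUID) with the timestamp as score would avoid that edge case.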
1228 core/scanner.py (file diff suppressed because it is too large)
@@ -1,20 +1,372 @@
"""
Per-session configuration management for DNSRecon.
Provides isolated configuration instances for each user session.
Enhanced per-session configuration management for DNSRecon.
Provides isolated configuration instances for each user session while supporting global caching.
"""

from config import Config
import os
from typing import Dict, Optional


class SessionConfig(Config):

class SessionConfig:
    """
    Session-specific configuration that inherits from global config
    but maintains isolated API keys and provider settings.
    Enhanced session-specific configuration that inherits from global config
    but maintains isolated API keys and provider settings while supporting global caching.
    """

    def __init__(self):
        """Initialize session config with global defaults."""
        super().__init__()
        """Initialize enhanced session config with global cache support."""
        # Copy all attributes from global config
        self.api_keys: Dict[str, Optional[str]] = {
            'shodan': None
        }

        # Default settings (copied from global config)
        self.default_recursion_depth = 2
        self.default_timeout = 30
        self.max_concurrent_requests = 5
        self.large_entity_threshold = 100

        # Enhanced rate limiting settings (per session)
        self.rate_limits = {
            'crtsh': 60,
            'shodan': 60,
            'dns': 100
        }

        # Enhanced provider settings (per session)
        self.enabled_providers = {
            'crtsh': True,
            'dns': True,
            'shodan': False
        }

        # Task-based execution settings
        self.task_retry_settings = {
            'max_retries': 3,
            'base_backoff_seconds': 1.0,
            'max_backoff_seconds': 60.0,
            'retry_on_rate_limit': True,
            'retry_on_connection_error': True,
            'retry_on_timeout': True
        }

        # Cache settings (global across all sessions)
        self.cache_settings = {
            'enabled': True,
            'expiry_hours': 12,
            'cache_base_dir': '.cache',
            'per_provider_directories': True,
            'thread_safe_operations': True
        }

        # Logging configuration
        self.log_level = 'INFO'
        self.log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

        # Flask configuration (shared)
        self.flask_host = '127.0.0.1'
        self.flask_port = 5000
        self.flask_debug = True

        # Session isolation settings
        self.session_isolation = {
            'enforce_single_session_per_user': True,
            'consolidate_session_data_on_replacement': True,
            'user_fingerprinting_enabled': True,
            'session_timeout_minutes': 60
        }

        # Circuit breaker settings for provider reliability
        self.circuit_breaker = {
            'enabled': True,
            'failure_threshold': 5,  # Failures before opening circuit
            'recovery_timeout_seconds': 300,  # 5 minutes before trying again
            'half_open_max_calls': 3  # Test calls when recovering
        }

    def set_api_key(self, provider: str, api_key: str) -> bool:
        """
        Set API key for a provider in this session.

        Args:
            provider: Provider name (shodan, etc)
            api_key: API key string (empty string to clear)

        Returns:
            bool: True if key was set successfully
        """
        if provider in self.api_keys:
            # Handle clearing of API keys
            if api_key and api_key.strip():
                self.api_keys[provider] = api_key.strip()
                self.enabled_providers[provider] = True
            else:
                self.api_keys[provider] = None
                self.enabled_providers[provider] = False
            return True
        return False

    def get_api_key(self, provider: str) -> Optional[str]:
        """
        Get API key for a provider in this session.

        Args:
            provider: Provider name

        Returns:
            API key or None if not set
        """
        return self.api_keys.get(provider)

    def is_provider_enabled(self, provider: str) -> bool:
        """
        Check if a provider is enabled in this session.

        Args:
            provider: Provider name

        Returns:
            bool: True if provider is enabled
        """
        return self.enabled_providers.get(provider, False)

    def get_rate_limit(self, provider: str) -> int:
        """
        Get rate limit for a provider in this session.

        Args:
            provider: Provider name

        Returns:
            Rate limit in requests per minute
        """
        return self.rate_limits.get(provider, 60)

    def get_task_retry_config(self) -> Dict[str, any]:
        """
        Get task retry configuration for this session.

        Returns:
            Dictionary with retry settings
        """
        return self.task_retry_settings.copy()

    def get_cache_config(self) -> Dict[str, any]:
        """
        Get cache configuration (global settings).

        Returns:
            Dictionary with cache settings
        """
        return self.cache_settings.copy()

    def is_circuit_breaker_enabled(self) -> bool:
        """Check if circuit breaker is enabled for provider reliability."""
        return self.circuit_breaker.get('enabled', True)

    def get_circuit_breaker_config(self) -> Dict[str, any]:
        """Get circuit breaker configuration."""
        return self.circuit_breaker.copy()

    def update_provider_settings(self, provider_updates: Dict[str, Dict[str, any]]) -> bool:
        """
        Update provider-specific settings in bulk.

        Args:
            provider_updates: Dictionary of provider -> settings updates

        Returns:
            bool: True if updates were applied successfully
        """
        try:
            for provider_name, updates in provider_updates.items():
                # Update rate limits
                if 'rate_limit' in updates:
                    self.rate_limits[provider_name] = updates['rate_limit']

                # Update enabled status
                if 'enabled' in updates:
                    self.enabled_providers[provider_name] = updates['enabled']

                # Update API key
                if 'api_key' in updates:
                    self.set_api_key(provider_name, updates['api_key'])

            return True
        except Exception as e:
            print(f"Error updating provider settings: {e}")
            return False

    def validate_configuration(self) -> Dict[str, any]:
        """
        Validate the current configuration and return validation results.

        Returns:
            Dictionary with validation results and any issues found
        """
        validation_result = {
            'valid': True,
            'warnings': [],
            'errors': [],
            'provider_status': {}
        }

        # Validate provider configurations
        for provider_name, enabled in self.enabled_providers.items():
            provider_status = {
                'enabled': enabled,
                'has_api_key': bool(self.api_keys.get(provider_name)),
                'rate_limit': self.rate_limits.get(provider_name, 60)
            }

            # Check for potential issues
            if enabled and provider_name in ['shodan'] and not provider_status['has_api_key']:
                validation_result['warnings'].append(
                    f"Provider '{provider_name}' is enabled but missing API key"
                )

            validation_result['provider_status'][provider_name] = provider_status

        # Validate task settings
        if self.task_retry_settings['max_retries'] > 10:
            validation_result['warnings'].append(
                f"High retry count ({self.task_retry_settings['max_retries']}) may cause long delays"
            )

        # Validate concurrent settings
        if self.max_concurrent_requests > 10:
            validation_result['warnings'].append(
                f"High concurrency ({self.max_concurrent_requests}) may overwhelm providers"
            )

        # Validate cache settings
        if not os.path.exists(self.cache_settings['cache_base_dir']):
            try:
                os.makedirs(self.cache_settings['cache_base_dir'], exist_ok=True)
            except Exception as e:
                validation_result['errors'].append(f"Cannot create cache directory: {e}")
                validation_result['valid'] = False

        return validation_result

    def load_from_env(self):
        """Load configuration from environment variables with enhanced validation."""
        # Load API keys from environment
        if os.getenv('SHODAN_API_KEY') and not self.api_keys['shodan']:
            self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))
            print("Loaded Shodan API key from environment")

        # Override default settings from environment
        self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
        self.default_timeout = int(os.getenv('DEFAULT_TIMEOUT', '30'))
        self.max_concurrent_requests = int(os.getenv('MAX_CONCURRENT_REQUESTS', '5'))

        # Load task retry settings from environment
        if os.getenv('TASK_MAX_RETRIES'):
            self.task_retry_settings['max_retries'] = int(os.getenv('TASK_MAX_RETRIES'))

        if os.getenv('TASK_BASE_BACKOFF'):
            self.task_retry_settings['base_backoff_seconds'] = float(os.getenv('TASK_BASE_BACKOFF'))

        # Load cache settings from environment
        if os.getenv('CACHE_EXPIRY_HOURS'):
            self.cache_settings['expiry_hours'] = int(os.getenv('CACHE_EXPIRY_HOURS'))

        if os.getenv('CACHE_DISABLED'):
            self.cache_settings['enabled'] = os.getenv('CACHE_DISABLED').lower() != 'true'

        # Load circuit breaker settings
        if os.getenv('CIRCUIT_BREAKER_DISABLED'):
            self.circuit_breaker['enabled'] = os.getenv('CIRCUIT_BREAKER_DISABLED').lower() != 'true'

        # Flask settings
        self.flask_debug = os.getenv('FLASK_DEBUG', 'True').lower() == 'true'

        print("Enhanced configuration loaded from environment")

    def export_config_summary(self) -> Dict[str, any]:
        """
        Export a summary of the current configuration for debugging/logging.

        Returns:
            Dictionary with configuration summary (API keys redacted)
        """
        return {
            'providers': {
                provider: {
                    'enabled': self.enabled_providers.get(provider, False),
                    'has_api_key': bool(self.api_keys.get(provider)),
                    'rate_limit': self.rate_limits.get(provider, 60)
                }
                for provider in self.enabled_providers.keys()
            },
            'task_settings': {
                'max_retries': self.task_retry_settings['max_retries'],
                'max_concurrent_requests': self.max_concurrent_requests,
                'large_entity_threshold': self.large_entity_threshold
            },
            'cache_settings': {
                'enabled': self.cache_settings['enabled'],
                'expiry_hours': self.cache_settings['expiry_hours'],
                'base_directory': self.cache_settings['cache_base_dir']
            },
            'session_settings': {
                'isolation_enabled': self.session_isolation['enforce_single_session_per_user'],
                'consolidation_enabled': self.session_isolation['consolidate_session_data_on_replacement'],
                'timeout_minutes': self.session_isolation['session_timeout_minutes']
            },
            'circuit_breaker': {
                'enabled': self.circuit_breaker['enabled'],
                'failure_threshold': self.circuit_breaker['failure_threshold'],
                'recovery_timeout': self.circuit_breaker['recovery_timeout_seconds']
            }
        }


def create_session_config() -> 'SessionConfig':
    """Create a new session configuration instance."""
    return SessionConfig()


def create_session_config() -> SessionConfig:
    """
    Create a new enhanced session configuration instance.

    Returns:
        Configured SessionConfig instance
    """
    session_config = SessionConfig()
    session_config.load_from_env()

    # Validate configuration and log any issues
    validation = session_config.validate_configuration()
    if validation['warnings']:
        print("Configuration warnings:")
        for warning in validation['warnings']:
            print(f"  WARNING: {warning}")

    if validation['errors']:
        print("Configuration errors:")
        for error in validation['errors']:
            print(f"  ERROR: {error}")

    if not validation['valid']:
        raise ValueError("Configuration validation failed - see errors above")

    print(f"Enhanced session configuration created successfully")
    return session_config


def create_test_config() -> SessionConfig:
    """
    Create a test configuration with safe defaults for testing.

    Returns:
        Test-safe SessionConfig instance
    """
    test_config = SessionConfig()

    # Override settings for testing
    test_config.max_concurrent_requests = 2
    test_config.task_retry_settings['max_retries'] = 1
    test_config.task_retry_settings['base_backoff_seconds'] = 0.1
    test_config.cache_settings['expiry_hours'] = 1
    test_config.session_isolation['session_timeout_minutes'] = 10

    print("Test configuration created")
    return test_config
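A minimal sketch of how the rewritten config is intended to be consumed; the environment value below is an assumption for illustration only:

# Hypothetical caller - not part of the repository.
import os
os.environ['SHODAN_API_KEY'] = 'illustrative-key'  # picked up by load_from_env()

cfg = create_session_config()               # loads env vars, validates, prints warnings
print(cfg.is_provider_enabled('shodan'))    # True once a key is present
print(cfg.get_rate_limit('crtsh'))          # 60 requests/minute by default
summary = cfg.export_config_summary()       # API keys are redacted in the summary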
@@ -5,42 +5,154 @@ import time
import uuid
import redis
import pickle
from typing import Dict, Optional, Any
import hashlib
from typing import Dict, Optional, Any, List, Tuple

from core.scanner import Scanner
from config import config


class UserIdentifier:
    """Handles user identification for session management."""

    @staticmethod
    def generate_user_fingerprint(client_ip: str, user_agent: str) -> str:
        """
        Generate a unique fingerprint for a user based on IP and User-Agent.

        Args:
            client_ip: Client IP address
            user_agent: User-Agent header value

        Returns:
            Unique user fingerprint hash
        """
        # Create deterministic user identifier
        user_data = f"{client_ip}:{user_agent[:100]}"  # Limit UA to 100 chars
        fingerprint = hashlib.sha256(user_data.encode()).hexdigest()[:16]  # 16 char fingerprint
        return f"user_{fingerprint}"

    @staticmethod
    def extract_request_info(request) -> Tuple[str, str]:
        """
        Extract client IP and User-Agent from Flask request.

        Args:
            request: Flask request object

        Returns:
            Tuple of (client_ip, user_agent)
        """
        # Handle proxy headers for real IP
        client_ip = request.headers.get('X-Forwarded-For', '').split(',')[0].strip()
        if not client_ip:
            client_ip = request.headers.get('X-Real-IP', '')
        if not client_ip:
            client_ip = request.remote_addr or 'unknown'

        user_agent = request.headers.get('User-Agent', 'unknown')

        return client_ip, user_agent

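The fingerprint is deterministic, so repeat visits from the same IP and browser resolve to the same user slot. A small sketch (the address and the hash shown are illustrative, not real outputs):

# Hypothetical caller - not part of the repository.
fp = UserIdentifier.generate_user_fingerprint("203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)")
print(fp)  # e.g. "user_3f1a9c0d2b7e4a18" - always the same for this IP/User-Agent pair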
class SessionConsolidator:
    """Handles consolidation of session data when replacing sessions."""

    @staticmethod
    def consolidate_scanner_data(old_scanner: 'Scanner', new_scanner: 'Scanner') -> 'Scanner':
        """
        Consolidate useful data from old scanner into new scanner.

        Args:
            old_scanner: Scanner from terminated session
            new_scanner: New scanner instance

        Returns:
            Enhanced new scanner with consolidated data
        """
        try:
            # Consolidate graph data if old scanner has valuable data
            if old_scanner and hasattr(old_scanner, 'graph') and old_scanner.graph:
                old_stats = old_scanner.graph.get_statistics()
                if old_stats['basic_metrics']['total_nodes'] > 0:
                    print(f"Consolidating graph data: {old_stats['basic_metrics']['total_nodes']} nodes, {old_stats['basic_metrics']['total_edges']} edges")

                    # Transfer nodes and edges to new scanner's graph
                    for node_id, node_data in old_scanner.graph.graph.nodes(data=True):
                        # Add node to new graph with all attributes
                        new_scanner.graph.graph.add_node(node_id, **node_data)

                    for source, target, edge_data in old_scanner.graph.graph.edges(data=True):
                        # Add edge to new graph with all attributes
                        new_scanner.graph.graph.add_edge(source, target, **edge_data)

                    # Update correlation index
                    if hasattr(old_scanner.graph, 'correlation_index'):
                        new_scanner.graph.correlation_index = old_scanner.graph.correlation_index.copy()

                    # Update timestamps
                    new_scanner.graph.creation_time = old_scanner.graph.creation_time
                    new_scanner.graph.last_modified = old_scanner.graph.last_modified

            # Consolidate provider statistics
            if old_scanner and hasattr(old_scanner, 'providers') and old_scanner.providers:
                for old_provider in old_scanner.providers:
                    # Find matching provider in new scanner
                    matching_new_provider = None
                    for new_provider in new_scanner.providers:
                        if new_provider.get_name() == old_provider.get_name():
                            matching_new_provider = new_provider
                            break

                    if matching_new_provider:
                        # Transfer cumulative statistics
                        matching_new_provider.total_requests += old_provider.total_requests
                        matching_new_provider.successful_requests += old_provider.successful_requests
                        matching_new_provider.failed_requests += old_provider.failed_requests
                        matching_new_provider.total_relationships_found += old_provider.total_relationships_found

                        # Transfer cache statistics if available
                        if hasattr(old_provider, 'cache_hits'):
                            matching_new_provider.cache_hits += getattr(old_provider, 'cache_hits', 0)
                            matching_new_provider.cache_misses += getattr(old_provider, 'cache_misses', 0)

                        print(f"Consolidated {old_provider.get_name()} provider stats: {old_provider.total_requests} requests")

            return new_scanner

        except Exception as e:
            print(f"Warning: Error during session consolidation: {e}")
            return new_scanner


class SessionManager:
    """
    FIXED: Manages multiple scanner instances for concurrent user sessions using Redis.
    Now more conservative about session creation to preserve API keys and configuration.
    Manages single scanner session per user using Redis with user identification.
    Enforces one active session per user for consistent state management.
    """

    def __init__(self, session_timeout_minutes: int = 0):
    def __init__(self, session_timeout_minutes: int = 60):
        """
        Initialize session manager with a Redis backend.
        Initialize session manager with Redis backend and user tracking.
        """
        if session_timeout_minutes is None:
            session_timeout_minutes = config.session_timeout_minutes

        self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
        self.session_timeout = session_timeout_minutes * 60  # Convert to seconds
        self.lock = threading.Lock()

        # FIXED: Add a creation lock to prevent race conditions
        self.creation_lock = threading.Lock()
        # User identification helper
        self.user_identifier = UserIdentifier()
        self.consolidator = SessionConsolidator()

        # Start cleanup thread
        self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
        self.cleanup_thread.start()

        print(f"SessionManager initialized with Redis backend and {session_timeout_minutes}min timeout")
        print(f"SessionManager initialized with Redis backend, user tracking, and {session_timeout_minutes}min timeout")

    def __getstate__(self):
        """Prepare SessionManager for pickling."""
        state = self.__dict__.copy()
        # Exclude unpickleable attributes - Redis client and threading objects
        unpicklable_attrs = ['lock', 'cleanup_thread', 'redis_client', 'creation_lock']
        # Exclude unpickleable attributes
        unpicklable_attrs = ['lock', 'cleanup_thread', 'redis_client']
        for attr in unpicklable_attrs:
            if attr in state:
                del state[attr]
@@ -50,77 +162,115 @@ class SessionManager:
        """Restore SessionManager after unpickling."""
        self.__dict__.update(state)
        # Re-initialize unpickleable attributes
        import redis
        self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
        self.lock = threading.Lock()
        self.creation_lock = threading.Lock()
        self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
        self.cleanup_thread.start()

    def _get_session_key(self, session_id: str) -> str:
        """Generates the Redis key for a session."""
        """Generate Redis key for a session."""
        return f"dnsrecon:session:{session_id}"

    def _get_user_session_key(self, user_fingerprint: str) -> str:
        """Generate Redis key for user -> session mapping."""
        return f"dnsrecon:user:{user_fingerprint}"

    def _get_stop_signal_key(self, session_id: str) -> str:
        """Generates the Redis key for a session's stop signal."""
        """Generate Redis key for session stop signal."""
        return f"dnsrecon:stop:{session_id}"

    def create_session(self) -> str:
    def create_or_replace_user_session(self, client_ip: str, user_agent: str) -> str:
        """
        FIXED: Create a new user session with thread-safe creation to prevent duplicates.
        """
        # FIXED: Use creation lock to prevent race conditions
        with self.creation_lock:
            session_id = str(uuid.uuid4())
            print(f"=== CREATING SESSION {session_id} IN REDIS ===")

            try:
                from core.session_config import create_session_config
                session_config = create_session_config()
                scanner_instance = Scanner(session_config=session_config)

                # Set the session ID on the scanner for cross-process stop signal management
                scanner_instance.session_id = session_id

                session_data = {
                    'scanner': scanner_instance,
                    'config': session_config,
                    'created_at': time.time(),
                    'last_activity': time.time(),
                    'status': 'active'
                }

                # Serialize the entire session data dictionary using pickle
                serialized_data = pickle.dumps(session_data)

                # Store in Redis
                session_key = self._get_session_key(session_id)
                self.redis_client.setex(session_key, self.session_timeout, serialized_data)

                # Initialize stop signal as False
                stop_key = self._get_stop_signal_key(session_id)
                self.redis_client.setex(stop_key, self.session_timeout, b'0')

                print(f"Session {session_id} stored in Redis with stop signal initialized")
                print(f"Session has {len(scanner_instance.providers)} providers: {[p.get_name() for p in scanner_instance.providers]}")
                return session_id

            except Exception as e:
                print(f"ERROR: Failed to create session {session_id}: {e}")
                raise

    def set_stop_signal(self, session_id: str) -> bool:
        """
        Set the stop signal for a session (cross-process safe).
        Create new session for user, replacing any existing session.
        Consolidates data from previous session if it exists.

        Args:
            session_id: Session identifier
            client_ip: Client IP address
            user_agent: User-Agent header

        Returns:
            bool: True if signal was set successfully
            New session ID
        """
        user_fingerprint = self.user_identifier.generate_user_fingerprint(client_ip, user_agent)
        new_session_id = str(uuid.uuid4())

        print(f"=== CREATING/REPLACING SESSION FOR USER {user_fingerprint} ===")

        try:
            # Check for existing user session
            existing_session_id = self._get_user_current_session(user_fingerprint)
            old_scanner = None

            if existing_session_id:
                print(f"Found existing session {existing_session_id} for user {user_fingerprint}")
                # Get old scanner data for consolidation
                old_scanner = self.get_session(existing_session_id)
                # Terminate old session
                self._terminate_session_internal(existing_session_id, cleanup_user_mapping=False)
                print(f"Terminated old session {existing_session_id}")

            # Create new session config and scanner
            from core.session_config import create_session_config
            session_config = create_session_config()
            new_scanner = Scanner(session_config=session_config)

            # Set session ID on scanner for cross-process operations
            new_scanner.session_id = new_session_id

            # Consolidate data from old session if available
            if old_scanner:
                new_scanner = self.consolidator.consolidate_scanner_data(old_scanner, new_scanner)
                print(f"Consolidated data from previous session")

            # Create session data
            session_data = {
                'scanner': new_scanner,
                'config': session_config,
                'created_at': time.time(),
                'last_activity': time.time(),
                'status': 'active',
                'user_fingerprint': user_fingerprint,
                'client_ip': client_ip,
                'user_agent': user_agent[:200]  # Truncate for storage
            }

            # Store session in Redis
            session_key = self._get_session_key(new_session_id)
            serialized_data = pickle.dumps(session_data)
            self.redis_client.setex(session_key, self.session_timeout, serialized_data)

            # Update user -> session mapping
            user_session_key = self._get_user_session_key(user_fingerprint)
            self.redis_client.setex(user_session_key, self.session_timeout, new_session_id.encode('utf-8'))

            # Initialize stop signal
            stop_key = self._get_stop_signal_key(new_session_id)
            self.redis_client.setex(stop_key, self.session_timeout, b'0')

            print(f"Created new session {new_session_id} for user {user_fingerprint}")
            return new_session_id

        except Exception as e:
            print(f"ERROR: Failed to create session for user {user_fingerprint}: {e}")
            raise

    def _get_user_current_session(self, user_fingerprint: str) -> Optional[str]:
        """Get current session ID for a user."""
        try:
            user_session_key = self._get_user_session_key(user_fingerprint)
            session_id_bytes = self.redis_client.get(user_session_key)
            if session_id_bytes:
                return session_id_bytes.decode('utf-8')
            return None
        except Exception as e:
            print(f"Error getting user session: {e}")
            return None

    def set_stop_signal(self, session_id: str) -> bool:
        """Set stop signal for session (cross-process safe)."""
        try:
            stop_key = self._get_stop_signal_key(session_id)
            # Set stop signal to '1' with the same TTL as the session
            self.redis_client.setex(stop_key, self.session_timeout, b'1')
            print(f"Stop signal set for session {session_id}")
            return True
@@ -129,15 +279,7 @@ class SessionManager:
            return False

    def is_stop_requested(self, session_id: str) -> bool:
        """
        Check if stop is requested for a session (cross-process safe).

        Args:
            session_id: Session identifier

        Returns:
            bool: True if stop is requested
        """
        """Check if stop is requested for session (cross-process safe)."""
        try:
            stop_key = self._get_stop_signal_key(session_id)
            value = self.redis_client.get(stop_key)
@@ -147,15 +289,7 @@ class SessionManager:
            return False

    def clear_stop_signal(self, session_id: str) -> bool:
        """
        Clear the stop signal for a session.

        Args:
            session_id: Session identifier

        Returns:
            bool: True if signal was cleared successfully
        """
        """Clear stop signal for session."""
        try:
            stop_key = self._get_stop_signal_key(session_id)
            self.redis_client.setex(stop_key, self.session_timeout, b'0')
@@ -166,13 +300,13 @@ class SessionManager:
            return False

    def _get_session_data(self, session_id: str) -> Optional[Dict[str, Any]]:
        """Retrieves and deserializes session data from Redis."""
        """Retrieve and deserialize session data from Redis."""
        try:
            session_key = self._get_session_key(session_id)
            serialized_data = self.redis_client.get(session_key)
            if serialized_data:
                session_data = pickle.loads(serialized_data)
                # Ensure the scanner has the correct session ID for stop signal checking
                # Ensure scanner has correct session ID
                if 'scanner' in session_data and session_data['scanner']:
                    session_data['scanner'].session_id = session_id
                return session_data
@@ -182,47 +316,35 @@ class SessionManager:
            return None

    def _save_session_data(self, session_id: str, session_data: Dict[str, Any]) -> bool:
        """
        Serializes and saves session data back to Redis with updated TTL.

        Returns:
            bool: True if save was successful
        """
        """Serialize and save session data to Redis with updated TTL."""
        try:
            session_key = self._get_session_key(session_id)
            serialized_data = pickle.dumps(session_data)
            result = self.redis_client.setex(session_key, self.session_timeout, serialized_data)

            # Also refresh user mapping TTL if available
            if 'user_fingerprint' in session_data:
                user_session_key = self._get_user_session_key(session_data['user_fingerprint'])
                self.redis_client.setex(user_session_key, self.session_timeout, session_id.encode('utf-8'))

            return result
        except Exception as e:
            print(f"ERROR: Failed to save session data for {session_id}: {e}")
            return False

    def update_session_scanner(self, session_id: str, scanner: 'Scanner') -> bool:
        """
        Updates just the scanner object in a session with immediate persistence.

        Returns:
            bool: True if update was successful
        """
        """Update scanner object in session with immediate persistence."""
        try:
            session_data = self._get_session_data(session_id)
            if session_data:
                # Ensure scanner has the session ID
                # Ensure scanner has session ID
                scanner.session_id = session_id
                session_data['scanner'] = scanner
                session_data['last_activity'] = time.time()

                # Immediately save to Redis for GUI updates
                success = self._save_session_data(session_id, session_data)
                if success:
                    # Only log occasionally to reduce noise
                    if hasattr(self, '_last_update_log'):
                        if time.time() - self._last_update_log > 5:  # Log every 5 seconds max
                            #print(f"Scanner state updated for session {session_id} (status: {scanner.status})")
                            self._last_update_log = time.time()
                    else:
                        #print(f"Scanner state updated for session {session_id} (status: {scanner.status})")
                        self._last_update_log = time.time()
                    print(f"Scanner state updated for session {session_id} (status: {scanner.status})")
                else:
                    print(f"WARNING: Failed to save scanner state for session {session_id}")
                return success
@@ -234,16 +356,7 @@ class SessionManager:
            return False

    def update_scanner_status(self, session_id: str, status: str) -> bool:
        """
        Quickly update just the scanner status for immediate GUI feedback.

        Args:
            session_id: Session identifier
            status: New scanner status

        Returns:
            bool: True if update was successful
        """
        """Quickly update scanner status for immediate GUI feedback."""
        try:
            session_data = self._get_session_data(session_id)
            if session_data and 'scanner' in session_data:
@@ -262,9 +375,7 @@ class SessionManager:
            return False

    def get_session(self, session_id: str) -> Optional[Scanner]:
        """
        Get scanner instance for a session from Redis with session ID management.
        """
        """Get scanner instance for session with session ID management."""
        if not session_id:
            return None

@@ -279,21 +390,13 @@ class SessionManager:

        scanner = session_data.get('scanner')
        if scanner:
            # Ensure the scanner can check the Redis-based stop signal
            # Ensure scanner can check Redis-based stop signal
            scanner.session_id = session_id

        return scanner

    def get_session_status_only(self, session_id: str) -> Optional[str]:
        """
        Get just the scanner status without full session retrieval (for performance).

        Args:
            session_id: Session identifier

        Returns:
            Scanner status string or None if not found
        """
        """Get scanner status without full session retrieval (for performance)."""
        try:
            session_data = self._get_session_data(session_id)
            if session_data and 'scanner' in session_data:
@@ -304,16 +407,18 @@ class SessionManager:
            return None

    def terminate_session(self, session_id: str) -> bool:
        """
        Terminate a specific session in Redis with reliable stop signal and immediate status update.
        """
        """Terminate specific session with reliable stop signal and immediate status update."""
        return self._terminate_session_internal(session_id, cleanup_user_mapping=True)

    def _terminate_session_internal(self, session_id: str, cleanup_user_mapping: bool = True) -> bool:
        """Internal session termination with configurable user mapping cleanup."""
        print(f"=== TERMINATING SESSION {session_id} ===")

        try:
            # First, set the stop signal
            # Set stop signal first
            self.set_stop_signal(session_id)

            # Update scanner status to stopped immediately for GUI feedback
            # Update scanner status immediately for GUI feedback
            self.update_scanner_status(session_id, 'stopped')

            session_data = self._get_session_data(session_id)
@@ -324,16 +429,19 @@ class SessionManager:
            scanner = session_data.get('scanner')
            if scanner and scanner.status == 'running':
                print(f"Stopping scan for session: {session_id}")
                # The scanner will check the Redis stop signal
                scanner.stop_scan()

                # Update the scanner state immediately
                self.update_session_scanner(session_id, scanner)

                # Wait a moment for graceful shutdown
                # Wait for graceful shutdown
                time.sleep(0.5)

            # Delete session data and stop signal from Redis
            # Clean up user mapping if requested
            if cleanup_user_mapping and 'user_fingerprint' in session_data:
                user_session_key = self._get_user_session_key(session_data['user_fingerprint'])
                self.redis_client.delete(user_session_key)
                print(f"Cleaned up user mapping for {session_data['user_fingerprint']}")

            # Delete session data and stop signal
            session_key = self._get_session_key(session_id)
            stop_key = self._get_stop_signal_key(session_id)
            self.redis_client.delete(session_key)
@@ -347,35 +455,72 @@ class SessionManager:
            return False

    def _cleanup_loop(self) -> None:
        """
        Background thread to cleanup inactive sessions and orphaned stop signals.
        """
        """Background thread to cleanup inactive sessions and orphaned signals."""
        while True:
            try:
                # Clean up orphaned stop signals
                stop_keys = self.redis_client.keys("dnsrecon:stop:*")
                for stop_key in stop_keys:
                    # Extract session ID from stop key
                    session_id = stop_key.decode('utf-8').split(':')[-1]
                    session_key = self._get_session_key(session_id)

                    # If session doesn't exist but stop signal does, clean it up
                    if not self.redis_client.exists(session_key):
                        self.redis_client.delete(stop_key)
                        print(f"Cleaned up orphaned stop signal for session {session_id}")

                # Clean up orphaned user mappings
                user_keys = self.redis_client.keys("dnsrecon:user:*")
                for user_key in user_keys:
                    session_id_bytes = self.redis_client.get(user_key)
                    if session_id_bytes:
                        session_id = session_id_bytes.decode('utf-8')
                        session_key = self._get_session_key(session_id)

                        if not self.redis_client.exists(session_key):
                            self.redis_client.delete(user_key)
                            print(f"Cleaned up orphaned user mapping for session {session_id}")

            except Exception as e:
                print(f"Error in cleanup loop: {e}")

            time.sleep(300)  # Sleep for 5 minutes

    def list_active_sessions(self) -> List[Dict[str, Any]]:
        """List all active sessions for admin purposes."""
        try:
            session_keys = self.redis_client.keys("dnsrecon:session:*")
            sessions = []

            for session_key in session_keys:
                session_id = session_key.decode('utf-8').split(':')[-1]
                session_data = self._get_session_data(session_id)

                if session_data:
                    scanner = session_data.get('scanner')
                    sessions.append({
                        'session_id': session_id,
                        'user_fingerprint': session_data.get('user_fingerprint', 'unknown'),
                        'client_ip': session_data.get('client_ip', 'unknown'),
                        'created_at': session_data.get('created_at'),
                        'last_activity': session_data.get('last_activity'),
                        'scanner_status': scanner.status if scanner else 'unknown',
                        'current_target': scanner.current_target if scanner else None
                    })

            return sessions
        except Exception as e:
            print(f"ERROR: Failed to list active sessions: {e}")
            return []

    def get_statistics(self) -> Dict[str, Any]:
        """Get session manager statistics."""
        try:
            session_keys = self.redis_client.keys("dnsrecon:session:*")
            user_keys = self.redis_client.keys("dnsrecon:user:*")
            stop_keys = self.redis_client.keys("dnsrecon:stop:*")

            active_sessions = len(session_keys)
            unique_users = len(user_keys)
            running_scans = 0

            for session_key in session_keys:
@@ -386,16 +531,46 @@ class SessionManager:

            return {
                'total_active_sessions': active_sessions,
                'unique_users': unique_users,
                'running_scans': running_scans,
                'total_stop_signals': len(stop_keys)
                'total_stop_signals': len(stop_keys),
                'average_sessions_per_user': round(active_sessions / unique_users, 2) if unique_users > 0 else 0
            }
        except Exception as e:
            print(f"ERROR: Failed to get statistics: {e}")
            return {
                'total_active_sessions': 0,
                'unique_users': 0,
                'running_scans': 0,
                'total_stop_signals': 0
                'total_stop_signals': 0,
                'average_sessions_per_user': 0
            }

    def get_session_info(self, session_id: str) -> Dict[str, Any]:
        """Get detailed information about a specific session."""
        try:
            session_data = self._get_session_data(session_id)
            if not session_data:
                return {'error': 'Session not found'}

            scanner = session_data.get('scanner')

            return {
                'session_id': session_id,
                'user_fingerprint': session_data.get('user_fingerprint', 'unknown'),
                'client_ip': session_data.get('client_ip', 'unknown'),
                'user_agent': session_data.get('user_agent', 'unknown'),
                'created_at': session_data.get('created_at'),
                'last_activity': session_data.get('last_activity'),
                'status': session_data.get('status'),
                'scanner_status': scanner.status if scanner else 'unknown',
                'current_target': scanner.current_target if scanner else None,
                'session_age_minutes': round((time.time() - session_data.get('created_at', time.time())) / 60, 1)
            }
        except Exception as e:
            print(f"ERROR: Failed to get session info for {session_id}: {e}")
            return {'error': f'Failed to get session info: {str(e)}'}


# Global session manager instance
session_manager = SessionManager(session_timeout_minutes=60)
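Putting the pieces together, the expected flow from a Flask request handler would look roughly like this; the route name is an assumption, `app` is assumed to be the existing Flask application, and error handling is omitted:

# Hypothetical Flask route - not part of the repository.
from flask import request

@app.route('/api/scan/start', methods=['POST'])
def start_scan():
    client_ip, user_agent = UserIdentifier.extract_request_info(request)
    session_id = session_manager.create_or_replace_user_session(client_ip, user_agent)
    scanner = session_manager.get_session(session_id)
    # ... start the scan on `scanner`, then persist its updated state back to Redis ...
    session_manager.update_session_scanner(session_id, scanner)
    return {'session_id': session_id}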
564 core/task_manager.py (new file)
@@ -0,0 +1,564 @@
# dnsrecon/core/task_manager.py

import threading
import time
import uuid
from enum import Enum
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Any, Set
from datetime import datetime, timezone, timedelta
from collections import deque

from utils.helpers import _is_valid_ip, _is_valid_domain


class TaskStatus(Enum):
    """Enumeration of task execution statuses."""
    PENDING = "pending"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED_RETRYING = "failed_retrying"
    FAILED_PERMANENT = "failed_permanent"
    CANCELLED = "cancelled"


class TaskType(Enum):
    """Enumeration of task types for provider queries."""
    DOMAIN_QUERY = "domain_query"
    IP_QUERY = "ip_query"
    GRAPH_UPDATE = "graph_update"


@dataclass
class TaskResult:
    """Result of a task execution."""
    success: bool
    data: Optional[Any] = None
    error: Optional[str] = None
    metadata: Dict[str, Any] = field(default_factory=dict)


@dataclass
class ReconTask:
    """Represents a single reconnaissance task with retry logic."""
    task_id: str
    task_type: TaskType
    target: str
    provider_name: str
    depth: int
    status: TaskStatus = TaskStatus.PENDING
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Retry configuration
    max_retries: int = 3
    current_retry: int = 0
    base_backoff_seconds: float = 1.0
    max_backoff_seconds: float = 60.0

    # Execution tracking
    last_attempt_at: Optional[datetime] = None
    next_retry_at: Optional[datetime] = None
    execution_history: List[Dict[str, Any]] = field(default_factory=list)

    # Results
    result: Optional[TaskResult] = None

    def __post_init__(self):
        """Initialize additional fields after creation."""
        if not self.task_id:
            self.task_id = str(uuid.uuid4())[:8]

    def calculate_next_retry_time(self) -> datetime:
        """Calculate next retry time with exponential backoff and jitter."""
        if self.current_retry >= self.max_retries:
            return None

        # Exponential backoff with jitter
        backoff_time = min(
            self.max_backoff_seconds,
            self.base_backoff_seconds * (2 ** self.current_retry)
        )

        # Add jitter (roughly +/- 12.5% of the backoff time)
        jitter = backoff_time * 0.25 * (0.5 - hash(self.task_id) % 1000 / 1000.0)
        final_backoff = max(self.base_backoff_seconds, backoff_time + jitter)

        return datetime.now(timezone.utc) + timedelta(seconds=final_backoff)

    def should_retry(self) -> bool:
        """Determine if task should be retried based on status and retry count."""
        if self.status != TaskStatus.FAILED_RETRYING:
            return False
        if self.current_retry >= self.max_retries:
            return False
        if self.next_retry_at and datetime.now(timezone.utc) < self.next_retry_at:
            return False
        return True

    def mark_failed(self, error: str, metadata: Dict[str, Any] = None):
        """Mark task as failed and prepare for retry or permanent failure."""
        self.current_retry += 1
        self.last_attempt_at = datetime.now(timezone.utc)

        # Record execution history
        execution_record = {
            'attempt': self.current_retry,
            'timestamp': self.last_attempt_at.isoformat(),
            'error': error,
            'metadata': metadata or {}
        }
        self.execution_history.append(execution_record)

        if self.current_retry >= self.max_retries:
            self.status = TaskStatus.FAILED_PERMANENT
            self.result = TaskResult(success=False, error=f"Permanent failure after {self.max_retries} attempts: {error}")
        else:
            self.status = TaskStatus.FAILED_RETRYING
            self.next_retry_at = self.calculate_next_retry_time()

    def mark_succeeded(self, data: Any = None, metadata: Dict[str, Any] = None):
        """Mark task as successfully completed."""
        self.status = TaskStatus.SUCCEEDED
        self.last_attempt_at = datetime.now(timezone.utc)
        self.result = TaskResult(success=True, data=data, metadata=metadata or {})

        # Record successful execution
        execution_record = {
            'attempt': self.current_retry + 1,
            'timestamp': self.last_attempt_at.isoformat(),
            'success': True,
            'metadata': metadata or {}
        }
        self.execution_history.append(execution_record)

    def get_summary(self) -> Dict[str, Any]:
        """Get task summary for progress reporting."""
        return {
            'task_id': self.task_id,
            'task_type': self.task_type.value,
            'target': self.target,
            'provider': self.provider_name,
            'status': self.status.value,
            'current_retry': self.current_retry,
            'max_retries': self.max_retries,
            'created_at': self.created_at.isoformat(),
            'last_attempt_at': self.last_attempt_at.isoformat() if self.last_attempt_at else None,
            'next_retry_at': self.next_retry_at.isoformat() if self.next_retry_at else None,
            'total_attempts': len(self.execution_history),
            'has_result': self.result is not None
        }

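A worked example of the retry schedule under the defaults (base_backoff_seconds=1.0, max_backoff_seconds=60.0, jitter ignored): the first failure sets current_retry to 1 and schedules the retry about 2 s later (1.0 * 2^1), the second failure waits about 4 s, and the third failure hits max_retries=3 and becomes permanent. The jitter term spreads each wait by roughly plus or minus 12.5% so tasks that fail together do not retry in lockstep.

# Hypothetical illustration - not part of the repository.
task = ReconTask(task_id="", task_type=TaskType.DOMAIN_QUERY, target="example.com",
                 provider_name="crtsh", depth=1)
task.mark_failed("HTTP 429 from provider")
print(task.status)         # TaskStatus.FAILED_RETRYING
print(task.next_retry_at)  # roughly 2 seconds from now, +/- jitter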
class TaskQueue:
    """Thread-safe task queue with retry logic and priority handling."""

    def __init__(self, max_concurrent_tasks: int = 5):
        """Initialize task queue."""
        self.max_concurrent_tasks = max_concurrent_tasks
        self.tasks: Dict[str, ReconTask] = {}
        self.pending_queue = deque()
        self.retry_queue = deque()
        self.running_tasks: Set[str] = set()

        self._lock = threading.Lock()
        self._stop_event = threading.Event()

    def __getstate__(self):
        """Prepare TaskQueue for pickling by excluding unpicklable objects."""
        state = self.__dict__.copy()
        # Exclude the unpickleable '_lock' and '_stop_event' attributes
        if '_lock' in state:
            del state['_lock']
        if '_stop_event' in state:
            del state['_stop_event']
        return state

    def __setstate__(self, state):
        """Restore TaskQueue after unpickling by reconstructing threading objects."""
        self.__dict__.update(state)
        # Re-initialize the '_lock' and '_stop_event' attributes
        self._lock = threading.Lock()
        self._stop_event = threading.Event()

    def add_task(self, task: ReconTask) -> str:
        """Add task to queue."""
        with self._lock:
            self.tasks[task.task_id] = task
            self.pending_queue.append(task.task_id)
            print(f"Added task {task.task_id}: {task.provider_name} query for {task.target}")
            return task.task_id

    def get_next_ready_task(self) -> Optional[ReconTask]:
        """Get next task ready for execution."""
        with self._lock:
            # Check if we have room for more concurrent tasks
            if len(self.running_tasks) >= self.max_concurrent_tasks:
                return None

            # First priority: retry queue (tasks ready for retry)
            while self.retry_queue:
                task_id = self.retry_queue.popleft()
                if task_id in self.tasks:
                    task = self.tasks[task_id]
                    if task.should_retry():
                        task.status = TaskStatus.RUNNING
                        self.running_tasks.add(task_id)
                        print(f"Retrying task {task_id} (attempt {task.current_retry + 1})")
                        return task

            # Second priority: pending queue (new tasks)
            while self.pending_queue:
                task_id = self.pending_queue.popleft()
                if task_id in self.tasks:
                    task = self.tasks[task_id]
                    if task.status == TaskStatus.PENDING:
                        task.status = TaskStatus.RUNNING
                        self.running_tasks.add(task_id)
                        print(f"Starting task {task_id}")
                        return task

            return None

    def complete_task(self, task_id: str, success: bool, data: Any = None,
                      error: str = None, metadata: Dict[str, Any] = None):
        """Mark task as completed (success or failure)."""
        with self._lock:
            if task_id not in self.tasks:
                return

            task = self.tasks[task_id]
            self.running_tasks.discard(task_id)

            if success:
                task.mark_succeeded(data=data, metadata=metadata)
                print(f"Task {task_id} succeeded")
            else:
                task.mark_failed(error or "Unknown error", metadata=metadata)
                if task.status == TaskStatus.FAILED_RETRYING:
                    self.retry_queue.append(task_id)
                    print(f"Task {task_id} failed, scheduled for retry at {task.next_retry_at}")
                else:
                    print(f"Task {task_id} permanently failed after {task.current_retry} attempts")

    def cancel_all_tasks(self):
        """Cancel all pending and running tasks."""
        with self._lock:
            self._stop_event.set()
            for task in self.tasks.values():
                if task.status in [TaskStatus.PENDING, TaskStatus.RUNNING, TaskStatus.FAILED_RETRYING]:
                    task.status = TaskStatus.CANCELLED
            self.pending_queue.clear()
            self.retry_queue.clear()
            self.running_tasks.clear()
            print("All tasks cancelled")

    def is_complete(self) -> bool:
        """Check if all tasks are complete (succeeded, permanently failed, or cancelled)."""
        with self._lock:
            for task in self.tasks.values():
                if task.status in [TaskStatus.PENDING, TaskStatus.RUNNING, TaskStatus.FAILED_RETRYING]:
                    return False
            return True

    def get_statistics(self) -> Dict[str, Any]:
        """Get queue statistics."""
        with self._lock:
            stats = {
                'total_tasks': len(self.tasks),
                'pending': len(self.pending_queue),
                'running': len(self.running_tasks),
                'retry_queue': len(self.retry_queue),
                'succeeded': 0,
                'failed_permanent': 0,
                'cancelled': 0,
                'failed_retrying': 0
            }

            for task in self.tasks.values():
                if task.status == TaskStatus.SUCCEEDED:
                    stats['succeeded'] += 1
                elif task.status == TaskStatus.FAILED_PERMANENT:
                    stats['failed_permanent'] += 1
                elif task.status == TaskStatus.CANCELLED:
                    stats['cancelled'] += 1
                elif task.status == TaskStatus.FAILED_RETRYING:
                    stats['failed_retrying'] += 1

            stats['completion_rate'] = (stats['succeeded'] / stats['total_tasks'] * 100) if stats['total_tasks'] > 0 else 0
            # Determine completeness inline; calling is_complete() here would try to
            # re-acquire the same non-reentrant lock and deadlock.
            stats['is_complete'] = not any(
                task.status in [TaskStatus.PENDING, TaskStatus.RUNNING, TaskStatus.FAILED_RETRYING]
                for task in self.tasks.values()
            )

            return stats

    def get_task_summaries(self) -> List[Dict[str, Any]]:
        """Get summaries of all tasks for detailed progress reporting."""
        with self._lock:
            return [task.get_summary() for task in self.tasks.values()]

    def get_failed_tasks(self) -> List[ReconTask]:
        """Get all permanently failed tasks for analysis."""
        with self._lock:
            return [task for task in self.tasks.values() if task.status == TaskStatus.FAILED_PERMANENT]

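A sketch of the intended produce/execute/complete cycle around the queue; the driver loop below is illustrative, single-threaded for brevity, and assumes `executor` is a TaskExecutor built elsewhere:

# Hypothetical driver loop - not part of the repository.
queue = TaskQueue(max_concurrent_tasks=2)
queue.add_task(ReconTask(task_id="", task_type=TaskType.DOMAIN_QUERY,
                         target="example.com", provider_name="dns", depth=0))

while not queue.is_complete():
    task = queue.get_next_ready_task()
    if task is None:
        time.sleep(0.1)  # nothing ready yet (concurrency cap or backoff window)
        continue
    result = executor.execute_task(task)  # executor: a TaskExecutor built elsewhere
    queue.complete_task(task.task_id, result.success, data=result.data, error=result.error)

print(queue.get_statistics())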
class TaskExecutor:
|
||||
"""Executes reconnaissance tasks using providers."""
|
||||
|
||||
def __init__(self, providers: List, graph_manager, logger):
|
||||
"""Initialize task executor."""
|
||||
self.providers = {provider.get_name(): provider for provider in providers}
|
||||
self.graph = graph_manager
|
||||
self.logger = logger
|
||||
|
||||
def execute_task(self, task: ReconTask) -> TaskResult:
|
||||
"""
|
||||
Execute a single reconnaissance task.
|
||||
|
||||
Args:
|
||||
task: Task to execute
|
||||
|
||||
Returns:
|
||||
TaskResult with success/failure information
|
||||
"""
|
||||
try:
|
||||
print(f"Executing task {task.task_id}: {task.provider_name} query for {task.target}")
|
||||
|
||||
provider = self.providers.get(task.provider_name)
|
||||
if not provider:
|
||||
return TaskResult(
|
||||
success=False,
|
||||
error=f"Provider {task.provider_name} not available"
|
||||
)
|
||||
|
||||
if not provider.is_available():
|
||||
return TaskResult(
|
||||
success=False,
|
||||
error=f"Provider {task.provider_name} is not available (missing API key or configuration)"
|
||||
)
|
||||
|
||||
# Execute provider query based on task type
|
||||
if task.task_type == TaskType.DOMAIN_QUERY:
|
||||
if not _is_valid_domain(task.target):
|
||||
return TaskResult(success=False, error=f"Invalid domain: {task.target}")
|
||||
|
||||
relationships = provider.query_domain(task.target)
|
||||
|
||||
elif task.task_type == TaskType.IP_QUERY:
|
||||
if not _is_valid_ip(task.target):
|
||||
return TaskResult(success=False, error=f"Invalid IP: {task.target}")
|
||||
|
||||
relationships = provider.query_ip(task.target)
|
||||
|
||||
else:
|
||||
return TaskResult(success=False, error=f"Unsupported task type: {task.task_type}")
|
||||
|
||||
# Process results and update graph
|
||||
new_targets = set()
|
||||
relationships_added = 0
|
||||
|
||||
for source, target, rel_type, confidence, raw_data in relationships:
|
||||
# Add nodes to graph
|
||||
from core.graph_manager import NodeType
|
||||
|
||||
if _is_valid_ip(target):
|
||||
self.graph.add_node(target, NodeType.IP)
|
||||
new_targets.add(target)
|
||||
elif target.startswith('AS') and target[2:].isdigit():
|
||||
self.graph.add_node(target, NodeType.ASN)
|
||||
elif _is_valid_domain(target):
|
||||
self.graph.add_node(target, NodeType.DOMAIN)
|
||||
new_targets.add(target)
|
||||
|
||||
# Add edge to graph
|
||||
if self.graph.add_edge(source, target, rel_type, confidence, task.provider_name, raw_data):
|
||||
relationships_added += 1
|
||||
|
||||
# Log forensic information
|
||||
self.logger.logger.info(
|
||||
f"Task {task.task_id} completed: {len(relationships)} relationships found, "
|
||||
f"{relationships_added} added to graph, {len(new_targets)} new targets"
|
||||
)
|
||||
|
||||
return TaskResult(
|
||||
success=True,
|
||||
data={
|
||||
'relationships': relationships,
|
||||
'new_targets': list(new_targets),
|
||||
'relationships_added': relationships_added
|
||||
},
|
||||
metadata={
|
||||
'provider': task.provider_name,
|
||||
'target': task.target,
|
||||
'depth': task.depth,
|
||||
'execution_time': datetime.now(timezone.utc).isoformat()
|
||||
}
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Task execution failed: {str(e)}"
|
||||
print(f"ERROR: {error_msg} for task {task.task_id}")
|
||||
self.logger.logger.error(error_msg)
|
||||
|
||||
return TaskResult(
|
||||
success=False,
|
||||
error=error_msg,
|
||||
metadata={
|
||||
'provider': task.provider_name,
|
||||
'target': task.target,
|
||||
'exception_type': type(e).__name__
|
||||
}
|
||||
)
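# --- Illustrative sketch (not part of this change) ---
# Replays the target-classification order used in execute_task above (IP first,
# then ASN, then domain) in a standalone form. The _is_valid_ip/_is_valid_domain
# stand-ins below are simplified assumptions; the real helpers come from
# utils.helpers elsewhere in the codebase.
import ipaddress

def _is_valid_ip(value: str) -> bool:
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

def _is_valid_domain(value: str) -> bool:
    labels = value.split('.')
    return len(labels) >= 2 and all(
        label and not label.startswith('-') and not label.endswith('-')
        and all(c.isalnum() or c == '-' for c in label)
        for label in labels
    )

def classify_target(target: str) -> str:
    """Mirror the node-typing order used when discovered targets are added to the graph."""
    if _is_valid_ip(target):
        return 'IP'
    if target.startswith('AS') and target[2:].isdigit():
        return 'ASN'
    if _is_valid_domain(target):
        return 'DOMAIN'
    return 'UNKNOWN'

assert classify_target('192.0.2.10') == 'IP'
assert classify_target('AS13335') == 'ASN'
assert classify_target('sub.example.com') == 'DOMAIN'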
|
||||
|
||||
|
||||
class TaskManager:
|
||||
"""High-level task management for reconnaissance scans."""
|
||||
|
||||
def __init__(self, providers: List, graph_manager, logger, max_concurrent_tasks: int = 5):
|
||||
"""Initialize task manager."""
|
||||
self.task_queue = TaskQueue(max_concurrent_tasks)
|
||||
self.task_executor = TaskExecutor(providers, graph_manager, logger)
|
||||
self.logger = logger
|
||||
|
||||
# Execution control
|
||||
self._stop_event = threading.Event()
|
||||
self._execution_threads: List[threading.Thread] = []
|
||||
self._is_running = False
|
||||
|
||||
def create_provider_tasks(self, target: str, depth: int, providers: List) -> List[str]:
|
||||
"""
|
||||
Create tasks for querying all eligible providers for a target.
|
||||
|
||||
Args:
|
||||
target: Domain or IP to query
|
||||
depth: Current recursion depth
|
||||
providers: List of available providers
|
||||
|
||||
Returns:
|
||||
List of created task IDs
|
||||
"""
|
||||
task_ids = []
|
||||
is_ip = _is_valid_ip(target)
|
||||
target_key = 'ips' if is_ip else 'domains'
|
||||
task_type = TaskType.IP_QUERY if is_ip else TaskType.DOMAIN_QUERY
|
||||
|
||||
for provider in providers:
|
||||
if provider.get_eligibility().get(target_key) and provider.is_available():
|
||||
task = ReconTask(
|
||||
task_id=str(uuid.uuid4())[:8],
|
||||
task_type=task_type,
|
||||
target=target,
|
||||
provider_name=provider.get_name(),
|
||||
depth=depth,
|
||||
max_retries=3 # Configure retries per task type/provider
|
||||
)
|
||||
|
||||
task_id = self.task_queue.add_task(task)
|
||||
task_ids.append(task_id)
|
||||
|
||||
return task_ids
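# --- Illustrative sketch (not part of this change) ---
# The eligibility check used in create_provider_tasks above, reduced to a filter
# that can be reasoned about in isolation. The provider objects here are
# stand-ins (assumptions), not the real BaseProvider subclasses.
from typing import Dict, List

class _FakeProvider:
    def __init__(self, name: str, domains: bool, ips: bool, available: bool = True):
        self._name = name
        self._eligibility = {'domains': domains, 'ips': ips}
        self._available = available
    def get_name(self) -> str:
        return self._name
    def get_eligibility(self) -> Dict[str, bool]:
        return self._eligibility
    def is_available(self) -> bool:
        return self._available

def eligible_providers(target_is_ip: bool, providers: List[_FakeProvider]) -> List[str]:
    key = 'ips' if target_is_ip else 'domains'
    return [p.get_name() for p in providers if p.get_eligibility().get(key) and p.is_available()]

providers = [_FakeProvider('crtsh', domains=True, ips=False),
             _FakeProvider('shodan', domains=True, ips=True, available=False),
             _FakeProvider('dns', domains=True, ips=True)]
print(eligible_providers(target_is_ip=True, providers=providers))   # ['dns']
print(eligible_providers(target_is_ip=False, providers=providers))  # ['crtsh', 'dns']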
|
||||
|
||||
def start_execution(self, max_workers: int = 3):
|
||||
"""Start task execution with specified number of worker threads."""
|
||||
if self._is_running:
|
||||
print("Task execution already running")
|
||||
return
|
||||
|
||||
self._is_running = True
|
||||
self._stop_event.clear()
|
||||
|
||||
print(f"Starting task execution with {max_workers} workers")
|
||||
|
||||
for i in range(max_workers):
|
||||
worker_thread = threading.Thread(
|
||||
target=self._worker_loop,
|
||||
name=f"TaskWorker-{i+1}",
|
||||
daemon=True
|
||||
)
|
||||
worker_thread.start()
|
||||
self._execution_threads.append(worker_thread)
|
||||
|
||||
def stop_execution(self):
|
||||
"""Stop task execution and cancel all tasks."""
|
||||
print("Stopping task execution")
|
||||
self._stop_event.set()
|
||||
self.task_queue.cancel_all_tasks()
|
||||
self._is_running = False
|
||||
|
||||
# Wait for worker threads to finish
|
||||
for thread in self._execution_threads:
|
||||
thread.join(timeout=5.0)
|
||||
|
||||
self._execution_threads.clear()
|
||||
print("Task execution stopped")
|
||||
|
||||
def _worker_loop(self):
|
||||
"""Worker thread loop for executing tasks."""
|
||||
thread_name = threading.current_thread().name
|
||||
print(f"{thread_name} started")
|
||||
|
||||
while not self._stop_event.is_set():
|
||||
try:
|
||||
# Get next task to execute
|
||||
task = self.task_queue.get_next_ready_task()
|
||||
|
||||
if task is None:
|
||||
# No tasks ready, check if we should exit
|
||||
if self.task_queue.is_complete() or self._stop_event.is_set():
|
||||
break
|
||||
time.sleep(0.1) # Brief sleep before checking again
|
||||
continue
|
||||
|
||||
# Execute the task
|
||||
result = self.task_executor.execute_task(task)
|
||||
|
||||
# Complete the task in queue
|
||||
self.task_queue.complete_task(
|
||||
task.task_id,
|
||||
success=result.success,
|
||||
data=result.data,
|
||||
error=result.error,
|
||||
metadata=result.metadata
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"ERROR: Worker {thread_name} encountered error: {e}")
|
||||
# Continue running even if individual task fails
|
||||
continue
|
||||
|
||||
print(f"{thread_name} finished")
|
||||
|
||||
def wait_for_completion(self, timeout_seconds: int = 300) -> bool:
|
||||
"""
|
||||
Wait for all tasks to complete.
|
||||
|
||||
Args:
|
||||
timeout_seconds: Maximum time to wait
|
||||
|
||||
Returns:
|
||||
True if all tasks completed, False if timeout
|
||||
"""
|
||||
start_time = time.time()
|
||||
|
||||
while time.time() - start_time < timeout_seconds:
|
||||
if self.task_queue.is_complete():
|
||||
return True
|
||||
|
||||
if self._stop_event.is_set():
|
||||
return False
|
||||
|
||||
time.sleep(1.0) # Check every second
|
||||
|
||||
print(f"Timeout waiting for task completion after {timeout_seconds} seconds")
|
||||
return False
|
||||
|
||||
def get_progress_report(self) -> Dict[str, Any]:
|
||||
"""Get detailed progress report for UI updates."""
|
||||
stats = self.task_queue.get_statistics()
|
||||
failed_tasks = self.task_queue.get_failed_tasks()
|
||||
|
||||
return {
|
||||
'statistics': stats,
|
||||
'failed_tasks': [task.get_summary() for task in failed_tasks],
|
||||
'is_running': self._is_running,
|
||||
'worker_count': len(self._execution_threads),
|
||||
'detailed_tasks': self.task_queue.get_task_summaries() if stats['total_tasks'] < 50 else [] # Limit detail for performance
|
||||
}
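# --- Illustrative driver sketch (not part of this change) ---
# One possible loop around the TaskManager defined above: seed a target, run the
# workers, and return the final progress report. The wiring and timeouts are
# assumptions; the called methods match the definitions in this file.
def run_scan(task_manager, seed_domain: str, providers, depth: int = 0) -> dict:
    """Create provider tasks for a seed target, run workers, and return progress."""
    task_manager.create_provider_tasks(seed_domain, depth, providers)
    task_manager.start_execution(max_workers=3)
    try:
        task_manager.wait_for_completion(timeout_seconds=300)
    finally:
        task_manager.stop_execution()
    return task_manager.get_progress_report()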
|
||||
@@ -3,15 +3,14 @@ Data provider modules for DNSRecon.
|
||||
Contains implementations for various reconnaissance data sources.
|
||||
"""
|
||||
|
||||
from .base_provider import BaseProvider
|
||||
from .base_provider import BaseProvider, RateLimiter
|
||||
from .crtsh_provider import CrtShProvider
|
||||
from .dns_provider import DNSProvider
|
||||
from .shodan_provider import ShodanProvider
|
||||
from core.rate_limiter import GlobalRateLimiter
|
||||
|
||||
__all__ = [
|
||||
'BaseProvider',
|
||||
'GlobalRateLimiter',
|
||||
'RateLimiter',
|
||||
'CrtShProvider',
|
||||
'DNSProvider',
|
||||
'ShodanProvider'
|
||||
|
||||
@@ -3,23 +3,175 @@
|
||||
import time
|
||||
import requests
|
||||
import threading
|
||||
import os
|
||||
import json
|
||||
import hashlib
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Dict, Any, Optional
|
||||
from typing import List, Dict, Any, Optional, Tuple
|
||||
from datetime import datetime, timezone
|
||||
|
||||
from core.logger import get_forensic_logger
|
||||
from core.rate_limiter import GlobalRateLimiter
|
||||
from core.provider_result import ProviderResult
|
||||
|
||||
|
||||
class RateLimiter:
|
||||
"""Thread-safe rate limiter for API calls."""
|
||||
|
||||
def __init__(self, requests_per_minute: int):
|
||||
"""
|
||||
Initialize rate limiter.
|
||||
|
||||
Args:
|
||||
requests_per_minute: Maximum requests allowed per minute
|
||||
"""
|
||||
self.requests_per_minute = requests_per_minute
|
||||
self.min_interval = 60.0 / requests_per_minute
|
||||
self.last_request_time = 0
|
||||
self._lock = threading.Lock()
|
||||
|
||||
def __getstate__(self):
|
||||
"""RateLimiter is fully picklable, return full state."""
|
||||
state = self.__dict__.copy()
|
||||
# Exclude unpickleable lock
|
||||
if '_lock' in state:
|
||||
del state['_lock']
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Restore RateLimiter state."""
|
||||
self.__dict__.update(state)
|
||||
self._lock = threading.Lock()
|
||||
|
||||
def wait_if_needed(self) -> None:
|
||||
"""Wait if necessary to respect rate limits."""
|
||||
with self._lock:
|
||||
current_time = time.time()
|
||||
time_since_last = current_time - self.last_request_time
|
||||
|
||||
if time_since_last < self.min_interval:
|
||||
sleep_time = self.min_interval - time_since_last
|
||||
time.sleep(sleep_time)
|
||||
|
||||
self.last_request_time = time.time()
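# --- Illustrative sketch (not part of this change) ---
# The spacing enforced by the limiter above: at 60 requests/minute the minimum
# interval is one second, so three back-to-back calls take roughly two seconds.
# The helper below reproduces the same timing logic standalone for clarity.
import time

def demo_rate_limiter(requests_per_minute: int = 60, calls: int = 3) -> float:
    min_interval = 60.0 / requests_per_minute
    last_request = 0.0
    start = time.time()
    for _ in range(calls):
        elapsed = time.time() - last_request
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)
        last_request = time.time()
    return time.time() - start

# demo_rate_limiter() returns roughly 2.0 seconds for 3 calls at 60 req/min.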
|
||||
|
||||
|
||||
class ProviderCache:
|
||||
"""Thread-safe global cache for provider queries."""
|
||||
|
||||
def __init__(self, provider_name: str, cache_expiry_hours: int = 12):
|
||||
"""
|
||||
Initialize provider-specific cache.
|
||||
|
||||
Args:
|
||||
provider_name: Name of the provider for cache directory
|
||||
cache_expiry_hours: Cache expiry time in hours
|
||||
"""
|
||||
self.provider_name = provider_name
|
||||
self.cache_expiry = cache_expiry_hours * 3600 # Convert to seconds
|
||||
self.cache_dir = os.path.join('.cache', provider_name)
|
||||
self._lock = threading.Lock()
|
||||
|
||||
# Ensure cache directory exists with thread-safe creation
|
||||
os.makedirs(self.cache_dir, exist_ok=True)
|
||||
|
||||
def _generate_cache_key(self, method: str, url: str, params: Optional[Dict[str, Any]]) -> str:
|
||||
"""Generate unique cache key for request."""
|
||||
cache_data = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
|
||||
return hashlib.md5(cache_data.encode()).hexdigest() + ".json"
|
||||
|
||||
def get_cached_response(self, method: str, url: str, params: Optional[Dict[str, Any]]) -> Optional[requests.Response]:
|
||||
"""
|
||||
Retrieve cached response if available and not expired.
|
||||
|
||||
Returns:
|
||||
Cached Response object or None if cache miss/expired
|
||||
"""
|
||||
cache_key = self._generate_cache_key(method, url, params)
|
||||
cache_path = os.path.join(self.cache_dir, cache_key)
|
||||
|
||||
with self._lock:
|
||||
if not os.path.exists(cache_path):
|
||||
return None
|
||||
|
||||
# Check if cache is expired
|
||||
cache_age = time.time() - os.path.getmtime(cache_path)
|
||||
if cache_age >= self.cache_expiry:
|
||||
try:
|
||||
os.remove(cache_path)
|
||||
except OSError:
|
||||
pass # File might have been removed by another thread
|
||||
return None
|
||||
|
||||
try:
|
||||
with open(cache_path, 'r', encoding='utf-8') as f:
|
||||
cached_data = json.load(f)
|
||||
|
||||
# Reconstruct Response object
|
||||
response = requests.Response()
|
||||
response.status_code = cached_data['status_code']
|
||||
response._content = cached_data['content'].encode('utf-8')
|
||||
response.headers.update(cached_data['headers'])
|
||||
|
||||
return response
|
||||
|
||||
except (json.JSONDecodeError, KeyError, IOError):
|
||||
# Cache file corrupted, remove it
|
||||
try:
|
||||
os.remove(cache_path)
|
||||
except OSError:
|
||||
pass
|
||||
return None
|
||||
|
||||
def cache_response(self, method: str, url: str, params: Optional[Dict[str, Any]],
|
||||
response: requests.Response) -> bool:
|
||||
"""
|
||||
Cache successful response to disk.
|
||||
|
||||
Returns:
|
||||
True if cached successfully, False otherwise
|
||||
"""
|
||||
if response.status_code != 200:
|
||||
return False
|
||||
|
||||
cache_key = self._generate_cache_key(method, url, params)
|
||||
cache_path = os.path.join(self.cache_dir, cache_key)
|
||||
|
||||
with self._lock:
|
||||
try:
|
||||
cache_data = {
|
||||
'status_code': response.status_code,
|
||||
'content': response.text,
|
||||
'headers': dict(response.headers),
|
||||
'cached_at': datetime.now(timezone.utc).isoformat()
|
||||
}
|
||||
|
||||
# Write to temporary file first, then rename for atomic operation
|
||||
temp_path = cache_path + '.tmp'
|
||||
with open(temp_path, 'w', encoding='utf-8') as f:
|
||||
json.dump(cache_data, f)
|
||||
|
||||
# Atomic rename to prevent partial cache files
|
||||
os.rename(temp_path, cache_path)
|
||||
return True
|
||||
|
||||
except (IOError, OSError):
|
||||
# Clean up temp file if it exists
|
||||
try:
|
||||
if os.path.exists(temp_path):
|
||||
os.remove(temp_path)
|
||||
except OSError:
|
||||
pass
|
||||
return False
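# --- Illustrative sketch (not part of this change) ---
# The cache-key scheme used by ProviderCache above: method, URL, and the
# JSON-serialised, key-sorted params are hashed so the same logical request
# always maps to the same file name regardless of parameter ordering.
import hashlib
import json
from typing import Any, Dict, Optional

def cache_key(method: str, url: str, params: Optional[Dict[str, Any]]) -> str:
    payload = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
    return hashlib.md5(payload.encode()).hexdigest() + ".json"

assert cache_key("GET", "https://crt.sh/", {"q": "example.com", "output": "json"}) == \
       cache_key("GET", "https://crt.sh/", {"output": "json", "q": "example.com"})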
|
||||
|
||||
|
||||
class BaseProvider(ABC):
|
||||
"""
|
||||
Abstract base class for all DNSRecon data providers.
|
||||
Now supports session-specific configuration and returns standardized ProviderResult objects.
|
||||
Now supports global provider-specific caching and session-specific configuration.
|
||||
"""
|
||||
|
||||
def __init__(self, name: str, rate_limit: int = 60, timeout: int = 30, session_config=None):
|
||||
"""
|
||||
Initialize base provider with session-specific configuration.
|
||||
Initialize base provider with global caching and session-specific configuration.
|
||||
|
||||
Args:
|
||||
name: Provider name for logging
|
||||
@@ -36,28 +188,35 @@ class BaseProvider(ABC):
|
||||
# Fallback to global config for backwards compatibility
|
||||
from config import config as global_config
|
||||
self.config = global_config
|
||||
actual_rate_limit = rate_limit
|
||||
actual_timeout = timeout
|
||||
|
||||
self.name = name
|
||||
self.rate_limiter = RateLimiter(actual_rate_limit)
|
||||
self.timeout = actual_timeout
|
||||
self._local = threading.local()
|
||||
self.logger = get_forensic_logger()
|
||||
self._stop_event = None
|
||||
|
||||
# GLOBAL provider-specific caching (not session-based)
|
||||
self.cache = ProviderCache(name, cache_expiry_hours=12)
|
||||
|
||||
# Statistics (per provider instance)
|
||||
self.total_requests = 0
|
||||
self.successful_requests = 0
|
||||
self.failed_requests = 0
|
||||
self.total_relationships_found = 0
|
||||
self.cache_hits = 0
|
||||
self.cache_misses = 0
|
||||
|
||||
print(f"Initialized {name} provider with global cache and session config (rate: {actual_rate_limit}/min)")
|
||||
|
||||
def __getstate__(self):
|
||||
"""Prepare BaseProvider for pickling by excluding unpicklable objects."""
|
||||
state = self.__dict__.copy()
|
||||
# Exclude the unpickleable '_local' attribute and stop event
|
||||
unpicklable_attrs = ['_local', '_stop_event']
|
||||
for attr in unpicklable_attrs:
|
||||
if attr in state:
|
||||
del state[attr]
|
||||
state['_local'] = None
|
||||
state['_stop_event'] = None
|
||||
return state
|
||||
|
||||
def __setstate__(self, state):
|
||||
@@ -72,7 +231,7 @@ class BaseProvider(ABC):
|
||||
if not hasattr(self._local, 'session'):
|
||||
self._local.session = requests.Session()
|
||||
self._local.session.headers.update({
|
||||
'User-Agent': 'DNSRecon/1.0 (Passive Reconnaissance Tool)'
|
||||
'User-Agent': 'DNSRecon/2.0 (Passive Reconnaissance Tool)'
|
||||
})
|
||||
return self._local.session
|
||||
|
||||
@@ -102,7 +261,7 @@ class BaseProvider(ABC):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def query_domain(self, domain: str) -> ProviderResult:
|
||||
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query the provider for information about a domain.
|
||||
|
||||
@@ -110,12 +269,12 @@ class BaseProvider(ABC):
|
||||
domain: Domain to investigate
|
||||
|
||||
Returns:
|
||||
ProviderResult containing standardized attributes and relationships
|
||||
List of tuples: (source_node, target_node, relationship_type, confidence, raw_data)
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def query_ip(self, ip: str) -> ProviderResult:
|
||||
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query the provider for information about an IP address.
|
||||
|
||||
@@ -123,92 +282,160 @@ class BaseProvider(ABC):
|
||||
ip: IP address to investigate
|
||||
|
||||
Returns:
|
||||
ProviderResult containing standardized attributes and relationships
|
||||
List of tuples: (source_node, target_node, relationship_type, confidence, raw_data)
|
||||
"""
|
||||
pass
|
||||
|
||||
def make_request(self, url: str, method: str = "GET",
|
||||
params: Optional[Dict[str, Any]] = None,
|
||||
headers: Optional[Dict[str, str]] = None,
|
||||
target_indicator: str = "") -> Optional[requests.Response]:
|
||||
target_indicator: str = "",
|
||||
max_retries: int = 3) -> Optional[requests.Response]:
|
||||
"""
|
||||
Make a rate-limited HTTP request.
|
||||
FIXED: Returns response without automatically raising HTTPError exceptions.
|
||||
Individual providers should handle status codes appropriately.
|
||||
Make a rate-limited HTTP request with global caching and aggressive stop signal handling.
|
||||
"""
|
||||
# Check for cancellation before starting
|
||||
if self._is_stop_requested():
|
||||
print(f"Request cancelled before start: {url}")
|
||||
return None
|
||||
|
||||
start_time = time.time()
|
||||
response = None
|
||||
error = None
|
||||
# Check global cache first
|
||||
cached_response = self.cache.get_cached_response(method, url, params)
|
||||
if cached_response is not None:
|
||||
print(f"Cache hit for {self.name}: {url}")
|
||||
self.cache_hits += 1
|
||||
return cached_response
|
||||
|
||||
self.cache_misses += 1
|
||||
|
||||
try:
|
||||
self.total_requests += 1
|
||||
# Determine effective max_retries based on stop signal
|
||||
effective_max_retries = 0 if self._is_stop_requested() else max_retries
|
||||
last_exception = None
|
||||
|
||||
request_headers = dict(self.session.headers).copy()
|
||||
if headers:
|
||||
request_headers.update(headers)
|
||||
for attempt in range(effective_max_retries + 1):
|
||||
# Check for cancellation before each attempt
|
||||
if self._is_stop_requested():
|
||||
print(f"Request cancelled during attempt {attempt + 1}: {url}")
|
||||
return None
|
||||
|
||||
print(f"Making {method} request to: {url}")
|
||||
# Apply rate limiting with cancellation awareness
|
||||
if not self._wait_with_cancellation_check():
|
||||
print(f"Request cancelled during rate limiting: {url}")
|
||||
return None
|
||||
|
||||
if method.upper() == "GET":
|
||||
response = self.session.get(
|
||||
url,
|
||||
params=params,
|
||||
headers=request_headers,
|
||||
timeout=self.timeout
|
||||
)
|
||||
elif method.upper() == "POST":
|
||||
response = self.session.post(
|
||||
url,
|
||||
json=params,
|
||||
headers=request_headers,
|
||||
timeout=self.timeout
|
||||
)
|
||||
else:
|
||||
raise ValueError(f"Unsupported HTTP method: {method}")
|
||||
# Final check before making HTTP request
|
||||
if self._is_stop_requested():
|
||||
print(f"Request cancelled before HTTP call: {url}")
|
||||
return None
|
||||
|
||||
print(f"Response status: {response.status_code}")
|
||||
|
||||
# FIXED: Don't automatically raise for HTTP error status codes
|
||||
# Let individual providers handle status codes appropriately
|
||||
# Only count 2xx responses as successful
|
||||
if 200 <= response.status_code < 300:
|
||||
start_time = time.time()
|
||||
response = None
|
||||
error = None
|
||||
|
||||
try:
|
||||
self.total_requests += 1
|
||||
|
||||
# Prepare request
|
||||
request_headers = self.session.headers.copy()
|
||||
if headers:
|
||||
request_headers.update(headers)
|
||||
|
||||
print(f"Making {method} request to: {url} (attempt {attempt + 1})")
|
||||
|
||||
# Use shorter timeout if termination is requested
|
||||
request_timeout = 2 if self._is_stop_requested() else self.timeout
|
||||
|
||||
# Make request
|
||||
if method.upper() == "GET":
|
||||
response = self.session.get(
|
||||
url,
|
||||
params=params,
|
||||
headers=request_headers,
|
||||
timeout=request_timeout
|
||||
)
|
||||
elif method.upper() == "POST":
|
||||
response = self.session.post(
|
||||
url,
|
||||
json=params,
|
||||
headers=request_headers,
|
||||
timeout=request_timeout
|
||||
)
|
||||
else:
|
||||
raise ValueError(f"Unsupported HTTP method: {method}")
|
||||
|
||||
print(f"Response status: {response.status_code}")
|
||||
response.raise_for_status()
|
||||
self.successful_requests += 1
|
||||
else:
|
||||
self.failed_requests += 1
|
||||
|
||||
duration_ms = (time.time() - start_time) * 1000
|
||||
self.logger.log_api_request(
|
||||
provider=self.name,
|
||||
url=url,
|
||||
method=method.upper(),
|
||||
status_code=response.status_code,
|
||||
response_size=len(response.content),
|
||||
duration_ms=duration_ms,
|
||||
error=None,
|
||||
target_indicator=target_indicator
|
||||
)
|
||||
|
||||
return response
|
||||
|
||||
# Success - log, cache, and return
|
||||
duration_ms = (time.time() - start_time) * 1000
|
||||
self.logger.log_api_request(
|
||||
provider=self.name,
|
||||
url=url,
|
||||
method=method.upper(),
|
||||
status_code=response.status_code,
|
||||
response_size=len(response.content),
|
||||
duration_ms=duration_ms,
|
||||
error=None,
|
||||
target_indicator=target_indicator
|
||||
)
|
||||
|
||||
# Cache the successful response globally
|
||||
self.cache.cache_response(method, url, params, response)
|
||||
return response
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
error = str(e)
|
||||
self.failed_requests += 1
|
||||
duration_ms = (time.time() - start_time) * 1000
|
||||
self.logger.log_api_request(
|
||||
provider=self.name,
|
||||
url=url,
|
||||
method=method.upper(),
|
||||
status_code=response.status_code if response else None,
|
||||
response_size=len(response.content) if response else None,
|
||||
duration_ms=duration_ms,
|
||||
error=error,
|
||||
target_indicator=target_indicator
|
||||
)
|
||||
raise e
|
||||
except requests.exceptions.RequestException as e:
|
||||
error = str(e)
|
||||
self.failed_requests += 1
|
||||
print(f"Request failed (attempt {attempt + 1}): {error}")
|
||||
last_exception = e
|
||||
|
||||
# Immediately abort retries if stop requested
|
||||
if self._is_stop_requested():
|
||||
print(f"Stop requested - aborting retries for: {url}")
|
||||
break
|
||||
|
||||
# Check if we should retry
|
||||
if attempt < effective_max_retries and self._should_retry(e):
|
||||
# Capped exponential backoff for 429 rate-limit errors
|
||||
if isinstance(e, requests.exceptions.HTTPError) and e.response and e.response.status_code == 429:
|
||||
backoff_time = min(60, 10 * (2 ** attempt))
|
||||
print(f"Rate limit hit. Retrying in {backoff_time} seconds...")
|
||||
else:
|
||||
backoff_time = min(2.0, (2 ** attempt) * 0.5)
|
||||
print(f"Retrying in {backoff_time} seconds...")
|
||||
|
||||
if not self._sleep_with_cancellation_check(backoff_time):
|
||||
print(f"Stop requested during backoff - aborting: {url}")
|
||||
return None
|
||||
continue
|
||||
else:
|
||||
break
|
||||
|
||||
except Exception as e:
|
||||
error = f"Unexpected error: {str(e)}"
|
||||
self.failed_requests += 1
|
||||
print(f"Unexpected error: {error}")
|
||||
last_exception = e
|
||||
break
|
||||
|
||||
# All attempts failed - log and return None
|
||||
duration_ms = (time.time() - start_time) * 1000
|
||||
self.logger.log_api_request(
|
||||
provider=self.name,
|
||||
url=url,
|
||||
method=method.upper(),
|
||||
status_code=response.status_code if response else None,
|
||||
response_size=len(response.content) if response else None,
|
||||
duration_ms=duration_ms,
|
||||
error=error,
|
||||
target_indicator=target_indicator
|
||||
)
|
||||
|
||||
if error and last_exception:
|
||||
raise last_exception
|
||||
|
||||
return None
|
||||
|
||||
def _is_stop_requested(self) -> bool:
|
||||
"""
|
||||
@@ -218,6 +445,43 @@ class BaseProvider(ABC):
|
||||
return True
|
||||
return False
|
||||
|
||||
def _wait_with_cancellation_check(self) -> bool:
|
||||
"""
|
||||
Wait for rate limiting while aggressively checking for cancellation.
|
||||
Returns False if cancelled during wait.
|
||||
"""
|
||||
current_time = time.time()
|
||||
time_since_last = current_time - self.rate_limiter.last_request_time
|
||||
|
||||
if time_since_last < self.rate_limiter.min_interval:
|
||||
sleep_time = self.rate_limiter.min_interval - time_since_last
|
||||
if not self._sleep_with_cancellation_check(sleep_time):
|
||||
return False
|
||||
|
||||
self.rate_limiter.last_request_time = time.time()
|
||||
return True
|
||||
|
||||
def _sleep_with_cancellation_check(self, sleep_time: float) -> bool:
|
||||
"""
|
||||
Sleep for the specified time while aggressively checking for cancellation.
|
||||
|
||||
Args:
|
||||
sleep_time: Time to sleep in seconds
|
||||
|
||||
Returns:
|
||||
bool: True if sleep completed, False if cancelled
|
||||
"""
|
||||
sleep_start = time.time()
|
||||
check_interval = 0.05 # Check every 50ms for aggressive responsiveness
|
||||
|
||||
while time.time() - sleep_start < sleep_time:
|
||||
if self._is_stop_requested():
|
||||
return False
|
||||
remaining_time = sleep_time - (time.time() - sleep_start)
|
||||
time.sleep(min(check_interval, remaining_time))
|
||||
|
||||
return True
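# --- Illustrative sketch (not part of this change) ---
# The same "sleep in small slices and watch a stop event" pattern used above,
# standalone so the cancellation latency is easy to see. The 0.2s timer and
# 5-second sleep are arbitrary demo values.
import threading
import time

def sleep_unless_cancelled(seconds: float, stop_event: threading.Event,
                           check_interval: float = 0.05) -> bool:
    """Return True if the full sleep completed, False if the event fired first."""
    deadline = time.time() + seconds
    while time.time() < deadline:
        if stop_event.is_set():
            return False
        time.sleep(min(check_interval, max(0.0, deadline - time.time())))
    return True

stop = threading.Event()
threading.Timer(0.2, stop.set).start()
print(sleep_unless_cancelled(5.0, stop))  # False after roughly 0.2s, not 5s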
|
||||
|
||||
def set_stop_event(self, stop_event: threading.Event) -> None:
|
||||
"""
|
||||
Set the stop event for this provider to enable cancellation.
|
||||
@@ -227,6 +491,28 @@ class BaseProvider(ABC):
|
||||
"""
|
||||
self._stop_event = stop_event
|
||||
|
||||
def _should_retry(self, exception: requests.exceptions.RequestException) -> bool:
|
||||
"""
|
||||
Determine if a request should be retried based on the exception.
|
||||
|
||||
Args:
|
||||
exception: The request exception that occurred
|
||||
|
||||
Returns:
|
||||
True if the request should be retried
|
||||
"""
|
||||
# Retry on connection errors and timeouts
|
||||
if isinstance(exception, (requests.exceptions.ConnectionError,
|
||||
requests.exceptions.Timeout)):
|
||||
return True
|
||||
|
||||
if isinstance(exception, requests.exceptions.HTTPError):
|
||||
if hasattr(exception, 'response') and exception.response:
|
||||
# Retry on server errors (5xx) AND on rate-limiting errors (429)
|
||||
return exception.response.status_code >= 500 or exception.response.status_code == 429
|
||||
|
||||
return False
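# --- Illustrative sketch (not part of this change) ---
# The backoff schedule make_request applies when _should_retry returns True:
# long, capped waits for HTTP 429 responses and short exponential waits for
# connection errors, timeouts, and 5xx responses.
def backoff_seconds(attempt: int, rate_limited: bool) -> float:
    if rate_limited:
        return min(60, 10 * (2 ** attempt))   # 10, 20, 40, 60, 60, ...
    return min(2.0, (2 ** attempt) * 0.5)     # 0.5, 1.0, 2.0, 2.0, ...

print([backoff_seconds(a, rate_limited=True) for a in range(4)])   # [10, 20, 40, 60]
print([backoff_seconds(a, rate_limited=False) for a in range(4)])  # [0.5, 1.0, 2.0, 2.0]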
|
||||
|
||||
def log_relationship_discovery(self, source_node: str, target_node: str,
|
||||
relationship_type: str,
|
||||
confidence_score: float,
|
||||
@@ -257,7 +543,7 @@ class BaseProvider(ABC):
|
||||
|
||||
def get_statistics(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get provider statistics.
|
||||
Get provider statistics including cache performance.
|
||||
|
||||
Returns:
|
||||
Dictionary containing provider performance metrics
|
||||
@@ -269,5 +555,8 @@ class BaseProvider(ABC):
|
||||
'failed_requests': self.failed_requests,
|
||||
'success_rate': (self.successful_requests / self.total_requests * 100) if self.total_requests > 0 else 0,
|
||||
'relationships_found': self.total_relationships_found,
|
||||
'rate_limit': self.config.get_rate_limit(self.name)
|
||||
'rate_limit': self.rate_limiter.requests_per_minute,
|
||||
'cache_hits': self.cache_hits,
|
||||
'cache_misses': self.cache_misses,
|
||||
'cache_hit_rate': (self.cache_hits / (self.cache_hits + self.cache_misses) * 100) if (self.cache_hits + self.cache_misses) > 0 else 0
|
||||
}
|
||||
@@ -1,26 +1,27 @@
|
||||
# dnsrecon/providers/crtsh_provider.py
|
||||
"""
|
||||
Certificate Transparency provider using crt.sh.
|
||||
Discovers domain relationships through certificate SAN analysis with comprehensive certificate tracking.
|
||||
Stores certificates as metadata on domain nodes rather than creating certificate nodes.
|
||||
"""
|
||||
|
||||
import json
|
||||
import re
|
||||
from pathlib import Path
|
||||
from typing import List, Dict, Any, Set
|
||||
from typing import List, Dict, Any, Tuple, Set
|
||||
from urllib.parse import quote
|
||||
from datetime import datetime, timezone
|
||||
import requests
|
||||
|
||||
from .base_provider import BaseProvider
|
||||
from core.provider_result import ProviderResult
|
||||
from utils.helpers import _is_valid_domain
|
||||
|
||||
|
||||
class CrtShProvider(BaseProvider):
|
||||
"""
|
||||
Provider for querying crt.sh certificate transparency database.
|
||||
FIXED: Now properly creates domain and CA nodes instead of large entities.
|
||||
Returns standardized ProviderResult objects with caching support.
|
||||
Now uses session-specific configuration and caching.
|
||||
"""
|
||||
|
||||
def __init__(self, name=None, session_config=None):
|
||||
def __init__(self, session_config=None):
|
||||
"""Initialize CrtSh provider with session-specific configuration."""
|
||||
super().__init__(
|
||||
name="crtsh",
|
||||
@@ -30,13 +31,6 @@ class CrtShProvider(BaseProvider):
|
||||
)
|
||||
self.base_url = "https://crt.sh/"
|
||||
self._stop_event = None
|
||||
|
||||
# Initialize cache directory (separate from BaseProvider's HTTP cache)
|
||||
self.domain_cache_dir = Path('cache') / 'crtsh'
|
||||
self.domain_cache_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Compile regex for date filtering for efficiency
|
||||
self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
|
||||
|
||||
def get_name(self) -> str:
|
||||
"""Return the provider name."""
|
||||
@@ -55,359 +49,100 @@ class CrtShProvider(BaseProvider):
|
||||
return {'domains': True, 'ips': False}
|
||||
|
||||
def is_available(self) -> bool:
|
||||
"""Check if the provider is configured to be used."""
|
||||
"""
|
||||
Check if the provider is configured to be used.
|
||||
This method is intentionally simple and does not perform a network request
|
||||
to avoid blocking application startup.
|
||||
"""
|
||||
return True
|
||||
|
||||
def _get_cache_file_path(self, domain: str) -> Path:
|
||||
"""Generate cache file path for a domain."""
|
||||
safe_domain = domain.replace('.', '_').replace('/', '_').replace('\\', '_')
|
||||
return self.domain_cache_dir / f"{safe_domain}.json"
|
||||
|
||||
def _get_cache_status(self, cache_file_path: Path) -> str:
|
||||
def _parse_certificate_date(self, date_string: str) -> datetime:
|
||||
"""
|
||||
Check cache status for a domain.
|
||||
Returns: 'not_found', 'fresh', or 'stale'
|
||||
"""
|
||||
if not cache_file_path.exists():
|
||||
return "not_found"
|
||||
|
||||
try:
|
||||
with open(cache_file_path, 'r') as f:
|
||||
cache_data = json.load(f)
|
||||
|
||||
last_query_str = cache_data.get("last_upstream_query")
|
||||
if not last_query_str:
|
||||
return "stale"
|
||||
|
||||
last_query = datetime.fromisoformat(last_query_str.replace('Z', '+00:00'))
|
||||
hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600
|
||||
|
||||
cache_timeout = self.config.cache_timeout_hours
|
||||
if hours_since_query < cache_timeout:
|
||||
return "fresh"
|
||||
else:
|
||||
return "stale"
|
||||
|
||||
except (json.JSONDecodeError, ValueError, KeyError) as e:
|
||||
self.logger.logger.warning(f"Invalid cache file format for {cache_file_path}: {e}")
|
||||
return "stale"
|
||||
Parse certificate date from crt.sh format.
|
||||
|
||||
def query_domain(self, domain: str) -> ProviderResult:
|
||||
"""
|
||||
FIXED: Query crt.sh for certificates containing the domain.
|
||||
Now properly creates domain and CA nodes instead of large entities.
|
||||
|
||||
Args:
|
||||
domain: Domain to investigate
|
||||
|
||||
Returns:
|
||||
ProviderResult containing discovered relationships and attributes
|
||||
"""
|
||||
if not _is_valid_domain(domain):
|
||||
return ProviderResult()
|
||||
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
return ProviderResult()
|
||||
date_string: Date string from crt.sh API
|
||||
|
||||
cache_file = self._get_cache_file_path(domain)
|
||||
cache_status = self._get_cache_status(cache_file)
|
||||
|
||||
result = ProviderResult()
|
||||
Returns:
|
||||
Parsed datetime object in UTC
|
||||
"""
|
||||
if not date_string:
|
||||
raise ValueError("Empty date string")
|
||||
|
||||
try:
|
||||
if cache_status == "fresh":
|
||||
result = self._load_from_cache(cache_file)
|
||||
self.logger.logger.info(f"Using fresh cached crt.sh data for {domain}")
|
||||
|
||||
else: # "stale" or "not_found"
|
||||
# Query the API for the latest certificates
|
||||
new_raw_certs = self._query_crtsh_api(domain)
|
||||
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
return ProviderResult()
|
||||
|
||||
# Combine with old data if cache is stale
|
||||
if cache_status == "stale":
|
||||
old_raw_certs = self._load_raw_data_from_cache(cache_file)
|
||||
combined_certs = old_raw_certs + new_raw_certs
|
||||
|
||||
# Deduplicate the combined list
|
||||
seen_ids = set()
|
||||
unique_certs = []
|
||||
for cert in combined_certs:
|
||||
cert_id = cert.get('id')
|
||||
if cert_id not in seen_ids:
|
||||
unique_certs.append(cert)
|
||||
seen_ids.add(cert_id)
|
||||
|
||||
raw_certificates_to_process = unique_certs
|
||||
self.logger.logger.info(f"Refreshed and merged cache for {domain}. Total unique certs: {len(raw_certificates_to_process)}")
|
||||
else: # "not_found"
|
||||
raw_certificates_to_process = new_raw_certs
|
||||
|
||||
# FIXED: Process certificates to create proper domain and CA nodes
|
||||
result = self._process_certificates_to_result_fixed(domain, raw_certificates_to_process)
|
||||
self.logger.logger.info(f"Created fresh result for {domain} ({result.get_relationship_count()} relationships)")
|
||||
|
||||
# Save the new result and the raw data to the cache
|
||||
self._save_result_to_cache(cache_file, result, raw_certificates_to_process, domain)
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
self.logger.logger.error(f"API query failed for {domain}: {e}")
|
||||
if cache_status != "not_found":
|
||||
result = self._load_from_cache(cache_file)
|
||||
self.logger.logger.warning(f"Using stale cache for {domain} due to API failure.")
|
||||
# Handle various possible formats from crt.sh
|
||||
if date_string.endswith('Z'):
|
||||
return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
|
||||
elif '+' in date_string or date_string.endswith('UTC'):
|
||||
# Handle timezone-aware strings
|
||||
date_string = date_string.replace('UTC', '').strip()
|
||||
if '+' in date_string:
|
||||
date_string = date_string.split('+')[0]
|
||||
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
|
||||
else:
|
||||
raise e # Re-raise if there's no cache to fall back on
|
||||
|
||||
return result
|
||||
|
||||
def query_ip(self, ip: str) -> ProviderResult:
|
||||
"""
|
||||
Query crt.sh for certificates containing the IP address.
|
||||
Note: crt.sh doesn't typically index by IP, so this returns empty results.
|
||||
|
||||
Args:
|
||||
ip: IP address to investigate
|
||||
|
||||
Returns:
|
||||
Empty ProviderResult (crt.sh doesn't support IP-based certificate queries effectively)
|
||||
"""
|
||||
return ProviderResult()
|
||||
|
||||
def _load_from_cache(self, cache_file_path: Path) -> ProviderResult:
|
||||
"""Load processed crt.sh data from a cache file."""
|
||||
try:
|
||||
with open(cache_file_path, 'r') as f:
|
||||
cache_content = json.load(f)
|
||||
|
||||
result = ProviderResult()
|
||||
|
||||
# Reconstruct relationships
|
||||
for rel_data in cache_content.get("relationships", []):
|
||||
result.add_relationship(
|
||||
source_node=rel_data["source_node"],
|
||||
target_node=rel_data["target_node"],
|
||||
relationship_type=rel_data["relationship_type"],
|
||||
provider=rel_data["provider"],
|
||||
confidence=rel_data["confidence"],
|
||||
raw_data=rel_data.get("raw_data", {})
|
||||
)
|
||||
|
||||
# Reconstruct attributes
|
||||
for attr_data in cache_content.get("attributes", []):
|
||||
result.add_attribute(
|
||||
target_node=attr_data["target_node"],
|
||||
name=attr_data["name"],
|
||||
value=attr_data["value"],
|
||||
attr_type=attr_data["type"],
|
||||
provider=attr_data["provider"],
|
||||
confidence=attr_data["confidence"],
|
||||
metadata=attr_data.get("metadata", {})
|
||||
)
|
||||
|
||||
return result
|
||||
|
||||
except (json.JSONDecodeError, FileNotFoundError, KeyError) as e:
|
||||
self.logger.logger.error(f"Failed to load cached certificates from {cache_file_path}: {e}")
|
||||
return ProviderResult()
|
||||
|
||||
def _load_raw_data_from_cache(self, cache_file_path: Path) -> List[Dict[str, Any]]:
|
||||
"""Load only the raw certificate data from a cache file."""
|
||||
try:
|
||||
with open(cache_file_path, 'r') as f:
|
||||
cache_content = json.load(f)
|
||||
return cache_content.get("raw_certificates", [])
|
||||
except (json.JSONDecodeError, FileNotFoundError):
|
||||
return []
|
||||
|
||||
def _save_result_to_cache(self, cache_file_path: Path, result: ProviderResult, raw_certificates: List[Dict[str, Any]], domain: str) -> None:
|
||||
"""Save processed crt.sh result and raw data to a cache file."""
|
||||
try:
|
||||
cache_data = {
|
||||
"domain": domain,
|
||||
"last_upstream_query": datetime.now(timezone.utc).isoformat(),
|
||||
"raw_certificates": raw_certificates, # Store the raw data for deduplication
|
||||
"relationships": [
|
||||
{
|
||||
"source_node": rel.source_node,
|
||||
"target_node": rel.target_node,
|
||||
"relationship_type": rel.relationship_type,
|
||||
"confidence": rel.confidence,
|
||||
"provider": rel.provider,
|
||||
"raw_data": rel.raw_data
|
||||
} for rel in result.relationships
|
||||
],
|
||||
"attributes": [
|
||||
{
|
||||
"target_node": attr.target_node,
|
||||
"name": attr.name,
|
||||
"value": attr.value,
|
||||
"type": attr.type,
|
||||
"provider": attr.provider,
|
||||
"confidence": attr.confidence,
|
||||
"metadata": attr.metadata
|
||||
} for attr in result.attributes
|
||||
]
|
||||
}
|
||||
cache_file_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with open(cache_file_path, 'w') as f:
|
||||
json.dump(cache_data, f, separators=(',', ':'), default=str)
|
||||
# Assume UTC if no timezone specified
|
||||
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
|
||||
except Exception as e:
|
||||
self.logger.logger.warning(f"Failed to save cache file for {domain}: {e}")
|
||||
# Fallback: try parsing without timezone info and assume UTC
|
||||
try:
|
||||
return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
|
||||
except Exception:
|
||||
raise ValueError(f"Unable to parse date: {date_string}") from e
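# --- Illustrative sketch (not part of this change) ---
# The shapes of timestamp strings the parser above is meant to normalise to
# timezone-aware UTC datetimes. The sample values are made up for illustration;
# crt.sh responses typically use the first, ISO-like form.
from datetime import datetime, timezone

for sample in ("2024-01-15T12:30:45", "2024-01-15T12:30:45Z", "2024-01-15 12:30:45"):
    parsed = datetime.fromisoformat(sample.rstrip("Z")).replace(tzinfo=timezone.utc)
    print(sample, "->", parsed.isoformat())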
|
||||
|
||||
def _query_crtsh_api(self, domain: str) -> List[Dict[str, Any]]:
|
||||
"""Query crt.sh API for raw certificate data."""
|
||||
url = f"{self.base_url}?q={quote(domain)}&output=json"
|
||||
response = self.make_request(url, target_indicator=domain)
|
||||
|
||||
if not response or response.status_code != 200:
|
||||
raise requests.exceptions.RequestException(f"crt.sh API returned status {response.status_code if response else 'None'}")
|
||||
|
||||
def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
|
||||
"""
|
||||
Check if a certificate is currently valid based on its expiry date.
|
||||
|
||||
Args:
|
||||
cert_data: Certificate data from crt.sh
|
||||
|
||||
Returns:
|
||||
True if certificate is currently valid (not expired)
|
||||
"""
|
||||
try:
|
||||
certificates = response.json()
|
||||
except json.JSONDecodeError:
|
||||
self.logger.logger.error(f"crt.sh returned invalid JSON for {domain}")
|
||||
return []
|
||||
not_after_str = cert_data.get('not_after')
|
||||
if not not_after_str:
|
||||
return False
|
||||
|
||||
if not certificates:
|
||||
return []
|
||||
|
||||
return certificates
|
||||
not_after_date = self._parse_certificate_date(not_after_str)
|
||||
not_before_str = cert_data.get('not_before')
|
||||
|
||||
def _process_certificates_to_result_fixed(self, query_domain: str, certificates: List[Dict[str, Any]]) -> ProviderResult:
|
||||
"""
|
||||
FIXED: Process certificates to create proper domain and CA nodes.
|
||||
Now creates individual domain nodes instead of large entities.
|
||||
"""
|
||||
result = ProviderResult()
|
||||
now = datetime.now(timezone.utc)
|
||||
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
self.logger.logger.info(f"CrtSh processing cancelled before processing for domain: {query_domain}")
|
||||
return result
|
||||
# Check if certificate is within valid date range
|
||||
is_not_expired = not_after_date > now
|
||||
|
||||
all_discovered_domains = set()
|
||||
processed_issuers = set()
|
||||
if not_before_str:
|
||||
not_before_date = self._parse_certificate_date(not_before_str)
|
||||
is_not_before_valid = not_before_date <= now
|
||||
return is_not_expired and is_not_before_valid
|
||||
|
||||
for i, cert_data in enumerate(certificates):
|
||||
if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
|
||||
self.logger.logger.info(f"CrtSh processing cancelled at certificate {i} for domain: {query_domain}")
|
||||
break
|
||||
return is_not_expired
|
||||
|
||||
# Extract all domains from this certificate
|
||||
cert_domains = self._extract_domains_from_certificate(cert_data)
|
||||
all_discovered_domains.update(cert_domains)
|
||||
|
||||
# FIXED: Create CA nodes for certificate issuers (not as domain metadata)
|
||||
issuer_name = self._parse_issuer_organization(cert_data.get('issuer_name', ''))
|
||||
if issuer_name and issuer_name not in processed_issuers:
|
||||
# Create relationship from query domain to CA
|
||||
result.add_relationship(
|
||||
source_node=query_domain,
|
||||
target_node=issuer_name,
|
||||
relationship_type='crtsh_cert_issuer',
|
||||
provider=self.name,
|
||||
confidence=0.95,
|
||||
raw_data={'issuer_dn': cert_data.get('issuer_name', '')}
|
||||
)
|
||||
processed_issuers.add(issuer_name)
|
||||
|
||||
# Add certificate metadata to each domain in this certificate
|
||||
cert_metadata = self._extract_certificate_metadata(cert_data)
|
||||
for cert_domain in cert_domains:
|
||||
if not _is_valid_domain(cert_domain):
|
||||
continue
|
||||
|
||||
# Add certificate attributes to the domain
|
||||
for key, value in cert_metadata.items():
|
||||
if value is not None:
|
||||
result.add_attribute(
|
||||
target_node=cert_domain,
|
||||
name=f"cert_{key}",
|
||||
value=value,
|
||||
attr_type='certificate_data',
|
||||
provider=self.name,
|
||||
confidence=0.9,
|
||||
metadata={'certificate_id': cert_data.get('id')}
|
||||
)
|
||||
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
self.logger.logger.info(f"CrtSh query cancelled before relationship creation for domain: {query_domain}")
|
||||
return result
|
||||
|
||||
# FIXED: Create selective relationships to avoid large entities
|
||||
# Only create relationships to domains that are closely related
|
||||
for discovered_domain in all_discovered_domains:
|
||||
if discovered_domain == query_domain:
|
||||
continue
|
||||
|
||||
if not _is_valid_domain(discovered_domain):
|
||||
continue
|
||||
|
||||
# FIXED: Only create relationships for domains that share a meaningful connection
|
||||
# This prevents creating too many relationships that trigger large entity creation
|
||||
if self._should_create_relationship(query_domain, discovered_domain):
|
||||
confidence = self._calculate_domain_relationship_confidence(
|
||||
query_domain, discovered_domain, [], all_discovered_domains
|
||||
)
|
||||
|
||||
result.add_relationship(
|
||||
source_node=query_domain,
|
||||
target_node=discovered_domain,
|
||||
relationship_type='crtsh_san_certificate',
|
||||
provider=self.name,
|
||||
confidence=confidence,
|
||||
raw_data={'relationship_type': 'certificate_discovery'}
|
||||
)
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=query_domain,
|
||||
target_node=discovered_domain,
|
||||
relationship_type='crtsh_san_certificate',
|
||||
confidence_score=confidence,
|
||||
raw_data={'relationship_type': 'certificate_discovery'},
|
||||
discovery_method="certificate_transparency_analysis"
|
||||
)
|
||||
|
||||
self.logger.logger.info(f"CrtSh processing completed for {query_domain}: {len(all_discovered_domains)} domains, {result.get_relationship_count()} relationships")
|
||||
return result
|
||||
|
||||
def _should_create_relationship(self, source_domain: str, target_domain: str) -> bool:
|
||||
"""
|
||||
FIXED: Determine if a relationship should be created between two domains.
|
||||
This helps avoid creating too many relationships that trigger large entity creation.
|
||||
"""
|
||||
# Always create relationships for subdomains
|
||||
if target_domain.endswith(f'.{source_domain}') or source_domain.endswith(f'.{target_domain}'):
|
||||
return True
|
||||
|
||||
# Create relationships for domains that share a common parent (up to 2 levels)
|
||||
source_parts = source_domain.split('.')
|
||||
target_parts = target_domain.split('.')
|
||||
|
||||
# Check if they share the same root domain (last 2 parts)
|
||||
if len(source_parts) >= 2 and len(target_parts) >= 2:
|
||||
source_root = '.'.join(source_parts[-2:])
|
||||
target_root = '.'.join(target_parts[-2:])
|
||||
return source_root == target_root
|
||||
|
||||
return False
|
||||
except Exception as e:
|
||||
self.logger.logger.debug(f"Certificate validity check failed: {e}")
|
||||
return False
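# --- Illustrative sketch (not part of this change) ---
# The relationship filter from _should_create_relationship above, exercised on a
# few domain pairs. The registrable-domain check is the simple "last two labels"
# heuristic used in the code, not a full public-suffix lookup.
def should_link(source: str, target: str) -> bool:
    if target.endswith(f'.{source}') or source.endswith(f'.{target}'):
        return True
    source_parts, target_parts = source.split('.'), target.split('.')
    if len(source_parts) >= 2 and len(target_parts) >= 2:
        return '.'.join(source_parts[-2:]) == '.'.join(target_parts[-2:])
    return False

print(should_link('example.com', 'api.example.com'))        # True  (subdomain)
print(should_link('shop.example.com', 'mail.example.com'))  # True  (shared root)
print(should_link('example.com', 'example.org'))            # False (unrelated)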
|
||||
|
||||
def _extract_certificate_metadata(self, cert_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Extract comprehensive metadata from certificate data."""
|
||||
raw_issuer_name = cert_data.get('issuer_name', '')
|
||||
parsed_issuer_name = self._parse_issuer_organization(raw_issuer_name)
|
||||
"""
|
||||
Extract comprehensive metadata from certificate data.
|
||||
|
||||
Args:
|
||||
cert_data: Raw certificate data from crt.sh
|
||||
|
||||
Returns:
|
||||
Comprehensive certificate metadata dictionary
|
||||
"""
|
||||
metadata = {
|
||||
'certificate_id': cert_data.get('id'),
|
||||
'serial_number': cert_data.get('serial_number'),
|
||||
'issuer_name': parsed_issuer_name,
|
||||
'issuer_name': cert_data.get('issuer_name'),
|
||||
'issuer_ca_id': cert_data.get('issuer_ca_id'),
|
||||
'common_name': cert_data.get('common_name'),
|
||||
'not_before': cert_data.get('not_before'),
|
||||
'not_after': cert_data.get('not_after'),
|
||||
'entry_timestamp': cert_data.get('entry_timestamp'),
|
||||
'source': 'crtsh'
|
||||
'source': 'crt.sh'
|
||||
}
|
||||
|
||||
try:
|
||||
@@ -419,9 +154,9 @@ class CrtShProvider(BaseProvider):
|
||||
metadata['is_currently_valid'] = self._is_cert_valid(cert_data)
|
||||
metadata['expires_soon'] = (not_after - datetime.now(timezone.utc)).days <= 30
|
||||
|
||||
# Keep raw date format or convert to standard format
|
||||
metadata['not_before'] = not_before.isoformat()
|
||||
metadata['not_after'] = not_after.isoformat()
|
||||
# Add human-readable dates
|
||||
metadata['not_before'] = not_before.strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||
metadata['not_after'] = not_after.strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||
|
||||
except Exception as e:
|
||||
self.logger.logger.debug(f"Error computing certificate metadata: {e}")
|
||||
@@ -430,73 +165,328 @@ class CrtShProvider(BaseProvider):
|
||||
|
||||
return metadata
|
||||
|
||||
def _parse_issuer_organization(self, issuer_dn: str) -> str:
|
||||
"""Parse the issuer Distinguished Name to extract just the organization name."""
|
||||
if not issuer_dn:
|
||||
return issuer_dn
|
||||
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query crt.sh for certificates containing the domain.
|
||||
"""
|
||||
if not _is_valid_domain(domain):
|
||||
return []
|
||||
|
||||
# Check for cancellation before starting
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh query cancelled before start for domain: {domain}")
|
||||
return []
|
||||
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
components = [comp.strip() for comp in issuer_dn.split(',')]
|
||||
# Query crt.sh for certificates
|
||||
url = f"{self.base_url}?q={quote(domain)}&output=json"
|
||||
response = self.make_request(url, target_indicator=domain, max_retries=3)
|
||||
|
||||
for component in components:
|
||||
if component.startswith('O='):
|
||||
org_name = component[2:].strip()
|
||||
if org_name.startswith('"') and org_name.endswith('"'):
|
||||
org_name = org_name[1:-1]
|
||||
return org_name
|
||||
if not response or response.status_code != 200:
|
||||
return []
|
||||
|
||||
return issuer_dn
|
||||
|
||||
except Exception as e:
|
||||
self.logger.logger.debug(f"Failed to parse issuer DN '{issuer_dn}': {e}")
|
||||
return issuer_dn
|
||||
# Check for cancellation after request
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh query cancelled after request for domain: {domain}")
|
||||
return []
|
||||
|
||||
def _parse_certificate_date(self, date_string: str) -> datetime:
|
||||
"""Parse certificate date from crt.sh format."""
|
||||
if not date_string:
|
||||
raise ValueError("Empty date string")
|
||||
certificates = response.json()
|
||||
|
||||
if not certificates:
|
||||
return []
|
||||
|
||||
# Check for cancellation before processing
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh query cancelled before processing for domain: {domain}")
|
||||
return []
|
||||
|
||||
try:
|
||||
if date_string.endswith('Z'):
|
||||
return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
|
||||
elif '+' in date_string or date_string.endswith('UTC'):
|
||||
date_string = date_string.replace('UTC', '').strip()
|
||||
if '+' in date_string:
|
||||
date_string = date_string.split('+')[0]
|
||||
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
|
||||
else:
|
||||
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
|
||||
except Exception as e:
|
||||
# Aggregate certificate data by domain
|
||||
domain_certificates = {}
|
||||
all_discovered_domains = set()
|
||||
|
||||
# Process certificates with cancellation checking
|
||||
for i, cert_data in enumerate(certificates):
|
||||
# Check for cancellation every 5 certificates instead of 10 for faster response
|
||||
if i % 5 == 0 and self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh processing cancelled at certificate {i} for domain: {domain}")
|
||||
break
|
||||
|
||||
cert_metadata = self._extract_certificate_metadata(cert_data)
|
||||
cert_domains = self._extract_domains_from_certificate(cert_data)
|
||||
|
||||
# Add all domains from this certificate to our tracking
|
||||
for cert_domain in cert_domains:
|
||||
# Additional stop check during domain processing
|
||||
if i % 20 == 0 and self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh domain processing cancelled for domain: {domain}")
|
||||
break
|
||||
|
||||
if not _is_valid_domain(cert_domain):
|
||||
continue
|
||||
|
||||
all_discovered_domains.add(cert_domain)
|
||||
|
||||
# Initialize domain certificate list if needed
|
||||
if cert_domain not in domain_certificates:
|
||||
domain_certificates[cert_domain] = []
|
||||
|
||||
# Add this certificate to the domain's certificate list
|
||||
domain_certificates[cert_domain].append(cert_metadata)
|
||||
|
||||
# Final cancellation check before creating relationships
|
||||
if self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh query cancelled before relationship creation for domain: {domain}")
|
||||
return []
|
||||
|
||||
# Create relationships from query domain to ALL discovered domains with stop checking
|
||||
for i, discovered_domain in enumerate(all_discovered_domains):
|
||||
if discovered_domain == domain:
|
||||
continue # Skip self-relationships
|
||||
|
||||
# Check for cancellation every 10 relationships
|
||||
if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
|
||||
print(f"CrtSh relationship creation cancelled for domain: {domain}")
|
||||
break
|
||||
|
||||
if not _is_valid_domain(discovered_domain):
|
||||
continue
|
||||
|
||||
# Get certificates for both domains
|
||||
query_domain_certs = domain_certificates.get(domain, [])
|
||||
discovered_domain_certs = domain_certificates.get(discovered_domain, [])
|
||||
|
||||
# Find shared certificates (for metadata purposes)
|
||||
shared_certificates = self._find_shared_certificates(query_domain_certs, discovered_domain_certs)
|
||||
|
||||
# Calculate confidence based on relationship type and shared certificates
|
||||
confidence = self._calculate_domain_relationship_confidence(
|
||||
domain, discovered_domain, shared_certificates, all_discovered_domains
|
||||
)
|
||||
|
||||
# Create comprehensive raw data for the relationship
|
||||
relationship_raw_data = {
|
||||
'relationship_type': 'certificate_discovery',
|
||||
'shared_certificates': shared_certificates,
|
||||
'total_shared_certs': len(shared_certificates),
|
||||
'discovery_context': self._determine_relationship_context(discovered_domain, domain),
|
||||
'domain_certificates': {
|
||||
domain: self._summarize_certificates(query_domain_certs),
|
||||
discovered_domain: self._summarize_certificates(discovered_domain_certs)
|
||||
}
|
||||
}
|
||||
|
||||
# Create domain -> domain relationship
|
||||
relationships.append((
|
||||
domain,
|
||||
discovered_domain,
|
||||
'san_certificate',
|
||||
confidence,
|
||||
relationship_raw_data
|
||||
))
|
||||
|
||||
# Log the relationship discovery
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=discovered_domain,
|
||||
relationship_type='san_certificate',
|
||||
confidence_score=confidence,
|
||||
raw_data=relationship_raw_data,
|
||||
discovery_method="certificate_transparency_analysis"
|
||||
)
|
||||
|
||||
except json.JSONDecodeError as e:
|
||||
self.logger.logger.error(f"Failed to parse JSON response from crt.sh: {e}")
|
||||
except requests.exceptions.RequestException as e:
|
||||
self.logger.logger.error(f"HTTP request to crt.sh failed: {e}")
|
||||
|
||||
|
||||
return relationships
|
||||
|
||||
def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Find certificates that are shared between two domain certificate lists.
|
||||
|
||||
Args:
|
||||
certs1: First domain's certificates
|
||||
certs2: Second domain's certificates
|
||||
|
||||
Returns:
|
||||
List of shared certificate metadata
|
||||
"""
|
||||
shared = []
|
||||
|
||||
# Create a set of certificate IDs from the first list for quick lookup
|
||||
cert1_ids = {cert.get('certificate_id') for cert in certs1 if cert.get('certificate_id')}
|
||||
|
||||
# Find certificates in the second list that match
|
||||
for cert in certs2:
|
||||
if cert.get('certificate_id') in cert1_ids:
|
||||
shared.append(cert)
|
||||
|
||||
return shared
|
||||
|
||||
    def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
        """
        Create a summary of certificates for a domain.

        Args:
            certificates: List of certificate metadata

        Returns:
            Summary dictionary with aggregate statistics
        """
        if not certificates:
            return {
                'total_certificates': 0,
                'valid_certificates': 0,
                'expired_certificates': 0,
                'expires_soon_count': 0,
                'unique_issuers': [],
                'latest_certificate': None,
                'has_valid_cert': False
            }

        valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
        expired_count = len(certificates) - valid_count
        expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))

        # Get unique issuers
        unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))

        # Find the most recent certificate
        latest_cert = None
        latest_date = None

        for cert in certificates:
            try:
                if cert.get('not_before'):
                    cert_date = self._parse_certificate_date(cert['not_before'])
                    if latest_date is None or cert_date > latest_date:
                        latest_date = cert_date
                        latest_cert = cert
            except Exception:
                continue

        return {
            'total_certificates': len(certificates),
            'valid_certificates': valid_count,
            'expired_certificates': expired_count,
            'expires_soon_count': expires_soon_count,
            'unique_issuers': unique_issuers,
            'latest_certificate': latest_cert,
            'has_valid_cert': valid_count > 0,
            'certificate_details': certificates  # Full details for forensic analysis
        }

    def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
        """Check if a certificate is currently valid based on its expiry date."""
        try:
            not_after_str = cert_data.get('not_after')
            if not not_after_str:
                return False

            not_after_date = self._parse_certificate_date(not_after_str)
            not_before_str = cert_data.get('not_before')

            now = datetime.now(timezone.utc)
            is_not_expired = not_after_date > now

            if not_before_str:
                not_before_date = self._parse_certificate_date(not_before_str)
                is_not_before_valid = not_before_date <= now
                return is_not_expired and is_not_before_valid

            return is_not_expired

        except Exception as e:
            return False
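Both methods above delegate timestamp parsing to `self._parse_certificate_date`, whose body does not appear intact in this hunk; only its `strptime` format string and `ValueError` re-raise show up as stray fragments. A minimal sketch consistent with those fragments, assuming crt.sh-style ISO timestamps, would be:

```python
from datetime import datetime, timezone

def _parse_certificate_date(self, date_string: str) -> datetime:
    """Sketch only: parse a crt.sh timestamp such as '2024-01-01T00:00:00' into an aware UTC datetime."""
    try:
        # Keep the first 19 characters so fractional seconds or a trailing 'Z' do not break parsing.
        return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    except Exception as e:
        raise ValueError(f"Unable to parse date: {date_string}") from e
```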
    def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str,
                                                  shared_certificates: List[Dict[str, Any]],
                                                  all_discovered_domains: Set[str]) -> float:
        """
        Calculate confidence score for domain relationship based on various factors.

        Args:
            domain1: Source domain (query domain)
            domain2: Target domain (discovered domain)
            shared_certificates: List of shared certificate metadata
            all_discovered_domains: All domains discovered in this query

        Returns:
            Confidence score between 0.0 and 1.0
        """
        base_confidence = 0.9

        # Adjust confidence based on domain relationship context
        relationship_context = self._determine_relationship_context(domain2, domain1)

        if relationship_context == 'exact_match':
            context_bonus = 0.0  # This shouldn't happen, but just in case
        elif relationship_context == 'subdomain':
            context_bonus = 0.1  # High confidence for subdomains
        elif relationship_context == 'parent_domain':
            context_bonus = 0.05  # Medium confidence for parent domains
        else:
            context_bonus = 0.0  # Related domains get base confidence

        # Adjust confidence based on shared certificates
        if shared_certificates:
            shared_count = len(shared_certificates)
            if shared_count >= 3:
                shared_bonus = 0.1
            elif shared_count >= 2:
                shared_bonus = 0.05
            else:
                shared_bonus = 0.02

            # Additional bonus for valid shared certificates
            valid_shared = sum(1 for cert in shared_certificates if cert.get('is_currently_valid'))
            if valid_shared > 0:
                validity_bonus = 0.05
            else:
                validity_bonus = 0.0
        else:
            # Even without shared certificates, domains found in the same query have some relationship
            shared_bonus = 0.0
            validity_bonus = 0.0

        # Adjust confidence based on certificate issuer reputation (if shared certificates exist)
        issuer_bonus = 0.0
        if shared_certificates:
            for cert in shared_certificates:
                issuer = cert.get('issuer_name', '').lower()
                if any(trusted_ca in issuer for trusted_ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
                    issuer_bonus = max(issuer_bonus, 0.03)
                    break

        # Calculate final confidence
        final_confidence = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
        return max(0.1, min(1.0, final_confidence))  # Clamp between 0.1 and 1.0

    def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
        """
        Determine the context of the relationship between certificate domain and query domain.

        Args:
            cert_domain: Domain found in certificate
            query_domain: Original query domain

        Returns:
            String describing the relationship context
        """
        if cert_domain == query_domain:
            return 'exact_match'
        elif cert_domain.endswith(f'.{query_domain}'):
            return 'subdomain'
        elif query_domain.endswith(f'.{cert_domain}'):
            return 'parent_domain'
        else:
            return 'related_domain'

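As a quick sanity check of the scoring above, the following standalone walk-through (hypothetical values, not provider code) reproduces the bonus arithmetic for a subdomain that shares three currently valid Let's Encrypt certificates with the query domain:

```python
# Hypothetical walk-through of the confidence arithmetic above.
base_confidence = 0.9
context_bonus = 0.1    # relationship context is 'subdomain'
shared_bonus = 0.1     # three or more shared certificates
validity_bonus = 0.05  # at least one shared certificate is currently valid
issuer_bonus = 0.03    # issuer matches a well-known CA ("let's encrypt")

final_confidence = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
print(round(final_confidence, 2))            # 1.18 before clamping
print(max(0.1, min(1.0, final_confidence)))  # stored as 1.0
```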
    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
        """
        Query crt.sh for certificates containing the IP address.
        Note: crt.sh doesn't typically index by IP, so this returns empty results.

        Args:
            ip: IP address to investigate

        Returns:
            Empty list (crt.sh doesn't support IP-based certificate queries effectively)
        """
        # crt.sh doesn't effectively support IP-based certificate queries
        return []

    def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
        """Extract all domains from certificate data."""
        """
        Extract all domains from certificate data.

        Args:
            cert_data: Certificate data from crt.sh API

        Returns:
            Set of unique domain names found in the certificate
        """
        domains = set()

        # Extract from common name
@@ -509,72 +499,50 @@ class CrtShProvider(BaseProvider):
        # Extract from name_value field (contains SANs)
        name_value = cert_data.get('name_value', '')
        if name_value:
            # Split by newlines and clean each domain
            for line in name_value.split('\n'):
                cleaned_domains = self._clean_domain_name(line.strip())
                if cleaned_domains:
                    domains.update(cleaned_domains)

        return domains


    def _clean_domain_name(self, domain_name: str) -> List[str]:
        """Clean and normalize domain name from certificate data."""
        """
        Clean and normalize domain name from certificate data.
        Now returns a list to handle wildcards correctly.
        """
        if not domain_name:
            return []

        domain = domain_name.strip().lower()

        # Remove protocol if present
        if domain.startswith(('http://', 'https://')):
            domain = domain.split('://', 1)[1]

        # Remove path if present
        if '/' in domain:
            domain = domain.split('/', 1)[0]

        # Remove port if present
        if ':' in domain and not domain.count(':') > 1:  # Avoid breaking IPv6
            domain = domain.split(':', 1)[0]

        # Handle wildcard domains
        cleaned_domains = []
        if domain.startswith('*.'):
            # Add both the wildcard and the base domain
            cleaned_domains.append(domain)
            cleaned_domains.append(domain[2:])
        else:
            cleaned_domains.append(domain)

        # Remove any remaining invalid characters and validate
        final_domains = []
        for d in cleaned_domains:
            d = re.sub(r'[^\w\-\.]', '', d)
            if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-')):
                final_domains.append(d)

        return [d for d in final_domains if _is_valid_domain(d)]

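The protocol, path, and port stripping above can be reproduced with a standalone sketch of the same steps (example hostname only, not provider code); wildcard entries such as `*.example.com` are additionally expanded so the base domain is considered alongside the wildcard form before the final character filter and validation pass:

```python
# Standalone illustration of the normalization steps above (example value only).
domain = "HTTPS://WWW.Example.COM:8443/login".strip().lower()
if domain.startswith(('http://', 'https://')):
    domain = domain.split('://', 1)[1]               # drop the scheme
if '/' in domain:
    domain = domain.split('/', 1)[0]                 # drop the path
if ':' in domain and not domain.count(':') > 1:      # a single colon is a port, not IPv6
    domain = domain.split(':', 1)[0]                 # drop the port
print(domain)  # www.example.com
```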
    def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str,
                                                  shared_certificates: List[Dict[str, Any]],
                                                  all_discovered_domains: Set[str]) -> float:
        """Calculate confidence score for domain relationship based on various factors."""
        base_confidence = 0.9

        # Adjust confidence based on domain relationship context
        relationship_context = self._determine_relationship_context(domain2, domain1)

        if relationship_context == 'exact_match':
            context_bonus = 0.0
        elif relationship_context == 'subdomain':
            context_bonus = 0.1
        elif relationship_context == 'parent_domain':
            context_bonus = 0.05
        else:
            context_bonus = 0.0

        final_confidence = base_confidence + context_bonus
        return max(0.1, min(1.0, final_confidence))

    def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
        """Determine the context of the relationship between certificate domain and query domain."""
        if cert_domain == query_domain:
            return 'exact_match'
        elif cert_domain.endswith(f'.{query_domain}'):
            return 'subdomain'
        elif query_domain.endswith(f'.{cert_domain}'):
            return 'parent_domain'
        else:
            return 'related_domain'
@@ -1,19 +1,19 @@
|
||||
# dnsrecon/providers/dns_provider.py
|
||||
|
||||
from dns import resolver, reversename
|
||||
from typing import Dict
|
||||
import dns.resolver
|
||||
import dns.reversename
|
||||
from typing import List, Dict, Any, Tuple
|
||||
from .base_provider import BaseProvider
|
||||
from core.provider_result import ProviderResult
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain, get_ip_version
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain
|
||||
|
||||
|
||||
class DNSProvider(BaseProvider):
|
||||
"""
|
||||
Provider for standard DNS resolution and reverse DNS lookups.
|
||||
Now returns standardized ProviderResult objects with IPv4 and IPv6 support.
|
||||
Now uses session-specific configuration.
|
||||
"""
|
||||
|
||||
def __init__(self, name=None, session_config=None):
|
||||
def __init__(self, session_config=None):
|
||||
"""Initialize DNS provider with session-specific configuration."""
|
||||
super().__init__(
|
||||
name="dns",
|
||||
@@ -23,9 +23,10 @@ class DNSProvider(BaseProvider):
|
||||
)
|
||||
|
||||
# Configure DNS resolver
|
||||
self.resolver = resolver.Resolver()
|
||||
self.resolver = dns.resolver.Resolver()
|
||||
self.resolver.timeout = 5
|
||||
self.resolver.lifetime = 10
|
||||
#self.resolver.nameservers = ['127.0.0.1']
|
||||
|
||||
def get_name(self) -> str:
|
||||
"""Return the provider name."""
|
||||
@@ -47,149 +48,97 @@ class DNSProvider(BaseProvider):
|
||||
"""DNS is always available - no API key required."""
|
||||
return True
|
||||
|
||||
def query_domain(self, domain: str) -> ProviderResult:
|
||||
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query DNS records for the domain to discover relationships and attributes.
|
||||
FIXED: Now creates separate attributes for each DNS record type.
|
||||
|
||||
Query DNS records for the domain to discover relationships.
|
||||
|
||||
Args:
|
||||
domain: Domain to investigate
|
||||
|
||||
|
||||
Returns:
|
||||
ProviderResult containing discovered relationships and attributes
|
||||
List of relationships discovered from DNS analysis
|
||||
"""
|
||||
if not _is_valid_domain(domain):
|
||||
return ProviderResult()
|
||||
return []
|
||||
|
||||
result = ProviderResult()
|
||||
relationships = []
|
||||
|
||||
# Query all record types - each gets its own attribute
|
||||
# Query all record types
|
||||
for record_type in ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SOA', 'TXT', 'SRV', 'CAA']:
|
||||
try:
|
||||
self._query_record(domain, record_type, result)
|
||||
#except resolver.NoAnswer:
|
||||
# This is not an error, just a confirmation that the record doesn't exist.
|
||||
#self.logger.logger.debug(f"No {record_type} record found for {domain}")
|
||||
except Exception as e:
|
||||
self.failed_requests += 1
|
||||
self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
|
||||
relationships.extend(self._query_record(domain, record_type))
|
||||
|
||||
return result
|
||||
return relationships
|
||||
|
||||
def query_ip(self, ip: str) -> ProviderResult:
|
||||
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query reverse DNS for the IP address (supports both IPv4 and IPv6).
|
||||
Query reverse DNS for the IP address.
|
||||
|
||||
Args:
|
||||
ip: IP address to investigate (IPv4 or IPv6)
|
||||
ip: IP address to investigate
|
||||
|
||||
Returns:
|
||||
ProviderResult containing discovered relationships and attributes
|
||||
List of relationships discovered from reverse DNS
|
||||
"""
|
||||
if not _is_valid_ip(ip):
|
||||
return ProviderResult()
|
||||
return []
|
||||
|
||||
result = ProviderResult()
|
||||
ip_version = get_ip_version(ip)
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
# Perform reverse DNS lookup (works for both IPv4 and IPv6)
|
||||
# Perform reverse DNS lookup
|
||||
self.total_requests += 1
|
||||
reverse_name = reversename.from_address(ip)
|
||||
reverse_name = dns.reversename.from_address(ip)
|
||||
response = self.resolver.resolve(reverse_name, 'PTR')
|
||||
self.successful_requests += 1
|
||||
|
||||
ptr_records = []
|
||||
for ptr_record in response:
|
||||
hostname = str(ptr_record).rstrip('.')
|
||||
|
||||
if _is_valid_domain(hostname):
|
||||
# Determine appropriate forward relationship type based on IP version
|
||||
if ip_version == 6:
|
||||
relationship_type = 'dns_aaaa_record'
|
||||
record_prefix = 'AAAA'
|
||||
else:
|
||||
relationship_type = 'dns_a_record'
|
||||
record_prefix = 'A'
|
||||
|
||||
# Add the relationship
|
||||
result.add_relationship(
|
||||
source_node=ip,
|
||||
target_node=hostname,
|
||||
relationship_type='dns_ptr_record',
|
||||
provider=self.name,
|
||||
confidence=0.8,
|
||||
raw_data={
|
||||
'query_type': 'PTR',
|
||||
'ip_address': ip,
|
||||
'ip_version': ip_version,
|
||||
'hostname': hostname,
|
||||
'ttl': response.ttl
|
||||
}
|
||||
)
|
||||
raw_data = {
|
||||
'query_type': 'PTR',
|
||||
'ip_address': ip,
|
||||
'hostname': hostname,
|
||||
'ttl': response.ttl
|
||||
}
|
||||
|
||||
# Add to PTR records list
|
||||
ptr_records.append(f"PTR: {hostname}")
|
||||
relationships.append((
|
||||
ip,
|
||||
hostname,
|
||||
'ptr_record',
|
||||
0.8,
|
||||
raw_data
|
||||
))
|
||||
|
||||
# Log the relationship discovery
|
||||
self.log_relationship_discovery(
|
||||
source_node=ip,
|
||||
target_node=hostname,
|
||||
relationship_type='dns_ptr_record',
|
||||
relationship_type='ptr_record',
|
||||
confidence_score=0.8,
|
||||
raw_data={
|
||||
'query_type': 'PTR',
|
||||
'ip_address': ip,
|
||||
'ip_version': ip_version,
|
||||
'hostname': hostname,
|
||||
'ttl': response.ttl
|
||||
},
|
||||
discovery_method=f"reverse_dns_lookup_ipv{ip_version}"
|
||||
raw_data=raw_data,
|
||||
discovery_method="reverse_dns_lookup"
|
||||
)
|
||||
|
||||
# Add PTR records as separate attribute
|
||||
if ptr_records:
|
||||
result.add_attribute(
|
||||
target_node=ip,
|
||||
name='ptr_records', # Specific name for PTR records
|
||||
value=ptr_records,
|
||||
attr_type='dns_record',
|
||||
provider=self.name,
|
||||
confidence=0.8,
|
||||
metadata={'ttl': response.ttl, 'ip_version': ip_version}
|
||||
)
|
||||
|
||||
except resolver.NXDOMAIN:
|
||||
self.failed_requests += 1
|
||||
self.logger.logger.debug(f"Reverse DNS lookup failed for {ip}: NXDOMAIN")
|
||||
except Exception as e:
|
||||
self.failed_requests += 1
|
||||
self.logger.logger.debug(f"Reverse DNS lookup failed for {ip}: {e}")
|
||||
# Re-raise the exception so the scanner can handle the failure
|
||||
raise e
|
||||
|
||||
return result
|
||||
return relationships
|
||||
|
||||
def _query_record(self, domain: str, record_type: str, result: ProviderResult) -> None:
|
||||
def _query_record(self, domain: str, record_type: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
FIXED: Query DNS records with unique attribute names for each record type.
|
||||
Enhanced to better handle IPv6 AAAA records.
|
||||
Query a specific type of DNS record for the domain.
|
||||
"""
|
||||
relationships = []
|
||||
try:
|
||||
self.total_requests += 1
|
||||
response = self.resolver.resolve(domain, record_type)
|
||||
self.successful_requests += 1
|
||||
|
||||
dns_records = []
|
||||
|
||||
for record in response:
|
||||
target = ""
|
||||
if record_type in ['A', 'AAAA']:
|
||||
target = str(record)
|
||||
# Validate that the IP address is properly formed
|
||||
if not _is_valid_ip(target):
|
||||
self.logger.logger.debug(f"Invalid IP address in {record_type} record: {target}")
|
||||
continue
|
||||
elif record_type in ['CNAME', 'NS', 'PTR']:
|
||||
target = str(record.target).rstrip('.')
|
||||
elif record_type == 'MX':
|
||||
@@ -197,90 +146,44 @@ class DNSProvider(BaseProvider):
|
||||
elif record_type == 'SOA':
|
||||
target = str(record.mname).rstrip('.')
|
||||
elif record_type in ['TXT']:
|
||||
# Keep raw TXT record value
|
||||
txt_value = str(record).strip('"')
|
||||
dns_records.append(txt_value) # Just the value for TXT
|
||||
# TXT records are treated as metadata, not relationships.
|
||||
continue
|
||||
elif record_type == 'SRV':
|
||||
target = str(record.target).rstrip('.')
|
||||
elif record_type == 'CAA':
|
||||
# Keep raw CAA record format
|
||||
caa_value = f"{record.flags} {record.tag.decode('utf-8')} \"{record.value.decode('utf-8')}\""
|
||||
dns_records.append(caa_value) # Just the value for CAA
|
||||
continue
|
||||
target = f"{record.flags} {record.tag.decode('utf-8')} \"{record.value.decode('utf-8')}\""
|
||||
else:
|
||||
target = str(record)
|
||||
|
||||
if target:
|
||||
# Determine IP version for metadata if this is an IP record
|
||||
ip_version = None
|
||||
if record_type in ['A', 'AAAA'] and _is_valid_ip(target):
|
||||
ip_version = get_ip_version(target)
|
||||
|
||||
raw_data = {
|
||||
'query_type': record_type,
|
||||
'domain': domain,
|
||||
'value': target,
|
||||
'ttl': response.ttl
|
||||
}
|
||||
|
||||
if ip_version:
|
||||
raw_data['ip_version'] = ip_version
|
||||
|
||||
relationship_type = f"dns_{record_type.lower()}_record"
|
||||
confidence = 0.8
|
||||
relationship_type = f"{record_type.lower()}_record"
|
||||
confidence = 0.8 # Default confidence for DNS records
|
||||
|
||||
# Add relationship
|
||||
result.add_relationship(
|
||||
source_node=domain,
|
||||
target_node=target,
|
||||
relationship_type=relationship_type,
|
||||
provider=self.name,
|
||||
confidence=confidence,
|
||||
raw_data=raw_data
|
||||
)
|
||||
relationships.append((
|
||||
domain,
|
||||
target,
|
||||
relationship_type,
|
||||
confidence,
|
||||
raw_data
|
||||
))
|
||||
|
||||
# Add target to records list
|
||||
dns_records.append(target)
|
||||
|
||||
# Log relationship discovery with IP version info
|
||||
discovery_method = f"dns_{record_type.lower()}_record"
|
||||
if ip_version:
|
||||
discovery_method += f"_ipv{ip_version}"
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=target,
|
||||
relationship_type=relationship_type,
|
||||
confidence_score=confidence,
|
||||
raw_data=raw_data,
|
||||
discovery_method=discovery_method
|
||||
discovery_method=f"dns_{record_type.lower()}_record"
|
||||
)
|
||||
|
||||
# FIXED: Create attribute with specific name for each record type
|
||||
if dns_records:
|
||||
# Use record type specific attribute name (e.g., 'a_records', 'mx_records', etc.)
|
||||
attribute_name = f"{record_type.lower()}_records"
|
||||
|
||||
metadata = {'record_type': record_type, 'ttl': response.ttl}
|
||||
|
||||
# Add IP version info for A/AAAA records
|
||||
if record_type in ['A', 'AAAA'] and dns_records:
|
||||
first_ip_version = get_ip_version(dns_records[0])
|
||||
if first_ip_version:
|
||||
metadata['ip_version'] = first_ip_version
|
||||
|
||||
result.add_attribute(
|
||||
target_node=domain,
|
||||
name=attribute_name, # UNIQUE name for each record type!
|
||||
value=dns_records,
|
||||
attr_type='dns_record_list',
|
||||
provider=self.name,
|
||||
confidence=0.8,
|
||||
metadata=metadata
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
self.failed_requests += 1
|
||||
self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
|
||||
raise e
|
||||
|
||||
return relationships
|
||||
@@ -1,23 +1,21 @@
|
||||
# dnsrecon/providers/shodan_provider.py
|
||||
"""
|
||||
Shodan provider for DNSRecon.
|
||||
Discovers IP relationships and infrastructure context through Shodan API.
|
||||
"""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
from typing import Dict, Any
|
||||
from datetime import datetime, timezone
|
||||
import requests
|
||||
|
||||
from typing import List, Dict, Any, Tuple
|
||||
from .base_provider import BaseProvider
|
||||
from core.provider_result import ProviderResult
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain, get_ip_version, normalize_ip
|
||||
from utils.helpers import _is_valid_ip, _is_valid_domain
|
||||
|
||||
|
||||
class ShodanProvider(BaseProvider):
|
||||
"""
|
||||
Provider for querying Shodan API for IP address information.
|
||||
Now returns standardized ProviderResult objects with caching support for IPv4 and IPv6.
|
||||
Provider for querying Shodan API for IP address and hostname information.
|
||||
Now uses session-specific API keys.
|
||||
"""
|
||||
|
||||
def __init__(self, name=None, session_config=None):
|
||||
def __init__(self, session_config=None):
|
||||
"""Initialize Shodan provider with session-specific configuration."""
|
||||
super().__init__(
|
||||
name="shodan",
|
||||
@@ -27,10 +25,6 @@ class ShodanProvider(BaseProvider):
|
||||
)
|
||||
self.base_url = "https://api.shodan.io"
|
||||
self.api_key = self.config.get_api_key('shodan')
|
||||
|
||||
# Initialize cache directory
|
||||
self.cache_dir = Path('cache') / 'shodan'
|
||||
self.cache_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
def is_available(self) -> bool:
|
||||
"""Check if Shodan provider is available (has valid API key in this session)."""
|
||||
@@ -42,7 +36,7 @@ class ShodanProvider(BaseProvider):
|
||||
|
||||
def get_display_name(self) -> str:
|
||||
"""Return the provider display name for the UI."""
|
||||
return "Shodan"
|
||||
return "shodan"
|
||||
|
||||
def requires_api_key(self) -> bool:
|
||||
"""Return True if the provider requires an API key."""
|
||||
@@ -50,300 +44,267 @@ class ShodanProvider(BaseProvider):
|
||||
|
||||
def get_eligibility(self) -> Dict[str, bool]:
|
||||
"""Return a dictionary indicating if the provider can query domains and/or IPs."""
|
||||
return {'domains': False, 'ips': True}
|
||||
return {'domains': True, 'ips': True}
|
||||
|
||||
def _get_cache_file_path(self, ip: str) -> Path:
|
||||
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Generate cache file path for an IP address (IPv4 or IPv6).
|
||||
IPv6 addresses contain colons which are replaced with underscores for filesystem safety.
|
||||
"""
|
||||
# Normalize the IP address first to ensure consistent caching
|
||||
normalized_ip = normalize_ip(ip)
|
||||
if not normalized_ip:
|
||||
# Fallback for invalid IPs
|
||||
safe_ip = ip.replace('.', '_').replace(':', '_')
|
||||
else:
|
||||
# Replace problematic characters for both IPv4 and IPv6
|
||||
safe_ip = normalized_ip.replace('.', '_').replace(':', '_')
|
||||
|
||||
return self.cache_dir / f"{safe_ip}.json"
|
||||
Query Shodan for information about a domain.
|
||||
Uses Shodan's hostname search to find associated IPs.
|
||||
|
||||
def _get_cache_status(self, cache_file_path: Path) -> str:
|
||||
"""
|
||||
Check cache status for an IP.
|
||||
Returns: 'not_found', 'fresh', or 'stale'
|
||||
"""
|
||||
if not cache_file_path.exists():
|
||||
return "not_found"
|
||||
|
||||
try:
|
||||
with open(cache_file_path, 'r') as f:
|
||||
cache_data = json.load(f)
|
||||
|
||||
last_query_str = cache_data.get("last_upstream_query")
|
||||
if not last_query_str:
|
||||
return "stale"
|
||||
|
||||
last_query = datetime.fromisoformat(last_query_str.replace('Z', '+00:00'))
|
||||
hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600
|
||||
|
||||
cache_timeout = self.config.cache_timeout_hours
|
||||
if hours_since_query < cache_timeout:
|
||||
return "fresh"
|
||||
else:
|
||||
return "stale"
|
||||
|
||||
except (json.JSONDecodeError, ValueError, KeyError):
|
||||
return "stale"
|
||||
|
||||
def query_domain(self, domain: str) -> ProviderResult:
|
||||
"""
|
||||
Domain queries are no longer supported for the Shodan provider.
|
||||
|
||||
Args:
|
||||
domain: Domain to investigate
|
||||
|
||||
Returns:
|
||||
Empty ProviderResult
|
||||
"""
|
||||
return ProviderResult()
|
||||
|
||||
def query_ip(self, ip: str) -> ProviderResult:
|
||||
"""
|
||||
Query Shodan for information about an IP address (IPv4 or IPv6), with caching of processed data.
|
||||
|
||||
Args:
|
||||
ip: IP address to investigate (IPv4 or IPv6)
|
||||
|
||||
Returns:
|
||||
ProviderResult containing discovered relationships and attributes
|
||||
List of relationships discovered from Shodan data
|
||||
"""
|
||||
if not _is_valid_domain(domain) or not self.is_available():
|
||||
return []
|
||||
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
# Search for hostname in Shodan
|
||||
search_query = f"hostname:{domain}"
|
||||
url = f"{self.base_url}/shodan/host/search"
|
||||
params = {
|
||||
'key': self.api_key,
|
||||
'query': search_query,
|
||||
'minify': True # Get minimal data to reduce bandwidth
|
||||
}
|
||||
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=domain)
|
||||
|
||||
if not response or response.status_code != 200:
|
||||
return []
|
||||
|
||||
data = response.json()
|
||||
|
||||
if 'matches' not in data:
|
||||
return []
|
||||
|
||||
# Process search results
|
||||
for match in data['matches']:
|
||||
ip_address = match.get('ip_str')
|
||||
hostnames = match.get('hostnames', [])
|
||||
|
||||
if ip_address and domain in hostnames:
|
||||
raw_data = {
|
||||
'ip_address': ip_address,
|
||||
'hostnames': hostnames,
|
||||
'country': match.get('location', {}).get('country_name', ''),
|
||||
'city': match.get('location', {}).get('city', ''),
|
||||
'isp': match.get('isp', ''),
|
||||
'org': match.get('org', ''),
|
||||
'ports': match.get('ports', []),
|
||||
'last_update': match.get('last_update', '')
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
domain,
|
||||
ip_address,
|
||||
'a_record', # Domain resolves to IP
|
||||
0.8,
|
||||
raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=ip_address,
|
||||
relationship_type='a_record',
|
||||
confidence_score=0.8,
|
||||
raw_data=raw_data,
|
||||
discovery_method="shodan_hostname_search"
|
||||
)
|
||||
|
||||
# Also create relationships to other hostnames on the same IP
|
||||
for hostname in hostnames:
|
||||
if hostname != domain and _is_valid_domain(hostname):
|
||||
hostname_raw_data = {
|
||||
'shared_ip': ip_address,
|
||||
'all_hostnames': hostnames,
|
||||
'discovery_context': 'shared_hosting'
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
domain,
|
||||
hostname,
|
||||
'passive_dns', # Shared hosting relationship
|
||||
0.6, # Lower confidence for shared hosting
|
||||
hostname_raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=domain,
|
||||
target_node=hostname,
|
||||
relationship_type='passive_dns',
|
||||
confidence_score=0.6,
|
||||
raw_data=hostname_raw_data,
|
||||
discovery_method="shodan_shared_hosting"
|
||||
)
|
||||
|
||||
except json.JSONDecodeError as e:
|
||||
self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
|
||||
|
||||
return relationships
|
||||
|
||||
def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Query Shodan for information about an IP address.
|
||||
|
||||
Args:
|
||||
ip: IP address to investigate
|
||||
|
||||
Returns:
|
||||
List of relationships discovered from Shodan IP data
|
||||
"""
|
||||
if not _is_valid_ip(ip) or not self.is_available():
|
||||
return ProviderResult()
|
||||
|
||||
# Normalize IP address for consistent processing
|
||||
normalized_ip = normalize_ip(ip)
|
||||
if not normalized_ip:
|
||||
return ProviderResult()
|
||||
|
||||
cache_file = self._get_cache_file_path(normalized_ip)
|
||||
cache_status = self._get_cache_status(cache_file)
|
||||
|
||||
result = ProviderResult()
|
||||
|
||||
return []
|
||||
|
||||
relationships = []
|
||||
|
||||
try:
|
||||
if cache_status == "fresh":
|
||||
result = self._load_from_cache(cache_file)
|
||||
self.logger.logger.info(f"Using cached Shodan data for {normalized_ip}")
|
||||
else: # "stale" or "not_found"
|
||||
url = f"{self.base_url}/shodan/host/{normalized_ip}"
|
||||
params = {'key': self.api_key}
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=normalized_ip)
|
||||
|
||||
if response and response.status_code == 200:
|
||||
data = response.json()
|
||||
# Process the data into ProviderResult BEFORE caching
|
||||
result = self._process_shodan_data(normalized_ip, data)
|
||||
self._save_to_cache(cache_file, result, data) # Save both result and raw data
|
||||
elif response and response.status_code == 404:
|
||||
# Handle 404 "No information available" as successful empty result
|
||||
try:
|
||||
error_data = response.json()
|
||||
if "No information available" in error_data.get('error', ''):
|
||||
# This is a successful query - Shodan just has no data
|
||||
self.logger.logger.debug(f"Shodan has no information for {normalized_ip}")
|
||||
result = ProviderResult() # Empty but successful result
|
||||
# Cache the empty result to avoid repeated queries
|
||||
self._save_to_cache(cache_file, result, {'error': 'No information available'})
|
||||
else:
|
||||
# Some other 404 error - treat as failure
|
||||
raise requests.exceptions.RequestException(f"Shodan API returned 404: {error_data}")
|
||||
except (ValueError, KeyError):
|
||||
# Could not parse JSON response - treat as failure
|
||||
raise requests.exceptions.RequestException(f"Shodan API returned 404 with unparseable response")
|
||||
elif cache_status == "stale":
|
||||
# If API fails on a stale cache, use the old data
|
||||
result = self._load_from_cache(cache_file)
|
||||
else:
|
||||
# Other HTTP error codes should be treated as failures
|
||||
status_code = response.status_code if response else "No response"
|
||||
raise requests.exceptions.RequestException(f"Shodan API returned HTTP {status_code}")
|
||||
# Query Shodan host information
|
||||
url = f"{self.base_url}/shodan/host/{ip}"
|
||||
params = {'key': self.api_key}
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
self.logger.logger.info(f"Shodan API query returned no info for {normalized_ip}: {e}")
|
||||
if cache_status == "stale":
|
||||
result = self._load_from_cache(cache_file)
|
||||
else:
|
||||
# Re-raise for retry scheduling - but only for actual failures
|
||||
raise e
|
||||
|
||||
return result
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
|
||||
|
||||
def _load_from_cache(self, cache_file_path: Path) -> ProviderResult:
|
||||
"""Load processed Shodan data from a cache file."""
|
||||
try:
|
||||
with open(cache_file_path, 'r') as f:
|
||||
cache_content = json.load(f)
|
||||
|
||||
result = ProviderResult()
|
||||
|
||||
# Reconstruct relationships
|
||||
for rel_data in cache_content.get("relationships", []):
|
||||
result.add_relationship(
|
||||
source_node=rel_data["source_node"],
|
||||
target_node=rel_data["target_node"],
|
||||
relationship_type=rel_data["relationship_type"],
|
||||
provider=rel_data["provider"],
|
||||
confidence=rel_data["confidence"],
|
||||
raw_data=rel_data.get("raw_data", {})
|
||||
)
|
||||
|
||||
# Reconstruct attributes
|
||||
for attr_data in cache_content.get("attributes", []):
|
||||
result.add_attribute(
|
||||
target_node=attr_data["target_node"],
|
||||
name=attr_data["name"],
|
||||
value=attr_data["value"],
|
||||
attr_type=attr_data["type"],
|
||||
provider=attr_data["provider"],
|
||||
confidence=attr_data["confidence"],
|
||||
metadata=attr_data.get("metadata", {})
|
||||
)
|
||||
|
||||
return result
|
||||
|
||||
except (json.JSONDecodeError, FileNotFoundError, KeyError):
|
||||
return ProviderResult()
|
||||
if not response or response.status_code != 200:
|
||||
return []
|
||||
|
||||
def _save_to_cache(self, cache_file_path: Path, result: ProviderResult, raw_data: Dict[str, Any]) -> None:
|
||||
"""Save processed Shodan data to a cache file."""
|
||||
try:
|
||||
cache_data = {
|
||||
"last_upstream_query": datetime.now(timezone.utc).isoformat(),
|
||||
"raw_data": raw_data, # Preserve original for forensic purposes
|
||||
"relationships": [
|
||||
{
|
||||
"source_node": rel.source_node,
|
||||
"target_node": rel.target_node,
|
||||
"relationship_type": rel.relationship_type,
|
||||
"confidence": rel.confidence,
|
||||
"provider": rel.provider,
|
||||
"raw_data": rel.raw_data
|
||||
} for rel in result.relationships
|
||||
],
|
||||
"attributes": [
|
||||
{
|
||||
"target_node": attr.target_node,
|
||||
"name": attr.name,
|
||||
"value": attr.value,
|
||||
"type": attr.type,
|
||||
"provider": attr.provider,
|
||||
"confidence": attr.confidence,
|
||||
"metadata": attr.metadata
|
||||
} for attr in result.attributes
|
||||
]
|
||||
}
|
||||
with open(cache_file_path, 'w') as f:
|
||||
json.dump(cache_data, f, separators=(',', ':'), default=str)
|
||||
except Exception as e:
|
||||
self.logger.logger.warning(f"Failed to save Shodan cache for {cache_file_path.name}: {e}")
|
||||
data = response.json()
|
||||
|
||||
def _process_shodan_data(self, ip: str, data: Dict[str, Any]) -> ProviderResult:
|
||||
"""
|
||||
VERIFIED: Process Shodan data creating ISP nodes with ASN attributes and proper relationships.
|
||||
Enhanced to include IP version information for IPv6 addresses.
|
||||
"""
|
||||
result = ProviderResult()
|
||||
|
||||
# Determine IP version for metadata
|
||||
ip_version = get_ip_version(ip)
|
||||
# Extract hostname relationships
|
||||
hostnames = data.get('hostnames', [])
|
||||
for hostname in hostnames:
|
||||
if _is_valid_domain(hostname):
|
||||
raw_data = {
|
||||
'ip_address': ip,
|
||||
'hostname': hostname,
|
||||
'country': data.get('country_name', ''),
|
||||
'city': data.get('city', ''),
|
||||
'isp': data.get('isp', ''),
|
||||
'org': data.get('org', ''),
|
||||
'asn': data.get('asn', ''),
|
||||
'ports': data.get('ports', []),
|
||||
'last_update': data.get('last_update', ''),
|
||||
'os': data.get('os', '')
|
||||
}
|
||||
|
||||
# VERIFIED: Extract ISP information and create proper ISP node with ASN
|
||||
isp_name = data.get('org')
|
||||
asn_value = data.get('asn')
|
||||
relationships.append((
|
||||
ip,
|
||||
hostname,
|
||||
'a_record', # IP resolves to hostname
|
||||
0.8,
|
||||
raw_data
|
||||
))
|
||||
|
||||
if isp_name and asn_value:
|
||||
# Create relationship from IP to ISP
|
||||
result.add_relationship(
|
||||
source_node=ip,
|
||||
target_node=isp_name,
|
||||
relationship_type='shodan_isp',
|
||||
provider=self.name,
|
||||
confidence=0.9,
|
||||
raw_data={'asn': asn_value, 'shodan_org': isp_name, 'ip_version': ip_version}
|
||||
)
|
||||
|
||||
# Add ASN as attribute to the ISP node
|
||||
result.add_attribute(
|
||||
target_node=isp_name,
|
||||
name='asn',
|
||||
value=asn_value,
|
||||
attr_type='isp_info',
|
||||
provider=self.name,
|
||||
confidence=0.9,
|
||||
metadata={'description': 'Autonomous System Number from Shodan', 'ip_version': ip_version}
|
||||
)
|
||||
|
||||
# Also add organization name as attribute to ISP node for completeness
|
||||
result.add_attribute(
|
||||
target_node=isp_name,
|
||||
name='organization_name',
|
||||
value=isp_name,
|
||||
attr_type='isp_info',
|
||||
provider=self.name,
|
||||
confidence=0.9,
|
||||
metadata={'description': 'Organization name from Shodan', 'ip_version': ip_version}
|
||||
)
|
||||
|
||||
# Process hostnames (reverse DNS)
|
||||
for key, value in data.items():
|
||||
if key == 'hostnames':
|
||||
for hostname in value:
|
||||
if _is_valid_domain(hostname):
|
||||
# Use appropriate relationship type based on IP version
|
||||
if ip_version == 6:
|
||||
relationship_type = 'shodan_aaaa_record'
|
||||
else:
|
||||
relationship_type = 'shodan_a_record'
|
||||
|
||||
result.add_relationship(
|
||||
source_node=ip,
|
||||
target_node=hostname,
|
||||
relationship_type=relationship_type,
|
||||
provider=self.name,
|
||||
confidence=0.8,
|
||||
raw_data={**data, 'ip_version': ip_version}
|
||||
)
|
||||
self.log_relationship_discovery(
|
||||
source_node=ip,
|
||||
target_node=hostname,
|
||||
relationship_type=relationship_type,
|
||||
confidence_score=0.8,
|
||||
raw_data={**data, 'ip_version': ip_version},
|
||||
discovery_method=f"shodan_host_lookup_ipv{ip_version}"
|
||||
)
|
||||
elif key == 'ports':
|
||||
# Add open ports as attributes to the IP
|
||||
for port in value:
|
||||
result.add_attribute(
|
||||
target_node=ip,
|
||||
name='shodan_open_port',
|
||||
value=port,
|
||||
attr_type='shodan_network_info',
|
||||
provider=self.name,
|
||||
confidence=0.9,
|
||||
metadata={'ip_version': ip_version}
|
||||
self.log_relationship_discovery(
|
||||
source_node=ip,
|
||||
target_node=hostname,
|
||||
relationship_type='a_record',
|
||||
confidence_score=0.8,
|
||||
raw_data=raw_data,
|
||||
discovery_method="shodan_host_lookup"
|
||||
)
|
||||
elif isinstance(value, (str, int, float, bool)) and value is not None:
|
||||
# Add other Shodan fields as IP attributes (keep raw field names)
|
||||
result.add_attribute(
|
||||
target_node=ip,
|
||||
name=key, # Raw field name from Shodan API
|
||||
value=value,
|
||||
attr_type='shodan_info',
|
||||
provider=self.name,
|
||||
confidence=0.9,
|
||||
metadata={'ip_version': ip_version}
|
||||
|
||||
# Extract ASN relationship if available
|
||||
asn = data.get('asn')
|
||||
if asn:
|
||||
# Ensure the ASN starts with "AS"
|
||||
if isinstance(asn, str) and asn.startswith('AS'):
|
||||
asn_name = asn
|
||||
asn_number = asn[2:]
|
||||
else:
|
||||
asn_name = f"AS{asn}"
|
||||
asn_number = str(asn)
|
||||
|
||||
asn_raw_data = {
|
||||
'ip_address': ip,
|
||||
'asn': asn_number,
|
||||
'isp': data.get('isp', ''),
|
||||
'org': data.get('org', '')
|
||||
}
|
||||
|
||||
relationships.append((
|
||||
ip,
|
||||
asn_name,
|
||||
'asn_membership',
|
||||
0.7,
|
||||
asn_raw_data
|
||||
))
|
||||
|
||||
self.log_relationship_discovery(
|
||||
source_node=ip,
|
||||
target_node=asn_name,
|
||||
relationship_type='asn_membership',
|
||||
confidence_score=0.7,
|
||||
raw_data=asn_raw_data,
|
||||
discovery_method="shodan_asn_lookup"
|
||||
)
|
||||
|
||||
return result
|
||||
except json.JSONDecodeError as e:
|
||||
self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
|
||||
|
||||
return relationships
|
||||
|
||||
def search_by_organization(self, org_name: str) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Search Shodan for hosts belonging to a specific organization.
|
||||
|
||||
Args:
|
||||
org_name: Organization name to search for
|
||||
|
||||
Returns:
|
||||
List of host information dictionaries
|
||||
"""
|
||||
if not self.is_available():
|
||||
return []
|
||||
|
||||
try:
|
||||
search_query = f"org:\"{org_name}\""
|
||||
url = f"{self.base_url}/shodan/host/search"
|
||||
params = {
|
||||
'key': self.api_key,
|
||||
'query': search_query,
|
||||
'minify': True
|
||||
}
|
||||
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=org_name)
|
||||
|
||||
if response and response.status_code == 200:
|
||||
data = response.json()
|
||||
return data.get('matches', [])
|
||||
|
||||
except Exception as e:
|
||||
self.logger.logger.error(f"Error searching Shodan by organization {org_name}: {e}")
|
||||
|
||||
return []
|
||||
|
||||
def get_host_services(self, ip: str) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Get service information for a specific IP address.
|
||||
|
||||
Args:
|
||||
ip: IP address to query
|
||||
|
||||
Returns:
|
||||
List of service information dictionaries
|
||||
"""
|
||||
if not _is_valid_ip(ip) or not self.is_available():
|
||||
return []
|
||||
|
||||
try:
|
||||
url = f"{self.base_url}/shodan/host/{ip}"
|
||||
params = {'key': self.api_key}
|
||||
|
||||
response = self.make_request(url, method="GET", params=params, target_indicator=ip)
|
||||
|
||||
if response and response.status_code == 200:
|
||||
data = response.json()
|
||||
return data.get('data', []) # Service banners
|
||||
|
||||
except Exception as e:
|
||||
self.logger.logger.error(f"Error getting Shodan services for IP {ip}: {e}")
|
||||
|
||||
return []
|
||||
@@ -1,10 +1,9 @@
Flask
networkx
requests
python-dateutil
Werkzeug
urllib3
dnspython
Flask>=2.3.3
networkx>=3.1
requests>=2.31.0
python-dateutil>=2.8.2
Werkzeug>=2.3.7
urllib3>=2.0.0
dnspython>=2.4.2
gunicorn
redis
python-dotenv
redis
static/css/main.css (1680 lines): file diff suppressed because it is too large
static/js/main.js (1959 lines): file diff suppressed because it is too large
@@ -1,6 +1,5 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
@@ -8,11 +7,8 @@
|
||||
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}">
|
||||
<script src="https://cdnjs.cloudflare.com/ajax/libs/vis/4.21.0/vis.min.js"></script>
|
||||
<link href="https://cdnjs.cloudflare.com/ajax/libs/vis/4.21.0/vis.min.css" rel="stylesheet" type="text/css">
|
||||
<link
|
||||
href="https://fonts.googleapis.com/css2?family=Roboto+Mono:wght@300;400;500;700&family=Special+Elite&display=swap"
|
||||
rel="stylesheet">
|
||||
<link href="https://fonts.googleapis.com/css2?family=Roboto+Mono:wght@300;400;500;700&family=Special+Elite&display=swap" rel="stylesheet">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
<div class="container">
|
||||
<header class="header">
|
||||
@@ -33,13 +29,24 @@
|
||||
<div class="panel-header">
|
||||
<h2>Target Configuration</h2>
|
||||
</div>
|
||||
|
||||
|
||||
<div class="form-container">
|
||||
<div class="input-group">
|
||||
<label for="target-input">Target Domain or IP</label>
|
||||
<input type="text" id="target-input" placeholder="example.com or 8.8.8.8" autocomplete="off">
|
||||
<label for="target-domain">Target Domain</label>
|
||||
<input type="text" id="target-domain" placeholder="example.com" autocomplete="off">
|
||||
</div>
|
||||
|
||||
|
||||
<div class="input-group">
|
||||
<label for="max-depth">Recursion Depth</label>
|
||||
<select id="max-depth">
|
||||
<option value="1">Depth 1 - Direct relationships</option>
|
||||
<option value="2" selected>Depth 2 - Recommended</option>
|
||||
<option value="3">Depth 3 - Extended analysis</option>
|
||||
<option value="4">Depth 4 - Deep reconnaissance</option>
|
||||
<option value="5">Depth 5 - Maximum depth</option>
|
||||
</select>
|
||||
</div>
|
||||
|
||||
<div class="button-group">
|
||||
<button id="start-scan" class="btn btn-primary">
|
||||
<span class="btn-icon">[RUN]</span>
|
||||
@@ -53,13 +60,13 @@
|
||||
<span class="btn-icon">[STOP]</span>
|
||||
<span>Terminate Scan</span>
|
||||
</button>
|
||||
<button id="export-options" class="btn btn-secondary">
|
||||
<button id="export-results" class="btn btn-secondary">
|
||||
<span class="btn-icon">[EXPORT]</span>
|
||||
<span>Export Options</span>
|
||||
<span>Download Results</span>
|
||||
</button>
|
||||
<button id="configure-settings" class="btn btn-secondary">
|
||||
<button id="configure-api-keys" class="btn btn-secondary">
|
||||
<span class="btn-icon">[API]</span>
|
||||
<span>Settings</span>
|
||||
<span>Configure API Keys</span>
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
@@ -69,7 +76,7 @@
|
||||
<div class="panel-header">
|
||||
<h2>Reconnaissance Status</h2>
|
||||
</div>
|
||||
|
||||
|
||||
<div class="status-content">
|
||||
<div class="status-row">
|
||||
<span class="status-label">Current Status:</span>
|
||||
@@ -83,81 +90,87 @@
|
||||
<span class="status-label">Depth:</span>
|
||||
<span id="depth-display" class="status-value">0/0</span>
|
||||
</div>
|
||||
<div class="status-row">
|
||||
<span class="status-label">Progress:</span>
|
||||
<span id="progress-display" class="status-value">0%</span>
|
||||
</div>
|
||||
<div class="status-row">
|
||||
<span class="status-label">Indicators:</span>
|
||||
<span id="indicators-display" class="status-value">0</span>
|
||||
</div>
|
||||
<div class="status-row">
|
||||
<span class="status-label">Relationships:</span>
|
||||
<span id="relationships-display" class="status-value">0</span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="progress-container">
|
||||
<div class="progress-info">
|
||||
<span id="progress-label">Progress:</span>
|
||||
<span id="progress-compact">0/0</span>
|
||||
</div>
|
||||
<div class="progress-bar">
|
||||
<div id="progress-fill" class="progress-fill"></div>
|
||||
</div>
|
||||
<div class="progress-placeholder">
|
||||
<span class="status-label">
|
||||
⚠️ <strong>Important:</strong> Scanning large public services (e.g., Google, Cloudflare,
|
||||
AWS) is
|
||||
<strong>discouraged</strong> due to rate limits (e.g., crt.sh).
|
||||
<br><br>
|
||||
Our task scheduler operates on a <strong>priority-based queue</strong>:
|
||||
Short, targeted tasks like DNS are processed first, while resource-intensive requests (e.g.,
|
||||
crt.sh)
|
||||
are <strong>automatically deprioritized</strong> and may be processed later.
|
||||
</span>
|
||||
</div>
|
||||
|
||||
<div class="progress-bar">
|
||||
<div id="progress-fill" class="progress-fill"></div>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<section class="visualization-panel">
|
||||
<div class="panel-header">
|
||||
<h2>Infrastructure Map</h2>
|
||||
</div>
|
||||
|
||||
<div id="network-graph" class="graph-container">
|
||||
<div class="graph-placeholder">
|
||||
<div class="placeholder-content">
|
||||
<div class="placeholder-icon">[◯]</div>
|
||||
<div class="placeholder-text">Infrastructure map will appear here</div>
|
||||
<div class="placeholder-subtext">Start a reconnaissance scan to visualize relationships
|
||||
</div>
|
||||
<div class="view-controls">
|
||||
<div class="filter-group">
|
||||
<label for="node-type-filter">Node Type:</label>
|
||||
<select id="node-type-filter">
|
||||
<option value="all">All</option>
|
||||
<option value="domain">Domain</option>
|
||||
<option value="ip">IP</option>
|
||||
<option value="asn">ASN</option>
|
||||
<option value="correlation_object">Correlation Object</option>
|
||||
<option value="large_entity">Large Entity</option>
|
||||
</select>
|
||||
</div>
|
||||
<div class="filter-group">
|
||||
<label for="confidence-filter">Min Confidence:</label>
|
||||
<input type="range" id="confidence-filter" min="0" max="1" step="0.1" value="0">
|
||||
<span id="confidence-value">0</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
||||
<div id="network-graph" class="graph-container">
|
||||
<div class="graph-placeholder">
|
||||
<div class="placeholder-content">
|
||||
<div class="placeholder-icon">[○]</div>
|
||||
<div class="placeholder-text">Infrastructure map will appear here</div>
|
||||
<div class="placeholder-subtext">Start a reconnaissance scan to visualize relationships</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="legend">
|
||||
<div class="legend-item">
|
||||
<div class="legend-color" style="background-color: #00ff41;"></div>
|
||||
<span>Domains</span>
|
||||
</div>
|
||||
<div class="legend-item">
|
||||
<div class="legend-color" style="background-color: #c92f2f;"></div>
|
||||
<span>Domain (no valid cert)</span>
|
||||
</div>
|
||||
<div class="legend-item">
|
||||
<div class="legend-color" style="background-color: #c7c7c7;"></div>
|
||||
<span>Domain (never had cert)</span>
|
||||
</div>
|
||||
<div class="legend-item">
|
||||
<div class="legend-color" style="background-color: #ff9900;"></div>
|
||||
<span>IP Addresses</span>
|
||||
</div>
|
||||
<div class="legend-item">
|
||||
<div class="legend-color" style="background-color: #00aaff;"></div>
|
||||
<span>ISPs</span>
|
||||
<div class="legend-color" style="background-color: #c7c7c7;"></div>
|
||||
<span>Domain (invalid cert)</span>
|
||||
</div>
|
||||
<div class="legend-item">
|
||||
<div class="legend-color" style="background-color: #ff6b6b;"></div>
|
||||
<span>Certificate Authorities</span>
|
||||
</div>
|
||||
|
||||
<div class="legend-item">
|
||||
<div class="legend-color" style="background-color: #9d4edd;"></div>
|
||||
<span>Correlation Objects</span>
|
||||
</div>
|
||||
<div class="legend-item">
|
||||
<div class="legend-edge high-confidence"></div>
|
||||
<span>High Confidence</span>
|
||||
</div>
|
||||
<div class="legend-item">
|
||||
<div class="legend-edge medium-confidence"></div>
|
||||
<span>Medium Confidence</span>
|
||||
</div>
|
||||
<div class="legend-item">
|
||||
<div class="legend-color" style="background-color: #ff6b6b;"></div>
|
||||
<span>Large Entity</span>
|
||||
</div>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
@@ -165,9 +178,9 @@
|
||||
<div class="panel-header">
|
||||
<h2>Data Providers</h2>
|
||||
</div>
|
||||
|
||||
|
||||
<div id="provider-list" class="provider-list">
|
||||
</div>
|
||||
</div>
|
||||
</section>
|
||||
</main>
|
||||
|
||||
@@ -189,127 +202,49 @@
|
||||
</div>
|
||||
<div class="modal-body">
|
||||
<div id="modal-details">
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Settings Modal -->
|
||||
<div id="settings-modal" class="modal">
|
||||
<div class="modal-content">
|
||||
<div class="modal-header">
|
||||
<h3>Scanner Configuration</h3>
|
||||
<button id="settings-modal-close" class="modal-close">[×]</button>
|
||||
</div>
|
||||
<div class="modal-body">
|
||||
<div class="modal-details">
|
||||
<!-- Scan Settings Section -->
|
||||
<section class="modal-section">
|
||||
<details open>
|
||||
<summary>
|
||||
<span>⚙️ Scan Settings</span>
|
||||
</summary>
|
||||
<div class="modal-section-content">
|
||||
<div class="input-group">
|
||||
<label for="max-depth">Recursion Depth</label>
|
||||
<select id="max-depth">
|
||||
<option value="1">Depth 1 - Direct relationships</option>
|
||||
<option value="2" selected>Depth 2 - Recommended</option>
|
||||
<option value="3">Depth 3 - Extended analysis</option>
|
||||
<option value="4">Depth 4 - Deep reconnaissance</option>
|
||||
<option value="5">Depth 5 - Maximum depth</option>
|
||||
</select>
|
||||
</div>
|
||||
</div>
|
||||
</details>
|
||||
</section>
|
||||
|
||||
<!-- Provider Configuration Section -->
|
||||
<section class="modal-section">
|
||||
<details open>
|
||||
<summary>
|
||||
<span>🔧 Provider Configuration</span>
|
||||
<span class="merge-badge" id="provider-count">0</span>
|
||||
</summary>
|
||||
<div class="modal-section-content">
|
||||
<div id="provider-config-list">
|
||||
<!-- Dynamically populated -->
|
||||
</div>
|
||||
</div>
|
||||
</details>
|
||||
</section>
|
||||
|
||||
<!-- API Keys Section -->
|
||||
<section class="modal-section">
|
||||
<details>
|
||||
<summary>
|
||||
<span>🔑 API Keys</span>
|
||||
<span class="merge-badge" id="api-key-count">0</span>
|
||||
</summary>
|
||||
<div class="modal-section-content">
|
||||
<p class="placeholder-subtext" style="margin-bottom: 1rem;">
|
||||
⚠️ API keys are stored in memory for the current session only.
|
||||
Only provide API keys you don't use for anything else.
|
||||
</p>
|
||||
<div id="api-key-inputs">
|
||||
<!-- Dynamically populated -->
|
||||
</div>
|
||||
</div>
|
||||
</details>
|
||||
</section>
|
||||
|
||||
<!-- Action Buttons -->
|
||||
<div class="button-group" style="margin-top: 1.5rem;">
|
||||
<button id="save-settings" class="btn btn-primary">
|
||||
<span class="btn-icon">[SAVE]</span>
|
||||
<span>Save Configuration</span>
|
||||
</button>
|
||||
<button id="reset-settings" class="btn btn-secondary">
|
||||
<span class="btn-icon">[RESET]</span>
|
||||
<span>Reset to Defaults</span>
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Export Modal -->
|
||||
<div id="export-modal" class="modal">
|
||||
<div id="api-key-modal" class="modal">
|
||||
<div class="modal-content">
|
||||
<div class="modal-header">
|
||||
<h3>Export Options</h3>
|
||||
<button id="export-modal-close" class="modal-close">[×]</button>
|
||||
<h3>Configure API Keys</h3>
|
||||
<button id="api-key-modal-close" class="modal-close">[×]</button>
|
||||
</div>
|
||||
<div class="modal-body">
|
||||
<div class="modal-details">
|
||||
<section class="modal-section">
|
||||
<details open>
|
||||
<summary>
|
||||
<span>📊 Available Exports</span>
|
||||
</summary>
|
||||
<div class="modal-section-content">
|
||||
<div class="button-group" style="margin-top: 1rem;">
|
||||
<button id="export-graph-json" class="btn btn-primary">
|
||||
<span class="btn-icon">[JSON]</span>
|
||||
<span>Export Graph Data</span>
|
||||
</button>
|
||||
<div class="status-row" style="margin-top: 0.5rem;">
|
||||
<span class="status-label">Complete graph data with forensic audit trail,
|
||||
provider statistics, and scan metadata in JSON format for analysis and
|
||||
archival.</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</details>
|
||||
</section>
|
||||
<p class="modal-description">
|
||||
Enter your API keys for enhanced data providers. Keys are stored in memory for the current session only and are never saved to disk.
|
||||
</p>
|
||||
<div id="api-key-inputs">
|
||||
</div>
|
||||
<div class="button-group" style="flex-direction: row; justify-content: flex-end;">
|
||||
<button id="reset-api-keys" class="btn btn-secondary">
|
||||
<span>Reset</span>
|
||||
</button>
|
||||
<button id="save-api-keys" class="btn btn-primary">
|
||||
<span>Save Keys</span>
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
function copyToClipboard(elementId) {
|
||||
const element = document.getElementById(elementId);
|
||||
const textToCopy = element.innerText;
|
||||
navigator.clipboard.writeText(textToCopy).then(() => {
|
||||
// Optional: Show a success message
|
||||
console.log('Copied to clipboard');
|
||||
}).catch(err => {
|
||||
console.error('Failed to copy: ', err);
|
||||
});
|
||||
}
|
||||
</script>
|
||||
<script src="{{ url_for('static', filename='js/graph.js') }}"></script>
|
||||
<script src="{{ url_for('static', filename='js/main.js') }}"></script>
|
||||
</body>
|
||||
|
||||
</html>
|
||||
@@ -1,8 +1,3 @@
# dnsrecon-reduced/utils/helpers.py

import ipaddress
from typing import Union

def _is_valid_domain(domain: str) -> bool:
    """
    Basic domain validation.
@@ -31,64 +26,25 @@ def _is_valid_domain(domain: str) -> bool:

def _is_valid_ip(ip: str) -> bool:
    """
    IP address validation supporting both IPv4 and IPv6.
    Basic IP address validation.

    Args:
        ip: IP address string to validate

    Returns:
        True if IP appears valid (IPv4 or IPv6)
        True if IP appears valid
    """
    if not ip:
        return False

    try:
        # This handles both IPv4 and IPv6 validation
        ipaddress.ip_address(ip.strip())
        parts = ip.split('.')
        if len(parts) != 4:
            return False

        for part in parts:
            num = int(part)
            if not 0 <= num <= 255:
                return False

        return True

    except (ValueError, AttributeError):
        return False

def is_valid_target(target: str) -> bool:
    """
    Checks if the target is a valid domain or IP address (IPv4/IPv6).

    Args:
        target: The target string to validate.

    Returns:
        True if the target is a valid domain or IP, False otherwise.
    """
    return _is_valid_domain(target) or _is_valid_ip(target)

def get_ip_version(ip: str) -> Union[int, None]:
    """
    Get the IP version (4 or 6) of a valid IP address.

    Args:
        ip: IP address string

    Returns:
        4 for IPv4, 6 for IPv6, None if invalid
    """
    try:
        addr = ipaddress.ip_address(ip.strip())
        return addr.version
    except (ValueError, AttributeError):
        return None

def normalize_ip(ip: str) -> Union[str, None]:
    """
    Normalize an IP address to its canonical form.

    Args:
        ip: IP address string

    Returns:
        Normalized IP address string, None if invalid
    """
    try:
        addr = ipaddress.ip_address(ip.strip())
        return str(addr)
    except (ValueError, AttributeError):
        return None
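For reference, a short usage sketch of the `ipaddress`-based helpers defined in the fuller version of helpers.py shown in this hunk (assuming the module is importable as `utils.helpers`; example values only):

```python
from utils.helpers import is_valid_target, get_ip_version, normalize_ip

print(is_valid_target("example.com"))        # True  (valid domain)
print(is_valid_target("2001:db8::1"))        # True  (valid IPv6 address)
print(get_ip_version("192.0.2.10"))          # 4
print(get_ip_version("2001:db8::1"))         # 6
print(normalize_ip("2001:0DB8:0000::0001"))  # 2001:db8::1
```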