Compare commits: 0021bbc696...database_c

42 commits
| SHA1 |
|---|
| 9f3b17e658 |
| eb9eea127b |
| ae07635ab6 |
| d7adf9ad8b |
| 39ce0e9d11 |
| 926f9e1096 |
| 9499e62ccc |
| 89ae06482e |
| 7fe7ca41ba |
| 949fbdbb45 |
| 689e8c00d4 |
| 3511f18f9a |
| 72f7056bc7 |
| 2ae33bc5ba |
| c91913fa13 |
| 2185177a84 |
| b7a57f1552 |
| 41d556e2ce |
| 2974312278 |
| 930fdca500 |
| 2925512a4d |
| 717f103596 |
| 612f414d2a |
| 53baf2e291 |
| 84810cdbb0 |
| d36fb7d814 |
| c0b820c96c |
| 03c52abd1b |
| 2d62191aa0 |
| d2e4c6ee49 |
| 9e66fd0785 |
| b250109736 |
| a535d25714 |
| 4f69cabd41 |
| 8b7a0656bb |
| 007ebbfd73 |
| 3ecfca95e6 |
| 7e2473b521 |
| f445187025 |
| df4e1703c4 |
| 646b569ced |
| b47e679992 |
.env.example (new file, +34 lines)

```diff
@@ -0,0 +1,34 @@
+# ===============================================
+# DNSRecon Environment Variables
+# ===============================================
+# Copy this file to .env and fill in your values.
+
+# --- API Keys ---
+# Add your Shodan API key for the Shodan provider to be enabled.
+SHODAN_API_KEY=
+
+# --- Flask & Session Settings ---
+# A strong, random secret key is crucial for session security.
+FLASK_SECRET_KEY=your-very-secret-and-random-key-here
+FLASK_HOST=127.0.0.1
+FLASK_PORT=5000
+FLASK_DEBUG=True
+# How long a user's session in the browser lasts (in hours).
+FLASK_PERMANENT_SESSION_LIFETIME_HOURS=2
+# How long inactive scanner data is stored in Redis (in minutes).
+SESSION_TIMEOUT_MINUTES=60
+
+
+# --- Application Core Settings ---
+# The default number of levels to recurse when scanning.
+DEFAULT_RECURSION_DEPTH=2
+# Default timeout for provider API requests in seconds.
+DEFAULT_TIMEOUT=30
+# The number of concurrent provider requests to make.
+MAX_CONCURRENT_REQUESTS=5
+# The number of results from a provider that triggers the "large entity" grouping.
+LARGE_ENTITY_THRESHOLD=100
+# The number of times to retry a target if a provider fails.
+MAX_RETRIES_PER_TARGET=3
+# How long cached provider responses are stored (in hours).
+CACHE_EXPIRY_HOURS=12
```
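These variables are consumed at application startup by `config.py` (rewritten later in this diff) through `python-dotenv`. A minimal sketch of that loading pattern, using variable names from the file above:

```python
# Sketch: how a .env file like the one above is read at startup.
# Assumes python-dotenv is installed; config.py below imports it the same way.
import os
from dotenv import load_dotenv

load_dotenv()  # copies values from .env into os.environ (existing env wins)

# Unset keys fall back to the given defaults, mirroring Config.load_from_env().
recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', 2))
shodan_key = os.getenv('SHODAN_API_KEY')  # None when the line is absent
print(recursion_depth, bool(shodan_key))
```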
.gitignore (vendored, +1 line)

```diff
@@ -168,3 +168,4 @@ cython_debug/
 # option (not recommended) you can uncomment the following to ignore the entire idea folder.
 #.idea/
 
+dump.rdb
```
README.md (467 lines changed)

````diff
@@ -2,272 +2,255 @@
 
 DNSRecon is an interactive, passive reconnaissance tool designed to map adversary infrastructure. It operates on a "free-by-default" model, ensuring core functionality without subscriptions, while allowing power users to enhance its capabilities with paid API keys.
 
-**Current Status: Phase 1 Implementation**
-- ✅ Core infrastructure and graph engine
-- ✅ Certificate transparency data provider (crt.sh)
-- ✅ Basic web interface with real-time visualization
-- ✅ Forensic logging system
-- ✅ JSON export functionality
+**Current Status: Phase 2 Implementation**
+- ✅ Core infrastructure and graph engine
+- ✅ Multi-provider support (crt.sh, DNS, Shodan)
+- ✅ Session-based multi-user support
+- ✅ Real-time web interface with interactive visualization
+- ✅ Forensic logging system and JSON export
 
 ## Features
 
-### Core Capabilities
-- **Zero Contact Reconnaissance**: Passive data gathering without touching target infrastructure
-- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping
-- **Real-Time Visualization**: Interactive graph updates during scanning
-- **Forensic Logging**: Complete audit trail of all reconnaissance activities
-- **Confidence Scoring**: Weighted relationships based on data source reliability
-
-### Data Sources (Phase 1)
-- **Certificate Transparency (crt.sh)**: Discovers domain relationships through SSL certificate SAN analysis
-- **Basic DNS Resolution**: A/AAAA record lookups for IP relationships
-
-### Visualization
-- **Interactive Network Graph**: Powered by vis.js with cybersecurity theme
-- **Node Types**: Domains, IP addresses, certificates, ASNs
-- **Confidence-Based Styling**: Visual indicators for relationship strength
-- **Real-Time Updates**: Graph builds dynamically as relationships are discovered
+- **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
+- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
+- **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
+- **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
+- **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
+- **Session Management**: Supports concurrent user sessions with isolated scanner instances.
 
 ## Installation
 
 ### Prerequisites
 - Python 3.8 or higher
-- Modern web browser with JavaScript enabled
-
-### Setup
-1. **Clone or create the project directory**:
-   ```bash
-   mkdir dnsrecon
-   cd dnsrecon
-   ```
-
-2. **Install Python dependencies**:
-   ```bash
-   pip install -r requirements.txt
-   ```
-
-3. **Verify the directory structure**:
-   ```
-   dnsrecon/
-   ├── app.py
-   ├── config.py
-   ├── requirements.txt
-   ├── core/
-   │   ├── __init__.py
-   │   ├── graph_manager.py
-   │   ├── scanner.py
-   │   └── logger.py
-   ├── providers/
-   │   ├── __init__.py
-   │   ├── base_provider.py
-   │   └── crtsh_provider.py
-   ├── static/
-   │   ├── css/
-   │   │   └── main.css
-   │   └── js/
-   │       ├── graph.js
-   │       └── main.js
-   └── templates/
-       └── index.html
-   ```
+- A modern web browser with JavaScript enabled
+- (Recommended) A Linux host for running the application and the optional DNS cache.
+
+### 1\. Clone the Project
+
````
````diff
-## Usage
-
-### Starting the Application
-1. **Run the Flask application**:
-   ```bash
-   python app.py
-   ```
-
-2. **Open your web browser** and navigate to:
-   ```
-   http://127.0.0.1:5000
-   ```
-
-### Basic Reconnaissance Workflow
-
-1. **Enter Target Domain**: Input the domain you want to investigate (e.g., `example.com`)
-
-2. **Select Recursion Depth**:
-   - **Depth 1**: Direct relationships only
-   - **Depth 2**: Recommended for most investigations
-   - **Depth 3+**: Extended analysis for comprehensive mapping
-
-3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin passive data gathering
-
-4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered
-
-5. **Analyze Results**: Interact with the graph to explore relationships and click nodes for detailed information
-
-6. **Export Data**: Download complete results including graph data and forensic audit trail
-
-### Understanding the Visualization
-
-#### Node Types
-- 🟢 **Green Circles**: Domain names
-- 🟠 **Orange Squares**: IP addresses
-- ⚪ **Gray Diamonds**: SSL certificates
-- 🔵 **Blue Triangles**: ASN (Autonomous System) information
-
-#### Edge Confidence
-- **Thick Green Lines**: High confidence (≥80%) - Certificate SAN relationships
-- **Medium Orange Lines**: Medium confidence (60-79%) - DNS record relationships
-- **Thin Gray Lines**: Lower confidence (<60%) - Passive DNS or uncertain relationships
-
-### Example Investigation
-
-Let's investigate `github.com`:
-
-1. Enter `github.com` as the target domain
-2. Set recursion depth to 2
-3. Start the scan
-4. Observe relationships to other GitHub domains discovered through certificate analysis
-5. Export results for further analysis
-
-Expected discoveries might include:
-- `*.github.com` domains through certificate SANs
-- `github.io` and related domains
-- Associated IP addresses
-- Certificate authority relationships
-
-## Configuration
-
-### Environment Variables
-You can configure DNSRecon using environment variables:
-
 ```bash
-# API keys for future providers (Phase 2)
-export VIRUSTOTAL_API_KEY="your_api_key_here"
-export SHODAN_API_KEY="your_api_key_here"
-
-# Application settings
-export DEFAULT_RECURSION_DEPTH=2
-export FLASK_DEBUG=False
+git clone https://github.com/your-repo/dnsrecon.git
+cd dnsrecon
 ```
 
-### Rate Limiting
-DNSRecon includes built-in rate limiting to be respectful to data sources:
-- **crt.sh**: 60 requests per minute
-- **DNS queries**: 100 requests per minute
-
-## Data Export Format
+### 2\. Install Python Dependencies
+
+It is highly recommended to use a virtual environment:
+
+```bash
+python3 -m venv venv
+source venv/bin/activate
+pip install -r requirements.txt
+```
````
````diff
+### 3\. (Optional but Recommended) Set up a Local DNS Caching Resolver
+
+Running a local DNS caching resolver can significantly speed up DNS queries and reduce your network footprint. Here’s how to set up `unbound` on a Debian-based Linux distribution (like Ubuntu).
+
+**a. Install Unbound:**
+
+```bash
+sudo apt update
+sudo apt install unbound -y
+```
+
+**b. Configure Unbound:**
+Create a new configuration file for DNSRecon:
+
+```bash
+sudo nano /etc/unbound/unbound.conf.d/dnsrecon.conf
+```
+
+Add the following content to the file:
+
+```
+server:
+    # Listen on localhost for all users
+    interface: 127.0.0.1
+    access-control: 0.0.0.0/0 refuse
+    access-control: 127.0.0.0/8 allow
+
+    # Enable prefetching of popular items
+    prefetch: yes
+```
+
+**c. Restart Unbound and set it as the default resolver:**
+
+```bash
+sudo systemctl restart unbound
+sudo systemctl enable unbound
+```
+
+To use this resolver for your system, you may need to update your network settings to point to `127.0.0.1` as your DNS server.
+
+**d. Update DNSProvider to use the local resolver:**
+In `dnsrecon/providers/dns_provider.py`, you can explicitly set the resolver's nameservers in the `__init__` method:
+
+```python
+# dnsrecon/providers/dns_provider.py
+
+class DNSProvider(BaseProvider):
+    def __init__(self, session_config=None):
+        """Initialize DNS provider with session-specific configuration."""
+        super().__init__(...)
+
+        # Configure DNS resolver
+        self.resolver = dns.resolver.Resolver()
+        self.resolver.nameservers = ['127.0.0.1']  # Use local caching resolver
+        self.resolver.timeout = 5
+        self.resolver.lifetime = 10
+```
+
+## Usage (Development)
+
+### 1\. Start the Application
+
````
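Before pointing `DNSProvider` at 127.0.0.1, it is worth confirming that the unbound instance actually answers. A small verification sketch using `dnspython` (the library `dns_provider.py` already relies on); the queried domain is arbitrary:

```python
# Sketch: verify the local unbound cache answers A-record queries.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
resolver.nameservers = ['127.0.0.1']               # the unbound instance
resolver.timeout = 5
resolver.lifetime = 10

answer = resolver.resolve('example.com', 'A')      # raises on timeout/SERVFAIL
print([rdata.address for rdata in answer])
```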
````diff
-Results are exported as JSON with the following structure:
-
-```json
-{
-  "scan_metadata": {
-    "target_domain": "example.com",
-    "max_depth": 2,
-    "final_status": "completed"
-  },
-  "graph_data": {
-    "nodes": [...],
-    "edges": [...]
-  },
-  "forensic_audit": {
-    "session_metadata": {...},
-    "api_requests": [...],
-    "relationships": [...]
-  },
-  "provider_statistics": {...}
-}
-```
-
-## Forensic Integrity
-
-DNSRecon maintains complete forensic integrity:
-
-- **API Request Logging**: Every external request is logged with timestamps, URLs, and responses
-- **Relationship Provenance**: Each discovered relationship includes source provider and discovery method
-- **Session Tracking**: Unique session IDs for investigation continuity
-- **Confidence Metadata**: Scoring rationale for all relationships
-- **Export Integrity**: Complete audit trail included in all exports
-
-## Architecture Overview
-
-### Core Components
-
-- **GraphManager**: NetworkX-based in-memory graph with confidence scoring
-- **Scanner**: Multi-provider orchestration with depth-limited BFS exploration
-- **ForensicLogger**: Thread-safe audit trail with structured logging
-- **BaseProvider**: Abstract interface for data source plugins
-
-### Data Flow
-1. User initiates scan via web interface
-2. Scanner coordinates multiple data providers
-3. Relationships discovered and added to in-memory graph
-4. Real-time updates sent to web interface
-5. Graph visualization updates dynamically
-6. Complete audit trail maintained throughout
-
-## Troubleshooting
-
-### Common Issues
-
-**Graph not displaying**:
-- Ensure JavaScript is enabled in your browser
-- Check browser console for errors
-- Verify vis.js library is loading correctly
-
-**Scan fails to start**:
-- Check target domain is valid
-- Ensure crt.sh is accessible from your network
-- Review Flask console output for errors
-
-**No relationships discovered**:
-- Some domains may have limited certificate transparency data
-- Try a well-known domain like `google.com` to verify functionality
-- Check provider status in the interface
-
-### Debug Mode
-Enable debug mode for verbose logging:
 ```bash
-export FLASK_DEBUG=True
 python app.py
 ```
````
````diff
-## Development Roadmap
-
-### Phase 2 (Planned)
-- Multi-provider system with Shodan and VirusTotal integration
-- Real-time scanning with enhanced visualization
-- Provider health monitoring and failure recovery
-
-### Phase 3 (Planned)
-- Advanced correlation algorithms
-- Enhanced forensic reporting
-- Performance optimization for large investigations
+### 2\. Open Your Browser
+
+Navigate to `http://127.0.0.1:5000`.
+
+### 3\. Basic Reconnaissance Workflow
+
+1. **Enter Target Domain**: Input a domain like `example.com`.
+2. **Select Recursion Depth**: Depth 2 is recommended for most investigations.
+3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin.
+4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered.
+5. **Analyze and Export**: Interact with the graph and download the results when the scan is complete.
````
````diff
+## Production Deployment
+
+To deploy DNSRecon in a production environment, follow these steps:
+
+### 1\. Use a Production WSGI Server
+
+Do not use the built-in Flask development server for production. Use a WSGI server like **Gunicorn**:
+
+```bash
+pip install gunicorn
+gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
+```
+
+### 2\. Configure Environment Variables
+
+Set the following environment variables for a secure and configurable deployment:
+
+```bash
+# Generate a strong, random secret key
+export SECRET_KEY='your-super-secret-and-random-key'
+
+# Set Flask to production mode
+export FLASK_ENV='production'
+export FLASK_DEBUG=False
+
+# API keys (optional, but recommended for full functionality)
+export SHODAN_API_KEY="your_shodan_key"
+```
````
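The `SECRET_KEY` placeholder above should never ship as-is. Python's standard `secrets` module produces a suitable value:

```python
# Generate a strong random value for SECRET_KEY / FLASK_SECRET_KEY.
import secrets

print(secrets.token_hex(32))  # 64 hex characters, 256 bits of entropy
```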
````diff
+### 3\. Use a Reverse Proxy
+
+Set up a reverse proxy like **Nginx** to sit in front of the Gunicorn server. This provides several benefits, including:
+
+- **TLS/SSL Termination**: Securely handle HTTPS traffic.
+- **Load Balancing**: Distribute traffic across multiple application instances.
+- **Serving Static Files**: Efficiently serve CSS and JavaScript files.
+
+**Example Nginx Configuration:**
+
+```nginx
+server {
+    listen 80;
+    server_name your_domain.com;
+
+    location / {
+        return 301 https://$host$request_uri;
+    }
+}
+
+server {
+    listen 443 ssl;
+    server_name your_domain.com;
+
+    # SSL cert configuration
+    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
+    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;
+
+    location / {
+        proxy_pass http://127.0.0.1:5000;
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    }
+
+    location /static {
+        alias /path/to/your/dnsrecon/static;
+        expires 30d;
+    }
+}
+```
+
+## Autostart with systemd
+
+To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.
+
+### 1\. Create a `.service` file
+
+Create a new service file in `/etc/systemd/system/`:
+
+```bash
+sudo nano /etc/systemd/system/dnsrecon.service
+```
+
+### 2\. Add the Service Configuration
+
+Paste the following configuration into the file. **Remember to replace `/path/to/your/dnsrecon` and `your_user` with your actual project path and username.**
+
+```ini
+[Unit]
+Description=DNSRecon Application
+After=network.target
+
+[Service]
+User=your_user
+Group=your_user
+WorkingDirectory=/path/to/your/dnsrecon
+ExecStart=/path/to/your/dnsrecon/venv/bin/gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
+Restart=always
+Environment="SECRET_KEY=your-super-secret-and-random-key"
+Environment="FLASK_ENV=production"
+Environment="FLASK_DEBUG=False"
+Environment="SHODAN_API_KEY=your_shodan_key"
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 3\. Enable and Start the Service
+
+Reload the `systemd` daemon, enable the service to start on boot, and then start it immediately:
+
+```bash
+sudo systemctl daemon-reload
+sudo systemctl enable dnsrecon.service
+sudo systemctl start dnsrecon.service
+```
+
+You can check the status of the service at any time with:
+
+```bash
+sudo systemctl status dnsrecon.service
+```
````
````diff
 ## Security Considerations
 
-- **No Persistent Storage**: All data stored in memory only
-- **API Keys**: Stored in memory only, never written to disk
-- **Rate Limiting**: Prevents abuse of external services
-- **Local Use Only**: No authentication required (designed for local use)
-
-## Contributing
-
-DNSRecon follows a phased development approach. Currently in Phase 1 with core infrastructure completed.
-
-### Code Quality Standards
-- Follow PEP 8 for Python code
-- Comprehensive docstrings for all functions
-- Type hints where appropriate
-- Forensic logging for all external interactions
+- **API Keys**: API keys are stored in memory for the duration of a user session and are not written to disk.
+- **Rate Limiting**: DNSRecon includes built-in rate limiting to be respectful to data sources.
+- **Local Use**: The application is designed for local or trusted network use and does not have built-in authentication. **Do not expose it directly to the internet without proper security controls.**
 
 ## License
 
-This project is intended for legitimate security research and infrastructure analysis. Users are responsible for compliance with applicable laws and regulations.
-
-## Support
-
-For issues and questions:
-1. Check the troubleshooting section above
-2. Review the Flask console output for error details
-3. Ensure all dependencies are properly installed
-
----
-
-**DNSRecon v1.0 - Phase 1 Implementation**
-*Passive Infrastructure Reconnaissance for Security Professionals*
+This project is licensed under the terms of the license agreement found in the `LICENSE` file.
````
app.py (406 lines changed)

```diff
@@ -1,7 +1,8 @@
+# dnsrecon-reduced/app.py
+
 """
 Flask application entry point for DNSRecon web interface.
 Provides REST API endpoints and serves the web interface with user session support.
-Enhanced with better session debugging and isolation.
 """

 import json
@@ -15,53 +16,38 @@ from config import config


 app = Flask(__name__)
-app.config['SECRET_KEY'] = 'dnsrecon-dev-key-change-in-production'
-app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=2)  # 2 hour session lifetime
+# Use centralized configuration for Flask settings
+app.config['SECRET_KEY'] = config.flask_secret_key
+app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=config.flask_permanent_session_lifetime_hours)

 def get_user_scanner():
     """
-    Get or create scanner instance for current user session with enhanced debugging.
-
-    Returns:
-        Tuple of (session_id, scanner_instance)
+    Retrieves the scanner for the current session, or creates a new
+    session and scanner if one doesn't exist.
     """
     # Get current Flask session info for debugging
     current_flask_session_id = session.get('dnsrecon_session_id')
-    client_ip = request.remote_addr
-    user_agent = request.headers.get('User-Agent', '')[:100]  # Truncate for logging
-
-    print("=== SESSION DEBUG ===")
-    print(f"Client IP: {client_ip}")
-    print(f"User Agent: {user_agent}")
-    print(f"Flask Session ID: {current_flask_session_id}")
-    print(f"Flask Session Keys: {list(session.keys())}")

     # Try to get existing session
     if current_flask_session_id:
         existing_scanner = session_manager.get_session(current_flask_session_id)
         if existing_scanner:
-            print(f"Using existing session: {current_flask_session_id}")
-            print(f"Scanner status: {existing_scanner.status}")
             return current_flask_session_id, existing_scanner
-        else:
-            print(f"Session {current_flask_session_id} not found in session manager")

-    # Create new session
-    print("Creating new session...")
+    # Create new session if none exists
+    print("Creating new session as none was found...")
     new_session_id = session_manager.create_session()
     new_scanner = session_manager.get_session(new_session_id)

+    if not new_scanner:
+        raise Exception("Failed to create new scanner session")

     # Store in Flask session
     session['dnsrecon_session_id'] = new_session_id
     session.permanent = True

-    print(f"Created new session: {new_session_id}")
-    print(f"New scanner status: {new_scanner.status}")
-    print("=== END SESSION DEBUG ===")

     return new_session_id, new_scanner


 @app.route('/')
 def index():
     """Serve the main web interface."""
```
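The lookup pattern in `get_user_scanner()`, a session ID stored in the Flask cookie mapped to a server-side scanner object, can be sketched in isolation. This toy uses a plain dict where the project uses its Redis-backed `session_manager`:

```python
# Sketch of the cookie-ID-to-server-object pattern (a dict stands in for Redis).
import uuid

_store: dict = {}

def get_or_create(flask_session: dict):
    sid = flask_session.get('dnsrecon_session_id')
    if sid and sid in _store:
        return sid, _store[sid]          # reuse the existing scanner
    sid = str(uuid.uuid4())
    _store[sid] = object()               # stand-in for a Scanner instance
    flask_session['dnsrecon_session_id'] = sid
    return sid, _store[sid]
```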
```diff
@@ -71,112 +57,72 @@ def index():
 @app.route('/api/scan/start', methods=['POST'])
 def start_scan():
     """
-    Start a new reconnaissance scan for the current user session.
-    Enhanced with better error handling and debugging.
+    Start a new reconnaissance scan. Creates a new isolated scanner if
+    clear_graph is true, otherwise adds to the existing one.
     """
     print("=== API: /api/scan/start called ===")

     try:
-        print("Getting JSON data from request...")
         data = request.get_json()
-        print(f"Request data: {data}")

         if not data or 'target_domain' not in data:
-            print("ERROR: Missing target_domain in request")
-            return jsonify({
-                'success': False,
-                'error': 'Missing target_domain in request'
-            }), 400
+            return jsonify({'success': False, 'error': 'Missing target_domain in request'}), 400

         target_domain = data['target_domain'].strip()
         max_depth = data.get('max_depth', config.default_recursion_depth)
+        clear_graph = data.get('clear_graph', True)

-        print(f"Parsed - target_domain: '{target_domain}', max_depth: {max_depth}")
+        print(f"Parsed - target_domain: '{target_domain}', max_depth: {max_depth}, clear_graph: {clear_graph}")

         # Validation
         if not target_domain:
-            print("ERROR: Target domain cannot be empty")
-            return jsonify({
-                'success': False,
-                'error': 'Target domain cannot be empty'
-            }), 400
-
-        if not isinstance(max_depth, int) or max_depth < 1 or max_depth > 5:
-            print(f"ERROR: Invalid max_depth: {max_depth}")
-            return jsonify({
-                'success': False,
-                'error': 'Max depth must be an integer between 1 and 5'
-            }), 400
-
-        print("Validation passed, getting user scanner...")
-
-        # Get user-specific scanner with enhanced debugging
-        user_session_id, scanner = get_user_scanner()
-        print(f"Using session: {user_session_id}")
-        print(f"Scanner object ID: {id(scanner)}")
-        print(f"Scanner status before start: {scanner.status}")
-
-        # Additional safety check - if scanner is somehow in running state, force reset
-        if scanner.status == 'running':
-            print(f"WARNING: Scanner in session {user_session_id} was already running - forcing reset")
-            scanner.stop_scan()
-            # Give it a moment to stop
-            import time
-            time.sleep(1)
-
-            # If still running, force status reset
-            if scanner.status == 'running':
-                print("WARNING: Force resetting scanner status from 'running' to 'idle'")
-                scanner.status = 'idle'
+            return jsonify({'success': False, 'error': 'Target domain cannot be empty'}), 400
+        if not isinstance(max_depth, int) or not 1 <= max_depth <= 5:
+            return jsonify({'success': False, 'error': 'Max depth must be an integer between 1 and 5'}), 400

+        user_session_id, scanner = None, None

+        if clear_graph:
+            print("Clear graph requested: Creating a new, isolated scanner session.")
+            old_session_id = session.get('dnsrecon_session_id')
+            if old_session_id:
+                session_manager.terminate_session(old_session_id)
+            user_session_id = session_manager.create_session()
+            session['dnsrecon_session_id'] = user_session_id
+            session.permanent = True
+            scanner = session_manager.get_session(user_session_id)
+        else:
+            print("Adding to existing graph: Reusing the current scanner session.")
+            user_session_id, scanner = get_user_scanner()

+        if not scanner:
+            return jsonify({'success': False, 'error': 'Failed to get or create a scanner instance.'}), 500

-        # Start scan
-        print(f"Calling start_scan on scanner {id(scanner)}...")
-        success = scanner.start_scan(target_domain, max_depth)
-
-        print(f"scanner.start_scan returned: {success}")
-        print(f"Scanner status after start attempt: {scanner.status}")
+        print(f"Using scanner {id(scanner)} in session {user_session_id}")
+        success = scanner.start_scan(target_domain, max_depth, clear_graph=clear_graph)

         if success:
-            scan_session_id = scanner.logger.session_id
-            print(f"Scan started successfully with scan session ID: {scan_session_id}")
             return jsonify({
                 'success': True,
                 'message': 'Scan started successfully',
-                'scan_id': scan_session_id,
+                'scan_id': scanner.logger.session_id,
                 'user_session_id': user_session_id,
-                'debug_info': {
-                    'scanner_object_id': id(scanner),
-                    'scanner_status': scanner.status
-                }
             })
         else:
-            print("ERROR: Scanner returned False")
-
-            # Provide more detailed error information
-            error_details = {
-                'scanner_status': scanner.status,
-                'scanner_object_id': id(scanner),
-                'session_id': user_session_id,
-                'providers_count': len(scanner.providers) if hasattr(scanner, 'providers') else 0
-            }
-
             return jsonify({
                 'success': False,
                 'error': f'Failed to start scan (scanner status: {scanner.status})',
-                'debug_info': error_details
             }), 409

     except Exception as e:
         print(f"ERROR: Exception in start_scan endpoint: {e}")
         traceback.print_exc()
-        return jsonify({
-            'success': False,
-            'error': f'Internal server error: {str(e)}'
-        }), 500
+        return jsonify({'success': False, 'error': f'Internal server error: {str(e)}'}), 500
```
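The updated request contract for this endpoint can be exercised in a few lines (a sketch using the `requests` library against the development defaults; the fields match the handler above):

```python
# Sketch: start a scan against a local development instance.
import requests

resp = requests.post(
    'http://127.0.0.1:5000/api/scan/start',
    json={
        'target_domain': 'example.com',
        'max_depth': 2,        # must be an int between 1 and 5
        'clear_graph': True,   # False appends to the existing session graph
    },
)
print(resp.status_code, resp.json())
```

Because the server tracks scanners per Flask session cookie, a real client should reuse one `requests.Session` across the start call and any follow-up status or graph requests.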
```diff
 @app.route('/api/scan/stop', methods=['POST'])
 def stop_scan():
-    """Stop the current scan for the user session."""
+    """Stop the current scan with immediate GUI feedback."""
     print("=== API: /api/scan/stop called ===")

     try:
@@ -184,19 +130,37 @@ def stop_scan():
         user_session_id, scanner = get_user_scanner()
         print(f"Stopping scan for session: {user_session_id}")

-        success = scanner.stop_scan()
-
-        if success:
-            return jsonify({
-                'success': True,
-                'message': 'Scan stop requested',
-                'user_session_id': user_session_id
-            })
-        else:
+        if not scanner:
             return jsonify({
                 'success': False,
-                'error': 'No active scan to stop for this session'
-            }), 400
+                'error': 'No scanner found for session'
+            }), 404

+        # Ensure session ID is set
+        if not scanner.session_id:
+            scanner.session_id = user_session_id

+        # Use the stop mechanism
+        success = scanner.stop_scan()

+        # Also set the Redis stop signal directly for extra reliability
+        session_manager.set_stop_signal(user_session_id)

+        # Force immediate status update
+        session_manager.update_scanner_status(user_session_id, 'stopped')

+        # Update the full scanner state
+        session_manager.update_session_scanner(user_session_id, scanner)

+        print(f"Stop scan completed. Success: {success}, Scanner status: {scanner.status}")

+        return jsonify({
+            'success': True,
+            'message': 'Scan stop requested - termination initiated',
+            'user_session_id': user_session_id,
+            'scanner_status': scanner.status,
+            'stop_method': 'cross_process'
+        })

     except Exception as e:
         print(f"ERROR: Exception in stop_scan endpoint: {e}")
```
```diff
@@ -209,14 +173,44 @@ def stop_scan():

 @app.route('/api/scan/status', methods=['GET'])
 def get_scan_status():
-    """Get current scan status and progress for the user session."""
+    """Get current scan status with error handling."""
     try:
         # Get user-specific scanner
         user_session_id, scanner = get_user_scanner()

+        if not scanner:
+            # Return default idle status if no scanner
+            return jsonify({
+                'success': True,
+                'status': {
+                    'status': 'idle',
+                    'target_domain': None,
+                    'current_depth': 0,
+                    'max_depth': 0,
+                    'current_indicator': '',
+                    'total_indicators_found': 0,
+                    'indicators_processed': 0,
+                    'progress_percentage': 0.0,
+                    'enabled_providers': [],
+                    'graph_statistics': {},
+                    'user_session_id': user_session_id
+                }
+            })

+        # Ensure session ID is set
+        if not scanner.session_id:
+            scanner.session_id = user_session_id

         status = scanner.get_scan_status()
         status['user_session_id'] = user_session_id

+        # Additional debug info
+        status['debug_info'] = {
+            'scanner_object_id': id(scanner),
+            'session_id_set': bool(scanner.session_id),
+            'has_scan_thread': bool(scanner.scan_thread and scanner.scan_thread.is_alive())
+        }

         return jsonify({
             'success': True,
             'status': status
```
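A client can poll this endpoint until the scan leaves the running state. A sketch under the same assumptions as above, with `requests.Session` so the `dnsrecon_session_id` cookie is preserved:

```python
# Sketch: poll /api/scan/status until the scan stops running.
import time
import requests

s = requests.Session()  # keeps the session cookie across requests
while True:
    status = s.get('http://127.0.0.1:5000/api/scan/status').json()['status']
    print(status['status'], f"{status.get('progress_percentage', 0.0):.1f}%")
    if status['status'] != 'running':
        break
    time.sleep(2)
```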
```diff
@@ -227,17 +221,42 @@ def get_scan_status():
         traceback.print_exc()
         return jsonify({
             'success': False,
-            'error': f'Internal server error: {str(e)}'
+            'error': f'Internal server error: {str(e)}',
+            'fallback_status': {
+                'status': 'error',
+                'target_domain': None,
+                'current_depth': 0,
+                'max_depth': 0,
+                'progress_percentage': 0.0
+            }
         }), 500


 @app.route('/api/graph', methods=['GET'])
 def get_graph_data():
-    """Get current graph data for visualization for the user session."""
+    """Get current graph data with error handling."""
     try:
         # Get user-specific scanner
         user_session_id, scanner = get_user_scanner()

+        if not scanner:
+            # Return empty graph if no scanner
+            return jsonify({
+                'success': True,
+                'graph': {
+                    'nodes': [],
+                    'edges': [],
+                    'statistics': {
+                        'node_count': 0,
+                        'edge_count': 0,
+                        'creation_time': datetime.now(timezone.utc).isoformat(),
+                        'last_modified': datetime.now(timezone.utc).isoformat()
+                    }
+                },
+                'user_session_id': user_session_id
+            })

         graph_data = scanner.get_graph_data()
         return jsonify({
             'success': True,
@@ -250,10 +269,16 @@ def get_graph_data():
         traceback.print_exc()
         return jsonify({
             'success': False,
-            'error': f'Internal server error: {str(e)}'
+            'error': f'Internal server error: {str(e)}',
+            'fallback_graph': {
+                'nodes': [],
+                'edges': [],
+                'statistics': {'node_count': 0, 'edge_count': 0}
+            }
         }), 500


 @app.route('/api/export', methods=['GET'])
 def export_results():
     """Export complete scan results as downloadable JSON for the user session."""
```
```diff
@@ -299,25 +324,20 @@ def export_results():
 @app.route('/api/providers', methods=['GET'])
 def get_providers():
     """Get information about available providers for the user session."""
-    print("=== API: /api/providers called ===")

     try:
         # Get user-specific scanner
         user_session_id, scanner = get_user_scanner()

-        provider_stats = scanner.get_provider_statistics()
-
-        # Add configuration information
-        provider_info = {}
-        for provider_name, stats in provider_stats.items():
-            provider_info[provider_name] = {
-                'statistics': stats,
-                'enabled': config.is_provider_enabled(provider_name),
-                'rate_limit': config.get_rate_limit(provider_name),
-                'requires_api_key': provider_name in ['shodan', 'virustotal']
-            }
-
-        print(f"Returning provider info for session {user_session_id}: {list(provider_info.keys())}")
+        if scanner:
+            completed_tasks = scanner.indicators_completed
+            enqueued_tasks = len(scanner.task_queue)
+            print(f"DEBUG: Tasks - Completed: {completed_tasks}, Enqueued: {enqueued_tasks}")
+        else:
+            print("DEBUG: No active scanner session found.")

+        provider_info = scanner.get_provider_info()

         return jsonify({
             'success': True,
             'providers': provider_info,
```
```diff
@@ -341,7 +361,7 @@ def set_api_keys():
     try:
         data = request.get_json()

-        if not data:
+        if data is None:
             return jsonify({
                 'success': False,
                 'error': 'No API keys provided'
@@ -353,16 +373,23 @@ def set_api_keys():

         updated_providers = []

-        for provider, api_key in data.items():
-            if provider in ['shodan', 'virustotal'] and api_key.strip():
-                success = session_config.set_api_key(provider, api_key.strip())
-                if success:
-                    updated_providers.append(provider)
+        # Iterate over the API keys provided in the request data
+        for provider_name, api_key in data.items():
+            # This allows us to both set and clear keys. The config
+            # handles enabling/disabling based on if the key is empty.
+            api_key_value = str(api_key or '').strip()
+            success = session_config.set_api_key(provider_name.lower(), api_key_value)

+            if success:
+                updated_providers.append(provider_name)

         if updated_providers:
-            # Reinitialize scanner providers for this session only
+            # Reinitialize scanner providers to apply the new keys
             scanner._initialize_providers()

+            # Persist the updated scanner object back to the user's session
+            session_manager.update_session_scanner(user_session_id, scanner)

             return jsonify({
                 'success': True,
                 'message': f'API keys updated for session {user_session_id}: {", ".join(updated_providers)}',
@@ -372,7 +399,7 @@ def set_api_keys():
         else:
             return jsonify({
                 'success': False,
-                'error': 'No valid API keys were provided'
+                'error': 'No valid API keys were provided or provider names were incorrect.'
             }), 400

     except Exception as e:
```
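The handler accepts a JSON object of provider-name/key pairs, and an empty string clears a key. The route decorator for `set_api_keys()` is not visible in these hunks, so the URL below is a hypothetical placeholder:

```python
# Sketch: set (or clear, with '') a provider API key for this session.
import requests

resp = requests.post(
    'http://127.0.0.1:5000/api/config/api-keys',  # hypothetical path; the real
    json={'shodan': 'your_shodan_key'},           # route is not shown in the diff
)
print(resp.json())
```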
```diff
@@ -382,121 +409,6 @@ def set_api_keys():
             'success': False,
             'error': f'Internal server error: {str(e)}'
         }), 500

-    except Exception as e:
-        print(f"ERROR: Exception in set_api_keys endpoint: {e}")
-        traceback.print_exc()
-        return jsonify({
-            'success': False,
-            'error': f'Internal server error: {str(e)}'
-        }), 500
-
-
-@app.route('/api/session/info', methods=['GET'])
-def get_session_info():
-    """Get information about the current user session."""
-    try:
-        user_session_id, scanner = get_user_scanner()
-        session_info = session_manager.get_session_info(user_session_id)
-
-        return jsonify({
-            'success': True,
-            'session_info': session_info
-        })
-
-    except Exception as e:
-        print(f"ERROR: Exception in get_session_info endpoint: {e}")
-        traceback.print_exc()
-        return jsonify({
-            'success': False,
-            'error': f'Internal server error: {str(e)}'
-        }), 500
-
-
-@app.route('/api/session/terminate', methods=['POST'])
-def terminate_session():
-    """Terminate the current user session."""
-    try:
-        user_session_id = session.get('dnsrecon_session_id')
-
-        if user_session_id:
-            success = session_manager.terminate_session(user_session_id)
-            # Clear Flask session
-            session.pop('dnsrecon_session_id', None)
-
-            return jsonify({
-                'success': success,
-                'message': 'Session terminated' if success else 'Session not found'
-            })
-        else:
-            return jsonify({
-                'success': False,
-                'error': 'No active session to terminate'
-            }), 400
-
-    except Exception as e:
-        print(f"ERROR: Exception in terminate_session endpoint: {e}")
-        traceback.print_exc()
-        return jsonify({
-            'success': False,
-            'error': f'Internal server error: {str(e)}'
-        }), 500
-
-
-@app.route('/api/admin/sessions', methods=['GET'])
-def list_sessions():
-    """Admin endpoint to list all active sessions."""
-    try:
-        sessions = session_manager.list_active_sessions()
-        stats = session_manager.get_statistics()
-
-        return jsonify({
-            'success': True,
-            'sessions': sessions,
-            'statistics': stats
-        })
-
-    except Exception as e:
-        print(f"ERROR: Exception in list_sessions endpoint: {e}")
-        traceback.print_exc()
-        return jsonify({
-            'success': False,
-            'error': f'Internal server error: {str(e)}'
-        }), 500
-
-
-@app.route('/api/health', methods=['GET'])
-def health_check():
-    """Health check endpoint with enhanced Phase 2 information."""
-    try:
-        # Get session stats
-        session_stats = session_manager.get_statistics()
-
-        return jsonify({
-            'success': True,
-            'status': 'healthy',
-            'timestamp': datetime.now(timezone.utc).isoformat(),
-            'version': '1.0.0-phase2',
-            'phase': 2,
-            'features': {
-                'multi_provider': True,
-                'concurrent_processing': True,
-                'real_time_updates': True,
-                'api_key_management': True,
-                'enhanced_visualization': True,
-                'retry_logic': True,
-                'user_sessions': True,
-                'session_isolation': True
-            },
-            'session_statistics': session_stats
-        })
-    except Exception as e:
-        print(f"ERROR: Exception in health_check endpoint: {e}")
-        return jsonify({
-            'success': False,
-            'error': f'Health check failed: {str(e)}'
-        }), 500


 @app.errorhandler(404)
 def not_found(error):
```
config.py (131 lines changed)

```diff
@@ -5,116 +5,97 @@ Handles API key storage, rate limiting, and default settings.

 import os
 from typing import Dict, Optional
+from dotenv import load_dotenv

+# Load environment variables from .env file
+load_dotenv()

 class Config:
     """Configuration manager for DNSRecon application."""

     def __init__(self):
         """Initialize configuration with default values."""
-        self.api_keys: Dict[str, Optional[str]] = {
-            'shodan': None,
-            'virustotal': None
-        }
+        self.api_keys: Dict[str, Optional[str]] = {}

-        # Default settings
+        # --- General Settings ---
         self.default_recursion_depth = 2
-        self.default_timeout = 30
+        self.default_timeout = 15
         self.max_concurrent_requests = 5
         self.large_entity_threshold = 100
+        self.max_retries_per_target = 3
+        self.cache_expiry_hours = 12

-        # Rate limiting settings (requests per minute)
+        # --- Provider Caching Settings ---
+        self.cache_timeout_hours = 6  # Provider-specific cache timeout

+        # --- Rate Limiting (requests per minute) ---
         self.rate_limits = {
-            'crtsh': 60,        # Free service, be respectful
-            'virustotal': 4,    # Free tier limit
-            'shodan': 60,       # API dependent
-            'dns': 100          # Local DNS queries
+            'crtsh': 30,
+            'shodan': 60,
+            'dns': 100
         }

-        # Provider settings
+        # --- Provider Settings ---
         self.enabled_providers = {
-            'crtsh': True,        # Always enabled (free)
-            'dns': True,          # Always enabled (free)
-            'virustotal': False,  # Requires API key
-            'shodan': False       # Requires API key
+            'crtsh': True,
+            'dns': True,
+            'shodan': False
         }

-        # Logging configuration
+        # --- Logging ---
         self.log_level = 'INFO'
         self.log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

-        # Flask configuration
+        # --- Flask & Session Settings ---
         self.flask_host = '127.0.0.1'
         self.flask_port = 5000
         self.flask_debug = True
+        self.flask_secret_key = 'default-secret-key-change-me'
+        self.flask_permanent_session_lifetime_hours = 2
+        self.session_timeout_minutes = 60

-    def set_api_key(self, provider: str, api_key: str) -> bool:
-        """
-        Set API key for a provider.
-
-        Args:
-            provider: Provider name (shodan, virustotal)
-            api_key: API key string
-
-        Returns:
-            bool: True if key was set successfully
-        """
-        if provider in self.api_keys:
-            self.api_keys[provider] = api_key
-            self.enabled_providers[provider] = True if api_key else False
-            return True
-        return False
+        # Load environment variables to override defaults
+        self.load_from_env()

+    def load_from_env(self):
+        """Load configuration from environment variables."""
+        self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))

+        # Override settings from environment
+        self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', self.default_recursion_depth))
+        self.default_timeout = int(os.getenv('DEFAULT_TIMEOUT', self.default_timeout))
+        self.max_concurrent_requests = int(os.getenv('MAX_CONCURRENT_REQUESTS', self.max_concurrent_requests))
+        self.large_entity_threshold = int(os.getenv('LARGE_ENTITY_THRESHOLD', self.large_entity_threshold))
+        self.max_retries_per_target = int(os.getenv('MAX_RETRIES_PER_TARGET', self.max_retries_per_target))
+        self.cache_expiry_hours = int(os.getenv('CACHE_EXPIRY_HOURS', self.cache_expiry_hours))
+        self.cache_timeout_hours = int(os.getenv('CACHE_TIMEOUT_HOURS', self.cache_timeout_hours))

+        # Override Flask and session settings
+        self.flask_host = os.getenv('FLASK_HOST', self.flask_host)
+        self.flask_port = int(os.getenv('FLASK_PORT', self.flask_port))
+        self.flask_debug = os.getenv('FLASK_DEBUG', str(self.flask_debug)).lower() == 'true'
+        self.flask_secret_key = os.getenv('FLASK_SECRET_KEY', self.flask_secret_key)
+        self.flask_permanent_session_lifetime_hours = int(os.getenv('FLASK_PERMANENT_SESSION_LIFETIME_HOURS', self.flask_permanent_session_lifetime_hours))
+        self.session_timeout_minutes = int(os.getenv('SESSION_TIMEOUT_MINUTES', self.session_timeout_minutes))

+    def set_api_key(self, provider: str, api_key: Optional[str]) -> bool:
+        """Set API key for a provider."""
+        self.api_keys[provider] = api_key
+        if api_key:
+            self.enabled_providers[provider] = True
+        return True

     def get_api_key(self, provider: str) -> Optional[str]:
-        """
-        Get API key for a provider.
-
-        Args:
-            provider: Provider name
-
-        Returns:
-            API key or None if not set
-        """
+        """Get API key for a provider."""
         return self.api_keys.get(provider)

     def is_provider_enabled(self, provider: str) -> bool:
-        """
-        Check if a provider is enabled.
-
-        Args:
-            provider: Provider name
-
-        Returns:
-            bool: True if provider is enabled
-        """
+        """Check if a provider is enabled."""
         return self.enabled_providers.get(provider, False)

     def get_rate_limit(self, provider: str) -> int:
-        """
-        Get rate limit for a provider.
-
-        Args:
-            provider: Provider name
-
-        Returns:
-            Rate limit in requests per minute
-        """
+        """Get rate limit for a provider."""
         return self.rate_limits.get(provider, 60)

-    def load_from_env(self):
-        """Load configuration from environment variables."""
-        if os.getenv('VIRUSTOTAL_API_KEY'):
-            self.set_api_key('virustotal', os.getenv('VIRUSTOTAL_API_KEY'))
-
-        if os.getenv('SHODAN_API_KEY'):
-            self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))
-
-        # Override default settings from environment
-        self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
-        self.flask_debug = os.getenv('FLASK_DEBUG', 'True').lower() == 'true'
-        self.default_timeout = 30
-        self.max_concurrent_requests = 5


 # Global configuration instance
 config = Config()
```
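The practical effect of the rewrite is that every default can now be overridden per deployment without code changes. A sketch of the behavior, assuming `config.py` is importable from the working directory:

```python
# Sketch: environment values win over the hard-coded defaults.
import os

os.environ['DEFAULT_TIMEOUT'] = '45'   # simulate a shell export or .env entry

from config import config             # Config() runs load_from_env() in __init__

print(config.default_timeout)                # -> 45 instead of the default 15
print(config.is_provider_enabled('shodan'))  # True only if SHODAN_API_KEY was set
```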
core/__init__.py

```diff
@@ -1,28 +1,25 @@
 """
 Core modules for DNSRecon passive reconnaissance tool.
 Contains graph management, scanning orchestration, and forensic logging.
-Phase 2: Enhanced with concurrent processing and real-time capabilities.
 """

-from .graph_manager import GraphManager, NodeType, RelationshipType
-from .scanner import Scanner, ScanStatus  # Remove 'scanner' global instance
+from .graph_manager import GraphManager, NodeType
+from .scanner import Scanner, ScanStatus
 from .logger import ForensicLogger, get_forensic_logger, new_session
-from .session_manager import session_manager  # Add session manager
-from .session_config import SessionConfig, create_session_config  # Add session config
+from .session_manager import session_manager
+from .session_config import SessionConfig, create_session_config

 __all__ = [
     'GraphManager',
     'NodeType',
-    'RelationshipType',
     'Scanner',
     'ScanStatus',
-    # 'scanner',  # Remove this - no more global scanner
     'ForensicLogger',
     'get_forensic_logger',
     'new_session',
-    'session_manager',  # Add this
-    'SessionConfig',  # Add this
-    'create_session_config'  # Add this
+    'session_manager',
+    'SessionConfig',
+    'create_session_config'
 ]

 __version__ = "1.0.0-phase2"
```
core/graph_manager.py

@@ -1,12 +1,13 @@
+# core/graph_manager.py

 """
 Graph data model for DNSRecon using NetworkX.
 Manages in-memory graph storage with confidence scoring and forensic metadata.
 """
+import re
-from datetime import datetime
-from typing import Dict, List, Any, Optional, Tuple
+from datetime import datetime, timezone
 from enum import Enum
-from datetime import timezone
+from typing import Dict, List, Any, Optional, Tuple

 import networkx as nx
@@ -16,38 +17,11 @@ class NodeType(Enum):
     DOMAIN = "domain"
     IP = "ip"
     ASN = "asn"
-    DNS_RECORD = "dns_record"
     LARGE_ENTITY = "large_entity"
+    CORRELATION_OBJECT = "correlation_object"
+
+    def __repr__(self):
+        return self.value

-class RelationshipType(Enum):
-    """Enumeration of supported relationship types with confidence scores."""
-    SAN_CERTIFICATE = ("san", 0.9)
-    A_RECORD = ("a_record", 0.8)
-    AAAA_RECORD = ("aaaa_record", 0.8)
-    CNAME_RECORD = ("cname", 0.8)
-    MX_RECORD = ("mx_record", 0.7)
-    NS_RECORD = ("ns_record", 0.7)
-    PTR_RECORD = ("ptr_record", 0.8)
-    SOA_RECORD = ("soa_record", 0.7)
-    TXT_RECORD = ("txt_record", 0.7)
-    SRV_RECORD = ("srv_record", 0.7)
-    CAA_RECORD = ("caa_record", 0.7)
-    DNSKEY_RECORD = ("dnskey_record", 0.7)
-    DS_RECORD = ("ds_record", 0.7)
-    RRSIG_RECORD = ("rrsig_record", 0.7)
-    SSHFP_RECORD = ("sshfp_record", 0.7)
-    TLSA_RECORD = ("tlsa_record", 0.7)
-    NAPTR_RECORD = ("naptr_record", 0.7)
-    SPF_RECORD = ("spf_record", 0.7)
-    DNS_RECORD = ("dns_record", 0.8)
-    PASSIVE_DNS = ("passive_dns", 0.6)
-    ASN_MEMBERSHIP = ("asn", 0.7)
-
-    def __init__(self, relationship_name: str, default_confidence: float):
-        self.relationship_name = relationship_name
-        self.default_confidence = default_confidence


 class GraphManager:
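The `__repr__` override added above makes enum members render as their raw string value, which keeps node types readable wherever they end up in labels or logs. A small self-contained check of that behavior:

```python
from enum import Enum

class NodeType(Enum):
    DOMAIN = "domain"
    CORRELATION_OBJECT = "correlation_object"

    def __repr__(self):
        return self.value

print(repr(NodeType.DOMAIN))           # domain, not <NodeType.DOMAIN: 'domain'>
print([NodeType.CORRELATION_OBJECT])   # [correlation_object] - containers use repr
```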
@@ -59,88 +33,384 @@ class GraphManager:
     def __init__(self):
         """Initialize empty directed graph."""
         self.graph = nx.DiGraph()
-        # self.lock = threading.Lock()
         self.creation_time = datetime.now(timezone.utc).isoformat()
         self.last_modified = self.creation_time
+        self.correlation_index = {}
+        # Compile regex for date filtering for efficiency
+        self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')

-    def add_node(self, node_id: str, node_type: NodeType,
-                 metadata: Optional[Dict[str, Any]] = None) -> bool:
-        """
-        Add a node to the graph.
-
-        Args:
-            node_id: Unique identifier for the node
-            node_type: Type of the node (Domain, IP, Certificate, ASN)
-            metadata: Additional metadata for the node
-
-        Returns:
-            bool: True if node was added, False if it already exists
-        """
-        if self.graph.has_node(node_id):
-            # Update metadata if node exists
-            existing_metadata = self.graph.nodes[node_id].get('metadata', {})
-            if metadata:
-                existing_metadata.update(metadata)
-                self.graph.nodes[node_id]['metadata'] = existing_metadata
-            return False
-
-        node_attributes = {
-            'type': node_type.value,
-            'added_timestamp': datetime.now(timezone.utc).isoformat(),
-            'metadata': metadata or {}
-        }
-
-        self.graph.add_node(node_id, **node_attributes)
-        self.last_modified = datetime.now(timezone.utc).isoformat()
-        return True
+    def __getstate__(self):
+        """Prepare GraphManager for pickling, excluding compiled regex."""
+        state = self.__dict__.copy()
+        # Compiled regex patterns are not always picklable
+        if 'date_pattern' in state:
+            del state['date_pattern']
+        return state
+
+    def __setstate__(self, state):
+        """Restore GraphManager state and recompile regex."""
+        self.__dict__.update(state)
+        self.date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}')
+
+    def _update_correlation_index(self, node_id: str, data: Any, path: List[str] = [], parent_attr: str = ""):
+        """Recursively traverse metadata and add hashable values to the index with better path tracking."""
+        if path is None:
+            path = []
+
+        if isinstance(data, dict):
+            for key, value in data.items():
+                self._update_correlation_index(node_id, value, path + [key], key)
+        elif isinstance(data, list):
+            for i, item in enumerate(data):
+                # Instead of just using [i], include the parent attribute context
+                list_path_component = f"[{i}]" if not parent_attr else f"{parent_attr}[{i}]"
+                self._update_correlation_index(node_id, item, path + [list_path_component], parent_attr)
+        else:
+            self._add_to_correlation_index(node_id, data, ".".join(path), parent_attr)
+
+    def _add_to_correlation_index(self, node_id: str, value: Any, path_str: str, parent_attr: str = ""):
+        """Add a hashable value to the correlation index, filtering out noise."""
+        if not isinstance(value, (str, int, float, bool)) or value is None:
+            return
+
+        # Ignore certain paths that contain noisy, non-unique identifiers
+        if any(keyword in path_str.lower() for keyword in ['count', 'total', 'timestamp', 'date']):
+            return
+
+        # Filter out common low-entropy values and date-like strings
+        if isinstance(value, str):
+            # FIXED: Prevent correlation on date/time strings.
+            if self.date_pattern.match(value):
+                return
+            if len(value) < 4 or value.lower() in ['true', 'false', 'unknown', 'none', 'crt.sh']:
+                return
+        elif isinstance(value, int) and (abs(value) < 1024 or abs(value) > 65535):
+            return  # Ignore small integers and common port numbers
+        elif isinstance(value, bool):
+            return  # Ignore boolean values
+
+        # Add the valuable correlation data to the index
+        if value not in self.correlation_index:
+            self.correlation_index[value] = {}
+        if node_id not in self.correlation_index[value]:
+            self.correlation_index[value][node_id] = []
+
+        # Store both the full path and the parent attribute for better edge labeling
+        correlation_entry = {
+            'path': path_str,
+            'parent_attr': parent_attr,
+            'meaningful_attr': self._extract_meaningful_attribute(path_str, parent_attr)
+        }
+
+        if correlation_entry not in self.correlation_index[value][node_id]:
+            self.correlation_index[value][node_id].append(correlation_entry)
+
+    def _extract_meaningful_attribute(self, path_str: str, parent_attr: str = "") -> str:
+        """Extract the most meaningful attribute name from a path string."""
+        if not path_str:
+            return "unknown"
+
+        path_parts = path_str.split('.')
+
+        # Look for the last non-array-index part
+        for part in reversed(path_parts):
+            # Skip array indices like [0], [1], etc.
+            if not (part.startswith('[') and part.endswith(']') and part[1:-1].isdigit()):
+                # Clean up compound names like "hostnames[0]" to just "hostnames"
+                clean_part = re.sub(r'\[\d+\]$', '', part)
+                if clean_part:
+                    return clean_part
+
+        # Fallback to parent attribute if available
+        if parent_attr:
+            return parent_attr
+
+        # Last resort - use the first meaningful part
+        for part in path_parts:
+            if not (part.startswith('[') and part.endswith(']') and part[1:-1].isdigit()):
+                clean_part = re.sub(r'\[\d+\]$', '', part)
+                if clean_part:
+                    return clean_part
+
+        return "correlation"
+
+    def _check_for_correlations(self, new_node_id: str, data: Any, path: List[str] = [], parent_attr: str = "") -> List[Dict]:
+        """Recursively traverse metadata to find correlations with existing data."""
+        if path is None:
+            path = []
+
+        all_correlations = []
+        if isinstance(data, dict):
+            for key, value in data.items():
+                if key == 'source':  # Avoid correlating on the provider name
+                    continue
+                all_correlations.extend(self._check_for_correlations(new_node_id, value, path + [key], key))
+        elif isinstance(data, list):
+            for i, item in enumerate(data):
+                list_path_component = f"[{i}]" if not parent_attr else f"{parent_attr}[{i}]"
+                all_correlations.extend(self._check_for_correlations(new_node_id, item, path + [list_path_component], parent_attr))
+        else:
+            value = data
+            if value in self.correlation_index:
+                existing_nodes_with_paths = self.correlation_index[value]
+                unique_nodes = set(existing_nodes_with_paths.keys())
+                unique_nodes.add(new_node_id)
+
+                if len(unique_nodes) < 2:
+                    return all_correlations  # Correlation must involve at least two distinct nodes
+
+                new_source = {
+                    'node_id': new_node_id,
+                    'path': ".".join(path),
+                    'parent_attr': parent_attr,
+                    'meaningful_attr': self._extract_meaningful_attribute(".".join(path), parent_attr)
+                }
+                all_sources = [new_source]
+
+                for node_id, path_entries in existing_nodes_with_paths.items():
+                    for entry in path_entries:
+                        if isinstance(entry, dict):
+                            all_sources.append({
+                                'node_id': node_id,
+                                'path': entry['path'],
+                                'parent_attr': entry.get('parent_attr', ''),
+                                'meaningful_attr': entry.get('meaningful_attr', self._extract_meaningful_attribute(entry['path'], entry.get('parent_attr', '')))
+                            })
+                        else:
+                            # Handle legacy string-only entries
+                            all_sources.append({
+                                'node_id': node_id,
+                                'path': str(entry),
+                                'parent_attr': '',
+                                'meaningful_attr': self._extract_meaningful_attribute(str(entry))
+                            })
+
+                all_correlations.append({
+                    'value': value,
+                    'sources': all_sources,
+                    'nodes': list(unique_nodes)
+                })
+        return all_correlations
+
+    def add_node(self, node_id: str, node_type: NodeType, attributes: Optional[Dict[str, Any]] = None,
+                 description: str = "", metadata: Optional[Dict[str, Any]] = None) -> bool:
+        """Add a node to the graph, update attributes, and process correlations."""
+        is_new_node = not self.graph.has_node(node_id)
+        if is_new_node:
+            self.graph.add_node(node_id, type=node_type.value,
+                                added_timestamp=datetime.now(timezone.utc).isoformat(),
+                                attributes=attributes or {},
+                                description=description,
+                                metadata=metadata or {})
+        else:
+            # Safely merge new attributes into existing attributes
+            if attributes:
+                existing_attributes = self.graph.nodes[node_id].get('attributes', {})
+                existing_attributes.update(attributes)
+                self.graph.nodes[node_id]['attributes'] = existing_attributes
+            if description:
+                self.graph.nodes[node_id]['description'] = description
+            if metadata:
+                existing_metadata = self.graph.nodes[node_id].get('metadata', {})
+                existing_metadata.update(metadata)
+                self.graph.nodes[node_id]['metadata'] = existing_metadata
+
+        if attributes and node_type != NodeType.CORRELATION_OBJECT:
+            correlations = self._check_for_correlations(node_id, attributes)
+            for corr in correlations:
+                value = corr['value']
+
+                # STEP 1: Substring check against all existing nodes
+                if self._correlation_value_matches_existing_node(value):
+                    # Skip creating correlation node - would be redundant
+                    continue
+
+                eligible_nodes = set(corr['nodes'])
+
+                if len(eligible_nodes) < 2:
+                    # Need at least 2 nodes to create a correlation
+                    continue
+
+                # STEP 3: Check for existing correlation node with same connection pattern
+                correlation_nodes_with_pattern = self._find_correlation_nodes_with_same_pattern(eligible_nodes)
+
+                if correlation_nodes_with_pattern:
+                    # STEP 4: Merge with existing correlation node
+                    target_correlation_node = correlation_nodes_with_pattern[0]
+                    self._merge_correlation_values(target_correlation_node, value, corr)
+                else:
+                    # STEP 5: Create new correlation node for eligible nodes only
+                    correlation_node_id = f"corr_{abs(hash(str(sorted(eligible_nodes))))}"
+                    self.add_node(correlation_node_id, NodeType.CORRELATION_OBJECT,
+                                  metadata={'values': [value], 'sources': corr['sources'],
+                                            'correlated_nodes': list(eligible_nodes)})
+
+                    # Create edges from eligible nodes to this correlation node with better labeling
+                    for c_node_id in eligible_nodes:
+                        if self.graph.has_node(c_node_id):
+                            # Find the best attribute name for this node
+                            meaningful_attr = self._find_best_attribute_name_for_node(c_node_id, corr['sources'])
+                            relationship_type = f"c_{meaningful_attr}"
+                            self.add_edge(c_node_id, correlation_node_id, relationship_type, confidence_score=0.9)
+
+            self._update_correlation_index(node_id, attributes)
+
+        self.last_modified = datetime.now(timezone.utc).isoformat()
+        return is_new_node
+
+    def _find_best_attribute_name_for_node(self, node_id: str, sources: List[Dict]) -> str:
+        """Find the best attribute name for a correlation edge by looking at the sources."""
+        node_sources = [s for s in sources if s['node_id'] == node_id]
+
+        if not node_sources:
+            return "correlation"
+
+        # Use the meaningful_attr if available
+        for source in node_sources:
+            meaningful_attr = source.get('meaningful_attr')
+            if meaningful_attr and meaningful_attr != "unknown":
+                return meaningful_attr
+
+        # Fallback to parent_attr
+        for source in node_sources:
+            parent_attr = source.get('parent_attr')
+            if parent_attr:
+                return parent_attr
+
+        # Last resort - extract from path
+        for source in node_sources:
+            path = source.get('path', '')
+            if path:
+                extracted = self._extract_meaningful_attribute(path)
+                if extracted != "unknown":
+                    return extracted
+
+        return "correlation"
+
+    def _has_direct_edge_bidirectional(self, node_a: str, node_b: str) -> bool:
+        """
+        Check if there's a direct edge between two nodes in either direction.
+        Returns True if node_a→node_b OR node_b→node_a exists.
+        """
+        return (self.graph.has_edge(node_a, node_b) or
+                self.graph.has_edge(node_b, node_a))
+
+    def _correlation_value_matches_existing_node(self, correlation_value: str) -> bool:
+        """
+        Check if correlation value contains any existing node ID as substring.
+        Returns True if match found (correlation node should NOT be created).
+        """
+        correlation_str = str(correlation_value).lower()
+
+        # Check against all existing nodes
+        for existing_node_id in self.graph.nodes():
+            if existing_node_id.lower() in correlation_str:
+                return True
+
+        return False
+
+    def _find_correlation_nodes_with_same_pattern(self, node_set: set) -> List[str]:
+        """
+        Find existing correlation nodes that have the exact same pattern of connected nodes.
+        Returns list of correlation node IDs with matching patterns.
+        """
+        correlation_nodes = self.get_nodes_by_type(NodeType.CORRELATION_OBJECT)
+        matching_nodes = []
+
+        for corr_node_id in correlation_nodes:
+            # Get all nodes connected to this correlation node
+            connected_nodes = set()
+
+            # Add all predecessors (nodes pointing TO the correlation node)
+            connected_nodes.update(self.graph.predecessors(corr_node_id))
+
+            # Add all successors (nodes pointed TO by the correlation node)
+            connected_nodes.update(self.graph.successors(corr_node_id))
+
+            # Check if the pattern matches exactly
+            if connected_nodes == node_set:
+                matching_nodes.append(corr_node_id)
+
+        return matching_nodes
+
+    def _merge_correlation_values(self, target_node_id: str, new_value: Any, corr_data: Dict) -> None:
+        """
+        Merge a new correlation value into an existing correlation node.
+        Uses same logic as large entity merging.
+        """
+        if not self.graph.has_node(target_node_id):
+            return
+
+        target_metadata = self.graph.nodes[target_node_id]['metadata']
+
+        # Get existing values (ensure it's a list)
+        existing_values = target_metadata.get('values', [])
+        if not isinstance(existing_values, list):
+            existing_values = [existing_values]
+
+        # Add new value if not already present
+        if new_value not in existing_values:
+            existing_values.append(new_value)
+
+        # Merge sources
+        existing_sources = target_metadata.get('sources', [])
+        new_sources = corr_data.get('sources', [])
+
+        # Create set of unique sources based on (node_id, path) tuples
+        source_set = set()
+        for source in existing_sources + new_sources:
+            source_tuple = (source['node_id'], source.get('path', ''))
+            source_set.add(source_tuple)
+
+        # Convert back to list of dictionaries
+        merged_sources = [{'node_id': nid, 'path': path} for nid, path in source_set]
+
+        # Update metadata
+        target_metadata.update({
+            'values': existing_values,
+            'sources': merged_sources,
+            'correlated_nodes': list(set(target_metadata.get('correlated_nodes', []) + corr_data.get('nodes', []))),
+            'merge_count': len(existing_values),
+            'last_merge_timestamp': datetime.now(timezone.utc).isoformat()
+        })
+
+        # Update description to reflect merged nature
+        value_count = len(existing_values)
+        node_count = len(target_metadata['correlated_nodes'])
+        self.graph.nodes[target_node_id]['description'] = (
+            f"Correlation container with {value_count} merged values "
+            f"across {node_count} nodes"
+        )

-    def add_edge(self, source_id: str, target_id: str,
-                 relationship_type: RelationshipType,
-                 confidence_score: Optional[float] = None,
-                 source_provider: str = "unknown",
-                 raw_data: Optional[Dict[str, Any]] = None) -> bool:
-        """
-        Add an edge between two nodes.
-
-        Args:
-            source_id: Source node identifier
-            target_id: Target node identifier
-            relationship_type: Type of relationship
-            confidence_score: Custom confidence score (overrides default)
-            source_provider: Provider that discovered this relationship
-            raw_data: Raw data from provider response
-
-        Returns:
-            bool: True if edge was added, False if it already exists
-        """
-        if not self.graph.has_node(source_id) or not self.graph.has_node(target_id):
-            # If the target node is a subdomain, it should be added.
-            # The scanner will handle this logic.
-            pass
-
-        # Check if edge already exists
-        if self.graph.has_edge(source_id, target_id):
-            # Update confidence score if new score is higher
-            existing_confidence = self.graph.edges[source_id, target_id]['confidence_score']
-            new_confidence = confidence_score or relationship_type.default_confidence
-
-            if new_confidence > existing_confidence:
-                self.graph.edges[source_id, target_id]['confidence_score'] = new_confidence
-                self.graph.edges[source_id, target_id]['updated_timestamp'] = datetime.now(timezone.utc).isoformat()
-                self.graph.edges[source_id, target_id]['updated_by'] = source_provider
-            return False
-
-        edge_attributes = {
-            'relationship_type': relationship_type.relationship_name,
-            'confidence_score': confidence_score or relationship_type.default_confidence,
-            'source_provider': source_provider,
-            'discovery_timestamp': datetime.now(timezone.utc).isoformat(),
-            'raw_data': raw_data or {}
-        }
-
-        self.graph.add_edge(source_id, target_id, **edge_attributes)
+    def add_edge(self, source_id: str, target_id: str, relationship_type: str,
+                 confidence_score: float = 0.5, source_provider: str = "unknown",
+                 raw_data: Optional[Dict[str, Any]] = None) -> bool:
+        """Add or update an edge between two nodes, ensuring nodes exist."""
+        if not self.graph.has_node(source_id) or not self.graph.has_node(target_id):
+            return False
+
+        new_confidence = confidence_score
+
+        if relationship_type.startswith("c_"):
+            edge_label = relationship_type
+        else:
+            edge_label = f"{source_provider}_{relationship_type}"
+
+        if self.graph.has_edge(source_id, target_id):
+            # If edge exists, update confidence if the new score is higher.
+            if new_confidence > self.graph.edges[source_id, target_id].get('confidence_score', 0):
+                self.graph.edges[source_id, target_id]['confidence_score'] = new_confidence
+                self.graph.edges[source_id, target_id]['updated_timestamp'] = datetime.now(timezone.utc).isoformat()
+                self.graph.edges[source_id, target_id]['updated_by'] = source_provider
+            return False
+
+        # Add a new edge with all attributes.
+        self.graph.add_edge(source_id, target_id,
+                            relationship_type=edge_label,
+                            confidence_score=new_confidence,
+                            source_provider=source_provider,
+                            discovery_timestamp=datetime.now(timezone.utc).isoformat(),
+                            raw_data=raw_data or {})
         self.last_modified = datetime.now(timezone.utc).isoformat()
         return True
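The correlation machinery in this hunk reduces to an inverted index from metadata values to the nodes carrying them; a correlation object only becomes worthwhile once a value appears on at least two distinct nodes. A self-contained toy version of that indexing step (not the class itself; names here are illustrative):

```python
from collections import defaultdict

# value -> set of node IDs that carry it
correlation_index = defaultdict(set)

def index_node(node_id, attributes):
    """Index string attributes; report values now shared by two or more nodes."""
    hits = []
    for value in attributes.values():
        if isinstance(value, str) and len(value) >= 4:  # crude noise filter
            correlation_index[value].add(node_id)
            if len(correlation_index[value]) >= 2:
                hits.append((value, sorted(correlation_index[value])))
    return hits

index_node("example.com", {"registrar": "Example Registrar Inc"})
print(index_node("example.org", {"registrar": "Example Registrar Inc"}))
# [('Example Registrar Inc', ['example.com', 'example.org'])]
```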
@@ -153,270 +423,105 @@ class GraphManager:
         return self.graph.number_of_edges()

     def get_nodes_by_type(self, node_type: NodeType) -> List[str]:
-        """
-        Get all nodes of a specific type.
-
-        Args:
-            node_type: Type of nodes to retrieve
-
-        Returns:
-            List of node identifiers
-        """
-        return [
-            node_id for node_id, attributes in self.graph.nodes(data=True)
-            if attributes.get('type') == node_type.value
-        ]
+        """Get all nodes of a specific type."""
+        return [n for n, d in self.graph.nodes(data=True) if d.get('type') == node_type.value]

     def get_neighbors(self, node_id: str) -> List[str]:
-        """
-        Get all neighboring nodes (both incoming and outgoing).
-
-        Args:
-            node_id: Node identifier
-
-        Returns:
-            List of neighboring node identifiers
-        """
+        """Get all unique neighbors (predecessors and successors) for a node."""
         if not self.graph.has_node(node_id):
             return []
-        predecessors = list(self.graph.predecessors(node_id))
-        successors = list(self.graph.successors(node_id))
-        return list(set(predecessors + successors))
+        return list(set(self.graph.predecessors(node_id)) | set(self.graph.successors(node_id)))

     def get_high_confidence_edges(self, min_confidence: float = 0.8) -> List[Tuple[str, str, Dict]]:
-        """
-        Get edges with confidence score above threshold.
-
-        Args:
-            min_confidence: Minimum confidence threshold
-
-        Returns:
-            List of tuples (source, target, attributes)
-        """
-        return [
-            (source, target, attributes)
-            for source, target, attributes in self.graph.edges(data=True)
-            if attributes.get('confidence_score', 0) >= min_confidence
-        ]
+        """Get edges with confidence score above a given threshold."""
+        return [(u, v, d) for u, v, d in self.graph.edges(data=True)
+                if d.get('confidence_score', 0) >= min_confidence]

     def get_graph_data(self) -> Dict[str, Any]:
-        """
-        Export graph data for visualization.
-        Uses comprehensive metadata collected during scanning.
-        """
+        """Export graph data formatted for frontend visualization."""
         nodes = []
-        edges = []
-
-        # Create nodes with the comprehensive metadata already collected
-        for node_id, attributes in self.graph.nodes(data=True):
-            node_data = {
-                'id': node_id,
-                'label': node_id,
-                'type': attributes.get('type', 'unknown'),
-                'metadata': attributes.get('metadata', {}),
-                'added_timestamp': attributes.get('added_timestamp')
-            }
-
-            # Handle certificate node labeling
-            if node_id.startswith('cert_'):
-                # For certificate nodes, create a more informative label
-                cert_metadata = node_data['metadata']
-                issuer = cert_metadata.get('issuer_name', 'Unknown')
-                valid_status = "✓" if cert_metadata.get('is_currently_valid') else "✗"
-                node_data['label'] = f"Certificate {valid_status}\n{issuer[:30]}..."
-
-            # Color coding by type
-            type_colors = {
-                'domain': {
-                    'background': '#00ff41',
-                    'border': '#00aa2e',
-                    'highlight': {'background': '#44ff75', 'border': '#00ff41'},
-                    'hover': {'background': '#22ff63', 'border': '#00cc35'}
-                },
-                'ip': {
-                    'background': '#ff9900',
-                    'border': '#cc7700',
-                    'highlight': {'background': '#ffbb44', 'border': '#ff9900'},
-                    'hover': {'background': '#ffaa22', 'border': '#dd8800'}
-                },
-                'asn': {
-                    'background': '#00aaff',
-                    'border': '#0088cc',
-                    'highlight': {'background': '#44ccff', 'border': '#00aaff'},
-                    'hover': {'background': '#22bbff', 'border': '#0099dd'}
-                },
-                'dns_record': {
-                    'background': '#9d4edd',
-                    'border': '#7b2cbf',
-                    'highlight': {'background': '#c77dff', 'border': '#9d4edd'},
-                    'hover': {'background': '#b392f0', 'border': '#8b5cf6'}
-                },
-                'large_entity': {
-                    'background': '#ff6b6b',
-                    'border': '#cc3a3a',
-                    'highlight': {'background': '#ff8c8c', 'border': '#ff6b6b'},
-                    'hover': {'background': '#ff7a7a', 'border': '#dd4a4a'}
-                }
-            }
-
-            node_color_config = type_colors.get(attributes.get('type', 'unknown'), type_colors['domain'])
-
-            node_data['color'] = node_color_config
-
-            # Add certificate validity indicator if available
-            metadata = node_data['metadata']
-            if 'certificate_data' in metadata and 'has_valid_cert' in metadata['certificate_data']:
-                node_data['has_valid_cert'] = metadata['certificate_data']['has_valid_cert']
-
+        for node_id, attrs in self.graph.nodes(data=True):
+            node_data = {'id': node_id, 'label': node_id, 'type': attrs.get('type', 'unknown'),
+                         'attributes': attrs.get('attributes', {}),
+                         'description': attrs.get('description', ''),
+                         'metadata': attrs.get('metadata', {}),
+                         'added_timestamp': attrs.get('added_timestamp')}
+            # Customize node appearance based on type and attributes
+            node_type = node_data['type']
+            attributes = node_data['attributes']
+            if node_type == 'domain' and attributes.get('certificates', {}).get('has_valid_cert') is False:
+                node_data['color'] = {'background': '#c7c7c7', 'border': '#999'}  # Gray for invalid cert
+
+            # Add incoming and outgoing edges to node data
+            if self.graph.has_node(node_id):
+                node_data['incoming_edges'] = [{'from': u, 'data': d} for u, _, d in self.graph.in_edges(node_id, data=True)]
+                node_data['outgoing_edges'] = [{'to': v, 'data': d} for _, v, d in self.graph.out_edges(node_id, data=True)]
+
             nodes.append(node_data)

-        # Create edges (unchanged from original)
-        for source, target, attributes in self.graph.edges(data=True):
-            edge_data = {
-                'from': source,
-                'to': target,
-                'label': attributes.get('relationship_type', ''),
-                'confidence_score': attributes.get('confidence_score', 0),
-                'source_provider': attributes.get('source_provider', ''),
-                'discovery_timestamp': attributes.get('discovery_timestamp')
-            }
-
-            # Enhanced edge styling based on confidence
-            confidence = attributes.get('confidence_score', 0)
-            if confidence >= 0.8:
-                edge_data['color'] = {
-                    'color': '#00ff41',
-                    'highlight': '#44ff75',
-                    'hover': '#22ff63',
-                    'inherit': False
-                }
-                edge_data['width'] = 4
-            elif confidence >= 0.6:
-                edge_data['color'] = {
-                    'color': '#ff9900',
-                    'highlight': '#ffbb44',
-                    'hover': '#ffaa22',
-                    'inherit': False
-                }
-                edge_data['width'] = 3
-            else:
-                edge_data['color'] = {
-                    'color': '#666666',
-                    'highlight': '#888888',
-                    'hover': '#777777',
-                    'inherit': False
-                }
-                edge_data['width'] = 2
-
-            # Add dashed line for low confidence
-            if confidence < 0.6:
-                edge_data['dashes'] = [5, 5]
-
-            edges.append(edge_data)
-
+        edges = []
+        for source, target, attrs in self.graph.edges(data=True):
+            edges.append({'from': source, 'to': target,
+                          'label': attrs.get('relationship_type', ''),
+                          'confidence_score': attrs.get('confidence_score', 0),
+                          'source_provider': attrs.get('source_provider', ''),
+                          'discovery_timestamp': attrs.get('discovery_timestamp')})
+
         return {
-            'nodes': nodes,
-            'edges': edges,
-            'statistics': {
-                'node_count': len(nodes),
-                'edge_count': len(edges),
-                'creation_time': self.creation_time,
-                'last_modified': self.last_modified
-            }
+            'nodes': nodes, 'edges': edges,
+            'statistics': self.get_statistics()['basic_metrics']
         }

     def export_json(self) -> Dict[str, Any]:
-        """
-        Export complete graph data as JSON for download.
-
-        Returns:
-            Dictionary containing complete graph data with metadata
-        """
-        # Get basic graph data
-        graph_data = self.get_graph_data()
-
-        # Add comprehensive metadata
-        export_data = {
+        """Export complete graph data as a JSON-serializable dictionary."""
+        graph_data = nx.node_link_data(self.graph)  # Use NetworkX's built-in robust serializer
+        return {
             'export_metadata': {
                 'export_timestamp': datetime.now(timezone.utc).isoformat(),
                 'graph_creation_time': self.creation_time,
                 'last_modified': self.last_modified,
-                'total_nodes': self.graph.number_of_nodes(),
-                'total_edges': self.graph.number_of_edges(),
-                'graph_format': 'dnsrecon_v1'
+                'total_nodes': self.get_node_count(),
+                'total_edges': self.get_edge_count(),
+                'graph_format': 'dnsrecon_v1_nodeling'
             },
-            'nodes': graph_data['nodes'],
-            'edges': graph_data['edges'],
-            'node_types': [node_type.value for node_type in NodeType],
-            'relationship_types': [
-                {
-                    'name': rel_type.relationship_name,
-                    'default_confidence': rel_type.default_confidence
-                }
-                for rel_type in RelationshipType
-            ],
-            'confidence_distribution': self._get_confidence_distribution()
+            'graph': graph_data,
+            'statistics': self.get_statistics()
         }

-        return export_data
-
     def _get_confidence_distribution(self) -> Dict[str, int]:
-        """Get distribution of confidence scores."""
+        """Get distribution of edge confidence scores."""
         distribution = {'high': 0, 'medium': 0, 'low': 0}
-        for _, _, attributes in self.graph.edges(data=True):
-            confidence = attributes.get('confidence_score', 0)
+        for _, _, data in self.graph.edges(data=True):
+            confidence = data.get('confidence_score', 0)
             if confidence >= 0.8:
                 distribution['high'] += 1
             elif confidence >= 0.6:
                 distribution['medium'] += 1
             else:
                 distribution['low'] += 1

         return distribution

     def get_statistics(self) -> Dict[str, Any]:
-        """
-        Get comprehensive graph statistics.
-
-        Returns:
-            Dictionary containing various graph metrics
-        """
-        stats = {
-            'basic_metrics': {
-                'total_nodes': self.graph.number_of_nodes(),
-                'total_edges': self.graph.number_of_edges(),
-                'creation_time': self.creation_time,
-                'last_modified': self.last_modified
-            },
-            'node_type_distribution': {},
-            'relationship_type_distribution': {},
-            'confidence_distribution': self._get_confidence_distribution(),
-            'provider_distribution': {}
-        }
-
-        # Node type distribution
+        """Get comprehensive statistics about the graph."""
+        stats = {'basic_metrics': {'total_nodes': self.get_node_count(),
+                                   'total_edges': self.get_edge_count(),
+                                   'creation_time': self.creation_time,
+                                   'last_modified': self.last_modified},
+                 'node_type_distribution': {}, 'relationship_type_distribution': {},
+                 'confidence_distribution': self._get_confidence_distribution(),
+                 'provider_distribution': {}}
+        # Calculate distributions
         for node_type in NodeType:
-            count = len(self.get_nodes_by_type(node_type))
-            stats['node_type_distribution'][node_type.value] = count
-
-        # Relationship type distribution
-        for _, _, attributes in self.graph.edges(data=True):
-            rel_type = attributes.get('relationship_type', 'unknown')
-            stats['relationship_type_distribution'][rel_type] = \
-                stats['relationship_type_distribution'].get(rel_type, 0) + 1
-
-        # Provider distribution
-        for _, _, attributes in self.graph.edges(data=True):
-            provider = attributes.get('source_provider', 'unknown')
-            stats['provider_distribution'][provider] = \
-                stats['provider_distribution'].get(provider, 0) + 1
-
+            stats['node_type_distribution'][node_type.value] = self.get_nodes_by_type(node_type).__len__()
+        for _, _, data in self.graph.edges(data=True):
+            rel_type = data.get('relationship_type', 'unknown')
+            stats['relationship_type_distribution'][rel_type] = stats['relationship_type_distribution'].get(rel_type, 0) + 1
+            provider = data.get('source_provider', 'unknown')
+            stats['provider_distribution'][provider] = stats['provider_distribution'].get(provider, 0) + 1

         return stats

     def clear(self) -> None:
-        """Clear all nodes and edges from the graph."""
+        """Clear all nodes, edges, and indices from the graph."""
         self.graph.clear()
+        self.correlation_index.clear()
         self.creation_time = datetime.now(timezone.utc).isoformat()
         self.last_modified = self.creation_time
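The statistics helpers bucket edges by the same thresholds the old frontend styling used: 0.8 and above counts as high confidence, 0.6 to 0.8 as medium, anything lower as low. A minimal standalone illustration against a throwaway NetworkX graph (the edge values are made up):

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge("example.com", "93.184.216.34", confidence_score=0.8)
g.add_edge("example.com", "corr_123", confidence_score=0.9)
g.add_edge("example.com", "ns1.example.com", confidence_score=0.5)

distribution = {'high': 0, 'medium': 0, 'low': 0}
for _, _, data in g.edges(data=True):
    confidence = data.get('confidence_score', 0)
    if confidence >= 0.8:
        distribution['high'] += 1
    elif confidence >= 0.6:
        distribution['medium'] += 1
    else:
        distribution['low'] += 1

print(distribution)  # {'high': 2, 'medium': 0, 'low': 1}
```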
core/logger.py

@@ -1,7 +1,4 @@
-"""
-Forensic logging system for DNSRecon tool.
-Provides structured audit trail for all reconnaissance activities.
-"""
+# dnsrecon/core/logger.py

 import logging
 import threading
@@ -45,7 +42,7 @@ class ForensicLogger:
     Maintains detailed audit trail of all reconnaissance activities.
     """

-    def __init__(self, session_id: str = None):
+    def __init__(self, session_id: str = ""):
         """
         Initialize forensic logger.
@@ -82,7 +79,29 @@ class ForensicLogger:
         console_handler = logging.StreamHandler()
         console_handler.setFormatter(formatter)
         self.logger.addHandler(console_handler)

+    def __getstate__(self):
+        """Prepare ForensicLogger for pickling by excluding unpicklable objects."""
+        state = self.__dict__.copy()
+        # Remove the unpickleable 'logger' attribute
+        if 'logger' in state:
+            del state['logger']
+        return state
+
+    def __setstate__(self, state):
+        """Restore ForensicLogger after unpickling by reconstructing logger."""
+        self.__dict__.update(state)
+        # Re-initialize the 'logger' attribute
+        self.logger = logging.getLogger(f'dnsrecon.{self.session_id}')
+        self.logger.setLevel(logging.INFO)
+        formatter = logging.Formatter(
+            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
+        )
+        if not self.logger.handlers:
+            console_handler = logging.StreamHandler()
+            console_handler.setFormatter(formatter)
+            self.logger.addHandler(console_handler)
+
     def _generate_session_id(self) -> str:
         """Generate unique session identifier."""
         return f"dnsrecon_{datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')}"
@@ -184,8 +203,6 @@ class ForensicLogger:
         self.session_metadata['target_domains'] = list(self.session_metadata['target_domains'])

         self.logger.info(f"Scan Complete - Session: {self.session_id}")
-        self.logger.info(f"Total API Requests: {self.session_metadata['total_requests']}")
-        self.logger.info(f"Total Relationships: {self.session_metadata['total_relationships']}")

     def export_audit_trail(self) -> Dict[str, Any]:
         """
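The `__getstate__`/`__setstate__` pair added to ForensicLogger is the standard recipe for objects carrying logging machinery: drop the attribute before pickling and rebuild it on load rather than relying on handlers surviving the round trip. A compact, self-contained demonstration of the pattern; the class below is a stand-in, not the real ForensicLogger:

```python
import logging
import pickle

class Audited:
    """Stand-in for an object that carries a logger, as ForensicLogger does."""

    def __init__(self, name: str):
        self.name = name
        self.logger = logging.getLogger(name)

    def __getstate__(self):
        # Exclude the logger rather than relying on it (and any attached
        # handlers holding locks or streams) pickling cleanly.
        state = self.__dict__.copy()
        state.pop('logger', None)
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # Rebuild the logger after unpickling.
        self.logger = logging.getLogger(self.name)

restored = pickle.loads(pickle.dumps(Audited("dnsrecon.demo")))
print(restored.logger.name)  # dnsrecon.demo
```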
core/scanner.py
1336 changed lines. File diff suppressed because it is too large.
core/session_config.py

@@ -3,11 +3,9 @@ Per-session configuration management for DNSRecon.
 Provides isolated configuration instances for each user session.
 """

-import os
-from typing import Dict, Optional
+from config import Config

-class SessionConfig:
+class SessionConfig(Config):
     """
     Session-specific configuration that inherits from global config
     but maintains isolated API keys and provider settings.
@@ -15,112 +13,8 @@ class SessionConfig:

     def __init__(self):
         """Initialize session config with global defaults."""
-        # Copy all attributes from global config
-        self.api_keys: Dict[str, Optional[str]] = {
-            'shodan': None,
-            'virustotal': None
-        }
-
-        # Default settings (copied from global config)
-        self.default_recursion_depth = 2
-        self.default_timeout = 30
-        self.max_concurrent_requests = 5
-        self.large_entity_threshold = 100
-
-        # Rate limiting settings (per session)
-        self.rate_limits = {
-            'crtsh': 60,
-            'virustotal': 4,
-            'shodan': 60,
-            'dns': 100
-        }
-
-        # Provider settings (per session)
-        self.enabled_providers = {
-            'crtsh': True,
-            'dns': True,
-            'virustotal': False,
-            'shodan': False
-        }
-
-        # Logging configuration
-        self.log_level = 'INFO'
-        self.log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
-
-        # Flask configuration (shared)
-        self.flask_host = '127.0.0.1'
-        self.flask_port = 5000
-        self.flask_debug = True
-
-    def set_api_key(self, provider: str, api_key: str) -> bool:
-        """
-        Set API key for a provider in this session.
-
-        Args:
-            provider: Provider name (shodan, virustotal)
-            api_key: API key string
-
-        Returns:
-            bool: True if key was set successfully
-        """
-        if provider in self.api_keys:
-            self.api_keys[provider] = api_key
-            self.enabled_providers[provider] = True if api_key else False
-            return True
-        return False
-
-    def get_api_key(self, provider: str) -> Optional[str]:
-        """
-        Get API key for a provider in this session.
-
-        Args:
-            provider: Provider name
-
-        Returns:
-            API key or None if not set
-        """
-        return self.api_keys.get(provider)
-
-    def is_provider_enabled(self, provider: str) -> bool:
-        """
-        Check if a provider is enabled in this session.
-
-        Args:
-            provider: Provider name
-
-        Returns:
-            bool: True if provider is enabled
-        """
-        return self.enabled_providers.get(provider, False)
-
-    def get_rate_limit(self, provider: str) -> int:
-        """
-        Get rate limit for a provider in this session.
-
-        Args:
-            provider: Provider name
-
-        Returns:
-            Rate limit in requests per minute
-        """
-        return self.rate_limits.get(provider, 60)
-
-    def load_from_env(self):
-        """Load configuration from environment variables (only if not already set)."""
-        if os.getenv('VIRUSTOTAL_API_KEY') and not self.api_keys['virustotal']:
-            self.set_api_key('virustotal', os.getenv('VIRUSTOTAL_API_KEY'))
-
-        if os.getenv('SHODAN_API_KEY') and not self.api_keys['shodan']:
-            self.set_api_key('shodan', os.getenv('SHODAN_API_KEY'))
-
-        # Override default settings from environment
-        self.default_recursion_depth = int(os.getenv('DEFAULT_RECURSION_DEPTH', '2'))
-        self.default_timeout = 30
-        self.max_concurrent_requests = 5
+        super().__init__()

-def create_session_config() -> SessionConfig:
+def create_session_config() -> 'SessionConfig':
     """Create a new session configuration instance."""
-    session_config = SessionConfig()
-    session_config.load_from_env()
-    return session_config
+    return SessionConfig()
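With `SessionConfig` now subclassing `Config` and `create_session_config()` returning a fresh instance, every session gets isolated settings for free. A toy sketch of that contract; the minimal `Config` below is hypothetical and stands in for the real one:

```python
class Config:
    """Hypothetical minimal stand-in for the global Config class."""

    def __init__(self):
        self.api_keys = {'shodan': None}

class SessionConfig(Config):
    def __init__(self):
        super().__init__()  # inherit all defaults from Config

def create_session_config() -> 'SessionConfig':
    return SessionConfig()

a, b = create_session_config(), create_session_config()
a.api_keys['shodan'] = 'key-for-session-a'
print(b.api_keys['shodan'])  # None - sessions do not share key state
```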
core/session_manager.py

@@ -1,281 +1,391 @@
-"""
-Session manager for DNSRecon multi-user support.
-Manages individual scanner instances per user session with automatic cleanup.
-"""
+# dnsrecon/core/session_manager.py

 import threading
 import time
 import uuid
-from typing import Dict, Optional, Any
-from datetime import datetime, timezone
+import redis
+import pickle
+from typing import Dict, Optional, Any, List

 from core.scanner import Scanner
+from config import config

+# WARNING: Using pickle can be a security risk if the data source is not trusted.
+# In this case, we are only serializing/deserializing our own trusted Scanner objects,
+# which is generally safe. Do not unpickle data from untrusted sources.

 class SessionManager:
     """
-    Manages multiple scanner instances for concurrent user sessions.
-    Provides session isolation and automatic cleanup of inactive sessions.
+    Manages multiple scanner instances for concurrent user sessions using Redis.
     """

-    def __init__(self, session_timeout_minutes: int = 60):
+    def __init__(self, session_timeout_minutes: int = 0):
         """
-        Initialize session manager.
-
-        Args:
-            session_timeout_minutes: Minutes of inactivity before session cleanup
+        Initialize session manager with a Redis backend.
         """
-        self.sessions: Dict[str, Dict[str, Any]] = {}
+        if session_timeout_minutes is None:
+            session_timeout_minutes = config.session_timeout_minutes
+
+        self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
         self.session_timeout = session_timeout_minutes * 60  # Convert to seconds
-        self.lock = threading.Lock()
+        self.lock = threading.Lock()  # Lock for local operations, Redis handles atomic ops

         # Start cleanup thread
         self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
         self.cleanup_thread.start()

-        print(f"SessionManager initialized with {session_timeout_minutes}min timeout")
+        print(f"SessionManager initialized with Redis backend and {session_timeout_minutes}min timeout")

+    def __getstate__(self):
+        """Prepare SessionManager for pickling."""
+        state = self.__dict__.copy()
+        # Exclude unpickleable attributes - Redis client and threading objects
+        unpicklable_attrs = ['lock', 'cleanup_thread', 'redis_client']
+        for attr in unpicklable_attrs:
+            if attr in state:
+                del state[attr]
+        return state
+
+    def __setstate__(self, state):
+        """Restore SessionManager after unpickling."""
+        self.__dict__.update(state)
+        # Re-initialize unpickleable attributes
+        import redis
+        self.redis_client = redis.StrictRedis(db=0, decode_responses=False)
+        self.lock = threading.Lock()
+        self.cleanup_thread = threading.Thread(target=self._cleanup_loop, daemon=True)
+        self.cleanup_thread.start()
+
+    def _get_session_key(self, session_id: str) -> str:
+        """Generates the Redis key for a session."""
+        return f"dnsrecon:session:{session_id}"
+
+    def _get_stop_signal_key(self, session_id: str) -> str:
+        """Generates the Redis key for a session's stop signal."""
+        return f"dnsrecon:stop:{session_id}"
+
     def create_session(self) -> str:
         """
-        Create a new user session with dedicated scanner instance and configuration.
-        Enhanced with better debugging and race condition protection.
-
-        Returns:
-            Unique session ID
+        Create a new user session and store it in Redis.
         """
         session_id = str(uuid.uuid4())
-        print(f"=== CREATING SESSION {session_id} ===")
+        print(f"=== CREATING SESSION {session_id} IN REDIS ===")

         try:
-            # Create session-specific configuration
             from core.session_config import create_session_config
             session_config = create_session_config()
-
-            print(f"Created session config for {session_id}")
-
-            # Create scanner with session config
-            from core.scanner import Scanner
             scanner_instance = Scanner(session_config=session_config)

-            print(f"Created scanner instance {id(scanner_instance)} for session {session_id}")
-            print(f"Initial scanner status: {scanner_instance.status}")
+            # Set the session ID on the scanner for cross-process stop signal management
+            scanner_instance.session_id = session_id

-            with self.lock:
-                self.sessions[session_id] = {
-                    'scanner': scanner_instance,
-                    'config': session_config,
-                    'created_at': time.time(),
-                    'last_activity': time.time(),
-                    'user_agent': '',
-                    'status': 'active'
-                }
-
-            print(f"Session {session_id} stored in session manager")
-            print(f"Total active sessions: {len([s for s in self.sessions.values() if s['status'] == 'active'])}")
-            print(f"=== SESSION {session_id} CREATED SUCCESSFULLY ===")
+            session_data = {
+                'scanner': scanner_instance,
+                'config': session_config,
+                'created_at': time.time(),
+                'last_activity': time.time(),
+                'status': 'active'
+            }
+
+            # Serialize the entire session data dictionary using pickle
+            serialized_data = pickle.dumps(session_data)
+
+            # Store in Redis
+            session_key = self._get_session_key(session_id)
+            self.redis_client.setex(session_key, self.session_timeout, serialized_data)
+
+            # Initialize stop signal as False
+            stop_key = self._get_stop_signal_key(session_id)
+            self.redis_client.setex(stop_key, self.session_timeout, b'0')
+
+            print(f"Session {session_id} stored in Redis with stop signal initialized")
             return session_id

         except Exception as e:
             print(f"ERROR: Failed to create session {session_id}: {e}")
             raise

-    def get_session(self, session_id: str) -> Optional[object]:
+    def set_stop_signal(self, session_id: str) -> bool:
         """
-        Get scanner instance for a session with enhanced debugging.
+        Set the stop signal for a session (cross-process safe).

         Args:
             session_id: Session identifier

         Returns:
-            Scanner instance or None if session doesn't exist
+            bool: True if signal was set successfully
+        """
+        try:
+            stop_key = self._get_stop_signal_key(session_id)
+            # Set stop signal to '1' with the same TTL as the session
+            self.redis_client.setex(stop_key, self.session_timeout, b'1')
+            print(f"Stop signal set for session {session_id}")
+            return True
+        except Exception as e:
+            print(f"ERROR: Failed to set stop signal for session {session_id}: {e}")
+            return False
+
+    def is_stop_requested(self, session_id: str) -> bool:
+        """
+        Check if stop is requested for a session (cross-process safe).
+
+        Args:
+            session_id: Session identifier
+
+        Returns:
+            bool: True if stop is requested
+        """
+        try:
+            stop_key = self._get_stop_signal_key(session_id)
+            value = self.redis_client.get(stop_key)
+            return value == b'1' if value is not None else False
+        except Exception as e:
+            print(f"ERROR: Failed to check stop signal for session {session_id}: {e}")
+            return False
+
+    def clear_stop_signal(self, session_id: str) -> bool:
+        """
+        Clear the stop signal for a session.
+
+        Args:
+            session_id: Session identifier
+
+        Returns:
+            bool: True if signal was cleared successfully
+        """
+        try:
+            stop_key = self._get_stop_signal_key(session_id)
+            self.redis_client.setex(stop_key, self.session_timeout, b'0')
+            print(f"Stop signal cleared for session {session_id}")
+            return True
+        except Exception as e:
+            print(f"ERROR: Failed to clear stop signal for session {session_id}: {e}")
+            return False
|
|
||||||
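Because the stop flag lives in Redis rather than in an in-process threading.Event, a scan running in a different process can observe it. A worker loop would poll it roughly like this (sketch; `process_target` is a hypothetical unit of work, `session_manager` is the global instance defined at the bottom of this module):

    def run_scan(session_id, targets):
        for target in targets:
            # Any process may have flipped the flag via set_stop_signal()
            if session_manager.is_stop_requested(session_id):
                print(f"Scan {session_id} stopped by user")
                break
            process_target(target)  # hypothetical unit of work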
@@ ... @@ class SessionManager:
+    def _get_session_data(self, session_id: str) -> Optional[Dict[str, Any]]:
+        """Retrieves and deserializes session data from Redis."""
+        try:
+            session_key = self._get_session_key(session_id)
+            serialized_data = self.redis_client.get(session_key)
+            if serialized_data:
+                session_data = pickle.loads(serialized_data)
+                # Ensure the scanner has the correct session ID for stop signal checking
+                if 'scanner' in session_data and session_data['scanner']:
+                    session_data['scanner'].session_id = session_id
+                return session_data
+            return None
+        except Exception as e:
+            print(f"ERROR: Failed to get session data for {session_id}: {e}")
+            return None
+
+    def _save_session_data(self, session_id: str, session_data: Dict[str, Any]) -> bool:
+        """
+        Serializes and saves session data back to Redis with updated TTL.
+
+        Returns:
+            bool: True if save was successful
+        """
+        try:
+            session_key = self._get_session_key(session_id)
+            serialized_data = pickle.dumps(session_data)
+            result = self.redis_client.setex(session_key, self.session_timeout, serialized_data)
+            return result
+        except Exception as e:
+            print(f"ERROR: Failed to save session data for {session_id}: {e}")
+            return False
+
+    def update_session_scanner(self, session_id: str, scanner: 'Scanner') -> bool:
+        """
+        Updates just the scanner object in a session with immediate persistence.
+
+        Returns:
+            bool: True if update was successful
+        """
+        try:
+            session_data = self._get_session_data(session_id)
+            if session_data:
+                # Ensure scanner has the session ID
+                scanner.session_id = session_id
+                session_data['scanner'] = scanner
+                session_data['last_activity'] = time.time()
+
+                # Immediately save to Redis for GUI updates
+                success = self._save_session_data(session_id, session_data)
+                if success:
+                    print(f"Scanner state updated for session {session_id} (status: {scanner.status})")
+                else:
+                    print(f"WARNING: Failed to save scanner state for session {session_id}")
+                return success
+            else:
+                print(f"WARNING: Session {session_id} not found for scanner update")
+                return False
+        except Exception as e:
+            print(f"ERROR: Failed to update scanner for session {session_id}: {e}")
+            return False
+
+    def update_scanner_status(self, session_id: str, status: str) -> bool:
+        """
+        Quickly update just the scanner status for immediate GUI feedback.
+
+        Args:
+            session_id: Session identifier
+            status: New scanner status
+
+        Returns:
+            bool: True if update was successful
+        """
+        try:
+            session_data = self._get_session_data(session_id)
+            if session_data and 'scanner' in session_data:
+                session_data['scanner'].status = status
+                session_data['last_activity'] = time.time()
+
+                success = self._save_session_data(session_id, session_data)
+                if success:
+                    print(f"Scanner status updated to '{status}' for session {session_id}")
+                else:
+                    print(f"WARNING: Failed to save status update for session {session_id}")
+                return success
+            return False
+        except Exception as e:
+            print(f"ERROR: Failed to update scanner status for session {session_id}: {e}")
+            return False
@@ ... @@ class SessionManager:
-    def get_session(self, session_id: str) -> Optional[object]:
+    def get_session(self, session_id: str) -> Optional[Scanner]:
         """
-        Get scanner instance for a session with enhanced debugging.
-
-        Args:
-            session_id: Session identifier
-
-        Returns:
-            Scanner instance or None if session doesn't exist
+        Get scanner instance for a session from Redis with session ID management.
         """
         if not session_id:
-            print("get_session called with empty session_id")
             return None

-        with self.lock:
-            if session_id not in self.sessions:
-                print(f"Session {session_id} not found in session manager")
-                print(f"Available sessions: {list(self.sessions.keys())}")
-                return None
-
-            session_data = self.sessions[session_id]
-
-            # Check if session is still active
-            if session_data['status'] != 'active':
-                print(f"Session {session_id} is not active (status: {session_data['status']})")
-                return None
-
-            # Update last activity
-            session_data['last_activity'] = time.time()
-            scanner = session_data['scanner']
-
-            print(f"Retrieved scanner {id(scanner)} for session {session_id}")
-            print(f"Scanner status: {scanner.status}")
-
-            return scanner
-
-    def get_or_create_session(self, session_id: Optional[str] = None) -> tuple[str, Scanner]:
-        """
-        Get existing session or create new one.
-
-        Args:
-            session_id: Optional existing session ID
-
-        Returns:
-            Tuple of (session_id, scanner_instance)
-        """
-        if session_id and self.get_session(session_id):
-            return session_id, self.get_session(session_id)
-        else:
-            new_session_id = self.create_session()
-            return new_session_id, self.get_session(new_session_id)
+        session_data = self._get_session_data(session_id)
+        if not session_data or session_data.get('status') != 'active':
+            return None
+
+        # Update last activity and save back to Redis
+        session_data['last_activity'] = time.time()
+        self._save_session_data(session_id, session_data)
+
+        scanner = session_data.get('scanner')
+        if scanner:
+            # Ensure the scanner can check the Redis-based stop signal
+            scanner.session_id = session_id
+
+        return scanner
+
+    def get_session_status_only(self, session_id: str) -> Optional[str]:
+        """
+        Get just the scanner status without full session retrieval (for performance).
+
+        Args:
+            session_id: Session identifier
+
+        Returns:
+            Scanner status string or None if not found
+        """
+        try:
+            session_data = self._get_session_data(session_id)
+            if session_data and 'scanner' in session_data:
+                return session_data['scanner'].status
+            return None
+        except Exception as e:
+            print(f"ERROR: Failed to get session status for {session_id}: {e}")
+            return None

     def terminate_session(self, session_id: str) -> bool:
         """
-        Terminate a specific session and cleanup resources.
-
-        Args:
-            session_id: Session to terminate
-
-        Returns:
-            True if session was terminated successfully
+        Terminate a specific session in Redis with reliable stop signal and immediate status update.
         """
-        with self.lock:
-            if session_id not in self.sessions:
-                return False
-
-            session_data = self.sessions[session_id]
-            scanner = session_data['scanner']
-
-            # Stop any running scan
-            try:
-                if scanner.status == 'running':
-                    scanner.stop_scan()
-                    print(f"Stopped scan for session: {session_id}")
-            except Exception as e:
-                print(f"Error stopping scan for session {session_id}: {e}")
-
-            # Mark as terminated
-            session_data['status'] = 'terminated'
-            session_data['terminated_at'] = time.time()
-
-            # Remove from active sessions after a brief delay to allow cleanup
-            threading.Timer(5.0, lambda: self._remove_session(session_id)).start()
-
-            print(f"Terminated session: {session_id}")
-            return True
-
-    def _remove_session(self, session_id: str) -> None:
-        """Remove session from memory."""
-        with self.lock:
-            if session_id in self.sessions:
-                del self.sessions[session_id]
-                print(f"Removed session from memory: {session_id}")
-
-    def get_session_info(self, session_id: str) -> Optional[Dict[str, Any]]:
-        """
-        Get session information without updating activity.
-
-        Args:
-            session_id: Session identifier
-
-        Returns:
-            Session information dictionary or None
-        """
-        with self.lock:
-            if session_id not in self.sessions:
-                return None
-
-            session_data = self.sessions[session_id]
-            scanner = session_data['scanner']
-
-            return {
-                'session_id': session_id,
-                'created_at': datetime.fromtimestamp(session_data['created_at'], timezone.utc).isoformat(),
-                'last_activity': datetime.fromtimestamp(session_data['last_activity'], timezone.utc).isoformat(),
-                'status': session_data['status'],
-                'scan_status': scanner.status,
-                'current_target': scanner.current_target,
-                'uptime_seconds': time.time() - session_data['created_at']
-            }
-
-    def list_active_sessions(self) -> Dict[str, Dict[str, Any]]:
-        """
-        List all active sessions with enhanced debugging info.
-
-        Returns:
-            Dictionary of session information
-        """
-        active_sessions = {}
-
-        with self.lock:
-            for session_id, session_data in self.sessions.items():
-                if session_data['status'] == 'active':
-                    scanner = session_data['scanner']
-                    active_sessions[session_id] = {
-                        'session_id': session_id,
-                        'created_at': datetime.fromtimestamp(session_data['created_at'], timezone.utc).isoformat(),
-                        'last_activity': datetime.fromtimestamp(session_data['last_activity'], timezone.utc).isoformat(),
-                        'status': session_data['status'],
-                        'scan_status': scanner.status,
-                        'current_target': scanner.current_target,
-                        'uptime_seconds': time.time() - session_data['created_at'],
-                        'scanner_object_id': id(scanner)
-                    }
-
-        return active_sessions
+        print(f"=== TERMINATING SESSION {session_id} ===")
+
+        try:
+            # First, set the stop signal
+            self.set_stop_signal(session_id)
+
+            # Update scanner status to stopped immediately for GUI feedback
+            self.update_scanner_status(session_id, 'stopped')
+
+            session_data = self._get_session_data(session_id)
+            if not session_data:
+                print(f"Session {session_id} not found")
+                return False
+
+            scanner = session_data.get('scanner')
+            if scanner and scanner.status == 'running':
+                print(f"Stopping scan for session: {session_id}")
+                # The scanner will check the Redis stop signal
+                scanner.stop_scan()
+                # Update the scanner state immediately
+                self.update_session_scanner(session_id, scanner)
+
+            # Wait a moment for graceful shutdown
+            time.sleep(0.5)
+
+            # Delete session data and stop signal from Redis
+            session_key = self._get_session_key(session_id)
+            stop_key = self._get_stop_signal_key(session_id)
+            self.redis_client.delete(session_key)
+            self.redis_client.delete(stop_key)
+
+            print(f"Terminated and removed session from Redis: {session_id}")
+            return True
+
+        except Exception as e:
+            print(f"ERROR: Failed to terminate session {session_id}: {e}")
+            return False
@@ ... @@ class SessionManager:
     def _cleanup_loop(self) -> None:
-        """Background thread to cleanup inactive sessions."""
+        """
+        Background thread to cleanup inactive sessions and orphaned stop signals.
+        """
         while True:
             try:
-                current_time = time.time()
-                sessions_to_cleanup = []
-
-                with self.lock:
-                    for session_id, session_data in self.sessions.items():
-                        if session_data['status'] != 'active':
-                            continue
-
-                        inactive_time = current_time - session_data['last_activity']
-
-                        if inactive_time > self.session_timeout:
-                            sessions_to_cleanup.append(session_id)
-
-                # Cleanup outside of lock to avoid deadlock
-                for session_id in sessions_to_cleanup:
-                    print(f"Cleaning up inactive session: {session_id}")
-                    self.terminate_session(session_id)
-
-                # Sleep for 5 minutes between cleanup cycles
-                time.sleep(300)
-
+                # Clean up orphaned stop signals
+                stop_keys = self.redis_client.keys("dnsrecon:stop:*")
+                for stop_key in stop_keys:
+                    # Extract session ID from stop key
+                    session_id = stop_key.decode('utf-8').split(':')[-1]
+                    session_key = self._get_session_key(session_id)
+
+                    # If session doesn't exist but stop signal does, clean it up
+                    if not self.redis_client.exists(session_key):
+                        self.redis_client.delete(stop_key)
+                        print(f"Cleaned up orphaned stop signal for session {session_id}")
+
             except Exception as e:
-                print(f"Error in session cleanup loop: {e}")
-                time.sleep(60)  # Sleep for 1 minute on error
+                print(f"Error in cleanup loop: {e}")
+
+            time.sleep(300)  # Sleep for 5 minutes

     def get_statistics(self) -> Dict[str, Any]:
-        """
-        Get session manager statistics.
-
-        Returns:
-            Statistics dictionary
-        """
-        with self.lock:
-            active_count = sum(1 for s in self.sessions.values() if s['status'] == 'active')
-            running_scans = sum(1 for s in self.sessions.values()
-                                if s['status'] == 'active' and s['scanner'].status == 'running')
-
-            return {
-                'total_sessions': len(self.sessions),
-                'active_sessions': active_count,
-                'running_scans': running_scans,
-                'session_timeout_minutes': self.session_timeout / 60
-            }
+        """Get session manager statistics."""
+        try:
+            session_keys = self.redis_client.keys("dnsrecon:session:*")
+            stop_keys = self.redis_client.keys("dnsrecon:stop:*")
+
+            active_sessions = len(session_keys)
+            running_scans = 0
+
+            for session_key in session_keys:
+                session_id = session_key.decode('utf-8').split(':')[-1]
+                status = self.get_session_status_only(session_id)
+                if status == 'running':
+                    running_scans += 1
+
+            return {
+                'total_active_sessions': active_sessions,
+                'running_scans': running_scans,
+                'total_stop_signals': len(stop_keys)
+            }
+        except Exception as e:
+            print(f"ERROR: Failed to get statistics: {e}")
+            return {
+                'total_active_sessions': 0,
+                'running_scans': 0,
+                'total_stop_signals': 0
+            }


 # Global session manager instance
 session_manager = SessionManager(session_timeout_minutes=60)
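One caveat with the new _cleanup_loop() and get_statistics(): Redis KEYS is O(N) over the whole keyspace and blocks the server while it runs. That is harmless at typical session counts, but the non-blocking alternative in redis-py is the SCAN-based iterator, e.g. (sketch):

    stop_keys = list(self.redis_client.scan_iter(match='dnsrecon:stop:*'))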
providers/__init__.py
@@ -7,15 +7,13 @@ from .base_provider import BaseProvider, RateLimiter
 from .crtsh_provider import CrtShProvider
 from .dns_provider import DNSProvider
 from .shodan_provider import ShodanProvider
-from .virustotal_provider import VirusTotalProvider

 __all__ = [
     'BaseProvider',
     'RateLimiter',
     'CrtShProvider',
     'DNSProvider',
-    'ShodanProvider',
-    'VirusTotalProvider'
+    'ShodanProvider'
 ]

-__version__ = "1.0.0-phase2"
+__version__ = "0.0.0-rc"
providers/base_provider.py
@@ -3,13 +3,10 @@
 import time
 import requests
 import threading
-import os
-import json
 from abc import ABC, abstractmethod
 from typing import List, Dict, Any, Optional, Tuple

 from core.logger import get_forensic_logger
-from core.graph_manager import RelationshipType


 class RateLimiter:
@@ -26,6 +23,14 @@ class RateLimiter:
         self.min_interval = 60.0 / requests_per_minute
         self.last_request_time = 0

+    def __getstate__(self):
+        """RateLimiter is fully picklable, return full state."""
+        return self.__dict__.copy()
+
+    def __setstate__(self, state):
+        """Restore RateLimiter state."""
+        self.__dict__.update(state)
+
     def wait_if_needed(self) -> None:
         """Wait if necessary to respect rate limits."""
         current_time = time.time()
@@ -73,19 +78,28 @@ class BaseProvider(ABC):
         self.logger = get_forensic_logger()
         self._stop_event = None

-        # Caching configuration (per session)
-        self.cache_dir = f'.cache/{id(self.config)}'  # Unique cache per session config
-        self.cache_expiry = 12 * 3600  # 12 hours in seconds
-        if not os.path.exists(self.cache_dir):
-            os.makedirs(self.cache_dir)
-
         # Statistics (per provider instance)
         self.total_requests = 0
         self.successful_requests = 0
         self.failed_requests = 0
         self.total_relationships_found = 0

-        print(f"Initialized {name} provider with session-specific config (rate: {actual_rate_limit}/min)")
+    def __getstate__(self):
+        """Prepare BaseProvider for pickling by excluding unpicklable objects."""
+        state = self.__dict__.copy()
+        # Exclude the unpickleable '_local' attribute and stop event
+        unpicklable_attrs = ['_local', '_stop_event']
+        for attr in unpicklable_attrs:
+            if attr in state:
+                del state[attr]
+        return state
+
+    def __setstate__(self, state):
+        """Restore BaseProvider after unpickling by reconstructing threading objects."""
+        self.__dict__.update(state)
+        # Re-initialize the '_local' attribute and stop event
+        self._local = threading.local()
+        self._stop_event = None

     @property
     def session(self):
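The __getstate__/__setstate__ pair is what lets provider objects ride along inside the pickled session blob: threading.local and a live Event cannot be pickled, so they are dropped on the way out and rebuilt on the way in. The same pattern in isolation (a minimal, self-contained sketch):

    import pickle
    import threading

    class Worker:
        def __init__(self):
            self.counter = 0
            self._local = threading.local()  # not picklable

        def __getstate__(self):
            state = self.__dict__.copy()
            state.pop('_local', None)  # drop what pickle cannot handle
            return state

        def __setstate__(self, state):
            self.__dict__.update(state)
            self._local = threading.local()  # rebuild per-process state

    w = pickle.loads(pickle.dumps(Worker()))  # counter survives, _local is fresh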
@@ -101,13 +115,28 @@ class BaseProvider(ABC):
         """Return the provider name."""
         pass

+    @abstractmethod
+    def get_display_name(self) -> str:
+        """Return the provider display name for the UI."""
+        pass
+
+    @abstractmethod
+    def requires_api_key(self) -> bool:
+        """Return True if the provider requires an API key."""
+        pass
+
+    @abstractmethod
+    def get_eligibility(self) -> Dict[str, bool]:
+        """Return a dictionary indicating if the provider can query domains and/or IPs."""
+        pass
+
     @abstractmethod
     def is_available(self) -> bool:
         """Check if the provider is available and properly configured."""
         pass

     @abstractmethod
-    def query_domain(self, domain: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
+    def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
         """
         Query the provider for information about a domain.

@@ -120,7 +149,7 @@ class BaseProvider(ABC):
         pass

     @abstractmethod
-    def query_ip(self, ip: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
+    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
         """
         Query the provider for information about an IP address.
@@ -135,163 +164,87 @@ class BaseProvider(ABC):
     def make_request(self, url: str, method: str = "GET",
                      params: Optional[Dict[str, Any]] = None,
                      headers: Optional[Dict[str, str]] = None,
-                     target_indicator: str = "",
-                     max_retries: int = 3) -> Optional[requests.Response]:
+                     target_indicator: str = "") -> Optional[requests.Response]:
         """
-        Make a rate-limited HTTP request with forensic logging and retry logic.
-        Now supports cancellation via stop_event from scanner.
+        Make a rate-limited HTTP request.
         """
-        # Check for cancellation before starting
-        if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
+        if self._is_stop_requested():
             print(f"Request cancelled before start: {url}")
             return None

-        # Create a unique cache key
-        cache_key = f"{self.name}_{hash(f'{method}:{url}:{json.dumps(params, sort_keys=True)}')}.json"
-        cache_path = os.path.join(self.cache_dir, cache_key)
-
-        # Check cache
-        if os.path.exists(cache_path):
-            cache_age = time.time() - os.path.getmtime(cache_path)
-            if cache_age < self.cache_expiry:
-                print(f"Returning cached response for: {url}")
-                with open(cache_path, 'r') as f:
-                    cached_data = json.load(f)
-                response = requests.Response()
-                response.status_code = cached_data['status_code']
-                response._content = cached_data['content'].encode('utf-8')
-                response.headers = cached_data['headers']
-                return response
-
-        for attempt in range(max_retries + 1):
-            # Check for cancellation before each attempt
-            if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
-                print(f"Request cancelled during attempt {attempt + 1}: {url}")
-                return None
-
-            # Apply rate limiting (but reduce wait time if cancellation is requested)
-            if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
-                break
-            self.rate_limiter.wait_if_needed()
-
-            # Check again after rate limiting
-            if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
-                print(f"Request cancelled after rate limiting: {url}")
-                return None
-
-            start_time = time.time()
-            response = None
-            error = None
-
-            try:
-                self.total_requests += 1
-
-                # Prepare request
-                request_headers = self.session.headers.copy()
-                if headers:
-                    request_headers.update(headers)
-
-                print(f"Making {method} request to: {url} (attempt {attempt + 1})")
-
-                # Use shorter timeout if termination is requested
-                request_timeout = self.timeout
-                if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
-                    request_timeout = min(5, self.timeout)  # Max 5 seconds if termination requested
-
-                # Make request
-                if method.upper() == "GET":
-                    response = self.session.get(
-                        url,
-                        params=params,
-                        headers=request_headers,
-                        timeout=request_timeout
-                    )
-                elif method.upper() == "POST":
-                    response = self.session.post(
-                        url,
-                        json=params,
-                        headers=request_headers,
-                        timeout=request_timeout
-                    )
-                else:
-                    raise ValueError(f"Unsupported HTTP method: {method}")
-
-                print(f"Response status: {response.status_code}")
-                response.raise_for_status()
-                self.successful_requests += 1
-
-                # Success - log, cache, and return
-                duration_ms = (time.time() - start_time) * 1000
-                self.logger.log_api_request(
-                    provider=self.name,
-                    url=url,
-                    method=method.upper(),
-                    status_code=response.status_code,
-                    response_size=len(response.content),
-                    duration_ms=duration_ms,
-                    error=None,
-                    target_indicator=target_indicator
-                )
-                # Cache the successful response to disk
-                with open(cache_path, 'w') as f:
-                    json.dump({
-                        'status_code': response.status_code,
-                        'content': response.text,
-                        'headers': dict(response.headers)
-                    }, f)
-                return response
-
-            except requests.exceptions.RequestException as e:
-                error = str(e)
-                self.failed_requests += 1
-                print(f"Request failed (attempt {attempt + 1}): {error}")
-
-                # Check for cancellation before retrying
-                if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
-                    print(f"Request cancelled, not retrying: {url}")
-                    break
-
-                # Check if we should retry
-                if attempt < max_retries and self._should_retry(e):
-                    backoff_time = (2 ** attempt) * 1  # Exponential backoff: 1s, 2s, 4s
-                    print(f"Retrying in {backoff_time} seconds...")
-
-                    # Shorter backoff if termination is requested
-                    if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
-                        backoff_time = min(0.5, backoff_time)
-
-                    # Sleep with cancellation checking
-                    sleep_start = time.time()
-                    while time.time() - sleep_start < backoff_time:
-                        if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
-                            print(f"Request cancelled during backoff: {url}")
-                            return None
-                        time.sleep(0.1)  # Check every 100ms
-                    continue
-                else:
-                    break
-
-            except Exception as e:
-                error = f"Unexpected error: {str(e)}"
-                self.failed_requests += 1
-                print(f"Unexpected error: {error}")
-                break
-
-        # All attempts failed - log and return None
-        duration_ms = (time.time() - start_time) * 1000
-        self.logger.log_api_request(
-            provider=self.name,
-            url=url,
-            method=method.upper(),
-            status_code=response.status_code if response else None,
-            response_size=len(response.content) if response else None,
-            duration_ms=duration_ms,
-            error=error,
-            target_indicator=target_indicator
-        )
-
-        return None
+        self.rate_limiter.wait_if_needed()
+
+        start_time = time.time()
+        response = None
+        error = None
+
+        try:
+            self.total_requests += 1
+
+            request_headers = dict(self.session.headers).copy()
+            if headers:
+                request_headers.update(headers)
+
+            print(f"Making {method} request to: {url}")
+
+            if method.upper() == "GET":
+                response = self.session.get(
+                    url,
+                    params=params,
+                    headers=request_headers,
+                    timeout=self.timeout
+                )
+            elif method.upper() == "POST":
+                response = self.session.post(
+                    url,
+                    json=params,
+                    headers=request_headers,
+                    timeout=self.timeout
+                )
+            else:
+                raise ValueError(f"Unsupported HTTP method: {method}")
+
+            print(f"Response status: {response.status_code}")
+            response.raise_for_status()
+            self.successful_requests += 1
+
+            duration_ms = (time.time() - start_time) * 1000
+            self.logger.log_api_request(
+                provider=self.name,
+                url=url,
+                method=method.upper(),
+                status_code=response.status_code,
+                response_size=len(response.content),
+                duration_ms=duration_ms,
+                error=None,
+                target_indicator=target_indicator
+            )
+
+            return response
+
+        except requests.exceptions.RequestException as e:
+            error = str(e)
+            self.failed_requests += 1
+            duration_ms = (time.time() - start_time) * 1000
+            self.logger.log_api_request(
+                provider=self.name,
+                url=url,
+                method=method.upper(),
+                status_code=response.status_code if response else None,
+                response_size=len(response.content) if response else None,
+                duration_ms=duration_ms,
+                error=error,
+                target_indicator=target_indicator
+            )
+            raise e
+
+    def _is_stop_requested(self) -> bool:
+        """
+        Enhanced stop signal checking that handles both local and Redis-based signals.
+        """
+        if hasattr(self, '_stop_event') and self._stop_event and self._stop_event.is_set():
+            return True
+        return False

     def set_stop_event(self, stop_event: threading.Event) -> None:
         """
@@ -302,30 +255,8 @@ class BaseProvider(ABC):
         """
         self._stop_event = stop_event

-    def _should_retry(self, exception: requests.exceptions.RequestException) -> bool:
-        """
-        Determine if a request should be retried based on the exception.
-
-        Args:
-            exception: The request exception that occurred
-
-        Returns:
-            True if the request should be retried
-        """
-        # Retry on connection errors, timeouts, and 5xx server errors
-        if isinstance(exception, (requests.exceptions.ConnectionError,
-                                  requests.exceptions.Timeout)):
-            return True
-
-        if isinstance(exception, requests.exceptions.HTTPError):
-            if hasattr(exception, 'response') and exception.response:
-                # Retry on server errors (5xx) but not client errors (4xx)
-                return exception.response.status_code >= 500
-
-        return False
-
     def log_relationship_discovery(self, source_node: str, target_node: str,
-                                   relationship_type: RelationshipType,
+                                   relationship_type: str,
                                    confidence_score: float,
                                    raw_data: Dict[str, Any],
                                    discovery_method: str) -> None:

@@ -345,7 +276,7 @@ class BaseProvider(ABC):
         self.logger.log_relationship_discovery(
             source_node=source_node,
             target_node=target_node,
-            relationship_type=relationship_type.relationship_name,
+            relationship_type=relationship_type,
             confidence_score=confidence_score,
             provider=self.name,
             raw_data=raw_data,
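With _should_retry() and the in-method retry loop gone, make_request() now raises RequestException instead of swallowing it, so retry policy becomes the caller's job. A core-side wrapper could look roughly like this (sketch; the function name and retry count are hypothetical, not part of this diff):

    import requests

    def query_with_retries(provider, domain, max_retries=3):
        for attempt in range(max_retries):
            try:
                return provider.query_domain(domain)
            except requests.exceptions.RequestException:
                if attempt == max_retries - 1:
                    raise  # out of attempts; let the scheduler decide
        return []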
providers/crtsh_provider.py
@@ -1,534 +1,513 @@
-"""
-Certificate Transparency provider using crt.sh.
-Discovers domain relationships through certificate SAN analysis with comprehensive certificate tracking.
-Stores certificates as metadata on domain nodes rather than creating certificate nodes.
-"""
+# dnsrecon/providers/crtsh_provider.py

 import json
 import re
+import os
+from pathlib import Path
 from typing import List, Dict, Any, Tuple, Set
-from urllib.parse import quote
 from datetime import datetime, timezone

+# New dependency required for this provider
+try:
+    import psycopg2
+    import psycopg2.extras
+    PSYCOPG2_AVAILABLE = True
+except ImportError:
+    PSYCOPG2_AVAILABLE = False
+
 from .base_provider import BaseProvider
 from utils.helpers import _is_valid_domain
-from core.graph_manager import RelationshipType
+# We use requests only to raise the same exception type for compatibility with core retry logic
+import requests


 class CrtShProvider(BaseProvider):
     """
-    Provider for querying crt.sh certificate transparency database.
-    Now uses session-specific configuration and caching.
+    Provider for querying crt.sh certificate transparency database via its public PostgreSQL endpoint.
+    This version is designed to be a drop-in, high-performance replacement for the API-based provider.
+    It preserves the same caching and data processing logic.
     """

-    def __init__(self, session_config=None):
-        """Initialize CrtSh provider with session-specific configuration."""
+    def __init__(self, name=None, session_config=None):
+        """Initialize CrtShDB provider with session-specific configuration."""
         super().__init__(
             name="crtsh",
-            rate_limit=60,
-            timeout=15,
+            rate_limit=0,  # No rate limit for direct DB access
+            timeout=60,  # Increased timeout for potentially long DB queries
             session_config=session_config
         )
-        self.base_url = "https://crt.sh/"
+        # Database connection details
+        self.db_host = "crt.sh"
+        self.db_port = 5432
+        self.db_name = "certwatch"
+        self.db_user = "guest"
         self._stop_event = None
+
+        # Initialize cache directory (same as original provider)
+        self.cache_dir = Path('cache') / 'crtsh'
+        self.cache_dir.mkdir(parents=True, exist_ok=True)

     def get_name(self) -> str:
         """Return the provider name."""
         return "crtsh"

+    def get_display_name(self) -> str:
+        """Return the provider display name for the UI."""
+        return "crt.sh (DB)"
+
+    def requires_api_key(self) -> bool:
+        """Return True if the provider requires an API key."""
+        return False
+
+    def get_eligibility(self) -> Dict[str, bool]:
+        """Return a dictionary indicating if the provider can query domains and/or IPs."""
+        return {'domains': True, 'ips': False}
+
     def is_available(self) -> bool:
         """
-        Check if the provider is configured to be used.
-        This method is intentionally simple and does not perform a network request
-        to avoid blocking application startup.
+        Check if the provider can be used. Requires the psycopg2 library.
         """
+        if not PSYCOPG2_AVAILABLE:
+            self.logger.logger.warning("psycopg2 library not found. CrtShDBProvider is unavailable. "
+                                       "Please run 'pip install psycopg2-binary'.")
+            return False
         return True

+    def _query_crtsh(self, domain: str) -> List[Dict[str, Any]]:
+        """
+        Query the crt.sh PostgreSQL database for raw certificate data.
+        Raises exceptions for DB/network errors to allow core logic to retry.
+        """
+        conn = None
+        certificates = []
+
+        # SQL Query to find all certificate IDs related to the domain (including subdomains),
+        # then retrieve comprehensive details for each certificate, mimicking the JSON API structure.
+        sql_query = """
+            WITH certificates_of_interest AS (
+                SELECT DISTINCT ci.certificate_id
+                FROM certificate_identity ci
+                WHERE ci.name_value ILIKE %(domain_wildcard)s OR ci.name_value = %(domain)s
+            )
+            SELECT
+                c.id,
+                c.serial_number,
+                c.not_before,
+                c.not_after,
+                (SELECT min(entry_timestamp) FROM ct_log_entry cle WHERE cle.certificate_id = c.id) as entry_timestamp,
+                ca.id as issuer_ca_id,
+                ca.name as issuer_name,
+                (SELECT array_to_string(array_agg(DISTINCT ci.name_value), E'\n') FROM certificate_identity ci WHERE ci.certificate_id = c.id) as name_value,
+                (SELECT name_value FROM certificate_identity ci WHERE ci.certificate_id = c.id AND ci.name_type = 'commonName' LIMIT 1) as common_name
+            FROM
+                certificate c
+            JOIN ca ON c.issuer_ca_id = ca.id
+            WHERE c.id IN (SELECT certificate_id FROM certificates_of_interest);
+        """
+
+        try:
+            conn = psycopg2.connect(
+                dbname=self.db_name,
+                user=self.db_user,
+                host=self.db_host,
+                port=self.db_port,
+                connect_timeout=self.timeout
+            )
+
+            with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cursor:
+                cursor.execute(sql_query, {'domain': domain, 'domain_wildcard': f'%.{domain}'})
+                results = cursor.fetchall()
+                certificates = [dict(row) for row in results]
+
+            self.logger.logger.info(f"crt.sh DB query for '{domain}' returned {len(certificates)} certificates.")
+
+        except psycopg2.Error as e:
+            self.logger.logger.error(f"PostgreSQL query failed for {domain}: {e}")
+            # Raise a RequestException to be compatible with the existing retry logic in the core application
+            raise requests.exceptions.RequestException(f"PostgreSQL query failed: {e}") from e
+        finally:
+            if conn:
+                conn.close()
+
+        return certificates
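crt.sh exposes its certwatch database read-only to anonymous 'guest' connections, which is what the connection above relies on. The same path can be exercised standalone (sketch; table and column names as used in the SQL above, example.com is a placeholder target):

    import psycopg2
    import psycopg2.extras

    conn = psycopg2.connect(dbname='certwatch', user='guest',
                            host='crt.sh', port=5432, connect_timeout=10)
    with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
        cur.execute(
            "SELECT DISTINCT ci.name_value FROM certificate_identity ci "
            "WHERE ci.name_value ILIKE %(pat)s LIMIT 10",
            {'pat': '%.example.com'},
        )
        for row in cur.fetchall():
            print(row['name_value'])
    conn.close()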
|
def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
|
||||||
|
"""
|
||||||
|
Query crt.sh for certificates containing the domain with caching support.
|
||||||
|
Properly raises exceptions for network errors to allow core logic retries.
|
||||||
|
"""
|
||||||
|
if not _is_valid_domain(domain):
|
||||||
|
return []
|
||||||
|
|
||||||
|
if self._stop_event and self._stop_event.is_set():
|
||||||
|
return []
|
||||||
|
|
||||||
|
cache_file = self._get_cache_file_path(domain)
|
||||||
|
cache_status = self._get_cache_status(cache_file)
|
||||||
|
|
||||||
|
certificates = []
|
||||||
|
|
||||||
|
try:
|
||||||
|
if cache_status == "fresh":
|
||||||
|
certificates = self._load_cached_certificates(cache_file)
|
||||||
|
self.logger.logger.info(f"Using cached data for {domain} ({len(certificates)} certificates)")
|
||||||
|
|
||||||
|
elif cache_status == "not_found":
|
||||||
|
# Fresh query from DB, create new cache
|
||||||
|
certificates = self._query_crtsh(domain)
|
||||||
|
if certificates:
|
||||||
|
self._create_cache_file(cache_file, domain, self._serialize_certs_for_cache(certificates))
|
||||||
|
else:
|
||||||
|
self.logger.logger.info(f"No certificates found for {domain}, not caching")
|
||||||
|
|
||||||
|
elif cache_status == "stale":
|
||||||
|
try:
|
||||||
|
new_certificates = self._query_crtsh(domain)
|
||||||
|
if new_certificates:
|
||||||
|
certificates = self._append_to_cache(cache_file, self._serialize_certs_for_cache(new_certificates))
|
||||||
|
else:
|
||||||
|
certificates = self._load_cached_certificates(cache_file)
|
||||||
|
except requests.exceptions.RequestException:
|
||||||
|
certificates = self._load_cached_certificates(cache_file)
|
||||||
|
if certificates:
|
||||||
|
self.logger.logger.warning(f"DB query failed for {domain}, using stale cache data.")
|
||||||
|
else:
|
||||||
|
raise
|
||||||
|
|
||||||
|
except requests.exceptions.RequestException as e:
|
||||||
|
# Re-raise so core logic can retry
|
||||||
|
self.logger.logger.error(f"DB query failed for {domain}: {e}")
|
||||||
|
raise e
|
||||||
|
except json.JSONDecodeError as e:
|
||||||
|
# JSON parsing errors from cache should also be handled
|
||||||
|
self.logger.logger.error(f"Failed to parse JSON from cache for {domain}: {e}")
|
||||||
|
raise e
|
||||||
|
|
||||||
|
if self._stop_event and self._stop_event.is_set():
|
||||||
|
return []
|
||||||
|
|
||||||
|
if not certificates:
|
||||||
|
return []
|
||||||
|
|
||||||
|
return self._process_certificates_to_relationships(domain, certificates)
|
||||||
|
|
||||||
|
def _serialize_certs_for_cache(self, certificates: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
||||||
|
"""
|
||||||
|
Serialize certificate data for JSON caching, converting datetime objects to ISO strings.
|
||||||
|
"""
|
||||||
|
serialized_certs = []
|
||||||
|
for cert in certificates:
|
||||||
|
serialized_cert = cert.copy()
|
||||||
|
for key in ['not_before', 'not_after', 'entry_timestamp']:
|
||||||
|
if isinstance(serialized_cert.get(key), datetime):
|
||||||
|
# Ensure datetime is timezone-aware before converting
|
||||||
|
dt_obj = serialized_cert[key]
|
||||||
|
if dt_obj.tzinfo is None:
|
||||||
|
dt_obj = dt_obj.replace(tzinfo=timezone.utc)
|
||||||
|
serialized_cert[key] = dt_obj.isoformat()
|
||||||
|
serialized_certs.append(serialized_cert)
|
||||||
|
return serialized_certs
|
||||||
|
|
||||||
|
# --- All methods below are copied directly from the original CrtShProvider ---
|
||||||
|
# They are compatible because _query_crtsh returns data in the same format
|
||||||
|
# as the original _query_crtsh_api method. A small adjustment is made to
|
||||||
|
# _parse_certificate_date to handle datetime objects directly from the DB.
|
||||||
|
|
||||||
|
def _get_cache_file_path(self, domain: str) -> Path:
|
||||||
|
"""Generate cache file path for a domain."""
|
||||||
|
safe_domain = domain.replace('.', '_').replace('/', '_').replace('\\', '_')
|
||||||
|
return self.cache_dir / f"{safe_domain}.json"
|
||||||
|
|
||||||
def _parse_certificate_date(self, date_string: str) -> datetime:
|
def _get_cache_status(self, cache_file_path: Path) -> str:
|
||||||
|
"""Check cache status for a domain."""
|
||||||
|
if not cache_file_path.exists():
|
||||||
|
return "not_found"
|
||||||
|
|
||||||
|
try:
|
||||||
|
with open(cache_file_path, 'r') as f:
|
||||||
|
cache_data = json.load(f)
|
||||||
|
|
||||||
|
last_query_str = cache_data.get("last_upstream_query")
|
||||||
|
if not last_query_str:
|
||||||
|
return "stale"
|
||||||
|
|
||||||
|
last_query = datetime.fromisoformat(last_query_str.replace('Z', '+00:00'))
|
||||||
|
hours_since_query = (datetime.now(timezone.utc) - last_query).total_seconds() / 3600
|
||||||
|
|
||||||
|
cache_timeout = self.config.cache_timeout_hours
|
||||||
|
if hours_since_query < cache_timeout:
|
||||||
|
return "fresh"
|
||||||
|
else:
|
||||||
|
return "stale"
|
||||||
|
|
||||||
|
except (json.JSONDecodeError, ValueError, KeyError) as e:
|
||||||
|
self.logger.logger.warning(f"Invalid cache file format for {cache_file_path}: {e}")
|
||||||
|
return "stale"
|
||||||
|
|
||||||
|
def _load_cached_certificates(self, cache_file_path: Path) -> List[Dict[str, Any]]:
|
||||||
|
"""Load certificates from cache file."""
|
||||||
|
try:
|
||||||
|
with open(cache_file_path, 'r') as f:
|
||||||
|
cache_data = json.load(f)
|
||||||
|
return cache_data.get('certificates', [])
|
||||||
|
except (json.JSONDecodeError, FileNotFoundError, KeyError) as e:
|
||||||
|
self.logger.logger.error(f"Failed to load cached certificates from {cache_file_path}: {e}")
|
||||||
|
return []
|
||||||
|
|
||||||
|
def _create_cache_file(self, cache_file_path: Path, domain: str, certificates: List[Dict[str, Any]]) -> None:
|
||||||
|
"""Create new cache file with certificates."""
|
||||||
|
try:
|
||||||
|
cache_data = {
|
||||||
|
"domain": domain,
|
||||||
|
"first_cached": datetime.now(timezone.utc).isoformat(),
|
||||||
|
"last_upstream_query": datetime.now(timezone.utc).isoformat(),
|
||||||
|
"upstream_query_count": 1,
|
||||||
|
"certificates": certificates
|
||||||
|
}
|
||||||
|
cache_file_path.parent.mkdir(parents=True, exist_ok=True)
|
||||||
|
with open(cache_file_path, 'w') as f:
|
||||||
|
json.dump(cache_data, f, separators=(',', ':'))
|
||||||
|
self.logger.logger.info(f"Created cache file for {domain} with {len(certificates)} certificates")
|
||||||
|
except Exception as e:
|
||||||
|
self.logger.logger.warning(f"Failed to create cache file for {domain}: {e}")
|
||||||
|
|
||||||
|
def _append_to_cache(self, cache_file_path: Path, new_certificates: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
||||||
|
"""Append new certificates to existing cache and return all certificates."""
|
||||||
|
try:
|
||||||
|
with open(cache_file_path, 'r') as f:
|
||||||
|
cache_data = json.load(f)
|
||||||
|
|
||||||
|
existing_ids = {cert.get('id') for cert in cache_data.get('certificates', [])}
|
||||||
|
added_count = 0
|
||||||
|
for cert in new_certificates:
|
||||||
|
cert_id = cert.get('id')
|
||||||
|
if cert_id and cert_id not in existing_ids:
|
||||||
|
cache_data['certificates'].append(cert)
|
||||||
|
existing_ids.add(cert_id)
|
||||||
|
added_count += 1
|
||||||
|
|
||||||
|
cache_data['last_upstream_query'] = datetime.now(timezone.utc).isoformat()
|
||||||
|
cache_data['upstream_query_count'] = cache_data.get('upstream_query_count', 0) + 1
|
||||||
|
|
||||||
|
with open(cache_file_path, 'w') as f:
|
||||||
|
json.dump(cache_data, f, separators=(',', ':'))
|
||||||
|
|
||||||
|
total_certs = len(cache_data['certificates'])
|
||||||
|
self.logger.logger.info(f"Appended {added_count} new certificates to cache. Total: {total_certs}")
|
||||||
|
return cache_data['certificates']
|
||||||
|
except Exception as e:
|
||||||
|
self.logger.logger.warning(f"Failed to append to cache: {e}")
|
||||||
|
return new_certificates
|
||||||
|
|
||||||
|
def _parse_issuer_organization(self, issuer_dn: str) -> str:
|
||||||
|
"""Parse the issuer Distinguished Name to extract just the organization name."""
|
||||||
|
if not issuer_dn: return issuer_dn
|
||||||
|
try:
|
||||||
|
components = [comp.strip() for comp in issuer_dn.split(',')]
|
||||||
|
for component in components:
|
||||||
|
if component.startswith('O='):
|
||||||
|
org_name = component[2:].strip()
|
||||||
|
if org_name.startswith('"') and org_name.endswith('"'):
|
||||||
|
org_name = org_name[1:-1]
|
||||||
|
return org_name
|
||||||
|
return issuer_dn
|
||||||
|
except Exception as e:
|
||||||
|
self.logger.logger.debug(f"Failed to parse issuer DN '{issuer_dn}': {e}")
|
||||||
|
return issuer_dn
|
||||||
|
|
||||||
|
def _parse_certificate_date(self, date_input: Any) -> datetime:
|
||||||
"""
|
"""
|
||||||
Parse certificate date from crt.sh format.
|
Parse certificate date from various formats (string from cache, datetime from DB).
|
||||||
|
|
||||||
Args:
|
|
||||||
date_string: Date string from crt.sh API
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Parsed datetime object in UTC
|
|
||||||
"""
|
"""
|
||||||
|
if isinstance(date_input, datetime):
|
||||||
|
# If it's already a datetime object from the DB, just ensure it's UTC
|
||||||
|
if date_input.tzinfo is None:
|
||||||
|
return date_input.replace(tzinfo=timezone.utc)
|
||||||
|
return date_input
|
||||||
|
|
||||||
|
date_string = str(date_input)
|
||||||
if not date_string:
|
if not date_string:
|
||||||
raise ValueError("Empty date string")
|
raise ValueError("Empty date string")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
# Handle various possible formats from crt.sh
|
if 'Z' in date_string:
|
||||||
if date_string.endswith('Z'):
|
return datetime.fromisoformat(date_string.replace('Z', '+00:00'))
|
||||||
return datetime.fromisoformat(date_string[:-1]).replace(tzinfo=timezone.utc)
|
# Handle standard ISO format with or without timezone
|
||||||
elif '+' in date_string or date_string.endswith('UTC'):
|
dt = datetime.fromisoformat(date_string)
|
||||||
# Handle timezone-aware strings
|
if dt.tzinfo is None:
|
||||||
date_string = date_string.replace('UTC', '').strip()
|
return dt.replace(tzinfo=timezone.utc)
|
||||||
if '+' in date_string:
|
return dt
|
||||||
date_string = date_string.split('+')[0]
|
except ValueError as e:
|
||||||
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
|
|
||||||
else:
|
|
||||||
# Assume UTC if no timezone specified
|
|
||||||
return datetime.fromisoformat(date_string).replace(tzinfo=timezone.utc)
|
|
||||||
except Exception as e:
|
|
||||||
# Fallback: try parsing without timezone info and assume UTC
|
|
||||||
try:
|
try:
|
||||||
|
# Fallback for other formats
|
||||||
return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
|
return datetime.strptime(date_string[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
|
||||||
except Exception:
|
except Exception:
|
||||||
raise ValueError(f"Unable to parse date: {date_string}") from e
|
raise ValueError(f"Unable to parse date: {date_string}") from e
|
||||||
|
|
||||||
def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
|
def _is_cert_valid(self, cert_data: Dict[str, Any]) -> bool:
|
||||||
"""
|
"""Check if a certificate is currently valid based on its expiry date."""
|
||||||
Check if a certificate is currently valid based on its expiry date.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
cert_data: Certificate data from crt.sh
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
True if certificate is currently valid (not expired)
|
|
||||||
"""
|
|
||||||
try:
|
try:
|
||||||
not_after_str = cert_data.get('not_after')
|
not_after_str = cert_data.get('not_after')
|
||||||
if not not_after_str:
|
if not not_after_str: return False
|
||||||
return False
|
|
||||||
|
|
||||||
not_after_date = self._parse_certificate_date(not_after_str)
|
not_after_date = self._parse_certificate_date(not_after_str)
|
||||||
not_before_str = cert_data.get('not_before')
|
not_before_str = cert_data.get('not_before')
|
||||||
|
|
||||||
now = datetime.now(timezone.utc)
|
now = datetime.now(timezone.utc)
|
||||||
|
|
||||||
# Check if certificate is within valid date range
|
|
||||||
is_not_expired = not_after_date > now
|
is_not_expired = not_after_date > now
|
||||||
|
|
||||||
if not_before_str:
|
if not_before_str:
|
||||||
not_before_date = self._parse_certificate_date(not_before_str)
|
not_before_date = self._parse_certificate_date(not_before_str)
|
||||||
is_not_before_valid = not_before_date <= now
|
is_not_before_valid = not_before_date <= now
|
||||||
return is_not_expired and is_not_before_valid
|
return is_not_expired and is_not_before_valid
|
||||||
|
|
||||||
return is_not_expired
|
return is_not_expired
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.logger.logger.debug(f"Certificate validity check failed: {e}")
|
self.logger.logger.debug(f"Certificate validity check failed: {e}")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
def _extract_certificate_metadata(self, cert_data: Dict[str, Any]) -> Dict[str, Any]:
|
def _extract_certificate_metadata(self, cert_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
"""
|
# This method works as-is.
|
||||||
Extract comprehensive metadata from certificate data.
|
raw_issuer_name = cert_data.get('issuer_name', '')
|
||||||
|
parsed_issuer_name = self._parse_issuer_organization(raw_issuer_name)
|
||||||
Args:
|
|
||||||
cert_data: Raw certificate data from crt.sh
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Comprehensive certificate metadata dictionary
|
|
||||||
"""
|
|
||||||
metadata = {
|
metadata = {
|
||||||
'certificate_id': cert_data.get('id'),
|
'certificate_id': cert_data.get('id'),
|
||||||
'serial_number': cert_data.get('serial_number'),
|
'serial_number': cert_data.get('serial_number'),
|
||||||
'issuer_name': cert_data.get('issuer_name'),
|
'issuer_name': parsed_issuer_name,
|
||||||
'issuer_ca_id': cert_data.get('issuer_ca_id'),
|
'issuer_ca_id': cert_data.get('issuer_ca_id'),
|
||||||
'common_name': cert_data.get('common_name'),
|
'common_name': cert_data.get('common_name'),
|
||||||
'not_before': cert_data.get('not_before'),
|
'not_before': cert_data.get('not_before'),
|
||||||
'not_after': cert_data.get('not_after'),
|
'not_after': cert_data.get('not_after'),
|
||||||
'entry_timestamp': cert_data.get('entry_timestamp'),
|
'entry_timestamp': cert_data.get('entry_timestamp'),
|
||||||
'source': 'crt.sh'
|
'source': 'crt.sh (DB)'
|
||||||
}
|
}
|
||||||
|
|
||||||
# Add computed fields
|
|
||||||
try:
|
try:
|
||||||
if metadata['not_before'] and metadata['not_after']:
|
if metadata['not_before'] and metadata['not_after']:
|
||||||
not_before = self._parse_certificate_date(metadata['not_before'])
|
not_before = self._parse_certificate_date(metadata['not_before'])
|
||||||
not_after = self._parse_certificate_date(metadata['not_after'])
|
not_after = self._parse_certificate_date(metadata['not_after'])
|
||||||
|
|
||||||
metadata['validity_period_days'] = (not_after - not_before).days
|
metadata['validity_period_days'] = (not_after - not_before).days
|
||||||
metadata['is_currently_valid'] = self._is_cert_valid(cert_data)
|
metadata['is_currently_valid'] = self._is_cert_valid(cert_data)
|
||||||
metadata['expires_soon'] = (not_after - datetime.now(timezone.utc)).days <= 30
|
metadata['expires_soon'] = (not_after - datetime.now(timezone.utc)).days <= 30
|
||||||
|
metadata['not_before'] = not_before.strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||||
# Add human-readable dates
|
metadata['not_after'] = not_after.strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||||
metadata['not_before_formatted'] = not_before.strftime('%Y-%m-%d %H:%M:%S UTC')
|
|
||||||
metadata['not_after_formatted'] = not_after.strftime('%Y-%m-%d %H:%M:%S UTC')
|
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.logger.logger.debug(f"Error computing certificate metadata: {e}")
|
self.logger.logger.debug(f"Error computing certificate metadata: {e}")
|
||||||
metadata['is_currently_valid'] = False
|
metadata['is_currently_valid'] = False
|
||||||
metadata['expires_soon'] = False
|
metadata['expires_soon'] = False
|
||||||
|
|
||||||
return metadata
|
return metadata
|
||||||
|
|
||||||
-    def query_domain(self, domain: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
-        """
-        Query crt.sh for certificates containing the domain.
-        Creates domain-to-domain relationships and stores certificate data as metadata.
-        Now supports early termination via stop_event.
-        """
-        if not _is_valid_domain(domain):
-            return []
-
-        # Check for cancellation before starting
-        if self._stop_event and self._stop_event.is_set():
-            print(f"CrtSh query cancelled before start for domain: {domain}")
-            return []
-
-        relationships = []
-
-        try:
-            # Query crt.sh for certificates
-            url = f"{self.base_url}?q={quote(domain)}&output=json"
-            response = self.make_request(url, target_indicator=domain, max_retries=1)  # Reduce retries for faster cancellation
-
-            if not response or response.status_code != 200:
-                return []
-
-            # Check for cancellation after request
-            if self._stop_event and self._stop_event.is_set():
-                print(f"CrtSh query cancelled after request for domain: {domain}")
-                return []
-
-            certificates = response.json()
-
-            if not certificates:
-                return []
-
-            # Check for cancellation before processing
-            if self._stop_event and self._stop_event.is_set():
-                print(f"CrtSh query cancelled before processing for domain: {domain}")
-                return []
-
-            # Aggregate certificate data by domain
-            domain_certificates = {}
-            all_discovered_domains = set()
-
-            # Process certificates and group by domain (with cancellation checks)
-            for i, cert_data in enumerate(certificates):
-                # Check for cancellation every 10 certificates
-                if i % 10 == 0 and self._stop_event and self._stop_event.is_set():
-                    print(f"CrtSh processing cancelled at certificate {i} for domain: {domain}")
-                    break
-
-                cert_metadata = self._extract_certificate_metadata(cert_data)
-                cert_domains = self._extract_domains_from_certificate(cert_data)
-
-                # Add all domains from this certificate to our tracking
-                for cert_domain in cert_domains:
-                    if not _is_valid_domain(cert_domain):
-                        continue
-
-                    all_discovered_domains.add(cert_domain)
-
-                    # Initialize domain certificate list if needed
-                    if cert_domain not in domain_certificates:
-                        domain_certificates[cert_domain] = []
-
-                    # Add this certificate to the domain's certificate list
-                    domain_certificates[cert_domain].append(cert_metadata)
-
-            # Final cancellation check before creating relationships
-            if self._stop_event and self._stop_event.is_set():
-                print(f"CrtSh query cancelled before relationship creation for domain: {domain}")
-                return []
-
-            # Create relationships from query domain to ALL discovered domains
-            for discovered_domain in all_discovered_domains:
-                if discovered_domain == domain:
-                    continue  # Skip self-relationships
-
-                # Check for cancellation during relationship creation
-                if self._stop_event and self._stop_event.is_set():
-                    print(f"CrtSh relationship creation cancelled for domain: {domain}")
-                    break
-
-                if not _is_valid_domain(discovered_domain):
-                    continue
-
-                # Get certificates for both domains
-                query_domain_certs = domain_certificates.get(domain, [])
-                discovered_domain_certs = domain_certificates.get(discovered_domain, [])
-
-                # Find shared certificates (for metadata purposes)
-                shared_certificates = self._find_shared_certificates(query_domain_certs, discovered_domain_certs)
-
-                # Calculate confidence based on relationship type and shared certificates
-                confidence = self._calculate_domain_relationship_confidence(
-                    domain, discovered_domain, shared_certificates, all_discovered_domains
-                )
-
-                # Create comprehensive raw data for the relationship
-                relationship_raw_data = {
-                    'relationship_type': 'certificate_discovery',
-                    'shared_certificates': shared_certificates,
-                    'total_shared_certs': len(shared_certificates),
-                    'discovery_context': self._determine_relationship_context(discovered_domain, domain),
-                    'domain_certificates': {
-                        domain: self._summarize_certificates(query_domain_certs),
-                        discovered_domain: self._summarize_certificates(discovered_domain_certs)
-                    }
-                }
-
-                # Create domain -> domain relationship
-                relationships.append((
-                    domain,
-                    discovered_domain,
-                    RelationshipType.SAN_CERTIFICATE,
-                    confidence,
-                    relationship_raw_data
-                ))
-
-                # Log the relationship discovery
-                self.log_relationship_discovery(
-                    source_node=domain,
-                    target_node=discovered_domain,
-                    relationship_type=RelationshipType.SAN_CERTIFICATE,
-                    confidence_score=confidence,
-                    raw_data=relationship_raw_data,
-                    discovery_method="certificate_transparency_analysis"
-                )
-
-        except json.JSONDecodeError as e:
-            self.logger.logger.error(f"Failed to parse JSON response from crt.sh: {e}")
-        except Exception as e:
-            self.logger.logger.error(f"Error querying crt.sh for {domain}: {e}")
-
-        return relationships
+    def _process_certificates_to_relationships(self, domain: str, certificates: List[Dict[str, Any]]) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
+        # This method works as-is.
+        relationships = []
+        if self._stop_event and self._stop_event.is_set(): return []
+        domain_certificates = {}
+        all_discovered_domains = set()
+        for i, cert_data in enumerate(certificates):
+            if i % 5 == 0 and self._stop_event and self._stop_event.is_set(): break
+            cert_metadata = self._extract_certificate_metadata(cert_data)
+            cert_domains = self._extract_domains_from_certificate(cert_data)
+            all_discovered_domains.update(cert_domains)
+            for cert_domain in cert_domains:
+                if not _is_valid_domain(cert_domain): continue
+                if cert_domain not in domain_certificates:
+                    domain_certificates[cert_domain] = []
+                domain_certificates[cert_domain].append(cert_metadata)
+        if self._stop_event and self._stop_event.is_set(): return []
+        for i, discovered_domain in enumerate(all_discovered_domains):
+            if discovered_domain == domain: continue
+            if i % 10 == 0 and self._stop_event and self._stop_event.is_set(): break
+            if not _is_valid_domain(discovered_domain): continue
+            query_domain_certs = domain_certificates.get(domain, [])
+            discovered_domain_certs = domain_certificates.get(discovered_domain, [])
+            shared_certificates = self._find_shared_certificates(query_domain_certs, discovered_domain_certs)
+            confidence = self._calculate_domain_relationship_confidence(
+                domain, discovered_domain, shared_certificates, all_discovered_domains
+            )
+            relationship_raw_data = {
+                'relationship_type': 'certificate_discovery',
+                'shared_certificates': shared_certificates,
+                'total_shared_certs': len(shared_certificates),
+                'discovery_context': self._determine_relationship_context(discovered_domain, domain),
+                'domain_certificates': {
+                    domain: self._summarize_certificates(query_domain_certs),
+                    discovered_domain: self._summarize_certificates(discovered_domain_certs)
+                }
+            }
+            relationships.append((
+                domain, discovered_domain, 'san_certificate', confidence, relationship_raw_data
+            ))
+            self.log_relationship_discovery(
+                source_node=domain, target_node=discovered_domain, relationship_type='san_certificate',
+                confidence_score=confidence, raw_data=relationship_raw_data,
+                discovery_method="certificate_transparency_analysis"
+            )
+        return relationships
+
+    # --- All remaining helper methods are identical to the original and fully compatible ---
+    # They are included here for completeness.
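The refactored method no longer performs the HTTP fetch itself; it processes a certificate list handed to it (note the `'crt.sh (DB)'` source tag above). A hedged usage sketch; `_load_cached_certificates` is a hypothetical stand-in for however the caller actually obtains the list:

```python
# Hypothetical caller, for illustration only. The fetch/cache step is not
# shown in this diff; _load_cached_certificates is an assumed helper name.
certificates = provider._load_cached_certificates("example.com")  # assumption
relationships = provider._process_certificates_to_relationships("example.com", certificates)
for source, target, rel_type, confidence, raw in relationships:
    print(f"{source} -> {target} [{rel_type}] ({confidence:.2f})")
```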
-    def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
-        """
-        Find certificates that are shared between two domain certificate lists.
-
-        Args:
-            certs1: First domain's certificates
-            certs2: Second domain's certificates
-
-        Returns:
-            List of shared certificate metadata
-        """
-        shared = []
-
-        # Create a set of certificate IDs from the first list for quick lookup
-        cert1_ids = {cert.get('certificate_id') for cert in certs1 if cert.get('certificate_id')}
-
-        # Find certificates in the second list that match
-        for cert in certs2:
-            if cert.get('certificate_id') in cert1_ids:
-                shared.append(cert)
-
-        return shared
+    def _find_shared_certificates(self, certs1: List[Dict[str, Any]], certs2: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
+        cert1_ids = {cert.get('certificate_id') for cert in certs1 if cert.get('certificate_id')}
+        return [cert for cert in certs2 if cert.get('certificate_id') in cert1_ids]
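The rewrite collapses the loop into a set lookup plus comprehension. A self-contained illustration of why: membership tests against `cert1_ids` are O(1), so the scan is linear in the two list lengths rather than quadratic:

```python
# Minimal demonstration of the set-membership pattern used above.
certs1 = [{'certificate_id': 1}, {'certificate_id': 2}]
certs2 = [{'certificate_id': 2}, {'certificate_id': 3}]

cert1_ids = {c.get('certificate_id') for c in certs1 if c.get('certificate_id')}
shared = [c for c in certs2 if c.get('certificate_id') in cert1_ids]
print(shared)  # [{'certificate_id': 2}]
```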
-    def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
-        """
-        Create a summary of certificates for a domain.
-
-        Args:
-            certificates: List of certificate metadata
-
-        Returns:
-            Summary dictionary with aggregate statistics
-        """
-        if not certificates:
-            return {
-                'total_certificates': 0,
-                'valid_certificates': 0,
-                'expired_certificates': 0,
-                'expires_soon_count': 0,
-                'unique_issuers': [],
-                'latest_certificate': None,
-                'has_valid_cert': False
-            }
-
-        valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
-        expired_count = len(certificates) - valid_count
-        expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))
-
-        # Get unique issuers
-        unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))
-
-        # Find the most recent certificate
-        latest_cert = None
-        latest_date = None
-
-        for cert in certificates:
-            try:
-                if cert.get('not_before'):
-                    cert_date = self._parse_certificate_date(cert['not_before'])
-                    if latest_date is None or cert_date > latest_date:
-                        latest_date = cert_date
-                        latest_cert = cert
-            except Exception:
-                continue
-
-        return {
-            'total_certificates': len(certificates),
-            'valid_certificates': valid_count,
-            'expired_certificates': expired_count,
-            'expires_soon_count': expires_soon_count,
-            'unique_issuers': unique_issuers,
-            'latest_certificate': latest_cert,
-            'has_valid_cert': valid_count > 0,
-            'certificate_details': certificates  # Full details for forensic analysis
-        }
+    def _summarize_certificates(self, certificates: List[Dict[str, Any]]) -> Dict[str, Any]:
+        if not certificates: return {'total_certificates': 0, 'valid_certificates': 0, 'expired_certificates': 0, 'expires_soon_count': 0, 'unique_issuers': [], 'latest_certificate': None, 'has_valid_cert': False}
+        valid_count = sum(1 for cert in certificates if cert.get('is_currently_valid'))
+        expires_soon_count = sum(1 for cert in certificates if cert.get('expires_soon'))
+        unique_issuers = list(set(cert.get('issuer_name') for cert in certificates if cert.get('issuer_name')))
+        latest_cert, latest_date = None, None
+        for cert in certificates:
+            try:
+                if cert.get('not_before'):
+                    cert_date = self._parse_certificate_date(cert['not_before'])
+                    if latest_date is None or cert_date > latest_date:
+                        latest_date, latest_cert = cert_date, cert
+            except Exception: continue
+        return {'total_certificates': len(certificates), 'valid_certificates': valid_count, 'expired_certificates': len(certificates) - valid_count, 'expires_soon_count': expires_soon_count, 'unique_issuers': unique_issuers, 'latest_certificate': latest_cert, 'has_valid_cert': valid_count > 0, 'certificate_details': certificates}
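The condensed version computes the same aggregates as before. A small worked example of the counting logic, using metadata fields shaped like those produced by `_extract_certificate_metadata` above:

```python
# Two certificates, one currently valid: the summary should report
# total=2, valid=1, expired=1, has_valid_cert=True.
certs = [
    {'is_currently_valid': True, 'expires_soon': False, 'issuer_name': "Let's Encrypt"},
    {'is_currently_valid': False, 'expires_soon': False, 'issuer_name': 'DigiCert'},
]
valid_count = sum(1 for c in certs if c.get('is_currently_valid'))
summary = {
    'total_certificates': len(certs),
    'valid_certificates': valid_count,
    'expired_certificates': len(certs) - valid_count,
    'has_valid_cert': valid_count > 0,
}
print(summary)  # {'total_certificates': 2, 'valid_certificates': 1, 'expired_certificates': 1, 'has_valid_cert': True}
```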
-    def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str,
-                                                  shared_certificates: List[Dict[str, Any]],
-                                                  all_discovered_domains: Set[str]) -> float:
-        """
-        Calculate confidence score for domain relationship based on various factors.
-
-        Args:
-            domain1: Source domain (query domain)
-            domain2: Target domain (discovered domain)
-            shared_certificates: List of shared certificate metadata
-            all_discovered_domains: All domains discovered in this query
-
-        Returns:
-            Confidence score between 0.0 and 1.0
-        """
-        base_confidence = RelationshipType.SAN_CERTIFICATE.default_confidence
-
-        # Adjust confidence based on domain relationship context
+    def _calculate_domain_relationship_confidence(self, domain1: str, domain2: str, shared_certificates: List[Dict[str, Any]], all_discovered_domains: Set[str]) -> float:
+        base_confidence, context_bonus, shared_bonus, validity_bonus, issuer_bonus = 0.9, 0.0, 0.0, 0.0, 0.0
         relationship_context = self._determine_relationship_context(domain2, domain1)
-        if relationship_context == 'exact_match':
-            context_bonus = 0.0  # This shouldn't happen, but just in case
-        elif relationship_context == 'subdomain':
-            context_bonus = 0.1  # High confidence for subdomains
-        elif relationship_context == 'parent_domain':
-            context_bonus = 0.05  # Medium confidence for parent domains
-        else:
-            context_bonus = 0.0  # Related domains get base confidence
-
-        # Adjust confidence based on shared certificates
-        if shared_certificates:
-            shared_count = len(shared_certificates)
-            if shared_count >= 3:
-                shared_bonus = 0.1
-            elif shared_count >= 2:
-                shared_bonus = 0.05
-            else:
-                shared_bonus = 0.02
-
-            # Additional bonus for valid shared certificates
-            valid_shared = sum(1 for cert in shared_certificates if cert.get('is_currently_valid'))
-            if valid_shared > 0:
-                validity_bonus = 0.05
-            else:
-                validity_bonus = 0.0
-        else:
-            # Even without shared certificates, domains found in the same query have some relationship
-            shared_bonus = 0.0
-            validity_bonus = 0.0
-
-        # Adjust confidence based on certificate issuer reputation (if shared certificates exist)
-        issuer_bonus = 0.0
+        if relationship_context == 'subdomain': context_bonus = 0.1
+        elif relationship_context == 'parent_domain': context_bonus = 0.05
         if shared_certificates:
+            if len(shared_certificates) >= 3: shared_bonus = 0.1
+            elif len(shared_certificates) >= 2: shared_bonus = 0.05
+            else: shared_bonus = 0.02
+            if any(cert.get('is_currently_valid') for cert in shared_certificates): validity_bonus = 0.05
             for cert in shared_certificates:
-                issuer = cert.get('issuer_name', '').lower()
-                if any(trusted_ca in issuer for trusted_ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
+                if any(ca in cert.get('issuer_name', '').lower() for ca in ['let\'s encrypt', 'digicert', 'sectigo', 'globalsign']):
                     issuer_bonus = max(issuer_bonus, 0.03)
                     break
-
-        # Calculate final confidence
-        final_confidence = base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus
-        return max(0.1, min(1.0, final_confidence))  # Clamp between 0.1 and 1.0
+        return max(0.1, min(1.0, base_confidence + context_bonus + shared_bonus + validity_bonus + issuer_bonus))
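The scoring is purely additive from a 0.9 base, then clamped to [0.1, 1.0]. Two worked examples of the arithmetic:

```python
# Subdomain with 2 shared certs, at least one valid, issued by a listed CA:
base, context, shared, validity, issuer = 0.9, 0.1, 0.05, 0.05, 0.03
print(max(0.1, min(1.0, base + context + shared + validity + issuer)))  # 1.0 (clamped from 1.13)

# Related domain with 1 shared expired cert from an unlisted CA:
base, context, shared, validity, issuer = 0.9, 0.0, 0.02, 0.0, 0.0
print(max(0.1, min(1.0, base + context + shared + validity + issuer)))  # 0.92
```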
-    def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
-        """
-        Determine the context of the relationship between certificate domain and query domain.
-
-        Args:
-            cert_domain: Domain found in certificate
-            query_domain: Original query domain
-
-        Returns:
-            String describing the relationship context
-        """
-        if cert_domain == query_domain:
-            return 'exact_match'
-        elif cert_domain.endswith(f'.{query_domain}'):
-            return 'subdomain'
-        elif query_domain.endswith(f'.{cert_domain}'):
-            return 'parent_domain'
-        else:
-            return 'related_domain'
+    def _determine_relationship_context(self, cert_domain: str, query_domain: str) -> str:
+        if cert_domain == query_domain: return 'exact_match'
+        if cert_domain.endswith(f'.{query_domain}'): return 'subdomain'
+        if query_domain.endswith(f'.{cert_domain}'): return 'parent_domain'
+        return 'related_domain'
-    def query_ip(self, ip: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
-        """
-        Query crt.sh for certificates containing the IP address.
-        Note: crt.sh doesn't typically index by IP, so this returns empty results.
-
-        Args:
-            ip: IP address to investigate
-
-        Returns:
-            Empty list (crt.sh doesn't support IP-based certificate queries effectively)
-        """
-        # crt.sh doesn't effectively support IP-based certificate queries
-        return []
+    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
+        return []
-    def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
-        """
-        Extract all domains from certificate data.
-
-        Args:
-            cert_data: Certificate data from crt.sh API
-
-        Returns:
-            Set of unique domain names found in the certificate
-        """
-        domains = set()
-
-        # Extract from common name
-        common_name = cert_data.get('common_name', '')
-        if common_name:
-            cleaned_cn = self._clean_domain_name(common_name)
-            if cleaned_cn and _is_valid_domain(cleaned_cn):
-                domains.add(cleaned_cn)
-
-        # Extract from name_value field (contains SANs)
-        name_value = cert_data.get('name_value', '')
-        if name_value:
-            # Split by newlines and clean each domain
-            for line in name_value.split('\n'):
-                cleaned_domain = self._clean_domain_name(line.strip())
-                if cleaned_domain and _is_valid_domain(cleaned_domain):
-                    domains.add(cleaned_domain)
-
-        return domains
+    def _extract_domains_from_certificate(self, cert_data: Dict[str, Any]) -> Set[str]:
+        domains = set()
+        if cn := cert_data.get('common_name'):
+            if cleaned := self._clean_domain_name(cn):
+                domains.update(cleaned)
+        if nv := cert_data.get('name_value'):
+            for line in nv.split('\n'):
+                if cleaned := self._clean_domain_name(line.strip()):
+                    domains.update(cleaned)
+        return domains
-    def _clean_domain_name(self, domain_name: str) -> str:
-        """
-        Clean and normalize domain name from certificate data.
-
-        Args:
-            domain_name: Raw domain name from certificate
-
-        Returns:
-            Cleaned domain name or empty string if invalid
-        """
-        if not domain_name:
-            return ""
-
-        # Remove common prefixes and clean up
-        domain = domain_name.strip().lower()
-
-        # Remove protocol if present
-        if domain.startswith(('http://', 'https://')):
-            domain = domain.split('://', 1)[1]
-
-        # Remove path if present
-        if '/' in domain:
-            domain = domain.split('/', 1)[0]
-
-        # Remove port if present
-        if ':' in domain and not domain.count(':') > 1:  # Avoid breaking IPv6
-            domain = domain.split(':', 1)[0]
-
-        # Handle wildcard domains
-        if domain.startswith('*.'):
-            domain = domain[2:]
-
-        # Remove any remaining invalid characters
-        domain = re.sub(r'[^\w\-\.]', '', domain)
-
-        # Ensure it's not empty and doesn't start/end with dots or hyphens
-        if domain and not domain.startswith(('.', '-')) and not domain.endswith(('.', '-')):
-            return domain
-
-        return ""
+    def _clean_domain_name(self, domain_name: str) -> List[str]:
+        if not domain_name: return []
+        domain = domain_name.strip().lower().split('://', 1)[-1].split('/', 1)[0]
+        if ':' in domain and not domain.count(':') > 1: domain = domain.split(':', 1)[0]
+        cleaned_domains = [domain, domain[2:]] if domain.startswith('*.') else [domain]
+        final_domains = []
+        for d in cleaned_domains:
+            d = re.sub(r'[^\w\-\.]', '', d)
+            if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-')):
+                final_domains.append(d)
+        return [d for d in final_domains if _is_valid_domain(d)]
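`_clean_domain_name` now returns a list so one SAN entry can yield several candidates; a wildcard is expanded to its base domain, and the literal `*.` form is dropped by the character filter. A standalone sketch of the same logic (validation via `_is_valid_domain` omitted here):

```python
import re

def clean_domain_name(domain_name: str) -> list:
    # Standalone copy of the new list-returning cleaner, for illustration.
    if not domain_name:
        return []
    domain = domain_name.strip().lower().split('://', 1)[-1].split('/', 1)[0]
    if ':' in domain and not domain.count(':') > 1:  # avoid breaking IPv6
        domain = domain.split(':', 1)[0]
    candidates = [domain, domain[2:]] if domain.startswith('*.') else [domain]
    final = []
    for d in candidates:
        d = re.sub(r'[^\w\-\.]', '', d)  # '*' is stripped, leaving '.example.com'
        if d and not d.startswith(('.', '-')) and not d.endswith(('.', '-')):
            final.append(d)
    return final

print(clean_domain_name('*.example.com'))              # ['example.com']
print(clean_domain_name('https://www.example.com/x'))  # ['www.example.com']
```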
@@ -1,11 +1,9 @@
 # dnsrecon/providers/dns_provider.py

-import dns.resolver
-import dns.reversename
+from dns import resolver, reversename
 from typing import List, Dict, Any, Tuple
 from .base_provider import BaseProvider
 from utils.helpers import _is_valid_ip, _is_valid_domain
-from core.graph_manager import RelationshipType


 class DNSProvider(BaseProvider):
@@ -14,7 +12,7 @@ class DNSProvider(BaseProvider):
     Now uses session-specific configuration.
     """

-    def __init__(self, session_config=None):
+    def __init__(self, name=None, session_config=None):
         """Initialize DNS provider with session-specific configuration."""
         super().__init__(
             name="dns",
@@ -24,27 +22,35 @@ class DNSProvider(BaseProvider):
         )

         # Configure DNS resolver
-        self.resolver = dns.resolver.Resolver()
+        self.resolver = resolver.Resolver()
         self.resolver.timeout = 5
         self.resolver.lifetime = 10
+        #self.resolver.nameservers = ['127.0.0.1']

     def get_name(self) -> str:
         """Return the provider name."""
         return "dns"

+    def get_display_name(self) -> str:
+        """Return the provider display name for the UI."""
+        return "DNS"
+
+    def requires_api_key(self) -> bool:
+        """Return True if the provider requires an API key."""
+        return False
+
+    def get_eligibility(self) -> Dict[str, bool]:
+        """Return a dictionary indicating if the provider can query domains and/or IPs."""
+        return {'domains': True, 'ips': True}
+
     def is_available(self) -> bool:
         """DNS is always available - no API key required."""
         return True

-    def query_domain(self, domain: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
+    def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
         """
         Query DNS records for the domain to discover relationships.
-
-        Args:
-            domain: Domain to investigate
-
-        Returns:
-            List of relationships discovered from DNS analysis
+        ...
         """
         if not _is_valid_domain(domain):
             return []
@@ -52,12 +58,20 @@ class DNSProvider(BaseProvider):
         relationships = []

         # Query all record types
-        for record_type in ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SOA', 'TXT', 'SRV', 'CAA', 'DNSKEY', 'DS', 'RRSIG', 'SSHFP', 'TLSA', 'NAPTR', 'SPF']:
-            relationships.extend(self._query_record(domain, record_type))
+        for record_type in ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SOA', 'TXT', 'SRV', 'CAA']:
+            try:
+                relationships.extend(self._query_record(domain, record_type))
+            except resolver.NoAnswer:
+                # This is not an error, just a confirmation that the record doesn't exist.
+                self.logger.logger.debug(f"No {record_type} record found for {domain}")
+            except Exception as e:
+                self.failed_requests += 1
+                self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
+                # Optionally, you might want to re-raise other, more serious exceptions.

         return relationships

-    def query_ip(self, ip: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
+    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
         """
         Query reverse DNS for the IP address.

@@ -75,7 +89,7 @@ class DNSProvider(BaseProvider):
         try:
             # Perform reverse DNS lookup
             self.total_requests += 1
-            reverse_name = dns.reversename.from_address(ip)
+            reverse_name = reversename.from_address(ip)
             response = self.resolver.resolve(reverse_name, 'PTR')
             self.successful_requests += 1

@@ -93,27 +107,32 @@ class DNSProvider(BaseProvider):
                 relationships.append((
                     ip,
                     hostname,
-                    RelationshipType.PTR_RECORD,
-                    RelationshipType.PTR_RECORD.default_confidence,
+                    'ptr_record',
+                    0.8,
                     raw_data
                 ))

                 self.log_relationship_discovery(
                     source_node=ip,
                     target_node=hostname,
-                    relationship_type=RelationshipType.PTR_RECORD,
-                    confidence_score=RelationshipType.PTR_RECORD.default_confidence,
+                    relationship_type='ptr_record',
+                    confidence_score=0.8,
                     raw_data=raw_data,
                     discovery_method="reverse_dns_lookup"
                 )

+        except resolver.NXDOMAIN:
+            self.failed_requests += 1
+            self.logger.logger.debug(f"Reverse DNS lookup failed for {ip}: NXDOMAIN")
         except Exception as e:
             self.failed_requests += 1
             self.logger.logger.debug(f"Reverse DNS lookup failed for {ip}: {e}")
+            # Re-raise the exception so the scanner can handle the failure
+            raise e

         return relationships

-    def _query_record(self, domain: str, record_type: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
+    def _query_record(self, domain: str, record_type: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
         """
         Query a specific type of DNS record for the domain.
         """
@@ -133,8 +152,9 @@ class DNSProvider(BaseProvider):
                     target = str(record.exchange).rstrip('.')
                 elif record_type == 'SOA':
                     target = str(record.mname).rstrip('.')
-                elif record_type in ['TXT', 'SPF']:
-                    target = b' '.join(record.strings).decode('utf-8', 'ignore')
+                elif record_type in ['TXT']:
+                    # TXT records are treated as metadata, not relationships.
+                    continue
                 elif record_type == 'SRV':
                     target = str(record.target).rstrip('.')
                 elif record_type == 'CAA':
@@ -142,7 +162,6 @@ class DNSProvider(BaseProvider):
                 else:
                     target = str(record)

-
                 if target:
                     raw_data = {
                         'query_type': record_type,
@@ -150,29 +169,30 @@ class DNSProvider(BaseProvider):
                         'value': target,
                         'ttl': response.ttl
                     }
-                    try:
-                        relationship_type_enum = getattr(RelationshipType, f"{record_type}_RECORD")
-                        relationships.append((
-                            domain,
-                            target,
-                            relationship_type_enum,
-                            relationship_type_enum.default_confidence,
-                            raw_data
-                        ))
-
-                        self.log_relationship_discovery(
-                            source_node=domain,
-                            target_node=target,
-                            relationship_type=relationship_type_enum,
-                            confidence_score=relationship_type_enum.default_confidence,
-                            raw_data=raw_data,
-                            discovery_method=f"dns_{record_type.lower()}_record"
-                        )
-                    except AttributeError:
-                        self.logger.logger.error(f"Unsupported record type '{record_type}' encountered for domain {domain}")
+                    relationship_type = f"{record_type.lower()}_record"
+                    confidence = 0.8  # Default confidence for DNS records
+
+                    relationships.append((
+                        domain,
+                        target,
+                        relationship_type,
+                        confidence,
+                        raw_data
+                    ))
+
+                    self.log_relationship_discovery(
+                        source_node=domain,
+                        target_node=target,
+                        relationship_type=relationship_type,
+                        confidence_score=confidence,
+                        raw_data=raw_data,
+                        discovery_method=f"dns_{record_type.lower()}_record"
+                    )

         except Exception as e:
             self.failed_requests += 1
             self.logger.logger.debug(f"{record_type} record query failed for {domain}: {e}")
+            # Re-raise the exception so the scanner can handle it
+            raise e

         return relationships
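The new per-record-type `try` block distinguishes dnspython's `NoAnswer` (the name exists but has no record of that type) from real failures, and reverse lookups now treat `NXDOMAIN` separately. A minimal self-contained sketch of the pattern, assuming dnspython 2.x:

```python
# Minimal demonstration of the error-handling pattern introduced above.
from dns import resolver

res = resolver.Resolver()
res.timeout, res.lifetime = 5, 10

for record_type in ['A', 'MX', 'TXT']:
    try:
        answer = res.resolve('example.com', record_type)
        for record in answer:
            print(record_type, str(record))
    except resolver.NoAnswer:
        print(f"no {record_type} record (not an error)")
    except resolver.NXDOMAIN:
        print("domain does not exist")
```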
@@ -1,13 +1,9 @@
-"""
-Shodan provider for DNSRecon.
-Discovers IP relationships and infrastructure context through Shodan API.
-"""
+# dnsrecon/providers/shodan_provider.py

 import json
 from typing import List, Dict, Any, Tuple
 from .base_provider import BaseProvider
 from utils.helpers import _is_valid_ip, _is_valid_domain
-from core.graph_manager import RelationshipType


 class ShodanProvider(BaseProvider):
@@ -15,8 +11,8 @@ class ShodanProvider(BaseProvider):
     Provider for querying Shodan API for IP address and hostname information.
     Now uses session-specific API keys.
     """

-    def __init__(self, session_config=None):
+    def __init__(self, name=None, session_config=None):
         """Initialize Shodan provider with session-specific configuration."""
         super().__init__(
             name="shodan",
@@ -26,32 +22,43 @@ class ShodanProvider(BaseProvider):
         )
         self.base_url = "https://api.shodan.io"
         self.api_key = self.config.get_api_key('shodan')

     def is_available(self) -> bool:
         """Check if Shodan provider is available (has valid API key in this session)."""
         return self.api_key is not None and len(self.api_key.strip()) > 0

     def get_name(self) -> str:
         """Return the provider name."""
         return "shodan"

+    def get_display_name(self) -> str:
+        """Return the provider display name for the UI."""
+        return "shodan"
+
+    def requires_api_key(self) -> bool:
+        """Return True if the provider requires an API key."""
+        return True
+
+    def get_eligibility(self) -> Dict[str, bool]:
+        """Return a dictionary indicating if the provider can query domains and/or IPs."""
+        return {'domains': True, 'ips': True}
+
-    def query_domain(self, domain: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
+    def query_domain(self, domain: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
         """
         Query Shodan for information about a domain.
         Uses Shodan's hostname search to find associated IPs.

         Args:
             domain: Domain to investigate

         Returns:
             List of relationships discovered from Shodan data
         """
         if not _is_valid_domain(domain) or not self.is_available():
             return []

         relationships = []

         try:
             # Search for hostname in Shodan
             search_query = f"hostname:{domain}"
@@ -61,22 +68,22 @@ class ShodanProvider(BaseProvider):
                 'query': search_query,
                 'minify': True  # Get minimal data to reduce bandwidth
             }

             response = self.make_request(url, method="GET", params=params, target_indicator=domain)

             if not response or response.status_code != 200:
                 return []

             data = response.json()

             if 'matches' not in data:
                 return []

             # Process search results
             for match in data['matches']:
                 ip_address = match.get('ip_str')
                 hostnames = match.get('hostnames', [])

                 if ip_address and domain in hostnames:
                     raw_data = {
                         'ip_address': ip_address,
@@ -88,24 +95,24 @@ class ShodanProvider(BaseProvider):
                         'ports': match.get('ports', []),
                         'last_update': match.get('last_update', '')
                     }

                     relationships.append((
                         domain,
                         ip_address,
-                        RelationshipType.A_RECORD,  # Domain resolves to IP
-                        RelationshipType.A_RECORD.default_confidence,
+                        'a_record',  # Domain resolves to IP
+                        0.8,
                         raw_data
                     ))

                     self.log_relationship_discovery(
                         source_node=domain,
                         target_node=ip_address,
-                        relationship_type=RelationshipType.A_RECORD,
-                        confidence_score=RelationshipType.A_RECORD.default_confidence,
+                        relationship_type='a_record',
+                        confidence_score=0.8,
                         raw_data=raw_data,
                         discovery_method="shodan_hostname_search"
                     )

                     # Also create relationships to other hostnames on the same IP
                     for hostname in hostnames:
                         if hostname != domain and _is_valid_domain(hostname):
@@ -114,58 +121,56 @@ class ShodanProvider(BaseProvider):
                                 'all_hostnames': hostnames,
                                 'discovery_context': 'shared_hosting'
                             }

                             relationships.append((
                                 domain,
                                 hostname,
-                                RelationshipType.PASSIVE_DNS,  # Shared hosting relationship
+                                'passive_dns',  # Shared hosting relationship
                                 0.6,  # Lower confidence for shared hosting
                                 hostname_raw_data
                             ))

                             self.log_relationship_discovery(
                                 source_node=domain,
                                 target_node=hostname,
-                                relationship_type=RelationshipType.PASSIVE_DNS,
+                                relationship_type='passive_dns',
                                 confidence_score=0.6,
                                 raw_data=hostname_raw_data,
                                 discovery_method="shodan_shared_hosting"
                             )

         except json.JSONDecodeError as e:
             self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
-        except Exception as e:
-            self.logger.logger.error(f"Error querying Shodan for domain {domain}: {e}")

         return relationships

-    def query_ip(self, ip: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
+    def query_ip(self, ip: str) -> List[Tuple[str, str, str, float, Dict[str, Any]]]:
         """
         Query Shodan for information about an IP address.

         Args:
             ip: IP address to investigate

         Returns:
             List of relationships discovered from Shodan IP data
         """
         if not _is_valid_ip(ip) or not self.is_available():
             return []

         relationships = []

         try:
             # Query Shodan host information
             url = f"{self.base_url}/shodan/host/{ip}"
             params = {'key': self.api_key}

             response = self.make_request(url, method="GET", params=params, target_indicator=ip)

             if not response or response.status_code != 200:
                 return []

             data = response.json()

             # Extract hostname relationships
             hostnames = data.get('hostnames', [])
             for hostname in hostnames:
@@ -182,73 +187,77 @@ class ShodanProvider(BaseProvider):
                 'last_update': data.get('last_update', ''),
                 'os': data.get('os', '')
             }

                 relationships.append((
                     ip,
                     hostname,
-                    RelationshipType.A_RECORD,  # IP resolves to hostname
-                    RelationshipType.A_RECORD.default_confidence,
+                    'a_record',  # IP resolves to hostname
+                    0.8,
                     raw_data
                 ))

                 self.log_relationship_discovery(
                     source_node=ip,
                     target_node=hostname,
-                    relationship_type=RelationshipType.A_RECORD,
-                    confidence_score=RelationshipType.A_RECORD.default_confidence,
+                    relationship_type='a_record',
+                    confidence_score=0.8,
                     raw_data=raw_data,
                     discovery_method="shodan_host_lookup"
                 )

             # Extract ASN relationship if available
             asn = data.get('asn')
             if asn:
-                asn_name = f"AS{asn}"
+                # Ensure the ASN starts with "AS"
+                if isinstance(asn, str) and asn.startswith('AS'):
+                    asn_name = asn
+                    asn_number = asn[2:]
+                else:
+                    asn_name = f"AS{asn}"
+                    asn_number = str(asn)

                 asn_raw_data = {
                     'ip_address': ip,
-                    'asn': asn,
+                    'asn': asn_number,
                     'isp': data.get('isp', ''),
                     'org': data.get('org', '')
                 }

                 relationships.append((
                     ip,
                     asn_name,
-                    RelationshipType.ASN_MEMBERSHIP,
-                    RelationshipType.ASN_MEMBERSHIP.default_confidence,
+                    'asn_membership',
+                    0.7,
                     asn_raw_data
                 ))

                 self.log_relationship_discovery(
                     source_node=ip,
                     target_node=asn_name,
-                    relationship_type=RelationshipType.ASN_MEMBERSHIP,
-                    confidence_score=RelationshipType.ASN_MEMBERSHIP.default_confidence,
+                    relationship_type='asn_membership',
+                    confidence_score=0.7,
                     raw_data=asn_raw_data,
                     discovery_method="shodan_asn_lookup"
                 )

         except json.JSONDecodeError as e:
             self.logger.logger.error(f"Failed to parse JSON response from Shodan: {e}")
-        except Exception as e:
-            self.logger.logger.error(f"Error querying Shodan for IP {ip}: {e}")

         return relationships

     def search_by_organization(self, org_name: str) -> List[Dict[str, Any]]:
         """
         Search Shodan for hosts belonging to a specific organization.

         Args:
             org_name: Organization name to search for

         Returns:
             List of host information dictionaries
         """
         if not self.is_available():
             return []

         try:
             search_query = f"org:\"{org_name}\""
             url = f"{self.base_url}/shodan/host/search"
@@ -257,42 +266,42 @@ class ShodanProvider(BaseProvider):
                 'query': search_query,
                 'minify': True
             }

             response = self.make_request(url, method="GET", params=params, target_indicator=org_name)

             if response and response.status_code == 200:
                 data = response.json()
                 return data.get('matches', [])

         except Exception as e:
             self.logger.logger.error(f"Error searching Shodan by organization {org_name}: {e}")

         return []

     def get_host_services(self, ip: str) -> List[Dict[str, Any]]:
         """
         Get service information for a specific IP address.

         Args:
             ip: IP address to query

         Returns:
             List of service information dictionaries
         """
         if not _is_valid_ip(ip) or not self.is_available():
             return []

         try:
             url = f"{self.base_url}/shodan/host/{ip}"
             params = {'key': self.api_key}

             response = self.make_request(url, method="GET", params=params, target_indicator=ip)

             if response and response.status_code == 200:
                 data = response.json()
                 return data.get('data', [])  # Service banners

         except Exception as e:
             self.logger.logger.error(f"Error getting Shodan services for IP {ip}: {e}")

         return []
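Shodan may return the ASN either as a string like `'AS15169'` or as a bare number; the old `f"AS{asn}"` would produce names like `'ASAS15169'` in the first case. The normalization the diff adds, shown standalone:

```python
# Standalone copy of the ASN normalization logic, for illustration.
def normalize_asn(asn):
    # Returns (node_name, numeric_string) with exactly one "AS" prefix.
    if isinstance(asn, str) and asn.startswith('AS'):
        return asn, asn[2:]
    return f"AS{asn}", str(asn)

print(normalize_asn('AS15169'))  # ('AS15169', '15169')
print(normalize_asn(15169))      # ('AS15169', '15169')
```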
@@ -1,333 +0,0 @@
|
|||||||
"""
|
|
||||||
VirusTotal provider for DNSRecon.
|
|
||||||
Discovers domain relationships through passive DNS and URL analysis.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import json
|
|
||||||
from typing import List, Dict, Any, Tuple
|
|
||||||
from .base_provider import BaseProvider
|
|
||||||
from utils.helpers import _is_valid_ip, _is_valid_domain
|
|
||||||
from core.graph_manager import RelationshipType
|
|
||||||
|
|
||||||
|
|
||||||
class VirusTotalProvider(BaseProvider):
|
|
||||||
"""
|
|
||||||
Provider for querying VirusTotal API for passive DNS and domain reputation data.
|
|
||||||
Now uses session-specific API keys and rate limits.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, session_config=None):
|
|
||||||
"""Initialize VirusTotal provider with session-specific configuration."""
|
|
||||||
super().__init__(
|
|
||||||
name="virustotal",
|
|
||||||
rate_limit=4, # Free tier: 4 requests per minute
|
|
||||||
timeout=30,
|
|
||||||
session_config=session_config
|
|
||||||
)
|
|
||||||
self.base_url = "https://www.virustotal.com/vtapi/v2"
|
|
||||||
self.api_key = self.config.get_api_key('virustotal')
|
|
||||||
|
|
||||||
def is_available(self) -> bool:
|
|
||||||
"""Check if VirusTotal provider is available (has valid API key in this session)."""
|
|
||||||
return self.api_key is not None and len(self.api_key.strip()) > 0
|
|
||||||
|
|
||||||
def get_name(self) -> str:
|
|
||||||
"""Return the provider name."""
|
|
||||||
return "virustotal"
|
|
||||||
|
|
||||||
def query_domain(self, domain: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
|
|
||||||
"""
|
|
||||||
Query VirusTotal for domain information including passive DNS.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
domain: Domain to investigate
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
List of relationships discovered from VirusTotal data
|
|
||||||
"""
|
|
||||||
if not _is_valid_domain(domain) or not self.is_available():
|
|
||||||
return []
|
|
||||||
|
|
||||||
relationships = []
|
|
||||||
|
|
||||||
# Query domain report
|
|
||||||
domain_relationships = self._query_domain_report(domain)
|
|
||||||
relationships.extend(domain_relationships)
|
|
||||||
|
|
||||||
# Query passive DNS for the domain
|
|
||||||
passive_dns_relationships = self._query_passive_dns_domain(domain)
|
|
||||||
relationships.extend(passive_dns_relationships)
|
|
||||||
|
|
||||||
return relationships
|
|
||||||
|
|
||||||
def query_ip(self, ip: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
|
|
||||||
"""
|
|
||||||
Query VirusTotal for IP address information including passive DNS.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
ip: IP address to investigate
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
List of relationships discovered from VirusTotal IP data
|
|
||||||
"""
|
|
||||||
if not _is_valid_ip(ip) or not self.is_available():
|
|
||||||
return []
|
|
||||||
|
|
||||||
relationships = []
|
|
||||||
|
|
||||||
# Query IP report
|
|
||||||
ip_relationships = self._query_ip_report(ip)
|
|
||||||
relationships.extend(ip_relationships)
|
|
||||||
|
|
||||||
# Query passive DNS for the IP
|
|
||||||
passive_dns_relationships = self._query_passive_dns_ip(ip)
|
|
||||||
relationships.extend(passive_dns_relationships)
|
|
||||||
|
|
||||||
return relationships
|
|
||||||
|
|
||||||
def _query_domain_report(self, domain: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
|
|
||||||
"""Query VirusTotal domain report."""
|
|
||||||
relationships = []
|
|
||||||
|
|
||||||
try:
|
|
||||||
url = f"{self.base_url}/domain/report"
|
|
||||||
params = {
|
|
||||||
'apikey': self.api_key,
|
|
||||||
'domain': domain,
|
|
||||||
'allinfo': 1 # Get comprehensive information
|
|
||||||
}
|
|
||||||
|
|
||||||
response = self.make_request(url, method="GET", params=params, target_indicator=domain)
|
|
||||||
|
|
||||||
if not response or response.status_code != 200:
|
|
||||||
return []
|
|
||||||
|
|
||||||
data = response.json()
|
|
||||||
|
|
||||||
if data.get('response_code') != 1:
|
|
||||||
return []
|
|
||||||
|
|
||||||
# Extract resolved IPs
|
|
||||||
resolutions = data.get('resolutions', [])
|
|
||||||
for resolution in resolutions:
|
|
||||||
ip_address = resolution.get('ip_address')
|
|
||||||
last_resolved = resolution.get('last_resolved')
|
|
||||||
|
|
||||||
if ip_address and _is_valid_ip(ip_address):
|
|
||||||
raw_data = {
|
|
||||||
'domain': domain,
|
|
||||||
'ip_address': ip_address,
|
|
||||||
'last_resolved': last_resolved,
|
|
||||||
'source': 'virustotal_domain_report'
|
|
||||||
}
|
|
||||||
|
|
||||||
relationships.append((
|
|
||||||
domain,
|
|
||||||
ip_address,
|
|
||||||
RelationshipType.PASSIVE_DNS,
|
|
||||||
RelationshipType.PASSIVE_DNS.default_confidence,
|
|
||||||
raw_data
|
|
||||||
))
|
|
||||||
|
|
||||||
self.log_relationship_discovery(
|
|
||||||
source_node=domain,
|
|
||||||
target_node=ip_address,
|
|
||||||
relationship_type=RelationshipType.PASSIVE_DNS,
|
|
||||||
confidence_score=RelationshipType.PASSIVE_DNS.default_confidence,
|
|
||||||
raw_data=raw_data,
|
|
||||||
discovery_method="virustotal_domain_resolution"
|
|
||||||
)
|
|
||||||
|
|
||||||
# Extract subdomains
|
|
||||||
subdomains = data.get('subdomains', [])
|
|
||||||
for subdomain in subdomains:
|
|
||||||
if subdomain != domain and _is_valid_domain(subdomain):
|
|
||||||
raw_data = {
|
|
||||||
'parent_domain': domain,
|
|
||||||
'subdomain': subdomain,
|
|
||||||
'source': 'virustotal_subdomain_discovery'
|
|
||||||
}
|
|
||||||
|
|
||||||
relationships.append((
|
|
||||||
domain,
|
|
||||||
subdomain,
|
|
||||||
RelationshipType.PASSIVE_DNS,
|
|
||||||
0.7, # Medium-high confidence for subdomains
|
|
||||||
raw_data
|
|
||||||
))
|
|
||||||
|
|
||||||
self.log_relationship_discovery(
|
|
||||||
source_node=domain,
|
|
||||||
target_node=subdomain,
|
|
||||||
relationship_type=RelationshipType.PASSIVE_DNS,
|
|
||||||
confidence_score=0.7,
|
|
||||||
raw_data=raw_data,
|
|
||||||
discovery_method="virustotal_subdomain_discovery"
|
|
                    )

        except json.JSONDecodeError as e:
            self.logger.logger.error(f"Failed to parse JSON response from VirusTotal: {e}")
        except Exception as e:
            self.logger.logger.error(f"Error querying VirusTotal domain report for {domain}: {e}")

        return relationships

    def _query_ip_report(self, ip: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
        """Query VirusTotal IP report."""
        relationships = []

        try:
            url = f"{self.base_url}/ip-address/report"
            params = {
                'apikey': self.api_key,
                'ip': ip
            }

            response = self.make_request(url, method="GET", params=params, target_indicator=ip)

            if not response or response.status_code != 200:
                return []

            data = response.json()

            if data.get('response_code') != 1:
                return []

            # Extract resolved domains
            resolutions = data.get('resolutions', [])
            for resolution in resolutions:
                hostname = resolution.get('hostname')
                last_resolved = resolution.get('last_resolved')

                if hostname and _is_valid_domain(hostname):
                    raw_data = {
                        'ip_address': ip,
                        'hostname': hostname,
                        'last_resolved': last_resolved,
                        'source': 'virustotal_ip_report'
                    }

                    relationships.append((
                        ip,
                        hostname,
                        RelationshipType.PASSIVE_DNS,
                        RelationshipType.PASSIVE_DNS.default_confidence,
                        raw_data
                    ))

                    self.log_relationship_discovery(
                        source_node=ip,
                        target_node=hostname,
                        relationship_type=RelationshipType.PASSIVE_DNS,
                        confidence_score=RelationshipType.PASSIVE_DNS.default_confidence,
                        raw_data=raw_data,
                        discovery_method="virustotal_ip_resolution"
                    )

        except json.JSONDecodeError as e:
            self.logger.logger.error(f"Failed to parse JSON response from VirusTotal: {e}")
        except Exception as e:
            self.logger.logger.error(f"Error querying VirusTotal IP report for {ip}: {e}")

        return relationships

    def _query_passive_dns_domain(self, domain: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
        """Query VirusTotal passive DNS for domain."""
        # Note: VirusTotal's passive DNS API might require a premium subscription.
        # This is a placeholder for the endpoint structure.
        return []

    def _query_passive_dns_ip(self, ip: str) -> List[Tuple[str, str, RelationshipType, float, Dict[str, Any]]]:
        """Query VirusTotal passive DNS for IP."""
        # Note: VirusTotal's passive DNS API might require a premium subscription.
        # This is a placeholder for the endpoint structure.
        return []

    def get_domain_reputation(self, domain: str) -> Dict[str, Any]:
        """
        Get domain reputation information from VirusTotal.

        Args:
            domain: Domain to check reputation for

        Returns:
            Dictionary containing reputation data
        """
        if not _is_valid_domain(domain) or not self.is_available():
            return {}

        try:
            url = f"{self.base_url}/domain/report"
            params = {
                'apikey': self.api_key,
                'domain': domain
            }

            response = self.make_request(url, method="GET", params=params, target_indicator=domain)

            if response and response.status_code == 200:
                data = response.json()

                if data.get('response_code') == 1:
                    return {
                        'positives': data.get('positives', 0),
                        'total': data.get('total', 0),
                        'scan_date': data.get('scan_date', ''),
                        'permalink': data.get('permalink', ''),
                        'reputation_score': self._calculate_reputation_score(data)
                    }

        except Exception as e:
            self.logger.logger.error(f"Error getting VirusTotal reputation for domain {domain}: {e}")

        return {}

    def get_ip_reputation(self, ip: str) -> Dict[str, Any]:
        """
        Get IP reputation information from VirusTotal.

        Args:
            ip: IP address to check reputation for

        Returns:
            Dictionary containing reputation data
        """
        if not _is_valid_ip(ip) or not self.is_available():
            return {}

        try:
            url = f"{self.base_url}/ip-address/report"
            params = {
                'apikey': self.api_key,
                'ip': ip
            }

            response = self.make_request(url, method="GET", params=params, target_indicator=ip)

            if response and response.status_code == 200:
                data = response.json()

                if data.get('response_code') == 1:
                    return {
                        'positives': data.get('positives', 0),
                        'total': data.get('total', 0),
                        'scan_date': data.get('scan_date', ''),
                        'permalink': data.get('permalink', ''),
                        'reputation_score': self._calculate_reputation_score(data)
                    }

        except Exception as e:
            self.logger.logger.error(f"Error getting VirusTotal reputation for IP {ip}: {e}")

        return {}
    def _calculate_reputation_score(self, data: Dict[str, Any]) -> float:
        """Calculate a normalized reputation score (0.0 to 1.0)."""
        positives = data.get('positives', 0)
        total = data.get('total', 1)  # Avoid division by zero

        if total == 0:
            return 1.0  # No scan data available; treat as clean

        # Score is the inverse of the detection ratio (lower detection = higher reputation)
        return max(0.0, 1.0 - (positives / total))
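The scoring rule reduces to `1 - positives/total`, clamped at zero. A standalone sanity check of that arithmetic with made-up detection counts (the provider class and its request plumbing are not needed for this):

# Standalone check of the reputation formula above; the numbers are illustrative.
def reputation_score(positives: int, total: int) -> float:
    if total == 0:
        return 1.0  # no scan data; treat as clean
    return max(0.0, 1.0 - (positives / total))

print(reputation_score(3, 60))   # 0.95 -> barely flagged
print(reputation_score(45, 60))  # 0.25 -> widely flagged
print(reputation_score(0, 0))    # 1.0  -> no data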
requirements.txt:

@@ -4,4 +4,8 @@ requests>=2.31.0
 python-dateutil>=2.8.2
 Werkzeug>=2.3.7
 urllib3>=2.0.0
 dnspython>=2.4.2
+gunicorn
+redis
+python-dotenv
+psycopg2-binary
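The four new dependencies line up with the deployment and persistence settings introduced on this branch: gunicorn for serving, redis for scanner session state, python-dotenv for the `.env` file, and psycopg2-binary for PostgreSQL. A minimal, hypothetical sketch of how python-dotenv and redis are typically wired together — key names here are illustrative, not the project's actual code:

# Hypothetical wiring for the new dependencies; key names are illustrative.
import os

import redis
from dotenv import load_dotenv

load_dotenv()  # reads .env values such as SESSION_TIMEOUT_MINUTES

client = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)
ttl_seconds = int(os.getenv("SESSION_TIMEOUT_MINUTES", "60")) * 60
client.setex("dnsrecon:session:example", ttl_seconds, "{}")  # idle scanner data expires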
dnsrecon/static/css/main.css:

@@ -272,8 +272,24 @@ input[type="text"]:focus, select:focus {
     text-shadow: 0 0 3px rgba(0, 255, 65, 0.3);
 }

+.progress-container {
+    padding: 0 1.5rem 1.5rem;
+}
+
+.progress-info {
+    display: flex;
+    justify-content: space-between;
+    font-size: 0.8rem;
+    color: #999;
+    margin-bottom: 0.5rem;
+}
+
+#progress-compact {
+    color: #00ff41;
+    font-weight: 500;
+}
+
 .progress-bar {
-    margin: 1rem 1.5rem;
     height: 8px;
     background-color: #1a1a1a;
     border: 1px solid #444;
@@ -314,9 +330,39 @@ input[type="text"]:focus, select:focus {

 .view-controls {
     display: flex;
+    gap: 1.5rem;
+    align-items: center;
+}
+
+.filter-group {
+    display: flex;
+    align-items: center;
     gap: 0.5rem;
 }

+.filter-group label {
+    font-size: 0.9rem;
+    color: #999;
+}
+
+.filter-group select,
+.filter-group input[type="range"] {
+    background-color: #1a1a1a;
+    border: 1px solid #555;
+    color: #c7c7c7;
+    padding: 0.25rem 0.5rem;
+}
+
+.filter-group select {
+    max-width: 150px;
+}
+
+#confidence-value {
+    min-width: 30px;
+    text-align: center;
+    color: #00ff41;
+}
+
 .graph-container {
     height: 800px;
     position: relative;
@@ -487,7 +533,7 @@ input[type="text"]:focus, select:focus {
     color: #e0e0e0;
 }

-.provider-stats {
+.provider-stats, .provider-task-stats {
     font-size: 0.8rem;
     color: #999;
     display: grid;
@@ -496,6 +542,13 @@ input[type="text"]:focus, select:focus {
     margin-top: 0.5rem;
 }

+.provider-task-stats {
+    border-top: 1px solid #333;
+    padding-top: 0.5rem;
+    margin-top: 0.5rem;
+}
+
 .provider-stat {
     display: flex;
     justify-content: space-between;
@@ -551,30 +604,6 @@ input[type="text"]:focus, select:focus {
     color: #555;
 }

-/* Modal */
-.modal {
-    display: none;
-    position: fixed;
-    z-index: 1000;
-    left: 0;
-    top: 0;
-    width: 100%;
-    height: 100%;
-    background-color: rgba(0, 0, 0, 0.8);
-    animation: fadeIn 0.3s ease-out;
-}
-
-.modal-content {
-    background-color: #2a2a2a;
-    border: 1px solid #444;
-    margin: 5% auto;
-    width: 80%;
-    max-width: 600px;
-    max-height: 80vh;
-    overflow-y: auto;
-    animation: slideInDown 0.3s ease-out;
-}
-
 @keyframes slideInDown {
     from {
         opacity: 0;
@@ -586,43 +615,6 @@ input[type="text"]:focus, select:focus {
     }
 }

-.modal-header {
-    background-color: #1a1a1a;
-    padding: 1rem;
-    border-bottom: 1px solid #444;
-    display: flex;
-    justify-content: space-between;
-    align-items: center;
-}
-
-.modal-header h3 {
-    color: #00ff41;
-    font-size: 1.1rem;
-}
-
-.modal-close {
-    background: transparent;
-    border: none;
-    color: #c7c7c7;
-    font-size: 1.2rem;
-    cursor: pointer;
-    font-family: 'Roboto Mono', monospace;
-}
-
-.modal-close:hover {
-    color: #ff9900;
-}
-
-.modal-body {
-    padding: 1.5rem;
-}
-
-.modal-description {
-    color: #999;
-    margin-bottom: 1.5rem;
-    line-height: 1.6;
-}
-
 .detail-row {
     display: flex;
     justify-content: space-between;
@@ -771,12 +763,6 @@ input[type="text"]:focus, select:focus {
     color: #00ff41 !important;
 }

-/* Animations */
-@keyframes fadeIn {
-    from { opacity: 0; transform: translateY(10px); }
-    to { opacity: 1; transform: translateY(0); }
-}
-
 .fade-in {
     animation: fadeIn 0.3s ease-out;
 }
@@ -905,4 +891,179 @@ input[type="text"]:focus, select:focus {
         transform: translateX(100%);
         opacity: 0;
     }
+}
+
+/* dnsrecon/static/css/main.css */
+
+.large-entity-nodes-list {
+    margin-top: 1rem;
+}
+
+.large-entity-node-details {
+    margin-bottom: 0.5rem;
+    border: 1px solid #333;
+    border-radius: 3px;
+}
+
+.large-entity-node-details summary {
+    padding: 0.5rem;
+    background-color: #3a3a3a;
+    cursor: pointer;
+    outline: none;
+}
+
+.large-entity-node-details summary:hover {
+    background-color: #4a4a4a;
+}
+
+.large-entity-node-details .detail-row {
+    margin-left: 1rem;
+    margin-right: 1rem;
+}
+
+.large-entity-node-details .detail-section-header {
+    margin-left: 1rem;
+    margin-right: 1rem;
+}
+
+/* --- Styles for the modal --- */
+
+.modal {
+    display: none;           /* Hidden by default */
+    position: fixed;         /* Stay in place */
+    z-index: 1000;           /* Sit on top */
+    left: 0;
+    top: 0;
+    width: 100%;
+    height: 100%;
+    overflow: auto;          /* Enable scroll if needed */
+    background-color: rgba(0,0,0,0.6); /* Black w/ opacity */
+    backdrop-filter: blur(5px);
+}
+
+.modal-content {
+    background-color: #1e1e1e;
+    margin: 10% auto;
+    padding: 20px;
+    border: 1px solid #444;
+    width: 60%;
+    max-width: 800px;
+    border-radius: 5px;
+    box-shadow: 0 5px 15px rgba(0,0,0,0.5);
+    animation: fadeIn 0.3s;
+}
+
+.modal-header {
+    display: flex;
+    justify-content: space-between;
+    align-items: center;
+    border-bottom: 1px solid #444;
+    padding-bottom: 10px;
+    margin-bottom: 20px;
+}
+
+.modal-header h3 {
+    margin: 0;
+    font-family: 'Special Elite', monospace;
+    color: #00ff41;
+}
+
+.modal-close {
+    background: none;
+    border: none;
+    color: #c7c7c7;
+    font-size: 24px;
+    cursor: pointer;
+    padding: 0 10px;
+}
+
+.modal-close:hover {
+    color: #ff6b6b;
+}
+
+.modal-body {
+    max-height: 60vh;
+    overflow-y: auto;
+}
+
+/* Styles for the new data model display */
+.modal-details-grid {
+    display: grid;
+    grid-template-columns: 1fr;
+    gap: 20px;
+}
+
+.modal-section h4 {
+    font-family: 'Special Elite', monospace;
+    color: #ff9900;
+    border-bottom: 1px dashed #555;
+    padding-bottom: 5px;
+    margin-top: 0;
+}
+
+.modal-section ul {
+    list-style-type: none;
+    padding-left: 15px;
+}
+
+.modal-section li {
+    margin-bottom: 8px;
+}
+
+.modal-section li > ul {
+    padding-left: 20px;
+    margin-top: 5px;
+}
+
+.description-text, .no-data {
+    color: #aaa;
+    font-style: italic;
+}
+
+.correlation-values-list {
+    margin-top: 1rem;
+}
+
+.correlation-value-details {
+    margin-bottom: 0.5rem;
+    border: 1px solid #333;
+    border-radius: 3px;
+}
+
+.correlation-value-details summary {
+    padding: 0.5rem;
+    background-color: #3a3a3a;
+    cursor: pointer;
+    outline: none;
+    color: #c7c7c7;
+}
+
+.correlation-value-details summary:hover {
+    background-color: #4a4a4a;
+}
+
+.correlation-value-details .detail-row {
+    margin-left: 1rem;
+    margin-right: 1rem;
+    padding: 0.5rem 0;
+}
+
+.correlation-value-details .detail-label {
+    color: #999;
+    font-weight: 500;
+}
+
+.correlation-value-details .detail-value {
+    color: #c7c7c7;
+    word-break: break-all;
+    font-family: 'Roboto Mono', monospace;
+    font-size: 0.9em;
+}
+
+@keyframes fadeIn {
+    from {opacity: 0; transform: scale(0.95);}
+    to {opacity: 1; transform: scale(1);}
 }
dnsrecon/static/js/graph.js:

@@ -1,6 +1,6 @@
 /**
  * Graph visualization module for DNSRecon
- * Handles network graph rendering using vis.js with enhanced Phase 2 features
+ * Handles network graph rendering using vis.js
  */

 class GraphManager {
@@ -13,7 +13,6 @@ class GraphManager {
         this.currentLayout = 'physics';
         this.nodeInfoPopup = null;

-        // Enhanced graph options for Phase 2
         this.options = {
             nodes: {
                 shape: 'dot',
@@ -28,13 +27,6 @@ class GraphManager {
                 },
                 borderWidth: 2,
                 borderColor: '#444',
-                shadow: {
-                    enabled: true,
-                    color: 'rgba(0, 0, 0, 0.5)',
-                    size: 5,
-                    x: 2,
-                    y: 2
-                },
                 scaling: {
                     min: 10,
                     max: 30,
@@ -48,9 +40,6 @@ class GraphManager {
                     node: (values, id, selected, hovering) => {
                         values.borderColor = '#00ff41';
                         values.borderWidth = 3;
-                        values.shadow = true;
-                        values.shadowColor = 'rgba(0, 255, 65, 0.6)';
-                        values.shadowSize = 10;
                     }
                 }
             },
@@ -82,19 +71,10 @@ class GraphManager {
                     type: 'dynamic',
                     roundness: 0.6
                 },
-                shadow: {
-                    enabled: true,
-                    color: 'rgba(0, 0, 0, 0.3)',
-                    size: 3,
-                    x: 1,
-                    y: 1
-                },
                 chosen: {
                     edge: (values, id, selected, hovering) => {
                         values.color = '#00ff41';
                         values.width = 4;
-                        values.shadow = true;
-                        values.shadowColor = 'rgba(0, 255, 65, 0.4)';
                     }
                 }
             },
@@ -150,7 +130,7 @@ class GraphManager {
     }

     /**
-     * Initialize the network graph with enhanced features
+     * Initialize the network graph
      */
     initialize() {
         if (this.isInitialized) {
@@ -176,7 +156,7 @@ class GraphManager {
             // Add graph controls
             this.addGraphControls();

-            console.log('Enhanced graph initialized successfully');
+            console.log('Graph initialized successfully');
         } catch (error) {
             console.error('Failed to initialize graph:', error);
             this.showError('Failed to initialize visualization');
@@ -191,44 +171,43 @@ class GraphManager {
         controlsContainer.className = 'graph-controls';
         controlsContainer.innerHTML = `
             <button class="graph-control-btn" id="graph-fit" title="Fit to Screen">[FIT]</button>
-            <button class="graph-control-btn" id="graph-reset" title="Reset View">[RESET]</button>
             <button class="graph-control-btn" id="graph-physics" title="Toggle Physics">[PHYSICS]</button>
             <button class="graph-control-btn" id="graph-cluster" title="Cluster Nodes">[CLUSTER]</button>
-            <button class="graph-control-btn" id="graph-clear" title="Clear Graph">[CLEAR]</button>
         `;

         this.container.appendChild(controlsContainer);

         // Add control event listeners
         document.getElementById('graph-fit').addEventListener('click', () => this.fitView());
-        document.getElementById('graph-reset').addEventListener('click', () => this.resetView());
         document.getElementById('graph-physics').addEventListener('click', () => this.togglePhysics());
         document.getElementById('graph-cluster').addEventListener('click', () => this.toggleClustering());
-        document.getElementById('graph-clear').addEventListener('click', () => this.clear());
     }

     /**
-     * Setup enhanced network event handlers
+     * Setup network event handlers
      */
     setupNetworkEvents() {
         if (!this.network) return;

-        // Node click event with enhanced details
+        // Node click event with details
         this.network.on('click', (params) => {
             if (params.nodes.length > 0) {
                 const nodeId = params.nodes[0];
                 if (this.network.isCluster(nodeId)) {
                     this.network.openCluster(nodeId);
                 } else {
-                    this.showNodeDetails(nodeId);
-                    this.highlightNodeConnections(nodeId);
+                    const node = this.nodes.get(nodeId);
+                    if (node) {
+                        this.showNodeDetails(node);
+                        this.highlightNodeConnections(nodeId);
+                    }
                 }
             } else {
                 this.clearHighlights();
             }
         });

-        // Enhanced hover events
+        // Hover events
         this.network.on('hoverNode', (params) => {
             const nodeId = params.node;
             const node = this.nodes.get(nodeId);
@@ -237,25 +216,8 @@ class GraphManager {
             }
         });

-        this.network.on('blurNode', (params) => {
-            this.hideNodeInfoPopup();
-            this.clearHoverHighlights();
-        });
-
-        // Double-click to focus on node
-        this.network.on('doubleClick', (params) => {
-            if (params.nodes.length > 0) {
-                const nodeId = params.nodes[0];
-                this.focusOnNode(nodeId);
-            }
-        });
-
-        // Context menu (right-click)
         this.network.on('oncontext', (params) => {
             params.event.preventDefault();
-            if (params.nodes.length > 0) {
-                this.showNodeContextMenu(params.pointer.DOM, params.nodes[0]);
-            }
         });

         // Stabilization events with progress
@@ -276,7 +238,6 @@ class GraphManager {
     }

     /**
-     * Update graph with new data and enhanced processing
      * @param {Object} graphData - Graph data from backend
      */
     updateGraph(graphData) {
@@ -291,9 +252,52 @@ class GraphManager {
             this.initialize();
         }

-        // Process nodes with enhanced attributes
-        const processedNodes = graphData.nodes.map(node => this.processNode(node));
-        const processedEdges = graphData.edges.map(edge => this.processEdge(edge));
+        const largeEntityMap = new Map();
+        graphData.nodes.forEach(node => {
+            if (node.type === 'large_entity' && node.attributes && Array.isArray(node.attributes.nodes)) {
+                node.attributes.nodes.forEach(nodeId => {
+                    largeEntityMap.set(nodeId, node.id);
+                });
+            }
+        });
+
+        const processedNodes = graphData.nodes.map(node => {
+            const processed = this.processNode(node);
+            if (largeEntityMap.has(node.id)) {
+                processed.hidden = true;
+            }
+            return processed;
+        });
+
+        const mergedEdges = {};
+        graphData.edges.forEach(edge => {
+            const fromNode = largeEntityMap.has(edge.from) ? largeEntityMap.get(edge.from) : edge.from;
+            const toNode = largeEntityMap.has(edge.to) ? largeEntityMap.get(edge.to) : edge.to;
+            const mergeKey = `${fromNode}-${toNode}-${edge.label}`;
+
+            if (!mergedEdges[mergeKey]) {
+                mergedEdges[mergeKey] = {
+                    ...edge,
+                    from: fromNode,
+                    to: toNode,
+                    count: 0,
+                    confidence_score: 0
+                };
+            }
+
+            mergedEdges[mergeKey].count++;
+            if (edge.confidence_score > mergedEdges[mergeKey].confidence_score) {
+                mergedEdges[mergeKey].confidence_score = edge.confidence_score;
+            }
+        });
+
+        const processedEdges = Object.values(mergedEdges).map(edge => {
+            const processed = this.processEdge(edge);
+            if (edge.count > 1) {
+                processed.label = `${edge.label} (${edge.count})`;
+            }
+            return processed;
+        });

         // Update datasets with animation
         const existingNodeIds = this.nodes.getIds();
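The hunk above does two things: nodes that belong to a large-entity group are hidden and their edges are re-pointed at the container node, and parallel edges are then deduplicated on a `from-to-label` key, keeping a count and the highest confidence seen. The same reduction, restated as a small Python sketch with made-up edges (illustrative only, not the project's code):

# Python restatement of the edge-merging step above (illustrative data).
def merge_edges(edges, large_entity_map):
    merged = {}
    for e in edges:
        src = large_entity_map.get(e["from"], e["from"])  # re-point grouped nodes
        dst = large_entity_map.get(e["to"], e["to"])
        key = (src, dst, e["label"])
        m = merged.setdefault(key, {**e, "from": src, "to": dst,
                                     "count": 0, "confidence_score": 0})
        m["count"] += 1
        m["confidence_score"] = max(m["confidence_score"], e["confidence_score"])
    return list(merged.values())

edges = [
    {"from": "a.example.com", "to": "big1", "label": "dns_a", "confidence_score": 0.6},
    {"from": "a.example.com", "to": "big2", "label": "dns_a", "confidence_score": 0.9},
]
# Both edges collapse onto large_entity_1 with count=2 and confidence 0.9.
print(merge_edges(edges, {"big1": "large_entity_1", "big2": "large_entity_1"}))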
@@ -317,15 +321,15 @@ class GraphManager {
             setTimeout(() => this.fitView(), 800);
         }

-        console.log(`Enhanced graph updated: ${processedNodes.length} nodes, ${processedEdges.length} edges (${newNodes.length} new nodes, ${newEdges.length} new edges)`);
+        console.log(`Graph updated: ${processedNodes.length} nodes, ${processedEdges.length} edges (${newNodes.length} new nodes, ${newEdges.length} new edges)`);
     } catch (error) {
-        console.error('Failed to update enhanced graph:', error);
+        console.error('Failed to update graph:', error);
         this.showError('Failed to update visualization');
     }
 }

 /**
- * Process node data with enhanced styling and metadata
+ * Process node data with styling and metadata
  * @param {Object} node - Raw node data
  * @returns {Object} Processed node data
  */
@@ -337,8 +341,12 @@ class GraphManager {
     size: this.getNodeSize(node.type),
     borderColor: this.getNodeBorderColor(node.type),
     shape: this.getNodeShape(node.type),
+    attributes: node.attributes || {},
+    description: node.description || '',
     metadata: node.metadata || {},
-    type: node.type
+    type: node.type,
+    incoming_edges: node.incoming_edges || [],
+    outgoing_edges: node.outgoing_edges || []
 };

 // Add confidence-based styling
@@ -346,25 +354,30 @@ class GraphManager {
     processedNode.borderWidth = Math.max(2, Math.floor(node.confidence * 5));
 }

-// Add special styling for important nodes
-if (this.isImportantNode(node)) {
-    processedNode.shadow = {
-        enabled: true,
-        color: 'rgba(0, 255, 65, 0.6)',
-        size: 10,
-        x: 2,
-        y: 2
-    };
-}
-
 // Style based on certificate validity
 if (node.type === 'domain') {
-    if (node.metadata && node.metadata.certificate_data && node.metadata.certificate_data.has_valid_cert === true) {
-        processedNode.color = '#00ff41'; // Bright green for valid cert
-        processedNode.borderColor = '#00aa2e';
-    } else if (node.metadata && node.metadata.certificate_data && node.metadata.certificate_data.has_valid_cert === false) {
-        processedNode.color = '#888888'; // Muted grey color
-        processedNode.borderColor = '#666666'; // Darker grey border
+    if (node.attributes && node.attributes.certificates && node.attributes.certificates.has_valid_cert === false) {
+        processedNode.color = { background: '#888888', border: '#666666' };
     }
 }
+
+// Handle merged correlation objects (similar to large entities)
+if (node.type === 'correlation_object') {
+    const metadata = node.metadata || {};
+    const values = metadata.values || [];
+    const mergeCount = metadata.merge_count || 1;
+
+    if (mergeCount > 1) {
+        // Display as merged correlation container
+        processedNode.label = `Correlations (${mergeCount})`;
+        processedNode.title = `Merged correlation container with ${mergeCount} values: ${values.slice(0, 3).join(', ')}${values.length > 3 ? '...' : ''}`;
+        processedNode.borderWidth = 3; // Thicker border for merged nodes
+    } else {
+        // Single correlation value
+        const value = Array.isArray(values) && values.length > 0 ? values[0] : (metadata.value || 'Unknown');
+        const displayValue = typeof value === 'string' && value.length > 20 ? value.substring(0, 17) + '...' : value;
+        processedNode.label = `${displayValue}`;
+        processedNode.title = `Correlation: ${value}`;
+    }
+}
@@ -372,7 +385,7 @@ class GraphManager {
 }

 /**
- * Process edge data with enhanced styling and metadata
+ * Process edge data with styling and metadata
  * @param {Object} edge - Raw edge data
  * @returns {Object} Processed edge data
  */
@@ -395,16 +408,7 @@ class GraphManager {
     }
 };

-// Add animation for high-confidence edges
-if (confidence >= 0.8) {
-    processedEdge.shadow = {
-        enabled: true,
-        color: 'rgba(0, 255, 65, 0.3)',
-        size: 5,
-        x: 1,
-        y: 1
-    };
-}
-
 return processedEdge;
 }
@@ -416,7 +420,7 @@ class GraphManager {
  * @returns {string} Formatted label
  */
 formatNodeLabel(nodeId, nodeType) {
-    // Truncate long domain names
+    if (typeof nodeId !== 'string') return '';
     if (nodeId.length > 20) {
         return nodeId.substring(0, 17) + '...';
     }
@@ -447,7 +451,7 @@ class GraphManager {
     'ip': '#ff9900',           // Amber
     'asn': '#00aaff',          // Blue
     'large_entity': '#ff6b6b', // Red for large entities
-    'dns_record': '#999999'
+    'correlation_object': '#9620c0ff'
 };
 return colors[nodeType] || '#ffffff';
 }
@@ -463,7 +467,7 @@ class GraphManager {
     'domain': '#00aa2e',
     'ip': '#cc7700',
     'asn': '#0088cc',
-    'dns_record': '#999999'
+    'correlation_object': '#c235c9ff'
 };
 return borderColors[nodeType] || '#666666';
 }
@@ -478,13 +482,14 @@ class GraphManager {
     'domain': 12,
     'ip': 14,
     'asn': 16,
-    'dns_record': 8
+    'correlation_object': 8,
+    'large_entity': 5
 };
 return sizes[nodeType] || 12;
 }

 /**
- * Get enhanced node shape based on type
+ * Get node shape based on type
  * @param {string} nodeType - Node type
  * @returns {string} Shape name
  */
@@ -493,7 +498,8 @@ class GraphManager {
     'domain': 'dot',
     'ip': 'square',
     'asn': 'triangle',
-    'dns_record': 'hexagon'
+    'correlation_object': 'hexagon',
+    'large_entity': 'database'
 };
 return shapes[nodeType] || 'dot';
 }
@@ -566,15 +572,12 @@ class GraphManager {

 /**
  * Show node details in modal
- * @param {string} nodeId - Node identifier
+ * @param {Object} node - Node object
  */
-showNodeDetails(nodeId) {
-    const node = this.nodes.get(nodeId);
-    if (!node) return;
-
+showNodeDetails(node) {
     // Trigger custom event for main application to handle
     const event = new CustomEvent('nodeSelected', {
-        detail: { nodeId, node }
+        detail: { node }
     });
     document.dispatchEvent(event);
 }
@@ -720,14 +723,7 @@ class GraphManager {
 const nodeHighlights = newNodes.map(node => ({
     id: node.id,
     borderColor: '#00ff41',
-    borderWidth: 4,
-    shadow: {
-        enabled: true,
-        color: 'rgba(0, 255, 65, 0.8)',
-        size: 15,
-        x: 2,
-        y: 2
-    }
+    borderWidth: 4
 }));

 // Briefly highlight new edges
@@ -746,7 +742,6 @@ class GraphManager {
     id: node.id,
     borderColor: this.getNodeBorderColor(node.type),
     borderWidth: 2,
-    shadow: node.shadow || { enabled: false }
 }));

 const edgeResets = newEdges.map(edge => ({
@@ -845,22 +840,6 @@ class GraphManager {
     }
 }

-/**
- * Reset the view to initial state
- */
-resetView() {
-    if (this.network) {
-        this.network.moveTo({
-            position: { x: 0, y: 0 },
-            scale: 1,
-            animation: {
-                duration: 1000,
-                easingFunction: 'easeInOutQuad'
-            }
-        });
-    }
-}
-
 /**
  * Clear the graph
  */
@@ -900,17 +879,45 @@ class GraphManager {
 }

 /**
- * Export graph as image (if needed for future implementation)
- * @param {string} format - Image format ('png', 'jpeg')
- * @returns {string} Data URL of the image
+ * Apply filters to the graph
+ * @param {string} nodeType - The type of node to show ('all' for no filter)
+ * @param {number} minConfidence - The minimum confidence score for edges to be visible
  */
-exportAsImage(format = 'png') {
-    if (!this.network) return null;
-
-    // This would require additional vis.js functionality
-    // Placeholder for future implementation
-    console.log('Image export not yet implemented');
-    return null;
+applyFilters(nodeType, minConfidence) {
+    console.log(`Applying filters: nodeType=${nodeType}, minConfidence=${minConfidence}`);
+
+    const nodeUpdates = [];
+    const edgeUpdates = [];
+
+    const allNodes = this.nodes.get({ returnType: 'Object' });
+    const allEdges = this.edges.get();
+
+    // Determine which nodes are visible based on the nodeType filter
+    for (const nodeId in allNodes) {
+        const node = allNodes[nodeId];
+        const isVisible = (nodeType === 'all' || node.type === nodeType);
+        nodeUpdates.push({ id: nodeId, hidden: !isVisible });
+    }
+
+    // Update nodes first to determine edge visibility
+    this.nodes.update(nodeUpdates);
+
+    // Determine which edges are visible based on confidence and connected nodes
+    for (const edge of allEdges) {
+        const sourceNode = this.nodes.get(edge.from);
+        const targetNode = this.nodes.get(edge.to);
+        const confidence = edge.metadata ? edge.metadata.confidence_score : 0;
+
+        const isVisible = confidence >= minConfidence &&
+            sourceNode && !sourceNode.hidden &&
+            targetNode && !targetNode.hidden;
+
+        edgeUpdates.push({ id: edge.id, hidden: !isVisible });
+    }
+
+    this.edges.update(edgeUpdates);
+
+    console.log('Filters applied.');
 }
 }
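`applyFilters()` works in two passes: nodes are hidden purely by type, then an edge survives only if it clears the confidence threshold and both endpoints are still visible. A compact Python sketch of the same rule, over hypothetical dict-based structures rather than vis.js DataSets:

# Two-pass visibility rule from applyFilters(), in Python (illustrative only).
def apply_filters(nodes, edges, node_type, min_confidence):
    # Pass 1: hide nodes that do not match the selected type.
    for n in nodes.values():
        n["hidden"] = not (node_type == "all" or n["type"] == node_type)
    # Pass 2: an edge stays visible only if it clears the confidence
    # threshold and both endpoints remain visible.
    for e in edges:
        src, dst = nodes.get(e["from"]), nodes.get(e["to"])
        visible = (e.get("confidence", 0) >= min_confidence
                   and src is not None and not src["hidden"]
                   and dst is not None and not dst["hidden"])
        e["hidden"] = not visible

nodes = {"a.example.com": {"type": "domain", "hidden": False},
         "1.2.3.4": {"type": "ip", "hidden": False}}
edges = [{"from": "a.example.com", "to": "1.2.3.4", "confidence": 0.4}]
apply_filters(nodes, edges, "all", 0.6)
print(edges[0]["hidden"])  # True: confidence 0.4 is below the 0.6 threshold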
dnsrecon/static/js/main.js:

@@ -1,7 +1,6 @@
 /**
  * Main application logic for DNSRecon web interface
  * Handles UI interactions, API communication, and data flow
- * DEBUG VERSION WITH EXTRA LOGGING
  */

 class DNSReconApp {
@@ -12,10 +11,8 @@ class DNSReconApp {
         this.pollInterval = null;
         this.currentSessionId = null;

-        // UI Elements
         this.elements = {};

-        // Application state
         this.isScanning = false;
         this.lastGraphUpdate = null;

@@ -54,6 +51,7 @@ class DNSReconApp {
             targetDomain: document.getElementById('target-domain'),
             maxDepth: document.getElementById('max-depth'),
             startScan: document.getElementById('start-scan'),
+            addToGraph: document.getElementById('add-to-graph'),
             stopScan: document.getElementById('stop-scan'),
             exportResults: document.getElementById('export-results'),
             configureApiKeys: document.getElementById('configure-api-keys'),
@@ -62,9 +60,8 @@ class DNSReconApp {
             scanStatus: document.getElementById('scan-status'),
             targetDisplay: document.getElementById('target-display'),
             depthDisplay: document.getElementById('depth-display'),
-            progressDisplay: document.getElementById('progress-display'),
-            indicatorsDisplay: document.getElementById('indicators-display'),
             relationshipsDisplay: document.getElementById('relationships-display'),
+            progressCompact: document.getElementById('progress-compact'),
             progressFill: document.getElementById('progress-fill'),

             // Provider elements
@@ -79,14 +76,18 @@ class DNSReconApp {
             // API Key Modal elements
             apiKeyModal: document.getElementById('api-key-modal'),
             apiKeyModalClose: document.getElementById('api-key-modal-close'),
-            virustotalApiKey: document.getElementById('virustotal-api-key'),
-            shodanApiKey: document.getElementById('shodan-api-key'),
+            apiKeyInputs: document.getElementById('api-key-inputs'),
             saveApiKeys: document.getElementById('save-api-keys'),
             resetApiKeys: document.getElementById('reset-api-keys'),

             // Other elements
             sessionId: document.getElementById('session-id'),
-            connectionStatus: document.getElementById('connection-status')
+            connectionStatus: document.getElementById('connection-status'),
+
+            // Filter elements
+            nodeTypeFilter: document.getElementById('node-type-filter'),
+            confidenceFilter: document.getElementById('confidence-filter'),
+            confidenceValue: document.getElementById('confidence-value')
         };

         // Verify critical elements exist
@@ -136,6 +137,11 @@ class DNSReconApp {
             e.preventDefault();
             this.startScan();
         });

+        this.elements.addToGraph.addEventListener('click', (e) => {
+            e.preventDefault();
+            this.startScan(false);
+        });
+
         this.elements.stopScan.addEventListener('click', (e) => {
             console.log('Stop scan button clicked');
@@ -185,9 +191,9 @@ class DNSReconApp {
             this.elements.resetApiKeys.addEventListener('click', () => this.resetApiKeys());
         }

-        // Custom events
+        // ** FIX: Listen for the custom event from the graph **
         document.addEventListener('nodeSelected', (e) => {
-            this.showNodeModal(e.detail.nodeId, e.detail.node);
+            this.showNodeModal(e.detail.node);
         });

         // Keyboard shortcuts
@@ -205,6 +211,13 @@ class DNSReconApp {
             }
         });

+        // Filter events
+        this.elements.nodeTypeFilter.addEventListener('change', () => this.applyFilters());
+        this.elements.confidenceFilter.addEventListener('input', () => {
+            this.elements.confidenceValue.textContent = this.elements.confidenceFilter.value;
+            this.applyFilters();
+        });
+
         console.log('Event handlers set up successfully');

     } catch (error) {
@@ -228,9 +241,9 @@ class DNSReconApp {
     }

     /**
-     * Start a reconnaissance scan
+     * Start scan with error handling
      */
-    async startScan() {
+    async startScan(clearGraph = true) {
         console.log('=== STARTING SCAN ===');

         try {
@@ -262,7 +275,8 @@ class DNSReconApp {

             const requestData = {
                 target_domain: targetDomain,
-                max_depth: maxDepth
+                max_depth: maxDepth,
+                clear_graph: clearGraph
             };

             console.log('Request data:', requestData);
@@ -273,15 +287,17 @@ class DNSReconApp {

             if (response.success) {
                 this.currentSessionId = response.scan_id;
-                console.log('Starting polling with session ID:', this.currentSessionId);
-                this.startPolling();
                 this.showSuccess('Reconnaissance scan started successfully');

-                // Clear previous graph
-                this.graphManager.clear();
+                if (clearGraph) {
+                    this.graphManager.clear();
+                }

                 console.log(`Scan started for ${targetDomain} with depth ${maxDepth}`);

+                // Start polling immediately with faster interval for responsiveness
+                this.startPolling(1000);
+
                 // Force an immediate status update
                 console.log('Forcing immediate status update...');
                 setTimeout(() => {
@@ -299,18 +315,43 @@ class DNSReconApp {
             this.setUIState('idle');
         }
     }

     /**
-     * Stop the current scan
+     * Scan stop with immediate UI feedback
      */
     async stopScan() {
         try {
             console.log('Stopping scan...');

+            // Immediately disable stop button and show stopping state
+            if (this.elements.stopScan) {
+                this.elements.stopScan.disabled = true;
+                this.elements.stopScan.innerHTML = '<span class="btn-icon">[STOPPING]</span><span>Stopping...</span>';
+            }
+
+            // Show immediate feedback
+            this.showInfo('Stopping scan...');
+
             const response = await this.apiCall('/api/scan/stop', 'POST');

             if (response.success) {
                 this.showSuccess('Scan stop requested');
-                console.log('Scan stop requested');
+                console.log('Scan stop requested successfully');
+
+                // Force immediate status update
+                setTimeout(() => {
+                    this.updateStatus();
+                }, 100);
+
+                // Continue polling for a bit to catch the status change
+                this.startPolling(500); // Fast polling to catch status change
+
+                // Stop fast polling after 10 seconds
+                setTimeout(() => {
+                    if (this.scanStatus === 'stopped' || this.scanStatus === 'idle') {
+                        this.stopPolling();
+                    }
+                }, 10000);
+
             } else {
                 throw new Error(response.error || 'Failed to stop scan');
             }
@@ -318,6 +359,12 @@ class DNSReconApp {
         } catch (error) {
             console.error('Failed to stop scan:', error);
             this.showError(`Failed to stop scan: ${error.message}`);
+
+            // Re-enable stop button on error
+            if (this.elements.stopScan) {
+                this.elements.stopScan.disabled = false;
+                this.elements.stopScan.innerHTML = '<span class="btn-icon">[STOP]</span><span>Terminate Scan</span>';
+            }
         }
     }

@@ -346,9 +393,9 @@ class DNSReconApp {
     }

     /**
-     * Start polling for scan updates
+     * Start polling for scan updates with configurable interval
      */
-    startPolling() {
+    startPolling(interval = 2000) {
         console.log('=== STARTING POLLING ===');

         if (this.pollInterval) {
@@ -361,9 +408,9 @@ class DNSReconApp {
             this.updateStatus();
             this.updateGraph();
             this.loadProviders();
-        }, 1000); // Poll every 1 second for debugging
+        }, interval);

-        console.log('Polling started with 1 second interval');
+        console.log(`Polling started with ${interval}ms interval`);
     }

     /**
@@ -378,7 +425,7 @@ class DNSReconApp {
     }

     /**
-     * Update scan status from server
+     * Status update with better error handling
      */
     async updateStatus() {
         try {
@@ -387,7 +434,7 @@ class DNSReconApp {

             console.log('Status response:', response);

-            if (response.success) {
+            if (response.success && response.status) {
                 const status = response.status;
                 console.log('Current scan status:', status.status);
                 console.log('Current progress:', status.progress_percentage + '%');
@@ -398,12 +445,13 @@ class DNSReconApp {
                 // Handle status changes
                 if (status.status !== this.scanStatus) {
                     console.log(`*** STATUS CHANGED: ${this.scanStatus} -> ${status.status} ***`);
-                    this.handleStatusChange(status.status);
+                    this.handleStatusChange(status.status, status.task_queue_size);
                 }

                 this.scanStatus = status.status;
             } else {
                 console.error('Status update failed:', response);
+                // Don't show error for status updates to avoid spam
             }

         } catch (error) {
@@ -492,17 +540,19 @@ class DNSReconApp {
             if (this.elements.depthDisplay) {
                 this.elements.depthDisplay.textContent = `${status.current_depth}/${status.max_depth}`;
             }
-            if (this.elements.progressDisplay) {
-                this.elements.progressDisplay.textContent = `${status.progress_percentage.toFixed(1)}%`;
-            }
-            if (this.elements.indicatorsDisplay) {
-                this.elements.indicatorsDisplay.textContent = status.indicators_processed || 0;
-            }

-            // Update progress bar with smooth animation
+            // Update progress bar and compact display
             if (this.elements.progressFill) {
-                this.elements.progressFill.style.width = `${status.progress_percentage}%`;
+                const completed = status.indicators_completed || 0;
+                const enqueued = status.task_queue_size || 0;
+                const totalTasks = completed + enqueued;
+                const progressPercentage = totalTasks > 0 ? (completed / totalTasks) * 100 : 0;
+
+                this.elements.progressFill.style.width = `${progressPercentage}%`;
+                if (this.elements.progressCompact) {
+                    this.elements.progressCompact.textContent = `${completed}/${totalTasks} - ${Math.round(progressPercentage)}%`;
+                }

                 // Add pulsing animation for active scans
                 if (status.status === 'running') {
                     this.elements.progressFill.parentElement.classList.add('scanning');
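The compact readout above derives progress from two counters rather than a server-side percentage: completed tasks over completed-plus-queued. The same arithmetic as a short Python sketch (illustrative numbers):

# Progress arithmetic used above: completed / (completed + still queued).
def progress(completed: int, queue_size: int) -> str:
    total = completed + queue_size
    pct = (completed / total) * 100 if total else 0
    return f"{completed}/{total} - {round(pct)}%"

print(progress(42, 18))  # "42/60 - 70%"
print(progress(0, 0))    # "0/0 - 0%"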
@@ -524,6 +574,8 @@ class DNSReconApp {
|
|||||||
this.elements.sessionId.textContent = 'Session: Loading...';
|
this.elements.sessionId.textContent = 'Session: Loading...';
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
this.setUIState(status.status, status.task_queue_size);
|
||||||
|
|
||||||
console.log('Status display updated successfully');
|
console.log('Status display updated successfully');
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@@ -532,23 +584,23 @@ class DNSReconApp {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Handle status changes
|
* Handle status changes with improved state synchronization
|
||||||
* @param {string} newStatus - New scan status
|
* @param {string} newStatus - New scan status
|
||||||
*/
|
*/
|
||||||
handleStatusChange(newStatus) {
|
handleStatusChange(newStatus, task_queue_size) {
|
||||||
console.log(`=== STATUS CHANGE: ${this.scanStatus} -> ${newStatus} ===`);
|
console.log(`=== STATUS CHANGE: ${this.scanStatus} -> ${newStatus} ===`);
|
||||||
|
|
||||||
switch (newStatus) {
|
switch (newStatus) {
|
||||||
case 'running':
|
case 'running':
|
||||||
this.setUIState('scanning');
|
this.setUIState('scanning', task_queue_size);
|
||||||
this.showSuccess('Scan is running');
|
this.showSuccess('Scan is running');
|
||||||
// Reset polling frequency for active scans
|
// Increase polling frequency for active scans
|
||||||
this.pollFrequency = 2000;
|
this.startPolling(1000); // Poll every 1 second for running scans
|
||||||
this.updateConnectionStatus('active');
|
this.updateConnectionStatus('active');
|
||||||
break;
|
break;
|
||||||
|
|
||||||
case 'completed':
|
case 'completed':
|
||||||
this.setUIState('completed');
|
this.setUIState('completed', task_queue_size);
|
||||||
this.stopPolling();
|
this.stopPolling();
|
||||||
this.showSuccess('Scan completed successfully');
|
this.showSuccess('Scan completed successfully');
|
||||||
this.updateConnectionStatus('completed');
|
this.updateConnectionStatus('completed');
|
||||||
@@ -559,7 +611,7 @@ class DNSReconApp {
|
|||||||
break;
|
break;
|
||||||
|
|
||||||
case 'failed':
|
case 'failed':
|
||||||
this.setUIState('failed');
|
this.setUIState('failed', task_queue_size);
|
||||||
this.stopPolling();
|
this.stopPolling();
|
||||||
this.showError('Scan failed');
|
this.showError('Scan failed');
|
||||||
this.updateConnectionStatus('error');
|
this.updateConnectionStatus('error');
|
||||||
@@ -567,7 +619,7 @@ class DNSReconApp {
|
|||||||
break;
|
break;
|
||||||
|
|
||||||
case 'stopped':
|
case 'stopped':
|
||||||
this.setUIState('stopped');
|
this.setUIState('stopped', task_queue_size);
|
||||||
this.stopPolling();
|
this.stopPolling();
|
||||||
this.showSuccess('Scan stopped');
|
this.showSuccess('Scan stopped');
|
||||||
this.updateConnectionStatus('stopped');
|
this.updateConnectionStatus('stopped');
|
||||||
@@ -575,13 +627,17 @@ class DNSReconApp {
|
|||||||
break;
|
break;
|
||||||
|
|
||||||
case 'idle':
|
case 'idle':
|
||||||
this.setUIState('idle');
|
this.setUIState('idle', task_queue_size);
|
||||||
this.stopPolling();
|
this.stopPolling();
|
||||||
this.updateConnectionStatus('idle');
|
this.updateConnectionStatus('idle');
|
||||||
break;
|
break;
|
||||||
|
|
||||||
|
default:
|
||||||
|
console.warn(`Unknown status: ${newStatus}`);
|
||||||
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Update connection status indicator
|
* Update connection status indicator
|
||||||
* @param {string} status - Connection status
|
* @param {string} status - Connection status
|
||||||
@@ -614,22 +670,29 @@ class DNSReconApp {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Set UI state based on scan status
|
* UI state management with immediate button updates
|
||||||
* @param {string} state - UI state
|
|
||||||
*/
|
*/
|
||||||
setUIState(state) {
|
setUIState(state, task_queue_size) {
|
||||||
console.log(`Setting UI state to: ${state}`);
|
console.log(`Setting UI state to: ${state}`);
|
||||||
|
|
||||||
|
const isQueueEmpty = task_queue_size === 0;
|
||||||
|
|
||||||
switch (state) {
|
switch (state) {
|
||||||
case 'scanning':
|
case 'scanning':
|
||||||
this.isScanning = true;
|
this.isScanning = true;
|
||||||
if (this.elements.startScan) {
|
if (this.elements.startScan) {
|
||||||
this.elements.startScan.disabled = true;
|
this.elements.startScan.disabled = true;
|
||||||
this.elements.startScan.classList.add('loading');
|
this.elements.startScan.classList.add('loading');
|
||||||
|
this.elements.startScan.innerHTML = '<span class="btn-icon">[SCANNING]</span><span>Scanning...</span>';
|
||||||
|
}
|
||||||
|
if (this.elements.addToGraph) {
|
||||||
|
this.elements.addToGraph.disabled = true;
|
||||||
|
this.elements.addToGraph.classList.add('loading');
|
||||||
}
|
}
|
||||||
if (this.elements.stopScan) {
|
if (this.elements.stopScan) {
|
||||||
this.elements.stopScan.disabled = false;
|
this.elements.stopScan.disabled = false;
|
||||||
this.elements.stopScan.classList.remove('loading');
|
this.elements.stopScan.classList.remove('loading');
|
||||||
|
this.elements.stopScan.innerHTML = '<span class="btn-icon">[STOP]</span><span>Terminate Scan</span>';
|
||||||
}
|
}
|
||||||
if (this.elements.targetDomain) this.elements.targetDomain.disabled = true;
|
if (this.elements.targetDomain) this.elements.targetDomain.disabled = true;
|
||||||
if (this.elements.maxDepth) this.elements.maxDepth.disabled = true;
|
if (this.elements.maxDepth) this.elements.maxDepth.disabled = true;
|
||||||
@@ -642,11 +705,17 @@ class DNSReconApp {
             case 'stopped':
                 this.isScanning = false;
                 if (this.elements.startScan) {
-                    this.elements.startScan.disabled = false;
+                    this.elements.startScan.disabled = !isQueueEmpty;
                     this.elements.startScan.classList.remove('loading');
+                    this.elements.startScan.innerHTML = '<span class="btn-icon">[RUN]</span><span>Start Reconnaissance</span>';
+                }
+                if (this.elements.addToGraph) {
+                    this.elements.addToGraph.disabled = !isQueueEmpty;
+                    this.elements.addToGraph.classList.remove('loading');
                 }
                 if (this.elements.stopScan) {
                     this.elements.stopScan.disabled = true;
+                    this.elements.stopScan.innerHTML = '<span class="btn-icon">[STOP]</span><span>Terminate Scan</span>';
                 }
                 if (this.elements.targetDomain) this.elements.targetDomain.disabled = false;
                 if (this.elements.maxDepth) this.elements.maxDepth.disabled = false;
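These `setUIState` hunks make the controls queue-aware: the handler now receives `task_queue_size`, and the start/add buttons only re-enable once the queue has drained. A minimal sketch of how a status poller might feed that value in — the endpoint and response field names here are assumptions for illustration, not taken from this diff:

```javascript
// Hypothetical polling step; '/api/scan/status' and the response shape
// (data.status, data.task_queue_size) are assumed, not confirmed by the diff.
async function pollScanStatus(app) {
    const response = await fetch('/api/scan/status');
    const data = await response.json();
    // Mirrors the contract introduced above: state string plus queue size.
    app.setUIState(data.status, data.task_queue_size ?? 0);
}
```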
@@ -665,6 +734,7 @@ class DNSReconApp {

         if (response.success) {
             this.updateProviderDisplay(response.providers);
+            this.buildApiKeyModal(response.providers);
             console.log('Providers loaded successfully');
         }

@@ -699,7 +769,7 @@ class DNSReconApp {

         providerItem.innerHTML = `
             <div class="provider-header">
-                <div class="provider-name">${name.toUpperCase()}</div>
+                <div class="provider-name">${info.display_name}</div>
                 <div class="provider-status ${statusClass}">${statusText}</div>
             </div>
             <div class="provider-stats">
@@ -725,110 +795,180 @@ class DNSReconApp {
             this.elements.providerList.appendChild(providerItem);
         }
     }

+    /**
+     * Generates the HTML for the node details view using the new data model.
+     * @param {Object} node - The node object.
+     * @returns {string} The HTML string for the node details.
+     */
+    generateNodeDetailsHtml(node) {
+        if (!node) return '<div class="detail-row"><span class="detail-value">Details not available.</span></div>';
+
+        let detailsHtml = '<div class="modal-details-grid">';
+
+        // Handle merged correlation objects similar to large entities
+        if (node.type === 'correlation_object') {
+            const metadata = node.metadata || {};
+            const values = metadata.values || [];
+            const mergeCount = metadata.merge_count || 1;
+
+            detailsHtml += '<div class="modal-section">';
+            detailsHtml += '<h4>Correlation Details</h4>';
+
+            if (mergeCount > 1) {
+                detailsHtml += `<p><strong>Merged Correlations:</strong> ${mergeCount} values</p>`;
+                detailsHtml += '<div class="correlation-values-list">';
+
+                values.forEach((value, index) => {
+                    detailsHtml += `<details class="correlation-value-details">`;
+                    detailsHtml += `<summary>Value ${index + 1}: ${typeof value === 'string' && value.length > 50 ? value.substring(0, 47) + '...' : value}</summary>`;
+                    detailsHtml += `<div class="detail-row"><span class="detail-label">Full Value:</span><span class="detail-value">${value}</span></div>`;
+                    detailsHtml += `</details>`;
+                });
+
+                detailsHtml += '</div>';
+            } else {
+                const singleValue = values.length > 0 ? values[0] : (metadata.value || 'Unknown');
+                detailsHtml += `<div class="detail-row"><span class="detail-label">Correlation Value:</span><span class="detail-value">${singleValue}</span></div>`;
+            }
+
+            // Show correlated nodes
+            const correlatedNodes = metadata.correlated_nodes || [];
+            if (correlatedNodes.length > 0) {
+                detailsHtml += `<div class="detail-row"><span class="detail-label">Correlated Nodes:</span><span class="detail-value">${correlatedNodes.length}</span></div>`;
+                detailsHtml += '<ul>';
+                correlatedNodes.forEach(nodeId => {
+                    detailsHtml += `<li><a href="#" class="node-link" data-node-id="${nodeId}">${nodeId}</a></li>`;
+                });
+                detailsHtml += '</ul>';
+            }
+
+            detailsHtml += '</div>';
+        }
+
+        // Continue with standard node details for all node types
+        // Section for Incoming Edges (Source Nodes)
+        if (node.incoming_edges && node.incoming_edges.length > 0) {
+            detailsHtml += '<div class="modal-section">';
+            detailsHtml += '<h4>Source Nodes (Incoming)</h4>';
+            detailsHtml += '<ul>';
+            node.incoming_edges.forEach(edge => {
+                detailsHtml += `<li><a href="#" class="node-link" data-node-id="${edge.from}">${edge.from}</a> (${edge.data.relationship_type})</li>`;
+            });
+            detailsHtml += '</ul></div>';
+        }
+
+        // Section for Outgoing Edges (Destination Nodes)
+        if (node.outgoing_edges && node.outgoing_edges.length > 0) {
+            detailsHtml += '<div class="modal-section">';
+            detailsHtml += '<h4>Destination Nodes (Outgoing)</h4>';
+            detailsHtml += '<ul>';
+            node.outgoing_edges.forEach(edge => {
+                detailsHtml += `<li><a href="#" class="node-link" data-node-id="${edge.to}">${edge.to}</a> (${edge.data.relationship_type})</li>`;
+            });
+            detailsHtml += '</ul></div>';
+        }
+
+        // Section for Attributes (skip for correlation objects - already handled above)
+        if (node.type !== 'correlation_object') {
+            detailsHtml += '<div class="modal-section">';
+            detailsHtml += '<h4>Attributes</h4>';
+            detailsHtml += this.formatObjectToHtml(node.attributes);
+            detailsHtml += '</div>';
+        }
+
+        // Section for Description
+        detailsHtml += '<div class="modal-section">';
+        detailsHtml += '<h4>Description</h4>';
+        detailsHtml += `<p class="description-text">${node.description || 'No description available.'}</p>`;
+        detailsHtml += '</div>';
+
+        // Section for Metadata (skip detailed metadata for correlation objects - already handled above)
+        if (node.type !== 'correlation_object') {
+            detailsHtml += '<div class="modal-section">';
+            detailsHtml += '<h4>Metadata</h4>';
+            detailsHtml += this.formatObjectToHtml(node.metadata);
+            detailsHtml += '</div>';
+        }
+
+        detailsHtml += '</div>';
+        return detailsHtml;
+    }
+
+    /**
+     * Recursively formats a JavaScript object into an HTML unordered list with collapsible sections.
+     * @param {Object} obj - The object to format.
+     * @returns {string} - An HTML string representing the object.
+     */
+    formatObjectToHtml(obj) {
+        if (!obj || Object.keys(obj).length === 0) {
+            return '<p class="no-data">No data available.</p>';
+        }
+
+        let html = '<ul>';
+        for (const key in obj) {
+            if (Object.hasOwnProperty.call(obj, key)) {
+                const value = obj[key];
+                const formattedKey = key.replace(/_/g, ' ').replace(/\b\w/g, l => l.toUpperCase());
+
+                if (typeof value === 'object' && value !== null) {
+                    html += `<li><details><summary><strong>${formattedKey}</strong></summary>`;
+                    html += this.formatObjectToHtml(value);
+                    html += `</details></li>`;
+                } else {
+                    html += `<li><strong>${formattedKey}:</strong> ${this.formatValue(value)}</li>`;
+                }
+            }
+        }
+        html += '</ul>';
+        return html;
+    }
+
     /**
      * Show node details modal
-     * @param {string} nodeId - Node identifier
      * @param {Object} node - Node data
      */
-    showNodeModal(nodeId, node) {
-        if (!this.elements.nodeModal) return;
+    showNodeModal(node) {
+        if (!this.elements.nodeModal || !node) return;

         if (this.elements.modalTitle) {
-            this.elements.modalTitle.textContent = `Node Details: ${nodeId}`;
+            this.elements.modalTitle.textContent = `${this.formatStatus(node.type)} Node: ${node.id}`;
         }

         let detailsHtml = '';
-        const createDetailRow = (label, value, statusIcon = '') => {
-            const baseId = `detail-${label.replace(/[^a-zA-Z0-9]/g, '-')}`;
-
-            if (value === null || value === undefined ||
-                (Array.isArray(value) && value.length === 0) ||
-                (typeof value === 'object' && Object.keys(value).length === 0)) {
-                return `
-                    <div class="detail-row">
-                        <span class="detail-label">${label} <span class="status-icon text-warning">✗</span></span>
-                        <span class="detail-value">N/A</span>
-                    </div>
-                `;
-            }
-
-            if (Array.isArray(value)) {
-                return value.map((item, index) => {
-                    const itemId = `${baseId}-${index}`;
-                    const itemLabel = index === 0 ? `${label} <span class="status-icon text-success">✓</span>` : '';
-                    return `
-                        <div class="detail-row">
-                            <span class="detail-label">${itemLabel}</span>
-                            <span class="detail-value" id="${itemId}">${this.formatValue(item)}</span>
-                            <button class="copy-btn" onclick="copyToClipboard('${itemId}')" title="Copy">📋</button>
-                        </div>
-                    `;
-                }).join('');
-            } else {
-                const valueId = `${baseId}-0`;
-                const icon = statusIcon || '<span class="status-icon text-success">✓</span>';
-                return `
-                    <div class="detail-row">
-                        <span class="detail-label">${label} ${icon}</span>
-                        <span class="detail-value" id="${valueId}">${this.formatValue(value)}</span>
-                        <button class="copy-btn" onclick="copyToClipboard('${valueId}')" title="Copy">📋</button>
-                    </div>
-                `;
-            }
-        };
-
-        const metadata = node.metadata || {};
-
-        // General Node Info
-        detailsHtml += createDetailRow('Node Type', node.type);
-
-        // Display data based on node type
-        switch (node.type) {
-            case 'domain':
-                detailsHtml += createDetailRow('DNS Records', metadata.dns_records);
-                detailsHtml += createDetailRow('Related Domains (SAN)', metadata.related_domains_san);
-                detailsHtml += createDetailRow('Passive DNS', metadata.passive_dns);
-                detailsHtml += createDetailRow('Shodan Data', metadata.shodan);
-                detailsHtml += createDetailRow('VirusTotal Data', metadata.virustotal);
-                break;
-            case 'ip':
-                detailsHtml += createDetailRow('Hostnames', metadata.hostnames);
-                detailsHtml += createDetailRow('Passive DNS', metadata.passive_dns);
-                detailsHtml += createDetailRow('Shodan Data', metadata.shodan);
-                detailsHtml += createDetailRow('VirusTotal Data', metadata.virustotal);
-                break;
-        }
-
-        // Special handling for certificate data
-        if (metadata.certificate_data && Object.keys(metadata.certificate_data).length > 0) {
-            const cert = metadata.certificate_data;
-            detailsHtml += `<div class="detail-section-header">Certificate Summary</div>`;
-            detailsHtml += createDetailRow('Total Found', cert.total_certificates);
-            detailsHtml += createDetailRow('Currently Valid', cert.valid_certificates);
-            detailsHtml += createDetailRow('Expires Soon (<30d)', cert.expires_soon_count);
-            detailsHtml += createDetailRow('Unique Issuers', cert.unique_issuers ? cert.unique_issuers.join(', ') : 'N/A');
-
-            if (cert.latest_certificate) {
-                detailsHtml += `<div class="detail-section-header">Latest Certificate</div>`;
-                detailsHtml += createDetailRow('Common Name', cert.latest_certificate.common_name);
-                detailsHtml += createDetailRow('Issuer', cert.latest_certificate.issuer_name);
-                detailsHtml += createDetailRow('Valid From', new Date(cert.latest_certificate.not_before).toLocaleString());
-                detailsHtml += createDetailRow('Valid Until', new Date(cert.latest_certificate.not_after).toLocaleString());
-            }
-        }
-
-        // Special handling for ASN data
-        if (metadata.asn_data && Object.keys(metadata.asn_data).length > 0) {
-            detailsHtml += `<div class="detail-section-header">ASN Information</div>`;
-            detailsHtml += createDetailRow('ASN', metadata.asn_data.asn);
-            detailsHtml += createDetailRow('Organization', metadata.asn_data.description);
-            detailsHtml += createDetailRow('ISP', metadata.asn_data.isp);
-            detailsHtml += createDetailRow('Country', metadata.asn_data.country);
-        }
+        if (node.type === 'large_entity') {
+            const attributes = node.attributes || {};
+            const nodes = attributes.nodes || [];
+            const node_type = attributes.node_type || 'nodes';
+            detailsHtml += `<div class="detail-section-header">Contains ${attributes.count} ${node_type}s</div>`;
+            detailsHtml += '<div class="large-entity-nodes-list">';
+
+            for(const innerNodeId of nodes) {
+                const innerNode = this.graphManager.nodes.get(innerNodeId);
+                detailsHtml += `<details class="large-entity-node-details">`;
+                detailsHtml += `<summary>${innerNodeId}</summary>`;
+                detailsHtml += this.generateNodeDetailsHtml(innerNode);
+                detailsHtml += `</details>`;
+            }
+            detailsHtml += '</div>';
+        } else {
+            detailsHtml = this.generateNodeDetailsHtml(node);
+        }

         if (this.elements.modalDetails) {
             this.elements.modalDetails.innerHTML = detailsHtml;
+            this.elements.modalDetails.querySelectorAll('.node-link').forEach(link => {
+                link.addEventListener('click', (e) => {
+                    e.preventDefault();
+                    const nodeId = e.target.dataset.nodeId;
+                    const nextNode = this.graphManager.nodes.get(nodeId);
+                    if (nextNode) {
+                        this.hideModal();
+                        this.showNodeModal(nextNode);
+                    }
+                });
+            });
         }
         this.elements.nodeModal.style.display = 'block';
     }
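`generateNodeDetailsHtml` and the slimmed-down `showNodeModal` now expect the node object itself to carry everything the modal needs. The property accesses above imply roughly this shape — the concrete values below are made up for illustration:

```javascript
// Node shape implied by generateNodeDetailsHtml/showNodeModal; values are examples.
const exampleNode = {
    id: 'example.com',
    type: 'domain',            // also seen above: 'ip', 'correlation_object', 'large_entity'
    description: 'Seed domain for the current scan',
    attributes: { count: 120, node_type: 'domain', nodes: ['a.example.com'] }, // read for large_entity
    metadata: { values: ['ns1'], merge_count: 1, correlated_nodes: [] },       // read for correlation_object
    incoming_edges: [{ from: 'ns1.example.com', data: { relationship_type: 'ns_record' } }],
    outgoing_edges: [{ to: '93.184.216.34', data: { relationship_type: 'a_record' } }],
};
```

Note the recursive hand-off: a large entity renders each inner node by calling `generateNodeDetailsHtml` again, and `formatObjectToHtml` recurses over nested attribute and metadata objects.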
@@ -864,12 +1004,15 @@ class DNSReconApp {
      * Save API Keys
      */
     async saveApiKeys() {
-        const shodanKey = this.elements.shodanApiKey.value.trim();
-        const virustotalKey = this.elements.virustotalApiKey.value.trim();
+        const inputs = this.elements.apiKeyInputs.querySelectorAll('input');

         const keys = {};
-        if (shodanKey) keys.shodan = shodanKey;
-        if (virustotalKey) keys.virustotal = virustotalKey;
+        inputs.forEach(input => {
+            const provider = input.dataset.provider;
+            const value = input.value.trim();
+            if (provider && value) {
+                keys[provider] = value;
+            }
+        });

         if (Object.keys(keys).length === 0) {
             this.showWarning('No API keys were entered.');
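`saveApiKeys` now harvests whichever inputs `buildApiKeyModal` rendered, keyed by each input's `data-provider` attribute, so adding a provider needs no frontend changes. Assuming it posts to the same `/api/config/api-keys` endpoint that `clearApiKey` uses later in this diff, the request would look roughly like:

```javascript
// Illustrative request only; the endpoint is taken from clearApiKey below,
// and the keys object holds one entry per non-empty provider input.
const keys = { shodan: 'YOUR_SHODAN_KEY' };   // placeholder value
await fetch('/api/config/api-keys', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(keys),
});
```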
@@ -894,10 +1037,24 @@ class DNSReconApp {
      * Reset API Key fields
      */
     resetApiKeys() {
-        this.elements.shodanApiKey.value = '';
-        this.elements.virustotalApiKey.value = '';
+        const inputs = this.elements.apiKeyInputs.querySelectorAll('input');
+        inputs.forEach(input => {
+            input.value = '';
+        });
     }

+    /**
+     * Apply graph filters
+     */
+    applyFilters() {
+        if (this.graphManager) {
+            const nodeType = this.elements.nodeTypeFilter.value;
+            const minConfidence = parseFloat(this.elements.confidenceFilter.value);
+            this.graphManager.applyFilters(nodeType, minConfidence);
+        }
+    }
+
     /**
      * Check if graph data has changed
      * @param {Object} graphData - New graph data
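The new `applyFilters` reads the two controls added to the map header (see the HTML hunk further down) and hands the selection to the graph manager. A sketch of the listener wiring that would trigger it; the `this.elements.*` lookups are assumed to be cached during app initialization, which this diff does not show:

```javascript
// Hypothetical DNSReconApp method; the real listener registration
// is not part of this diff.
setupFilterListeners() {
    this.elements.nodeTypeFilter.addEventListener('change', () => this.applyFilters());
    this.elements.confidenceFilter.addEventListener('input', () => {
        // Keep the numeric readout beside the slider in sync, then re-filter.
        this.elements.confidenceValue.textContent = this.elements.confidenceFilter.value;
        this.applyFilters();
    });
}
```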
@@ -1180,6 +1337,74 @@ class DNSReconApp {
         };
         return colors[type] || colors.info;
     }

+    /**
+     * Build the API key modal dynamically
+     * @param {Object} providers - Provider information
+     */
+    buildApiKeyModal(providers) {
+        if (!this.elements.apiKeyInputs) return;
+        this.elements.apiKeyInputs.innerHTML = ''; // Clear existing inputs
+        let hasApiKeyProviders = false;
+
+        for (const [name, info] of Object.entries(providers)) {
+            if (info.requires_api_key) {
+                hasApiKeyProviders = true;
+                const inputGroup = document.createElement('div');
+                inputGroup.className = 'apikey-section';
+
+                if (info.enabled) {
+                    // If the API key is set and the provider is enabled
+                    inputGroup.innerHTML = `
+                        <label for="${name}-api-key">${info.display_name} API Key</label>
+                        <div class="api-key-set-message">
+                            <span class="api-key-set-text">API Key is set</span>
+                            <button class="clear-api-key-btn" data-provider="${name}">Clear</button>
+                        </div>
+                        <p class="apikey-help">Provides infrastructure context and service information.</p>
+                    `;
+                } else {
+                    // If the API key is not set
+                    inputGroup.innerHTML = `
+                        <label for="${name}-api-key">${info.display_name} API Key</label>
+                        <input type="password" id="${name}-api-key" data-provider="${name}" placeholder="Enter ${info.display_name} API Key">
+                        <p class="apikey-help">Provides infrastructure context and service information.</p>
+                    `;
+                }
+                this.elements.apiKeyInputs.appendChild(inputGroup);
+            }
+        }
+
+        // Add event listeners for the new clear buttons
+        this.elements.apiKeyInputs.querySelectorAll('.clear-api-key-btn').forEach(button => {
+            button.addEventListener('click', (e) => {
+                const provider = e.target.dataset.provider;
+                this.clearApiKey(provider);
+            });
+        });
+
+        if (!hasApiKeyProviders) {
+            this.elements.apiKeyInputs.innerHTML = '<p>No providers require API keys.</p>';
+        }
+    }
+
+    /**
+     * Clear an API key for a specific provider
+     * @param {string} provider The name of the provider to clear the API key for
+     */
+    async clearApiKey(provider) {
+        try {
+            const response = await this.apiCall('/api/config/api-keys', 'POST', { [provider]: '' });
+            if (response.success) {
+                this.showSuccess(`API key for ${provider} has been cleared.`);
+                this.loadProviders(); // This will rebuild the modal with the updated state
+            } else {
+                throw new Error(response.error || 'Failed to clear API key');
+            }
+        } catch (error) {
+            this.showError(`Error clearing API key: ${error.message}`);
+        }
+    }
 }

 // Add CSS animations for message toasts
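`buildApiKeyModal` renders one section per provider from the provider map that `loadProviders` fetched. Only three fields are actually read; a sketch of the implied shape, with illustrative entries rather than the tool's actual provider list:

```javascript
// Minimal provider map as consumed by buildApiKeyModal; entries are examples.
const providers = {
    shodan: {
        display_name: 'Shodan',     // shown as the section label
        requires_api_key: true,     // gates whether a section is rendered at all
        enabled: false,             // true => show "API Key is set" plus a Clear button
    },
    crtsh: {
        display_name: 'crt.sh',
        requires_api_key: false,    // skipped by the modal builder
        enabled: true,
    },
};
```

The HTML hunks that follow update the UI template to match: a new Add to Graph button, a compact progress readout, graph filter controls, and an empty `api-key-inputs` container that the modal builder fills at runtime.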
@@ -52,6 +52,10 @@
                     <span class="btn-icon">[RUN]</span>
                     <span>Start Reconnaissance</span>
                 </button>
+                <button id="add-to-graph" class="btn btn-primary">
+                    <span class="btn-icon">[ADD]</span>
+                    <span>Add to Graph</span>
+                </button>
                 <button id="stop-scan" class="btn btn-secondary" disabled>
                     <span class="btn-icon">[STOP]</span>
                     <span>Terminate Scan</span>
@@ -86,22 +90,20 @@
                         <span class="status-label">Depth:</span>
                         <span id="depth-display" class="status-value">0/0</span>
                     </div>
-                    <div class="status-row">
-                        <span class="status-label">Progress:</span>
-                        <span id="progress-display" class="status-value">0%</span>
-                    </div>
-                    <div class="status-row">
-                        <span class="status-label">Indicators:</span>
-                        <span id="indicators-display" class="status-value">0</span>
-                    </div>
                     <div class="status-row">
                         <span class="status-label">Relationships:</span>
                         <span id="relationships-display" class="status-value">0</span>
                     </div>
                 </div>

-                <div class="progress-bar">
-                    <div id="progress-fill" class="progress-fill"></div>
+                <div class="progress-container">
+                    <div class="progress-info">
+                        <span id="progress-label">Progress:</span>
+                        <span id="progress-compact">0/0</span>
+                    </div>
+                    <div class="progress-bar">
+                        <div id="progress-fill" class="progress-fill"></div>
+                    </div>
                 </div>
             </section>
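The scattered Progress and Indicators status rows give way to a single compact readout plus bar. A sketch of the update logic this markup implies; only the element ids come from the diff, the function name and the completed/total inputs are assumptions:

```javascript
// Hypothetical renderer for the new progress markup; ids match the diff,
// everything else is illustrative.
function renderProgress(completed, total) {
    const percent = total > 0 ? Math.round((completed / total) * 100) : 0;
    document.getElementById('progress-compact').textContent = `${completed}/${total}`;
    document.getElementById('progress-fill').style.width = `${percent}%`;
}
```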
@@ -109,8 +111,22 @@
                 <div class="panel-header">
                     <h2>Infrastructure Map</h2>
                     <div class="view-controls">
-                        <button id="reset-view" class="btn-icon-small" title="Reset View">[↻]</button>
-                        <button id="fit-view" class="btn-icon-small" title="Fit to Screen">[□]</button>
+                        <div class="filter-group">
+                            <label for="node-type-filter">Node Type:</label>
+                            <select id="node-type-filter">
+                                <option value="all">All</option>
+                                <option value="domain">Domain</option>
+                                <option value="ip">IP</option>
+                                <option value="asn">ASN</option>
+                                <option value="correlation_object">Correlation Object</option>
+                                <option value="large_entity">Large Entity</option>
+                            </select>
+                        </div>
+                        <div class="filter-group">
+                            <label for="confidence-filter">Min Confidence:</label>
+                            <input type="range" id="confidence-filter" min="0" max="1" step="0.1" value="0">
+                            <span id="confidence-value">0</span>
+                        </div>
                     </div>
                 </div>
@@ -135,11 +151,11 @@
                 </div>
                 <div class="legend-item">
                     <div class="legend-color" style="background-color: #c7c7c7;"></div>
-                    <span>Certificates</span>
+                    <span>Domain (invalid cert)</span>
                 </div>
                 <div class="legend-item">
                     <div class="legend-color" style="background-color: #9d4edd;"></div>
-                    <span>DNS Records</span>
+                    <span>Correlation Objects</span>
                 </div>
                 <div class="legend-item">
                     <div class="legend-edge high-confidence"></div>
@@ -168,7 +184,7 @@

             <footer class="footer">
                 <div class="footer-content">
-                    <span>DNSRecon v1.0 - Phase 1 Implementation</span>
+                    <span>v0.0.0rc</span>
                     <span class="footer-separator">|</span>
                     <span>Passive Infrastructure Reconnaissance</span>
                     <span class="footer-separator">|</span>
@@ -199,16 +215,8 @@
                 <p class="modal-description">
                     Enter your API keys for enhanced data providers. Keys are stored in memory for the current session only and are never saved to disk.
                 </p>
-                <div class="apikey-section">
-                    <label for="virustotal-api-key">VirusTotal API Key</label>
-                    <input type="password" id="virustotal-api-key" placeholder="Enter VirusTotal API Key">
-                    <p class="apikey-help">Enables passive DNS and domain reputation lookups.</p>
-                </div>
-                <div class="apikey-section">
-                    <label for="shodan-api-key">Shodan API Key</label>
-                    <input type="password" id="shodan-api-key" placeholder="Enter Shodan API Key">
-                    <p class="apikey-help">Provides infrastructure context and service information.</p>
-                </div>
+                <div id="api-key-inputs">
+                </div>
                 <div class="button-group" style="flex-direction: row; justify-content: flex-end;">
                     <button id="reset-api-keys" class="btn btn-secondary">
                         <span>Reset</span>