prod staging
This commit is contained in:
parent f445187025
commit 7e2473b521

469  README.md

@@ -2,272 +2,257 @@
DNSRecon is an interactive, passive reconnaissance tool designed to map adversary infrastructure. It operates on a "free-by-default" model, ensuring core functionality without subscriptions, while allowing power users to enhance its capabilities with paid API keys.

-**Current Status: Phase 1 Implementation**
+**Current Status: Phase 2 Implementation**
-- ✅ Core infrastructure and graph engine
-- ✅ Certificate transparency data provider (crt.sh)
-- ✅ Basic web interface with real-time visualization
-- ✅ Forensic logging system
-- ✅ JSON export functionality
+- ✅ Core infrastructure and graph engine
+- ✅ Multi-provider support (crt.sh, DNS, Shodan)
+- ✅ Session-based multi-user support
+- ✅ Real-time web interface with interactive visualization
+- ✅ Forensic logging system and JSON export
## Features
-### Core Capabilities
-- **Zero Contact Reconnaissance**: Passive data gathering without touching target infrastructure
-- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping
-- **Real-Time Visualization**: Interactive graph updates during scanning
-- **Forensic Logging**: Complete audit trail of all reconnaissance activities
-- **Confidence Scoring**: Weighted relationships based on data source reliability
-
-### Data Sources (Phase 1)
-- **Certificate Transparency (crt.sh)**: Discovers domain relationships through SSL certificate SAN analysis
-- **Basic DNS Resolution**: A/AAAA record lookups for IP relationships
-
-### Visualization
-- **Interactive Network Graph**: Powered by vis.js with cybersecurity theme
-- **Node Types**: Domains, IP addresses, certificates, ASNs
-- **Confidence-Based Styling**: Visual indicators for relationship strength
-- **Real-Time Updates**: Graph builds dynamically as relationships are discovered
+- **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
+- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
+- **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
+- **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
+- **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
+- **Session Management**: Supports concurrent user sessions with isolated scanner instances.
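The session support listed above boils down to one scanner object per user session. A minimal sketch of the idea (hypothetical names; the real wiring lives in `app.py`):

```python
import uuid
from typing import Dict, Optional, Tuple

class Scanner:
    """Stand-in for the real scanner class; each session gets its own instance."""

# Hypothetical registry: session_id -> Scanner
scanners: Dict[str, Scanner] = {}

def get_scanner(session_id: Optional[str] = None) -> Tuple[str, Scanner]:
    """Return the scanner bound to a session, creating both if needed."""
    session_id = session_id or str(uuid.uuid4())
    if session_id not in scanners:
        scanners[session_id] = Scanner()
    return session_id, scanners[session_id]
```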
## Installation

### Prerequisites
-- Python 3.8 or higher
-- Modern web browser with JavaScript enabled
-
-### Setup
-
-1. **Clone or create the project directory**:
-```bash
-mkdir dnsrecon
-cd dnsrecon
-```
-
-2. **Install Python dependencies**:
-```bash
-pip install -r requirements.txt
-```
-
-3. **Verify the directory structure**:
-```
-dnsrecon/
-├── app.py
-├── config.py
-├── requirements.txt
-├── core/
-│   ├── __init__.py
-│   ├── graph_manager.py
-│   ├── scanner.py
-│   └── logger.py
-├── providers/
-│   ├── __init__.py
-│   ├── base_provider.py
-│   └── crtsh_provider.py
-├── static/
-│   ├── css/
-│   │   └── main.css
-│   └── js/
-│       ├── graph.js
-│       └── main.js
-└── templates/
-    └── index.html
-```
+- Python 3.8 or higher
+- A modern web browser with JavaScript enabled
+- (Recommended) A Linux host for running the application and the optional DNS cache.
+
+### 1. Clone the Project
-## Usage
-
-### Starting the Application
-
-1. **Run the Flask application**:
-```bash
-python app.py
-```
-
-2. **Open your web browser** and navigate to:
-```
-http://127.0.0.1:5000
-```
-
-### Basic Reconnaissance Workflow
-
-1. **Enter Target Domain**: Input the domain you want to investigate (e.g., `example.com`)
-2. **Select Recursion Depth**:
-   - **Depth 1**: Direct relationships only
-   - **Depth 2**: Recommended for most investigations
-   - **Depth 3+**: Extended analysis for comprehensive mapping
-3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin passive data gathering
-4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered
-5. **Analyze Results**: Interact with the graph to explore relationships and click nodes for detailed information
-6. **Export Data**: Download complete results including graph data and forensic audit trail
-
-### Understanding the Visualization
-
-#### Node Types
-- 🟢 **Green Circles**: Domain names
-- 🟠 **Orange Squares**: IP addresses
-- ⚪ **Gray Diamonds**: SSL certificates
-- 🔵 **Blue Triangles**: ASN (Autonomous System) information
-
-#### Edge Confidence
-- **Thick Green Lines**: High confidence (≥80%) - Certificate SAN relationships
-- **Medium Orange Lines**: Medium confidence (60-79%) - DNS record relationships
-- **Thin Gray Lines**: Lower confidence (<60%) - Passive DNS or uncertain relationships
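The confidence bands above map directly onto edge styling; a small illustrative helper using those thresholds (names are hypothetical, not the project's actual code):

```python
def edge_style(confidence: float) -> dict:
    """Pick an edge style from a confidence score, per the bands described above."""
    if confidence >= 0.8:   # e.g. certificate SAN relationships
        return {"color": "green", "width": 3}
    if confidence >= 0.6:   # e.g. DNS record relationships
        return {"color": "orange", "width": 2}
    return {"color": "gray", "width": 1}  # passive DNS / uncertain relationships
```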
-### Example Investigation
-
-Let's investigate `github.com`:
-
-1. Enter `github.com` as the target domain
-2. Set recursion depth to 2
-3. Start the scan
-4. Observe relationships to other GitHub domains discovered through certificate analysis
-5. Export results for further analysis
-
-Expected discoveries might include:
-- `*.github.com` domains through certificate SANs
-- `github.io` and related domains
-- Associated IP addresses
-- Certificate authority relationships
-
-## Configuration
-
-### Environment Variables
-
-You can configure DNSRecon using environment variables:
```bash
-# API keys for future providers (Phase 2)
-export VIRUSTOTAL_API_KEY="your_api_key_here"
-export SHODAN_API_KEY="your_api_key_here"
-
-# Application settings
-export DEFAULT_RECURSION_DEPTH=2
-export FLASK_DEBUG=False
+git clone https://github.com/your-repo/dnsrecon.git
+cd dnsrecon
```
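At startup, the configuration variables shown above would typically be read once into the application config; a sketch of how `config.py` might pick them up (illustrative only, the actual attribute names and defaults may differ):

```python
import os

# Defaults mirror the README; actual handling in config.py may differ.
VIRUSTOTAL_API_KEY = os.environ.get("VIRUSTOTAL_API_KEY", "")
SHODAN_API_KEY = os.environ.get("SHODAN_API_KEY", "")
DEFAULT_RECURSION_DEPTH = int(os.environ.get("DEFAULT_RECURSION_DEPTH", "2"))
FLASK_DEBUG = os.environ.get("FLASK_DEBUG", "False").lower() == "true"
```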
-### Rate Limiting
-
-DNSRecon includes built-in rate limiting to be respectful to data sources:
-- **crt.sh**: 60 requests per minute
-- **DNS queries**: 100 requests per minute
+### 2. Install Python Dependencies
-## Data Export Format
+It is highly recommended to use a virtual environment:
+
+```bash
+python3 -m venv venv
+source venv/bin/activate
+pip install -r requirements.txt
+```
+### 3. (Optional but Recommended) Set up a Local DNS Caching Resolver
+
+Running a local DNS caching resolver can significantly speed up DNS queries and reduce your network footprint. Here’s how to set up `unbound` on a Debian-based Linux distribution (like Ubuntu).
+
+**a. Install Unbound:**
+
+```bash
+sudo apt update
+sudo apt install unbound -y
+```
+
+**b. Configure Unbound:**
+
+Create a new configuration file for DNSRecon:
+
+```bash
+sudo nano /etc/unbound/unbound.conf.d/dnsrecon.conf
+```
+
+Add the following content to the file:
+```
+server:
+    # Listen on localhost for all users
+    interface: 127.0.0.1
+    access-control: 0.0.0.0/0 refuse
+    access-control: 127.0.0.0/8 allow
+
+    # Enable prefetching of popular items
+    prefetch: yes
+```
+**c. Restart Unbound and set it as the default resolver:**
+
+```bash
+sudo systemctl restart unbound
+sudo systemctl enable unbound
+```
+
+To use this resolver for your system, you may need to update your network settings to point to `127.0.0.1` as your DNS server.
+
+**d. Update DNSProvider to use the local resolver:**
+
+In `dnsrecon/providers/dns_provider.py`, you can explicitly set the resolver's nameservers in the `__init__` method:
+```python
+# dnsrecon/providers/dns_provider.py
+
+class DNSProvider(BaseProvider):
+
+    def __init__(self, session_config=None):
+        """Initialize DNS provider with session-specific configuration."""
+        super().__init__(...)
+
+        # Configure DNS resolver
+        self.resolver = dns.resolver.Resolver()
+        self.resolver.nameservers = ['127.0.0.1']  # Use local caching resolver
+        self.resolver.timeout = 5
+        self.resolver.lifetime = 10
+```
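To confirm the local cache is actually answering, the same dnspython resolver can be exercised on its own; a quick standalone check (illustrative):

```python
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ['127.0.0.1']  # the local unbound cache configured above
resolver.timeout = 5
resolver.lifetime = 10

# Repeated lookups should return noticeably faster once the answer is cached.
answer = resolver.resolve('example.com', 'A')
for rdata in answer:
    print(rdata.address)
```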
+## Usage (Development)
+
+### 1. Start the Application
-Results are exported as JSON with the following structure:
-
-```json
-{
-  "scan_metadata": {
-    "target_domain": "example.com",
-    "max_depth": 2,
-    "final_status": "completed"
-  },
-  "graph_data": {
-    "nodes": [...],
-    "edges": [...]
-  },
-  "forensic_audit": {
-    "session_metadata": {...},
-    "api_requests": [...],
-    "relationships": [...]
-  },
-  "provider_statistics": {...}
-}
-```
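Since the export is plain JSON, post-processing is straightforward; a small sketch that reads an exported file and lists discovered nodes (field names as shown in the structure above, file name hypothetical):

```python
import json

# File name is hypothetical; use whatever the export download was saved as.
with open("dnsrecon_export.json") as f:
    export = json.load(f)

print("Target:", export["scan_metadata"]["target_domain"])
print("Status:", export["scan_metadata"]["final_status"])

for node in export["graph_data"]["nodes"]:
    # Node records come from the in-memory graph; exact keys beyond 'id' may vary.
    print(node.get("id"))
```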
-## Forensic Integrity
-
-DNSRecon maintains complete forensic integrity:
-
-- **API Request Logging**: Every external request is logged with timestamps, URLs, and responses
-- **Relationship Provenance**: Each discovered relationship includes source provider and discovery method
-- **Session Tracking**: Unique session IDs for investigation continuity
-- **Confidence Metadata**: Scoring rationale for all relationships
-- **Export Integrity**: Complete audit trail included in all exports
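In practice those guarantees mean each outbound request leaves a structured record; a rough illustration of what such an entry could contain (illustrative structure only, not the logger's actual schema):

```python
from datetime import datetime, timezone

# Illustrative structure only; the real ForensicLogger defines its own schema.
api_request_record = {
    "session_id": "d3b0c4e2-...",                      # investigation continuity
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "provider": "crtsh",
    "url": "https://crt.sh/?q=example.com&output=json",
    "status_code": 200,
    "discovery_method": "certificate_san_analysis",
    "confidence": 0.9,
}
```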
-## Architecture Overview
-
-### Core Components
-
-- **GraphManager**: NetworkX-based in-memory graph with confidence scoring
-- **Scanner**: Multi-provider orchestration with depth-limited BFS exploration
-- **ForensicLogger**: Thread-safe audit trail with structured logging
-- **BaseProvider**: Abstract interface for data source plugins
-
-### Data Flow
-
-1. User initiates scan via web interface
-2. Scanner coordinates multiple data providers
-3. Relationships discovered and added to in-memory graph
-4. Real-time updates sent to web interface
-5. Graph visualization updates dynamically
-6. Complete audit trail maintained throughout
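The `BaseProvider` plugin interface listed under Core Components can be pictured as a small abstract base class; a sketch with mostly hypothetical method names (`get_name` appears in the scanner code, the rest is illustrative):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Tuple

class BaseProvider(ABC):
    """Minimal sketch of a data-source plugin interface (method names mostly hypothetical)."""

    @abstractmethod
    def get_name(self) -> str:
        """Return the provider's short name, e.g. 'crtsh' or 'dns'."""

    @abstractmethod
    def query_domain(self, domain: str) -> List[Tuple[str, float, Dict[str, Any]]]:
        """Return (related_target, confidence, raw_data) tuples for a domain."""
```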
-## Troubleshooting
-
-### Common Issues
-
-**Graph not displaying**:
-- Ensure JavaScript is enabled in your browser
-- Check browser console for errors
-- Verify vis.js library is loading correctly
-
-**Scan fails to start**:
-- Check target domain is valid
-- Ensure crt.sh is accessible from your network
-- Review Flask console output for errors
-
-**No relationships discovered**:
-- Some domains may have limited certificate transparency data
-- Try a well-known domain like `google.com` to verify functionality
-- Check provider status in the interface
-
-### Debug Mode
-
-Enable debug mode for verbose logging:
```bash
-export FLASK_DEBUG=True
python app.py
```
-## Development Roadmap
-
-### Phase 2 (Planned)
-- Multi-provider system with Shodan and VirusTotal integration
-- Real-time scanning with enhanced visualization
-- Provider health monitoring and failure recovery
-
-### Phase 3 (Planned)
-- Advanced correlation algorithms
-- Enhanced forensic reporting
-- Performance optimization for large investigations
+### 2. Open Your Browser
+
+Navigate to `http://127.0.0.1:5000`.
+
+### 3. Basic Reconnaissance Workflow
+
+1. **Enter Target Domain**: Input a domain like `example.com`.
+2. **Select Recursion Depth**: Depth 2 is recommended for most investigations.
+3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin.
+4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered.
+5. **Analyze and Export**: Interact with the graph and download the results when the scan is complete.
+## Production Deployment
+
+To deploy DNSRecon in a production environment, follow these steps:
+
+### 1. Use a Production WSGI Server
+
+Do not use the built-in Flask development server for production. Use a WSGI server like **Gunicorn**:
+
+```bash
+pip install gunicorn
+gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
+```
+### 2. Configure Environment Variables
+
+Set the following environment variables for a secure and configurable deployment:
+
+```bash
+# Generate a strong, random secret key
+export SECRET_KEY='your-super-secret-and-random-key'
+
+# Set Flask to production mode
+export FLASK_ENV='production'
+export FLASK_DEBUG=False
+
+# API keys (optional, but recommended for full functionality)
+export VIRUSTOTAL_API_KEY="your_virustotal_key"
+export SHODAN_API_KEY="your_shodan_key"
+```
+### 3. Use a Reverse Proxy
+
+Set up a reverse proxy like **Nginx** to sit in front of the Gunicorn server. This provides several benefits, including:
+
+- **TLS/SSL Termination**: Securely handle HTTPS traffic.
+- **Load Balancing**: Distribute traffic across multiple application instances.
+- **Serving Static Files**: Efficiently serve CSS and JavaScript files.
+
+**Example Nginx Configuration:**
+
+```nginx
+server {
+    listen 80;
+    server_name your_domain.com;
+
+    location / {
+        return 301 https://$host$request_uri;
+    }
+}
+
+server {
+    listen 443 ssl;
+    server_name your_domain.com;
+
+    # SSL cert configuration
+    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
+    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;
+
+    location / {
+        proxy_pass http://127.0.0.1:5000;
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    }
+
+    location /static {
+        alias /path/to/your/dnsrecon/static;
+        expires 30d;
+    }
+}
+```
+## Autostart with systemd
+
+To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.
+
+### 1. Create a `.service` file
+
+Create a new service file in `/etc/systemd/system/`:
+
+```bash
+sudo nano /etc/systemd/system/dnsrecon.service
+```
+
+### 2. Add the Service Configuration
+
+Paste the following configuration into the file. **Remember to replace `/path/to/your/dnsrecon` and `your_user` with your actual project path and username.**
+```ini
+[Unit]
+Description=DNSRecon Application
+After=network.target
+
+[Service]
+User=your_user
+Group=your_user
+WorkingDirectory=/path/to/your/dnsrecon
+ExecStart=/path/to/your/dnsrecon/venv/bin/gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
+Restart=always
+Environment="SECRET_KEY=your-super-secret-and-random-key"
+Environment="FLASK_ENV=production"
+Environment="FLASK_DEBUG=False"
+Environment="VIRUSTOTAL_API_KEY=your_virustotal_key"
+Environment="SHODAN_API_KEY=your_shodan_key"
+
+[Install]
+WantedBy=multi-user.target
+```
+### 3. Enable and Start the Service
+
+Reload the `systemd` daemon, enable the service to start on boot, and then start it immediately:
+
+```bash
+sudo systemctl daemon-reload
+sudo systemctl enable dnsrecon.service
+sudo systemctl start dnsrecon.service
+```
+
+You can check the status of the service at any time with:
+
+```bash
+sudo systemctl status dnsrecon.service
+```
## Security Considerations

-- **No Persistent Storage**: All data stored in memory only
-- **API Keys**: Stored in memory only, never written to disk
-- **Rate Limiting**: Prevents abuse of external services
-- **Local Use Only**: No authentication required (designed for local use)
+- **API Keys**: API keys are stored in memory for the duration of a user session and are not written to disk.
+- **Rate Limiting**: DNSRecon includes built-in rate limiting to be respectful to data sources.
+- **Local Use**: The application is designed for local or trusted network use and does not have built-in authentication. **Do not expose it directly to the internet without proper security controls.**
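The built-in rate limiting mentioned above amounts to enforcing a minimum interval between outbound requests (the earlier configuration section quotes 60 requests per minute for crt.sh and 100 for DNS). A minimal sketch of such a limiter, not the project's actual implementation:

```python
import time

class RateLimiter:
    """Enforce a requests-per-minute budget by sleeping between calls."""

    def __init__(self, requests_per_minute: int):
        self.min_interval = 60.0 / requests_per_minute
        self.last_request = 0.0

    def wait(self) -> None:
        """Block just long enough to respect the configured budget."""
        elapsed = time.monotonic() - self.last_request
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request = time.monotonic()

# Budgets per the documented limits.
crtsh_limiter = RateLimiter(60)
dns_limiter = RateLimiter(100)
```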
-## Contributing
-
-DNSRecon follows a phased development approach. Currently in Phase 1 with core infrastructure completed.
-
-### Code Quality Standards
-- Follow PEP 8 for Python code
-- Comprehensive docstrings for all functions
-- Type hints where appropriate
-- Forensic logging for all external interactions
## License

-This project is intended for legitimate security research and infrastructure analysis. Users are responsible for compliance with applicable laws and regulations.
+This project is licensed under the terms of the license agreement found in the `LICENSE` file.

-## Support
-
-For issues and questions:
-1. Check the troubleshooting section above
-2. Review the Flask console output for error details
-3. Ensure all dependencies are properly installed
-
----
-
-**DNSRecon v1.0 - Phase 1 Implementation**
-*Passive Infrastructure Reconnaissance for Security Professionals*
6  app.py

@@ -179,9 +179,9 @@ def stop_scan():
            })
        else:
            return jsonify({
-               'success': False,
-               'error': 'No active scan to stop for this session'
-           }), 400
+               'success': True,
+               'message': 'No active scan to stop for this session'
+           })

    except Exception as e:
        print(f"ERROR: Exception in stop_scan endpoint: {e}")
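With this change, requesting a stop when no scan is running returns an ordinary success payload instead of an HTTP 400, so clients can treat both outcomes uniformly. A minimal sketch using `requests` (the endpoint path is assumed for illustration):

```python
import requests

# Endpoint path assumed for illustration; use the route actually registered in app.py.
resp = requests.post("http://127.0.0.1:5000/scan/stop")
data = resp.json()

if data.get("success"):
    # Either a running scan was stopped, or there was simply nothing to stop.
    print(data.get("message", "stop acknowledged"))
else:
    print("Stop failed:", data.get("error"))
```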
@@ -187,29 +187,17 @@ class Scanner:
        """Execute the reconnaissance scan with simplified recursion and forensic tracking."""
        print(f"_execute_scan started for {target_domain} with depth {max_depth}")
        self.executor = ThreadPoolExecutor(max_workers=self.max_workers)
-       # Initialize variables outside try block
-       processed_targets = set()  # Fix: Initialize here
+       processed_targets = set()

        try:
-           print("Setting status to RUNNING")
            self.status = ScanStatus.RUNNING

-           # Log scan start
            enabled_providers = [provider.get_name() for provider in self.providers]
            self.logger.log_scan_start(target_domain, max_depth, enabled_providers)
-           print(f"Logged scan start with providers: {enabled_providers}")

-           # Initialize with target domain and track it
-           print(f"Adding target domain '{target_domain}' as initial node")
            self.graph.add_node(target_domain, NodeType.DOMAIN)
            self._initialize_provider_states(target_domain)

-           # BFS-style exploration with simplified recursion
            current_level_targets = {target_domain}
-           all_discovered_targets = set()  # Track all discovered targets for large entity detection
+           all_discovered_targets = {target_domain}

-           print("Starting BFS exploration with simplified recursion...")

            for depth in range(max_depth + 1):
                if self.stop_event.is_set():
@@ -217,32 +205,25 @@ class Scanner:
                    break

                self.current_depth = depth
-               print(f"Processing depth level {depth} with {len(current_level_targets)} targets")
-               if not current_level_targets:
-                   print("No targets to process at this level")
+               targets_to_process = current_level_targets - processed_targets
+               if not targets_to_process:
+                   print("No new targets to process at this level.")
                    break

-               self.total_indicators_found += len(current_level_targets)
+               print(f"Processing depth level {depth} with {len(targets_to_process)} new targets")
+               self.total_indicators_found += len(targets_to_process)

-               # Process targets and collect newly discovered ones
                target_results = self._process_targets_concurrent_forensic(
-                   current_level_targets, processed_targets, all_discovered_targets, depth
+                   targets_to_process, processed_targets, all_discovered_targets, depth
                )
+               processed_targets.update(targets_to_process)

                next_level_targets = set()
-               for target, new_targets in target_results:
-                   processed_targets.add(target)
+               for _target, new_targets in target_results:
                    all_discovered_targets.update(new_targets)
+                   next_level_targets.update(new_targets)
-                   # Simple recursion rule: only valid IPs and domains within depth limit
-                   if depth < max_depth:
-                       for new_target in new_targets:
-                           if self._should_recurse_on_target(new_target, processed_targets, all_discovered_targets):
-                               next_level_targets.add(new_target)

                current_level_targets = next_level_targets
-               print(f"Completed depth {depth}, {len(next_level_targets)} targets for next level")

        except Exception as e:
            print(f"ERROR: Scan execution failed with error: {e}")
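The reworked loop above is essentially level-order BFS with a shared `processed_targets` set, so the same target is never queried twice. A standalone sketch of that control flow, with the provider calls reduced to a single `discover` callback (illustrative only):

```python
def explore(start: str, max_depth: int, discover) -> set:
    """Depth-limited, level-order BFS mirroring the scanner's reworked loop.

    `discover(target)` stands in for the provider queries and returns the
    set of new targets related to `target`.
    """
    processed_targets = set()
    current_level_targets = {start}
    all_discovered_targets = {start}

    for depth in range(max_depth + 1):
        targets_to_process = current_level_targets - processed_targets
        if not targets_to_process:
            break

        next_level_targets = set()
        for target in targets_to_process:
            new_targets = discover(target)
            all_discovered_targets.update(new_targets)
            next_level_targets.update(new_targets)

        processed_targets.update(targets_to_process)
        current_level_targets = next_level_targets

    return all_discovered_targets
```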
@@ -252,14 +233,10 @@ class Scanner:
        finally:
            if self.stop_event.is_set():
                self.status = ScanStatus.STOPPED
-               print("Scan completed with STOPPED status")
            else:
                self.status = ScanStatus.COMPLETED
-               print("Scan completed with COMPLETED status")

            self.logger.log_scan_complete()
            self.executor.shutdown(wait=False, cancel_futures=True)

            stats = self.graph.get_statistics()
            print("Final scan statistics:")
            print(f" - Total nodes: {stats['basic_metrics']['total_nodes']}")
@@ -382,9 +359,12 @@ class Scanner:
            except (Exception, CancelledError) as e:
                self._log_provider_error(target, provider.get_name(), str(e))

-       # Update node with collected metadata
-       if target_metadata[target]:
-           self.graph.add_node(target, target_type, metadata=dict(target_metadata[target]))
+       for node_id, metadata_dict in target_metadata.items():
+           if self.graph.graph.has_node(node_id):
+               node_is_ip = _is_valid_ip(node_id)
+               node_type_to_add = NodeType.IP if node_is_ip else NodeType.DOMAIN
+               # This call updates the existing node with the new metadata
+               self.graph.add_node(node_id, node_type_to_add, metadata=metadata_dict)

        return new_targets
@@ -573,8 +553,6 @@ class Scanner:
    def _collect_node_metadata_forensic(self, node_id: str, provider_name: str, rel_type: RelationshipType,
                                        target: str, raw_data: Dict[str, Any], metadata: Dict[str, Any]) -> None:
        """Collect and organize metadata for forensic tracking with enhanced logging."""

-       # Log metadata collection
        self.logger.logger.debug(f"Collecting metadata for {node_id} from {provider_name}: {rel_type.relationship_name}")

        if provider_name == 'dns':
@@ -599,7 +577,6 @@ class Scanner:
                if key not in metadata.get('shodan', {}) or not metadata.get('shodan', {}).get(key):
                    metadata.setdefault('shodan', {})[key] = value

-       # Track ASN data
        if rel_type == RelationshipType.ASN_MEMBERSHIP:
            metadata['asn_data'] = {
                'asn': target,
@@ -28,13 +28,6 @@ class GraphManager {
                },
                borderWidth: 2,
                borderColor: '#444',
-               shadow: {
-                   enabled: true,
-                   color: 'rgba(0, 0, 0, 0.5)',
-                   size: 5,
-                   x: 2,
-                   y: 2
-               },
                scaling: {
                    min: 10,
                    max: 30,
@@ -48,9 +41,6 @@ class GraphManager {
                    node: (values, id, selected, hovering) => {
                        values.borderColor = '#00ff41';
                        values.borderWidth = 3;
-                       values.shadow = true;
-                       values.shadowColor = 'rgba(0, 255, 65, 0.6)';
-                       values.shadowSize = 10;
                    }
                }
            },
@@ -82,19 +72,10 @@ class GraphManager {
                    type: 'dynamic',
                    roundness: 0.6
                },
-               shadow: {
-                   enabled: true,
-                   color: 'rgba(0, 0, 0, 0.3)',
-                   size: 3,
-                   x: 1,
-                   y: 1
-               },
                chosen: {
                    edge: (values, id, selected, hovering) => {
                        values.color = '#00ff41';
                        values.width = 4;
-                       values.shadow = true;
-                       values.shadowColor = 'rgba(0, 255, 65, 0.4)';
                    }
                }
            },
@@ -344,17 +325,6 @@ class GraphManager {
            processedNode.borderWidth = Math.max(2, Math.floor(node.confidence * 5));
        }

-       // Add special styling for important nodes
-       if (this.isImportantNode(node)) {
-           processedNode.shadow = {
-               enabled: true,
-               color: 'rgba(0, 255, 65, 0.6)',
-               size: 10,
-               x: 2,
-               y: 2
-           };
-       }

        // Style based on certificate validity
        if (node.type === 'domain') {
            if (node.metadata && node.metadata.certificate_data && node.metadata.certificate_data.has_valid_cert === true) {
@@ -393,16 +363,7 @@ class GraphManager {
            }
        };

-       // Add animation for high-confidence edges
-       if (confidence >= 0.8) {
-           processedEdge.shadow = {
-               enabled: true,
-               color: 'rgba(0, 255, 65, 0.3)',
-               size: 5,
-               x: 1,
-               y: 1
-           };
-       }

        return processedEdge;
    }
@@ -718,14 +679,7 @@ class GraphManager {
        const nodeHighlights = newNodes.map(node => ({
            id: node.id,
            borderColor: '#00ff41',
-           borderWidth: 4,
-           shadow: {
-               enabled: true,
-               color: 'rgba(0, 255, 65, 0.8)',
-               size: 15,
-               x: 2,
-               y: 2
-           }
+           borderWidth: 4
        }));

        // Briefly highlight new edges
@@ -744,7 +698,6 @@ class GraphManager {
            id: node.id,
            borderColor: this.getNodeBorderColor(node.type),
            borderWidth: 2,
-           shadow: node.shadow || { enabled: false }
        }));

        const edgeResets = newEdges.map(edge => ({