prod staging

This commit is contained in:
parent f445187025
commit 7e2473b521

README.md (469 changes)

@@ -2,272 +2,257 @@

DNSRecon is an interactive, passive reconnaissance tool designed to map adversary infrastructure. It operates on a "free-by-default" model, ensuring core functionality without subscriptions, while allowing power users to enhance its capabilities with paid API keys.

**Current Status: Phase 1 Implementation**

- ✅ Core infrastructure and graph engine
- ✅ Certificate transparency data provider (crt.sh)
- ✅ Basic web interface with real-time visualization
- ✅ Forensic logging system
- ✅ JSON export functionality

**Current Status: Phase 2 Implementation**

- ✅ Core infrastructure and graph engine
- ✅ Multi-provider support (crt.sh, DNS, Shodan)
- ✅ Session-based multi-user support
- ✅ Real-time web interface with interactive visualization
- ✅ Forensic logging system and JSON export

## Features

### Core Capabilities

- **Zero Contact Reconnaissance**: Passive data gathering without touching target infrastructure
- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping
- **Real-Time Visualization**: Interactive graph updates during scanning
- **Forensic Logging**: Complete audit trail of all reconnaissance activities
- **Confidence Scoring**: Weighted relationships based on data source reliability

### Data Sources (Phase 1)

- **Certificate Transparency (crt.sh)**: Discovers domain relationships through SSL certificate SAN analysis
- **Basic DNS Resolution**: A/AAAA record lookups for IP relationships

### Visualization

- **Interactive Network Graph**: Powered by vis.js with a cybersecurity theme
- **Node Types**: Domains, IP addresses, certificates, ASNs
- **Confidence-Based Styling**: Visual indicators for relationship strength
- **Real-Time Updates**: Graph builds dynamically as relationships are discovered

- **Passive Reconnaissance**: Gathers data without direct contact with target infrastructure.
- **In-Memory Graph Analysis**: Uses NetworkX for efficient relationship mapping.
- **Real-Time Visualization**: The graph updates dynamically as the scan progresses.
- **Forensic Logging**: A complete audit trail of all reconnaissance activities is maintained.
- **Confidence Scoring**: Relationships are weighted based on the reliability of the data source.
- **Session Management**: Supports concurrent user sessions with isolated scanner instances.

## Installation

### Prerequisites

- Python 3.8 or higher
- Modern web browser with JavaScript enabled

### Setup

1. **Clone or create the project directory**:
   ```bash
   mkdir dnsrecon
   cd dnsrecon
   ```

- Python 3.8 or higher
- A modern web browser with JavaScript enabled
- (Recommended) A Linux host for running the application and the optional DNS cache.

2. **Install Python dependencies**:
   ```bash
   pip install -r requirements.txt
   ```

3. **Verify the directory structure**:
   ```
   dnsrecon/
   ├── app.py
   ├── config.py
   ├── requirements.txt
   ├── core/
   │   ├── __init__.py
   │   ├── graph_manager.py
   │   ├── scanner.py
   │   └── logger.py
   ├── providers/
   │   ├── __init__.py
   │   ├── base_provider.py
   │   └── crtsh_provider.py
   ├── static/
   │   ├── css/
   │   │   └── main.css
   │   └── js/
   │       ├── graph.js
   │       └── main.js
   └── templates/
       └── index.html
   ```

## Usage

### Starting the Application

1. **Run the Flask application**:
   ```bash
   python app.py
   ```

2. **Open your web browser** and navigate to:
   ```
   http://127.0.0.1:5000
   ```
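
If you want to confirm from the command line that the interface is up before opening a browser, a quick check with Python's `requests` package (an assumption that it is installed; it is not required by DNSRecon itself) against the default host and port above might look like this:

```python
# Quick reachability check for the DNSRecon web interface.
# Assumes the default bind address above (127.0.0.1:5000).
import requests

try:
    response = requests.get("http://127.0.0.1:5000/", timeout=5)
    # A 200 response suggests the Flask app is serving the index page.
    print(f"DNSRecon UI reachable (HTTP {response.status_code})")
except requests.RequestException as exc:
    print(f"DNSRecon UI not reachable: {exc}")
```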

### Basic Reconnaissance Workflow

1. **Enter Target Domain**: Input the domain you want to investigate (e.g., `example.com`)

2. **Select Recursion Depth**:
   - **Depth 1**: Direct relationships only
   - **Depth 2**: Recommended for most investigations
   - **Depth 3+**: Extended analysis for comprehensive mapping

3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin passive data gathering

4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered

5. **Analyze Results**: Interact with the graph to explore relationships and click nodes for detailed information

6. **Export Data**: Download complete results including graph data and forensic audit trail

### Understanding the Visualization

#### Node Types

- 🟢 **Green Circles**: Domain names
- 🟠 **Orange Squares**: IP addresses
- ⚪ **Gray Diamonds**: SSL certificates
- 🔵 **Blue Triangles**: ASN (Autonomous System) information

#### Edge Confidence

- **Thick Green Lines**: High confidence (≥80%) - Certificate SAN relationships
- **Medium Orange Lines**: Medium confidence (60-79%) - DNS record relationships
- **Thin Gray Lines**: Lower confidence (<60%) - Passive DNS or uncertain relationships (see the styling sketch after this list)
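
The thresholds above translate naturally into styling rules. The sketch below is illustrative only: the real styling lives in `static/js/graph.js`, and the function name and return format here are assumptions rather than the project's API.

```python
def edge_style(confidence: float) -> dict:
    """Map a relationship confidence score (0.0-1.0) to an edge style.

    Thresholds follow the legend above; the returned dict is a hypothetical
    stand-in for the vis.js edge options that graph.js actually builds.
    """
    if confidence >= 0.8:   # high confidence, e.g. certificate SAN relationships
        return {"color": "green", "width": 4}
    if confidence >= 0.6:   # medium confidence, e.g. DNS record relationships
        return {"color": "orange", "width": 2}
    return {"color": "gray", "width": 1}  # lower confidence / passive DNS


if __name__ == "__main__":
    for score in (0.95, 0.70, 0.40):
        print(score, edge_style(score))
```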

### Example Investigation

Let's investigate `github.com`:

1. Enter `github.com` as the target domain
2. Set recursion depth to 2
3. Start the scan
4. Observe relationships to other GitHub domains discovered through certificate analysis
5. Export results for further analysis

Expected discoveries might include:

- `*.github.com` domains through certificate SANs
- `github.io` and related domains
- Associated IP addresses
- Certificate authority relationships

## Configuration

### Environment Variables

You can configure DNSRecon using environment variables:

### 1. Clone the Project

```bash
# API keys for future providers (Phase 2)
export VIRUSTOTAL_API_KEY="your_api_key_here"
export SHODAN_API_KEY="your_api_key_here"

# Application settings
export DEFAULT_RECURSION_DEPTH=2
export FLASK_DEBUG=False
git clone https://github.com/your-repo/dnsrecon.git
cd dnsrecon
```
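
For reference, reading these variables from Python is straightforward. The sketch below shows how a `config.py`-style module might pick them up; the variable names come from the block above, while the fallback defaults are assumptions:

```python
# Minimal sketch of environment-driven configuration.
# Variable names match the export examples above; the defaults are assumptions.
import os

VIRUSTOTAL_API_KEY = os.environ.get("VIRUSTOTAL_API_KEY")  # None if unset
SHODAN_API_KEY = os.environ.get("SHODAN_API_KEY")          # None if unset
DEFAULT_RECURSION_DEPTH = int(os.environ.get("DEFAULT_RECURSION_DEPTH", "2"))
FLASK_DEBUG = os.environ.get("FLASK_DEBUG", "False").lower() == "true"

if __name__ == "__main__":
    print("Recursion depth:", DEFAULT_RECURSION_DEPTH)
    print("Debug mode:", FLASK_DEBUG)
    print("Shodan key configured:", SHODAN_API_KEY is not None)
```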

### Rate Limiting

DNSRecon includes built-in rate limiting to be respectful to data sources (a simplified sketch follows this list):

- **crt.sh**: 60 requests per minute
- **DNS queries**: 100 requests per minute
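
One simple way to enforce per-minute budgets like these is a sliding-window limiter. The sketch below is illustrative and not DNSRecon's actual implementation; the class and method names are assumptions:

```python
# Illustrative sliding-window rate limiter; not DNSRecon's actual code.
import time
from collections import deque


class RateLimiter:
    """Allow at most `max_requests` calls per `window_seconds`."""

    def __init__(self, max_requests: int, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._timestamps = deque()

    def wait(self) -> None:
        """Block until another request is allowed, then record it."""
        now = time.monotonic()
        # Drop timestamps that have fallen outside the window.
        while self._timestamps and now - self._timestamps[0] >= self.window_seconds:
            self._timestamps.popleft()
        if len(self._timestamps) >= self.max_requests:
            sleep_for = self.window_seconds - (now - self._timestamps[0])
            time.sleep(max(sleep_for, 0.0))
        self._timestamps.append(time.monotonic())


# Example: 60 requests per minute, matching the crt.sh budget above.
crtsh_limiter = RateLimiter(max_requests=60)
```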

### 2. Install Python Dependencies

## Data Export Format

It is highly recommended to use a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

### 3. (Optional but Recommended) Set up a Local DNS Caching Resolver

Running a local DNS caching resolver can significantly speed up DNS queries and reduce your network footprint. Here’s how to set up `unbound` on a Debian-based Linux distribution (like Ubuntu).

**a. Install Unbound:**

```bash
sudo apt update
sudo apt install unbound -y
```

**b. Configure Unbound:**

Create a new configuration file for DNSRecon:

```bash
sudo nano /etc/unbound/unbound.conf.d/dnsrecon.conf
```

Add the following content to the file:

```
server:
    # Listen on localhost for all users
    interface: 127.0.0.1
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.0/8 allow

    # Enable prefetching of popular items
    prefetch: yes
```

**c. Restart Unbound and set it as the default resolver:**

```bash
sudo systemctl restart unbound
sudo systemctl enable unbound
```

To use this resolver for your system, you may need to update your network settings to point to `127.0.0.1` as your DNS server.

**d. Update DNSProvider to use the local resolver:**

In `dnsrecon/providers/dns_provider.py`, you can explicitly set the resolver's nameservers in the `__init__` method:

```python
# dnsrecon/providers/dns_provider.py

class DNSProvider(BaseProvider):
    def __init__(self, session_config=None):
        """Initialize DNS provider with session-specific configuration."""
        super().__init__(...)

        # Configure DNS resolver
        self.resolver = dns.resolver.Resolver()
        self.resolver.nameservers = ['127.0.0.1']  # Use local caching resolver
        self.resolver.timeout = 5
        self.resolver.lifetime = 10
```
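
Before relying on the local resolver, it is worth confirming that it actually answers. A quick check with `dnspython` (the same library the provider uses; this assumes dnspython 2.x, and the queried domain is just an example) could look like this:

```python
# Sanity check that the local Unbound instance resolves queries.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]  # the local caching resolver configured above
resolver.timeout = 5
resolver.lifetime = 10

try:
    answer = resolver.resolve("example.com", "A")  # example query only
    print("Local resolver OK:", [rdata.to_text() for rdata in answer])
except Exception as exc:
    print("Local resolver not responding:", exc)
```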

## Usage (Development)

### 1. Start the Application

Results are exported as JSON with the following structure:

```json
{
  "scan_metadata": {
    "target_domain": "example.com",
    "max_depth": 2,
    "final_status": "completed"
  },
  "graph_data": {
    "nodes": [...],
    "edges": [...]
  },
  "forensic_audit": {
    "session_metadata": {...},
    "api_requests": [...],
    "relationships": [...]
  },
  "provider_statistics": {...}
}
```
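
Given that structure, a quick way to inspect an exported file is to load it and summarize the graph. This is a minimal sketch; the filename is an example, and it assumes the top-level fields shown above:

```python
# Summarize a DNSRecon JSON export; field names follow the structure above.
import json

with open("dnsrecon_export.json", "r", encoding="utf-8") as handle:  # example filename
    export = json.load(handle)

metadata = export["scan_metadata"]
graph = export["graph_data"]

print("Target domain:", metadata["target_domain"])
print("Max depth    :", metadata["max_depth"])
print("Final status :", metadata["final_status"])
print("Nodes        :", len(graph["nodes"]))
print("Edges        :", len(graph["edges"]))
```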

## Forensic Integrity

DNSRecon maintains complete forensic integrity (an illustrative audit-entry sketch follows this list):

- **API Request Logging**: Every external request is logged with timestamps, URLs, and responses
- **Relationship Provenance**: Each discovered relationship includes source provider and discovery method
- **Session Tracking**: Unique session IDs for investigation continuity
- **Confidence Metadata**: Scoring rationale for all relationships
- **Export Integrity**: Complete audit trail included in all exports
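
To make these points concrete, a single audit entry could carry fields like the ones below. This is purely illustrative; the real `ForensicLogger` schema may differ, and every field name here is an assumption loosely aligned with the export format shown earlier:

```python
# Hypothetical shape of one forensic audit entry (all field names are assumptions).
from datetime import datetime, timezone

audit_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),  # when the request was made
    "session_id": "3f2a9c1e",                             # example session identifier
    "provider": "crtsh",                                  # which data source was queried
    "url": "https://crt.sh/?q=example.com&output=json",   # exact request issued
    "status_code": 200,                                   # response status
    "discovery_method": "certificate_san",                # how the relationship was found
    "confidence": 0.9,                                    # input to the scoring rationale
}

print(audit_entry)
```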

## Architecture Overview

### Core Components

- **GraphManager**: NetworkX-based in-memory graph with confidence scoring
- **Scanner**: Multi-provider orchestration with depth-limited BFS exploration
- **ForensicLogger**: Thread-safe audit trail with structured logging
- **BaseProvider**: Abstract interface for data source plugins (a minimal interface sketch follows this list)
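
As a rough picture of what such a plugin interface can look like, here is a minimal sketch. Only `get_name()` is confirmed by the scanner code later in this commit; the other method name and the signatures are assumptions, not the project's actual `base_provider.py`:

```python
# Illustrative provider interface; only get_name() is known from the Scanner diff below.
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Tuple


class BaseProvider(ABC):
    """Abstract interface that each data source plugin implements."""

    @abstractmethod
    def get_name(self) -> str:
        """Return the provider's short name, e.g. 'crtsh' or 'dns'."""

    @abstractmethod
    def query_domain(self, domain: str) -> List[Tuple[str, Dict[str, Any]]]:
        """Return (related_target, raw_data) pairs discovered for a domain.

        Hypothetical signature, for illustration only.
        """
```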

### Data Flow

1. User initiates scan via web interface
2. Scanner coordinates multiple data providers (see the sketch after this list)
3. Relationships discovered and added to in-memory graph
4. Real-time updates sent to web interface
5. Graph visualization updates dynamically
6. Complete audit trail maintained throughout
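
The depth-limited, breadth-first exploration behind steps 2-3 can be summarized as follows. This is a simplified sketch of the pattern visible in the `Scanner` diff later in this commit, not the actual `scanner.py`; the `query` callable and its return type are assumptions:

```python
# Simplified depth-limited BFS over providers; names and signatures are illustrative.
from typing import Callable, Dict, Iterable, Set


def explore(target_domain: str, max_depth: int,
            query: Callable[[str], Iterable[str]]) -> Dict[str, int]:
    """Return every discovered target mapped to the depth at which it was found."""
    discovered: Dict[str, int] = {target_domain: 0}
    processed: Set[str] = set()
    current_level: Set[str] = {target_domain}

    for depth in range(max_depth + 1):
        targets_to_process = current_level - processed
        if not targets_to_process:
            break
        next_level: Set[str] = set()
        for target in targets_to_process:
            for new_target in query(target):  # ask the providers about this target
                discovered.setdefault(new_target, depth + 1)
                next_level.add(new_target)
        processed.update(targets_to_process)
        current_level = next_level

    return discovered
```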

## Troubleshooting

### Common Issues

**Graph not displaying**:
- Ensure JavaScript is enabled in your browser
- Check browser console for errors
- Verify vis.js library is loading correctly

**Scan fails to start**:
- Check that the target domain is valid
- Ensure crt.sh is accessible from your network
- Review Flask console output for errors

**No relationships discovered**:
- Some domains may have limited certificate transparency data
- Try a well-known domain like `google.com` to verify functionality
- Check provider status in the interface

### Debug Mode

Enable debug mode for verbose logging:

```bash
export FLASK_DEBUG=True
python app.py
```

## Development Roadmap

### 2. Open Your Browser

### Phase 2 (Planned)
- Multi-provider system with Shodan and VirusTotal integration
- Real-time scanning with enhanced visualization
- Provider health monitoring and failure recovery

Navigate to `http://127.0.0.1:5000`.

### Phase 3 (Planned)
- Advanced correlation algorithms
- Enhanced forensic reporting
- Performance optimization for large investigations

### 3. Basic Reconnaissance Workflow

1. **Enter Target Domain**: Input a domain like `example.com`.
2. **Select Recursion Depth**: Depth 2 is recommended for most investigations.
3. **Start Reconnaissance**: Click "Start Reconnaissance" to begin.
4. **Monitor Progress**: Watch the real-time graph build as relationships are discovered.
5. **Analyze and Export**: Interact with the graph and download the results when the scan is complete.

## Production Deployment

To deploy DNSRecon in a production environment, follow these steps:

### 1. Use a Production WSGI Server

Do not use the built-in Flask development server for production. Use a WSGI server like **Gunicorn**:

```bash
pip install gunicorn
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
```

### 2. Configure Environment Variables

Set the following environment variables for a secure and configurable deployment:

```bash
# Generate a strong, random secret key
export SECRET_KEY='your-super-secret-and-random-key'

# Set Flask to production mode
export FLASK_ENV='production'
export FLASK_DEBUG=False

# API keys (optional, but recommended for full functionality)
export VIRUSTOTAL_API_KEY="your_virustotal_key"
export SHODAN_API_KEY="your_shodan_key"
```
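
One convenient way to generate a suitably random `SECRET_KEY` is Python's standard `secrets` module; paste the output into the export above:

```python
# Generate a 64-character hex secret suitable for use as Flask's SECRET_KEY.
import secrets

print(secrets.token_hex(32))
```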

### 3. Use a Reverse Proxy

Set up a reverse proxy like **Nginx** to sit in front of the Gunicorn server. This provides several benefits, including:

- **TLS/SSL Termination**: Securely handle HTTPS traffic.
- **Load Balancing**: Distribute traffic across multiple application instances.
- **Serving Static Files**: Efficiently serve CSS and JavaScript files.

**Example Nginx Configuration:**

```nginx
server {
    listen 80;
    server_name your_domain.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name your_domain.com;

    # SSL cert configuration
    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        alias /path/to/your/dnsrecon/static;
        expires 30d;
    }
}
```

## Autostart with systemd

To run DNSRecon as a service that starts automatically on boot, you can use `systemd`.

### 1. Create a `.service` file

Create a new service file in `/etc/systemd/system/`:

```bash
sudo nano /etc/systemd/system/dnsrecon.service
```

### 2. Add the Service Configuration

Paste the following configuration into the file. **Remember to replace `/path/to/your/dnsrecon` and `your_user` with your actual project path and username.**

```ini
[Unit]
Description=DNSRecon Application
After=network.target

[Service]
User=your_user
Group=your_user
WorkingDirectory=/path/to/your/dnsrecon
ExecStart=/path/to/your/dnsrecon/venv/bin/gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
Restart=always
Environment="SECRET_KEY=your-super-secret-and-random-key"
Environment="FLASK_ENV=production"
Environment="FLASK_DEBUG=False"
Environment="VIRUSTOTAL_API_KEY=your_virustotal_key"
Environment="SHODAN_API_KEY=your_shodan_key"

[Install]
WantedBy=multi-user.target
```

### 3. Enable and Start the Service

Reload the `systemd` daemon, enable the service to start on boot, and then start it immediately:

```bash
sudo systemctl daemon-reload
sudo systemctl enable dnsrecon.service
sudo systemctl start dnsrecon.service
```

You can check the status of the service at any time with:

```bash
sudo systemctl status dnsrecon.service
```

## Security Considerations

- **No Persistent Storage**: All data stored in memory only
- **API Keys**: Stored in memory only, never written to disk
- **Rate Limiting**: Prevents abuse of external services
- **Local Use Only**: No authentication required (designed for local use)

## Contributing

DNSRecon follows a phased development approach and is currently in Phase 1, with the core infrastructure completed.

### Code Quality Standards
- Follow PEP 8 for Python code
- Comprehensive docstrings for all functions
- Type hints where appropriate
- Forensic logging for all external interactions

- **API Keys**: API keys are stored in memory for the duration of a user session and are not written to disk.
- **Rate Limiting**: DNSRecon includes built-in rate limiting to be respectful to data sources.
- **Local Use**: The application is designed for local or trusted network use and does not have built-in authentication. **Do not expose it directly to the internet without proper security controls.**

## License

This project is intended for legitimate security research and infrastructure analysis. Users are responsible for compliance with applicable laws and regulations.

## Support

For issues and questions:
1. Check the troubleshooting section above
2. Review the Flask console output for error details
3. Ensure all dependencies are properly installed

---

**DNSRecon v1.0 - Phase 1 Implementation**
*Passive Infrastructure Reconnaissance for Security Professionals*

This project is licensed under the terms of the license agreement found in the `LICENSE` file.

app.py (6 changes)

@@ -179,9 +179,9 @@ def stop_scan():
            })
        else:
            return jsonify({
                'success': False,
                'error': 'No active scan to stop for this session'
            }), 400
                'success': True,
                'message': 'No active scan to stop for this session'
            })

    except Exception as e:
        print(f"ERROR: Exception in stop_scan endpoint: {e}")

@@ -187,29 +187,17 @@ class Scanner:
        """Execute the reconnaissance scan with simplified recursion and forensic tracking."""
        print(f"_execute_scan started for {target_domain} with depth {max_depth}")
        self.executor = ThreadPoolExecutor(max_workers=self.max_workers)

        # Initialize variables outside try block
        processed_targets = set()  # Fix: Initialize here
        processed_targets = set()

        try:
            print("Setting status to RUNNING")
            self.status = ScanStatus.RUNNING

            # Log scan start
            enabled_providers = [provider.get_name() for provider in self.providers]
            self.logger.log_scan_start(target_domain, max_depth, enabled_providers)
            print(f"Logged scan start with providers: {enabled_providers}")

            # Initialize with target domain and track it
            print(f"Adding target domain '{target_domain}' as initial node")
            self.graph.add_node(target_domain, NodeType.DOMAIN)
            self._initialize_provider_states(target_domain)

            # BFS-style exploration with simplified recursion
            current_level_targets = {target_domain}
            all_discovered_targets = set()  # Track all discovered targets for large entity detection

            print("Starting BFS exploration with simplified recursion...")
            all_discovered_targets = {target_domain}

            for depth in range(max_depth + 1):
                if self.stop_event.is_set():

@@ -217,32 +205,25 @@ class Scanner:
                    break

                self.current_depth = depth
                print(f"Processing depth level {depth} with {len(current_level_targets)} targets")

                if not current_level_targets:
                    print("No targets to process at this level")
                targets_to_process = current_level_targets - processed_targets
                if not targets_to_process:
                    print("No new targets to process at this level.")
                    break

                self.total_indicators_found += len(current_level_targets)
                print(f"Processing depth level {depth} with {len(targets_to_process)} new targets")
                self.total_indicators_found += len(targets_to_process)

                # Process targets and collect newly discovered ones
                target_results = self._process_targets_concurrent_forensic(
                    current_level_targets, processed_targets, all_discovered_targets, depth
                    targets_to_process, processed_targets, all_discovered_targets, depth
                )
                processed_targets.update(targets_to_process)

                next_level_targets = set()
                for target, new_targets in target_results:
                    processed_targets.add(target)
                for _target, new_targets in target_results:
                    all_discovered_targets.update(new_targets)

                    # Simple recursion rule: only valid IPs and domains within depth limit
                    if depth < max_depth:
                        for new_target in new_targets:
                            if self._should_recurse_on_target(new_target, processed_targets, all_discovered_targets):
                                next_level_targets.add(new_target)
                        next_level_targets.update(new_targets)

                current_level_targets = next_level_targets
                print(f"Completed depth {depth}, {len(next_level_targets)} targets for next level")

        except Exception as e:
            print(f"ERROR: Scan execution failed with error: {e}")

@@ -252,14 +233,10 @@ class Scanner:
        finally:
            if self.stop_event.is_set():
                self.status = ScanStatus.STOPPED
                print("Scan completed with STOPPED status")
            else:
                self.status = ScanStatus.COMPLETED
                print("Scan completed with COMPLETED status")

            self.logger.log_scan_complete()
            self.executor.shutdown(wait=False, cancel_futures=True)

            stats = self.graph.get_statistics()
            print("Final scan statistics:")
            print(f"  - Total nodes: {stats['basic_metrics']['total_nodes']}")

@@ -382,9 +359,12 @@ class Scanner:
            except (Exception, CancelledError) as e:
                self._log_provider_error(target, provider.get_name(), str(e))

        # Update node with collected metadata
        if target_metadata[target]:
            self.graph.add_node(target, target_type, metadata=dict(target_metadata[target]))
        for node_id, metadata_dict in target_metadata.items():
            if self.graph.graph.has_node(node_id):
                node_is_ip = _is_valid_ip(node_id)
                node_type_to_add = NodeType.IP if node_is_ip else NodeType.DOMAIN
                # This call updates the existing node with the new metadata
                self.graph.add_node(node_id, node_type_to_add, metadata=metadata_dict)

        return new_targets

@@ -573,8 +553,6 @@ class Scanner:
    def _collect_node_metadata_forensic(self, node_id: str, provider_name: str, rel_type: RelationshipType,
                                        target: str, raw_data: Dict[str, Any], metadata: Dict[str, Any]) -> None:
        """Collect and organize metadata for forensic tracking with enhanced logging."""

        # Log metadata collection
        self.logger.logger.debug(f"Collecting metadata for {node_id} from {provider_name}: {rel_type.relationship_name}")

        if provider_name == 'dns':

@@ -599,7 +577,6 @@ class Scanner:
                if key not in metadata.get('shodan', {}) or not metadata.get('shodan', {}).get(key):
                    metadata.setdefault('shodan', {})[key] = value

        # Track ASN data
        if rel_type == RelationshipType.ASN_MEMBERSHIP:
            metadata['asn_data'] = {
                'asn': target,

@@ -28,13 +28,6 @@ class GraphManager {
                },
                borderWidth: 2,
                borderColor: '#444',
                shadow: {
                    enabled: true,
                    color: 'rgba(0, 0, 0, 0.5)',
                    size: 5,
                    x: 2,
                    y: 2
                },
                scaling: {
                    min: 10,
                    max: 30,

@@ -48,9 +41,6 @@ class GraphManager {
                    node: (values, id, selected, hovering) => {
                        values.borderColor = '#00ff41';
                        values.borderWidth = 3;
                        values.shadow = true;
                        values.shadowColor = 'rgba(0, 255, 65, 0.6)';
                        values.shadowSize = 10;
                    }
                }
            },

@@ -82,19 +72,10 @@ class GraphManager {
                    type: 'dynamic',
                    roundness: 0.6
                },
                shadow: {
                    enabled: true,
                    color: 'rgba(0, 0, 0, 0.3)',
                    size: 3,
                    x: 1,
                    y: 1
                },
                chosen: {
                    edge: (values, id, selected, hovering) => {
                        values.color = '#00ff41';
                        values.width = 4;
                        values.shadow = true;
                        values.shadowColor = 'rgba(0, 255, 65, 0.4)';
                    }
                }
            },

@@ -344,17 +325,6 @@ class GraphManager {
            processedNode.borderWidth = Math.max(2, Math.floor(node.confidence * 5));
        }

        // Add special styling for important nodes
        if (this.isImportantNode(node)) {
            processedNode.shadow = {
                enabled: true,
                color: 'rgba(0, 255, 65, 0.6)',
                size: 10,
                x: 2,
                y: 2
            };
        }

        // Style based on certificate validity
        if (node.type === 'domain') {
            if (node.metadata && node.metadata.certificate_data && node.metadata.certificate_data.has_valid_cert === true) {

@@ -393,16 +363,7 @@ class GraphManager {
            }
        };

        // Add animation for high-confidence edges
        if (confidence >= 0.8) {
            processedEdge.shadow = {
                enabled: true,
                color: 'rgba(0, 255, 65, 0.3)',
                size: 5,
                x: 1,
                y: 1
            };
        }

        return processedEdge;
    }

@@ -718,14 +679,7 @@ class GraphManager {
        const nodeHighlights = newNodes.map(node => ({
            id: node.id,
            borderColor: '#00ff41',
            borderWidth: 4,
            shadow: {
                enabled: true,
                color: 'rgba(0, 255, 65, 0.8)',
                size: 15,
                x: 2,
                y: 2
            }
            borderWidth: 4
        }));

        // Briefly highlight new edges

@@ -744,7 +698,6 @@ class GraphManager {
            id: node.id,
            borderColor: this.getNodeBorderColor(node.type),
            borderWidth: 2,
            shadow: node.shadow || { enabled: false }
        }));

        const edgeResets = newEdges.map(edge => ({