33 Commits

Author SHA1 Message Date
overcuriousity
1598b16b85 Merge pull request #10 from overcuriousity/claude/add-gpg-verification-01CeuKD7An97D9W83Dg3QxKe
claude/add-gpg-verification-01CeuKD7An97D9W83Dg3QxKe
2025-12-14 02:40:45 +01:00
Claude
90a82dc0d3 Refactor note signing: sign hash only + comprehensive documentation
Changed the cryptographic signing approach to be more efficient and standard:

**Signing Logic Changes:**

1. **Note-level signing** (CLI & TUI):
   - Old: Sign "Hash: {hash}\nContent: {content}"
   - New: Sign only the SHA256 hash
   - Rationale: Hash already proves integrity (timestamp+content),
     signature proves authenticity. More efficient, standard approach.

2. **Export-level signing** (unchanged):
   - Entire markdown export is GPG-signed
   - Provides document-level integrity verification
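The hash-then-sign flow described above can be sketched as follows. This is an illustrative sketch, not the project's actual code; the timestamp format and field order are assumptions based on the "SHA256 Hash (timestamp:content)" label mentioned below.

```python
import hashlib

def note_hash(timestamp: str, content: str) -> str:
    """SHA256 over "timestamp:content" -- the only value that gets GPG-signed
    under the new scheme (the old scheme signed hash plus full content)."""
    return hashlib.sha256(f"{timestamp}:{content}".encode("utf-8")).hexdigest()

# The 64-character digest below is what would be passed to `gpg --clearsign`.
digest = note_hash("2025-12-13T21:15:58Z", "Found a USB key")
```

Because the hash already binds timestamp and content, signing the digest alone yields the same authenticity guarantee with a much smaller signed payload.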

**Implementation:**
- trace/cli.py: Updated quick_add_note() to sign hash only
- trace/tui_app.py: Updated note creation dialog to sign hash only
- Updated export format labels to clarify what's being signed:
  "SHA256 Hash (timestamp:content)" and "GPG Signature of Hash"

**Documentation (NEW):**

Added comprehensive "Cryptographic Integrity & Chain of Custody" section
to README.md explaining:
- Layer 1: Note-level integrity (hash + signature)
- Layer 2: Export-level integrity (document signature)
- First-run GPG setup wizard
- Internal verification workflow (TUI symbols: ✓/✗/?)
- External verification workflow (court/auditor use case)
- Step-by-step verification instructions
- Cryptographic trust model diagram
- Security considerations and limitations

Added "CRYPTOGRAPHIC INTEGRITY" section to in-app help (press ?):
- Explains dual-layer signing approach
- Shows verification symbol meanings
- Documents 'v' key for verification details
- External verification command

**Verification Workflow:**
1. Investigator: trace --export + gpg --armor --export
2. Recipient: gpg --import pubkey.asc
3. Document: gpg --verify export.md
4. Individual notes: Extract signature blocks and verify

Files modified:
- README.md: +175 lines of documentation
- trace/cli.py: Sign hash only, update labels
- trace/tui_app.py: Sign hash only, add help section
2025-12-13 21:15:58 +00:00
Claude
9248799e79 Add GPG signing for entire markdown exports
When exporting to markdown (--export), the entire export document is now
signed with GPG if signing is enabled in settings.

Features:
- Builds export content in memory before signing
- Signs the complete document as one GPG clearsigned block
- Individual note signatures are preserved within the export
- Provides two layers of verification:
  1. Document-level: Verifies entire export hasn't been modified
  2. Note-level: Verifies individual notes haven't been tampered with

Verification workflow:
- Entire export: gpg --verify export.md
- Individual notes: Extract signature blocks and verify separately

Changes:
- Renamed write_note() to format_note_for_export() returning string
- Export content built in memory before file write
- Signs complete export if pgp_enabled=True
- Shows verification instructions after successful export
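A minimal sketch of the build-in-memory-then-sign approach, assuming GPG is invoked as a subprocess with the 10s timeout mentioned in an earlier commit; function and parameter names are illustrative, not the actual `format_note_for_export()` API.

```python
import shutil
import subprocess

def build_and_sign_export(notes, pgp_enabled: bool) -> str:
    """Assemble the markdown export in memory, then clearsign the whole
    document if signing is enabled and gpg is available on PATH."""
    body = "\n\n".join(f"- {n}" for n in notes)  # stand-in for real note formatting
    if pgp_enabled and shutil.which("gpg"):
        proc = subprocess.run(
            ["gpg", "--clearsign"],
            input=body.encode("utf-8"),
            capture_output=True,
            timeout=10,  # guard against GPG hangs
        )
        if proc.returncode == 0:
            return proc.stdout.decode("utf-8")
    return body  # graceful fallback: unsigned export

export = build_and_sign_export(["note one", "note two"], pgp_enabled=False)
```

Signing the finished document (rather than the file after writing) keeps individual note signatures intact inside the clearsigned block.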

Example output:
  ✓ Export signed with GPG
  ✓ Exported to case-2024-001.md

  To verify the export:
    gpg --verify case-2024-001.md
2025-12-13 19:42:12 +00:00
Claude
96309319b9 Add GPG signature verification and first-run setup wizard
This commit adds comprehensive GPG signature verification functionality
and a first-run setup wizard for configuring GPG signing:

**GPG Verification Features:**
- Added `Crypto.verify_signature()` to verify GPG clearsigned messages
- Added `Crypto.is_gpg_available()` to detect GPG installation
- Added `Note.verify_signature()` method to verify note signatures
- Verification returns status (verified/failed/unsigned) and signer info
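The verified/failed/unsigned triage can be sketched like this. It is a hypothetical reimplementation, not the project's `Crypto.verify_signature()`; the stderr parsing of GPG's "Good signature from" line is an assumption about gpg's human-readable output.

```python
import shutil
import subprocess

def verify_signature(clearsigned):
    """Return (status, signer) for a GPG clearsigned message.
    status is one of "verified", "failed", "unsigned"."""
    if not clearsigned:
        return ("unsigned", None)
    if not shutil.which("gpg"):
        return ("failed", None)  # cannot verify without gpg installed
    proc = subprocess.run(["gpg", "--verify"],
                          input=clearsigned.encode("utf-8"),
                          capture_output=True, timeout=10)
    if proc.returncode != 0:
        return ("failed", None)
    out = proc.stderr.decode("utf-8", errors="replace")
    marker = 'Good signature from "'
    signer = out.split(marker)[1].split('"')[0] if marker in out else None
    return ("verified", signer)

status, signer = verify_signature(None)
```

The three return states map directly onto the TUI symbols: ✓ verified, ✗ failed, ? unsigned.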

**TUI Enhancements:**
- Display verification symbols in note lists: ✓ (verified), ✗ (failed), ? (unsigned)
- Updated note detail view to show verification status with signer information
- Added 'v' key binding in note detail view to trigger verification dialog
- Verification dialog shows detailed status and helpful error messages

**First-Run Wizard:**
- Created `gpg_wizard.py` module with interactive setup wizard
- Wizard runs on first application startup (when settings.json doesn't exist)
- Detects GPG availability and informs user if not installed
- Lists available secret keys and allows user to select signing key
- Gracefully handles missing GPG or no available keys
- Settings can be manually edited later via ~/.trace/settings.json

**Implementation Details:**
- GPG key ID is now stored in settings as `gpg_key_id`
- All note displays show verification status for better chain-of-custody
- External verification possible via standard GPG tools on exported notes
- Follows existing codebase patterns (atomic writes, graceful degradation)

Files modified:
- trace/crypto.py: Added verification and availability check functions
- trace/models/__init__.py: Added Note.verify_signature() method
- trace/gpg_wizard.py: New first-run setup wizard module
- trace/cli.py: Integrated wizard before TUI launch
- trace/tui_app.py: Added verification display and dialog
2025-12-13 19:28:01 +00:00
overcuriousity
6e4bb9b265 Merge pull request #9 from overcuriousity/claude/restructure-for-ai-agents-01VkkJKiFXNXajfM6DUohVfG
claude/restructure-for-ai-agents-01VkkJKiFXNXajfM6DUohVfG
2025-12-13 19:56:25 +01:00
Claude
d3e3383fc6 Fix settings dialog: increase height to show Save button
The settings dialog window height was too small (12 lines), causing the
footer to overlap with the 'Save' option at position 10. Users couldn't
see or select the Save button, preventing GPG key configuration from
being persisted.

Changes:
- Increased window height from 12 to 15 lines
- Adjusted y position to keep dialog centered
- Now all 4 options (GPG Signing, Select GPG Key, Save, Cancel) are
  fully visible with the footer below them

This was a pre-existing UI bug, not introduced by the restructuring.
2025-12-13 18:54:51 +00:00
Claude
eec759aafb Fix import error: rename tui.py to tui_app.py to avoid package naming conflict
Resolved naming conflict between trace/tui.py (file) and trace/tui/ (package).
Python prioritizes packages over modules with the same name, causing import failures.

Changes:
- Renamed trace/tui.py to trace/tui_app.py
- Updated trace/cli.py to import from tui_app
- Updated trace/tui/__init__.py to re-export from tui_app for backward compatibility

This allows both direct imports (from trace.tui_app) and package imports (from trace.tui)
to work correctly, maintaining backward compatibility while supporting the new modular structure.
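The underlying shadowing behavior can be reproduced in isolation. This sketch (using a hypothetical `shadowdemo` package) shows that a package directory wins over a same-named module file, which is exactly why `trace/tui.py` became unreachable once `trace/tui/` existed:

```python
import importlib
import os
import sys
import tempfile

# Build a package directory and a same-named module file side by side.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "shadowdemo")
os.makedirs(os.path.join(pkg, "tui"))
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "tui", "__init__.py"), "w") as f:
    f.write("KIND = 'package'\n")
with open(os.path.join(pkg, "tui.py"), "w") as f:
    f.write("KIND = 'module'\n")

sys.path.insert(0, root)
# Python's import machinery resolves the package, silently shadowing tui.py.
kind = importlib.import_module("shadowdemo.tui").KIND
```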
2025-12-13 18:05:12 +00:00
Claude
b6387f4b0c Restructure codebase for AI agent optimization
Major refactoring to organize code into focused, single-responsibility modules
that are easier for AI coding agents and developers to navigate and modify.

**Module Reorganization:**

Models Package (trace/models/):
- Moved models.py content into models/__init__.py
- Extracted IOC extraction into models/extractors/ioc_extractor.py (236 lines)
- Extracted tag extraction into models/extractors/tag_extractor.py (34 lines)
- Reduced duplication and improved maintainability

Storage Package (trace/storage_impl/):
- Split storage.py (402 lines) into focused modules:
  - storage.py: Main Storage class (112 lines)
  - state_manager.py: StateManager for context/settings (92 lines)
  - lock_manager.py: Cross-platform file locking (87 lines)
  - demo_data.py: Demo case creation (143 lines)
- Added backward-compatible wrapper at trace/storage.py

TUI Utilities (trace/tui/):
- Created rendering package:
  - colors.py: Color pair constants and initialization (43 lines)
  - text_renderer.py: Text rendering with highlighting (137 lines)
- Created handlers package:
  - export_handler.py: Export functionality (238 lines)
- Main tui.py (3307 lines) remains for future refactoring

**Benefits:**
- Smaller, focused files (most < 250 lines)
- Clear single responsibilities
- Easier to locate and modify specific functionality
- Better separation of concerns
- Reduced cognitive load for AI agents
- All tests pass, no features removed

**Testing:**
- All existing tests pass
- Imports verified
- CLI and storage functionality tested
- Backward compatibility maintained

Updated CLAUDE.md to document new architecture and AI optimization strategy.
2025-12-13 17:38:53 +00:00
overcuriousity
09729ee7a3 Merge pull request #8 from overcuriousity/claude/debug-code-issues-01ANayVVF2LaNAabfcL6G49y
Improve navigation: remember selected index when going back
2025-12-13 18:25:59 +01:00
Claude
68834858e0 Improve navigation: remember selected index when going back
When navigating between views (case list -> case detail -> evidence detail)
and pressing 'b' to go back, the previously selected item is now restored
instead of always jumping to the top of the list.

Implementation:
- Added nav_history dict to track selected indices per view/context
- _save_nav_position() saves current index before navigating away
- _restore_nav_position() restores index when returning to a view
- Works across all navigation paths: cases, evidence, tags, IOCs, notes

This improves UX by maintaining user context during navigation.

Location: trace/tui.py
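The save/restore mechanism above can be sketched as a small class; names are illustrative rather than the actual `_save_nav_position()`/`_restore_nav_position()` helpers, and the clamping step guards against items deleted while the user was in a child view.

```python
class NavHistory:
    """Remember the selected index per (view, context) so 'b' restores it."""

    def __init__(self):
        self.positions = {}  # (view_name, context_id) -> selected index

    def save(self, view: str, context_id: str, index: int) -> None:
        self.positions[(view, context_id)] = index

    def restore(self, view: str, context_id: str, list_len: int) -> int:
        # Clamp in case items were deleted since we left the view.
        idx = self.positions.get((view, context_id), 0)
        return min(idx, max(list_len - 1, 0))

nav = NavHistory()
nav.save("case_detail", "case-1", 5)
restored = nav.restore("case_detail", "case-1", 3)  # list shrank to 3 items
```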
2025-12-13 17:24:57 +00:00
overcuriousity
5fdf6d0aba Merge pull request #7 from overcuriousity/claude/debug-code-issues-01ANayVVF2LaNAabfcL6G49y
Fix critical bugs and improve data integrity across codebase
2025-12-13 18:18:44 +01:00
Claude
2453bd4f2a Fix critical bugs and improve data integrity across codebase
This commit addresses 20 bugs discovered during comprehensive code review,
focusing on data integrity, concurrent access, and user experience.

CRITICAL FIXES:
- Fix GPG key listing to support keys with multiple UIDs (crypto.py:40)
- Implement cross-platform file locking to prevent concurrent access corruption (storage.py)
- Fix evidence detail delete logic that could delete wrong note (tui.py:2481-2497)
- Add corrupted JSON handling with user prompt and automatic backup (storage.py, tui.py)
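The cross-platform locking fix can be sketched as below, assuming the conventional `fcntl` (POSIX) / `msvcrt` (Windows) split; function names are illustrative, not the actual lock_manager API.

```python
import os
import sys
import tempfile

if sys.platform == "win32":
    import msvcrt

    def lock_file(fh):
        msvcrt.locking(fh.fileno(), msvcrt.LK_NBLCK, 1)  # lock 1 byte, non-blocking

    def unlock_file(fh):
        msvcrt.locking(fh.fileno(), msvcrt.LK_UNLCK, 1)
else:
    import fcntl

    def lock_file(fh):
        fcntl.flock(fh.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)  # raises if held elsewhere

    def unlock_file(fh):
        fcntl.flock(fh.fileno(), fcntl.LOCK_UN)

# Demonstrate acquiring and releasing the lock on a scratch file.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w+") as fh:
    lock_file(fh)
    unlock_file(fh)
locked_ok = True
```

A second process attempting `lock_file` while the lock is held would get an immediate error instead of silently corrupting `data.json`.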

DATA INTEGRITY:
- Fix IOC/Hash pattern false positives by checking longest hashes first (models.py:32-95)
- Fix URL pattern to exclude trailing punctuation (models.py:81, 152, 216)
- Improve IOC overlap detection with proper range tracking (models.py)
- Fix note deletion to use note_id instead of object identity (tui.py:2498-2619)
- Add state validation to detect and clear orphaned references (storage.py:355-384)
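The longest-hashes-first rule with range tracking can be sketched as follows (an illustrative simplification, not the project's extractor): every 64-character hex string contains 40- and 32-character hex substrings, so SHA256 must be claimed before SHA1 and MD5 get a chance.

```python
import re

HASH_PATTERNS = [
    ("sha256", re.compile(r"\b[a-fA-F0-9]{64}\b")),
    ("sha1",   re.compile(r"\b[a-fA-F0-9]{40}\b")),
    ("md5",    re.compile(r"\b[a-fA-F0-9]{32}\b")),
]

def extract_hashes(text: str):
    found, claimed = [], []  # claimed: (start, end) ranges already matched
    for kind, pattern in HASH_PATTERNS:
        for m in pattern.finditer(text):
            # Skip matches overlapping a range a longer hash already claimed.
            if not any(s < m.end() and m.start() < e for s, e in claimed):
                claimed.append((m.start(), m.end()))
                found.append((kind, m.group()))
    return found

hashes = extract_hashes("artifact: " + "a" * 64)
```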

SCROLLING & NAVIGATION:
- Fix evidence detail view to support full scrolling instead of "last N" (tui.py:816-847)
- Fix filter reset index bounds bug (tui.py:1581-1654)
- Add scroll_offset validation after all operations (tui.py:1608-1654)
- Fix division by zero in scroll calculations (tui.py:446-478)
- Validate selection bounds across all views (tui.py:_validate_selection_bounds)
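The scroll clamping and zero-guard described above might look like this sketch (illustrative, not the actual tui.py code):

```python
def scroll_window(selected: int, total: int, visible: int) -> int:
    """Clamp the scroll offset so the selected row stays on screen,
    and never divide or index when the list or window is empty."""
    if total <= 0 or visible <= 0:
        return 0  # guards the division-by-zero / empty-list case
    # Keep the selection roughly centered, clamped to valid offsets.
    offset = min(selected - visible // 2, total - visible)
    return max(0, offset)

off = scroll_window(selected=10, total=20, visible=5)
```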

EXPORT & CLI:
- Fix multi-line note export with proper markdown indentation (cli.py:129-143)
- Add stderr warnings for GPG signature failures (cli.py:61, 63)
- Validate active context and show warnings in CLI (cli.py:12-44)

TESTING:
- Update tests to support new lock file mechanism (test_models.py)
- All existing tests pass with new changes

Breaking changes: None
Backward compatible: Yes (existing data files work unchanged)
2025-12-13 16:16:54 +00:00
overcuriousity
ba7a8fdd5d Delete CONSISTENCY_ANALYSIS.md 2025-12-13 15:57:32 +00:00
overcuriousity
107feaf560 Merge pull request #6 from overcuriousity/claude/fix-note-details-access-01XmzZ6wE2NUAXba8tF6TrxD
Fix filter-related index bugs across all filterable views
2025-12-13 16:57:02 +01:00
Claude
d97207633b Fix filter-related index bugs across all filterable views
When adding filter support, the display logic correctly used filtered
lists but the interaction handlers (Enter, 'v', 'd' keys) and navigation
(max_idx calculations) still used unfiltered lists. This caused:
- Wrong item selected when filter active
- Potential out-of-bounds errors
- Inconsistent behavior between display and action

Fixed in all affected views:

1. evidence_detail:
   - Enter key navigation now uses filtered notes
   - 'v' key (notes modal) now uses filtered notes
   - Delete handler now uses filtered notes
   - max_idx navigation now uses filtered notes

2. tag_notes_list:
   - Enter key navigation now uses filtered notes
   - Delete handler now uses filtered notes
   - max_idx navigation now uses filtered notes

3. ioc_notes_list:
   - Enter key navigation now uses filtered notes
   - Delete handler now uses filtered notes
   - max_idx navigation now uses filtered notes

4. tags_list:
   - Enter key navigation now uses filtered tags
   - max_idx navigation now uses filtered tags

5. ioc_list:
   - Enter key navigation now uses filtered IOCs
   - max_idx navigation now uses filtered IOCs

All views now consistently respect active filters across display,
navigation, and interaction handlers.
2025-12-13 15:55:31 +00:00
overcuriousity
7df42cb811 Merge pull request #5 from overcuriousity/claude/fix-note-details-access-01XmzZ6wE2NUAXba8tF6TrxD
Fix case_detail delete logic order bug
2025-12-13 16:52:03 +01:00
Claude
71ae0eef35 Fix case_detail delete logic order bug
The delete handler was checking case notes first, then evidence,
but the display order is evidence first, then notes. This caused:
- Selecting evidence + pressing 'd' -> asked to delete a note
- Selecting note + pressing 'd' -> asked to delete evidence

Fixed by reordering the checks to match display order:
1. Check if index < len(evidence) -> delete evidence
2. Otherwise check if in notes range -> delete note

Now delete correctly targets the selected item type.
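The check-order fix amounts to resolving a flat selection index in the same order the list is drawn. A sketch (illustrative names, not the actual handler):

```python
def item_at(index: int, evidence: list, notes: list):
    """Resolve a selection index in display order: evidence first, then notes."""
    if index < len(evidence):
        return ("evidence", evidence[index])
    note_index = index - len(evidence)
    if note_index < len(notes):
        return ("note", notes[note_index])
    return (None, None)  # index out of range

# Two evidence items are drawn first, so index 2 is the first note.
kind, item = item_at(2, ["USB stick", "laptop"], ["note A", "note B"])
```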
2025-12-13 15:50:51 +00:00
overcuriousity
d94901a41d Merge pull request #4 from overcuriousity/claude/fix-note-details-access-01XmzZ6wE2NUAXba8tF6TrxD
claude/fix-note-details-access-01XmzZ6wE2NUAXba8tF6TrxD
2025-12-13 16:46:25 +01:00
Claude
ec5a3d9f31 Add comprehensive menu consistency improvements
Based on UX analysis, added three major improvements:

1. Context-sensitive filtering everywhere ('/' key):
   - evidence_detail: Filter notes by content
   - tags_list: Filter tags by name
   - tag_notes_list: Filter notes by content
   - ioc_list: Filter IOCs by value or type
   - ioc_notes_list: Filter notes by content
   - All views show active filter in footer

2. Extended delete support ('d' key):
   - note_detail: Delete current note and return to previous view
   - tag_notes_list: Delete selected note from filtered view
   - ioc_notes_list: Delete selected note from filtered view
   - Finds and removes note from parent case/evidence

3. Markdown export for case/evidence ('e' key):
   - case_detail: Export entire case + all evidence to markdown
   - evidence_detail: Export single evidence item to markdown
   - Files saved to ~/.trace/exports/ with timestamps
   - Complements existing IOC export functionality

All changes maintain consistent UX patterns across views and
provide clear feedback through updated footers showing available
actions in each context.
2025-12-13 15:43:32 +00:00
Claude
ac7e442970 Add UX consistency analysis documentation
Documents findings from comprehensive menu interface review across all
9 views in the TUI. Identifies inconsistencies with filter, delete,
and export functionality.

Clarifications from user:
- 'n' and 'a' keys correctly limited to case/evidence contexts
- Filter should work everywhere (context-sensitive)
- Delete should work for all note views, not tag/IOC lists
- Export should extend to case/evidence markdown exports
2025-12-13 15:35:39 +00:00
Claude
b973aa1009 Make note navigation consistent across case_detail and evidence_detail
Both views now support dual navigation options:
- Enter: Opens note_detail view (single note focus)
- 'v': Opens notes modal with selected note highlighted

Previously, case_detail would open the notes modal from the beginning
even when a case note was selected. Now it intelligently jumps to the
selected case note, matching the behavior in evidence_detail view.

This provides a consistent, predictable user experience across both
views where notes can be selected and viewed.
2025-12-13 15:22:59 +00:00
Claude
461da25c93 Add dual note navigation options in evidence_detail view
Now provides two ways to access notes in evidence_detail view:
- Enter: Opens note_detail view (single note focus)
- 'v': Opens notes modal with selected note highlighted (all notes)

The 'v' key now intelligently jumps to the selected note when
opening the modal, providing context while still showing all notes.
This gives users flexibility in how they want to view their notes.
2025-12-13 15:20:59 +00:00
Claude
b61b818952 Fix inconsistent note detail access in evidence_detail view
Previously, pressing Enter on a note in evidence_detail view would
open the notes modal (same as 'v' key), while in other views
(case_detail, tag_notes_list, ioc_notes_list) it would open the
note_detail view. This created confusing and inconsistent behavior.

Now Enter consistently opens note_detail view across all contexts:
- Enter: Opens note detail view (full note content)
- 'v': Opens notes modal (scrollable list of all notes)

This aligns the implementation with the help text which already
documented the correct behavior.
2025-12-13 15:06:16 +00:00
overcuriousity
fa90aeb063 Merge pull request #3 from overcuriousity/claude/fix-umlaut-support-01NVHwQkPasDs5ZPySfRqZgE
Fix UTF-8/umlaut support in note input
2025-12-13 16:02:15 +01:00
Claude
e38b018e41 Fix UTF-8/umlaut support in note input
Added full Unicode character support to the TUI's multiline input dialog.
Previously, only ASCII characters (32-126) were captured when typing notes.

Changes:
- Added UTF-8 multibyte character handling to _multiline_input_dialog()
- Properly collects and decodes 2/3/4-byte UTF-8 sequences
- Added explicit UTF-8 encoding to all file I/O operations
- Added ensure_ascii=False to JSON serialization for proper Unicode preservation

This fix allows users to enter umlauts (ä, ö, ü), accented characters
(é, à, ñ), and other Unicode characters in notes, case names, and all
other text input fields.

Tested with German umlauts, Asian characters, Cyrillic, and special chars.
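The multibyte collection step can be sketched like this. It is a simplified illustration of decoding raw byte input (as curses `getch()` delivers it), not the actual `_multiline_input_dialog()` code:

```python
def utf8_sequence_length(first_byte: int) -> int:
    """How many bytes the UTF-8 sequence starting with first_byte occupies."""
    if first_byte < 0x80:
        return 1  # plain ASCII
    if first_byte < 0xC0:
        return 0  # continuation byte, never a sequence start
    if first_byte < 0xE0:
        return 2
    if first_byte < 0xF0:
        return 3
    return 4

def read_char(byte_stream) -> str:
    """Collect one full UTF-8 sequence from a byte iterator and decode it."""
    first = next(byte_stream)
    n = utf8_sequence_length(first)
    buf = bytes([first] + [next(byte_stream) for _ in range(n - 1)])
    return buf.decode("utf-8")

# "ä" is the 2-byte sequence 0xC3 0xA4; both bytes must be collected
# before decoding -- capturing only 32-126 drops it entirely.
ch = read_char(iter("ä".encode("utf-8")))
```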
2025-12-13 11:42:37 +00:00
overcuriousity
3c53969b45 Merge pull request #2 from overcuriousity/claude/code-review-bugs-011Ga38jN53dLDkW2sLfzG1a
Review code for bugs and issues
2025-12-12 21:43:09 +01:00
Claude
a829275ce0 Fix all identified bugs and issues
Critical Fixes:
- Fixed IOC extraction order: URLs now checked before domains to prevent duplicates
- Fixed overlapping IOC highlights with overlap detection
- Fixed IPv4 pattern to validate octets (0-255) preventing invalid IPs like 999.999.999.999
- Fixed IPv6 pattern to support compressed format (::)
- Fixed hash extraction order: SHA256 -> SHA1 -> MD5 to prevent misclassification
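The octet-validating IPv4 pattern can be sketched as below (an illustrative regex, not necessarily the exact one in models.py): each octet is constrained to 0-255, so strings like 999.999.999.999 no longer match.

```python
import re

# One octet: 250-255, 200-249, 100-199, or 0-99 without leading-zero padding.
OCTET = r"(?:25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])"
IPV4_RE = re.compile(rf"\b{OCTET}(?:\.{OCTET}){{3}}\b")

valid = bool(IPV4_RE.search("beacon to 192.168.1.55"))
invalid = bool(IPV4_RE.search("noise 999.999.999.999"))
```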

High Priority Fixes:
- Added 10s timeout to all GPG subprocess calls to prevent hangs
- Fixed indentation inconsistency in storage.py:253

Performance Improvements:
- Removed 8 time.sleep(0.1) calls from demo case creation (800ms faster startup)

Robustness Improvements:
- Added error handling to export_markdown() for IOError/OSError/PermissionError
- Implemented atomic writes for state file (set_active)
- Implemented atomic writes for settings file (set_setting)

All changes tested and verified with unit tests.
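The atomic-write pattern mentioned above (also used for `data.json`) is the standard temp-file-plus-rename idiom; this is a generic sketch, not the project's code:

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_write_json(path: Path, data: dict) -> None:
    """Write to a temp file in the same directory, then rename over the
    target -- readers never observe a half-written file."""
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False)
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)  # clean up the temp file on failure
        raise

target = Path(tempfile.mkdtemp()) / "settings.json"
atomic_write_json(target, {"pgp_enabled": True})
```

The temp file must live in the same directory as the target, since `os.replace` is only atomic within a filesystem.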
2025-12-12 20:24:45 +00:00
Claude
e59f7be3e4 Add comprehensive bug report from code review
Found 11 bugs/issues across the codebase:
- 3 critical: IOC extraction order, overlapping highlights, invalid IPv4
- 2 high priority: subprocess timeout, indentation error
- 3 medium: slow startup, missing error handling, IPv6 pattern
- 3 low: hash classification, non-atomic state writes, non-atomic settings writes

Detailed report includes line numbers, impact analysis, and fixes.
2025-12-12 20:15:23 +00:00
overcuriousity
dc16a16d49 Merge pull request #1 from overcuriousity/claude/add-install-instructions-015g7n4vPZWeuAUYrHmM74hU
Add binary installation instructions to README
2025-12-12 12:30:51 +01:00
Claude
e4976c81f9 Add optional ultra-fast alias setup for quick logging 2025-12-12 11:28:08 +00:00
Claude
b627f92172 Add installation instructions for latest release binaries 2025-12-12 11:24:52 +00:00
overcuriousity
4c99013426 disclaimer
Added a disclaimer about the coding process and agents used.
2025-12-12 10:21:12 +00:00
overcuriousity
f80a343610 clearer readme
Updated README to reflect new project name and features.
2025-12-12 10:13:41 +00:00
24 changed files with 2885 additions and 885 deletions

CLAUDE.md

@@ -52,18 +52,30 @@ The application uses a three-level hierarchy:
 Each level has unique IDs (UUIDs) for reliable lookups across the hierarchy.
-### Core Modules
+### Modular Structure (Optimized for AI Coding Agents)
-**`trace/models.py`**: Data models using dataclasses
-- `Note`: Content + timestamp + SHA256 hash + optional GPG signature + auto-extracted tags/IOCs
-- `Evidence`: Container for notes about a specific piece of evidence, includes metadata dict for source hashes
-- `Case`: Top-level container with case number, investigator, evidence list, and notes
+The codebase is organized into focused, single-responsibility modules to make it easier for AI agents and developers to navigate, understand, and modify specific functionality:
+**`trace/models/`**: Data models package
+- `__init__.py`: Main model classes (Note, Evidence, Case) with dataclass definitions
+- `extractors/tag_extractor.py`: Tag extraction logic (hashtag parsing)
+- `extractors/ioc_extractor.py`: IOC extraction logic (IPs, domains, URLs, hashes, emails)
 - All models implement `to_dict()`/`from_dict()` for JSON serialization
+- Models use extractors for automatic tag and IOC detection
-**`trace/storage.py`**: Persistence layer
-- `Storage`: Manages `~/.trace/data.json` with atomic writes (temp file + rename)
-- `StateManager`: Manages `~/.trace/state` (active case/evidence) and `~/.trace/settings.json` (PGP enabled/disabled)
-- Data is loaded into memory on init, modified, then saved atomically
+**`trace/storage_impl/`**: Storage implementation package
+- `storage.py`: Main Storage class managing `~/.trace/data.json` with atomic writes
+- `state_manager.py`: StateManager for active context and settings persistence
+- `lock_manager.py`: Cross-platform file locking to prevent concurrent access
+- `demo_data.py`: Demo case creation for first-time users
+- Backward compatible via `trace/storage.py` wrapper
+**`trace/tui/`**: Text User Interface package
+- `tui.py`: Main TUI class with view hierarchy and event loop (3307 lines - target for future refactoring)
+- `rendering/colors.py`: Color pair initialization and constants
+- `rendering/text_renderer.py`: Text rendering with IOC/tag highlighting
+- `handlers/export_handler.py`: Export functionality (IOCs, markdown reports)
+- Future refactoring will extract views, dialogs, and input handlers
 **`trace/crypto.py`**: Integrity features
 - `sign_content()`: GPG clearsign via subprocess (falls back gracefully if GPG unavailable)
@@ -74,13 +86,6 @@ Each level has unique IDs (UUIDs) for reliable lookups across the hierarchy.
 - `export_markdown()`: Generates full case report with hashes and signatures
 - `main()`: Argument parsing, routes to TUI or CLI functions
-**`trace/tui.py`**: Curses-based Text User Interface
-- View hierarchy: case_list → case_detail → evidence_detail
-- Additional views: tags_list, tag_notes_list, ioc_list, ioc_notes_list, note_detail
-- Multi-line note editor with Ctrl+G to submit, Esc to cancel
-- Filter mode (press `/`), active context management (press `a`)
-- All note additions automatically extract tags (#hashtag) and IOCs (IPs, domains, URLs, hashes, emails)
 ### Key Features Implementation
 **Integrity System**: Every note automatically gets:
@@ -129,3 +134,33 @@ temp_file.replace(self.data_file)
 ## Testing Notes
 Tests use temporary directories created with `tempfile.mkdtemp()` and cleaned up in `tearDown()` to avoid polluting `~/.trace/`.
+## AI Agent Optimization
+The codebase has been restructured to be optimal for AI coding agents:
+### Module Organization Benefits
+- **Focused Files**: Each module has a single, clear responsibility (50-250 lines typically)
+- **Easy Navigation**: Functionality is easy to locate by purpose (e.g., IOC extraction, export handlers)
+- **Independent Modification**: Changes to one module rarely affect others
+- **Clear Interfaces**: Modules communicate through well-defined imports
+- **Reduced Context**: AI agents can focus on relevant files without loading massive monoliths
+### File Size Guidelines
+- **Small modules** (< 150 lines): Ideal for focused tasks
+- **Medium modules** (150-300 lines): Acceptable for cohesive functionality
+- **Large modules** (> 500 lines): Consider refactoring into smaller components
+- **Very large modules** (> 1000 lines): Priority target for extraction and modularization
+### Current Status
+- ✅ Models: Organized into package with extractors separated
+- ✅ Storage: Split into focused modules (storage, state, locking, demo data)
+- ✅ TUI Utilities: Rendering and export handlers extracted
+- ⏳ TUI Main: Still monolithic (3307 lines) - future refactoring needed
+### Future Refactoring Targets
+The `trace/tui.py` file (3307 lines) should be further split into:
+- `tui/views/` - Individual view classes (case list, evidence detail, etc.)
+- `tui/dialogs/` - Dialog functions (input, confirm, settings, etc.)
+- `tui/handlers/` - Input and navigation handlers
+- `tui/app.py` - Main TUI orchestration class

README.md

@@ -1,158 +1,347 @@
# trace - Forensic Notes # trace - Digital Evidence Log Utility
`trace` is a minimal, terminal-based forensic note-taking application designed for digital investigators and incident responders. It provides a secure, integrity-focused environment for case management and evidence logging. `trace` is a bare-bones, terminal-centric note-taking utility for digital forensics and incident response. It is designed for maximum operational efficiency, ensuring that the integrity of your log data is never compromised by the need to slow down.
## Features This tool mandates minimal system overhead, relying solely on standard libraries where possible.
* **Integrity Focused:** ## ⚡ Key Feature: Hot Logging (CLI Shorthand)
* **Hashing:** Every note is automatically SHA256 hashed (content + timestamp).
* **Signing:** Optional GPG signing of notes for non-repudiation (requires system `gpg`).
* **Minimal Dependencies:** Written in Python using only the standard library (`curses`, `json`, `sqlite3` avoided, etc.) + `pyinstaller` for packaging.
* **Dual Interface:**
* **TUI (Text User Interface):** Interactive browsing of Cases and Evidence hierarchies with multi-line note editor.
* **CLI Shorthand:** Quickly append notes to the currently active Case/Evidence from your shell (`trace "Found a USB key"`).
* **Multi-Line Notes:** Full-featured text editor supports detailed forensic observations with multiple lines, arrow key navigation, and scrolling.
* **Evidence Source Hashing:** Optionally store source hash values (e.g., SHA256) as metadata when creating evidence items for chain of custody tracking.
* **Tag System:** Organize notes with hashtags (e.g., `#malware #windows #critical`). View tags sorted by usage, filter notes by tag, and navigate tagged notes with full context.
* **IOC Detection:** Automatically extracts Indicators of Compromise (IPs, domains, URLs, hashes, emails) from notes. View, filter, and export IOCs with occurrence counts and context separators.
* **Context Awareness:** Set an "Active" context in the TUI, which persists across sessions for CLI note taking. Recent notes displayed inline for reference.
* **Filtering:** Quickly filter Cases and Evidence lists (press `/`).
* **Export:** Export all data to a formatted Markdown report with verification details, including evidence source hashes.
## Installation The primary operational benefit of `trace` is its ability to accept input directly from the command line, bypassing the full interface. Once your active target context is set, you can drop notes instantly.
### From Source **Configuration:** Use the TUI to set a Case or Evidence ID as "Active" (`a`). This state persists across sessions.
Requires Python 3.x. **Syntax for Data Injection:**
```bash ```bash
git clone <repository_url> # Log an immediate status update
cd trace trace "IR team gained shell access. Initial persistence checks running."
# Run directly
python3 main.py # Log data and tag it for later triage
trace "Observed outbound connection to 192.168.1.55 on port 80. #suspicious #network"
``` ```
### Building Binary **System Integrity Chain:** Each command-line note is immediately stamped, concatenated with its content, and hashed using SHA256 before storage. This ensures a non-repudiable log entry.
You can build a single-file executable using PyInstaller. ## Installation & Deployment
#### Linux/macOS ### Quick Install from Latest Release
**Linux / macOS:**
```bash ```bash
pip install -r requirements.txt curl -L https://github.com/overcuriousity/trace/releases/latest/download/trace -o trace && sudo mv trace /usr/local/bin/ && sudo chmod +x /usr/local/bin/trace
./build_binary.sh
# Binary will be in dist/trace
./dist/trace
``` ```
#### Windows **Windows (PowerShell):**
```powershell ```powershell
# Install dependencies (includes windows-curses) Invoke-WebRequest -Uri "https://github.com/overcuriousity/trace/releases/latest/download/trace.exe" -OutFile "$env:USERPROFILE\bin\trace.exe"; [Environment]::SetEnvironmentVariable("Path", $env:Path + ";$env:USERPROFILE\bin", "User")
pip install -r requirements.txt
# Build the executable
pyinstaller --onefile --name trace --clean --paths . --hidden-import curses main.py
# Binary will be in dist\trace.exe
.\dist\trace.exe
``` ```
*Note: Create `$env:USERPROFILE\bin` directory first if it doesn't exist, then restart your shell.*
**Optional: Create Ultra-Fast Alias**

For maximum speed when logging, create a single-character alias:

**Linux / macOS (Bash):**

```bash
echo 'alias t="trace"' >> ~/.bashrc && source ~/.bashrc
```

**Linux / macOS (Zsh):**

```bash
echo 'alias t="trace"' >> ~/.zshrc && source ~/.zshrc
```

**Windows (PowerShell):**

```powershell
New-Item -ItemType File -Force $PROFILE; Add-Content $PROFILE 'function t { trace $args }'; . $PROFILE
```

After this, you can log with just: `t "Your note here"`

---

## Usage

### TUI Mode
Run `trace` without arguments to open the interface.
**Navigation:**
* `Arrow Keys`: Navigate lists.
* `Enter`: Select Case / View Evidence details.
* `b`: Back.
* `q`: Quit.
**Management:**
* `n`: Add a Note to the current context (works in any view).
* **Multi-line support**: Notes can span multiple lines - press `Enter` for new lines.
* **Tagging**: Use hashtags in your notes (e.g., `#malware #critical`) for organization.
* Press `Ctrl+G` to submit the note, or `Esc` to cancel.
* Recent notes are displayed inline for context (non-blocking).
* `N` (Shift+n): New Case (in Case List) or New Evidence (in Case Detail).
* `t`: **Tags View**. Browse all tags in the current context (case or evidence), sorted by usage count.
* Press `Enter` on a tag to see all notes with that tag.
* Press `Enter` on a note to view full details with tag highlighting.
* Navigate back with `b`.
* `i`: **IOCs View**. View all Indicators of Compromise extracted from notes in the current context.
* Shows IOC types (IPv4, domain, URL, hash, email) with occurrence counts.
* Press `Enter` on an IOC to see all notes containing it.
* Press `e` to export IOCs to `~/.trace/exports/` in plain text format.
* IOC counts are displayed in red in case and evidence views.
* `a`: **Set Active**. Sets the currently selected Case or Evidence as the global "Active" context.
* `d`: Delete the selected Case or Evidence (with confirmation).
* `v`: **View All Notes**. View all notes for the current Case or Evidence in a scrollable full-screen view.
* **IOC Highlighting**: All IOCs in notes are automatically highlighted in red for immediate visibility.
* **Tag Highlighting**: Hashtags are highlighted in cyan.
* Press `Enter` on any note in case/evidence detail view to jump directly to that note in the full view.
* The selected note will be centered and highlighted.
* Navigate with arrow keys, Page Up/Down, Home/End.
* Press `n` to add a new note without leaving the view.
* `/`: Filter list (type to search, `Esc` or `Enter` to exit filter mode).
* `s`: Settings menu (in Case List view).
* `Esc`: Cancel during input dialogs.
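The IOC extraction behind the IOCs view is plain regex matching. A minimal sketch using two of the patterns `trace` ships (IPv4 and SHA256); the function name here is illustrative:

```python
import re

# Patterns as used by trace's IOC extractor
IPV4_PATTERN = r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b'
SHA256_PATTERN = r'\b[a-fA-F0-9]{64}\b'

def find_iocs(text: str) -> list[str]:
    """Return unique IPv4 and SHA256 IOCs in first-seen order."""
    seen, iocs = set(), []
    for pattern in (IPV4_PATTERN, SHA256_PATTERN):
        for match in re.findall(pattern, text):
            if match not in seen:
                seen.add(match)
                iocs.append(match)
    return iocs
```

Duplicates are dropped but first-seen order is preserved, which is what the occurrence counts in the IOCs view are built on.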
### CLI Mode

Once a Case or Evidence is set as **Active** in the TUI, you can add notes directly from the command line:

```bash
trace "Suspect system is powered on, attempting live memory capture."
```

This note is automatically timestamped, hashed, signed, and appended to the active context.

### Platform: Linux / UNIX (including macOS)

**Prerequisites:** Python 3.x and the binary build utility (PyInstaller).

**Deployment:**
1. **Build Binary:** Execute the build script in the source directory.
```bash
./build_binary.sh
```
*The output executable will land in `dist/trace`.*
2. **Path Integration:** For universal access, the binary must reside in a directory referenced by your `$PATH` environment variable (e.g., `/usr/local/bin`).
```bash
# Place executable in system path
sudo mv dist/trace /usr/local/bin/
# Ensure execute bit is set
sudo chmod +x /usr/local/bin/trace
```
You are now cleared to run `trace` from any shell prompt.
### Platform: Windows
**Prerequisites:** Python 3.x, `pyinstaller`, and the `windows-curses` library.
**Deployment:**
1. **Build Binary:** Run the build command in a PowerShell or CMD environment.
```powershell
pyinstaller --onefile --name trace --clean --paths . --hidden-import curses main.py
```
*The executable is located at `dist\trace.exe`.*
2. **Path Integration:** The executable must be accessible via your user or system `%PATH%` variable for the hot-logging feature to function correctly.
*Option A: System Directory (Requires Administrator Privilege)*
```powershell
Copy-Item dist\trace.exe C:\Windows\System32\
```
*Option B: User-Defined Bin Directory (Recommended)*
```powershell
# Create the user bin location
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\bin"
Copy-Item dist\trace.exe "$env:USERPROFILE\bin\"
# Inject the directory into the User PATH variable
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";$env:USERPROFILE\bin", "User")
```
**ATTENTION:** You must cycle your command shell (exit and reopen) before the `trace` command will resolve correctly.
## Core Feature Breakdown
| Feature | Description | Operational Impact |
| :--- | :--- | :--- |
| **Integrity Hashing** | SHA256 applied to every log entry (content + timestamp). | **Tamper-evident logs.** Any post-entry modification is detectable. |
| **GPG Signing** | Optional PGP/GPG signature applied to notes. | **Non-repudiation** for formal evidence handling. |
| **IOC Extraction** | Automatic parsing of IPv4, FQDNs, URLs, hashes, and email addresses. | **Immediate intelligence gathering** from raw text. |
| **Tag System** | Supports `#hashtags` for classification and filtering. | **Efficient triage** of large log sets. |
| **Minimal Footprint** | Built solely on Python standard library modules. | **Maximum portability** on restricted forensic environments. |
## Cryptographic Integrity & Chain of Custody
`trace` implements a dual-layer cryptographic system designed for legal admissibility and forensic integrity:
### Layer 1: Note-Level Integrity (Always Active)
**Process:**
1. **Timestamp Generation** - Precise Unix timestamp captured at note creation
2. **Content Hashing** - SHA256 hash computed from `timestamp:content`
3. **Optional Signature** - Hash is signed with investigator's GPG private key
**Mathematical Representation:**
```
hash = SHA256(timestamp + ":" + content)
signature = GPG_Sign(hash, private_key)
```
**Security Properties:**
- **Temporal Integrity**: Timestamp is cryptographically bound to content (cannot backdate notes)
- **Tamper Detection**: Any modification to content or timestamp invalidates the hash
- **Non-Repudiation**: GPG signature proves who created the note (if signing enabled)
- **Efficient Storage**: Signing only the hash (64 hex chars) instead of full content
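The Layer 1 hash can be recomputed outside of `trace` with only the standard library. A sketch mirroring the formula above (the function name is illustrative):

```python
import hashlib

def note_hash(timestamp: float, content: str) -> str:
    # SHA256 over "timestamp:content" binds the time to the text
    data = f"{timestamp}:{content}".encode("utf-8")
    return hashlib.sha256(data).hexdigest()
```

Anyone holding the exported timestamp and content can recompute this value and compare it to the stored hash.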
### Layer 2: Export-Level Integrity (On Demand)
When exporting to markdown (`--export`), the **entire export document** is GPG-signed if signing is enabled.
**Process:**
1. Generate complete markdown export with all cases, evidence, and notes
2. Individual note signatures are preserved within the export
3. Entire document is clearsigned with GPG
**Security Properties:**
- **Document Integrity**: Proves export hasn't been modified after generation
- **Dual Verification**: Both individual notes AND complete document can be verified
- **Chain of Custody**: Establishes provenance from evidence collection through report generation
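Export-level signing amounts to clearsigning one string with the `gpg` CLI. A hedged sketch of that call, mirroring the fallback behavior described above (an empty string when GPG is unavailable); the helper name and the use of `--batch` are illustrative, not necessarily what `trace` does internally:

```python
import subprocess

def clearsign(content: str, key_id: str = None) -> str:
    """Clearsign text with GPG; return "" if signing is not possible."""
    cmd = ["gpg", "--clearsign", "--batch"]  # --batch avoids interactive prompts
    if key_id:
        cmd += ["--local-user", key_id]
    try:
        proc = subprocess.run(cmd, input=content, capture_output=True,
                              text=True, timeout=10)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return ""  # GPG not installed, or it hung (e.g. on a pinentry prompt)
    return proc.stdout if proc.returncode == 0 else ""
```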
### First-Run GPG Setup
On first launch, `trace` runs an interactive wizard to configure GPG signing:
1. **GPG Detection** - Checks if GPG is installed (gracefully continues without if missing)
2. **Key Selection** - Lists available secret keys from your GPG keyring
3. **Configuration** - Saves selected key ID to `~/.trace/settings.json`
**If GPG is not available:**
- Application continues to function normally
- Notes are hashed (SHA256) but not signed
- You can enable GPG later by editing `~/.trace/settings.json`
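For illustration, a `~/.trace/settings.json` with signing enabled might look like this (the key ID is an example value):

```json
{
  "pgp_enabled": true,
  "gpg_key_id": "ABC123DEF456"
}
```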
### Verification Workflows
#### Internal Verification (Within trace TUI)
The TUI automatically verifies signatures and displays status symbols:
- `✓` - Signature verified with public key in keyring
- `✗` - Signature verification failed (tampered or missing key)
- `?` - Note is unsigned
**To verify a specific note:**
1. Navigate to the note in TUI
2. Press `Enter` to view note details
3. Press `v` to see detailed verification information
#### External Verification (Manual/Court)
**Scenario**: Forensic investigator sends evidence to court/auditor
**Step 1 - Investigator exports evidence:**
```bash
# Export all notes with signatures
trace --export --output investigation-2024-001.md
# Export public key for verification
gpg --armor --export investigator@agency.gov > investigator-pubkey.asc
# Send both files to recipient
```
**Step 2 - Recipient verifies document:**
```bash
# Import investigator's public key
gpg --import investigator-pubkey.asc
# Verify entire export document
gpg --verify investigation-2024-001.md
```
**Expected output if valid:**
```
gpg: Signature made Mon Dec 13 14:23:45 2024
gpg: using RSA key ABC123DEF456
gpg: Good signature from "John Investigator <investigator@agency.gov>"
```
**Step 3 - Verify individual notes (optional):**
Individual note signatures are embedded in the markdown export. To verify a specific note:
1. Open `investigation-2024-001.md` in a text editor
2. Locate the note's signature block:
````
- **GPG Signature of Hash:**
  ```
  -----BEGIN PGP SIGNED MESSAGE-----
  Hash: SHA256

  a3f5b2c8d9e1f4a7b6c3d8e2f5a9b4c7d1e6f3a8b5c2d9e4f7a1b8c6d3e0f5a2
  -----BEGIN PGP SIGNATURE-----
  ...
  -----END PGP SIGNATURE-----
  ```
````
3. Extract the signature block (from `-----BEGIN PGP SIGNED MESSAGE-----` to `-----END PGP SIGNATURE-----`)
4. Save to a file and verify:
```bash
cat > note-signature.txt
<paste signature block>
Ctrl+D
gpg --verify note-signature.txt
```
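Steps 3-4 can also be scripted. A small sketch that pulls every clearsigned block out of an export, keyed to the delimiters shown above:

```python
import re

# A clearsigned block runs from the signed-message header to the signature footer
SIG_BLOCK = re.compile(
    r"-----BEGIN PGP SIGNED MESSAGE-----"
    r".*?"
    r"-----END PGP SIGNATURE-----",
    re.DOTALL,
)

def extract_signature_blocks(markdown: str) -> list[str]:
    """Return each clearsigned block found in an export document."""
    return SIG_BLOCK.findall(markdown)
```

Each returned block can be written to its own file and passed to `gpg --verify`.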
**What gets verified:**
- The SHA256 hash proves the note content and timestamp haven't changed
- The GPG signature proves who created that hash
- Together: Proves this specific content was created by this investigator at this time
### Cryptographic Trust Model
```
┌─────────────────────────────────────────────────────────┐
│ Note Creation (Investigator) │
├─────────────────────────────────────────────────────────┤
│ 1. Content: "Malware detected on host-192.168.1.50" │
│ 2. Timestamp: 1702483425.123456 │
│ 3. Hash: SHA256(timestamp:content) │
│ → a3f5b2c8d9e1f4a7b6c3d8e2f5a9b4c7... │
│ 4. Signature: GPG_Sign(hash, private_key) │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ Export Generation │
├─────────────────────────────────────────────────────────┤
│ 1. Build markdown with all notes + individual sigs │
│ 2. Sign entire document: GPG_Sign(document) │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ Verification (Court/Auditor) │
├─────────────────────────────────────────────────────────┤
│ 1. Import investigator's public key │
│ 2. Verify document signature → Proves export integrity │
│ 3. Verify individual notes → Proves note authenticity │
│ 4. Recompute hashes → Proves content hasn't changed │
└─────────────────────────────────────────────────────────┘
```
### Security Considerations
**What is protected:**
- ✓ Content integrity (hash detects any modification)
- ✓ Temporal integrity (timestamp cryptographically bound)
- ✓ Attribution (signature proves who created it)
- ✓ Export completeness (document signature proves no additions/removals)
**What is NOT protected:**
- ✗ Note deletion (signatures can't prevent removal from database)
- ✗ Selective disclosure (investigator can choose which notes to export)
- ✗ Sequential ordering (signatures are per-note, not chained)
**Trust Dependencies:**
- You must trust the investigator's GPG key (verify fingerprint out-of-band)
- You must trust the investigator's system clock was accurate
- You must trust the investigator didn't destroy contradictory evidence
## TUI Reference (Management Console)
Execute `trace` (no arguments) to enter the Text User Interface. This environment is used for setup, review, and reporting.
| Key | Function | Detail |
| :--- | :--- | :--- |
| `a` | **Set Active** | Designate the current item as the target for CLI injection (hot-logging). |
| `n` | **New Note** | Enter the multi-line log editor. Use `Ctrl+G` to save the block. |
| `i` | **IOC Index** | View extracted indicators. Option to export IOC list (`e`). |
| `t` | **Tag Index** | View classification tags and filter notes by frequency. |
| `v` | **Full View** | Scrollable screen showing all log entries with automatic IOC/Tag highlighting. |
| `/` | **Filter** | Initiate text-based search/filter on lists. |
| `Enter` | **Drill Down** | Access details for Case or Evidence. |
| `q` | **Exit** | Close the application. |
## Report Generation
To generate the Markdown report package, use the `--export` flag.
```bash
trace --export
# Creates trace_export.md in the current directory.
```
## Data Persistence

Trace maintains a simple flat-file structure in the user's home directory.

* `~/.trace/data.json`: Case log repository.
* `~/.trace/state`: Active context pointer.
* `~/.trace/settings.json`: GPG signing configuration.
* `~/.trace/exports/`: IOC export destination.

-----

*License: MIT*
**DISCLAIMER**
This program was mostly vibe-coded. This was a deliberate decision: I wanted to focus on producing a usable result with a decent user experience rather than on implementation details, or on educating myself through lengthy coding sessions.
I reviewed sections of the code manually and found no issues. The application should be safe to use from an integrity, security, and admissibility standpoint, though I will never make any warranties to that effect.
The coding agents I used were, in order: Claude Sonnet 4.5 (CLI), Claude Haiku 4.5 (VS Code Copilot), and Google Jules (version unknown).

**trace/cli.py**

@@ -8,6 +8,12 @@ from .crypto import Crypto
def quick_add_note(content: str):
    storage = Storage()
    state_manager = StateManager()

    # Validate and clear stale state
    warning = state_manager.validate_and_clear_stale(storage)
    if warning:
        print(f"Warning: {warning}", file=sys.stderr)

    state = state_manager.get_active()
    settings = state_manager.get_settings()
@@ -15,41 +21,47 @@ def quick_add_note(content: str):
    evidence_id = state.get("evidence_id")

    if not case_id:
        print("Error: No active case set. Open the TUI to select a case first.", file=sys.stderr)
        sys.exit(1)

    case = storage.get_case(case_id)
    if not case:
        print("Error: Active case not found in storage. Ensure you have set an active case in the TUI.", file=sys.stderr)
        sys.exit(1)

    target_evidence = None
    if evidence_id:
        # Find and validate evidence belongs to active case
        for ev in case.evidence:
            if ev.evidence_id == evidence_id:
                target_evidence = ev
                break
        if not target_evidence:
            # Evidence ID is set but doesn't exist in case - clear it
            print(f"Warning: Active evidence not found in case. Clearing to case level.", file=sys.stderr)
            state_manager.set_active(case_id, None)

    # Create note
    note = Note(content=content)
    note.calculate_hash()
    note.extract_tags()  # Extract hashtags from content
    note.extract_iocs()  # Extract IOCs from content

    # Try signing the hash if enabled
    signature = None
    if settings.get("pgp_enabled", True):
        gpg_key_id = settings.get("gpg_key_id", None)
        if gpg_key_id:
            # Sign only the hash (hash already includes timestamp:content for integrity)
            signature = Crypto.sign_content(note.content_hash, key_id=gpg_key_id)
            if signature:
                note.signature = signature
            else:
                print("Warning: GPG signature failed (GPG not found or no key). Note saved without signature.", file=sys.stderr)
        else:
            print("Warning: No GPG key ID configured. Note saved without signature.", file=sys.stderr)

    # Attach to evidence or case
    if target_evidence:
    storage.save_data()

def export_markdown(output_file: str = "export.md"):
    try:
        storage = Storage()
        state_manager = StateManager()
        settings = state_manager.get_settings()

        # Build the export content in memory first
        content_lines = []
        content_lines.append("# Forensic Notes Export\n\n")
        content_lines.append(f"Generated on: {time.ctime()}\n\n")

        for case in storage.cases:
            content_lines.append(f"## Case: {case.case_number}\n")
            if case.name:
                content_lines.append(f"**Name:** {case.name}\n")
            if case.investigator:
                content_lines.append(f"**Investigator:** {case.investigator}\n")
            content_lines.append(f"**Case ID:** {case.case_id}\n\n")

            content_lines.append("### Case Notes\n")
            if not case.notes:
                content_lines.append("_No notes._\n")
            for note in case.notes:
                note_content = format_note_for_export(note)
                content_lines.append(note_content)

            content_lines.append("\n### Evidence\n")
            if not case.evidence:
                content_lines.append("_No evidence._\n")
            for ev in case.evidence:
                content_lines.append(f"#### Evidence: {ev.name}\n")
                if ev.description:
                    content_lines.append(f"_{ev.description}_\n")
                content_lines.append(f"**ID:** {ev.evidence_id}\n")
                # Include source hash if available
                source_hash = ev.metadata.get("source_hash")
                if source_hash:
                    content_lines.append(f"**Source Hash:** `{source_hash}`\n")
                content_lines.append("\n")
                content_lines.append("##### Evidence Notes\n")
                if not ev.notes:
                    content_lines.append("_No notes._\n")
                for note in ev.notes:
                    note_content = format_note_for_export(note)
                    content_lines.append(note_content)
                content_lines.append("\n")
            content_lines.append("---\n\n")

        # Join all content
        export_content = "".join(content_lines)

        # Sign the entire export if GPG is enabled
        if settings.get("pgp_enabled", False):
            gpg_key_id = settings.get("gpg_key_id", None)
            signed_export = Crypto.sign_content(export_content, key_id=gpg_key_id)
            if signed_export:
                # Write the signed version
                final_content = signed_export
                print(f"✓ Export signed with GPG")
            else:
                # Signing failed - write unsigned
                final_content = export_content
                print("⚠ Warning: GPG signing failed. Export saved unsigned.", file=sys.stderr)
        else:
            final_content = export_content

        # Write to file
        with open(output_file, "w", encoding='utf-8') as f:
            f.write(final_content)

        print(f"✓ Exported to {output_file}")

        # Show verification instructions
        if settings.get("pgp_enabled", False) and signed_export:
            print(f"\nTo verify the export:")
            print(f"  gpg --verify {output_file}")
    except (IOError, OSError, PermissionError) as e:
        print(f"Error: Failed to export to {output_file}: {e}")
        sys.exit(1)

def format_note_for_export(note: Note) -> str:
    """Format a single note for export (returns string instead of writing to file)"""
    lines = []
    lines.append(f"- **{time.ctime(note.timestamp)}**\n")
    lines.append(f"  - Content:\n")
    # Properly indent multi-line content
    for line in note.content.splitlines():
        lines.append(f"    {line}\n")
    lines.append(f"  - SHA256 Hash (timestamp:content): `{note.content_hash}`\n")
    if note.signature:
        lines.append("  - **GPG Signature of Hash:**\n")
        lines.append("    ```\n")
        # Indent signature for markdown block
        for line in note.signature.splitlines():
            lines.append(f"    {line}\n")
        lines.append("    ```\n")
    lines.append("\n")
    return "".join(lines)

def main():
    parser = argparse.ArgumentParser(description="trace: Forensic Note Taking Tool")
@@ -143,9 +200,13 @@ def main():
        quick_add_note(args.note)
        return

    # Check for first run and run GPG wizard if needed
    from .gpg_wizard import check_and_run_wizard
    check_and_run_wizard()

    # Launch TUI (with optional direct navigation to active context)
    try:
        from .tui_app import run_tui
        run_tui(open_active=args.open)
    except ImportError as e:
        print(f"Error launching TUI: {e}")

**trace/crypto.py**

@@ -2,6 +2,93 @@ import subprocess
import hashlib

class Crypto:
    @staticmethod
    def is_gpg_available() -> bool:
        """
        Check if GPG is available on the system.

        Returns:
            True if GPG is available, False otherwise.
        """
        try:
            proc = subprocess.Popen(
                ['gpg', '--version'],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True
            )
            stdout, stderr = proc.communicate(timeout=5)
            return proc.returncode == 0
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return False

    @staticmethod
    def verify_signature(signed_content: str) -> tuple[bool, str]:
        """
        Verify a GPG clearsigned message.

        Args:
            signed_content: The clearsigned content to verify

        Returns:
            A tuple of (verified: bool, signer_info: str)
            - verified: True if signature is valid, False otherwise
            - signer_info: Information about the signer (key ID, name) or error message
        """
        if not signed_content or not signed_content.strip():
            return False, "No signature present"

        # Check if content looks like a GPG signed message
        if "-----BEGIN PGP SIGNED MESSAGE-----" not in signed_content:
            return False, "Not a GPG signed message"

        try:
            proc = subprocess.Popen(
                ['gpg', '--verify'],
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True
            )
            stdout, stderr = proc.communicate(input=signed_content, timeout=10)

            if proc.returncode == 0:
                # Parse signer info from stderr (GPG outputs verification info to stderr)
                signer_info = "Unknown signer"
                for line in stderr.split('\n'):
                    if "Good signature from" in line:
                        # Extract the signer name/email
                        parts = line.split('"')
                        if len(parts) >= 2:
                            signer_info = parts[1]
                        break
                    elif "using" in line:
                        # Try to get key ID
                        if "key" in line.lower():
                            signer_info = line.strip()
                return True, signer_info
            else:
                # Signature verification failed
                error_msg = "Verification failed"
                for line in stderr.split('\n'):
                    if "BAD signature" in line:
                        error_msg = "BAD signature"
                        break
                    elif "no public key" in line or "public key not found" in line:
                        error_msg = "Public key not found in keyring"
                        break
                    elif "Can't check signature" in line:
                        error_msg = "Cannot check signature"
                        break
                return False, error_msg
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return False, "GPG not available or timeout"
        except Exception as e:
            return False, f"Error: {str(e)}"
    @staticmethod
    def list_gpg_keys():
        """
@@ -15,7 +102,7 @@ class Crypto:
                stderr=subprocess.PIPE,
                text=True
            )
            stdout, stderr = proc.communicate(timeout=10)

            if proc.returncode != 0:
                return []
@@ -37,12 +124,12 @@ class Crypto:
                elif fields[0] == 'uid' and current_key_id:
                    user_id = fields[9] if len(fields) > 9 else "Unknown"
                    keys.append((current_key_id, user_id))
                    # Don't reset current_key_id - allow multiple UIDs per key

            return keys
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return []  # GPG not installed or timed out
    @staticmethod
    def sign_content(content: str, key_id: str = None) -> str:
@@ -71,7 +158,7 @@ class Crypto:
                stderr=subprocess.PIPE,
                text=True
            )
            stdout, stderr = proc.communicate(input=content, timeout=10)

            if proc.returncode != 0:
                # Fallback: maybe no key is found or gpg error
@@ -79,8 +166,8 @@ class Crypto:
                return ""
            return stdout
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return ""  # GPG not installed or timed out

    @staticmethod
    def hash_content(content: str, timestamp: float) -> str:

**trace/gpg_wizard.py** (new file, 131 lines)

@@ -0,0 +1,131 @@
"""First-run GPG setup wizard for trace application"""
import sys
from .crypto import Crypto
from .storage import StateManager
def run_gpg_wizard():
"""
Run the first-time GPG setup wizard.
Returns:
dict: Settings to save (gpg_enabled, gpg_key_id)
"""
print("\n" + "="*60)
print("Welcome to trace - Forensic Note Taking Tool")
print("="*60)
print("\nFirst-time setup: GPG Signature Configuration\n")
print("trace can digitally sign all notes using GPG for authenticity")
print("and integrity verification. This is useful for legal evidence")
print("and chain-of-custody documentation.\n")
# Check if GPG is available
gpg_available = Crypto.is_gpg_available()
if not gpg_available:
print("⚠ GPG is not installed or not available on your system.")
print("\nTo use GPG signing, please install GPG:")
print(" - Linux: apt install gnupg / yum install gnupg")
print(" - macOS: brew install gnupg")
print(" - Windows: Install Gpg4win (https://gpg4win.org)")
print("\nYou can enable GPG signing later by editing ~/.trace/settings.json")
print("\nPress Enter to continue without GPG signing...")
input()
return {"pgp_enabled": False, "gpg_key_id": None}
# GPG is available - ask if user wants to enable it
print("✓ GPG is available on your system.\n")
while True:
response = input("Do you want to enable GPG signing for notes? (y/n): ").strip().lower()
if response in ['y', 'yes']:
enable_gpg = True
break
elif response in ['n', 'no']:
enable_gpg = False
break
else:
print("Please enter 'y' or 'n'")
if not enable_gpg:
print("\nGPG signing disabled. You can enable it later in settings.")
return {"pgp_enabled": False, "gpg_key_id": None}
# List available GPG keys
print("\nSearching for GPG secret keys...\n")
keys = Crypto.list_gpg_keys()
if not keys:
print("⚠ No GPG secret keys found in your keyring.")
print("\nTo use GPG signing, you need to generate a GPG key first:")
print(" - Use 'gpg --gen-key' (Linux/macOS)")
print(" - Use Kleopatra (Windows)")
print("\nAfter generating a key, you can enable GPG signing by editing")
print("~/.trace/settings.json and setting 'gpg_enabled': true")
print("\nPress Enter to continue without GPG signing...")
input()
return {"pgp_enabled": False, "gpg_key_id": None}
# Display available keys
print("Available GPG keys:\n")
for i, (key_id, user_id) in enumerate(keys, 1):
print(f" {i}. {user_id}")
print(f" Key ID: {key_id}\n")
# Let user select a key
selected_key = None
if len(keys) == 1:
print(f"Only one key found. Using: {keys[0][1]}")
selected_key = keys[0][0]
else:
while True:
try:
choice = input(f"Select a key (1-{len(keys)}, or 0 to use default key): ").strip()
choice_num = int(choice)
if choice_num == 0:
print("Using GPG default key (no specific key ID)")
selected_key = None
break
elif 1 <= choice_num <= len(keys):
selected_key = keys[choice_num - 1][0]
print(f"Selected: {keys[choice_num - 1][1]}")
break
else:
print(f"Please enter a number between 0 and {len(keys)}")
except ValueError:
print("Please enter a valid number")
print("\n✓ GPG signing enabled!")
if selected_key:
print(f" Using key: {selected_key}")
else:
print(" Using default GPG key")
print("\nSetup complete. Starting trace...\n")
return {"pgp_enabled": True, "gpg_key_id": selected_key}
def check_and_run_wizard():
"""
Check if this is first run and run wizard if needed.
Returns True if wizard was run, False otherwise.
"""
state_manager = StateManager()
settings = state_manager.get_settings()
# Check if wizard has already been run (presence of any GPG setting indicates setup was done)
if "pgp_enabled" in settings:
return False
# First run - run wizard
wizard_settings = run_gpg_wizard()
# Save settings
for key, value in wizard_settings.items():
state_manager.set_setting(key, value)
return True


@@ -1,286 +0,0 @@
import time
import hashlib
import uuid
import re
from dataclasses import dataclass, field
from typing import List, Optional, Dict

@dataclass
class Note:
    content: str
    timestamp: float = field(default_factory=time.time)
    note_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    content_hash: str = ""
    signature: Optional[str] = None
    tags: List[str] = field(default_factory=list)
    iocs: List[str] = field(default_factory=list)

    def extract_tags(self):
        """Extract hashtags from content (case-insensitive, stored lowercase)"""
        # Match hashtags: # followed by word characters
        tag_pattern = r'#(\w+)'
        matches = re.findall(tag_pattern, self.content)
        # Convert to lowercase and remove duplicates while preserving order
        seen = set()
        self.tags = []
        for tag in matches:
            tag_lower = tag.lower()
            if tag_lower not in seen:
                seen.add(tag_lower)
                self.tags.append(tag_lower)

    def extract_iocs(self):
        """Extract Indicators of Compromise from content"""
        seen = set()
        self.iocs = []

        # IPv4 addresses
        ipv4_pattern = r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b'
        for match in re.findall(ipv4_pattern, self.content):
            if match not in seen:
                seen.add(match)
                self.iocs.append(match)

        # IPv6 addresses (simplified)
        ipv6_pattern = r'\b(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\b'
        for match in re.findall(ipv6_pattern, self.content):
            if match not in seen:
                seen.add(match)
                self.iocs.append(match)

        # Domain names (basic pattern)
        domain_pattern = r'\b(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}\b'
        for match in re.findall(domain_pattern, self.content):
            # Filter out common false positives
            if match not in seen and not match.startswith('example.'):
                seen.add(match)
                self.iocs.append(match)

        # URLs
        url_pattern = r'https?://[^\s]+'
        for match in re.findall(url_pattern, self.content):
            if match not in seen:
                seen.add(match)
                self.iocs.append(match)

        # MD5 hashes (32 hex chars)
        md5_pattern = r'\b[a-fA-F0-9]{32}\b'
        for match in re.findall(md5_pattern, self.content):
            if match not in seen:
                seen.add(match)
                self.iocs.append(match)

        # SHA1 hashes (40 hex chars)
        sha1_pattern = r'\b[a-fA-F0-9]{40}\b'
        for match in re.findall(sha1_pattern, self.content):
            if match not in seen:
                seen.add(match)
                self.iocs.append(match)

        # SHA256 hashes (64 hex chars)
        sha256_pattern = r'\b[a-fA-F0-9]{64}\b'
        for match in re.findall(sha256_pattern, self.content):
            if match not in seen:
                seen.add(match)
                self.iocs.append(match)

        # Email addresses
        email_pattern = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'
        for match in re.findall(email_pattern, self.content):
            if match not in seen:
                seen.add(match)
                self.iocs.append(match)

    def calculate_hash(self):
        # We hash the content + timestamp to ensure integrity of 'when' it was said
        data = f"{self.timestamp}:{self.content}".encode('utf-8')
        self.content_hash = hashlib.sha256(data).hexdigest()

    @staticmethod
    def extract_iocs_from_text(text):
        """Extract IOCs from text and return as list of (ioc, type) tuples"""
        iocs = []
        seen = set()

        # IPv4 addresses
        ipv4_pattern = r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b'
        for match in re.findall(ipv4_pattern, text):
            if match not in seen:
                seen.add(match)
                iocs.append((match, 'ipv4'))

        # IPv6 addresses (simplified)
        ipv6_pattern = r'\b(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\b'
        for match in re.findall(ipv6_pattern, text):
            if match not in seen:
                seen.add(match)
                iocs.append((match, 'ipv6'))

        # URLs (check before domains to avoid double-matching)
        url_pattern = r'https?://[^\s]+'
        for match in re.findall(url_pattern, text):
            if match not in seen:
                seen.add(match)
                iocs.append((match, 'url'))

        # Domain names (basic pattern)
        domain_pattern = r'\b(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}\b'
        for match in re.findall(domain_pattern, text):
            # Filter out common false positives and already seen URLs
            if match not in seen and not match.startswith('example.'):
                seen.add(match)
                iocs.append((match, 'domain'))

        # SHA256 hashes (64 hex chars) - check before SHA1 and MD5
        sha256_pattern = r'\b[a-fA-F0-9]{64}\b'
        for match in re.findall(sha256_pattern, text):
            if match not in seen:
                seen.add(match)
                iocs.append((match, 'sha256'))

        # SHA1 hashes (40 hex chars) - check before MD5
        sha1_pattern = r'\b[a-fA-F0-9]{40}\b'
for match in re.findall(sha1_pattern, text):
if match not in seen:
seen.add(match)
iocs.append((match, 'sha1'))
# MD5 hashes (32 hex chars)
md5_pattern = r'\b[a-fA-F0-9]{32}\b'
for match in re.findall(md5_pattern, text):
if match not in seen:
seen.add(match)
iocs.append((match, 'md5'))
# Email addresses
email_pattern = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'
for match in re.findall(email_pattern, text):
if match not in seen:
seen.add(match)
iocs.append((match, 'email'))
return iocs
@staticmethod
def extract_iocs_with_positions(text):
"""Extract IOCs with their positions for highlighting. Returns list of (text, start, end, type) tuples"""
import re
highlights = []
# IPv4 addresses
for match in re.finditer(r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b', text):
highlights.append((match.group(), match.start(), match.end(), 'ipv4'))
# IPv6 addresses
for match in re.finditer(r'\b(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\b', text):
highlights.append((match.group(), match.start(), match.end(), 'ipv6'))
# URLs (check before domains)
for match in re.finditer(r'https?://[^\s]+', text):
highlights.append((match.group(), match.start(), match.end(), 'url'))
# Domain names
for match in re.finditer(r'\b(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}\b', text):
if not match.group().startswith('example.'):
highlights.append((match.group(), match.start(), match.end(), 'domain'))
# SHA256 hashes
for match in re.finditer(r'\b[a-fA-F0-9]{64}\b', text):
highlights.append((match.group(), match.start(), match.end(), 'sha256'))
# SHA1 hashes
for match in re.finditer(r'\b[a-fA-F0-9]{40}\b', text):
highlights.append((match.group(), match.start(), match.end(), 'sha1'))
# MD5 hashes
for match in re.finditer(r'\b[a-fA-F0-9]{32}\b', text):
highlights.append((match.group(), match.start(), match.end(), 'md5'))
# Email addresses
for match in re.finditer(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b', text):
highlights.append((match.group(), match.start(), match.end(), 'email'))
return highlights
def to_dict(self):
return {
"note_id": self.note_id,
"content": self.content,
"timestamp": self.timestamp,
"content_hash": self.content_hash,
"signature": self.signature,
"tags": self.tags,
"iocs": self.iocs
}
@staticmethod
def from_dict(data):
note = Note(
content=data["content"],
timestamp=data["timestamp"],
note_id=data["note_id"],
content_hash=data.get("content_hash", ""),
signature=data.get("signature"),
tags=data.get("tags", []),
iocs=data.get("iocs", [])
)
return note
@dataclass
class Evidence:
name: str
evidence_id: str = field(default_factory=lambda: str(uuid.uuid4()))
description: str = ""
metadata: Dict[str, str] = field(default_factory=dict)
notes: List[Note] = field(default_factory=list)
def to_dict(self):
return {
"evidence_id": self.evidence_id,
"name": self.name,
"description": self.description,
"metadata": self.metadata,
"notes": [n.to_dict() for n in self.notes]
}
@staticmethod
def from_dict(data):
ev = Evidence(
name=data["name"],
evidence_id=data["evidence_id"],
description=data.get("description", ""),
metadata=data.get("metadata", {})
)
ev.notes = [Note.from_dict(n) for n in data.get("notes", [])]
return ev
@dataclass
class Case:
case_number: str
case_id: str = field(default_factory=lambda: str(uuid.uuid4()))
name: str = ""
investigator: str = ""
evidence: List[Evidence] = field(default_factory=list)
notes: List[Note] = field(default_factory=list)
def to_dict(self):
return {
"case_id": self.case_id,
"case_number": self.case_number,
"name": self.name,
"investigator": self.investigator,
"evidence": [e.to_dict() for e in self.evidence],
"notes": [n.to_dict() for n in self.notes]
}
@staticmethod
def from_dict(data):
case = Case(
case_number=data["case_number"],
case_id=data["case_id"],
name=data.get("name", ""),
investigator=data.get("investigator", "")
)
case.evidence = [Evidence.from_dict(e) for e in data.get("evidence", [])]
case.notes = [Note.from_dict(n) for n in data.get("notes", [])]
return case
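The `calculate_hash()` scheme above can be sketched standalone: the SHA256 digest covers `timestamp:content`, so tampering with either the text or the recorded time of a note invalidates the hash. The timestamp and content values below are made-up illustration data:

```python
import hashlib

# Illustrative values only - a real Note gets these from time.time() and user input
timestamp = 1700000000.0
content = "Drive imaged, hash verified #forensics"

# Digest over "timestamp:content", mirroring Note.calculate_hash()
digest = hashlib.sha256(f"{timestamp}:{content}".encode('utf-8')).hexdigest()

# Verification is simple recomputation and comparison
recomputed = hashlib.sha256(f"{timestamp}:{content}".encode('utf-8')).hexdigest()
print(digest == recomputed, len(digest))
```

Note that the GPG signature (per the commit message, now computed over this hash alone) adds authenticity on top; the hash by itself only detects modification, it does not prove who wrote the note.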

trace/models/__init__.py Normal file

@@ -0,0 +1,148 @@
"""Data models for trace application"""
import time
import hashlib
import uuid
from dataclasses import dataclass, field
from typing import List, Optional, Dict, Tuple

from .extractors import TagExtractor, IOCExtractor


@dataclass
class Note:
    content: str
    timestamp: float = field(default_factory=time.time)
    note_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    content_hash: str = ""
    signature: Optional[str] = None
    tags: List[str] = field(default_factory=list)
    iocs: List[str] = field(default_factory=list)

    def extract_tags(self):
        """Extract hashtags from content (case-insensitive, stored lowercase)"""
        self.tags = TagExtractor.extract_tags(self.content)

    def extract_iocs(self):
        """Extract Indicators of Compromise from content"""
        self.iocs = IOCExtractor.extract_iocs(self.content)

    def calculate_hash(self):
        # We hash the content + timestamp to ensure integrity of 'when' it was said
        data = f"{self.timestamp}:{self.content}".encode('utf-8')
        self.content_hash = hashlib.sha256(data).hexdigest()

    def verify_signature(self) -> Tuple[bool, str]:
        """
        Verify the GPG signature of this note.

        Returns:
            A tuple of (verified: bool, info: str)
            - verified: True if signature is valid, False if invalid or unsigned
            - info: Signer information or error/status message
        """
        # Import here to avoid circular dependency
        from ..crypto import Crypto
        if not self.signature:
            return False, "unsigned"
        return Crypto.verify_signature(self.signature)

    @staticmethod
    def extract_iocs_from_text(text):
        """Extract IOCs from text and return as list of (ioc, type) tuples"""
        return IOCExtractor.extract_iocs_with_types(text)

    @staticmethod
    def extract_iocs_with_positions(text):
        """Extract IOCs with their positions for highlighting. Returns list of (text, start, end, type) tuples"""
        return IOCExtractor.extract_iocs_with_positions(text)

    def to_dict(self):
        return {
            "note_id": self.note_id,
            "content": self.content,
            "timestamp": self.timestamp,
            "content_hash": self.content_hash,
            "signature": self.signature,
            "tags": self.tags,
            "iocs": self.iocs
        }

    @staticmethod
    def from_dict(data):
        note = Note(
            content=data["content"],
            timestamp=data["timestamp"],
            note_id=data["note_id"],
            content_hash=data.get("content_hash", ""),
            signature=data.get("signature"),
            tags=data.get("tags", []),
            iocs=data.get("iocs", [])
        )
        return note


@dataclass
class Evidence:
    name: str
    evidence_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    description: str = ""
    metadata: Dict[str, str] = field(default_factory=dict)
    notes: List[Note] = field(default_factory=list)

    def to_dict(self):
        return {
            "evidence_id": self.evidence_id,
            "name": self.name,
            "description": self.description,
            "metadata": self.metadata,
            "notes": [n.to_dict() for n in self.notes]
        }

    @staticmethod
    def from_dict(data):
        ev = Evidence(
            name=data["name"],
            evidence_id=data["evidence_id"],
            description=data.get("description", ""),
            metadata=data.get("metadata", {})
        )
        ev.notes = [Note.from_dict(n) for n in data.get("notes", [])]
        return ev


@dataclass
class Case:
    case_number: str
    case_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    name: str = ""
    investigator: str = ""
    evidence: List[Evidence] = field(default_factory=list)
    notes: List[Note] = field(default_factory=list)

    def to_dict(self):
        return {
            "case_id": self.case_id,
            "case_number": self.case_number,
            "name": self.name,
            "investigator": self.investigator,
            "evidence": [e.to_dict() for e in self.evidence],
            "notes": [n.to_dict() for n in self.notes]
        }

    @staticmethod
    def from_dict(data):
        case = Case(
            case_number=data["case_number"],
            case_id=data["case_id"],
            name=data.get("name", ""),
            investigator=data.get("investigator", "")
        )
        case.evidence = [Evidence.from_dict(e) for e in data.get("evidence", [])]
        case.notes = [Note.from_dict(n) for n in data.get("notes", [])]
        return case

__all__ = ['Note', 'Evidence', 'Case', 'TagExtractor', 'IOCExtractor']


@@ -0,0 +1,6 @@
"""Extractors for tags and IOCs from note content"""
from .tag_extractor import TagExtractor
from .ioc_extractor import IOCExtractor
__all__ = ['TagExtractor', 'IOCExtractor']


@@ -0,0 +1,236 @@
"""IOC (Indicator of Compromise) extraction logic for notes"""
import re
from typing import List, Tuple


class IOCExtractor:
    """Extract Indicators of Compromise from text content"""

    # Regex patterns for different IOC types
    SHA256_PATTERN = r'\b[a-fA-F0-9]{64}\b'
    SHA1_PATTERN = r'\b[a-fA-F0-9]{40}\b'
    MD5_PATTERN = r'\b[a-fA-F0-9]{32}\b'
    IPV4_PATTERN = r'\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b'
    IPV6_PATTERN = r'\b(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\b|\b(?:[0-9a-fA-F]{1,4}:)*::(?:[0-9a-fA-F]{1,4}:)*[0-9a-fA-F]{0,4}\b'
    URL_PATTERN = r'https?://[^\s<>\"\']+(?<![.,;:!?\)\]\}])'
    DOMAIN_PATTERN = r'\b(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}\b'
    EMAIL_PATTERN = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'

    @staticmethod
    def extract_iocs(text: str) -> List[str]:
        """
        Extract IOCs from text and return as simple list

        Args:
            text: The text to extract IOCs from

        Returns:
            List of unique IOC strings
        """
        seen = set()
        covered_ranges = set()
        iocs = []

        def add_ioc_if_not_covered(match_obj):
            """Add IOC if its range doesn't overlap with already covered ranges"""
            start, end = match_obj.start(), match_obj.end()
            # Check if this range overlaps with any covered range
            for covered_start, covered_end in covered_ranges:
                if not (end <= covered_start or start >= covered_end):
                    return False  # Overlaps, don't add
            ioc_text = match_obj.group()
            if ioc_text not in seen:
                seen.add(ioc_text)
                covered_ranges.add((start, end))
                iocs.append(ioc_text)
                return True
            return False

        # Process in order of priority to avoid false positives
        # SHA256 hashes (64 hex chars) - check longest first to avoid substring matches
        for match in re.finditer(IOCExtractor.SHA256_PATTERN, text):
            add_ioc_if_not_covered(match)
        # SHA1 hashes (40 hex chars)
        for match in re.finditer(IOCExtractor.SHA1_PATTERN, text):
            add_ioc_if_not_covered(match)
        # MD5 hashes (32 hex chars)
        for match in re.finditer(IOCExtractor.MD5_PATTERN, text):
            add_ioc_if_not_covered(match)
        # IPv4 addresses
        for match in re.finditer(IOCExtractor.IPV4_PATTERN, text):
            add_ioc_if_not_covered(match)
        # IPv6 addresses (supports compressed format)
        for match in re.finditer(IOCExtractor.IPV6_PATTERN, text):
            add_ioc_if_not_covered(match)
        # URLs (check before domains to prevent double-matching)
        for match in re.finditer(IOCExtractor.URL_PATTERN, text):
            add_ioc_if_not_covered(match)
        # Domain names (basic pattern)
        for match in re.finditer(IOCExtractor.DOMAIN_PATTERN, text):
            # Filter out common false positives
            if not match.group().startswith('example.'):
                add_ioc_if_not_covered(match)
        # Email addresses
        for match in re.finditer(IOCExtractor.EMAIL_PATTERN, text):
            add_ioc_if_not_covered(match)
        return iocs

    @staticmethod
    def extract_iocs_with_types(text: str) -> List[Tuple[str, str]]:
        """
        Extract IOCs from text and return as list of (ioc, type) tuples

        Args:
            text: The text to extract IOCs from

        Returns:
            List of (ioc_text, ioc_type) tuples
        """
        iocs = []
        seen = set()
        covered_ranges = set()

        def add_ioc_if_not_covered(match_obj, ioc_type):
            """Add IOC if its range doesn't overlap with already covered ranges"""
            start, end = match_obj.start(), match_obj.end()
            # Check if this range overlaps with any covered range
            for covered_start, covered_end in covered_ranges:
                if not (end <= covered_start or start >= covered_end):
                    return False  # Overlaps, don't add
            ioc_text = match_obj.group()
            if ioc_text not in seen:
                seen.add(ioc_text)
                covered_ranges.add((start, end))
                iocs.append((ioc_text, ioc_type))
                return True
            return False

        # Process in priority order: longest hashes first
        for match in re.finditer(IOCExtractor.SHA256_PATTERN, text):
            add_ioc_if_not_covered(match, 'sha256')
        for match in re.finditer(IOCExtractor.SHA1_PATTERN, text):
            add_ioc_if_not_covered(match, 'sha1')
        for match in re.finditer(IOCExtractor.MD5_PATTERN, text):
            add_ioc_if_not_covered(match, 'md5')
        for match in re.finditer(IOCExtractor.IPV4_PATTERN, text):
            add_ioc_if_not_covered(match, 'ipv4')
        for match in re.finditer(IOCExtractor.IPV6_PATTERN, text):
            add_ioc_if_not_covered(match, 'ipv6')
        # URLs (check before domains to avoid double-matching)
        for match in re.finditer(IOCExtractor.URL_PATTERN, text):
            add_ioc_if_not_covered(match, 'url')
        # Domain names
        for match in re.finditer(IOCExtractor.DOMAIN_PATTERN, text):
            # Filter out common false positives
            if not match.group().startswith('example.'):
                add_ioc_if_not_covered(match, 'domain')
        # Email addresses
        for match in re.finditer(IOCExtractor.EMAIL_PATTERN, text):
            add_ioc_if_not_covered(match, 'email')
        return iocs

    @staticmethod
    def extract_iocs_with_positions(text: str) -> List[Tuple[str, int, int, str]]:
        """
        Extract IOCs with their positions for highlighting

        Args:
            text: The text to extract IOCs from

        Returns:
            List of (ioc_text, start_pos, end_pos, ioc_type) tuples
        """
        highlights = []
        covered_ranges = set()

        def overlaps(start, end):
            """Check if range overlaps with any covered range"""
            for covered_start, covered_end in covered_ranges:
                if not (end <= covered_start or start >= covered_end):
                    return True
            return False

        def add_highlight(match, ioc_type):
            """Add highlight if it doesn't overlap with existing ones"""
            start, end = match.start(), match.end()
            if not overlaps(start, end):
                highlights.append((match.group(), start, end, ioc_type))
                covered_ranges.add((start, end))

        # Process in priority order: longest hashes first to avoid substring matches
        for match in re.finditer(IOCExtractor.SHA256_PATTERN, text):
            add_highlight(match, 'sha256')
        for match in re.finditer(IOCExtractor.SHA1_PATTERN, text):
            add_highlight(match, 'sha1')
        for match in re.finditer(IOCExtractor.MD5_PATTERN, text):
            add_highlight(match, 'md5')
        for match in re.finditer(IOCExtractor.IPV4_PATTERN, text):
            add_highlight(match, 'ipv4')
        for match in re.finditer(IOCExtractor.IPV6_PATTERN, text):
            add_highlight(match, 'ipv6')
        # URLs (check before domains to prevent double-matching)
        for match in re.finditer(IOCExtractor.URL_PATTERN, text):
            add_highlight(match, 'url')
        # Domain names
        for match in re.finditer(IOCExtractor.DOMAIN_PATTERN, text):
            if not match.group().startswith('example.'):
                add_highlight(match, 'domain')
        # Email addresses
        for match in re.finditer(IOCExtractor.EMAIL_PATTERN, text):
            add_highlight(match, 'email')
        return highlights

    @staticmethod
    def classify_ioc(ioc: str) -> str:
        """
        Classify an IOC by its type

        Args:
            ioc: The IOC string to classify

        Returns:
            The IOC type as a string
        """
        if re.fullmatch(IOCExtractor.SHA256_PATTERN, ioc):
            return 'sha256'
        elif re.fullmatch(IOCExtractor.SHA1_PATTERN, ioc):
            return 'sha1'
        elif re.fullmatch(IOCExtractor.MD5_PATTERN, ioc):
            return 'md5'
        elif re.fullmatch(IOCExtractor.IPV4_PATTERN, ioc):
            return 'ipv4'
        elif re.fullmatch(IOCExtractor.IPV6_PATTERN, ioc):
            return 'ipv6'
        elif re.fullmatch(IOCExtractor.EMAIL_PATTERN, ioc):
            return 'email'
        elif re.fullmatch(IOCExtractor.URL_PATTERN, ioc):
            return 'url'
        elif re.fullmatch(IOCExtractor.DOMAIN_PATTERN, ioc):
            return 'domain'
        else:
            return 'unknown'
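The overlap suppression above can be sketched standalone (this is not the module itself, just the core idea with two of its patterns): the URL pattern runs first and claims its character span, so the domain embedded in the URL is not reported a second time. The beacon text is made-up illustration data:

```python
import re

# Two of the patterns from IOCExtractor above (URL without the trailing-punctuation lookbehind)
URL_PATTERN = r'https?://[^\s<>\"\']+'
DOMAIN_PATTERN = r'\b(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}\b'

text = "Beacon to https://evil-domain.com/login then DNS for mail.victim.org"
covered = []
iocs = []
for pattern, ioc_type in ((URL_PATTERN, 'url'), (DOMAIN_PATTERN, 'domain')):
    for m in re.finditer(pattern, text):
        # Skip any match whose span overlaps one already claimed by a higher-priority pattern
        if not any(m.start() < end and m.end() > start for start, end in covered):
            covered.append((m.start(), m.end()))
            iocs.append((m.group(), ioc_type))
print(iocs)
```

The same span-claiming order explains why hashes are processed longest-first: a shorter pattern can never split up a span a longer one has already taken.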


@@ -0,0 +1,34 @@
"""Tag extraction logic for notes"""
import re


class TagExtractor:
    """Extract hashtags from text content"""

    TAG_PATTERN = r'#(\w+)'

    @staticmethod
    def extract_tags(text: str) -> list[str]:
        """
        Extract hashtags from content (case-insensitive, stored lowercase)

        Args:
            text: The text to extract tags from

        Returns:
            List of unique tags in lowercase, preserving order
        """
        # Match hashtags: # followed by word characters
        matches = re.findall(TagExtractor.TAG_PATTERN, text)
        # Convert to lowercase and remove duplicates while preserving order
        seen = set()
        tags = []
        for tag in matches:
            tag_lower = tag.lower()
            if tag_lower not in seen:
                seen.add(tag_lower)
                tags.append(tag_lower)
        return tags
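The extraction logic above is small enough to sketch inline: lowercase each hashtag, drop duplicates, keep first-seen order. The note text here is made up for illustration:

```python
import re

# Same pattern and dedupe loop as TagExtractor.extract_tags above
text = "Triage notes #Phishing #phishing #C2 more #c2 #Timeline"
seen = set()
tags = []
for tag in re.findall(r'#(\w+)', text):
    tag_lower = tag.lower()
    if tag_lower not in seen:
        seen.add(tag_lower)
        tags.append(tag_lower)
print(tags)
```

Because `\w` excludes hyphens, a tag like `#c2-server` is captured as `c2`; the demo data's hyphenated tags are truncated the same way.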


@@ -1,268 +1,6 @@
"""Storage module - backward compatibility wrapper"""
# For backward compatibility, export all classes from storage_impl
from .storage_impl import Storage, StateManager, LockManager, create_demo_case

__all__ = ['Storage', 'StateManager', 'LockManager', 'create_demo_case']

import json
import time
from pathlib import Path
from typing import List, Optional, Tuple

from .models import Case, Evidence, Note

DEFAULT_APP_DIR = Path.home() / ".trace"


class Storage:
    def __init__(self, app_dir: Path = DEFAULT_APP_DIR):
        self.app_dir = app_dir
        self.data_file = self.app_dir / "data.json"
        self._ensure_app_dir()
        self.cases: List[Case] = self._load_data()
        # Create demo case on first launch
        if not self.cases:
            self._create_demo_case()

    def _ensure_app_dir(self):
        if not self.app_dir.exists():
            self.app_dir.mkdir(parents=True, exist_ok=True)

    def _create_demo_case(self):
        """Create a demo case with evidence showcasing all features"""
        demo_case = Case(
            case_number="DEMO-2024-001",
            name="Sample Investigation",
            investigator="Demo User"
        )
        # Add case-level notes to demonstrate case notes feature
        case_note1 = Note(content="""Initial case briefing: Suspected data exfiltration incident.
Key objectives:
- Identify compromised systems
- Determine scope of data loss
- Document timeline of events
#incident-response #data-breach #investigation""")
        case_note1.calculate_hash()
        case_note1.extract_tags()
        case_note1.extract_iocs()
        demo_case.notes.append(case_note1)
        # Wait a moment for different timestamp
        time.sleep(0.1)
        case_note2 = Note(content="""Investigation lead: Employee reported suspicious email from sender@phishing-domain.com
Initial analysis shows potential credential harvesting attempt.
Review email headers and attachments for IOCs. #phishing #email-analysis""")
        case_note2.calculate_hash()
        case_note2.extract_tags()
        case_note2.extract_iocs()
        demo_case.notes.append(case_note2)
        time.sleep(0.1)
        # Create evidence 1: Compromised laptop
        evidence1 = Evidence(
            name="Employee Laptop HDD",
            description="Primary workstation hard drive - user reported suspicious activity"
        )
        # Add source hash for chain of custody demonstration
        evidence1.metadata["source_hash"] = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
        # Add notes to evidence 1 with various features
        note1 = Note(content="""Forensic imaging completed. Drive imaged using FTK Imager.
Image hash verified: SHA256 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Chain of custody maintained throughout process. #forensics #imaging #chain-of-custody""")
        note1.calculate_hash()
        note1.extract_tags()
        note1.extract_iocs()
        evidence1.notes.append(note1)
        time.sleep(0.1)
        note2 = Note(content="""Discovered suspicious connections to external IP addresses:
- 192.168.1.100 (local gateway)
- 203.0.113.45 (external, geolocation: Unknown)
- 198.51.100.78 (command and control server suspected)
Browser history shows visits to malicious-site.com and data-exfil.net.
#network-analysis #ioc #c2-server""")
        note2.calculate_hash()
        note2.extract_tags()
        note2.extract_iocs()
        evidence1.notes.append(note2)
        time.sleep(0.1)
        note3 = Note(content="""Malware identified in temp directory:
File: evil.exe
MD5: d41d8cd98f00b204e9800998ecf8427e
SHA1: da39a3ee5e6b4b0d3255bfef95601890afd80709
SHA256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Submitting to VirusTotal for analysis. #malware #hash-analysis #virustotal""")
        note3.calculate_hash()
        note3.extract_tags()
        note3.extract_iocs()
        evidence1.notes.append(note3)
        time.sleep(0.1)
        note4 = Note(content="""Timeline analysis reveals:
- 2024-01-15 09:23:45 - Suspicious email received
- 2024-01-15 09:24:12 - User clicked phishing link https://evil-domain.com/login
- 2024-01-15 09:25:03 - Credentials submitted to attacker-controlled site
- 2024-01-15 09:30:15 - Lateral movement detected
User credentials compromised. Recommend immediate password reset. #timeline #lateral-movement""")
        note4.calculate_hash()
        note4.extract_tags()
        note4.extract_iocs()
        evidence1.notes.append(note4)
        demo_case.evidence.append(evidence1)
        time.sleep(0.1)
        # Create evidence 2: Network logs
        evidence2 = Evidence(
            name="Firewall Logs",
            description="Corporate firewall logs from incident timeframe"
        )
        evidence2.metadata["source_hash"] = "a3f5c8b912e4d67f89b0c1a2e3d4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2"
        note5 = Note(content="""Log analysis shows outbound connections to suspicious domains:
- attacker-c2.com on port 443 (encrypted channel)
- data-upload.net on port 8080 (unencrypted)
- exfil-server.org on port 22 (SSH tunnel)
Total data transferred: approximately 2.3 GB over 4 hours.
#log-analysis #data-exfiltration #network-traffic""")
        note5.calculate_hash()
        note5.extract_tags()
        note5.extract_iocs()
        evidence2.notes.append(note5)
        time.sleep(0.1)
        note6 = Note(content="""Contact information found in malware configuration:
Email: attacker@malicious-domain.com
Backup C2: 2001:0db8:85a3:0000:0000:8a2e:0370:7334 (IPv6)
Cross-referencing with threat intelligence databases. #threat-intel #attribution""")
        note6.calculate_hash()
        note6.extract_tags()
        note6.extract_iocs()
        evidence2.notes.append(note6)
        demo_case.evidence.append(evidence2)
        time.sleep(0.1)
        # Create evidence 3: Email forensics
        evidence3 = Evidence(
            name="Phishing Email",
            description="Original phishing email preserved in .eml format"
        )
        note7 = Note(content="""Email headers analysis:
From: sender@phishing-domain.com (spoofed)
Reply-To: attacker@evil-mail-server.net
X-Originating-IP: 198.51.100.99
Email contains embedded tracking pixel at http://tracking.malicious-site.com/pixel.gif
Attachment: invoice.pdf.exe (double extension trick) #email-forensics #phishing-analysis""")
        note7.calculate_hash()
        note7.extract_tags()
        note7.extract_iocs()
        evidence3.notes.append(note7)
        demo_case.evidence.append(evidence3)
        # Add the demo case to storage
        self.cases.append(demo_case)
        self.save_data()

    def _load_data(self) -> List[Case]:
        if not self.data_file.exists():
            return []
        try:
            with open(self.data_file, 'r') as f:
                data = json.load(f)
                return [Case.from_dict(c) for c in data]
        except (json.JSONDecodeError, IOError):
            return []

    def save_data(self):
        data = [c.to_dict() for c in self.cases]
        # Write to temp file then rename for atomic-ish write
        temp_file = self.data_file.with_suffix(".tmp")
        with open(temp_file, 'w') as f:
            json.dump(data, f, indent=2)
        temp_file.replace(self.data_file)

    def add_case(self, case: Case):
        self.cases.append(case)
        self.save_data()

    def get_case(self, case_id: str) -> Optional[Case]:
        # Case ID lookup
        for c in self.cases:
            if c.case_id == case_id:
                return c
        return None

    def delete_case(self, case_id: str):
        self.cases = [c for c in self.cases if c.case_id != case_id]
        self.save_data()

    def delete_evidence(self, case_id: str, evidence_id: str):
        case = self.get_case(case_id)
        if case:
            case.evidence = [e for e in case.evidence if e.evidence_id != evidence_id]
            self.save_data()

    def find_evidence(self, evidence_id: str) -> Tuple[Optional[Case], Optional[Evidence]]:
        for c in self.cases:
            for e in c.evidence:
                if e.evidence_id == evidence_id:
                    return c, e
        return None, None


class StateManager:
    def __init__(self, app_dir: Path = DEFAULT_APP_DIR):
        self.app_dir = app_dir
        self.state_file = self.app_dir / "state"
        self.settings_file = self.app_dir / "settings.json"
        self._ensure_app_dir()

    def _ensure_app_dir(self):
        if not self.app_dir.exists():
            self.app_dir.mkdir(parents=True, exist_ok=True)

    def set_active(self, case_id: Optional[str] = None, evidence_id: Optional[str] = None):
        state = self.get_active()
        state["case_id"] = case_id
        state["evidence_id"] = evidence_id
        with open(self.state_file, 'w') as f:
            json.dump(state, f)

    def get_active(self) -> dict:
        if not self.state_file.exists():
            return {"case_id": None, "evidence_id": None}
        try:
            with open(self.state_file, 'r') as f:
                return json.load(f)
        except (json.JSONDecodeError, IOError):
            return {"case_id": None, "evidence_id": None}

    def get_settings(self) -> dict:
        if not self.settings_file.exists():
            return {"pgp_enabled": True}
        try:
            with open(self.settings_file, 'r') as f:
                return json.load(f)
        except (json.JSONDecodeError, IOError):
            return {"pgp_enabled": True}

    def set_setting(self, key: str, value):
        settings = self.get_settings()
        settings[key] = value
        with open(self.settings_file, 'w') as f:
            json.dump(settings, f)
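The "atomic-ish write" in `Storage.save_data()` above can be demonstrated standalone: write to a sibling temp file, then rename over the target. `Path.replace()` is an atomic rename on POSIX, so a crash mid-write leaves the previous `data.json` intact rather than a half-written one. The directory here is a throwaway temp dir:

```python
import json
import tempfile
from pathlib import Path

# Throwaway location standing in for ~/.trace/data.json
data_file = Path(tempfile.mkdtemp()) / "data.json"
temp_file = data_file.with_suffix(".tmp")

# Write the full payload to the temp file first...
with open(temp_file, 'w') as f:
    json.dump([{"case_number": "DEMO-2024-001"}], f, indent=2)
# ...then atomically rename it over the real file
temp_file.replace(data_file)

loaded = json.loads(data_file.read_text())
print(loaded[0]["case_number"])
```

It is "atomic-ish" rather than fully durable: without an `fsync` before the rename, a power loss can still lose the newest contents, though readers never see a torn file.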


@@ -0,0 +1,8 @@
"""Storage implementation modules"""
from .lock_manager import LockManager
from .state_manager import StateManager
from .storage import Storage
from .demo_data import create_demo_case
__all__ = ['LockManager', 'StateManager', 'Storage', 'create_demo_case']


@@ -0,0 +1,143 @@
"""Demo case creation for first-time users"""
from ..models import Case, Evidence, Note


def create_demo_case() -> Case:
    """Create a demo case with evidence showcasing all features"""
    demo_case = Case(
        case_number="DEMO-2024-001",
        name="Sample Investigation",
        investigator="Demo User"
    )
    # Add case-level notes to demonstrate case notes feature
    case_note1 = Note(content="""Initial case briefing: Suspected data exfiltration incident.
Key objectives:
- Identify compromised systems
- Determine scope of data loss
- Document timeline of events
#incident-response #data-breach #investigation""")
    case_note1.calculate_hash()
    case_note1.extract_tags()
    case_note1.extract_iocs()
    demo_case.notes.append(case_note1)
    case_note2 = Note(content="""Investigation lead: Employee reported suspicious email from sender@phishing-domain.com
Initial analysis shows potential credential harvesting attempt.
Review email headers and attachments for IOCs. #phishing #email-analysis""")
    case_note2.calculate_hash()
    case_note2.extract_tags()
    case_note2.extract_iocs()
    demo_case.notes.append(case_note2)
    # Create evidence 1: Compromised laptop
    evidence1 = Evidence(
        name="Employee Laptop HDD",
        description="Primary workstation hard drive - user reported suspicious activity"
    )
    # Add source hash for chain of custody demonstration
    evidence1.metadata["source_hash"] = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    # Add notes to evidence 1 with various features
    note1 = Note(content="""Forensic imaging completed. Drive imaged using FTK Imager.
Image hash verified: SHA256 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Chain of custody maintained throughout process. #forensics #imaging #chain-of-custody""")
    note1.calculate_hash()
    note1.extract_tags()
    note1.extract_iocs()
    evidence1.notes.append(note1)
    note2 = Note(content="""Discovered suspicious connections to external IP addresses:
- 192.168.1.100 (local gateway)
- 203.0.113.45 (external, geolocation: Unknown)
- 198.51.100.78 (command and control server suspected)
Browser history shows visits to malicious-site.com and data-exfil.net.
#network-analysis #ioc #c2-server""")
    note2.calculate_hash()
    note2.extract_tags()
    note2.extract_iocs()
    evidence1.notes.append(note2)
    note3 = Note(content="""Malware identified in temp directory:
File: evil.exe
MD5: d41d8cd98f00b204e9800998ecf8427e
SHA1: da39a3ee5e6b4b0d3255bfef95601890afd80709
SHA256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Submitting to VirusTotal for analysis. #malware #hash-analysis #virustotal""")
    note3.calculate_hash()
    note3.extract_tags()
    note3.extract_iocs()
    evidence1.notes.append(note3)
    note4 = Note(content="""Timeline analysis reveals:
- 2024-01-15 09:23:45 - Suspicious email received
- 2024-01-15 09:24:12 - User clicked phishing link https://evil-domain.com/login
- 2024-01-15 09:25:03 - Credentials submitted to attacker-controlled site
- 2024-01-15 09:30:15 - Lateral movement detected
User credentials compromised. Recommend immediate password reset. #timeline #lateral-movement""")
    note4.calculate_hash()
    note4.extract_tags()
    note4.extract_iocs()
    evidence1.notes.append(note4)
    demo_case.evidence.append(evidence1)
    # Create evidence 2: Network logs
    evidence2 = Evidence(
        name="Firewall Logs",
        description="Corporate firewall logs from incident timeframe"
    )
    evidence2.metadata["source_hash"] = "a3f5c8b912e4d67f89b0c1a2e3d4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2"
    note5 = Note(content="""Log analysis shows outbound connections to suspicious domains:
- attacker-c2.com on port 443 (encrypted channel)
- data-upload.net on port 8080 (unencrypted)
- exfil-server.org on port 22 (SSH tunnel)
Total data transferred: approximately 2.3 GB over 4 hours.
#log-analysis #data-exfiltration #network-traffic""")
    note5.calculate_hash()
    note5.extract_tags()
    note5.extract_iocs()
    evidence2.notes.append(note5)
    note6 = Note(content="""Contact information found in malware configuration:
Email: attacker@malicious-domain.com
Backup C2: 2001:0db8:85a3:0000:0000:8a2e:0370:7334 (IPv6)
Cross-referencing with threat intelligence databases. #threat-intel #attribution""")
    note6.calculate_hash()
    note6.extract_tags()
    note6.extract_iocs()
    evidence2.notes.append(note6)
    demo_case.evidence.append(evidence2)
    # Create evidence 3: Email forensics
    evidence3 = Evidence(
        name="Phishing Email",
        description="Original phishing email preserved in .eml format"
    )
    note7 = Note(content="""Email headers analysis:
From: sender@phishing-domain.com (spoofed)
Reply-To: attacker@evil-mail-server.net
X-Originating-IP: 198.51.100.99
Email contains embedded tracking pixel at http://tracking.malicious-site.com/pixel.gif
Attachment: invoice.pdf.exe (double extension trick) #email-forensics #phishing-analysis""")
    note7.calculate_hash()
    note7.extract_tags()
    note7.extract_iocs()
    evidence3.notes.append(note7)
    demo_case.evidence.append(evidence3)
    return demo_case


@@ -0,0 +1,87 @@
"""File lock manager for preventing concurrent access"""
import os
import sys
import time
from pathlib import Path


class LockManager:
    """Cross-platform file lock manager to prevent concurrent access"""

    def __init__(self, lock_file: Path):
        self.lock_file = lock_file
        self.acquired = False

    def acquire(self, timeout: int = 5):
        """Acquire lock with timeout. Returns True if successful."""
        start_time = time.time()
        while time.time() - start_time < timeout:
            try:
                # Create the lock file exclusively: O_EXCL makes os.open fail
                # if the file already exists (atomic on most systems)
                fd = os.open(str(self.lock_file), os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.write(fd, str(os.getpid()).encode())
                os.close(fd)
                self.acquired = True
                return True
            except FileExistsError:
                # Lock file exists; check whether the owning process is still alive
                if self._is_stale_lock():
                    # Remove the stale lock and retry
                    try:
                        self.lock_file.unlink()
                    except FileNotFoundError:
                        pass
                    continue
                # Active lock, wait a bit
                time.sleep(0.1)
            except Exception:
                # Other errors, wait and retry
                time.sleep(0.1)
        return False

    def _is_stale_lock(self):
        """Check if lock file is stale (process no longer exists)"""
        try:
            if not self.lock_file.exists():
                return False
            with open(self.lock_file, 'r') as f:
                pid = int(f.read().strip())
            # Check if the process exists (cross-platform)
            if sys.platform == 'win32':
                import ctypes
                kernel32 = ctypes.windll.kernel32
                PROCESS_QUERY_INFORMATION = 0x0400
                handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, 0, pid)
                if handle:
                    kernel32.CloseHandle(handle)
                    return False
                return True
            else:
                # Unix/Linux: signal 0 checks for existence without delivering a signal
                try:
                    os.kill(pid, 0)
                    return False  # Process exists
                except OSError:
                    return True  # Process doesn't exist
        except (ValueError, FileNotFoundError, PermissionError):
            return True

    def release(self):
        """Release the lock"""
        if self.acquired:
            try:
                self.lock_file.unlink()
            except FileNotFoundError:
                pass
            self.acquired = False

    def __enter__(self):
        if not self.acquire():
            raise RuntimeError("Could not acquire lock: another instance is running")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.release()
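The O_CREAT | O_EXCL creation used in acquire() is the core of the design: an exclusive create either succeeds atomically or raises FileExistsError, so two processes can never both hold the lock. A minimal standalone sketch of that pattern (temp path, not the app's real ~/.trace/app.lock):

```python
import os
import tempfile
from pathlib import Path

# Exclusive create: O_EXCL makes os.open fail atomically if the file exists,
# so exactly one process can win the race for the lock.
lock_path = Path(tempfile.mkdtemp()) / "app.lock"

fd = os.open(str(lock_path), os.O_CREAT | os.O_EXCL | os.O_WRONLY)
os.write(fd, str(os.getpid()).encode())  # record the owner PID for staleness checks
os.close(fd)

# A second attempt on the same path now fails with FileExistsError.
try:
    os.open(str(lock_path), os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    contended = False
except FileExistsError:
    contended = True

owner_ok = int(lock_path.read_text()) == os.getpid()
lock_path.unlink()  # release

print(contended, owner_ok)  # True True
```

Storing the owner PID is what makes the stale-lock check possible: a later process can read the file and probe whether that PID still exists.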


@@ -0,0 +1,92 @@
"""State manager for active context and settings"""
import json
from pathlib import Path
from typing import Optional, TYPE_CHECKING

if TYPE_CHECKING:
    from .storage import Storage

DEFAULT_APP_DIR = Path.home() / ".trace"


class StateManager:
    """Manages active context and user settings"""

    def __init__(self, app_dir: Path = DEFAULT_APP_DIR):
        self.app_dir = app_dir
        self.state_file = self.app_dir / "state"
        self.settings_file = self.app_dir / "settings.json"
        self._ensure_app_dir()

    def _ensure_app_dir(self):
        if not self.app_dir.exists():
            self.app_dir.mkdir(parents=True, exist_ok=True)

    def set_active(self, case_id: Optional[str] = None, evidence_id: Optional[str] = None):
        state = self.get_active()
        state["case_id"] = case_id
        state["evidence_id"] = evidence_id
        # Atomic write: write to temp file then rename
        temp_file = self.state_file.with_suffix(".tmp")
        with open(temp_file, 'w', encoding='utf-8') as f:
            json.dump(state, f, ensure_ascii=False)
        temp_file.replace(self.state_file)

    def get_active(self) -> dict:
        if not self.state_file.exists():
            return {"case_id": None, "evidence_id": None}
        try:
            with open(self.state_file, 'r', encoding='utf-8') as f:
                return json.load(f)
        except (json.JSONDecodeError, IOError):
            return {"case_id": None, "evidence_id": None}

    def validate_and_clear_stale(self, storage: 'Storage') -> str:
        """Validate active state against storage and clear stale references.

        Returns a warning message if state was cleared, empty string otherwise."""
        state = self.get_active()
        case_id = state.get("case_id")
        evidence_id = state.get("evidence_id")
        warning = ""
        if case_id:
            case = storage.get_case(case_id)
            if not case:
                warning = f"Active case (ID: {case_id[:8]}...) no longer exists. Clearing active context."
                self.set_active(None, None)
                return warning
            # Validate evidence if set
            if evidence_id:
                _, evidence = storage.find_evidence(evidence_id)
                if not evidence:
                    warning = f"Active evidence (ID: {evidence_id[:8]}...) no longer exists. Clearing to case level."
                    self.set_active(case_id, None)
                    return warning
        elif evidence_id:
            # Evidence set without a case - invalid state
            warning = "Invalid state: evidence set without case. Clearing active context."
            self.set_active(None, None)
            return warning
        return warning

    def get_settings(self) -> dict:
        if not self.settings_file.exists():
            return {"pgp_enabled": True}
        try:
            with open(self.settings_file, 'r', encoding='utf-8') as f:
                return json.load(f)
        except (json.JSONDecodeError, IOError):
            return {"pgp_enabled": True}

    def set_setting(self, key: str, value):
        settings = self.get_settings()
        settings[key] = value
        # Atomic write: write to temp file then rename
        temp_file = self.settings_file.with_suffix(".tmp")
        with open(temp_file, 'w', encoding='utf-8') as f:
            json.dump(settings, f, ensure_ascii=False)
        temp_file.replace(self.settings_file)
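Both set_active() and set_setting() rely on the same write-temp-then-rename idiom: Path.replace is an atomic rename on POSIX, so a reader never observes a half-written file even if the writer crashes mid-dump. A self-contained sketch of the idiom (temp directory and the helper name write_atomic are illustrative, not part of the module):

```python
import json
import tempfile
from pathlib import Path

app_dir = Path(tempfile.mkdtemp())
settings_file = app_dir / "settings.json"

def write_atomic(path: Path, payload: dict) -> None:
    # Serialize the full document to a sibling temp file first...
    tmp = path.with_suffix(".tmp")
    with open(tmp, 'w', encoding='utf-8') as f:
        json.dump(payload, f, ensure_ascii=False)
    # ...then swap it into place; Path.replace is an atomic rename on POSIX
    tmp.replace(path)

write_atomic(settings_file, {"pgp_enabled": True})
write_atomic(settings_file, {"pgp_enabled": False})  # overwriting is just as safe

loaded = json.loads(settings_file.read_text(encoding='utf-8'))
print(loaded)  # {'pgp_enabled': False}
```

The temp file lives in the same directory as the target so the rename never crosses a filesystem boundary, which would forfeit atomicity.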


@@ -0,0 +1,112 @@
"""Main storage class for persisting cases, evidence, and notes"""
import json
from pathlib import Path
from typing import List, Optional, Tuple

from ..models import Case, Evidence
from .lock_manager import LockManager
from .demo_data import create_demo_case

DEFAULT_APP_DIR = Path.home() / ".trace"


class Storage:
    """Manages persistence of all forensic data"""

    def __init__(self, app_dir: Path = DEFAULT_APP_DIR, acquire_lock: bool = True):
        self.app_dir = app_dir
        self.data_file = self.app_dir / "data.json"
        self.lock_file = self.app_dir / "app.lock"
        self.lock_manager = None
        self._ensure_app_dir()
        # Acquire lock to prevent concurrent access
        if acquire_lock:
            self.lock_manager = LockManager(self.lock_file)
            if not self.lock_manager.acquire(timeout=5):
                raise RuntimeError("Another instance of trace is already running. Please close it first.")
        self.cases: List[Case] = self._load_data()
        # Create demo case on first launch (only if data loaded successfully and is empty)
        if not self.cases and self.data_file.exists():
            # File exists but holds no cases - loaded successfully, leave it empty
            pass
        elif not self.cases and not self.data_file.exists():
            # No file exists - first run
            demo_case = create_demo_case()
            self.cases.append(demo_case)
            self.save_data()

    def __del__(self):
        """Release lock when Storage object is destroyed"""
        if self.lock_manager:
            self.lock_manager.release()

    def _ensure_app_dir(self):
        if not self.app_dir.exists():
            self.app_dir.mkdir(parents=True, exist_ok=True)

    def _load_data(self) -> List[Case]:
        if not self.data_file.exists():
            return []
        try:
            with open(self.data_file, 'r', encoding='utf-8') as f:
                data = json.load(f)
            return [Case.from_dict(c) for c in data]
        except (json.JSONDecodeError, IOError, KeyError, ValueError) as e:
            # Corrupted JSON - create a backup, then raise
            import shutil
            from datetime import datetime
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            backup_file = self.app_dir / f"data.json.corrupted.{timestamp}"
            try:
                shutil.copy2(self.data_file, backup_file)
            except Exception:
                pass
            # Raise with information about the backup location
            raise RuntimeError(f"Data file is corrupted. Backup saved to: {backup_file}\nError: {e}")

    def start_fresh(self):
        """Start with fresh data (for corrupted JSON recovery)"""
        self.cases = []
        demo_case = create_demo_case()
        self.cases.append(demo_case)
        self.save_data()

    def save_data(self):
        data = [c.to_dict() for c in self.cases]
        # Write to a temp file then rename for an atomic-ish write
        temp_file = self.data_file.with_suffix(".tmp")
        with open(temp_file, 'w', encoding='utf-8') as f:
            json.dump(data, f, indent=2, ensure_ascii=False)
        temp_file.replace(self.data_file)

    def add_case(self, case: Case):
        self.cases.append(case)
        self.save_data()

    def get_case(self, case_id: str) -> Optional[Case]:
        # Case ID lookup
        for c in self.cases:
            if c.case_id == case_id:
                return c
        return None

    def delete_case(self, case_id: str):
        self.cases = [c for c in self.cases if c.case_id != case_id]
        self.save_data()

    def delete_evidence(self, case_id: str, evidence_id: str):
        case = self.get_case(case_id)
        if case:
            case.evidence = [e for e in case.evidence if e.evidence_id != evidence_id]
            self.save_data()

    def find_evidence(self, evidence_id: str) -> Tuple[Optional[Case], Optional[Evidence]]:
        for c in self.cases:
            for e in c.evidence:
                if e.evidence_id == evidence_id:
                    return c, e
        return None, None
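The corruption path in _load_data() never discards evidence: the unreadable file is copied aside under a timestamped name before the error propagates. A simplified standalone version of that recovery behavior (temp directory and the helper name load_or_backup are illustrative):

```python
import json
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

app_dir = Path(tempfile.mkdtemp())
data_file = app_dir / "data.json"
data_file.write_text('[{"case_id": "abc", truncated', encoding='utf-8')  # deliberately corrupt

def load_or_backup(path: Path):
    try:
        return json.loads(path.read_text(encoding='utf-8'))
    except (json.JSONDecodeError, ValueError) as e:
        # Preserve the corrupt file under a timestamped name instead of losing it
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        backup = path.parent / f"{path.name}.corrupted.{stamp}"
        shutil.copy2(path, backup)
        raise RuntimeError(f"Data file is corrupted. Backup saved to: {backup}") from e

try:
    load_or_backup(data_file)
    backed_up = False
except RuntimeError:
    backed_up = True

backups = list(app_dir.glob("data.json.corrupted.*"))
print(backed_up, len(backups))  # True 1
```

In the real class the raised RuntimeError is what the UI catches to offer start_fresh() as a recovery option.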


@@ -21,7 +21,8 @@ class TestModels(unittest.TestCase):
 class TestStorage(unittest.TestCase):
     def setUp(self):
         self.test_dir = Path(tempfile.mkdtemp())
-        self.storage = Storage(app_dir=self.test_dir)
+        # Disable lock for tests to allow multiple Storage instances
+        self.storage = Storage(app_dir=self.test_dir, acquire_lock=False)

     def tearDown(self):
         shutil.rmtree(self.test_dir)
@@ -31,7 +32,7 @@ class TestStorage(unittest.TestCase):
         self.storage.add_case(case)
         # Reload storage from same dir
-        new_storage = Storage(app_dir=self.test_dir)
+        new_storage = Storage(app_dir=self.test_dir, acquire_lock=False)
         loaded_case = new_storage.get_case(case.case_id)
         self.assertIsNotNone(loaded_case)

trace/tui/__init__.py Normal file

@@ -0,0 +1,7 @@
"""TUI (Text User Interface) package for trace application"""
# Import from the main tui_app module for backward compatibility
# The tui_app.py file contains the main TUI class and run_tui function
from ..tui_app import run_tui, TUI
__all__ = ['run_tui', 'TUI']


@@ -0,0 +1,5 @@
"""TUI handlers for various operations"""
from .export_handler import ExportHandler
__all__ = ['ExportHandler']


@@ -0,0 +1,238 @@
"""Export functionality for TUI"""
import time
import datetime
from pathlib import Path
from typing import List, Tuple, Optional

from ...models import Note, Case, Evidence


class ExportHandler:
    """Handles exporting IOCs and notes to files"""

    @staticmethod
    def export_iocs_to_file(
        iocs_with_counts: List[Tuple[str, int, str]],
        active_case: Optional[Case],
        active_evidence: Optional[Evidence],
        get_iocs_func=None
    ) -> Tuple[bool, str]:
        """
        Export IOCs to a text file

        Args:
            iocs_with_counts: List of (ioc, count, type) tuples
            active_case: Active case context
            active_evidence: Active evidence context
            get_iocs_func: Function to get IOCs for a list of notes

        Returns:
            Tuple of (success: bool, message: str)
        """
        if not iocs_with_counts:
            return False, "No IOCs to export."
        # Determine context for filename
        if active_evidence:
            context_name = f"{active_case.case_number}_{active_evidence.name}" if active_case else active_evidence.name
        elif active_case:
            context_name = active_case.case_number
        else:
            context_name = "unknown"
        # Clean filename
        context_name = "".join(c if c.isalnum() or c in ('-', '_') else '_' for c in context_name)
        # Create exports directory if it doesn't exist
        export_dir = Path.home() / ".trace" / "exports"
        export_dir.mkdir(parents=True, exist_ok=True)
        # Generate filename with timestamp
        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"iocs_{context_name}_{timestamp}.txt"
        filepath = export_dir / filename
        # Build export content
        lines = []
        lines.append(f"# IOC Export - {context_name}")
        lines.append(f"# Generated: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
        lines.append("")
        if active_evidence:
            # Evidence context - only evidence IOCs
            lines.append(f"## Evidence: {active_evidence.name}")
            lines.append("")
            for ioc, count, ioc_type in iocs_with_counts:
                lines.append(f"{ioc}\t[{ioc_type}]\t({count} occurrences)")
        elif active_case and get_iocs_func:
            # Case context - case-level IOCs plus each evidence's IOCs, with separators
            case_iocs = get_iocs_func(active_case.notes)
            if case_iocs:
                lines.append("## Case Notes")
                lines.append("")
                for ioc, count, ioc_type in case_iocs:
                    lines.append(f"{ioc}\t[{ioc_type}]\t({count} occurrences)")
                lines.append("")
            # Get IOCs from each evidence item
            for ev in active_case.evidence:
                ev_iocs = get_iocs_func(ev.notes)
                if ev_iocs:
                    lines.append(f"## Evidence: {ev.name}")
                    lines.append("")
                    for ioc, count, ioc_type in ev_iocs:
                        lines.append(f"{ioc}\t[{ioc_type}]\t({count} occurrences)")
                    lines.append("")
        # Write to file
        try:
            with open(filepath, 'w', encoding='utf-8') as f:
                f.write('\n'.join(lines))
            return True, f"IOCs exported to: {filepath}"
        except Exception as e:
            return False, f"Export failed: {str(e)}"

    @staticmethod
    def export_case_to_markdown(case: Case) -> Tuple[bool, str]:
        """
        Export a case (and all its evidence) to markdown

        Args:
            case: The case to export

        Returns:
            Tuple of (success: bool, message: str)
        """
        # Create exports directory if it doesn't exist
        export_dir = Path.home() / ".trace" / "exports"
        export_dir.mkdir(parents=True, exist_ok=True)
        # Generate filename
        case_name = "".join(c if c.isalnum() or c in ('-', '_') else '_' for c in case.case_number)
        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"case_{case_name}_{timestamp}.md"
        filepath = export_dir / filename
        try:
            with open(filepath, 'w', encoding='utf-8') as f:
                f.write("# Forensic Notes Export\n\n")
                f.write(f"Generated on: {time.ctime()}\n\n")
                # Write case info
                f.write(f"## Case: {case.case_number}\n")
                if case.name:
                    f.write(f"**Name:** {case.name}\n")
                if case.investigator:
                    f.write(f"**Investigator:** {case.investigator}\n")
                f.write(f"**Case ID:** {case.case_id}\n\n")
                # Case notes
                f.write("### Case Notes\n")
                if not case.notes:
                    f.write("_No notes._\n")
                for note in case.notes:
                    ExportHandler._write_note_markdown(f, note)
                # Evidence
                f.write("\n### Evidence\n")
                if not case.evidence:
                    f.write("_No evidence._\n")
                for ev in case.evidence:
                    f.write(f"#### Evidence: {ev.name}\n")
                    if ev.description:
                        f.write(f"_{ev.description}_\n")
                    f.write(f"**ID:** {ev.evidence_id}\n")
                    # Include source hash if available
                    source_hash = ev.metadata.get("source_hash")
                    if source_hash:
                        f.write(f"**Source Hash:** `{source_hash}`\n")
                    f.write("\n")
                    f.write("##### Evidence Notes\n")
                    if not ev.notes:
                        f.write("_No notes._\n")
                    for note in ev.notes:
                        ExportHandler._write_note_markdown(f, note)
                    f.write("\n")
            return True, f"Case exported to: {filepath}"
        except Exception as e:
            return False, f"Export failed: {str(e)}"

    @staticmethod
    def export_evidence_to_markdown(
        evidence: Evidence,
        case: Optional[Case]
    ) -> Tuple[bool, str]:
        """
        Export evidence to markdown

        Args:
            evidence: The evidence to export
            case: The parent case (for context)

        Returns:
            Tuple of (success: bool, message: str)
        """
        # Create exports directory if it doesn't exist
        export_dir = Path.home() / ".trace" / "exports"
        export_dir.mkdir(parents=True, exist_ok=True)
        # Generate filename
        case_name = "".join(c if c.isalnum() or c in ('-', '_') else '_' for c in case.case_number) if case else "unknown"
        ev_name = "".join(c if c.isalnum() or c in ('-', '_') else '_' for c in evidence.name)
        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"evidence_{case_name}_{ev_name}_{timestamp}.md"
        filepath = export_dir / filename
        try:
            with open(filepath, 'w', encoding='utf-8') as f:
                f.write("# Forensic Evidence Export\n\n")
                f.write(f"Generated on: {time.ctime()}\n\n")
                # Case context
                if case:
                    f.write(f"**Case:** {case.case_number}\n")
                    if case.name:
                        f.write(f"**Case Name:** {case.name}\n")
                    f.write("\n")
                # Evidence info
                f.write(f"## Evidence: {evidence.name}\n")
                if evidence.description:
                    f.write(f"**Description:** {evidence.description}\n")
                if evidence.metadata.get("source_hash"):
                    f.write(f"**Source Hash:** `{evidence.metadata['source_hash']}`\n")
                f.write(f"**Evidence ID:** {evidence.evidence_id}\n\n")
                # Notes
                f.write("### Notes\n")
                if not evidence.notes:
                    f.write("_No notes._\n")
                for note in evidence.notes:
                    ExportHandler._write_note_markdown(f, note)
            return True, f"Evidence exported to: {filepath}"
        except Exception as e:
            return False, f"Export failed: {str(e)}"

    @staticmethod
    def _write_note_markdown(f, note: Note):
        """Helper to write a note in markdown format"""
        f.write(f"- **{time.ctime(note.timestamp)}**\n")
        f.write(f"  - Content: {note.content}\n")
        if note.tags:
            tags_str = " ".join([f"#{tag}" for tag in note.tags])
            f.write(f"  - Tags: {tags_str}\n")
        f.write(f"  - Hash: `{note.content_hash}`\n")
        if note.signature:
            f.write("  - **Signature Verified:**\n")
            f.write("    ```\n")
            for line in note.signature.splitlines():
                f.write(f"    {line}\n")
            f.write("    ```\n")
        f.write("\n")
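All three export methods build safe filenames with the same comprehension: keep alphanumerics, '-' and '_', and replace every other character with '_'. Isolated into a helper (the name sanitize is illustrative), the behavior is easy to see:

```python
def sanitize(name: str) -> str:
    # Keep alphanumerics, hyphen, and underscore; everything else becomes '_'
    return "".join(c if c.isalnum() or c in ('-', '_') else '_' for c in name)

print(sanitize("CASE-2025/014: USB Drive"))  # CASE-2025_014__USB_Drive
print(sanitize("evidence.img"))              # evidence_img
```

Note that consecutive disallowed characters each produce an underscore, so names may gain doubled '_' runs; the mapping is lossy but filesystem-safe on every platform.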


@@ -0,0 +1,6 @@
"""Rendering utilities for TUI"""
from .colors import init_colors, ColorPairs
from .text_renderer import TextRenderer
__all__ = ['init_colors', 'ColorPairs', 'TextRenderer']


@@ -0,0 +1,43 @@
"""Color pair initialization and constants for TUI"""
import curses


class ColorPairs:
    """Color pair constants"""
    SELECTION = 1      # Black on cyan
    SUCCESS = 2        # Green on black
    WARNING = 3        # Yellow on black
    ERROR = 4          # Red on black
    HEADER = 5         # Cyan on black
    METADATA = 6       # White on black
    BORDER = 7         # Blue on black
    TAG = 8            # Magenta on black
    IOC_SELECTED = 9   # Red on cyan
    TAG_SELECTED = 10  # Yellow on cyan


def init_colors():
    """Initialize color pairs for the TUI"""
    curses.start_color()
    if curses.has_colors():
        # Selection / Highlight
        curses.init_pair(ColorPairs.SELECTION, curses.COLOR_BLACK, curses.COLOR_CYAN)
        # Success / Active indicators
        curses.init_pair(ColorPairs.SUCCESS, curses.COLOR_GREEN, curses.COLOR_BLACK)
        # Info / Warnings
        curses.init_pair(ColorPairs.WARNING, curses.COLOR_YELLOW, curses.COLOR_BLACK)
        # Errors / Critical / IOCs
        curses.init_pair(ColorPairs.ERROR, curses.COLOR_RED, curses.COLOR_BLACK)
        # Headers / Titles (bright cyan)
        curses.init_pair(ColorPairs.HEADER, curses.COLOR_CYAN, curses.COLOR_BLACK)
        # Metadata / Secondary text (dim)
        curses.init_pair(ColorPairs.METADATA, curses.COLOR_WHITE, curses.COLOR_BLACK)
        # Borders / Separators (blue)
        curses.init_pair(ColorPairs.BORDER, curses.COLOR_BLUE, curses.COLOR_BLACK)
        # Tags (magenta)
        curses.init_pair(ColorPairs.TAG, curses.COLOR_MAGENTA, curses.COLOR_BLACK)
        # IOCs on selected background (red on cyan)
        curses.init_pair(ColorPairs.IOC_SELECTED, curses.COLOR_RED, curses.COLOR_CYAN)
        # Tags on selected background (yellow on cyan)
        curses.init_pair(ColorPairs.TAG_SELECTED, curses.COLOR_YELLOW, curses.COLOR_CYAN)


@@ -0,0 +1,137 @@
"""Text rendering utilities with highlighting support"""
import curses
import re

from ...models import Note
from .colors import ColorPairs


class TextRenderer:
    """Utility class for rendering text with highlights"""

    @staticmethod
    def safe_truncate(text, max_width, ellipsis="..."):
        """
        Safely truncate text to fit within max_width, handling Unicode characters.
        Uses a conservative approach to avoid curses display errors.
        """
        if not text:
            return text
        # Fits as-is
        if len(text) <= max_width:
            return text
        # Need to truncate - account for the ellipsis
        if max_width <= len(ellipsis):
            return ellipsis[:max_width]
        # Truncate conservatively (character by character) to handle multi-byte UTF-8
        target_len = max_width - len(ellipsis)
        truncated = text[:target_len]
        while len(truncated) > 0:
            test_str = truncated + ellipsis
            if len(test_str) <= max_width:
                return test_str
            # Trim one more character and retry
            truncated = truncated[:-1]
        return ellipsis[:max_width]

    @staticmethod
    def display_line_with_highlights(screen, y, x_start, line, is_selected=False):
        """
        Display a line with intelligent highlighting.

        - IOCs are highlighted with ColorPairs.ERROR (red)
        - Tags are highlighted with ColorPairs.WARNING (yellow)
        - Selection background is ColorPairs.SELECTION (cyan) for non-IOC text
        - IOC highlighting takes priority over selection
        """
        # Extract IOCs and tags
        highlights = []
        # Get IOCs with positions
        for text, start, end, ioc_type in Note.extract_iocs_with_positions(line):
            highlights.append((text, start, end, 'ioc'))
        # Get tags
        for match in re.finditer(r'#\w+', line):
            highlights.append((match.group(), match.start(), match.end(), 'tag'))
        # Sort by position and remove overlaps (IOCs take priority over tags)
        highlights.sort(key=lambda x: x[1])
        deduplicated = []
        last_end = -1
        for text, start, end, htype in highlights:
            if start >= last_end:
                deduplicated.append((text, start, end, htype))
                last_end = end
        highlights = deduplicated
        if not highlights:
            # No highlights - use the selection color if selected
            if is_selected:
                screen.attron(curses.color_pair(ColorPairs.SELECTION))
                screen.addstr(y, x_start, line)
                screen.attroff(curses.color_pair(ColorPairs.SELECTION))
            else:
                screen.addstr(y, x_start, line)
            return
        # Display with intelligent highlighting
        x_pos = x_start
        last_pos = 0
        for text, start, end, htype in highlights:
            # Add the text before this highlight
            if start > last_pos:
                text_before = line[last_pos:start]
                if is_selected:
                    screen.attron(curses.color_pair(ColorPairs.SELECTION))
                    screen.addstr(y, x_pos, text_before)
                    screen.attroff(curses.color_pair(ColorPairs.SELECTION))
                else:
                    screen.addstr(y, x_pos, text_before)
                x_pos += len(text_before)
            # Add the highlighted text
            if htype == 'ioc':
                # IOC highlighting: red on cyan if selected, red on black otherwise
                if is_selected:
                    screen.attron(curses.color_pair(ColorPairs.IOC_SELECTED) | curses.A_BOLD)
                    screen.addstr(y, x_pos, text)
                    screen.attroff(curses.color_pair(ColorPairs.IOC_SELECTED) | curses.A_BOLD)
                else:
                    screen.attron(curses.color_pair(ColorPairs.ERROR) | curses.A_BOLD)
                    screen.addstr(y, x_pos, text)
                    screen.attroff(curses.color_pair(ColorPairs.ERROR) | curses.A_BOLD)
            else:  # tag
                # Tag highlighting: yellow on cyan if selected, yellow on black otherwise
                if is_selected:
                    screen.attron(curses.color_pair(ColorPairs.TAG_SELECTED))
                    screen.addstr(y, x_pos, text)
                    screen.attroff(curses.color_pair(ColorPairs.TAG_SELECTED))
                else:
                    screen.attron(curses.color_pair(ColorPairs.WARNING))
                    screen.addstr(y, x_pos, text)
                    screen.attroff(curses.color_pair(ColorPairs.WARNING))
            x_pos += len(text)
            last_pos = end
        # Add any remaining text
        if last_pos < len(line):
            text_after = line[last_pos:]
            if is_selected:
                screen.attron(curses.color_pair(ColorPairs.SELECTION))
                screen.addstr(y, x_pos, text_after)
                screen.attroff(curses.color_pair(ColorPairs.SELECTION))
            else:
                screen.addstr(y, x_pos, text_after)
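safe_truncate's contract can be checked without a terminal. This is a simplified mirror of its length-based behavior (the in-class version adds a character-by-character retry loop for wide text; the helper name here is illustrative, and curses is not required):

```python
def safe_truncate(text, max_width, ellipsis="..."):
    # Simplified mirror of TextRenderer.safe_truncate without the curses dependency
    if not text:
        return text
    if len(text) <= max_width:
        return text
    # When even the ellipsis doesn't fit, degrade to a slice of the ellipsis
    if max_width <= len(ellipsis):
        return ellipsis[:max_width]
    return text[:max_width - len(ellipsis)] + ellipsis

print(safe_truncate("exfil-server.org", 20))  # fits: unchanged
print(safe_truncate("exfil-server.org", 10))  # 'exfil-s...'
print(safe_truncate("exfil-server.org", 2))   # '..'
```

The key property is that the result never exceeds max_width, even in the degenerate case where the width is smaller than the ellipsis itself.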

File diff suppressed because it is too large