---
title: "Regular Expressions in Digital Forensics: From Basic Patterns to Evidence Extraction"
description: "Comprehensive guide to regex applications in forensic analysis: IP addresses, e-mails, hashes, and complex log-parsing patterns for efficient evidence collection"
author: "Claude 4 Sonnet (Prompt: Mario Stöckl)"
last_updated: 2024-01-15
difficulty: intermediate
categories: ["analysis", "automation", "log-analysis"]
tags: ["regex", "pattern-matching", "log-analysis", "data-extraction", "text-processing", "automation", "yara-rules", "grep", "powershell", "python"]
tool_name: "Regular Expressions (Regex)"
related_tools: ["YARA", "Grep", "PowerShell", "Python"]
published: true
---

# Regular Expressions in Digital Forensics: From Basic Patterns to Evidence Extraction

Regular expressions (regex) are the Swiss Army knife of digital forensics. This universal pattern-matching language lets examiners run complex text searches, extract relevant data from terabytes of logs, and identify evidence systematically. From simple IP-address searches to building malware signatures, regex skills often separate a good forensic analyst from a great one.

## Why Regex Is Indispensable in Forensics

Modern investigations confront us with massive data volumes: gigabytes of log files, memory images, network traffic, and file systems with millions of entries. Manual review is impossible; this is where regex comes in:

- **Precise pattern matching**: Finds specific data formats (IP addresses, e-mail addresses, hashes) in unstructured text
- **Automation**: Enables scripting of recurring analysis patterns
- **Tool integration**: Core functionality in every major forensic tool
- **Efficiency**: Cuts analysis time from hours to minutes

## Forensics-Relevant Regex Fundamentals

### Basic Metacharacters

```regex
.    # Any character (except newline)
*    # 0 or more repetitions of the preceding element
+    # 1 or more repetitions
?    # 0 or 1 repetition (optional)
^    # Start of line
$    # End of line
[]   # Character class
()   # Grouping
|    # Alternation (OR)
\    # Escape character
```

### Quantifiers for Precise Matches

```regex
{n}    # Exactly n repetitions
{n,}   # At least n repetitions
{n,m}  # Between n and m repetitions
{,m}   # At most m repetitions
```

### Character Classes for Structured Data

```regex
\d      # Digit (0-9)
\w      # Word character (a-z, A-Z, 0-9, _)
\s      # Whitespace (space, tab, newline)
\D      # Non-digit
\W      # Non-word character
\S      # Non-whitespace
[a-z]   # Lowercase letters
[A-Z]   # Uppercase letters
[0-9]   # Digits
[^abc]  # Anything except a, b, c
```
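
To see how these classes and quantifiers combine into working forensic searches, here is a minimal Python sketch (the log line is made-up sample data) that builds a MAC-address and a timestamp pattern from exactly the building blocks above:

```python
import re

# Character classes + quantifiers combined into two common forensic patterns
mac_pattern = re.compile(r'\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b')   # MAC address
ts_pattern = re.compile(r'\b\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\b')    # ISO-like timestamp

sample = "2024-01-15 14:30:25 DHCPACK on 10.0.0.5 to aa:bb:cc:dd:ee:ff via eth0"
print(mac_pattern.findall(sample))   # ['aa:bb:cc:dd:ee:ff']
print(ts_pattern.findall(sample))    # ['2024-01-15 14:30:25']
```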

## Standard Forensic Patterns

### IP Addresses (IPv4)

```regex
# Basic pattern (less precise)
\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}

# Precise IPv4 validation
^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$

# Practical pattern for log analysis
(?:(?:25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(?:25[0-5]|2[0-4]\d|[01]?\d\d?)
```

**Application example**: extracting all IP addresses from IIS logs (note that `grep -E` uses POSIX ERE, which supports neither `\d` nor `(?:...)`, so the pattern is written with plain groups and `[0-9]`):

```bash
grep -oE '((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' access.log | sort | uniq -c | sort -nr
```

### E-Mail Addresses

```regex
# Simple pattern for quick searches
[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}

# RFC-compliant e-mail (simplified)
^[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$

# Optimized for forensics (less strict)
\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b
```
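
A quick way to put the simple pattern to work is to pull addresses out of an exported mailbox or log file and count sender domains. The following Python sketch illustrates this (the file name is just a placeholder):

```python
import re
from collections import Counter

email_re = re.compile(r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}')

def top_email_domains(path, limit=10):
    # Read the whole file tolerantly and count the domain part of every hit
    with open(path, 'r', encoding='utf-8', errors='ignore') as f:
        addresses = email_re.findall(f.read())
    domains = Counter(addr.split('@')[1].lower() for addr in addresses)
    return domains.most_common(limit)

# Example: top_email_domains('mailbox_export.txt')
```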

### Hash Values

```regex
# MD5 (32 hexadecimal characters)
\b[a-fA-F0-9]{32}\b

# SHA-1 (40 hexadecimal characters)
\b[a-fA-F0-9]{40}\b

# SHA-256 (64 hexadecimal characters)
\b[a-fA-F0-9]{64}\b

# Universal hash pattern
\b[a-fA-F0-9]{32,64}\b
```
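
Since the only difference between these patterns is the length, a small Python sketch can label the hits from the universal pattern by their probable algorithm (anything that is not 32, 40, or 64 characters long is reported as "other"):

```python
import re

hash_re = re.compile(r'\b[a-fA-F0-9]{32,64}\b')
HASH_TYPES = {32: 'MD5', 40: 'SHA-1', 64: 'SHA-256'}

def classify_hashes(text):
    # Group candidate hashes by a length-based type guess
    found = {}
    for candidate in hash_re.findall(text):
        label = HASH_TYPES.get(len(candidate), 'other')
        found.setdefault(label, set()).add(candidate.lower())
    return found

# Example: classify_hashes(open('ioc_report.txt', errors='ignore').read())
```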

### Bitcoin Addresses

```regex
# Legacy Bitcoin addresses (P2PKH and P2SH)
\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b

# Bech32 (SegWit) addresses
\bbc1[a-z0-9]{39,59}\b

# Combined
\b(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[a-z0-9]{39,59})\b
```
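
The legacy pattern produces false positives on ordinary Base58-looking strings. A sketch of a Base58Check verification (assuming the standard layout of legacy addresses: version byte, hash, 4-byte double-SHA256 checksum) can be used to filter the regex hits:

```python
import hashlib

B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def is_valid_legacy_btc(address):
    # Decode Base58 to an integer, then to the fixed 25-byte payload
    num = 0
    for ch in address:
        if ch not in B58:
            return False
        num = num * 58 + B58.index(ch)
    try:
        raw = num.to_bytes(25, 'big')   # version byte + hash160 + 4-byte checksum
    except OverflowError:
        return False
    checksum = hashlib.sha256(hashlib.sha256(raw[:-4]).digest()).digest()[:4]
    return raw[-4:] == checksum

# is_valid_legacy_btc('1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa')  # well-known genesis address
```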

### Windows File Paths

```regex
# Full Windows path
^[a-zA-Z]:\\(?:[^\\/:*?"<>|\r\n]+\\)*[^\\/:*?"<>|\r\n]*$

# UNC paths
^\\\\[^\\]+\\[^\\]+(?:\\[^\\]*)*$

# For log parsing (more flexible)
[a-zA-Z]:\\[^"\s<>|]*
```
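
In practice the relaxed pattern is handy for a quick overview of which files a log refers to. A short Python sketch (the sample line is invented) that extracts paths and summarizes the file extensions:

```python
import re
from collections import Counter

path_re = re.compile(r'[a-zA-Z]:\\[^"\s<>|]*')

def path_extension_stats(text):
    # Collect all path hits and count extensions of the final path component
    paths = path_re.findall(text)
    exts = Counter(
        p.rsplit('.', 1)[-1].lower()
        for p in paths
        if '.' in p.split('\\')[-1]
    )
    return paths, exts.most_common()

sample = r'Process created: C:\Users\Public\update.exe -> wrote C:\Temp\out.log'
print(path_extension_stats(sample))
```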

### Credit Card Numbers

```regex
# Visa (13-19 digits, starts with 4)
4[0-9]{12,18}

# MasterCard (16 digits, starts with 51-55)
5[1-5][0-9]{14}

# American Express (15 digits, starts with 34 or 37)
3[47][0-9]{13}

# Universal (with optional separators)
(?:\d{4}[-\s]?){3,4}\d{4}
```
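
Digit sequences matching these patterns show up constantly in dumps and are often not card numbers at all. Payment card numbers carry a Luhn check digit, so a short validation step removes most random hits. A minimal Python sketch:

```python
import re

card_re = re.compile(r'\b(?:\d{4}[-\s]?){3}\d{4}\b')   # 16-digit form with optional separators

def luhn_ok(number):
    # Standard Luhn checksum: double every second digit from the right
    digits = [int(d) for d in re.sub(r'\D', '', number)]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_candidates(text):
    # Regex pre-filter, then Luhn check to drop random digit groups
    return [hit for hit in card_re.findall(text) if luhn_ok(hit)]

print(luhn_ok('4111 1111 1111 1111'))   # True (well-known test number)
print(luhn_ok('1234 5678 9012 3456'))   # False
```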

## Tool-Specific Regex Implementations

### PowerShell Integration

```powershell
# Search for IP addresses in event logs
Get-WinEvent -LogName Security | Where-Object {
    $_.Message -match '\b(?:\d{1,3}\.){3}\d{1,3}\b'
} | Select-Object TimeCreated, Id, Message

# E-mail extraction from a memory dump
Select-String -Path "memdump.raw" -Pattern '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}' -AllMatches

# Hash values of malware samples
Get-ChildItem -Recurse | Get-FileHash | Where-Object {
    $_.Hash -match '^[a-fA-F0-9]{64}$'
}
```

### Grep Applications

```bash
# Suspicious executable files
grep -r -E '\.(exe|dll|scr|bat|cmd)$' /mnt/evidence/

# Timestamp extraction (ISO 8601); [0-9] instead of \d because -E is POSIX ERE
grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}[T ][0-9]{2}:[0-9]{2}:[0-9]{2}' application.log

# Base64-encoded data
grep -oE '[A-Za-z0-9+/]{20,}={0,2}' suspicious.txt

# Windows event IDs
grep -E 'Event ID: (4624|4625|4648|4656)' security.log
```

### Python Implementation

```python
import re

# Extract IP addresses with surrounding context
def extract_ips_with_context(text, context_chars=50):
    ip_pattern = r'\b(?:\d{1,3}\.){3}\d{1,3}\b'
    matches = []

    for match in re.finditer(ip_pattern, text):
        start = max(0, match.start() - context_chars)
        end = min(len(text), match.end() + context_chars)
        context = text[start:end]
        matches.append({
            'ip': match.group(),
            'position': match.start(),
            'context': context
        })

    return matches

# Generate malware signature strings
def generate_yara_strings(binary_data, min_length=10):
    # Search for printable ASCII strings
    ascii_pattern = rb'[ -~]{' + str(min_length).encode() + rb',}'
    strings = re.findall(ascii_pattern, binary_data)

    yara_strings = []
    for i, string in enumerate(strings[:20]):  # first 20 strings only
        # Escape problematic characters
        escaped = string.decode('ascii').replace('\\', '\\\\').replace('"', '\\"')
        yara_strings.append(f'$s{i} = "{escaped}"')

    return yara_strings
```

## YARA Rules with Regex

```yara
rule SuspiciousEmailPattern {
    strings:
        $email = /[a-zA-Z0-9._%+-]+@(tempmail|guerrillamail|10minutemail)\.(com|net|org)/ nocase
        $bitcoin = /\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b/
        $ransom_msg = /your files have been encrypted/i

    condition:
        $email and ($bitcoin or $ransom_msg)
}

rule LogAnalysisPattern {
    strings:
        $failed_login = /Failed login.*from\s+(\d{1,3}\.){3}\d{1,3}/
        $brute_force = /authentication failure.*rhost=(\d{1,3}\.){3}\d{1,3}/
        $suspicious_ua = /User-Agent:.*(?:sqlmap|nikto|nmap|masscan)/i

    condition:
        any of them
}
```
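
To run such rules over an evidence export, the `yara-python` package can be used. A minimal sketch (assuming the rules above are saved as `ransom_indicators.yar` and the export lives under `/evidence/export`):

```python
import yara
from pathlib import Path

# Compile the rule file once, then scan every file in the export directory
rules = yara.compile(filepath='ransom_indicators.yar')

for path in Path('/evidence/export').rglob('*'):
    if path.is_file():
        matches = rules.match(filepath=str(path))
        if matches:
            print(path, [m.rule for m in matches])
```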

## Performance Optimization and Pitfalls

### Avoiding Catastrophic Backtracking

**Problematic**:
```regex
(a+)+b  # Exponential runtime on input like "aaaa...c"
(.*)*   # Nested quantifiers
```

**Optimized**:
```regex
a+b      # Nested quantifier removed (or use an atomic group: (?>a+)b)
[^b]*b   # Negated character class instead of .*
```
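
The effect is easy to demonstrate. The following Python sketch times the problematic pattern against strings that can never match; every additional `a` roughly doubles the runtime, while the rewritten pattern stays instant (exact timings are machine-dependent):

```python
import re
import time

for n in (16, 18, 20, 22):
    text = 'a' * n + 'c'
    start = time.perf_counter()
    re.match(r'(a+)+b', text)            # never matches, forces full backtracking
    print(n, round(time.perf_counter() - start, 3), 's')

start = time.perf_counter()
re.match(r'a+b', 'a' * 100000 + 'c')     # fails in linear time
print('rewritten pattern:', round(time.perf_counter() - start, 6), 's')
```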

### Using Anchors for Efficiency

```regex
# Slow
\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}

# Faster with word boundaries
\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b

# Fastest for whole-line matches
^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$
```

### Using Compiled Patterns

```python
import re

# Compile once, use often
ip_pattern = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')
email_pattern = re.compile(r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}')

def analyze_log_file(filepath):
    with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
        content = f.read()

    ips = ip_pattern.findall(content)
    emails = email_pattern.findall(content)

    return ips, emails
```

## Practical Forensic Scenarios

### Incident Response: Lateral Movement Detection

```bash
# Search for PsExec activity
grep -E 'PSEXESVC.*started|PsExec.*\\\\[^\\]+\\' security.log

# Pass-the-hash attacks
grep -E 'Logon Type:\s+9.*NTLM.*[0-9a-fA-F]{32}' security.log

# WMI-based execution
grep -E 'WmiPrvSE.*ExecuteShellCommand|wmic.*process.*call.*create' system.log
```

### Malware Analysis: C2 Communication

```python
import re

# Domain Generation Algorithm (DGA) detection
dga_pattern = re.compile(r'\b[a-z]{8,20}\.(com|net|org|info)\b')

def detect_suspicious_domains(pcap_text):
    # Extract DNS queries
    dns_pattern = r'DNS.*query.*?([a-zA-Z0-9.-]+\.[a-zA-Z]{2,})'
    domains = re.findall(dns_pattern, pcap_text)

    suspicious = []
    for domain in domains:
        # Check for DGA characteristics
        if dga_pattern.match(domain.lower()):
            # Additional heuristic: very few vowels look machine-generated
            vowel_ratio = len(re.findall(r'[aeiou]', domain.lower())) / len(domain)
            if vowel_ratio < 0.2:
                suspicious.append(domain)

    return suspicious
```

### Data Exfiltration: Unusual Data Transfers

```regex
# Base64-encoded data in URLs
[?&]data=([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?

# DNS tunneling (unusually long subdomains)
\b[a-z0-9]{20,}\.[a-z0-9.-]+\.[a-z]{2,}\b

# Hex-encoded payloads
[?&]payload=[0-9a-fA-F]{40,}
```
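
Long subdomains alone produce false positives (CDNs, tracking hosts), so it helps to combine the DNS-tunneling pattern with an entropy check on the first label. A heuristic Python sketch (the 3.5-bit threshold is an assumption to tune per environment):

```python
import re
import math
from collections import Counter

tunnel_re = re.compile(r'\b([a-z0-9]{20,})\.[a-z0-9.-]+\.[a-z]{2,}\b', re.IGNORECASE)

def shannon_entropy(s):
    # Bits of entropy per character of the string
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def flag_dns_tunneling(log_text, min_entropy=3.5):
    # Report query names whose long first label also looks random
    hits = []
    for m in tunnel_re.finditer(log_text):
        if shannon_entropy(m.group(1).lower()) >= min_entropy:
            hits.append(m.group(0))
    return hits
```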

## Debugging and Testing

### Online Tools for Regex Development

1. **regex101.com**: Interactive regex development with explanations
2. **regexr.com**: Visual regex breakdown
3. **regexpal.com**: Quick tests without registration

### Regex Validation in Practice

```python
import re

def validate_regex_pattern(pattern, test_cases):
    """
    Validates a regex pattern against known test cases.
    """
    try:
        compiled = re.compile(pattern)
    except re.error as e:
        return False, f"Regex syntax error: {e}"

    results = []
    for test_input, expected in test_cases:
        match = compiled.search(test_input)
        found = match.group() if match else None
        results.append({
            'input': test_input,
            'expected': expected,
            'found': found,
            'correct': found == expected
        })

    return True, results

# Test cases for the IP pattern
ip_tests = [
    ('192.168.1.1', '192.168.1.1'),
    ('999.999.999.999', None),  # invalid IP
    ('text 10.0.0.1 more text', '10.0.0.1'),
    ('no.ip.here', None)
]

pattern = r'\b(?:(?:25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(?:25[0-5]|2[0-4]\d|[01]?\d\d?)\b'
valid, results = validate_regex_pattern(pattern, ip_tests)
```

## Common Mistakes and Solutions

### Problem: Greedy vs. Non-Greedy Quantifiers

```regex
# Problematic: greedy
<.*>     # Matches "<tag>content</tag>" in its entirety

# Solution: non-greedy
<.*?>    # Matches only "<tag>"

# Alternative: be specific
<[^>]*>  # Does not match any ">" inside
```
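
A one-line comparison in Python makes the difference obvious:

```python
import re

html = '<user>alice</user> logged in from <ip>10.0.0.5</ip>'
print(re.findall(r'<.*>', html))     # ['<user>alice</user> logged in from <ip>10.0.0.5</ip>']
print(re.findall(r'<.*?>', html))    # ['<user>', '</user>', '<ip>', '</ip>']
print(re.findall(r'<[^>]*>', html))  # ['<user>', '</user>', '<ip>', '</ip>']
```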

### Problem: Unintended Metacharacters

```regex
# Wrong: . intended as a literal character
192.168.1.1     # Also matches "192x168x1x1"

# Correct: escape metacharacters
192\.168\.1\.1  # Matches only the actual IP
```

### Problem: Missing Word Boundaries

```regex
# Problematic: matches substrings
\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}      # Also matches inside "1192.168.1.10"

# Solution: use word boundaries
\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b  # Complete IPs only
```

## Integration into Forensic Workflows

### Automated Triage Scripts

```bash
#!/bin/bash
# forensic_triage.sh - automated first-pass analysis

LOG_DIR="/evidence/logs"
OUTPUT_DIR="/analysis/regex_results"
mkdir -p "$OUTPUT_DIR"

# Extract IP addresses and find the most frequent ones
# ([0-9] and plain groups because grep -E is POSIX ERE)
echo "=== IP analysis ===" > "$OUTPUT_DIR/summary.txt"
find "$LOG_DIR" -name "*.log" -exec grep -h -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' {} \; | \
    sort | uniq -c | sort -nr | head -20 >> "$OUTPUT_DIR/summary.txt"

# Collect e-mail addresses
echo -e "\n=== E-mail addresses ===" >> "$OUTPUT_DIR/summary.txt"
find "$LOG_DIR" -name "*.log" -exec grep -h -oE '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}' {} \; | \
    sort | uniq >> "$OUTPUT_DIR/summary.txt"

# Suspicious process names
echo -e "\n=== Suspicious processes ===" >> "$OUTPUT_DIR/summary.txt"
find "$LOG_DIR" -name "*.log" -exec grep -h -iE '(powershell|cmd|wmic|psexec|mimikatz)' {} \; | \
    head -50 >> "$OUTPUT_DIR/summary.txt"
```

### PowerShell Modules for Recurring Tasks

```powershell
function Get-ForensicPatterns {
    param(
        [string]$Path,
        [string[]]$Patterns = @(
            '\b(?:\d{1,3}\.){3}\d{1,3}\b',                      # IP addresses
            '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',   # e-mail addresses
            '\b[a-fA-F0-9]{32,64}\b'                            # hash values
        )
    )

    $results = @{}

    foreach ($pattern in $Patterns) {
        # Use a dedicated variable name; $matches is a PowerShell automatic variable
        $found = Select-String -Path $Path -Pattern $pattern -AllMatches
        $results[$pattern] = $found | ForEach-Object {
            [PSCustomObject]@{
                File    = $_.Filename
                Line    = $_.LineNumber
                Match   = $_.Matches.Value
                Context = $_.Line
            }
        }
    }

    return $results
}
```

## Advanced Techniques

### Lookahead and Lookbehind

```regex
# Positive lookahead: "password" followed (later on the line) by a digit
password(?=.*\d)

# Negative lookahead: IPs outside the RFC 1918 private ranges
(?!(?:10\.|192\.168\.|172\.(?:1[6-9]|2[0-9]|3[01])\.))(?:\d{1,3}\.){3}\d{1,3}

# Positive lookbehind: number after "Port:"
(?<=Port:)\d+

# Negative lookbehind: not preceded by "Comment:"
(?<!Comment:).+@.+\..+
```
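
Applied in Python, the negative lookahead turns the generic IP pattern into an "external addresses only" filter. A minimal sketch with an invented log line:

```python
import re

# Exclude RFC 1918 ranges via negative lookahead, keep everything else
public_ip_re = re.compile(
    r'\b(?!(?:10\.|192\.168\.|172\.(?:1[6-9]|2[0-9]|3[01])\.))'
    r'(?:\d{1,3}\.){3}\d{1,3}\b'
)

log = 'Accepted connection from 192.168.1.10; beacon to 203.0.113.7 observed'
print(public_ip_re.findall(log))   # ['203.0.113.7']
```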

### Named Capture Groups

```python
import re

# Structured log parsing
log_pattern = re.compile(
    r'(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) '
    r'\[(?P<level>\w+)\] '
    r'(?P<source>\w+): '
    r'(?P<message>.*)'
)

def parse_log_entry(line):
    match = log_pattern.match(line)
    if match:
        return match.groupdict()
    return None

# Usage
log_line = "2024-01-15 14:30:25 [ERROR] auth: Failed login from 192.168.1.100"
parsed = parse_log_entry(log_line)
# Result: {'timestamp': '2024-01-15 14:30:25', 'level': 'ERROR',
#          'source': 'auth', 'message': 'Failed login from 192.168.1.100'}
```

## Next Steps

With this overview you can now:

1. **Practice**: Apply the patterns shown here in your current investigations
2. **Tool integration**: Embed regex in your preferred forensic tools
3. **Automation**: Build scripts for recurring analysis patterns
4. **Specialization**: Dig into tool-specific regex implementations
5. **Community**: Share your patterns and learn from other forensic analysts

### Further Resources

- **SANS Regex Cheat Sheet**: Compact reference for forensic analysts
- **RegexBuddy**: Professional regex development environment
- **Python re module documentation**: Detailed syntax reference
- **YARA-Rules repository**: Collection of forensics-relevant regex patterns

Regular expressions are a powerful tool that saves time and increases the precision of forensic analyses. Investing in solid regex skills pays off in every investigation and makes it possible to spot complex patterns that manual review would miss.