DFIRVault

Introducing ForensIQ: AI-Powered Elasticsearch Log Analysis for Cybersecurity Investigations


The Challenge of Modern Log Analysis

As cybersecurity professionals, we know that logs are the lifeblood of incident response and threat hunting. But with the exponential growth of log data—from endpoints, networks, cloud environments, and SIEMs—manually analyzing logs has become overwhelming.

Traditional SIEM tools help with correlation rules, but they often miss subtle attack patterns, especially in large datasets. Meanwhile, security teams are stretched thin, and adversaries are getting smarter.

What if we could combine Elasticsearch’s powerful log aggregation with AI-driven analysis to uncover hidden threats faster?

That’s where ForensIQ comes in.


🔗 https://github.com/dfirvault/forensIQ


What is ForensIQ?

ForensIQ is an open-source tool that bridges Elasticsearch and Ollama’s LLMs (like Mistral, Llama3, and Mixtral) to provide:

✅ Progressive log summarization – Analyzes logs in chunks while maintaining context
✅ Entity tracking – Identifies and links IPs, users, and malicious activity
✅ MITRE ATT&CK mapping – Automatically flags suspicious TTPs
✅ Interactive investigation – Lets you ask follow-up questions with full context
✅ Free and open source – Built on Ollama and Elasticsearch to dramatically speed up investigations

Unlike static SIEM rules, ForensIQ learns as it goes, making it ideal for:

  • Threat hunting in large datasets

  • Incident response triage

  • Forensic investigations with timeline reconstruction


How It Works

1. Elasticsearch + AI Integration

ForensIQ connects to your Elasticsearch cluster, retrieves logs, and processes them using Ollama’s LLMs. You can choose from multiple models depending on your hardware:

Model      RAM Required   Best For
tinyllama  1GB            Quick triage on low-end systems
mistral    4GB            Balanced speed/accuracy
mixtral    29GB           Best for deep analysis
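
As a rough illustration, choosing a model by available RAM could look like this. This is a hypothetical helper, not part of ForensIQ itself; the thresholds simply mirror the table above:

```python
# Hypothetical helper: pick an Ollama model based on available RAM,
# mirroring the rough guidance in the table above.
MODEL_RAM_GB = {
    "tinyllama": 1,   # quick triage on low-end systems
    "mistral": 4,     # balanced speed/accuracy
    "mixtral": 29,    # best for deep analysis
}

def pick_model(available_ram_gb: float) -> str:
    """Return the most capable model that fits in the given RAM."""
    candidates = [m for m, ram in MODEL_RAM_GB.items() if ram <= available_ram_gb]
    if not candidates:
        raise ValueError("Not enough RAM for any supported model")
    # Prefer the largest model that still fits.
    return max(candidates, key=lambda m: MODEL_RAM_GB[m])
```

On a typical 16GB analyst workstation this picks mistral; mixtral only becomes an option on 32GB+ machines.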

2. Dynamic Chunking for Large Logs

Instead of dumping thousands of logs into an LLM (which would exceed token limits), ForensIQ:

  • Splits logs into optimized chunks

  • Maintains context between chunks

  • Builds a progressive summary

This ensures high-quality analysis without overwhelming the model.
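
In pseudocode, the chunk-and-carry-context loop looks roughly like this. It is a simplified sketch, not ForensIQ's actual internals: the character budget stands in for a token limit, and summarize is a placeholder for a call to Ollama:

```python
def chunk_logs(lines, max_chars=2000):
    """Split log lines into chunks that stay under a rough size budget."""
    chunks, current, size = [], [], 0
    for line in lines:
        if size + len(line) > max_chars and current:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

def progressive_summary(chunks, summarize):
    """Feed each chunk to the model along with the running summary."""
    summary = ""
    for chunk in chunks:
        prompt = (f"Previous summary:\n{summary}\n\n"
                  f"New logs:\n{chunk}\n\nUpdate the summary.")
        summary = summarize(prompt)  # e.g. a call to Ollama's generate API
    return summary
```

Because each prompt includes the previous summary, the model can link an event in chunk 50 back to something it saw in chunk 1.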

3. Structured Threat Intelligence Extraction

ForensIQ doesn’t just give you a text summary—it extracts structured data:

{
  "entities": { "ip": ["192.168.1.100"], "user": ["admin"] },
  "themes": { "brute_force": ["10 failed logins"] },
  "attacks": { "T1110 - Brute Force": ["Multiple RDP attempts"] }
}

This makes it easy to pivot into other tools like MISP, TheHive, or Splunk.
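
As a sketch of how that pivot might work, the structured output can be flattened into a simple IOC list ready for import into another platform. This is illustrative code, not ForensIQ's API; the field names follow the JSON example above:

```python
import json

def extract_iocs(report_json: str):
    """Flatten ForensIQ-style structured output into (type, value) IOC pairs."""
    report = json.loads(report_json)
    iocs = []
    for ioc_type, values in report.get("entities", {}).items():
        for value in values:
            iocs.append((ioc_type, value))
    return iocs

# Example:
# extract_iocs('{"entities": {"ip": ["192.168.1.100"], "user": ["admin"]}}')
# yields [("ip", "192.168.1.100"), ("user", "admin")]
```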


Real-World Use Cases

🔍 Case 1: Detecting a Slow-Burn Attack

A red team might spread malicious activity over weeks to avoid detection. ForensIQ can:

  • Link low-severity events into a broader attack chain

  • Highlight anomalies that manual review would miss

🛡️ Case 2: Accelerating Incident Response

Instead of grepping through logs during a breach, analysts can:

  1. Run ForensIQ on relevant time windows

  2. Get an AI-generated incident timeline

  3. Ask follow-up questions like:
    “Show all logins from this suspicious IP”
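
Under the hood, a follow-up like that maps naturally onto an Elasticsearch query. A hand-written equivalent might look like this; the field names assume an ECS-style mapping and are illustrative, not ForensIQ's actual query:

```python
def logins_from_ip(ip: str, size: int = 100) -> dict:
    """Build an Elasticsearch query body for login events from one source IP."""
    return {
        "size": size,
        "query": {
            "bool": {
                "filter": [
                    {"term": {"source.ip": ip}},
                    {"term": {"event.category": "authentication"}},
                ]
            }
        },
        # Oldest first, so the results read as a timeline.
        "sort": [{"@timestamp": {"order": "asc"}}],
    }
```

The returned dict can be passed straight to the Python client's es.search(index=..., body=...).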

📊 Case 3: Threat Hunting at Scale

For blue teams, ForensIQ can:

  • Process millions of logs from across the enterprise

  • Flag rare patterns (e.g., unusual service account usage)
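
For the rare-pattern case, a plain terms aggregation sorted by ascending count is one way to surface accounts that almost never appear. Again this is an illustrative sketch assuming ECS-style fields, not how ForensIQ queries internally:

```python
def rare_account_usage(max_accounts: int = 5) -> dict:
    """Aggregation body that surfaces the least-frequently-seen accounts."""
    return {
        "size": 0,  # we only want the aggregation, not the hits
        "aggs": {
            "accounts": {
                "terms": {
                    "field": "user.name",
                    "order": {"_count": "asc"},  # rarest first
                    "size": max_accounts,
                }
            }
        },
    }
```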


Getting Started with ForensIQ

  • Make sure you’re using Windows.
  • Download Ollama: https://ollama.com/download/OllamaSetup.exe

  • Install Ollama and make sure it’s running:
    • (Optional) To store the models somewhere other than the software install directory:
    • Open Windows Settings.
    • Go to System.
    • Select About.
    • Select Advanced System Settings.
    • Go to the Advanced tab.
    • Select Environment Variables….
    • Click New… and create a system variable named OLLAMA_MODELS pointing to where you want to store the models.
  • Restart Ollama and confirm it is running via the tray icon.
  • Next, open a command prompt and type “ollama pull mistral”. You will have to pull each LLM that you intend to use; Mistral is a safe, stable starting point.
  • If you want to force GPU execution (recommended), run these commands:
    • setx OLLAMA_GPU_ENABLED 1 (persists across future sessions)
    • set OLLAMA_GPU_ENABLED=1 (applies to the current session)
  • By default, Ollama listens only on localhost. If you want to use a separate compute server, run the following:
    set OLLAMA_HOST=0.0.0.0:11434
    ollama serve
  • Next, double-click the ForensIQ binary stored in the following repo: https://github.com/dfirvault/forensIQ
  • Next, select either the Elasticsearch service or a local CSV.
  • From here, just follow the prompts; it’s that simple.
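
Once everything is running, a quick sanity check is to hit Ollama's model-listing endpoint. This is a generic check, not a ForensIQ feature; /api/tags is Ollama's standard endpoint for locally pulled models:

```python
import json
import urllib.request

def ollama_tags_url(host: str = "localhost", port: int = 11434) -> str:
    """Endpoint that lists the models Ollama has pulled locally."""
    return f"http://{host}:{port}/api/tags"

def list_models(host: str = "localhost", port: int = 11434) -> list:
    """Return the names of locally available models, e.g. ['mistral:latest']."""
    with urllib.request.urlopen(ollama_tags_url(host, port), timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]
```

If “mistral” (or whichever model you pulled) does not appear in the list, re-run the pull step before launching ForensIQ.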

Why This Matters

Security teams are drowning in alerts but starving for insights. Tools like ForensIQ don’t replace analysts—they augment human expertise by:

  • Reducing manual log review time

  • Surfacing hidden patterns

  • Making investigations more interactive

The future of cybersecurity isn’t just more data—it’s smarter analysis.


Try It Yourself

🔗 https://github.com/dfirvault/forensIQ

Questions? Email me at dfirvault@gmail.com.

Tags: