LevelBlue + SentinelOne: Global Partnership to Deliver AI-Powered Managed Security Operations and Incident Response. Learn More
Most organizations start by using Microsoft Copilot the way it looks in demos: type a question, get an answer. That works for exploration. For repeatable operational work, it gets expensive quickly.
Microsoft Security Copilot is billed in Security Compute Units (SCUs). Every prompt consumes SCUs, whether you ask a question, request a summary, or investigate an alert.
The default assumption is that Copilot’s AI does the heavy lifting. In practice, Copilot doesn’t have to be the brain; it just needs to be the trigger. The intelligence can live in pre-built KQL, and Copilot simply executes it.
This post covers the design principles behind OSCAR (Operations Security & Compliance Automated Reporter), a Security Copilot agent built to run 100+ compliance checks daily across NIST CSF 2.0, NIST 800-53, and CIS Controls v8, while consuming only ~7.5% of the free 400 SCU monthly allocation.
Before building anything, understand what you’re spending.
The trade-off: natural language prompts are flexible but costly; KQL skills are precise and cheap. For repeatable, scheduled work — compliance reporting, daily threat checks, audit trail generation — pre-built KQL skills are the better choice.
The core idea is simple: move intelligence into KQL, use Copilot only as the orchestration layer.
In a traditional Copilot workflow, you ask a question and the AI figures out what data to look at, what query to run, and how to interpret it. Each step burns SCUs. In a KQL-first agent:
All detection logic lives in pre-built KQL skills — the query already knows exactly what to look for, which tables to query, which fields matter.
Security Copilot executes the skill — one SCU, the KQL runs against your Sentinel/Log Analytics workspace, results come back as structured JSON.
Logic Apps handle persistence — results flow automatically to a custom Sentinel table (ComplianceReports_CL) without further AI involvement.
The AI isn’t analyzing your data. It’s calling a function that does — and that function runs in Log Analytics, not in Copilot’s compute. This distinction is what makes the economics work.
No proprietary storage. No separate database. Everything queryable from Sentinel.
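To make the persistence step concrete, here is a minimal sketch of how an automation step could write skill results into a custom table, assuming the legacy Log Analytics HTTP Data Collector API (which auto-creates the table and appends the _CL suffix from the Log-Type header). The workspace ID and shared key are placeholders; in the actual OSCAR setup the Logic App's built-in Log Analytics connector performs this same call.

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Placeholder credentials: real values come from the Log Analytics workspace.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
SHARED_KEY = base64.b64encode(b"placeholder-key").decode()

def build_signature(date_rfc1123: str, body: bytes) -> str:
    """Build the SharedKey Authorization header for the Data Collector API."""
    string_to_hash = (
        f"POST\n{len(body)}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    digest = hmac.new(
        base64.b64decode(SHARED_KEY),
        string_to_hash.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(digest).decode()}"

def prepare_post(findings: list) -> dict:
    """Assemble the HTTP request that lands rows in ComplianceReports_CL."""
    body = json.dumps(findings).encode("utf-8")
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    return {
        "url": (f"https://{WORKSPACE_ID}.ods.opinsights.azure.com"
                "/api/logs?api-version=2016-04-01"),
        "headers": {
            "Content-Type": "application/json",
            "Log-Type": "ComplianceReports",  # Azure appends the _CL suffix
            "x-ms-date": date,
            "Authorization": build_signature(date, body),
        },
        "body": body,
    }
```

The point of the sketch is the shape of the hand-off: structured JSON in, a signed POST out, and no AI compute anywhere in the write path.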
The agent manifest YAML format for KQL skills is straightforward:
- Format: KQL
  Skills:
    - Name: FailedAuthenticationReport
      DisplayName: Failed Authentication Attempts Report (AC-7, CIS-5.1)
      Description: Detect failed authentication attempts indicating brute force attacks
      Settings:
        Target: Sentinel
        Template: >-
          let timeRange = 24h;
          let findings = SigninLogs
          | where TimeGenerated > ago(timeRange)
          | where ResultType != 0
          | summarize
              FailedAttempts = count(),
              FirstAttempt = min(TimeGenerated),
              LastAttempt = max(TimeGenerated),
              Locations = make_set(Location),
              IPAddresses = make_set(IPAddress)
            by UserPrincipalName
          | where FailedAttempts >= 5
          | extend
              ControlID = "AC-7",
              Framework = "NIST_800_53",
              Severity = "High",
              RemediationRequired = "true"
          | project TimeGenerated = now(), UserPrincipalName,
              FailedAttempts, Locations, IPAddresses,
              ControlID, Framework, Severity, RemediationRequired;
          findings
Three things to notice: the detection logic and threshold live entirely in the query, not in a prompt; the extend clause stamps every finding with compliance metadata (control ID, framework, severity), so the output is self-describing; and Target: Sentinel means the query runs in your Log Analytics workspace, not in Copilot’s compute.
Compliance reporting has a requirement that pure detection doesn’t: you need evidence that you checked, even when everything is clean. An empty result set doesn’t prove the query ran; it just looks like missing data.
The solution is a union that guarantees at least one row:
let findings = SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != 0
| summarize FailedAttempts = count() by UserPrincipalName
| where FailedAttempts >= 5
| extend FindingType = "Suspicious Activity";
let hasResults = toscalar(findings | count) > 0;
union findings,
    (print placeholder = 1
    | where not(hasResults)
    | extend FindingType = "No Findings", UserPrincipalName = "N/A", TimeGenerated = now()
    | project-away placeholder)
When no suspicious logins exist, the query returns a single No Findings row with the current timestamp. Your compliance workbook always shows the control was checked. Your auditors always have evidence. The Logic App always has something to write to ComplianceReports_CL.
This pattern is essential for any automated compliance use case. Without it, clean environments look identical to broken automation.
OSCAR’s daily run executes 13 KQL skills via one agent orchestrator call. Running all 13 every day exceeds the free 400 SCUs per month — but not all skills need to run every day. OSCAR uses report groups to control scope:
Tuned to a daily critical run plus a weekly full sweep, the schedule lands at ~500 SCUs/month, achievable within a 1-SCU provisioned capacity. The 7.5% figure applies to the critical-only daily run; full coverage requires modest provisioning, but is still far cheaper than ad-hoc prompting at scale.
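The scheduling arithmetic can be sanity-checked in a few lines. The numbers below (a per-skill cost of ~1.1 SCUs, a four-skill critical group) are illustrative assumptions, not OSCAR’s actual metering, but they show how report groups change the monthly total.

```python
# Back-of-the-envelope SCU budgeting. The per-skill cost is an assumption for
# illustration; actual consumption varies with skill complexity and metering.
FREE_MONTHLY_SCUS = 400

def monthly_scus(daily_skills: int, weekly_skills: int,
                 scu_per_skill: float = 1.1, days: int = 30) -> float:
    """Cost of a daily run of `daily_skills` plus a weekly sweep of `weekly_skills`."""
    return (daily_skills * days + weekly_skills * (days // 7)) * scu_per_skill

# All 13 skills every day blows through the free grant...
naive = monthly_scus(daily_skills=13, weekly_skills=0)
# ...while a small daily critical group plus a weekly full sweep stays inside it.
tuned = monthly_scus(daily_skills=4, weekly_skills=13)
```

Whatever the real per-skill cost turns out to be in your tenant, the same function lets you size the report groups before committing to a schedule.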
OSCAR’s 13 skills span the domains that matter for compliance reporting.
Each skill returns results tagged with control IDs from NIST CSF 2.0, NIST 800-53 Rev 5, and CIS Controls v8 simultaneously — the same finding maps to all three frameworks in a single query pass.
The OSCAR architecture applies beyond compliance reporting. Any repeatable security operations workflow fits this model:
The pattern is always the same: express the detection logic in KQL, register it as a skill, let the Logic App be the scheduler, let Sentinel be the store.
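As a sketch of how the pattern extends, here is a hypothetical second skill. The name, table fields, thresholds, and control mapping are illustrative rather than part of OSCAR, and the guaranteed-row union from earlier is omitted for brevity, but the shape is identical: detection logic in the Template, compliance metadata via extend.

```yaml
- Format: KQL
  Skills:
    - Name: LegacyAuthReport   # hypothetical skill, for illustration
      DisplayName: Legacy Authentication Protocol Report
      Description: Flag sign-ins over legacy protocols that bypass modern auth controls
      Settings:
        Target: Sentinel
        Template: >-
          SigninLogs
          | where TimeGenerated > ago(24h)
          | where ClientAppUsed in ("IMAP4", "POP3", "Authenticated SMTP")
          | summarize SignInCount = count() by UserPrincipalName, ClientAppUsed
          | extend ControlID = "IA-2", Framework = "NIST_800_53", Severity = "Medium"
          | project TimeGenerated = now(), UserPrincipalName, ClientAppUsed,
            SignInCount, ControlID, Framework, Severity
```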
Key Takeaways:
Copilot doesn’t have to be the brain. Pre-built KQL skills carry the detection logic; Copilot just triggers them, and that is what keeps SCU consumption low.
Always return at least one row. A guaranteed "No Findings" record turns clean results into audit evidence instead of something indistinguishable from broken automation.
Scope the schedule. Report groups (daily critical checks, weekly full sweep) keep recurring coverage within a modest SCU budget.
Next Steps:
LevelBlue is a globally recognized cybersecurity leader that reduces cyber risk and fortifies organizations against disruptive and damaging cyber threats. Our comprehensive offensive and defensive cybersecurity portfolio detects what others cannot, responds with greater speed and effectiveness, optimizes client investment, and improves security resilience. Learn more about us.