Accentrust

Figena

Accentrust's enterprise LLM core. Built for governed data, approvals, and sensitive workflows: grounded by Fabric, protected by Guard, operationalized in Studio, and measured by Signals.

At a glance

Enterprise-native LLM

Trained and tuned for approvals, governance, and regulated environments.

Grounded by Fabric

Consumes semantic, governed context from Fabric for accurate answers.

Wrapped by Guard

Policies on prompts, retrieval, and outputs with full auditability.

Operational in Studio

Packaged into assistants, workflows, and agents for every team.

Measured by Signals

Quality, risk, and business impact tracked across real usage.

Why Figena

Enterprises need an LLM that is built for approvals, sensitive data, and regulated workflows—not just generic chat. Figena is the intelligence layer at the heart of Accentrust, designed to stay grounded on governed context, respect policy, and deliver reliable outcomes.

Figena aligns model quality with enterprise control: Fabric supplies trusted context; Guard enforces policy; Studio operationalizes assistants; Signals measures impact. Together, these make Figena the dependable "brain" across your AI-native stack.

[Architecture diagram] Fabric (grounded context) and Guard (policy envelope) supply context and policies to the Figena LLM Core (reasoning, routing, safety), which drives assistants, workflows, and actions through Studio (assistants & tools) and Signals (actions & feedback), with observability & evals tracking cost, latency, success, tool errors, and alerts.

What Figena does

1. Grounded understanding

Uses Fabric's semantic layer to reason over governed data with citations.

  • Retrieval over governed tables, documents, and vectors.
  • Semantic grounding to metrics, entities, and contracts.
  • Citations with confidence and lineage for every answer.
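
To make this concrete, here is a minimal sketch of what a grounded, cited query could look like from application code. It is illustrative only: the figena package, Client class, ask method, ground_on parameter, and fabric:// source identifiers are assumptions, not a published SDK.

    # Hypothetical sketch only: package, class, and field names below are assumed.
    from figena import Client

    client = Client(api_key="FIGENA_API_KEY")    # scoped credential, e.g. issued via Guard

    answer = client.ask(
        question="What was enterprise churn in Q3, and how is churn defined?",
        ground_on=[                              # governed Fabric sources to retrieve from
            "fabric://metrics/churn",
            "fabric://docs/metric-definitions",
        ],
    )

    print(answer.text)                           # the generated, grounded response
    for citation in answer.citations:            # every claim links back to governed data
        print(citation.source, citation.confidence, citation.lineage)
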
2. Policy-aware generation

Respects Guard policies before, during, and after each response.

  • Prompt and context filters aligned to access scopes.
  • Redaction and masking on inputs and outputs.
  • Approval workflows for high-risk actions and sensitive data.
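
As a purely illustrative example, a policy envelope of this kind could be expressed as a small configuration object; every key and value below is an assumption chosen to mirror the capabilities listed above, not Guard's actual schema.

    # Illustrative only: key names and values are assumptions, not Guard's schema.
    policy_envelope = {
        "access_scopes": ["finance.read", "contracts.read"],        # limits what retrieval may touch
        "input_filters": ["prompt_injection", "sensitive_terms"],   # checks before the model runs
        "output_filters": ["pii_redaction", "toxicity", "faithfulness"],  # checks on each response
        "masking": {"email": "redact", "account_number": "tokenize"},     # applied to inputs and outputs
        "approvals": {
            "external_send": "required",         # high-risk actions wait for a human decision
            "sensitive_scope_access": "required",
        },
        "audit": {"log_prompts": True, "log_retrieval": True, "log_outputs": True},
    }
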
3. Tool-using reasoning

Calls functions and orchestrates tools through Studio with safety checks.

  • Function calling with typed schemas and rate controls.
  • Fallback and routing to alternate models when required.
  • Observability on tool calls, costs, and latency.
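
For illustration, a typed tool definition and a routing policy might look roughly like the sketch below; the schema shape follows common JSON-Schema function-calling conventions, while the model names, rate-limit field, and condition strings are assumptions rather than a documented format.

    # Illustrative sketch: field names, model identifiers, and conditions are assumed.
    create_ticket_tool = {
        "name": "create_ticket",
        "description": "Open a ticket in the service desk for a verified issue.",
        "parameters": {                          # typed schema the model's arguments must satisfy
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "priority": {"type": "string", "enum": ["low", "medium", "high"]},
                "customer_id": {"type": "string"},
            },
            "required": ["title", "priority"],
        },
        "rate_limit_per_minute": 10,             # assumed per-tool rate control
    }

    routing_policy = {
        "default_model": "figena-core",                               # assumed identifier
        "fallbacks": [
            {"model": "figena-fast", "when": "latency_ms > 2000"},    # illustrative conditions
            {"model": "partner-model", "when": "policy.allows_external"},
        ],
        "kill_switch_enabled": True,             # a route can be disabled instantly if quality degrades
    }
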
4. Continuous improvement

Learns from Signals and evals to get safer and more precise over time.

  • Automated evals on faithfulness, safety, and usefulness.
  • Human feedback loops to refine prompts and retrieval.
  • Signals-driven adjustments to thresholds and playbooks.
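
A simplified sketch of what an offline eval pass over logged responses could look like is shown below; the metric names, threshold values, and review logic are illustrative assumptions rather than Figena's actual evaluators.

    # Illustrative only: scores, thresholds, and routing logic are assumed, not real evaluators.
    logged_responses = [
        {"id": "r-101", "faithfulness": 0.94, "safety": 1.00, "usefulness": 0.81},
        {"id": "r-102", "faithfulness": 0.62, "safety": 1.00, "usefulness": 0.70},
    ]

    thresholds = {"faithfulness": 0.85, "safety": 0.99, "usefulness": 0.60}   # assumed floors

    for response in logged_responses:
        failed = [metric for metric, floor in thresholds.items() if response[metric] < floor]
        if failed:
            # Flag for human review; reviewer decisions feed prompt and retrieval tuning.
            print(f"{response['id']}: review ({', '.join(failed)})")
        else:
            print(f"{response['id']}: pass")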

AI inside Figena

1. Governed by design

Native integration with Guard keeps every request policy-aligned and auditable.

2. Grounded reasoning

Prefers Fabric context and provides citations to keep answers faithful.

3. Adaptive routing

Selects the right path across tools and fallback models to meet quality and cost goals.

4. Learning loop

Feedback from Signals and evals tunes prompts, retrieval, and tool plans continuously.

Architecture overview

Figena sits at the center of the Accentrust suite—grounded by Fabric (connectors, semantic layer, contracts, vectors), wrapped by Guard (policy, approvals, masking, audit), orchestrated by Studio (assistants, workflows, tools), and observed by Signals (detect, forecast, recommend, act).

Grounding & Data Plane
  • Connectors & CDC: DBs · Object storage · APIs/files/logs · Web · Batch/stream schedulers · CDC
  • Semantic Layer & Contracts: Metrics · Dimensions · Time grains · KPI contracts (SLAs, owners, calendars)
  • Validated Tables / Events / Views: Governed datasets with lineage, versions, rollback
  • Vector Indexes & Caches: HNSW/IVF · Hybrid search · TTL · Hot/warm caches

Policy & Security (Guard)
  • Policy Engine: Prompt/context/output filters · RBAC/ABAC · Data residency tags
  • Approvals & Evidence: High-risk gates (sensitive scopes, external sends) · Decisions logged · Evidence bundles
  • Masking & Tokenization: PII/PHI redaction · Format-preserving tokenization · KMS/rotation · BYOK
  • Audit & Reporting: Full trace of prompts, retrieval, outputs · CSV/PDF/API exports

Figena LLM Core
  • Reasoning & Generation: Chain-of-thought kept private · Cited answers · Structured outputs
  • Prompt Library & Personas: Templates · Variables · Safety presets · Guardrails per role
  • Model Router & Fallbacks: Figena default · Alt models by cost/latency/policy/quality · Kill-switch
  • Safety Filters: Toxicity/PII/faithfulness screens · Response shaping · Hallucination pre-checks

Orchestration, Tools, Observability
  • Function Calling & Tools: Typed schemas · Rate limits · Tool plans · Storage/CRM/Ticketing/Payments/Custom APIs
  • Studio Assistants & Workflows: Multi-step flows · Event triggers (webhook/schedule/Signals) · Embeds/Slack/Web
  • Observability & Evals: Online: cost/latency/success/retries/tool errors/cache hits · Offline: replay & evals (faithfulness/safety/usefulness) · Alerts/webhooks
  • API & SDK: Headless + UI · Scoped tokens · Per-request audit context (see the sketch after this overview)

Signals Feedback & Learning Loop
  • Define/Detect/Forecast: KPI contracts (owners, thresholds) · Streaming/batch detection · Seasonality/change-point · Scenario sims
  • Recommend & Act: Playbooks · Owners/SLAs · Approvals on auto-actions · Route to Studio automation
  • Learn & Tune: Evals + human review · Thresholds · Prompt/routing/tool tuning
  • Outcome Capture: Tasks to Slack/CRM/ticketing/email · Impact measurement · Close the loop

Fabric → Figena: governed context & contracts
Guard → Figena: policy envelope & approvals
Studio ↔ Figena: assistants, workflows, tools
Signals → Figena: detection, feedback, tuning
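
To illustrate the API & SDK layer above, the sketch below shows what a headless call with a scoped token and per-request audit context could look like; the figena package, Client class, and parameter names are hypothetical stand-ins, not a published SDK.

    # Hypothetical sketch only: package, class, and parameter names are assumed.
    from figena import Client

    client = Client(token="SCOPED_TOKEN")        # token limited to specific scopes and datasets

    response = client.generate(
        prompt="Summarize open risks on the Northwind renewal.",
        audit_context={                          # recorded alongside the full request trace
            "requesting_user": "jdoe",
            "ticket": "CS-1042",
            "purpose": "renewal-review",
        },
    )
    print(response.text)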

Common use cases

Knowledge & search

Governed Q&A with citations on policies, products, and technical content.

Document co-pilot

Summaries, clause extraction, and drafting with approvals and redaction.

Operations & automation

Workflow orchestration with tool calling, guardrails, and audit trails.

Decision intelligence

Narratives, scenario comparisons, and action suggestions tied to Signals.

Model hub with controls

Figena by default, with governed routing to alternates for cost or policy needs.

Ready to put Figena at the core?

Make Figena your governed LLM foundation—grounded by Fabric, protected by Guard, operationalized in Studio, and measured by Signals.