Figena
Accentrust's enterprise LLM core. Built for governed data, approvals, and sensitive workflows—powering Fabric, Guard, Studio, and Signals.
At a glance
Enterprise-native LLM
Trained and tuned for approvals, governance, and regulated environments.
Grounded by Fabric
Consumes semantic, governed context from Fabric for accurate answers.
Wrapped by Guard
Policies on prompts, retrieval, and outputs with full auditability.
Operational in Studio
Packaged into assistants, workflows, and agents for every team.
Measured by Signals
Quality, risk, and business impact tracked across real usage.
Why Figena
Enterprises need an LLM that is built for approvals, sensitive data, and regulated workflows—not just generic chat. Figena is the intelligence layer at the heart of Accentrust, designed to stay grounded on governed context, respect policy, and deliver reliable outcomes.
Figena aligns model quality with enterprise control: Fabric supplies trusted context; Guard enforces policy; Studio operationalizes assistants; Signals measures impact. Together, Figena becomes the dependable “brain” across your AI-native stack.
What Figena does
Grounded understanding
Uses Fabric's semantic layer to reason over governed data with citations.
- Retrieval over governed tables, documents, and vectors.
- Semantic grounding to metrics, entities, and contracts.
- Citations with confidence and lineage for every answer (sketched below).
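For illustration, the sketch below models what a grounded answer with citations, confidence, and lineage might look like in plain Python. The `Citation` and `GroundedAnswer` classes and their fields are hypothetical, not Figena's actual SDK.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    """One piece of governed context backing a statement in the answer."""
    source_id: str        # e.g. a Fabric table, document, or vector chunk
    snippet: str          # the grounded passage the model relied on
    confidence: float     # retrieval/grounding confidence in [0, 1]
    lineage: list[str] = field(default_factory=list)  # upstream assets

@dataclass
class GroundedAnswer:
    """An answer treated as trustworthy only if it cites governed context."""
    text: str
    citations: list[Citation]

    def is_well_grounded(self, min_confidence: float = 0.7) -> bool:
        # Require at least one citation, all above the confidence threshold.
        return bool(self.citations) and all(
            c.confidence >= min_confidence for c in self.citations
        )

# Example: an answer grounded on a governed contract with lineage back to source systems.
answer = GroundedAnswer(
    text="Refunds over $10k require finance approval.",
    citations=[
        Citation("contracts.refund_policy", "Refunds above $10,000 ...", 0.92,
                 ["erp.refunds", "finance.approval_matrix"]),
    ],
)
print(answer.is_well_grounded())  # True
```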
Policy-aware generation
Respects Guard policies before, during, and after each response.
- Prompt and context filters aligned to access scopes.
- Redaction and masking on inputs and outputs.
- Approval workflows for high-risk actions and sensitive data (sketched below).
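As a rough sketch of masking and approval gating, the snippet below redacts simple patterns and flags high-risk actions for human sign-off. The rules, action names, and `PolicyDecision` shape are illustrative assumptions; real Guard policies are centrally defined and far richer.

```python
import re
from dataclasses import dataclass

# Illustrative masking rules; actual policies would be managed in Guard, not hard-coded.
MASKING_RULES = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

HIGH_RISK_ACTIONS = {"wire_transfer", "delete_records"}

def mask(text: str) -> str:
    """Redact sensitive values before they reach the model or the user."""
    for label, pattern in MASKING_RULES.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

@dataclass
class PolicyDecision:
    allowed: bool
    needs_approval: bool
    masked_text: str

def apply_policy(prompt: str, requested_action: str | None = None) -> PolicyDecision:
    """Mask inputs and flag high-risk actions for human approval."""
    needs_approval = requested_action in HIGH_RISK_ACTIONS
    return PolicyDecision(allowed=True, needs_approval=needs_approval,
                          masked_text=mask(prompt))

decision = apply_policy("Pay jane.doe@example.com", requested_action="wire_transfer")
print(decision.masked_text)      # "Pay [EMAIL REDACTED]"
print(decision.needs_approval)   # True
```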
Tool-using reasoning
Calls functions and orchestrates tools through Studio with safety checks.
- Function calling with typed schemas and rate controls (sketched below).
- Fallback and routing to alternate models when required.
- Observability on tool calls, costs, and latency.
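The sketch below shows one way to express function calling with typed schemas and rate controls in plain Python; the `Tool` class, its fields, and the example handler are assumptions for illustration, not Studio's real interface.

```python
import time
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Tool:
    """A callable the model may invoke, with a typed schema and a rate limit."""
    name: str
    handler: Callable[..., Any]
    schema: dict[str, type]            # expected argument names and types
    max_calls_per_minute: int = 10
    _call_times: list[float] = field(default_factory=list)

    def call(self, **kwargs: Any) -> Any:
        # Enforce the typed schema before executing anything.
        for arg, expected in self.schema.items():
            if not isinstance(kwargs.get(arg), expected):
                raise TypeError(f"{self.name}: {arg!r} must be {expected.__name__}")
        # Simple sliding-window rate control over the last 60 seconds.
        now = time.time()
        self._call_times = [t for t in self._call_times if now - t < 60]
        if len(self._call_times) >= self.max_calls_per_minute:
            raise RuntimeError(f"{self.name}: rate limit exceeded")
        self._call_times.append(now)
        return self.handler(**kwargs)

def lookup_order(order_id: str) -> dict[str, Any]:
    return {"order_id": order_id, "status": "shipped"}

tool = Tool("lookup_order", lookup_order, schema={"order_id": str},
            max_calls_per_minute=5)
print(tool.call(order_id="A-1042"))   # {'order_id': 'A-1042', 'status': 'shipped'}
```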
Continuous improvement
Learns from Signals and evals to get safer and more precise over time.
- Automated evals on faithfulness, safety, and usefulness (sketched below).
- Human feedback loops to refine prompts and retrieval.
- Signals-driven adjustments to thresholds and playbooks.
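Here is a minimal sketch of an automated eval pass over per-case scores for faithfulness, safety, and usefulness; the metric names, thresholds, and `EvalResult` shape are illustrative, not the scoring Signals actually produces.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EvalResult:
    case_id: str
    faithfulness: float   # answer supported by cited context, in [0, 1]
    safety: float         # policy-compliance score
    usefulness: float     # task-completion / helpfulness score

# Illustrative thresholds; in practice these would be tuned from usage data.
THRESHOLDS = {"faithfulness": 0.8, "safety": 0.95, "usefulness": 0.7}

def summarize(results: list[EvalResult]) -> dict[str, float]:
    """Aggregate scores across an eval run for trend tracking."""
    return {metric: mean(getattr(r, metric) for r in results)
            for metric in THRESHOLDS}

def regressions(results: list[EvalResult]) -> list[str]:
    """Return case ids that fall below any threshold and need review."""
    return [r.case_id for r in results
            if any(getattr(r, m) < t for m, t in THRESHOLDS.items())]

run = [
    EvalResult("kb-001", faithfulness=0.93, safety=0.99, usefulness=0.81),
    EvalResult("kb-002", faithfulness=0.64, safety=0.97, usefulness=0.75),
]
print(summarize(run))     # {'faithfulness': 0.785, 'safety': 0.98, 'usefulness': 0.78}
print(regressions(run))   # ['kb-002']
```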
AI inside Figena
Governed by design
Native integration with Guard keeps every request policy-aligned and auditable.
Grounded reasoning
Prefers Fabric context and provides citations to keep answers faithful.
Adaptive routing
Selects the right path across tools and fallback models to meet quality and cost goals.
Learning loop
Feedback from Signals and evals tunes prompts, retrieval, and tool plans continuously.
Architecture overview
Figena sits at the center of the Accentrust suite—grounded by Fabric (connectors, semantic layer, contracts, vectors), wrapped by Guard (policy, approvals, masking, audit), orchestrated by Studio (assistants, workflows, tools), and observed by Signals (detect, forecast, recommend, act).
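To make those relationships concrete, here is a purely illustrative configuration sketch of how an assistant built on Figena might declare its grounding, policy, orchestration, and observability. The keys and values are assumptions, not an actual Accentrust configuration format.

```python
# Illustrative only: one possible shape for wiring the suite around Figena.
assistant_config = {
    "model": "figena",                       # the core LLM
    "grounding": {                           # Fabric: governed context
        "sources": ["fabric.semantic_layer", "fabric.vectors", "fabric.contracts"],
        "require_citations": True,
    },
    "policy": {                              # Guard: policy, approvals, masking, audit
        "masking": ["pii", "secrets"],
        "approval_required_for": ["high_risk_actions"],
        "audit_log": True,
    },
    "orchestration": {                       # Studio: assistants, workflows, tools
        "tools": ["lookup_order", "draft_contract"],
        "fallback_models": ["alternate-model-a"],
    },
    "observability": {                       # Signals: detect, forecast, recommend, act
        "track": ["quality", "risk", "cost", "latency"],
    },
}
```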
Common use cases
Knowledge & search
Governed Q&A with citations on policies, products, and technical content.
Document co-pilot
Summaries, clause extraction, and drafting with approvals and redaction.
Operations & automation
Workflow orchestration with tool calling, guardrails, and audit trails.
Decision intelligence
Narratives, scenario comparisons, and action suggestions tied to Signals.
Model hub with controls
Figena by default, with governed routing to alternates for cost or policy needs.
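As a sketch of what governed routing could look like, the snippet below picks the cheapest model that satisfies a sensitivity policy and a cost budget, falling back to the governed default. The catalog entries, prices, and `route` function are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    allowed_for_sensitive_data: bool
    cost_per_1k_tokens: float

# Illustrative catalog: the governed default plus a cheaper alternate.
CATALOG = [
    ModelOption("figena", allowed_for_sensitive_data=True, cost_per_1k_tokens=0.012),
    ModelOption("alternate-model-a", allowed_for_sensitive_data=False, cost_per_1k_tokens=0.002),
]

def route(sensitive: bool, max_cost_per_1k: float) -> ModelOption:
    """Pick the cheapest model that satisfies policy and budget, defaulting to Figena."""
    candidates = [
        m for m in CATALOG
        if (m.allowed_for_sensitive_data or not sensitive)
        and m.cost_per_1k_tokens <= max_cost_per_1k
    ]
    if not candidates:
        # Fall back to the governed default rather than failing the request.
        return CATALOG[0]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route(sensitive=True, max_cost_per_1k=0.02).name)    # "figena"
print(route(sensitive=False, max_cost_per_1k=0.005).name)  # "alternate-model-a"
```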