Accentrust Studio

A generative AI workbench for enterprises, powered by Figena. Secure search, seamless automation, and tailored models in one place.

At a glance

Private search and Q&A

Private search and Q&A on your data, with citations and source controls and answers grounded by Figena.

Document automation

Document automation for contracts, reports, and emails with reusable templates.

Model orchestration

Model routing and orchestration with Figena at the core, plus open and commercial models.

Quality monitoring

Built-in evals, feedback loops, and quality monitoring.

Native integration

Native integration with Fabric for grounding, Guard for policy enforcement, and Signals for outcomes.

Why Studio

Enterprises need more than chat. They need accurate answers, repeatable workflows, and guardrails that scale. Studio packages Figena into role-ready assistants and automated processes. Teams move from searching for information to shipping outcomes with confidence.

Studio Workbench overview: inputs from secure search, automation flows, and custom models pass through the AI interface for processing and generation, producing outcomes such as summaries and drafts, automated actions, and recommendations. Search, automate, create.

What Studio does

1. Search and Q&A

Retrieval-augmented generation with granular access control and citations, powered by Figena.

  • Retrieval-augmented generation over Fabric's semantic layer.
  • Granular access control and result filtering aligned with Guard policies.
  • Citations with page anchors, confidence scores, and source previews.
  • Figena uses only governed context and returns policy-aware answers.
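
As a concrete illustration, here is a minimal sketch of what a grounded query could look like through the Studio SDK. The package name, client, and method signatures are assumptions for illustration, not a documented API.

import { StudioClient } from "@accentrust/studio-sdk"; // hypothetical package

const studio = new StudioClient({ apiKey: process.env.STUDIO_API_KEY });

async function askPolicyQuestion(question: string) {
  // Grounded query: retrieval runs over the selected Fabric collections and
  // Guard policies filter results before Figena composes the answer.
  const answer = await studio.search.ask({
    collections: ["hr-policies", "security-standards"], // example collection names
    question,
    citations: true, // request page anchors, confidence scores, and previews
  });

  console.log(answer.text);
  for (const cite of answer.citations ?? []) {
    console.log(`- ${cite.source} (p. ${cite.page}, confidence ${cite.confidence})`);
  }
}

askPolicyQuestion("What is our data retention period for customer records?");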

2. Document automation

Automated document processing with configurable policies and templates.

  • Contract review and clause extraction with configurable policies.
  • Report generation for finance, operations, and compliance.
  • Summarization, translation, redaction, and format conversion at scale.
  • Figena handles language and reasoning while Guard enforces approvals.
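
A minimal sketch of invoking a reusable document pipeline, assuming a hypothetical templates API on the Studio SDK; template and field names are illustrative.

import { StudioClient } from "@accentrust/studio-sdk"; // hypothetical package

const studio = new StudioClient({ apiKey: process.env.STUDIO_API_KEY });

async function reviewContract(fileId: string) {
  // Run a reusable contract-review template: clause extraction plus policy checks.
  // Guard approval policies gate suggested redlines before anything is released.
  const result = await studio.documents.run({
    template: "contract-review", // example template id
    input: { fileId },
    outputs: ["clauses", "risk_flags", "summary"],
  });

  console.log(result.summary);
  console.table(result.clauses); // extracted terms with source anchors
}

reviewContract("doc_12345"); // example document id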

3. Workflows and agents

Visual workflow builder with multi-step automation and tool integration.

  • Visual builder for multi-step flows.
  • Tools and function calling for CRM, ticketing, storage, and internal APIs.
  • Event triggers from webhooks, schedules, or Signals alerts.
  • Figena drives reasoning steps; Studio orchestrates tools and guardrails.
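
The builder is visual, but a flow could plausibly also be expressed in code. The sketch below assumes a hypothetical workflows API; trigger, step, and tool names are illustrative.

import { StudioClient } from "@accentrust/studio-sdk"; // hypothetical package

const studio = new StudioClient({ apiKey: process.env.STUDIO_API_KEY });

// Support-triage flow: a webhook trigger, Figena-driven reasoning steps,
// and a ticketing tool call gated by a Guard approval.
await studio.workflows.create({
  name: "support-triage",
  trigger: { type: "webhook", path: "/hooks/new-ticket" },
  steps: [
    { id: "summarize", type: "generate", prompt: "Summarize the ticket and classify severity." },
    { id: "lookup", type: "retrieve", collections: ["support-kb"] },
    { id: "draft", type: "generate", prompt: "Draft a reply using the retrieved articles." },
    { id: "file", type: "tool", tool: "ticketing.update", requiresApproval: true },
  ],
});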

4. Model management

Intelligent model routing with prompt libraries and versioning.

  • Choose the right model per task with routing and fallbacks.
  • Support for open models and hosted APIs through adapters.
  • Prompt libraries, variables, and persona presets with versioning.
  • Figena is the default engine; others are used when policies or cost require.
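
A sketch of how routing and fallbacks might be configured; the configuration keys and model identifiers are assumptions.

import { StudioClient } from "@accentrust/studio-sdk"; // hypothetical package

const studio = new StudioClient({ apiKey: process.env.STUDIO_API_KEY });

// Figena is the default engine; rules route specific tasks or cost bands
// to other models, and fallbacks cover timeouts or policy blocks.
await studio.models.configureRouting({
  defaultModel: "figena-enterprise",
  rules: [
    { task: "translation", model: "open-translation-adapter" }, // open model via adapter
    { maxCostPerRequestUsd: 0.01, model: "figena-lite" },       // cost-based routing
  ],
  fallbacks: ["figena-lite", "hosted-api-adapter"],
  promptLibrary: { name: "support-personas", version: "v3" },
});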

5. Evals and monitoring

Comprehensive evaluation framework with quality metrics and dashboards.

  • Hallucination checks against ground truth.
  • Answer quality scoring with human-in-the-loop review.
  • Cost, latency, and success metrics with dashboards and export.
  • Quality loops tune Figena prompts, chunking, and tool use.
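
A sketch of scheduling an eval run and reading the resulting metrics, assuming a hypothetical evals API; check names and fields are illustrative.

import { StudioClient } from "@accentrust/studio-sdk"; // hypothetical package

const studio = new StudioClient({ apiKey: process.env.STUDIO_API_KEY });

// Nightly groundedness check against a curated ground-truth set,
// with low-scoring answers escalated to human review.
await studio.evals.schedule({
  assistant: "policy-qa",
  dataset: "ground-truth-policies-v2",
  checks: ["groundedness", "answer_quality"],
  cron: "0 2 * * *",
  escalate: { below: 0.8, to: "human-review" },
});

const metrics = await studio.evals.metrics({ assistant: "policy-qa", window: "7d" });
console.log(metrics.cost, metrics.latencyP95, metrics.successRate);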

AI inside Studio

Figena by default

Studio runs on Figena for grounded generation; routing chooses alternates when needed.

Grounded by design

Studio queries Fabric's structured layer first, then composes answers with sources.

Policy aware

Guard policies apply at retrieval and generation. Sensitive content is masked or blocked before output.

Learning loop

Feedback trains selectors, improves chunking, and tunes prompts over time, with Figena in the loop. Chain-of-thought stays private; users see explanations with the steps and references behind each answer.
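
A sketch of how end-user feedback could enter that loop, assuming a hypothetical feedback endpoint; field names are illustrative.

import { StudioClient } from "@accentrust/studio-sdk"; // hypothetical package

const studio = new StudioClient({ apiKey: process.env.STUDIO_API_KEY });

// A thumbs-down with a reason lands in the feedback store that tunes
// retrieval settings, chunking, and prompts over time.
await studio.feedback.submit({
  answerId: "ans_789", // example answer id
  rating: "down",
  reason: "cited the superseded 2022 retention policy",
});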

Architecture overview

Five-layer architecture ensuring knowledge grounding, orchestration, execution, safety, and observability, with Figena at the core.

Inputs
  • Fabric Collections: semantic layer and governed data
  • Documents & Files: PDF, DOCX, HTML, MD, TXT, XLSX
  • Warehouses & Lakes: Snowflake, BigQuery, S3
  • User Prompt: questions, roles, variables

Knowledge
  • Parsers & Loaders: document parsing and table extraction
  • Chunker & Segmenter (AI): semantic segmentation
  • Embeddings Generator (AI): vector embeddings
  • Metadata Enricher (AI): tags, ACLs, categorization
  • Vector Index: vector search and metadata
  • Semantic Bridge: links to Fabric metrics

Orchestration
  • Prompt Manager: templates, personas, context
  • Context Builder (AI): retrieval and reranking
  • Model Router: model selection and fallback
  • Tool Router: tool selection and auth
  • Workflow Engine: multi-step flows
  • Session State: variables and cache

Execution
  • Retrieval Engine: policy-filtered access
  • Reasoning & Planner (AI): multi-step planning
  • Function Calling: CRM, ticketing, APIs
  • Document Pipelines: summarization, translation
  • Citation Builder: references and anchors
  • Output Renderer: structured answers

Delivery
  • Assistants Runtime: multi-tenant runtime
  • Publishing: web, Slack, Teams, API
  • Webhooks & Triggers: scheduled tasks
  • Adapter Registry: model and tool adapters

Channels
  • Web App: direct interface
  • Slack/Teams: chat integration
  • Embed/API: custom integration
  • Studio SDK: TypeScript/Python
  • Signals: task delivery

Safety & Governance (Guard enforcement)
  • Policy Gate
  • PII Redaction
  • Approvals
  • Audit Logs

Observability & Evals (quality and learning)
  • Traces & Metrics
  • Evals Engine
  • Feedback Store
  • Tuning

AI workbench architecture: Knowledge, Orchestration, Execution, and Delivery. Guard enforces policy at retrieval and generation. Observability and Evals close the loop for quality, cost, and learning.
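
To make the layers concrete, here is a sketch of an assistant definition that touches each of them; the configuration shape and names are assumptions, not a documented schema.

import { StudioClient } from "@accentrust/studio-sdk"; // hypothetical package

const studio = new StudioClient({ apiKey: process.env.STUDIO_API_KEY });

// One assistant, wired across the layers above: knowledge, orchestration,
// Guard enforcement, observability, and delivery channels.
await studio.assistants.create({
  name: "knowledge-assistant",
  knowledge: { collections: ["product-docs", "policies"], chunking: "semantic" },
  orchestration: { persona: "helpful-analyst", router: "default", tools: ["ticketing"] },
  guard: { policies: ["pii-redaction", "approval-for-external-send"], auditLog: true },
  observability: { traces: true, evals: ["groundedness"] },
  delivery: { channels: ["web", "slack"] },
});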

Common use cases

Knowledge assistant

Private Q&A for policy, product, and technical docs with citations.

Contract intelligence

Clause extraction, risk flags, playbook suggestions, and comparison.

Financial reporting

Narrative MD&A drafts grounded in governed datasets.

Customer support

Draft replies, summarize tickets, suggest resolutions, create follow-ups.

Engineering enablement

Summarize design docs, generate runbooks, create change logs.

Example templates

Policy Q&A

Search and cite approved documents with confidence scores.

Contract review

Extract terms, compare against playbooks, draft redlines.

Quarterly narrative

Generate MD&A drafts grounded in governed metrics.

Support summarizer

Condense conversations, propose next steps, create tickets.

Ops brief

Daily highlights from Fabric and Signals with links to sources.

How it works, end to end

1. Select Fabric collections and define access scopes.

2. Studio builds indexes and proposes retrieval settings.

3. Choose a starter template or assemble a workflow.

4. Attach Guard policies for redaction, approvals, and logging of Figena usage.

5. Publish an assistant to web, Slack, or an embed.

6. Monitor usage and evaluation quality, then iterate on prompts and tools with Signals feedback.
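
The six steps could look roughly like this in code, assuming the hypothetical SDK calls sketched earlier; every identifier is illustrative.

import { StudioClient } from "@accentrust/studio-sdk"; // hypothetical package

const studio = new StudioClient({ apiKey: process.env.STUDIO_API_KEY });

// 1-2. Select Fabric collections; Studio indexes them and proposes retrieval settings.
const kb = await studio.knowledge.connect({ collections: ["finance-reports"], scope: "finance-team" });

// 3. Start from a template.
const assistant = await studio.assistants.fromTemplate("quarterly-narrative", { knowledge: kb.id });

// 4. Attach Guard policies for redaction, approvals, and logging of Figena usage.
await studio.guard.attach(assistant.id, ["pii-redaction", "cfo-approval", "usage-logging"]);

// 5. Publish to web and Slack.
await studio.publishing.publish(assistant.id, { channels: ["web", "slack"] });

// 6. Monitor quality and iterate with Signals feedback.
const report = await studio.evals.metrics({ assistant: assistant.id, window: "30d" });
console.log(report);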

Ready to transform knowledge into action?

Turn fragmented information into reliable assistants and automated workflows with the Figena-powered enterprise AI workbench.