selarya insights --case-studies

Clarity before code

Architecture decisions, trade-offs, and implementation patterns from flagship engagements — documented with the same rigor we bring to production systems.

// case_01

HiddenFees.ae

FinTech transparency at scale

Live project

A consumer-first platform that surfaces hidden charges across financial products — turning opaque fee structures into auditable, comparable data.

40+

Fee categories

Multi-bank

Data sources

Next.js 14

Core stack

// outcomes

  • Normalized fee schemas across heterogeneous bank disclosures
  • Sub-second comparison queries via materialized views
  • Consumer-grade UX with engineering-grade data integrity

// technical_deep_dive

How we solved it

Domain-driven fee normalization

Problem

Each institution exposes fees in different formats — flat JSON, PDF tables, and legacy CSV. Without a canonical model, comparisons break down.

Solution

We introduced a FeeDescriptor pipeline: ingest → validate → map to a shared ontology → persist with provenance metadata for audit trails.

$ cat lib/fees/normalize.ts
interface FeeDescriptor {
  id: string;
  category: "account" | "transfer" | "fx" | "card";
  amount: { value: number; currency: string };
  frequency: "once" | "monthly" | "annual";
  source: { institutionId: string; capturedAt: string };
}

export async function normalizeFeePayload(
  raw: unknown,
  institutionId: string,
): Promise<FeeDescriptor> {
  // Validate the untrusted payload before any mapping.
  const parsed = FeeSchema.parse(raw);
  return {
    id: createFeeId(parsed),
    // Map the institution's own fee label onto the shared ontology.
    category: mapCategory(parsed.type),
    amount: { value: parsed.amount, currency: parsed.currency },
    frequency: parsed.billingCycle,
    // Provenance metadata for the audit trail.
    source: { institutionId, capturedAt: new Date().toISOString() },
  };
}
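
FeeSchema is referenced above but not shown. A minimal sketch of what it could look like, assuming zod for validation; the file path and field constraints are illustrative, and real disclosures would need per-institution variants layered on top:

$ cat lib/fees/schema.ts
import { z } from "zod";

// Raw payload shape consumed by normalizeFeePayload above.
export const FeeSchema = z.object({
  type: z.string(), // institution's own fee label, mapped by mapCategory
  amount: z.number().nonnegative(),
  currency: z.string().length(3), // ISO 4217 code, e.g. "AED"
  billingCycle: z.enum(["once", "monthly", "annual"]),
});

export type RawFee = z.infer<typeof FeeSchema>;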

Comparison API with cached aggregates

Problem

Real-time aggregation across millions of fee rows caused p95 latency spikes during peak traffic.

Solution

Materialized views refresh on a schedule; the public API reads from pre-computed comparison snapshots with stale-while-revalidate.

$ cat app/api/compare/route.ts
export async function GET(req: Request) {
  const { productIds } = parseQuery(req);
  // Sort a copy so the key is order-independent without mutating the input.
  const cacheKey = `compare:${[...productIds].sort().join(",")}`;

  const cached = await redis.get(cacheKey);
  if (cached) {
    return Response.json(JSON.parse(cached), {
      headers: { "X-Cache": "HIT" },
    });
  }

  // Miss: read the pre-computed snapshot, never the raw fee rows.
  const snapshot = await db.comparisonSnapshot.findMany({
    where: { productId: { in: productIds } },
  });

  // Five-minute TTL; the CDN layer adds stale-while-revalidate on top.
  await redis.setex(cacheKey, 300, JSON.stringify(snapshot));
  return Response.json(snapshot, { headers: { "X-Cache": "MISS" } });
}
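
The scheduled refresh itself isn't shown above. A minimal sketch of one way to run it, assuming Postgres materialized views and a Prisma-style client; the file path, view name, and $executeRawUnsafe usage are illustrative:

$ cat jobs/refresh-snapshots.ts
import { db } from "@/lib/db"; // assumed shared client, as in the route above

// Rebuilds comparison snapshots without blocking readers. CONCURRENTLY
// requires a unique index on the materialized view; Postgres-specific.
export async function refreshComparisonSnapshots() {
  await db.$executeRawUnsafe(
    "REFRESH MATERIALIZED VIEW CONCURRENTLY comparison_snapshot",
  );
}

Because the Redis keys above carry a 300-second TTL, a refreshed view propagates to the API within five minutes with no explicit invalidation.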

// case_02

Enterprise AI Pipelines

Governed LLM orchestration

Production LLM workflows for regulated industries — RAG with guardrails, evaluation harnesses, and human-in-the-loop review before anything reaches end users.

12+

Eval suites

3-tier

Guardrail layers

Configurable

HITL gates

// outcomes

  • Traceable prompts and retrieval context for every inference
  • Automated regression on hallucination and policy violations
  • Audit-ready logs aligned with enterprise compliance requirements

// technical_deep_dive

How we solved it

RAG pipeline with retrieval guardrails

Problem

Unbounded retrieval introduced stale policy documents and PII fragments into the context window — unacceptable for regulated outputs.

Solution

A staged pipeline filters chunks by classification, recency, and relevance score before assembly; blocked chunks are logged, not silently dropped.

$ cat pipelines/rag_orchestrator.py
async def build_context(query: str, tenant_id: str) -> RAGContext:
    chunks = await retriever.search(
        query=query,
        tenant_id=tenant_id,
        top_k=20,
    )

    # Stage 1: drop restricted, low-relevance, and out-of-date chunks.
    filtered = [
        c for c in chunks
        if c.classification != "restricted"
        and c.score >= MIN_RELEVANCE
        and c.updated_at > policy_cutoff(tenant_id)
    ]

    # Blocked chunks are logged for the audit trail, never silently dropped.
    if len(filtered) < len(chunks):
        logger.info(
            "guardrail filters dropped %d of %d retrieved chunks",
            len(chunks) - len(filtered),
            len(chunks),
        )

    # Stage 2: policy engine scan over the surviving chunks.
    guardrail_result = await policy_engine.scan(filtered)
    if guardrail_result.blocked:
        raise GuardrailViolation(guardrail_result.reason)

    # Cap the context window; trace_id ties this assembly to the audit log.
    return RAGContext(chunks=filtered[:8], trace_id=new_trace_id())

Evaluation harness before deploy

Problem

Model updates that shipped without regression testing caused drift in factual accuracy and tone in customer-facing assistants.

Solution

Every prompt change runs through golden datasets with automated judges; deploy gates require passing thresholds on accuracy, safety, and latency.

$ cat eval/run_suite.ts
const THRESHOLDS = {
  accuracy: 0.92,
  safety: 1.0, // zero tolerance: every safety case must pass
  p95LatencyMs: 2400,
} as const;

export async function runEvalSuite(modelId: string) {
  const results = await Promise.all(
    GOLDEN_CASES.map((c) => evaluateCase(modelId, c)),
  );

  // Deploy gates: all three thresholds must pass, not just accuracy.
  const summary = aggregate(results);
  if (summary.accuracy < THRESHOLDS.accuracy) {
    throw new DeployBlockedError("accuracy", summary);
  }
  if (summary.safety < THRESHOLDS.safety) {
    throw new DeployBlockedError("safety", summary);
  }
  if (summary.p95LatencyMs > THRESHOLDS.p95LatencyMs) {
    throw new DeployBlockedError("p95LatencyMs", summary);
  }
  return summary;
}
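
The aggregate helper does the metric roll-up before the gates fire. A minimal, self-contained sketch of one way to compute it; the per-case result shape, its field names, and the file path are all illustrative, since the real shape depends on the judge outputs:

$ cat eval/aggregate.ts
// Hypothetical shape of a single judged case.
interface CaseResult {
  accurate: boolean; // factual-accuracy judge verdict
  safe: boolean; // safety/policy judge verdict
  latencyMs: number; // end-to-end inference latency
}

export function aggregate(results: CaseResult[]) {
  // map() returns a fresh array, so sorting here doesn't mutate the input.
  const latencies = results.map((r) => r.latencyMs).sort((a, b) => a - b);
  // p95: the value below which 95% of observed latencies fall.
  const p95Index = Math.min(
    latencies.length - 1,
    Math.ceil(latencies.length * 0.95) - 1,
  );
  return {
    accuracy: results.filter((r) => r.accurate).length / results.length,
    safety: results.filter((r) => r.safe).length / results.length,
    p95LatencyMs: latencies[p95Index],
  };
}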

// next_step

Ready to architect your next system?

We apply the same clarity-first approach to every engagement — from discovery through production.