THE CALIBRATION

Strategic Use of AI Enablement to
Reframe Client Intake

Navigating the tension between scale and soul

At Harms Advisory Group, we protect the privacy of our clients. All case study Calibrations are anonymized to protect our partners’ brands. What you’ll read here reflects real interventions, systemic truths, and outcome shifts, without compromising the relationships that made them possible.

“AI is not a feature; it is a forcing function.”

-Tisha Hartman, Founder and CEO, Harms Advisory Group

The Inflection Point

A Canadian equity-centered venture with fewer than 25 employees and an annual budget just under $2M was experiencing rising national demand from women, Indigenous, and newcomer entrepreneurs, revealing critical strain in its manually run intake systems and its time-limited advising capacity. For years, this not-for-profit had handled client intake one project at a time—manually, relationally, and in person. But as demand grew, so did the strain on staff and systems.

They were ready to modernize but not at the cost of empathy.

With a grant on the table, they needed a partner to articulate the case for change: to build a vision of how AI could meet operational needs without compromising the humanity of their work. Given that their client base was a marginalized demographic, extra care had to be taken toward sensitivity, protection, and bias prevention.

Strategic Outcome

The design of the AI program had to fit the demographic context of their clients, which required special consideration. The resulting platform—housed within their existing intake portal—was designed to reduce bottlenecks while preserving high-value advising where it mattered most.

By automating only the right steps, the system would protect client dignity while freeing staff time to advise on vetted applications.

Engagement Tier Fit

This was a Scale-tier engagement, delivered through our AI & Intelligent Operations Roadmap offering.

  • A foundational shift in how intake and financial evaluation were structured
  • Grant-dependent funding with tight timelines
  • A mission-critical need for alignment between values, tech, and trust

Harms authored the grant-ready strategy and language that bridged nonprofit sensibilities with funder imperatives.

Signal Diagnoses

In a move toward modernization, here is where we believed the introduction of an AI platform at the intake level would benefit this organization. Can you see areas where similar infrastructural improvements could help your own business scale?

  • A mismatch between client demand and manual intake capacity created silent bottlenecks and service delays
  • Advisors carried dual burdens as gatekeepers and evaluators
  • A lack of pre-screening tools reduced the strategic value of advisor-client interactions
  • Ethical risk loomed large: the AI needed to be human-reviewed, bias-aware, and client-sensitive
  • Increased efficiency and immediate turnaround through an AI portal could increase client trust and enthusiasm for the organization


The Reframe

If we stop designing for efficiency alone and begin designing AI systems for dignity and discernment, we enter a whole new terrain. The challenge isn’t just scale—it’s scaling with soul. 

What this team needed wasn’t a tool that judged worthiness but one that noticed patterns of alignment. This wasn’t about automating more, but filtering more accurately and in minutes instead of days or weeks. The ideal solution was a system that could handle the volume of complex analysis without flattening the humanity in each proposal.

  • A need for consistent, scalable, and inclusive business support
  • Applications could be pre-filtered and potential flags raised early, so advisor-to-client interactions would become more valuable and consume less time
  • Because of the nature of the nonprofit’s clients, ethics and privacy remained key to all planning initiatives
  • A limited-time, government-funded grant was available to fund implementation


System Design Highlights

While the strategy prioritized humanity, Harms also engineered the technical foundation to reflect the same level of care. This wasn’t an out-of-the-box solution; it was a system designed for trust.

  • Consent by design: no data processed without opt-in; encrypted, timestamped logging throughout.
  • PII redaction with a 7-day expiry: personally identifiable information masked, stored encrypted, and deleted automatically after one week.
  • Tiered override protocol: human-in-the-loop review governs all scoring logic, with advisory dashboards and audit trails.
  • Bias audit harness: 12+ diverse client profiles used to stress-test fairness and surface skew in AI outputs.
  • Canada-resident GPT-4 API: all interactions hosted in Canadian data centers, with zero reuse for model training.
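To make the first two commitments concrete, here is a minimal sketch of what consent-gated, time-limited intake storage can look like in code. All names are hypothetical, only one PII class (email addresses) is masked for illustration, and encryption at rest is omitted for brevity:

```python
import hashlib
import re
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)  # the 7-day expiry window from the design spec
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Mask email addresses (one PII class, for illustration only)."""
    return EMAIL_RE.sub("[REDACTED]", text)

class IntakeStore:
    """Hypothetical consent-by-design intake record store."""

    def __init__(self):
        self._records = {}

    def submit(self, client_id: str, text: str, consented: bool, now=None):
        """Consent by design: refuse to process anything without opt-in."""
        if not consented:
            raise PermissionError("no opt-in consent; record not processed")
        now = now or datetime.now(timezone.utc)
        self._records[client_id] = {
            "text": redact(text),          # PII masked before storage
            "stored_at": now,              # timestamped logging
            "checksum": hashlib.sha256(text.encode()).hexdigest(),
        }

    def purge_expired(self, now=None) -> int:
        """Delete records older than the retention window; return the count."""
        now = now or datetime.now(timezone.utc)
        expired = [k for k, v in self._records.items()
                   if now - v["stored_at"] > RETENTION]
        for k in expired:
            del self._records[k]
        return len(expired)

    def get(self, client_id):
        return self._records.get(client_id)
```

In a real deployment, `purge_expired` would run on a schedule and the store would sit behind encrypted storage; the point of the sketch is that consent and expiry are enforced by the code path itself, not by policy alone.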

The system’s ethics weren’t abstract concepts; they were enforced in code at every step of the design.
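A bias audit harness like the one described above can take many forms. As a minimal sketch (names and threshold are hypothetical, and a numeric scoring function is assumed), one approach is to score otherwise-equivalent synthetic profiles across demographic groups and flag any group whose mean score diverges beyond a tolerance:

```python
from statistics import mean

SKEW_THRESHOLD = 0.05  # hypothetical tolerance for group mean-score gaps

def audit(scorer, profiles):
    """Run a scorer over synthetic client profiles and report any
    demographic group whose mean score diverges from the overall
    mean by more than SKEW_THRESHOLD.

    Each profile is a dict: {"group": <label>, "application": <data>}.
    Returns {group: signed deviation} for flagged groups only.
    """
    by_group = {}
    for p in profiles:
        by_group.setdefault(p["group"], []).append(scorer(p["application"]))
    overall = mean(s for scores in by_group.values() for s in scores)
    return {group: round(mean(scores) - overall, 3)
            for group, scores in by_group.items()
            if abs(mean(scores) - overall) > SKEW_THRESHOLD}
```

An empty result means no group drifted outside tolerance on that run; a non-empty one gives reviewers a signed deviation per group to investigate before any scoring logic ships.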

What’s Different Now

At the time of writing, the grant and proposal for full system deployment are under review. But the strategy is already changing the conversation with funders, staff, and stakeholders alike. We project that:

  1. Up to 30% of staff time could be reallocated from intake to impact execution
  2. Client responsiveness would increase: with pre-screening available 24/7, applicants would receive guidance faster and more consistently
  3. Advisor-client time would become higher-value, focused only on clients already screened for alignment and feasibility
  4. Burnout risk would decline as staff energy is repurposed for mission-aligned engagement

Replication Signal

Some fear AI. Some run toward it. We believe in neither panic nor hype, but in precision.

Used well, AI becomes an amplifier of discernment, not a shortcut. We know AI will be a major feature of the future of business.

Especially in equity-driven work, it must be handled with care, oversight, and respect. Done right, AI becomes an operational multiplier and an ethical ally.


Time-to-shift: 6-12 months

Author: Jesse Harms, President
Human-led. AI-assisted. Always accountable.

“It’s about building an execution system that thinks with you like a second nervous system, embedded in your operations—fast, responsive, and deeply contextual.”

-Tisha Hartman, Founder and CEO, Harms Advisory Group

Is your organization ready to scale with AI?
Our AI & Intelligent Operations Roadmap is a premium diagnostic designed for leadership clarity,
operational precision, and ethical systems alignment.
→ Start the conversation

This remains an active engagement at the Scale tier, initiated through our AI & Intelligent Operations Roadmap. Through it, Harms supported the partner organization in crafting the grant application and funder-facing strategy, aligning their operational needs with ethical AI imperatives. Harms also has an active proposal to lead the full system deployment as an extension of the original engagement, quoted at $235,000 CAD.

→ Contact us for off-menu engagements

#Tier(s): Scale
#Engagement Name(s):
AI & Intelligent Operations Roadmap
#Sector(s):
Nonprofit, Equity-Focused, Economic Empowerment
#Competencies:
AI Strategy, Process Design, Ethical Systems, Grant Alignment, Stakeholder Readiness
#Inflection Point(s): Scaling Pain, Systemic Bottlenecks, Trust Vulnerability
