FredAI
Oct 17, 2025

Safe, Scalable, and Secure AI Enablement
Executive Summary
In late 2024, Freddie Mac launched FredAI, a secure, internal generative AI platform designed to harness the power of large language models (LLMs) while protecting confidential enterprise data. What began as a small proof of concept built on a self-hosted instance of OpenAI's ChatGPT evolved into an enterprise-wide productivity platform by mid-2025, enabling employees to safely query internal knowledge sources, automate routine tasks, and share reusable workflows. Within months of rollout, adoption surpassed 50% of the employee base, and some teams reported measurable cost savings exceeding $10,000 per project.
Challenge
Freddie Mac’s enterprise policy restricts the use of external LLMs due to data-security and confidentiality concerns. While employees recognized the productivity potential of generative AI, there was no safe environment to experiment, test, or deploy these tools. The core challenge was to design a risk-free, internally contained AI system that allowed experimentation without exposing internal data — enabling teams to accelerate their work while maintaining compliance with legal and security frameworks.
Approach
The initiative began in Q4 2024 as a proof of concept using a self-hosted instance of OpenAI’s ChatGPT, enhanced with a secure middleware layer that prevented any data transmission outside the Freddie Mac environment.
By Q1 2025, the first version supported limited internal knowledge sources — including the Seller/Servicer Guide and confidential Terms of Business — ensuring responses were grounded in trusted information. Users could also add their own curated libraries, expanding relevance across business units.
In Q2 2025, FredAI introduced agentic and shared prompt workflows: modular, reusable automations that perform recurring tasks such as updating datasets, drafting communications, and summarizing policy changes. These workflows standardized output quality, made recurring work easier to delegate, and reduced manual effort.
Throughout development, Legal, Risk, and IT Security were embedded as early collaborators to ensure compliance at every milestone.

Adoption & Awareness
Enterprise-wide rollout required significant enablement and culture-building. A cross-functional awareness campaign combined SharePoint posts, all-staff emails, pop-up events, workshops, and a “Prompt-a-Palooza” competition to spark curiosity and build prompt literacy.
The initial pilot included roughly two dozen early users; after the enterprise rollout, adoption exceeded 5,000 employees, over half the workforce. The Prompt Workshop Series and department-level competitions helped employees visualize practical, safe use cases in their daily work, shifting perception from "experimental tool" to "trusted productivity platform."

Outcomes & Impact
Adoption: Surpassed the initial adoption target by more than 15x, reaching over 50% of employees within months of rollout.
Efficiency: Teams using FredAI reported $10,000 in savings per guide chapter automated — with potential for $300,000+ savings if scaled across all 30 chapters.
Risk Management: The platform maintained full isolation from external LLMs, meeting internal data-protection requirements.
Cultural Impact: Elevated AI literacy across business lines through experiential learning and peer collaboration.
Lessons Learned
The project underscored that AI innovation in a highly regulated environment requires early and continuous engagement from Legal and Risk. Each approval step took time, but integrating these stakeholders early accelerated overall progress and reduced rework.
Designing for confidentiality forced the team to think differently — to build AI capabilities “inside the fence” rather than seeking external shortcuts. This approach ultimately strengthened confidence across leadership and paved the way for broader AI governance at Freddie Mac.
Next Steps
FredAI is now expanding its library of shared workflows and exploring integration with other enterprise systems, such as Sigma and internal documentation hubs. Future iterations will focus on refinement of model-switching capabilities, deeper analytics tracking, and expanding the AI literacy program to further embed safe, scalable innovation across the enterprise.