According to a recent McKinsey study, the use of AI in a variety of forms continues to grow across organizations and business functions worldwide. Yet, while enthusiasm is high, most organizations remain in the early phases of integration, still experimenting with ways to achieve meaningful return on investment from AI adoption.
The greatest challenge often lies in AI’s complexity. True integration touches nearly every part of an organization—from technology infrastructure and data privacy to how employees interact with their spaces day to day.
One emerging strategy gaining attention is the shift toward on-premise AI (also known as private AI), an arrangement by which AI infrastructure and applications are hosted internally via on-site servers, private cloud systems or dedicated data centers. Compared to fully cloud-based models, on-premise solutions offer organizations greater control, improved security and, potentially, better alignment with privacy regulations.
As hardware continues to get smaller, faster and more energy efficient, many companies may soon be able to house at least a portion of their AI infrastructure in-house, reshaping what’s possible for on-site capabilities. Hybrid approaches—combining on-premise with cloud or co-located environments—will also be a popular option, depending on budget, scalability and sensitivity of data.
Making Room for AI Infrastructure with Flexibility at the Forefront
There’s no one-size-fits-all approach to AI. That’s why the best thing designers can do is prepare for robust conversations with their clients about implementing it in their spaces. For workplace architects, engineers and strategists alike, this means being ready to weigh all the variables—from space planning and system design to sustainability, security and user experience.
The foundation of any AI-ready office starts with physical infrastructure. As organizations begin to explore on-premise AI, the demand for secure, high-performance computing on site is reemerging. Although many companies reduced the size of their IT rooms in the widespread shift to cloud computing, the return of local data processing may reverse that trend. Supporting real-time AI applications requires fast, low-latency networks, robust computing power and secure environments for sensitive data, all of which necessitate intentional space planning.
Key components like high-performance GPUs and fast data connections, including high-speed copper and fiber, come with significant spatial and operational demands. Rack space, power and cooling are critical. In particular, liquid cooling systems—often essential for compact, high-performance computing—help to minimize hardware footprints, but require more extensive mechanical and plumbing infrastructure. Striking the right balance between these competing demands is essential.
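The power and cooling trade-offs above can be roughed out early in planning. The sketch below is a back-of-envelope sizing aid using wholly illustrative assumptions—rack counts, per-rack wattage and the PUE (Power Usage Effectiveness) value would come from the client’s actual equipment and cooling strategy, not from this example.

```python
# Back-of-envelope sizing for an on-site AI server room.
# All figures here are illustrative assumptions, not vendor specifications.

def room_estimate(num_racks: int, kw_per_rack: float, pue: float) -> dict:
    """Estimate power and heat-rejection needs for a small server room.

    PUE (Power Usage Effectiveness) captures cooling and electrical
    overhead: total facility power = IT power * PUE.
    """
    it_load_kw = num_racks * kw_per_rack
    facility_kw = it_load_kw * pue
    cooling_overhead_kw = facility_kw - it_load_kw  # mostly heat rejection
    return {
        "it_load_kw": it_load_kw,
        "facility_kw": facility_kw,
        "cooling_overhead_kw": cooling_overhead_kw,
    }

# Example: ten racks of GPU servers at an assumed 15 kW each, with
# liquid cooling bringing PUE down to an assumed 1.2.
print(room_estimate(num_racks=10, kw_per_rack=15.0, pue=1.2))
```

Even a rough estimate like this signals whether a planned room’s electrical service and mechanical capacity are in the right range before detailed engineering begins.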
Beyond immediate needs, designers must also consider future-proofing. Because AI capabilities are evolving rapidly, spaces should be designed with flexibility and scalability in mind. That means avoiding overcommitment to fixed infrastructure while still anticipating increased workloads. Power and cooling resources are finite, so overbuilding carries environmental and economic risks to be considered as well. The most effective designs make space for secure, scalable and adaptable AI infrastructure, capable of supporting both current needs and emerging technologies.
For many clients, a hybrid model will be the best path forward. This might include a mix of on-premise equipment, localized secure data centers and cloud-based processing. Each site’s power availability will be a limitation, and local grid constraints should be factored into designs.

STACK Infrastructure—digital infrastructure partner to the world’s most innovative companies and a leading global developer and operator of co-located data centers—proves that a localized data center can be a valuable piece of the hybrid puzzle. HGA worked with the company to develop its SVY01 data center campus in San Jose’s Enterprise Zone, a 10-square-mile area that has been designated to attract and retain businesses. Due to this urban location, the project’s overall design quality and sensitivity to the center’s surroundings were paramount for HGA’s design team.
The resulting three-story, LEED-Silver building fits seamlessly into its site and houses offices and electrical rooms on its ground floor and data halls on its upper two floors. Its 32 megawatts of total capacity are spread across four data halls, each of which houses roughly 1,000 racks supporting the IT needs of a variety of local corporations in the same space. Backed by substantial emergency generators and enhanced safety measures, even San Jose’s technology giants can feel confident that their data is secure. For many, this type of data center can be one component within a broader AI infrastructure strategy, complemented by on-premise systems designed to handle highly sensitive data with maximum security.
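The published figures also allow a quick sanity check of the average power budget each rack can draw—useful context when comparing a co-located hall against the density of an on-premise GPU room:

```python
# Average per-rack power implied by the SVY01 figures cited above:
# 32 MW total capacity, four data halls, roughly 1,000 racks per hall.
total_mw = 32
halls = 4
racks_per_hall = 1000

kw_per_rack = (total_mw * 1000) / (halls * racks_per_hall)
print(kw_per_rack)  # 8.0 kW per rack on average
```

This is an average, not a per-rack limit; actual allocations vary by tenant and cooling configuration.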
Securing Data and Enhancing the User Experience
For many organizations, data security is the leading reason to consider on-premise AI. Internalizing AI infrastructure allows full control over both data and models. Organizations can load foundation models into secure environments protected by access management protocols and biometric controls, as well as physically secure rooms housing AI servers and hardware.
To accommodate this infrastructure well, architects and technology designers must collaborate closely. Designs should integrate restricted zones, layered security and seamless digital-physical coordination.
Beyond backend systems, AI is already enhancing how people interact with buildings. Popular applications in workplace settings include:
- AV and security cameras with intelligent tracking and real-time analytics, contributing to safer and more responsive environments;
- Smart videoconferencing that identifies speakers and improves AI-generated transcriptions, making meetings more efficient and inclusive;
- Digital signage and AI chatbots at common area kiosks for wayfinding and crowd management, helping visitors and staff navigate spaces more easily; and
- Generative digital artwork that responds to environmental conditions or activity in real time, creating more engaging and dynamic spaces.

Creating Smart and Sustainable Building Systems Through AI
AI-ready spaces should also be sustainable. Data centers and server rooms are energy-intensive, but several strategies can reduce their footprint, such as:
- Using liquid cooling over air cooling
- Implementing hot/cold aisle containment
- Sourcing renewable energy, such as solar
- Designing for modularity and material reuse
At the new STACK Infrastructure campus, the mechanical system comprises water-cooled chillers on its roof that feed fan walls delivering conditioned air across its data halls. Additionally, in some cases, the waste heat captured by 24/7 cooling operations can be repurposed to support other building systems, enhancing overall energy efficiency.
AI can also enable office buildings to operate more intelligently. Where earlier smart-building efforts focused on making systems like HVAC and lighting networked and interoperable, today AI models can go further—optimizing performance, predicting maintenance issues and reducing manual oversight.
To further sustainability efforts, the growing shift toward edge AI means trained models are no longer confined to server rooms—they’re embedded in the devices themselves. These models can:
- Predict occupancy and adjust systems in real time
- Detect faults before breakdowns occur
- Optimize energy use without manual intervention
However, to make this work, designers need to ensure devices collect the right data for inference.
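To make the occupancy example concrete, the sketch below shows the kind of lightweight decision an embedded device might make locally. The sensor inputs, thresholds and setpoints are illustrative assumptions only; a real deployment would run a trained model on device-specific data rather than fixed rules.

```python
# Minimal sketch of edge-style inference: an occupancy-driven HVAC setback
# decision made locally on a device. Thresholds and setpoints below are
# illustrative assumptions, not recommended engineering values.

def hvac_setpoint_c(co2_ppm: float, motion_events: int,
                    occupied_setpoint: float = 21.0,
                    setback_setpoint: float = 26.0) -> float:
    """Infer occupancy from local sensor data and pick a cooling setpoint."""
    occupied = co2_ppm > 600 or motion_events > 0
    return occupied_setpoint if occupied else setback_setpoint

print(hvac_setpoint_c(co2_ppm=850, motion_events=3))  # busy zone
print(hvac_setpoint_c(co2_ppm=450, motion_events=0))  # empty zone, setback
```

The design point for architects is upstream of the code: the device can only infer occupancy if the space is planned so sensors actually capture representative CO2 and motion data.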
Designing for an AI-Enabled Future
AI is not just a software upgrade—it’s a paradigm shift, one that touches every aspect of how we plan, design and operate our workplaces. While technology will continue to evolve, the current direction is clear: organizations must begin laying the groundwork now.
For designers, this means creating spaces that are agile, adaptable and resilient—capable of supporting today’s needs while remaining flexible for what’s next. It also means helping clients think strategically about scalability. AI adoption and development don’t have to happen all at once. Phased infrastructure rollouts and modular spaces offer practical, cost-effective entry points, especially for smaller organizations.
Equally important are ethical and regulatory considerations. From data privacy to environmental impact, design teams must prioritize not just capability, but accountability. Cross-disciplinary collaboration that brings together architects, engineers and technology experts is key to developing tailored, scalable solutions grounded in both present demands and future potential.

One example of this kind of future-oriented strategy is HGA’s development of Iris, a private AI chatbot trained on the firm’s own writing standards and embedded within its internal systems. Iris pulls from large language models (LLMs) such as ChatGPT, Gemini and Claude, but operates securely behind HGA’s firewall. This approach allows teams to generate content, leverage institutional knowledge and collaborate on in-progress materials without exposing confidential information. It’s part of a broader hybrid AI strategy that combines on-premise security with the scalability of cloud-based tools, demonstrating how firms can tailor AI to fit both their operational and security needs.
As AI reshapes the workplace, designers have a unique opportunity to lead with empathy, intelligence and vision. By engaging early and working collaboratively, we can help create environments that are not only smarter and more secure—but also more human.