Blog

Our latest thinking to keep you up to date
on technology & the industry
Agentic Knowledge Base
Carter Holmes
May 1, 2026

Your AI Tools Don't Need a Smarter Model. They Need a Better Knowledge Base.

Most AI failures in companies aren’t because the model is dumb. They happen because the company’s knowledge is messy, scattered, and outdated. If your data is chaos, your AI will confidently give you wrong answers. The fix is not a better model. It’s a structured, governed knowledge base that AI can actually understand and trust.

Read post
Claude Code speeds delivery, but without system governance, coherence and predictability break. Learn how to scale AI-driven development.
Nick Chase
April 29, 2026

Claude Code Is Not Enough. Why You Should Care Depends on Who You Are

Claude Code and similar AI coding tools genuinely make engineers faster, but speed alone doesn't guarantee better outcomes. The real variable is whether your system can absorb an increased rate of change. The same underlying problem shows up differently depending on who you are: technical leaders see loss of system coherence, business leaders see loss of delivery predictability. Most teams try to fix this with more tooling, better prompts, or better models, when what's actually missing is a governance layer that controls how changes enter the system.

Read post
Bigger context windows won't fix AI coding at enterprise scale. The real limits are structural, and the fix is a queryable map of your system, not more tokens.
Carter Holmes
April 27, 2026

Why Your Enterprise Codebase Will Never Fit in a Context Window (and What to Build Instead)

Enterprise codebases will never fit cleanly into an AI context window, and the limits are structural rather than engineering problems waiting on the next model release. Scaling AI coding agents at system level requires a structured, continuously maintained representation of the system itself, exposed to the model as a queryable map and governed by lifecycle gates that prevent "almost-right" code from quietly breaking production.

Read post
Senior engineers leave and critical system knowledge disappears. Before AI can safely change your codebase, you need a structured understanding of what it actually is and why.
Carter Holmes
April 9, 2026

What Your Codebase Knows That Your Team Forgot

When a senior engineer leaves, the context they carried walks out with them. AI tools operating on codebases without that context make dangerous assumptions. This article breaks down why system understanding is the prerequisite for AI-assisted development, and what technical leaders should do before pointing any AI tool at their code.

Read post
AI generates code in hours, not weeks. That shifts the bottleneck to requirements, acceptance criteria, and governance. Here's how the PM role is changing and what to do about it.
Carter Holmes
April 6, 2026

What Happens to the Product Manager When AI Builds the Code

When AI handles implementation, the PM role shifts from requester to governor. Requirements become the product, feedback loops compress from sprints to days, and acceptance criteria become functional specifications. This article breaks down what changes, what the role looks like in practice, and five concrete steps PMs can take now to prepare.

Read post
AI coding tools are speeding up development, but they’re not fixing the real bottleneck. Discover why raw coding agents fail and what an AI-native software lifecycle actually looks like.
Nick Chase
March 25, 2026

Raw Coding Agents Are Not a Software Development Lifecycle

AI has dramatically accelerated how code is written, but it hasn’t changed how software is actually built. This mismatch is creating new bottlenecks, increasing hidden complexity, and making systems harder to trust. The next evolution isn’t better coding tools, it’s a fully structured, AI-driven lifecycle that governs how software is designed, validated, and continuously evolved.

Read post
AI hallucinations in enterprise software aren't a model problem, they're a context problem. Learn how strategic Context Engineering and the AppGraph eliminate tribal knowledge and unlock safe AI-SDLC adoption.
Eduardo Dominguez
March 5, 2026

The Context Crisis: Solving AI Hallucinations through Strategic Context Engineering

AI hallucinations in legacy systems are not a technology problem. They are a context problem. When a coding assistant breaks your database sharding logic or ignores a legacy authentication wrapper, it hasn't failed; it has simply made the most statistically likely guess in the absence of specific facts.

Read post
More commits, slower roadmaps. Discover why the Engineering Velocity Trap is stalling software delivery and how AI-SDLC governance helps teams escape the productivity paradox.
Eduardo Dominguez
March 5, 2026

The Engineering Velocity Trap: Navigating the Productivity Paradox

AI has transformed how fast software gets written. But speed at the commit layer doesn't equal velocity at the system level. This post explores why unchecked AI adoption is creating a Review Crisis across engineering organizations and what executive teams need to do to govern change intelligently before the complexity tax compounds beyond recovery.

Read post
Replacing Process and Admin Work with AI Agents
Carter Holmes
March 3, 2026

Replacing Process and Admin Work with AI Agents

Most organisations are buried in admin that adds little value. Traditional automation failed because real work needs context, judgement, and coordination across tools. AI agents can now handle this, but only if teams are AI-native. Most AI pilots fail due to lack of understanding, not bad tech. This blog explains CloudGeometry’s five-step framework (Brief, Tooling, Sprint, POC, Adoption), which cut admin by 63% and recovered 40+ hours a week.

Read post
Most AI initiatives fail before launch because key decisions are never made. Learn the four choices required before AI makes decisions for you.
Nick Chase
January 20, 2026

Why Most AI Strategies Break Before They Ever Ship

Most AI initiatives stall not because models underperform, but because organizations fail to decide how AI behavior will be evaluated, governed, corrected, and explained. This article outlines the four foundational decisions every team must make before AI starts making decisions on their behalf, and why skipping them quietly breaks AI strategies long before anything ships.

Read post
Choosing AI tools too early locks in hidden assumptions. Learn why architecture, risk, and governance must come before tools.
Nick Chase
January 13, 2026

AI Tools Come Last. Here’s Why That Matters.

Starting AI projects by picking tools feels like progress, but it often hard-codes architectural decisions before teams understand their risks. This article explains why tools should come last, and how treating them as replaceable implementations leads to more resilient, future-proof AI systems.

Read post
Not every AI system should be an agent. Learn when workflows beat autonomy, how to choose the right AI architecture, and avoid unnecessary risk.
Nick Chase
January 9, 2026

You May Not Be Building an AI Agent. And That’s OK.

AI agents are powerful—but often overused. This piece explains the real architectural differences between single-step AI, workflows, and agents, and shows why most production systems don’t need autonomy to succeed.

Read post
Stop comparing AI tools by features. Learn how to choose AI tooling by understanding which decisions belong to you and which tools merely implement.
Nick Chase
January 2, 2026

So. Many. AI Tools. Here’s How to Know What You Actually Need.

The AI tooling landscape feels overwhelming because teams start with products instead of decisions. This article reframes AI tools as implementations of specific choices about control, autonomy, data, and evaluation, and shows how clarifying those decisions first makes tool selection simpler, safer, and more durable.

Read post
Avoid AI budget waste in 2026. Use this practical guide to plan investments that produce measurable business impact.
Nick Chase
December 30, 2025

Planning AI Investments That Actually Pay Off in 2026

In 2026, AI spending scrutiny will rise. This guide helps organizations plan AI investments that survive CFO review, avoid pilot purgatory, and deliver compounding ROI through clear outcomes, defined metrics, and scalable foundations.

Read post
Join Our Newsletter & Network

We share this knowledge with our clients, partners, and network.

Nick Chase delivers a one-hour Fundamental Crash Course on what you need to know about AI Agents. Stay ahead of the curve and leverage the latest insights.
Business
Beginner
Certification

AI Agents Business Crash Course

Get a custom course for your management team with the latest update on the state of AI in your industry.

This course explores the foundational components required to design and deploy effective AI agents. It walks through the technology stack—from data handling to LLM integration—and highlights real-world use cases and infrastructure considerations in modern AI development.
Beginner
Certification

AI Agents: From Design to Tech Stack

Courses, Webinars, Resources Hub

Insights.CloudGeometry

Up in Time, Down in Cost.
Intermediate
Certification

Balancing Kubernetes Reliability vs. Cost Optimization in the Real World

Alex Ulyanov
CTO
Anton Weiss
Chief Evangelist
PerfectScale
Is it true that artificial intelligence can make business intelligence a little bit more... well, intelligent? The challenge: get system data from one business process to tell you more about your other systems and business processes — using reports and dashboards you already have (even unstructured data). Rewatch experts Rob Giardina, Founder of Claritype, and Nick Chase of CloudGeometry in a deep dive unlocking the power of LLMs with a Standardized Data Model.
Beginner
Certification

AI for Better BI with the Data you Already Have

Nick Chase
Chief AI Officer
Rob Giardina
Founder
Claritype
This three-part series introduces the principles of securing AI systems. It covers foundational AI security concepts, provides a strategic overview of secure GenAI system deployment, and addresses future-proofing techniques to ensure safe and resilient AI architectures.
Intermediate
Certification

Foundations and Strategies for AI Security

Nick Chase
Chief AI Officer
David Fishman
VP Products & Services

Get your data, applications, and infrastructure ready for the AI revolution

Take your DevOps practices to the next level with Cloud Native & Platform Engineering

CloudGeometry

AI Transformation Survey