Cloudsome Pulse #8: When Governance Becomes Intent
The call we didn’t expect
Two weeks after Marco discovered conversational deployment, he didn’t call us.
His CFO did.
“I need to understand something,” Elena said, without preamble.
“We’re paying for three cloud environments. I see the invoices. But no one can tell me what we’re actually using.”
She wasn’t angry.
She was confused.
Simple questions. No answers.
Elena continued, calmly:
“Are those 47 VMs in our private cloud still needed?”
“Why is the hyperscaler bill 30% higher than last quarter?”
“Which team owns that €12K-per-month service?”
Marco was on the call with her.
Silence.
Not because he didn’t care — but because, in practice, each platform spoke a different language.
Different cost models.
Different tags.
Different levels of detail.
Applications layered over time by different teams.
And it wasn’t easy to untangle the mess.
“Marco,” Elena said, “deployment works now. But who governs all of this?”
Simple questions. No answers.
After the call, Marco wrote to us.
“She’s right. I’ve fixed how we deploy and manage new applications.
But I still don’t have a clear picture of what, overall, is really running.”
He knew exactly what Elena meant.
A private cloud based on OpenStack, hosting core workloads layered over time and managed by different teams.
A hyperscaler used for elasticity and data services, to guarantee high availability.
A legacy VMware environment kept alive for that one system no one dares to touch.
Three environments.
Three dashboards.
Three different ways of describing reality.
“I don’t need another control panel,” Marco said.
“I need someone to tell me what’s going on — preferably in human language.”
Simple questions. No answers.
Marco came back with a more concrete question.
“With your platform,” he asked, “could I ask something simpler?
How many machines are actually doing work?”
We showed him the beta.
He typed directly:
“How many VMs have an average CPU utilization below 60% over the last 30 days?”
Eight seconds.
The platform correlated observability metrics across environments and replied:
“31 machines out of 47 show sustained average CPU utilization below 60%.
14 have had no significant network traffic for more than 14 days.
9 have no declared owner.”
Marco didn’t look surprised.
“Thirty-one,” he said.
“That’s most of them.”
Losing the link between decisions and reality
“I know why I have 47 machines,” Marco said.
“I created them over time, as application teams released new systems.
And each one asked for a VM.”
He paused.
“Of course, whenever I could, I optimized.
But over time, what I lost was the connection between those decisions — and what is actually running today.”
This is where a different operating model becomes necessary.
You shouldn’t create machines.
You should create space for application workloads.
Even better if you can express it as intent:
what an application needs in order to run correctly.
The platform then decides how to allocate infrastructure to satisfy that intent.
Machines become an implementation detail — not a decision you make.
The result isn’t “fewer resources through discipline.”
It’s capacity aligned by design.
You don’t pay for machines just in case.
You pay for what is actually being used.
Marco nodded.
“So if I had deployed everything this way from the start,” he said,
“many of those machines probably wouldn’t exist — or at least not in that form.”
Exactly.
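The intent-first model can be sketched as data plus a placement decision. Everything below is hypothetical: the article never shows Cloudsome's real intent format, so the field names and the allocation logic are illustrative guesses at "what an application needs in order to run correctly":

```python
# Hypothetical intent declaration: what the application needs,
# not which machines to create.
intent = {
    "application": "order-service",   # illustrative name
    "needs": {
        "cpu_cores": 4,
        "memory_gb": 8,
        "availability": "high",
    },
    "owner": "team-payments",
    "budget_eur_month": 1500,
}

def allocate(declared: dict) -> dict:
    """Stand-in for the platform's placement step: infrastructure is
    derived from the intent, so machines stay an implementation detail."""
    replicas = 2 if declared["needs"]["availability"] == "high" else 1
    return {
        "workload": declared["application"],
        "replicas": replicas,
        "cpu_per_replica": declared["needs"]["cpu_cores"],
        "memory_gb_per_replica": declared["needs"]["memory_gb"],
    }
```

Note what the team never writes: a VM request. If the intent changes, the allocation is recomputed; nothing lingers "just in case".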
From visibility to governance
Marco immediately saw where this was heading.
“Today you give me visibility,” he said.
“I can finally see inefficiencies.”
He paused.
“But the next step is doing something about them, right?”
Yes.
And it’s not a leap of faith —
it’s a natural extension of what already exists.
Why this works: a live infrastructure model
Cloudsome doesn’t rely on static inventories or disconnected dashboards.
It maintains a live infrastructure model — a continuously updated representation of:
what exists
how components connect
what depends on what
how systems actually behave across environments
Metrics, logs, relationships, and ownership signals all converge into the same model.
On top of this model runs a closed loop:
Discover — observe what exists and how it behaves
Decide — translate intent into actions
Deploy — execute deterministically
Observe — verify outcomes through telemetry
Govern — surface deviations and reconcile drift
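The five phases can be sketched as a single reconciliation pass over the shared model. All names here are illustrative, not Cloudsome's API; the point is that deploying and detecting drift are one loop, not two tools:

```python
def run_loop(model: dict, intents: list[dict]) -> list[str]:
    """One pass of Discover -> Decide -> Deploy -> Observe -> Govern."""
    events = []
    # Discover: observe what exists and how it behaves
    observed = model["resources"]
    # Decide: translate intent into actions (only what is missing)
    actions = [i for i in intents if i["target"] not in observed]
    # Deploy: execute deterministically
    for action in actions:
        observed[action["target"]] = action["spec"]
        events.append(f"deployed {action['target']}")
    # Observe: verify outcomes through telemetry (stubbed as a flag)
    healthy = all(spec.get("healthy", True) for spec in observed.values())
    # Govern: surface deviations and reconcile drift
    if not healthy:
        events.append("drift detected")
    return events
```

Running the loop a second time with the same intents produces no new actions, which is what makes it a reconciliation loop rather than a script.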
Today, this loop governs deployment.
The key realization is simple.
The question
“how many machines are underutilized?”
and the intent
“ensure no machine stays underutilized for more than 14 days”
are the same thing.
One is a query.
The other is a continuous rule.
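That equivalence can be made concrete: one predicate backs both forms. A minimal sketch, with illustrative names and the thresholds from the story:

```python
def underutilized(vm: dict) -> bool:
    """The shared predicate behind both the query and the rule."""
    return vm["avg_cpu"] < 60 and vm["idle_days"] > 14

def query(fleet: list[dict]) -> int:
    """One-shot: how many machines are underutilized right now?"""
    return sum(1 for vm in fleet if underutilized(vm))

def rule(fleet: list[dict]) -> list[str]:
    """Continuous: evaluated on every loop pass, it yields actions
    instead of an answer."""
    return [f"reclaim {vm['name']}" for vm in fleet if underutilized(vm)]
```

Asking the question and enforcing the rule differ only in what is done with the matches.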
What comes next — naturally
This is exactly the direction we’re heading.
What today sounds like:
“How many resources have no declared owner?”
Becomes:
“Alert me when a resource has no owner for more than 7 days.”
And, later:
“Prevent new resources from being created without an owner and a declared budget.”
Same language.
Same model.
Same loop.
Just applied more deeply.
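The progression from question to alert to prevention can be sketched as one rule applied in increasingly strict modes. This is a hypothetical example, not Cloudsome's policy syntax:

```python
def ownership_check(resource: dict, mode: str):
    """One ownership rule, three levels of strictness:
    'query' reports, 'alert' notifies after 7 days, 'enforce' blocks."""
    missing = resource.get("owner") is None
    if mode == "query":
        return missing                       # just answer the question
    if mode == "alert" and missing and resource["age_days"] > 7:
        return f"alert: {resource['name']} unowned for 7+ days"
    if mode == "enforce" and missing:
        raise PermissionError(f"{resource['name']}: owner required")
    return None
```

Same rule, same model; only the consequence of a match changes as governance deepens.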
What Marco said as he left
“The complexity hasn’t disappeared,” Marco said.
“But at least now I can see it.”
Then he added:
“When I can tell the system to act on what it sees — not just show it to me — call me.”
We’re almost there.
——
Hybrid environments don’t need to become smaller.
They need to become understandable — and governable — by design.
And once deployment speaks intent,
governance inevitably follows.
