Marco and His Impossible Equation: Why We Built the First Conversational CloudOS

Dec 1, 2025

Cloudsome Pulse

From CMP to CloudOS: The Leap That Changes Everything 

A few days after the CloudsomeOS launch — and after reading the announcement — Marco calls me back. Skeptical. 

"OK, I read your announcement. The first Conversational CloudOS. 

But what really changes from all the other platforms I'm testing?" 

"And sorry, but I'm also trying out various 'vibe-coding' solutions where you can deploy straight from the app. What's different? Why another tool? Didn't we agree that we need to simplify? And then, in short: why call it a CloudOS and not simply a new CMP with a bit of AI?" 

All legitimate questions, and ones I was expecting, because they're the same ones we asked ourselves. In fact, Marco's reflections are exactly where we started.

---

The starting point: Cloudsome was also born (almost) as a CMP 

First of all: Cloudsome wasn't born as a CloudOS. Initially, our work looked very much like that of a "modern" CMP. 

On our backend, matured over time, we built: 

  • all the IaaS primitives to create, modify and orchestrate resources (VMs, networks, volumes, security, etc.); 

  • all the PaaS primitives to distribute services and applications (build, deploy, routing, scaling); 

  • an automation engine that unified provisioning, networking, security, observability, backup. 

In practice: a unified IaaS + PaaS automation engine, designed to make complexity transparent to the end user. 

It was the "cloud engine" that today sits under the hood of CloudsomeOS. Born on AWS in a managed environment, it was then extended to run on any hyperscaler instance, on OpenStack, and to manage heterogeneous application stacks in any context. The Cloudsome team would create the environment, the deployment infrastructure and a simple executable recipe of instructions, and with a simple "push" you could deploy your application from the command line or from your Git repository.

At that point the natural reasoning was: 

  1. build a front-end; 

  2. add dashboards and governance views; 

  3. make it self-service; 

  4. package it as an "evolved cloud management platform". 

In other words: a CMP with a particularly powerful backend. 

But while we were designing, we paused. One question stopped us: "Are we really solving Marco's problem?" 

---

What CMPs do (well), and where they stop 

Back to Marco's question: 

"What really changes compared to a CMP with a bit of AI on top?" 

The honest answer is: CMPs do a lot, but they stop at a certain level. 

  • "Infrastructure-first" CMPs are excellent for: governance, policies, costs, roles, compliance. 

  • "Deploy-first" CMPs improve developers' day-2: integrated CI/CD, templates, environment management, some guided automation. 

But both start from the same assumption: the unit of work is the resource, the tool, the workflow. 

They help you manage the means better. But they don't take charge of the meaning of what you're trying to achieve. 

They ask you: "Tell me how to configure," but not: "Tell me what you want to achieve." 

---

The natural limits of "vibe-coding" 

Let's also address Marco's second point: 

"The in-app solutions I'm testing aren't bad. Why isn't that enough?" 

Because their model works as long as: 

  • deployment is linear; 

  • the number of services is limited; 

  • integrations are few or none; 

  • there's only one provider. 

As soon as: 

  • you add multiple microservices, 

  • in different languages (say a Node.js frontend and a Python backend), 

  • you manage different environments (dev, staging, prod, demo), 

  • you need to connect to external services (DB, cache, queues, third-party systems), 

  • you work in real multi-cloud or on private OpenStack, 

the "deploy from app" language no longer holds up. It can't represent, nor orchestrate, that complexity. 

It's not a defect: it's their purpose. They're perfect tools for circumscribed cases. 

In other words, even looking at the most recent and "developer-first" solutions, the problem wasn't the tool itself, but the fact that everything continued to ask Marco to think in the system's terms, instead of his own. 

---

The question that really stopped us: do we really need another control panel? 

While the backend was maturing, the landscape changed rapidly over the last year: 

  • "infra-first" CMPs remained central for governance and costs; 

  • new "developer-first" platforms promised fast deployments and simplified UX; 

  • "vibe-coding" solutions with in-app deployment started appearing; 

  • "all-in-one" experiences like DigitalOcean showed that simple cloud is possible — but only in their own sandbox. 

Looking at all this, the risk was clear: we were about to build another layer of interfaces on top of tools, not a different level of understanding. 

The question changed: 

"If in the end the user still has to think in terms of resources, configurations and scripts, are we really simplifying or just reorganizing complexity?" 

It was also thanks to the acceleration of LLMs that we started asking ourselves: 

"What if the problem wasn't how you show things, but the language you let people express their intent in?" 

And we wanted to start from a different hypothesis: intent should be able to handle both simple and complex cases without changing either form or substance. 

From there, the next step wasn't "let's add a chat to the panel." It was: change the language level. 

---

The four fundamental questions (Marco's, and ours too) 

At a certain point, the reasoning boiled down to four simple questions: 

  1. Why do I still have to think like a system? If every platform still asks me to reason in terms of resources, configs, pipelines, I'm just shifting the problem up one level. 

  2. Why do I have to choose between simplicity and freedom? "All-in-one" models work great... as long as I accept lock-in to a single provider. 

  3. Why don’t app-centric models scale to real architectures? "Vibe-coding" or app-centric solutions work well for simple applications; as soon as I add services, environments, integrations, the magic disappears. 

  4. Why does multi-cloud exist only on paper? Most platforms support multiple providers, but don't offer common semantics: each cloud remains a world unto itself. 

Among the four questions, one is particularly revealing: 

Why do I have to choose between simplicity and freedom? 

To give a practical example, DigitalOcean — like few other "all-in-one" models — has shown that simple, consistent cloud with a unified language is possible. But it only works within their perimeter, with their infrastructure, with their rules. 

It's a model very similar to iOS: perfectly integrated, consistent, but tied to its own hardware. It only works there. 

A conversational CloudOS, especially if you want to truly manage multi-cloud, should instead resemble Android more: a layer that makes different environments coherent, even heterogeneous ones, regardless of who manages them or where they run. 

This isn't the full answer to all four questions, but it's the additional signal that pointed us in the right direction. 

The Conversational CloudOS was born here: as an answer to these four questions, even before becoming a product. 

---

The Satori moment: separating intent from means 

The realization was this: 

The problem isn't just automation. The problem is the language we use to activate it. 

As long as the language is: 

  • "configure", 

  • "provision", 

  • "set up", 

  • "scale", 

  • "connect this network to that instance", 

Marco is forced to think like the system. 

What he actually wants to say is: 

  • "Release the new application version." 

  • "Prepare a staging identical to production for the mobile team." 

  • "Add a Redis with the same settings as the existing one." 

  • "Performance has dropped: collect and compare logs from the last release." 

For years, we've manually translated these intents into pipelines, scripts, manifests, playbooks. The question became: 

Can we have a cloud operating system do this translation, instead of people or a collection of tools? 

This is where the idea of CloudOS comes from, and the reason for "Conversational". 

---

Intent-driven in practice: from saying to doing 

The "conversational" part of CloudsomeOS isn't a gimmick. It's how you enter the operating system. 

The chain, simplified, is this: 

  1. Intent in natural language. Marco formulates what he wants, not the technical recipe: 

  • "Release the new app version on OpenStack with autoscaling." 

  • "Prepare staging identical to production for the mobile team." 

  • "Add a Redis to the deployment copying the redis-prod-01 configuration." 

  • "Analyze the performance drop by comparing logs from the last release." 

  2. Context understanding. CloudsomeOS semantically maps the infrastructural model and connects: 

  • application model templates (blueprints), 

  • available infrastructures, 

  • policies, 

  • dependencies, 

  • environments. 

  3. Deterministic translation. Here the AI doesn't "invent"; it: 

  • compiles the executable recipe (manifest), 

  • generates resource creation plans, 

  • applies network rules, 

  • defines scaling and security, 

  • aligns everything with existing policies. 

  4. Orchestration on substrates. The intent is the same, but it's executed on: 

  • OpenStack, 

  • hyperscalers, managed and unmanaged, and in the near future:

  • IaaS components of environments like VMware or Proxmox, 

  • edge contexts. 

  5. Feedback and adjustment. The infrastructure state is reported back to Marco in the same language he used to express the intent. 

In this sense, "conversational" doesn't mean "nice chat." It means: natural language input, coherent technical architecture output. 
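To make the chain above concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the names (`compile_intent`, `Plan`) and the keyword matching are invented for illustration and are not CloudsomeOS's actual engine. The point is only the shape of the idea: the translation step is deterministic, so the same intent plus the same context always compiles to the same plan.

```python
# Illustrative sketch of an intent -> plan pipeline.
# All names (compile_intent, Plan, the context keys) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Plan:
    """A deterministic, reviewable recipe compiled from an intent."""
    steps: list = field(default_factory=list)

    def add(self, action: str, **params):
        self.steps.append({"action": action, **params})

def compile_intent(intent: str, context: dict) -> Plan:
    """Deterministic translation: same intent + same context => same plan."""
    plan = Plan()
    text = intent.lower()
    if "deploy" in text:
        plan.add("create_deployment",
                 app=context["app"],
                 target=context["target"],
                 replicas=context.get("replicas", 3))
    if "autoscaling" in text.replace("-", ""):
        plan.add("enable_autoscaling", metric="cpu", threshold=70)
    if "redis" in text:
        plan.add("connect_service", service=context.get("redis", "redis-prod-01"))
    return plan

# Example: Marco's intent, mapped onto known context (blueprints, policies, envs)
context = {"app": "python-app", "target": "openstack:prod-cluster-1"}
plan = compile_intent("Deploy the new Python app with auto-scaling, "
                      "connect it to the existing Redis", context)
for step in plan.steps:
    print(step)
```

A real engine would of course use semantic understanding rather than keyword matching; the sketch only shows why determinism matters, since a plan you can replay and review is what separates "translation" from "invention."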

---

Multi-cloud as a consequence, not a feature 

When meaning is separated from means, multi-cloud stops being a list of logos. 

If an intent is: 

"I want this Python app to handle Friday evening traffic peaks, using the most efficient resources in terms of cost and latency." 

CloudsomeOS can decide to: 

  • deploy it on private OpenStack, 

  • move part of it to a hyperscaler, 

  • use different zones or regions, 

without Marco having to change how he formulates the request.

New providers? New connectors. Same language. 
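The "new connectors, same language" idea can be pictured as a thin provider interface. This is a hypothetical sketch (the `Connector` classes are invented for illustration, not Cloudsome code): the caller's request never changes shape; only the connector behind it does.

```python
# Hypothetical sketch: one intent language over interchangeable connectors.
from abc import ABC, abstractmethod

class Connector(ABC):
    """Common semantics every substrate must implement."""
    @abstractmethod
    def deploy(self, app: str, replicas: int) -> str: ...

class OpenStackConnector(Connector):
    def deploy(self, app, replicas):
        return f"openstack: {app} x{replicas}"

class HyperscalerConnector(Connector):
    def deploy(self, app, replicas):
        return f"hyperscaler: {app} x{replicas}"

def execute(target: Connector, app: str, replicas: int) -> str:
    # The request is identical; only the substrate behind it changes.
    return target.deploy(app, replicas)

print(execute(OpenStackConnector(), "python-app", 3))
print(execute(HyperscalerConnector(), "python-app", 3))
```

Adding a provider means adding one class that satisfies the common interface, which is the Android-style layer described below: the language stays fixed while the hardware underneath varies.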

Remember the iOS vs. Android metaphor? This is where the comparison with Android becomes natural: 

  • it's not the cloud dictating the language, 

  • it's the CloudOS giving a common language over different clouds. 

---

A (very concrete) example 

Back to Marco. 

BEFORE (CMP thinking): Marco would write: 

"Configure a Python deployment with 3 replicas, network policy for Redis, HPA based on CPU > 70%, ServiceMonitor for Prometheus..." 

AFTER (CloudOS thinking): "Deploy the new Python app with auto-scaling, connect it to the existing Redis" 

CloudsomeOS: "Done. Deployment active on OpenStack prod-cluster-1, connected to redis-prod-01, monitoring active." 

All the parts: 

  • "how many pods", 

  • "which network", 

  • "which storage class", 

  • "which orchestrator", 

  • "which exact provider", 

haven't disappeared from the world. They're simply no longer the mental model required of Marco.
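One way to picture where those parts go is a defaults-expansion step: the intent carries only what Marco cares about, and the system fills in the rest from policy. The function and field names below are invented for this sketch, not real CloudsomeOS fields.

```python
# Hypothetical illustration: the explicit knobs still exist, but they are
# derived defaults rather than Marco's input. All names are invented.

def expand_intent(intent_params: dict) -> dict:
    """Turn a minimal intent into the full spec a CMP would have asked for."""
    defaults = {
        "replicas": 3,
        "autoscaling": {"metric": "cpu", "threshold_percent": 70},
        "network_policy": f"allow-{intent_params['connect_to']}",
        "monitoring": "prometheus",
    }
    # Anything Marco does state explicitly overrides the derived defaults.
    return {**defaults, **intent_params}

# AFTER-style request: only the intent-level fields
spec = expand_intent({"app": "python-app", "connect_to": "redis-prod-01"})
print(spec["replicas"], spec["network_policy"])
```

The full spec is still there for audit and governance; it just stops being the interface.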

---

Why we call it CloudOS 

At this point, the answer to Marco's initial question is easier: 

"Why a CloudOS, and not a CMP with a bit of AI?" 

Because we're not: 

  • adding a chat to a platform, 

  • nor adding AI to an existing control plane, 

  • nor simplifying a single environment. 

We're introducing:

  • an intent language over infrastructure; 

  • a deterministic translation engine intent → action; 

  • a multi-cloud abstraction layer that doesn't depend on a single provider; 

  • a unified way to describe results, not means. 

In other words: 

CMPs help govern the cloud. A CloudOS changes how we think about the cloud. 

The conversational part isn't a whim: it's the natural gateway to this new level. 

---

What stays (and what disappears) for Marco 

What doesn't disappear: 

  • policies, 

  • architectures, 

  • constraints, 

  • costs, 

  • responsibility. 

What disappears — progressively — is the need to:

  • think like an orchestrator; 

  • speak like a configuration file; 

  • mentally reconstruct the tool map; 

  • "translate" every business request into three different tools. 

---

CloudsomeOS doesn't promise "zero complexity." It promises something better: 

Complexity doesn't disappear. It just stops weighing on people. 

Marco can focus solely on building what really matters. 

And you?
When was the last time your infrastructure worked without you thinking about it?
 

Cloudsome is a registered trademark of Delta HF S.r.l.

P.IVA: IT01856120934 - Codice REA: PN350947
Sede operativa Via Carlo Farini, 5 - 20154 Milano - Sede legale Via Del Fante, 18 - 33170 Pordenone (PN)

© 2025 All Rights Reserved
