4/22/2025 at 9:38:50 PM
Congrats on the launch. Here are my two cents, as I am very familiar with this space [1][2].
The problem with trying to position your product as "an easier way to deploy on GCP" or "an easier way to do K8s" is that your product is always limited by what the underlying platform directly offers. I know multiple K8s management startups (in the pre-LLM era) that failed because of this.
You are not required to, but you will be seduced into building a 1:1 mapping to the concepts of the underlying systems. So anyone using your product has to learn both the underlying platform (e.g., GCP) and your system. And the problem is that all of those concepts were derived, directly or indirectly, from AWS or K8s, both of which focus on SREs much more than on software engineers.
The second problem is that there are now two interfaces for changing something - one is infra.new, and the other is the underlying platform directly. Your system will have to detect configuration drift whenever someone changes things on the underlying platform.
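To make the drift point concrete: even in a plain Terraform setup, noticing drift means re-running the plan against live state and checking whether the diff is empty. A rough sketch in Python (assumes Terraform is installed and the working directory is already initialized; `./infra` is a made-up path):

    import subprocess

    def has_drift(workdir: str) -> bool:
        # `terraform plan -detailed-exitcode` exits with:
        #   0 = no changes, 1 = error, 2 = pending changes (drift or un-applied config)
        result = subprocess.run(
            ["terraform", "plan", "-detailed-exitcode", "-input=false"],
            cwd=workdir,
            capture_output=True,
            text=True,
        )
        if result.returncode == 1:
            raise RuntimeError(f"terraform plan failed:\n{result.stderr}")
        return result.returncode == 2

    if __name__ == "__main__":
        print("drift detected" if has_drift("./infra") else "no drift")

Any wrapper product has to run something like this continuously just to stay in sync with whatever people change out-of-band.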
The only major way to win is to build your own deployment system, e.g., an alternative to vercel.com, Render.com, or https://railway.com.
- Vercel - a deployment system for frontend engineers (that's my perception)
- Render/Railway - a deployment system for backend software engineers (that's my perception)
This approach is not guaranteed to succeed, but you are no longer limited to the underlying platform's concepts.
[1] https://github.com/ashishb/gabo
[2] https://ashishb.net/programming/how-to-deploy-side-projects-as-web-services-for-free/
by ashishb
4/22/2025 at 11:59:26 PM
Appreciate the detailed feedback, and definitely agree that wrapping these cloud services is a bad idea. Our last product did this and it went exactly how you described. Our goal isn’t really to make deploying “easy” per se; we mainly want to help infra / DevOps teams make better configuration changes faster by blending AI code gen with specialized RAG + static analysis + human review. The cool thing about using LLMs for this use case is that we don’t need to do the 1:1 mapping you described; we can instead teach the agent to use the underlying systems directly.
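To make that concrete, here's a rough sketch (in Python, not our actual implementation) of the kind of gate I mean: a generated change is statically validated, turned into a plan, and nothing is applied until a human signs off. It assumes Terraform is installed and the directory is already initialized; `./infra` and `proposed.tfplan` are placeholder names:

    import subprocess

    def propose_change(workdir: str) -> None:
        # Static checks first: syntax and reference errors are caught
        # before any plan is generated.
        subprocess.run(["terraform", "validate"], cwd=workdir, check=True)

        # Produce a plan file plus a human-readable diff; nothing is applied yet.
        subprocess.run(
            ["terraform", "plan", "-input=false", "-out=proposed.tfplan"],
            cwd=workdir,
            check=True,
        )

        # Human review gate: apply only the reviewed plan, and only on approval.
        if input("Apply this plan? [y/N] ").strip().lower() == "y":
            subprocess.run(
                ["terraform", "apply", "-input=false", "proposed.tfplan"],
                cwd=workdir,
                check=True,
            )

    if __name__ == "__main__":
        propose_change("./infra")

The agent only writes the config; the plan, review, and apply steps stay where they already are in a team's existing Terraform workflow.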
We like to think of ourselves as the anti-PaaS, since we help engineering teams manage their own platform. Most of these teams already use Terraform and can continue to manage their infra however they like; they'll just do it faster and probably catch some issues that would have slipped through the cracks before.
Our launch post did a bad job of conveying this focus on infra teams, so I apologize if that caused any confusion! Maybe "the Cursor for infra teams" would be a better way to describe infra.new.
by TankeJosh