5/21/2025 at 7:45:25 PM
> Gemini 2.5 Pro is incredible at coding, so we’re excited to bring it to Google AI Studio’s native code editor. It’s tightly optimized with our Gen AI SDK so it’s easier to generate apps with a simple text, image, or video prompt. The new Build tab is now your gateway to quickly build and deploy AI-powered web apps. We’ve also launched new showcase examples to experiment with new models and more.
This is exactly what I see coming: between the marketing and the reality of what the tool is actually able to deliver, eventually we will reach the next stage of compiler evolution, directly from AI tools into applications.
We are living through a development jump like when Assembly developers got to witness the adoption of FORTRAN.
Language flamewars are going to be a thing of the past, replaced by model wars.
It might take a few cycles, but it will come nonetheless.
by pjmlp
5/22/2025 at 6:46:23 AM
Remind me in a couple of years when this product is abandoned.
by mirsadm
5/21/2025 at 7:53:43 PM
I agree. Until about 2005 it was code-on-device and run-on-device. The tools and languages were limited in absolute capabilities, but easy to understand and use. For about the past 20 years we've been in a total mess of code-on-device -> (nightmare of deployment complexity) -> run-on-cloud. We are finally entering the code-on-cloud and run-on-cloud stage.
I'm hoping this will allow domain experts to more easily create valuable tools instead of having to go through technicians with arcane knowledge of languages and deployment stacks.
by xnx
5/21/2025 at 8:04:45 PM
Having worked on expert systems, the difficulty in creating them is often the technical limitations of the end users. The sophistication of tooling needed to bridge that gap is immense and often insurmountable. I see AI as the bridge across that gap.
That said, it seems like both domain expertise and the ability to create expert systems will be commoditized at roughly the same time. While domain experts may be happy that they don’t need devs, they’ll find themselves competing against other domain experts who don’t need devs either.
by cjbgkagh
5/22/2025 at 2:57:27 AM
AI as the bridge to fix expert systems. Now I've heard it all!
Obligatory video (sound familiar?): https://www.youtube.com/watch?v=oLHc7rlac2s
by glitchc
5/22/2025 at 4:29:45 AM
I think I’m missing your point. Perhaps we don’t share the same understanding of what I mean by AI and expert systems.
I wouldn’t call expert systems AI, even though the early use of the term referred to symbolic reasoners used in expert systems.
If you are capturing domain knowledge from an expert and creating a system around it, what would you call that? I think modern AI will help deliver on the promise of expert systems, and I don’t think modern AI obviates the utility of such systems. Instead of a decision support system for human users it’s a decision support system for an AI agent. The same AI agent can interface with human users with a more familiar chat interface - hence acting as a bridge.
Most users will not be able to write Multidimensional Expressions or SPARQL queries and with an AI intermediary they won’t need to.
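To make the "AI intermediary" idea concrete, here is a minimal sketch in Python: the user asks a plain-English question, a model drafts the SPARQL, and a standard client runs it against a public endpoint. The call_llm hook and the prompt wording are hypothetical placeholders rather than any particular vendor's API, and SPARQLWrapper/DBpedia are just one common client and one convenient endpoint.

    # Hypothetical sketch: an LLM acts as the intermediary that writes SPARQL
    # so the end user never has to. call_llm() stands in for whatever model
    # API you use; DBpedia is used here only as a convenient public endpoint.
    from SPARQLWrapper import SPARQLWrapper, JSON

    def ask(question: str, call_llm) -> dict:
        prompt = (
            "Translate this question into a single SPARQL query against DBpedia. "
            "Return only the query, no prose.\n\nQuestion: " + question
        )
        query = call_llm(prompt)              # hypothetical LLM call
        endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
        endpoint.setQuery(query)
        endpoint.setReturnFormat(JSON)
        return endpoint.query().convert()     # results the agent can summarize back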
by cjbgkagh
5/22/2025 at 12:42:28 PM
Expert Systems (TM) was trademarked (abandoned now) way way back in 1982. It was a licensed term that meant something very specific to those with a long background in technology. You sound new to the space and thus want to reinvent the term for your own purposes, which is a fine thing really, but an understanding of the history behind it may save you a great deal of grief.
tl;dr: expert systems as a concept were hot! As an actual implementation, they were a colossal failure. The new AI hype train has always contained echoes of expert systems, and I was giddy with excitement to see someone complete the loop. The serpent eats its own tail, after all. Which just goes to show that folks would be considerably more enlightened if they just read a bit more history.
by glitchc
5/22/2025 at 1:00:45 PM
I’m not new to the space; I’ve been working in tech for a very long time and have been into ontologies since the late 90s. And I agree they were in general a colossal failure, or at least didn’t live up to the general hype, but they did find success in various niches and did have a big impact on the world. Tim Berners-Lee was coming out with the semantic web stuff in '99, OWL ontologies in 2004. I did some interesting things with Freebase. I built out huge entity models and created an entity extraction system to help power a search engine for large-scale cataloging which saw real-world use. While not my work, Bing and Wolfram Alpha did use such systems to improve their search/utility. Google bought Freebase and incorporated its functionality. The DoD did have success with it, and for a while Boeing had an interesting system, but AFAIK they lost a lot of expertise and it atrophied somewhat.
It was in general a library science thing, like search engines were. Even today I wish I could disambiguate my search queries to specify which word overloading I’m referring to, be it Java the language, the island, or the coffee.
I’ve spent a decent amount of time trying to introduce these concepts to regular people and have long considered it generally hopeless. I went on to work in ML and I’ve long thought it would be easier to teach a computer to use these systems than regular people. At least AI is at a point now where it can act as a bridge.
by cjbgkagh
5/22/2025 at 4:46:51 PM
I made a lot of money selling expert system tools between 1982 and 1986. Good times.
by mark_l_watson
5/21/2025 at 9:12:35 PM
> We are finally entering the code-on-cloud and run-on-cloud stage.
Sounds like an absolute nightmare for freedom and autonomy.
by suddenlybananas
5/22/2025 at 3:56:20 AM
How so? It's already the status quo at companies with massive monorepos like Google and Meta. Your IDE connects to a remote computer and you write and run code on it. No more fiddling around with arcane dev env setup processes; it's honestly really refreshing and doesn't feel restrictive at all. On the contrary, I can nuke my cloud dev environment and bring it up again in minutes without worrying about losing anything.
by Anon1096
5/22/2025 at 8:50:06 AM
To be fair, before Linux took off, most UNIX shops were at the code-on-development-server and run-on-deployment-server stage.
Cloud is only a rebranding of timesharing, granted with different technology stacks; however, the approach to development is exactly the same as working at a UNIX shop anywhere from 1975 to the 1990s.
by pjmlp
5/23/2025 at 1:38:57 PM
So back to the dark ages before the tech boom, basically.
by nightski
5/21/2025 at 10:53:47 PM
but only because it is
by Keyframe
5/22/2025 at 12:34:15 AM
SWE will be renamed to AIOps :)
by bdangubic
5/22/2025 at 8:51:06 AM
You joke, but it is already here; you only got the name wrong: MLOps.
by pjmlp
5/22/2025 at 1:09:32 PM
SWE will be renamed to whatever the abbreviation is in an overseas language, once AI can be used to replace domestic experts with the cheapest college grads overseas. The problem with doing so was always a lack of solid technical ability and experience relative to the cost cutting being done, but now? They just need to know basic English and block diagrams.
I have never seen an entire profession race to make itself entirely unemployable and celebrate it.
Too many people are hoping they'll be one of the lucky ones still employed and doing little work while talking to an LLM ;)
by delfinom
5/22/2025 at 1:45:42 AM
Finally, companies can wrench back control from those pesky users. Only Google should have root; any other interaction should be routed through their AI! You wouldn't want to own your own device anyways, just rent it!
by hooverd
5/21/2025 at 10:17:38 PM
> This is exactly what I see coming: between the marketing and the reality of what the tool is actually able to deliver, eventually we will reach the next stage of compiler evolution, directly from AI tools into applications.
Is this different from other recent models trained e.g. for tool calling? Sounds like they fine-tuned on their SDK. Maybe someday, but it's still going to be limited in what it can zero-shot without you needing to edit the code.
> Language flamewars are going to be a thing of the past, replaced by model wars.
This does seem funny coming from you. I feel like you'll still find a way :P
by magicalist
5/21/2025 at 11:15:24 PM
I think there will still need to be some kind of translation layer besides natural language. It's just not succinct enough (especially English, ew), particularly where it matters, like a rules engine. The thought of building something like an adjudication or payment system with an LLM sounds terrible.
by candiddevmike
5/21/2025 at 11:44:27 PM
You don't need to use natural language to write your rules engine. LLMs speak every language under the sun, real or made up.
You could define your rules in Prolog if you wanted - that's just as effective a way to communicate them to an LLM as English.
Or briefly describe some made-up DSL and then use that.
For coding LLMs the goal is to have the LLM represent the logic clearly in whatever programming language it's using. You can communicate with it however you want.
I've dropped in screenshots of things related to what I'm building before, that works too.
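As a rough illustration of the "briefly describe some made-up DSL" idea: the rule syntax, the adjudicate function name, and the call_llm helper below are all invented for this example, not any real spec or product API.

    # Invented mini-DSL for adjudication rules (first match wins), plus a prompt
    # asking the model to turn it into ordinary code. call_llm() is a hypothetical
    # stand-in for whichever LLM API you happen to use.
    RULES_DSL = """
    rule deny   when claim.amount > policy.limit
    rule review when claim.provider not in network
    rule pay    otherwise
    """

    def generate_rules_engine(call_llm) -> str:
        prompt = (
            "Here is a tiny rules DSL (rules are checked top to bottom, first match wins):\n"
            + RULES_DSL
            + "\nImplement it as a pure Python function "
              "adjudicate(claim, policy, network) that returns 'deny', 'review', or 'pay'."
        )
        return call_llm(prompt)  # returns generated source code as text, for review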
by simonw
5/22/2025 at 12:48:06 AM
> describe some made-up DSL
Ironically, for something like the parent suggested, i.e. a rules engine, this is the main work.
by geraneum
5/22/2025 at 7:33:43 AM
It will fail where all other tools fail: migrating databases, scaling issues, ...
5/21/2025 at 11:32:11 PM
...as long as your application is only a few thousand lines of code.
Context windows are still tiny by "real world app" standards, and this doesn't seem to be changing significantly.
by stickfigure
5/22/2025 at 12:16:40 AM
I regularly put 50k LoC codebases into Gemini; it has a 1M-token context window and actually uses it well.
by CuriouslyC
5/22/2025 at 12:46:47 AM
I've had the opposite experience. If I give it that much context it starts to hallucinate parts of the application that it very much has access to look up. This only starts happening at large context windows.
by sepositus
5/22/2025 at 1:19:17 AM
Depends on what you're doing. Too much context and code generation gets sloppy, but it does a decent job attending to the large context to answer questions, analyze control flow, investigate bugs, review style consistency and guideline violations, etc.
by jacob019
5/22/2025 at 2:30:25 AM
Let me know when it handles 1.5M lines.
by stickfigure
5/22/2025 at 7:39:46 AM
Let me know when you meet a person who can handle 1.5M lines, because most people I've worked with can't. Certainly I've never worked on something that required even reading that much... Especially when targeted search options exist to find just the specific function/class implementations as needed (a rough sketch of that kind of targeted retrieval follows below).
by vineyardmike
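A minimal sketch of that targeted-retrieval idea: grep the repo for the symbol the task mentions and hand the model only the files that define it. The repo path, symbol name, character budget, and *.py glob are made up for illustration; a real setup would match the project's languages and use a proper token counter.

    # Rough sketch: instead of pasting 1.5M lines, pick only files that define
    # the symbol of interest, up to a rough context budget.
    import pathlib
    import re

    def pack_context(repo: str, symbol: str, max_chars: int = 200_000) -> str:
        picked, total = [], 0
        pattern = re.compile(rf"\b(def|class)\s+{re.escape(symbol)}\b")
        for path in sorted(pathlib.Path(repo).rglob("*.py")):
            text = path.read_text(errors="ignore")
            if pattern.search(text) and total + len(text) <= max_chars:
                picked.append(f"# file: {path}\n{text}")
                total += len(text)
        return "\n\n".join(picked)

    # context = pack_context("path/to/repo", "adjudicate")
    # prompt  = context + "\n\nExplain how adjudicate() handles retries."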
5/22/2025 at 1:49:29 PM
I have a large team that works in a 1.5M line codebase. Not everyone is familiar with every line, but we regularly make changes that would blow through the usable context window of LLMs (which is, in fact, much smaller than advertised).
Is a messy 1.5M lines of tightly coupled code best practice? Of course not. But it evolved over about 20 years and processes tens of billions of dollars of financial transactions. In my experience, it is archetypical of real-world software for a large successful company.
I use LLMs where I can and they're incredibly useful. But their limits are severe compared to a good human software developer and the shortcomings mostly revolve around their tiny context. Human neuroplasticity is still champion.
by stickfigure
5/22/2025 at 2:59:42 PM
Sure, they work in that huge codebase, but for any specific task, the number of lines your team keeps in their short/long term memory is surely much much smaller than that.
by yyhhsj0521
5/22/2025 at 9:13:41 PM
> 1.5M line codebase
I work regularly with AOSP code (~3M lines). While LLMs (Copilot w/ Claude Sonnet 3.7, in my case) cannot gobble all of it up, they have no trouble answering my queries, for the most part, from submodules. If nothing else, using LLMs has dramatically reduced the time it takes to understand a new code file / submodule / a range of commits.
by ignoramous
5/22/2025 at 10:57:25 PM
I'm going to wildly guess that AOSP is part of every LLM's training set, so that feels like cheating somehow :-)
by stickfigure
5/22/2025 at 4:17:17 PM
he will probably get back to you in a year then
by anthonypasq
5/21/2025 at 9:39:56 PM
This is why I think Rabbit is one of the most interesting startups around. If I could wave a wand and pick any startup to go work at, it would be Rabbit.
by neom
5/22/2025 at 12:23:48 AM
Isn’t Rabbit a scam? https://paulogpd.bearblog.dev/en-rabbit-r1-its-a-scam/
by MrDarcy
5/22/2025 at 12:31:41 AM
I use the R1 daily, it doesn't feel like a scam to me.
by neom
5/22/2025 at 1:54:25 AM
[flagged]
by DonHopkins
5/22/2025 at 2:14:29 AM
I have zero affiliation with the company, I don't know anybody there, never talked to anyone there, no kickbacks, nothing. I also think I'm a pretty reputable member of this community... I don't particularly appreciate being called a shill. I don't know much about this Coffeezilla gentleman, he's a tech reviewer? Those videos are a year old. I've been using their R1 device for about 6 months now, and I like it...?
by neom
5/22/2025 at 2:17:50 AM
[flagged]
by DonHopkins
5/21/2025 at 9:47:58 PM
Which Rabbit do you mean? When I search for Rabbit AI I get a few hits, and none of them seem like the most interesting startup around.
by matt3D
5/21/2025 at 9:50:11 PM
https://www.rabbit.tech/
They're developing some super interesting ways of the OS developing itself as you use the device, apps building themselves, stuff like that. Super early days, but I have a really, really good feeling about them (I know, everyone else doesn't, and I'm sure they think I'm nuts for saying this).
by neom
5/21/2025 at 10:12:53 PM
You're not explaining why you have such a good feeling - is their team uniquely good, far ahead? Is there something specific in how they architected it? I think a lot of people are headed in this direction; they have a bad brand, a need to totally restructure their team, probably a bad equity structure now and a need for a down round, so it'll be hard to get good talent.
by nwienert
5/21/2025 at 10:51:41 PM
The rabbit OS project is literally the only correct path forward for AI. Hopefully they go for local, on-device inference, as that removes cloud costs, solving the burning-pile-of-cash problem most AI companies have.
Directly driving a user's device (or at least a device hooked up to a user's account) means an AI can do any task that a user can do, tearing down walled gardens. No more "my car doesn't allow programmatic access so I can't heat it up in the morning without opening the app."
Suddenly, telling an agent "if it is below 50 outside, preheat my car so it is warm when I leave at 8am" becomes a simple problem to solve.
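For what it's worth, the orchestration side of that example really is small; the hard part is the two stubbed calls. In this sketch, get_outdoor_temp_f() and preheat_car() are hypothetical stand-ins for whatever the agent can actually drive (a weather source, the car vendor's own phone app, etc.):

    # Hypothetical sketch of "if it's below 50F outside, preheat my car before 8am".
    # The two callbacks are stand-ins for capabilities the agent gets by driving the
    # user's device directly, per the comment above - no official car API required.
    import datetime
    import time

    def run(get_outdoor_temp_f, preheat_car, threshold_f=50, at="07:40"):
        while True:
            now = datetime.datetime.now().strftime("%H:%M")
            if now == at and get_outdoor_temp_f() < threshold_f:
                preheat_car()      # e.g. the agent taps through the car's app
                time.sleep(61)     # skip past this minute so we don't fire twice
            time.sleep(20)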
by com2kid
5/22/2025 at 4:03:13 AM
I feel like I am experiencing some peak-level trolling right now or am completely out of the loop. Are you guys seriously trying to make the point that the Rabbit R1 device is the best thing to happen to AI?
by NewsaHackO
5/23/2025 at 12:28:42 AM
Not their device - look at their OS work.
Complete AI control over a personal phone. Anything a user can do, the AI can figure out how to do.
That is the end game for everyone right now - a new class of ambient, AI-powered personal computing.
by com2kid
5/22/2025 at 9:08:58 AM
Do you guys really think these obvious marketing comments will work here?
by johanbcn
5/23/2025 at 12:34:04 AM
I'm not related to them at all. I've written about this field independently: https://meanderingthoughts.hashnode.dev/lets-do-some-actual-...
The idea is a fully personal AI that can control one's devices to accomplish complex tasks. Rabbit is working on this through their rabbitOS project, and lots of other players are doing the same thing. OpenAI is trying, as are lots of open source projects. Even HomeKit has initial support for LLM integration.
IMHO controlling a phone directly is the best path forward. Google and Apple are best situated to exploit this, but they may be unable to do so due to structural issues within the companies.
by com2kid
5/22/2025 at 1:47:39 AM
Maybe. But everyone else here is celebrating Google being firmly inserted between them and any cognitive work they might do.
by hooverd
5/22/2025 at 4:33:54 AM
Sounds like the old days of Windows, where you just needed to format and reload every so often to get everything working the way it should. You have to reset your AI sessions to get them back on track, so why would an AI OS be any different?
I feel that the lower level you go, the more you want knowledgeable human experts in the loop. There is so much nuance in OS development that I think it'll be a while before I trust AI to have free rein over my devices.
But at the current speed of AI innovation I won't be that surprised if that day comes faster than I expect.
by matt_heimer
5/21/2025 at 9:49:18 PM
... that little AI assistant gadget thing that bombed? Them?
by aquova
5/21/2025 at 9:52:02 PM
Yes, I think people wrote them off WAY too quickly. I don't really want to get into a back and forth on whether they should have done tech reviews at all, blah blah blah - yeah, I agree it wasn't an ideal way to introduce yourself to the world - but if you listen to their CEO, use their product, and pay attention to the team they've put together... I feel strongly they're onto something big.
by neom
5/21/2025 at 10:53:46 PM
Keep in mind that the company the CEO last founded before working on Rabbit was a crypto scam, though. They’re really not giving people much reason to trust them.
Plus, why a separate device and not a mobile app?
by odo1242
5/22/2025 at 2:53:32 AM
I didn't know about their crypto stuff, but the R1 is still my fav thing to play with. I'm older and I don't want a phone with me all the time; I like to go for walks without the phone, but sometimes I still want something. A camera + a bit of intelligence in the pocket is great, and the R1 is fun.
by neom
5/22/2025 at 9:17:25 AM
The R1 is a rebadged Android phone.
by mrheosuper
5/22/2025 at 10:43:30 AM
"Listen to their CEO""No, not the scam part"
by 63stack
5/21/2025 at 10:26:04 PM
Are you being written off too quickly when you blatantly lie about your product's capabilities?
by j_w
5/21/2025 at 10:18:09 PM
Gemini 2.5 will write a whole Linux kernel from scratch! We are seeing a paradigm shift! This is bigger than the invention of electricity! Awesome times to be alive!
by bgwalter