5/19/2025 at 11:49:27 AM
I think people like the author are optimistic about us, humanity, being able to build AI, or being very close to that. I am not.
From the energy efficiency perspective, the human brain is a very, very effective computational machine. Computers are not. Think about the scale of infrastructure a network of computers would need to achieve similar capabilities, and its energy consumption: it would be enormous. Big infrastructure comes with a high maintenance burden; it is costly and requires a lot of people just to prevent it from breaking down. With a lot of people concentrated in one place, there are socioeconomic costs: production and transportation need to be built around such a center. And a centralized system is vulnerable to attack by adversaries. In short, I do not think we are even close to what the author is afraid of. We are only beginning to understand what it would take to actually start building AI - if that is possible at all.
by npodbielski
5/19/2025 at 6:08:15 PM
I hope you are right - the point about energy efficiency is certainly spot on, and I do think it is possible that people are getting carried away by analogies when discussing the topic (I wrote something about that too, but won't link to it here to avoid excessive self-promotion). That said, the article doesn't assume such a thing will happen soon, just that it may happen at some time in the future. That could be centuries away - I would still argue the end result is something to be concerned about.
by awanderingmind
5/19/2025 at 1:51:02 PM
> From the energy efficiency perspective human brain is very, very effective computational machine

Can you explain why you think that? Very often, mechanical efficiency outperforms biological. Humans have existed for thousands of years, neurons even longer. Computers and AI are relatively recent; we haven't really begun to explore the optimisation possibilities.
by Chris2048
5/19/2025 at 2:14:02 PM
The human brain runs on about 20 watts of power -- the entire body on 80 watts (at rest). Those numbers are good to at least an order of magnitude. The largest supercomputer consumes 29 megawatts of power for 1.7 exaflops. And the largest supercomputers are nowhere near the flexible generality of a human brain -- they're calculating FFTs for the benchmark.

The amount of parallelism in the human brain is enormous. Not just each neuron, but each synapse has computational capacity. That means ~10^14 computational units, or 100 trillion processing units, on about 20 watts.
That doesn't even touch the bandwidth issues. Getting the sensory input in and out of the brain plus the bandwidth to get all of the processing signals between each neuron is at least another petabit per second. So, on bandwidth capacity alone we are 25+ years away (assuming the last 25 years of growth continues). And in humans that comes with 18 years of training at that massive bandwidth and computational power.
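As a rough sketch, the ratios implied by those figures (20 W brain, 29 MW / 1.7 exaflop supercomputer, ~10^14 synaptic units -- all order-of-magnitude estimates from this comment, not precise measurements) work out like so:

```python
# Back-of-envelope ratios from the order-of-magnitude figures above.
brain_watts = 20.0      # human brain at rest
super_watts = 29e6      # largest supercomputer, ~29 MW
super_flops = 1.7e18    # ~1.7 exaflops
synapses = 1e14         # ~100 trillion synaptic "processing units"

# How many brain power budgets fit in one supercomputer's draw
brains_per_super = super_watts / brain_watts
print(f"power gap: ~{brains_per_super:,.0f}x")         # ~1,450,000x

# Parallel units per watt for the brain vs. FLOPS per watt for the machine
units_per_watt = synapses / brain_watts
flops_per_watt = super_flops / super_watts
print(f"brain: ~{units_per_watt:.0e} units per watt")  # ~5e+12
print(f"machine: ~{flops_per_watt:.1e} FLOPS per watt")
```

A synaptic "unit" and a FLOP are of course not comparable operations, so this only bounds the scale of the gap, not its exact size.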
Also, we have no idea what a general intelligence algorithm looks like. We are just now getting multimodal LLMs.
From the computational/bandwidth perspective we are still 30 years from a computer being able to process the information a single human brain does, and even then while consuming 29+ megawatts of energy. If you had to feed a human 29 megawatts' worth of power, no business would be profitable. Humans wouldn't even survive.
Sorry, but the notion that we are close to AGI because we have good word predictors is fantasy. But, there will be some amazing natural language human-computer interface improvements over the next 10 years!
by daveguy
5/21/2025 at 12:13:23 PM
Thank you for laying out, with specific details about comparative capabilities versus energy use, the main reasons why human brains are so much more energy efficient at everything they can do than any current computer, LLM, or algorithm.
by southernplaces7
5/19/2025 at 2:57:14 PM
The next logical step, perhaps ethically questionable, seems to be growing human brains for computational purposes (parallel or quantum) with high bandwidth and very efficient power consumption.
by lioeters
5/19/2025 at 4:18:25 PM
"perhaps ethically questionable"?I find it hard to think of a more ethically questionable programme!
by sgt101
5/21/2025 at 9:52:00 PM
I don't know where I got such a dark sense of humor. I find it deeply troubling that scientists are growing "organoids", little human brains, for computational purposes. Headline from 2023:

> Computer chip with built-in human brain tissue gets military funding
The project called DishBrain was spun into the startup, Cortical Labs.
> World's first 'body in a box' biological computer uses human brain cells with silicon-based computing
> Cortical Labs said the CL1 will be available from June, priced at around $35,000.
> The use of human neurons in computing raises questions about the future of AI development. Biological computers like the CL1 could provide advantages over conventional AI models, particularly in terms of learning efficiency and energy consumption.
> Ethical concerns also arise from the use of human-derived brain cells in technology. While the neurons used in the CL1 are lab-grown and lack consciousness, further advancements in the field may require guidelines to address moral and regulatory issues.
by lioeters
5/19/2025 at 3:27:18 PM
I hope not. W40K is grim and dystopian for a reason, and it has cogitors, which are basically human computers.
by npodbielski
5/19/2025 at 3:10:32 PM
Unfortunately we are much farther from growing a human brain than we are from scaling up an LLM into a 29-megawatt-consuming behemoth.

With growing a brain, we barely know where to begin. Not in terms of growing a few neurons in a petri dish -- nourishing the complex interconnecting structure of neurons that is a human brain is nowhere even on the horizon, much less growing that structure from cells. At least with the LLM/AI techniques we have control over the entire processing pipeline.
And I agree, that is an ethical minefield.
by daveguy
5/21/2025 at 9:54:01 PM
These Living Computers Are Made from Human Neurons - https://www.scientificamerican.com/article/these-living-comp...
by lioeters
5/22/2025 at 4:01:43 PM
You are confusing organoids with "growing a brain". Organoids are a handful of cells of a given type, derived from pluripotent stem cells and growing together. A neural organoid is nothing at all like a brain -- not even a brain slice. It is a loose collection of cells that have just enough context to somewhat behave natively, or at least not croak immediately (which is what most individual stem cells do when they differentiate in a petri dish).

It's like calling a 1-bit half-adder circuit a computer.
Organoids are very interesting scientifically because we will need to start with organoids to grow any sort of biological system. And they do behave closer to native than individual cells so they can be used to research things like cell metabolism and drug response. But they are not anywhere close to an organ. And unfortunately they aren't even close enough to replace animal testing, yet.
by daveguy
5/19/2025 at 5:46:17 PM
> we have no idea what a general intelligence algorithm looks like

What's the goalpost here, though? Modern "AI" stuff we previously thought not possible; proper full human-brain simulation; or a general form of higher AI that could come from either place?
> The amount of parallelism in the human brain is enormous.
That only demonstrates the possibilities yet to be explored. Biology has a millions-of-years head start; what's possible today could have been balked at a few centuries ago by the same argument as yours. You say "We are just now getting multimodal LLMs" like it's somehow late.
At a fundamental level, what holds back biology is all the other things it does (i.e. staying alive) and the limits imposed (e.g. heat) that a purpose-made device can optimise away. Any physical, thermodynamic, or communication-theoretic argument over what's possible would hold back both biological and mechanical devices. Only there are fewer material constraints for machines - they can even explicitly exploit quantum mechanics.
> Sorry, but the notion that we are close to AGI
Seems we are arguing different things. I went back through the thread, and believe the proposition is: "us, humanity, being able to build AI or something being very close to that", which I translate as a comment on our literal species. I took your statement "From the energy efficiency perspective human brain is very, very effective computational machine" as being in that scope, and not just a reference to the current era (or Decade!).
by Chris2048
5/19/2025 at 6:21:27 PM
I wish I had a nickel for every time an AI hypester complained about goalposts. I thought it was clear that I was talking about AGI, or a general-purpose intelligence on the level of humans. You know, what the hypesters have been saying is just around the corner if only they can have another trillion dollars. Sorry for the confusion.

> That only demonstrates the possibilities yet to be explored. Biology has a millions-of-years head start; what's possible today could have been balked at a few centuries ago by the same argument as yours.
Yes, we may only have a few centuries left to go before AGI. I was going with a few decades, but now that you mention it, a few centuries is more likely given we are running into Moore's Law limits with transistor technology.
> At a fundamental level, what holds back biology is all the other things it does (i.e. staying alive) and the limits imposed (e.g. heat) that a purpose-made device can optimise away.
You don't honestly believe that AGI will not have to deal with the continuity, reliability, and heat dissipation issues that living things have to deal with, do you? All the more reason the comparison of megawatts vs. a handful of watts is relevant. You just pointed out that it's not just an algorithmic optimization problem, but a much more complex problem whose surface we are barely scratching.
> Seems we are arguing different things. I went back through the thread, and believe the proposition is: "us, humanity, being able to build AI or something being very close to that", which I translate as a comment on our literal species. I took your statement "From the energy efficiency perspective human brain is very, very effective computational machine" as being in that scope, and not just a reference to the current era (or Decade!).
I was replying to a literal statement about increased mechanical efficiency over biological efficiency. Which, in the case of AGI is completely inverted. Biological systems are so much more efficient that the comparison is embarrassing.
Also, I was saying our species is at least 3 decades from in-silico AGI. That's not to say we couldn't have some wild new tech that no one thought of next year. But the chances are so slim you might as well be saying we will genetically engineer flying pigs.
by daveguy
5/19/2025 at 1:55:32 PM
It may be possible to optimise silicon further, but the brain does all of its work with less than a hundred watts, while the silicon closest to its capabilities needs more like tens of kW.
by rcxdude
5/19/2025 at 3:04:39 PM
> while the silicon closest to its capabilities needs more like tens of kW.

I think looking at power consumption at the very edge of what technology is just barely capable of may be misleading, since that's inherently at one extreme of the current cost-capability trade-off curve[0] and stands to drop the most drastically from efficiency improvements.
You can now run models equivalent in capability to the initial version of ChatGPT on sub-20W chips, for instance. Or, looking over a longer timeframe, we can now do far more on a 1-milliwatt chip[1] than on the 150 kW ENIAC[2].
[0]: https://i.imgur.com/GydBGRG.png
[1]: https://spectrum.ieee.org/syntiant-chip-plays-doom
[2]: https://cse.engin.umich.edu/about/history/eniac-display/
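For scale, the ENIAC-to-milliwatt-chip power gap alone (power figures from the comment above; this deliberately ignores the two machines' very different workloads and throughput) spans about eight orders of magnitude:

```python
# Power-draw gap between ENIAC (~150 kW) and a modern ~1 mW inference chip.
# Uses only the power figures quoted above; throughput differences ignored.
eniac_watts = 150e3      # ~150 kW
ml_chip_watts = 1e-3     # ~1 milliwatt

power_gap = eniac_watts / ml_chip_watts
print(f"~{power_gap:.1e}x less power")   # roughly 1.5e8, eight orders of magnitude
```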
by Ukv
5/19/2025 at 7:14:27 PM
Well, sure, but the capabilities at the edge of tech still don't match the human brain. And the fact that you can run older models on less power also doesn't say anything for certain. I'm not saying it won't happen; I'm saying it hasn't happened, and it's not certain it will.
by rcxdude
5/21/2025 at 12:09:24 PM
> we haven't really begun to explore optimisation possibilities.

So you're questioning the above comment's argument based on a hand-wavy claim about completely speculative future possibilities?
As it stands, there's no disputing the human brain's energy efficiency for all the computing it does, in so many ways that AI can't even begin to match. This is to say nothing of the whole unknown territory of whatever it is that gives us consciousness.
by southernplaces7
5/21/2025 at 1:19:37 PM
Is it speculative to suggest that technology will improve? No.

> whatever it is that gives us consciousness

Talk about hand-wavy; "consciousness" might not be a real thing. You might as well ask if AI has a soul.
by Chris2048
5/22/2025 at 5:57:36 AM
It is indeed speculative not just to suggest that technology will improve, but to make specific claims about how and in what way, based on no clear connection to any current development.

Also, there's nothing hand-wavy about pointing out (aside from all the vastly efficient parallelism and generalist computing the brain does with absurdly minimal power needs) that it also seems to be where our consciousness is housed.
You can go ahead and navel-gaze about "how do we know if we're conscious? How do we know an LLM isn't?", but I certainly feel conscious, and so do you, and we both demonstrably have the self-directed agency that indicates this widely and solidly accepted phenomenon, something very distinct from anything any LLM can demonstrably do, regardless of what AI bros like to claim about consciousness not being real.
Arguments like these remind me of the relativist fall-back idiocy of asking "but what is a spoon?" whenever confronted with a hard counterargument to one's completely speculative claims about X or Y.
by southernplaces7