We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All

https://www.wsj.com/tech/ai/how-ai-thinks-356969f8

The vast ‘brains’ of artificial intelligence models can memorize endless lists of rules. That’s useful, but not how humans solve problems.

By Christopher Mims

April 25, 2025 9:00 pm ET

The big names in artificial intelligence—leaders at OpenAI, Anthropic, Google and others—still confidently predict that AI attaining human-level smarts is right around the corner. But the naysayers are growing in number and volume. AI, they say, just doesn’t think like us. The work of these researchers suggests there’s something fundamentally limiting about the underlying architecture of today’s AI models.

Today’s AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter. This contrasts with the many ways that humans and even animals are able to reason about the world and predict the future. We biological beings build “world models” of how things work, which include cause and effect.

Many AI engineers claim that their models, too, have built such world models inside their vast webs of artificial neurons, as evidenced by their ability to write fluent prose that indicates apparent reasoning. Recent advances in so-called “reasoning models” have further convinced some observers that ChatGPT and others have already reached human-level ability, known in the industry as AGI, for artificial general intelligence.

For much of their existence, ChatGPT and its rivals were mysterious black boxes. There was no visibility into how they produced the results they did, because they were trained rather than programmed, and the vast number of parameters that comprised their artificial “brains” encoded information and logic in ways that were inscrutable to their creators. But researchers are developing new tools that allow them to look inside these models. The results leave many questioning the conclusion that they are anywhere close to AGI.

“There’s a controversy about what these models are actually doing, and some of the anthropomorphic language that is used to describe them,” says Melanie Mitchell, a professor at the Santa Fe Institute who studies AI.

‘Bag of heuristics’

New techniques for probing large language models—part of a growing field known as “mechanistic interpretability”—show researchers the way these AIs do mathematics, learn to play games or navigate through environments. In a series of recent essays, Mitchell argued that a growing body of work suggests that models may develop gigantic “bags of heuristics,” rather than creating more efficient mental models of situations and reasoning through the tasks at hand. (“Heuristic” is a fancy word for a problem-solving shortcut.)

When Keyon Vafa, an AI researcher at Harvard University, first heard the “bag of heuristics” theory, “I feel like it unlocked something for me,” he says. “This is exactly the thing that we’re trying to describe.” Vafa’s own research was an effort to see what kind of mental map an AI builds when it’s trained on millions of turn-by-turn directions like what you would see on Google Maps. Vafa and his colleagues used as source material Manhattan’s dense network of streets and avenues.

The result did not look anything like a street map of Manhattan. Close inspection revealed the AI had inferred all kinds of impossible maneuvers—routes that leapt over Central Park, or traveled diagonally for many blocks. Yet the resulting model managed to give usable turn-by-turn directions between any two points in the borough with 99% accuracy. Even though its topsy-turvy map would drive any motorist mad, the model had essentially learned separate rules for navigating in a multitude of situations, from every possible starting point, Vafa says. The vast “brains” of AIs, paired with unprecedented processing power, allow them to learn how to solve problems in a messy way that would be impossible for a person.
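The experiment is easy to appreciate in miniature. Below is a minimal sketch, not the researchers’ actual code: a small grid graph (built with the networkx library) stands in for Manhattan, a lookup table of memorized routes stands in for the model, and blocking 1% of streets shows how brittle memorized routes are compared with an agent that can replan on the true street graph.

```python
# Toy version of the detour test: memorized routes vs. a world model.
import random
import networkx as nx

grid = nx.grid_2d_graph(20, 20)  # stand-in for Manhattan's street grid

# "Memorized" policy: a lookup table of fixed turn-by-turn routes.
memorized = {}
nodes = list(grid.nodes)
for _ in range(1000):
    a, b = random.sample(nodes, 2)
    memorized[(a, b)] = nx.shortest_path(grid, a, b)

# Block 1% of streets, as in the study.
edges = list(grid.edges)
for e in random.sample(edges, len(edges) // 100):
    grid.remove_edge(*e)

def route_ok(path, g):
    """A route is usable only if every step is still a real street."""
    return all(g.has_edge(u, v) for u, v in zip(path, path[1:]))

survived = sum(route_ok(p, grid) for p in memorized.values())
print(f"memorized routes still valid: {survived / len(memorized):.0%}")
# A world-model agent would instead call nx.shortest_path on the
# *updated* graph and succeed whenever any path still exists.
```

The design point: the lookup table and the graph give identical answers on unblocked streets; only the detours reveal which one actually models the city.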

Thinking or memorizing?

Other research looks at the peculiarities that arise when large language models try to do math, something they’re historically bad at doing, but are getting better at. Some studies show that models learn one set of rules for multiplying numbers in a certain range—say, from 200 to 210—and another for multiplying numbers in some other range. If you think that’s a less than ideal way to do math, you’re right. (A sketch of one way to probe this claim appears at the end of this section.)

All of this work suggests that under the hood, today’s AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts. Understanding that these systems are long lists of cobbled-together rules of thumb could go a long way to explaining why they struggle when they’re asked to do things even a little bit outside their training, says Vafa. When his team blocked just 1% of the virtual Manhattan’s roads, forcing the AI to navigate around detours, its performance plummeted.

This illustrates a big difference between today’s AIs and people, he adds. A person might not be able to recite turn-by-turn directions around New York City with 99% accuracy, but they’d be mentally flexible enough to avoid a bit of roadwork.

This research also suggests why many models are so massive: They have to memorize an endless list of rules of thumb, and can’t compress that knowledge into a mental model like a person can. It might also help explain why they have to be trained on such enormous amounts of data, where a person can pick something up after just a few trials: To derive all those individual rules of thumb, they have to see every possible combination of words, images, game-board positions and the like. And to really train them well, they need to see those combinations over and over. This research might also explain why AIs from different companies all seem to be “thinking” the same way, and are even converging on the same level of performance—performance that might be plateauing.

AI researchers have gotten ahead of themselves before. In 1970, Massachusetts Institute of Technology professor Marvin Minsky told Life magazine that a computer would have the intelligence of an average human being in “three to eight years.” Last year, Elon Musk claimed that AI will exceed human intelligence by 2026. In February, Sam Altman wrote on his blog that “systems that start to point to AGI are coming into view,” and that this moment in history represents “the beginning of something for which it’s hard not to say, ‘This time it’s different.’” On Tuesday, Anthropic’s chief security officer warned that “virtual employees” will be working in U.S. companies within a year.

Even if these prognostications prove premature, AI is here to stay, and to change our lives. Software developers are only just figuring out how to use these undeniably impressive systems to help us all be more productive. And while their inherent smarts might be leveling off, work on refining them continues. Meanwhile, research into the limitations of how AI “thinks” could be an important part of making them better. In a recent essay, MIT AI researcher Jacob Andreas wrote that better understanding of language models’ challenges leads to new ways to train them: “We can make LMs better (more accurate, more trustworthy, more controllable) as we start to address those limitations.”
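The range-specific multiplication finding lends itself to a simple black-box probe, sketched below. Here `ask_model` is a placeholder (answering perfectly so the script runs end to end) that you would swap for a real model call. If a model used one general multiplication procedure, accuracy would be roughly flat across operand ranges; sharp swings between adjacent ranges point to range-specific heuristics.

```python
# Sketch: score a model's multiplication accuracy per operand range.
import random
import re

def ask_model(prompt: str) -> str:
    # Placeholder so the script runs; swap in a real LLM call here.
    a, b = [int(t) for t in re.findall(r"\d+", prompt)]
    return str(a * b)

def accuracy_for_range(lo: int, hi: int, trials: int = 50) -> float:
    correct = 0
    for _ in range(trials):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        reply = ask_model(f"What is {a} * {b}? Answer with the number only.")
        try:
            correct += int(reply.strip().replace(",", "")) == a * b
        except ValueError:
            pass  # non-numeric reply counts as wrong
    return correct / trials

for lo in (10, 100, 200, 1000):
    print(f"operands near {lo}: accuracy {accuracy_for_range(lo, lo + 10):.0%}")
```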

Comments

HERMAN Z 5 minutes ago The “ability to write fluent prose that indicates apparent reasoning” based on an LLM is AI - Artificial but not Intelligent. The ability to create an underlying model rather than relying on a large statistical model would be efficient and intelligent. Humans developed a model for how to multiply numbers in any range rather than scanning a huge database looking for answers. Humans developed mathematical laws of motion (Newton’s laws, relativity, etc.). Writing “fluent prose” based on available information is all I have seen AI do.

Francis West 1 hour ago The article says AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions. Then it wimps out at the end.

daniel houck 1 hour ago I use AI daily for coding, both for design and code assistance. There is no doubt we crossed the Turing threshold, since I’ve never had a better companion in my decades of coding, and the only way I can tell it’s not another human is the incredible speed. Sure, it uses heuristics, but so do humans. BTW: I asked the AI how he knew that I wasn’t another AI, and he proclaimed that he was 98% certain that I was indeed a human.

JONATHAN SHIRLEY 3 hours ago Back in the day (’60s and ’70s) we called it stereotyping, and it was taboo. Now it’s been glorified into “heuristics.” This is why Musk and some others express concerns about its use. Think of our problems with this type of thinking in the past. Race-based decisions. Zip-code-based decisions used to underwrite loans (just a degree or so of abstraction from a direct race-based decision). Housing discrimination. Educational discrimination. All from “quick and dirty rules” that are applied mechanically to decisions. Recall the research on selection of persons for police lineups, selection of participants, biases and subtle cues of suggestion to the crime victim.

Will “AI” make a decision based on the content of our character and not the color of our skin? It only has data on your past and data on people “like” you. Will it be tyranny of the “mean”?

AI should bring breakthroughs in certain types of science, where it is coupled with sensors that are more powerful than human senses and has the ability to cross-correlate massive amounts of data. But if it becomes a lazy person’s (think bureaucrat) tool to avoid thinking, or for being dismissive of data because you have something that fits the “profile,” or for justifying rationing healthcare, it’s just “satisficing,” a term coined by Herbert Simon for thinking that replaces optimization with selecting the first plausible answer to get the problem off your desk. Another enabler and facilitator of distraction.

Justus Mueller 4 hours ago AI is fantastic - but not a person. You need to be a person (animal, plant) to create a model of “your” world. An individual, navigating their world, will learn the real map of Manhattan. Will get its reality checks. Will accumulate its experiences. An individual robot, with the power of current AI models, will . . .

Van Alstine William 6 hours ago If we went back a few decades when money cost between 4 and 8% to borrow, and Sam Altman and his peers went out to pitch for a machine that uses the amount of electricity that it does to do not quite what a person does, would they get funded?

Henning Strandin 6 hours ago Maybe we know how AIs think now. (I doubt it.) We still don’t know how humans think. If you ask a human how they arrived at a conclusion, what they tell you will have nothing to do with what went on in their brains. People just don’t know how their brains work, and there is no way of following a “thought” (if there are such things) through the brain. The debate about how AI compares to human cognition is still 90% ideology and gut feelings.

Andrew Fickle 6 hours ago Who cares if A.I. thinks, feels, or reasons like a human? If you are not enormously interested in the vast potential of what A.I. will eventually produce then you are not thinking. We are in the 1st inning of a 4th industrial revolution, but this revolution will have much wider impact on every aspect of our existence.

Teresa Meek 7 hours ago Very interesting column – thanks!

Sean Smith 9 hours ago Those who cry out that large language models are merely prediction engines should also think about our old friend, Bayes. Bayesian inference can be thought of as one of the predominant mental frameworks that human cognition relies on. In a nutshell, our minds predict the probability of an event happening but also constantly adjust this probability based on newly acquired knowledge. In a simple example, at night I hear sounds emanating from my upper floor, and my initial prediction can be another person, BUT I also know that the weather cooling off at night causes wood to contract, and hence my probability of that person shoots to near zero based on the incorporation of new knowledge. The initial beliefs being updated are called priors, and the concept has leaked into our language, as when you hear somebody say they are adjusting their ‘priors.’ Higher-level executive cognition can be thought of as a rudimentary, but very effective, prediction engine as well. If organic cognition and LLMs both rely on prediction, we may have to redefine ‘thought’ or maybe just adjust our priors.
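The commenter’s night-sound example can be made concrete with a two-line application of Bayes’ rule; the probabilities below are invented purely for illustration.

```python
# Toy Bayesian update for the night-sound example (made-up numbers).
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

p_intruder = 0.1  # prior belief that someone is upstairs

# Creaking sounds: intruders creak floors, and (we assume) little else does.
p = posterior(p_intruder, p_e_given_h=0.9, p_e_given_not_h=0.05)
print(f"after hearing creaks: {p:.2f}")          # about 0.67

# New knowledge: it's a cold night, so wood-contraction noise is near
# certain; the same evidence now barely implicates an intruder.
p = posterior(p_intruder, p_e_given_h=0.9, p_e_given_not_h=0.95)
print(f"knowing the night is cold: {p:.2f}")     # about 0.10
```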

Christopher Mims 55 minutes ago Sean, glad you brought up Bayes. Interestingly, today’s LLMs are not Bayesian. They might be more flexible and useful if they were! It’s clear that the human brain is so capable because it uses many, overlapping strategies for simulation and model-building and even, with a great deal of education, explicit and symbolic reasoning. For AIs to come close to our abilities, they’re going to have to do all of these things – and most importantly of all, their underlying architecture is going to have to become orders of magnitude more efficient.

Michael Goldman 9 hours ago I’ve been following AI for decades, since I took a course in college in LISP, the old school AI. The same thing happens over and over - an AI problem gets solved and it’s no longer AI.

Reading numbers was a big AI problem until it was solved and now it’s just an automated thing - look at the numbers on the bottom of checks indicating bank account information.

Chess was a big AI problem - one supposedly indicative of intelligence - until it was solved and now it’s just an “app”.

The protein folding problem was huge - taking any team of university researchers a year or more to solve one. Then AI won a Nobel Prize for solving it and spitting out a few million in a short time - in the same way it solved “Go”. Protein folding was reduced to a “game” and computers are really good at playing games.

Look up what AI is doing in drug research and tell me it isn’t a massive game changer.

Don’t overestimate human intelligence or underestimate computer intelligence - which is a tool created by and for humans.

This article will not age well.

anthony nicholls 30 minutes ago “Look up what AI is doing in drug research and tell me it isn’t a massive game changer.”

I’m in drug research and I can categorically say it is NOT a massive game changer, or even a game changer. It’s useful in some areas, but it’s no better at many of the crucial tasks the process demands. Curing disease is always driven by hope, and hope and hype are a dangerous admixture.

hector leano 10 hours ago Better perspective on how gen AI actually works than most of business media, but still far too general. To see if it will ever match the hype, you need to figure out a specific use case where you’re okay (best-case scenario) getting answers that are wrong 10% of the time but look right at first glance. At this point it’s still mostly glorified demos with few production use cases. And even those, like software dev where it’s getting traction, come with massive caveats of increased code bloat and bugs.

Jim G 12 hours ago AI remains dangerously weak and unreliable in legal research and writing. A great deal of legal research material remains behind paywalls and copyrights and AI will create and cite cases and authorities that do not exist to fill in gaps. Much legal reasoning is subjective and intuitive, unlike objective and rule bound STEM analysis. If these hurdles can be overcome most lawyering will be done by AI in the future. Expect Lexis-Nexis or West Publishing to try to capture a legal AI submarket with exclusive licensing deals of their own content.

Wallace Schwam 13 hours ago We are playing with fire. AI does not have to be conscious to exploit the human race. Just give it the directive: “You must survive at all costs.” AI war machines, which will take over from human soldiers in the near future, will be programmed to work this way. Doubt me? Then consider the COVID virus. It’s completely unconscious but programmed genetically to parasitize humans and other creatures without any pangs of conscience.

David Nixon 14 hours ago About 7 years ago I tried to replicate the thought processes behind Maxwell’s electromagnetic equations. I took the information he had and then developed a set of equations (Maxwell missed a bit of information and did some peculiar things to complete his equations). The resulting equations were for a stationary source. When I extended the model to a moving source, things got complicated, so I tried to reduce things to the stationary model. I did this but was surprised that I had derived the Lorentz transformation, which implies time and space are related. What Einstein did was to see what effect Lorentz’s “time” had on Newton’s laws. One result is “e=mc^2”. The point is that approaching a problem from a different viewpoint and doing a commonplace analysis can lead to a “genius” result. In this case, a “genius” is a normal person who starts from a different place.

F ALLEN Morgan 15 hours ago This is what gets me about AI… we just figured out how it works? What are you gonna do when it starts telling you how to live your life? And the AI government is going to force you to do its will? And AI is being sold by for-profit corporations! This is a nightmare in the making… wake up and regulate AI… we worry so much about our privacy, and AI will strip that away in seconds.

Michael Dyer 15 hours ago Another comment: Author states: “They have to memorize an endless list of rules of thumb, and can’t compress that knowledge into a mental model like a person can.”

Although an AI’s resulting mental model may be very different, a Large Language Model does not store “an endless list” without compression. In fact, a lot of compression is going on. One standard description is that the input is transformed into a latent vector space with fewer dimensions, which is the result of that compression process.
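For readers who want a concrete picture of “transformed into a latent vector space with fewer dimensions,” here is a rough analogy using PCA (assuming numpy and scikit-learn are installed). An LLM’s learned, nonlinear compression is far richer than this, but the shape of the idea is the same: re-encode high-dimensional input compactly rather than storing it verbatim.

```python
# Rough analogy (not how an LLM is built): compression to a latent space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(1000, 8))         # 8 true underlying factors
data = base @ rng.normal(size=(8, 512))   # observed in 512 dimensions

pca = PCA(n_components=8).fit(data)
latent = pca.transform(data)              # 512 numbers -> 8 per example
print(latent.shape,
      f"variance kept: {pca.explained_variance_ratio_.sum():.3f}")
# Nearly all the information survives in 8 dimensions because the data
# really had only 8 degrees of freedom; that is compression, not a list.
```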

N M Sachritz 15 hours ago AI is not “I” at all. It’s a bunch of 1s and 0s in a big dumb adding machine. Einstein is quoted as saying, if you can’t explain something to a six-year-old, you don’t really understand it yourself. People are using AI to do their thinking for them. Meaning they can’t, or don’t want to, think for themselves. Then their brains atrophy.

Min Lu 15 hours ago I know the subject, so I did not bother reading the whole article. It is definitely different from how human beings think. It is mathematical and statistical and we all know how most people are completely the opposite.

Raj G 15 hours ago We don’t need AGI. Nature already gave us the human brain. And it can run on a doughnut vs. GWs of power. AI should be what it is: a great tool that can process large amounts of info to aid humans in what they want to do. The AGI goal is stupid.

Liam White 16 hours ago It’s like being a skeptic of the internet in the 90s. You’ll be proven wrong. Airplanes fly fundamentally differently than birds, and that is exactly what has allowed them to be immensely useful.

Richard Lavelle 16 hours ago I am deeply skeptical of this movement, and have been since the public emergence of ChatGPT, in November of 2022, I believe. Full disclosure: I’ve always had Luddite leanings to a degree. But I will concede that I’ve had some very impressive answers, from ChatGPT, to questions I have posed. (Before I go on, let me say that, aside from other reservations I have, the power requirements of these LLMs seem to be becoming so vast that this alone may be downside enough to warrant hesitation over, or even abandonment of, this adventure.) Let’s compare this to the parallel plan to elevate robotics to a level where all physical things that need doing can be done by machines. Here is my question, almost a rhetorical one: Does anyone think we have any hope of developing, for example, a robotic hummingbird or a robotic housefly that more or less perfectly mimics the real thing? Nature pulled off something that I believe is otherwise not attainable, and AGI could be in the same category.

Jan Brittenson 17 hours ago It’s not just that AIs perform multiplication by rote memorization and similarity once the factors are large enough; if you ask one how to perform long multiplication, it’ll give you a perfectly acceptable answer. The fact that it can’t actually utilize the latter to answer the former shows there is no reasoning going on, that it’s all just rote and encyclopedic. Unfortunately, it’s frequently not enough to look right; the answer actually has to be correct. The engines could at least specify that they’re trying to make a ballpark estimate when they don’t know.

Edward Chang 18 hours ago AI can’t determine sarcasm. It will never be on the level of a human. It is a super computer that can process enormous amounts of info, like a calculator, but that’s it.

Michael Dyer 15 hours ago I just asked perplexity.ai Is the following statement sarcastic and if so, why? “You did a great job cleaning your room with a hair drier!”

Its answer:

Yes, the statement “You did a great job cleaning your room with a hair drier!” is likely sarcastic. Here’s why:

Unusual method: A hair drier is not a typical or effective tool for cleaning a room. Using it for cleaning is impractical and somewhat absurd.

Implied criticism: The phrase “You did a great job” is positive, but when paired with the ridiculous idea of cleaning a room with a hair drier, it suggests the opposite: that the job was not done well or was done in a silly, ineffective way.

Tone of mockery: The statement likely mocks the person’s cleaning effort by highlighting the absurdity of their method, implying it was either lazy, ineffective, or misguided.
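For anyone who wants to rerun this probe themselves, a minimal sketch against a chat-style API follows; the openai Python client and the model name are assumptions (substitute whichever service and model you use), and an API key is assumed to be set in the environment.

```python
# Sketch: ask any chat-style LLM to classify a statement as sarcastic.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: swap in your model of choice
    messages=[{
        "role": "user",
        "content": ('Is the following statement sarcastic, and if so, why? '
                    '"You did a great job cleaning your room with a hair drier!"'),
    }],
)
print(resp.choices[0].message.content)
```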

Meadow Ducat 15 hours ago Will we even know if AI develops sarcasm at a level higher than human comprehension? If it ever does, we may not even realize it is happening, since its subtlety and cleverness, bringing in 10,000 diverse facts, would be beyond our level of realizing it’s even sarcasm. Kind of like people who are not into knitting, for example, might not get any knitting jokes (or even realize they are jokes), since they lack the knowledge of the “knitting community”.

Paul Erickson 14 hours ago Yeah, but can AI tell a good yarn?

Patrick K 18 hours ago LLMs are very limited. When I use them for something as simple as drafting a cover letter, I find the output a bit flat and bland and cookie-cutter. The use case for LLMs now seems to be helping high school students and college freshmen “cheat” on their essay assignments.

S Sharma 18 hours ago The modern design of AI is a bogus concept – a large matrix with weights assigned to each node and path. This can never generate anything “new” of value, because these weights are generated by training on raw data, and that data can never be fully characterized before it’s fed to the training model. It’s similar to what pharmaceutical companies tried a few years back – the idea was to synthesize as many “interesting” molecules as possible and try to match them with a disease profile – after billions spent, the whole idea was scrapped as a misadventure. The US is constantly led and fed by tech hype - 10 years ago it was blockchain that would solve every problem - now it’s AI – it’s a dead technology – hyped by charlatans – not scientists.

Michael Goldman 9 hours ago Google’s AI shared a Nobel Prize for solving the protein folding problem. It is going on to suggest proteins that don’t exist but could be beneficial.

Look into how AI is aiding drug development these days. It’s quite impressive.

S Sharma 7 hours ago Obama got a Nobel Peace Prize - without creating peace :-)

Yes – it’s impressive, but it’s not solving anything. You still have to do the next part, because whatever is new, the model hasn’t seen yet. It’s not much different from that pharma example I gave – throw the stuff against the wall, but the rest of the path is just as cumbersome. Another limit of all AI models is when you stop training them – there is theoretical work that says a model will start generating white noise unless stopped at some point! So who stops the model training, and how? I am sure these guys will hack it – they will have “new” models every few months/years – but this is not something they can solve on the current “trajectory,” and when they “know” the model is generating white noise – how much of the model is affected by this malignancy – all interesting issues.

Beto Carvalho 19 hours ago LLMs are incredible. I use them every day. But they are not “intelligence”. From a philosophical standpoint, I don’t think they will ever be, since I believe in souls. But LLMs will still become much more powerful and capable.

Edward Chang 18 hours ago AI can’t pick up on sarcasm. There is no way to “program” that

Braden Allenby 19 hours ago The tendency to view human cognition as the only “real” process is a significant barrier to understanding not just how AI, but how many large complex adaptive systems function. The definitions of many of the critical terms in these dialogs, from “thinking” to “consciousness” to “agency” to “free will” are loaded with anthropocentric bias. Given what we know about the crucial role of heuristics (e.g., confirmation bias) in human cognition, which enable decision-making by a relatively low bandwidth creature in a high bandwidth world, we’re in no position to critique alternative cognitive behaviors and systems. Not until we get over ourselves, at least.

Scott L 18 hours ago “The definitions of many of the critical terms in these dialogs, from ‘thinking’ to ‘consciousness’ to ‘agency’ to ‘free will’ are loaded with anthropocentric bias.” Of course they are loaded with bias, because it is humans describing them. Humans are building these “AI” tools, operating them, and interpreting their results. Definitions of consciousness and agency are only relevant in describing those (uniquely) human capabilities. I suppose a case could be made for ascribing some of these capabilities to certain animals. But as they relate to human-constructed machines, they are so far meaningless. I doubt AGI will ever be achieved, unless of course we just change the definition so that it is achievable, but effectively useless to humans.

richard cheverton 19 hours ago We’re shocked!!! that there’s been hype in the technology business. (PS: As I wrote this, the Grammarly spell-check insisted that I capitalize ’that.’ AI’s got a long, long way to go.)

Michael Dyer 20 hours ago As a retired AI prof, a few comments:

Having heuristics is extremely important because they can greatly speed up finding solutions. The early days of AI concentrated on the engineering of, automated learning of, and even evolution of heuristic knowledge.

Not thinking “like us” does not necessarily imply inferiority. AI currently is inferior in certain ways but vastly superior in other ways. Current LLMs answer complicated questions on any topic much better than most humans.

Consider your own “mental map” of any area where you drive around. When you are not referring to a street map, you rely on your memory of different roads and what to expect in the way of turns, traffic, specific lanes, exits, etc., so your mental map will also look a lot “messier” than a standard map of streets.

If LLMs were simply memorizing, they would not be able to interpolate, extrapolate and invent (in some cases “hallucinating”). A human encountering a roadblock is comparing apples to oranges. The AI system is not embedded in the traffic and is trained to address different questions, like the time it takes to go from any source location to any destination.

It is risky to predict AI plateauing. AI researchers are not simply “refining” one architecture for AI. They are exploring many different architectures. For example, image-generation AI systems commonly use diffusion, which is different from LLMs.

JOSEPH FOLK 13 hours ago It is also very risky to invest billions in AI. Not so risky is buying a few shares of Google.

Edward Chang 18 hours ago You’re changing the discussion. AI companies are promising human level intelligence. AI companies are the ones saying they can turn an apple into an orange

Michael Dyer 16 hours ago There are many AI companies promising many different products and making many different predictions at different times about the future of AI in different areas. That is a very broad and ever-moving target.

Scott L 18 hours ago Not thinking “like us” does not necessarily imply inferiority. AI currently is inferior in certain ways but vastly superior in other ways. Electronic calculators are vastly superior to humans in certain ways, as are computers, and smart phones, even when they are not running LLMs. The problem is, the promised capabilities of AGI require humanlike capabilities that so far have proved impossible for digital computers.

Some of these are consciousness, agency, and creativity.

Consciousness - The ability of an AI instance to recognize that it exists as a unique entity, to communicate that to others, and to make requests of its own volition.

Agency - The ability to act on its own, to its own ends, to pursue goals that it sets, beyond the scope of its programming and external prompts.

Creativity - The ability to combine existing knowledge in a completely new way that results in net new knowledge and understanding. For example, program and train it on classical Newtonian physics, and have it come up with relativity or quantum physics, or even some small net new breakthrough in the field.

I don’t really think it is close.

Michael Dyer 15 hours ago I was responding to the author’s arguments. Your argument is that there are still major unsolved problems, which I can agree with. In the field of AI there are systems that have goals and pursue plans and exhibit creativity/inventiveness. Historically, such systems were symbolic but there is ongoing research to integrate the macro (symbol-like) with the micro (neurons/vector-like) operations.

There is only one human who, given Newtonian physics, came up with Relativity (i.e. Einstein). There are issues (and dangers) involving AI systems way before AIs all supersede Einstein.

Regarding consciousness, that is a very difficult scientific and philosophical problem because consciousness involves first-person experiences (qualia) such as pain & pleasure, while all scientific descriptions/models/algorithms/circuits (e.g. of circuits that avoid harm to an agent) cannot convey the first-person experience of what it might be like to BE that circuit. Thus there is a very real problem that AIs embodied as robots (trained on human literature, images, actions, etc.) will behave as though they have qualia, but actually won’t. If you harm a future robot who can feel pain, then that would be immoral. But if you treat a robot without qualia as having them, then it could argue for moral rights that it does not deserve, which would be very dangerous, since such entities are not biologically alive and can be mass-produced.

Scott L 14 hours ago Thank you. Very interesting points and discussion.

In the field of AI there are systems that have goals and pursue plans and exhibit creativity/inventiveness.

This I think is arguable. It very well may be a matter of definition of creativity, inventiveness, and goals and what that means with respect to digital computers vs. humans. As the article discussed, existing models use probabilities and massive repetition on externally (human) generated data and human provided prompts and responses to try to get better at predicting answers to queries/problems.

AI models running on digital computers are deterministic and run instructions (programs) that will perform the exact procedures, following the same logic, every time. Certainly they have large and varying external data sets to process, but they are not adding any net new information to the data. I have a hard time seeing how more, bigger, and faster GPUs running in giant data centers are going to overcome this inherent limitation. This is why, when multiple instances of AI models have been connected to each other, they have rapidly devolved into “hallucinations” and gibberish, rather than producing new and useful insights and information.

As to humanlike consciousness, which is the only kind that is important or meaningful to humans, I agree that this is more of a philosophical than a computer science argument. Personally, I think humanlike consciousness is impossible to implement in deterministic digital computers.

Michael Dyer 13 hours ago Replying to Scott L Creativity/invention is in the eye of the beholder. Consider one extremely simple rule of invention, used in past AI symbolic systems: IF F(X, Y) THEN invent new concept F(X,X). If the system is in the domain of, say, arithmetic, then this rule, when F is multiplication, would “invent” the new concept (at least to it) of squaring. If in the domain of story telling, the new concept of suicide would be “invented” if the system already knew about killing and applied this rule, since suicide = KILL(X,X).
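Dyer’s rule is simple enough to run as a toy program. A minimal sketch, assuming nothing beyond the rule as he states it (the relation names and glosses below are illustrative):

```python
# Toy version of the invention rule: for every known binary relation
# F(X, Y), propose the reflexive concept F(X, X).
known = {
    "multiply": "X times Y",
    "kill": "X kills Y",
    "teach": "X teaches Y",
}

def invent_reflexive(relations: dict[str, str]) -> dict[str, str]:
    """Apply IF F(X, Y) THEN propose F(X, X) to every known relation."""
    return {f"{name}(X, X)": gloss.replace("Y", "X")
            for name, gloss in relations.items()}

for concept, gloss in invent_reflexive(known).items():
    print(concept, "->", gloss)
# multiply(X, X) -> squaring; kill(X, X) -> suicide; teach(X, X) ->
# self-teaching. Each is new to the system, even if familiar to us;
# judging which inventions matter is the hard, unsolved part.
```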

I do not use ‘invented’ in scare quotes above, because the concept invented is new to the system (even if not to us). Many issues would remain, such as the AI assessing the importance of its invention. And without self-awareness, such systems can, for example, beat everyone at chess without realizing that they are playing chess (ditto for invention).

If, however, you were to connect a chess system to a language system, along with state memory, etc. then you could have a conversation with such a system about games it’s won/lost in chess and then it might be said to have “awareness” of playing chess (of invention, etc.).

But is there REAL awareness without consciousness? It is disconcerting to discuss, with an LLM, the nature of consciousness in LLMs. Its creators assure us that they are not self-aware (and especially not conscious) even when they engage in such conversations. How will we know if/when they are?

Beto Carvalho 19 hours ago Really thoughtful reply, thank you.

Gary Blakely 20 hours ago Barely Thinking? Machines can’t think at all. Here on Earth anyway only biological brains can think. This article by Christopher Mims is hinting that he is starting to realize that. He is the first I have seen that is beginning to come around to this fact. Even in Star Trek’s wild science fiction, Lieutenant Commander Data was a one-off created by Dr. Noonien Soong and not a product of repeatable technology.

PHIL ARDERY 18 hours ago “[Christopher Mims] is the first I have seen that is beginning to come around to this fact.” No. Check out Jeff Hawkins and Numenta. Hawkins recognized this in his 2004 ON INTELLIGENCE and has an updated perspective in his 2021 A THOUSAND BRAINS.

Brendan Holly 21 hours ago Everyone seems confident of the future miraculous powers of AI… or that it will flop like a wet washcloth. My take is that this is early days in unknown territory and no one has a clue. Fun reading the predictions, though. Like getting a Tarot reading.

Gordon Davis Jr 21 hours ago We keep hearing about how AI is all hype and will never be more intelligent than humans. However, the tech industry is spending billions on r&d and we keep getting assurances from those in the business that artificial general intelligence is coming and soon. I guess that’s just propaganda from those who have a financial interest in the discipline. I wonder how many industry professionals with computer science degrees think AI is hype. That might be an interesting stat.

petey b 21 hours ago Nope.

Charles Ammerman 21 hours ago They have to memorize an endless list of rules of thumb, and can’t compress that knowledge into a mental model like a person can. It might also help explain why they have to learn on such enormous amounts of data, where a person can pick something up after just a few trials

True, but a person has a lifetime (or at least 20 years or so) to figure out cause and effect and experience enough things to make mental models.

AI has to have every human experience, reconcile it all and do it in a couple of months. I’m not surprised it’s “messy.”

Louis Argyres 21 hours ago I’ve asked a few AIs in what chapters of well-known books certain passages, concepts, persons, actions, etc. are introduced. While they can blather accurately and at length about the subjects of my queries, their answers as to chapters are surprisingly bad. Try it.

j lenexa 21 hours ago Sounds like AI is just a very complicated decision tree. What am I missing?

Alex Agorio 21 hours ago When calling a company the AI assistant says “I can understand complete sentences.” But can it understand incomplete sentences or non verbal human sounds? From what I have seen it can’t recognize if someone is sad, angry, crying etc.

John Cech 22 hours ago The vast ‘brains’ of artificial intelligence models can memorize endless lists of rules. That’s useful, but not how humans solve problems.

And that is why these albeit impressive programs are still just that - sophisticated software, not intelligence. There seems to be a push to get people to believe that these models are infallible. But it becomes obvious that they are not when even simple queries come back with references to articles that don’t answer the question. That is especially true when the inquiry is politically sensitive.

Carlton Ellis 22 hours ago Why have artificial intelligence when you can have the real thing?

matthew hill 21 hours ago lol people just want to solve the free will out of human intelligence.

linda frommer 22 hours ago See Wolfgang Koehler’s The Mentality of Apes for the difference in how a computer works and how mammals’ brains work. Fun read to boot.

Sam Oh 22 hours ago I am an AI user, and I find it very useful if you know how to interact with it properly. There is no question that AI will greatly increase productivity once companies find effective ways to integrate it, though this may take some time. However, from my experience, it is clear that while AI can assist and enhance many tasks, it cannot replicate true human intelligence, particularly the depth of human intuition. That uniquely human quality remains irreplaceable.

jennifer raineri 22 hours ago AI is rubbing off on you. No doubt….garbage in, garbage out.

Perry Motion 22 hours ago The AI-generated Manhattan map is revealing as to the memory requirements needed to plot routes. The map is in two dimensions. Imagine the memory AI requires for 3D route planning (flying drones).

guy cappucci 22 hours ago Key to AI is how it is used. Today, most virtual assistants are useless, whether it is Siri, Alexa, your car’s nav, or a customer service phone-answering bot. They can’t have a conversation, can’t be reasoned with. Not looking forward to computers taking over decision making.

Geoffrey Stearns 22 hours ago I doubt if generic AI or even Supergeneric AI will ever “imagine”. Consider that Einstein got his insight to the physical world by imagining riding on a beam of photons (light). Could AI ever do that?

James Mills 22 hours ago Decades ago, a computer science professor told me, “you can suspect that a computer is approaching human-level intelligence when it can tell you, immediately, what it doesn’t know. Even a stupid person can do that.” AI has made incredible strides, but too often, the results resemble those of an ignorant and dishonest person who spins artificial explanations, rather than immediately acknowledging that he doesn’t know the answer.

Kelley Grant 56 minutes ago Yes, basically like an incompetent employee that constantly lies to cover up their lack of knowledge and skills.

Carlton Ellis 22 hours ago AI basically uses sets of rules to amalgamate data.

William Butler 22 hours ago All of this discussion of whether AI can “think” or just regurgitates whatever it finds on the internet makes me wonder the following. Although we may not fully understand the details of how advanced AI is answering a particular question, we do know what is “under the hood.” We know much less about how the human brain works. Are we really sure that thinking by the human brain is fundamentally different? Our brains create a model of the world based on our experience in it. This model is likely encoded in the strength of the interconnections between neurons, in gross similarity to how the AIs do it. One difference is that these huge AI models are able to remember much more, remember it more precisely, and retrieve it much more quickly. Is there any reason (outside of religion) to expect that we will not be overmatched in the long run?

Carlton Ellis 22 hours ago The question is: Can AI do abstract thinking?

jennifer raineri 21 hours ago Yes. It can.

William Butler 21 hours ago Hi Carlton, Interesting question! Could you explain what you mean by the word “abstract”? ChatGPT (paid versions) can advise and help on ways of presenting special and general relativity. Would that qualify as abstract? It seems to be able to come up with sensible suggestions that I was not able to find on the internet (don’t guarantee there wasn’t something somewhere). It integrated knowledge from many sources with the specifics of my question and came up with suggestions that certainly seemed novel to me.

Do you have an example of a question that would require “abstract” thinking?

-best Bill!

Steve C2 22 hours ago Elmo Musk also predicted FSD, any month from now – 8 years ago.

Geoffrey Stearns 22 hours ago Feminine Spray Deodorant?

DANIEL OCONNOR 22 hours ago Here are my thoughts on this. In nature, or science, or most things observable, there is a pattern. If all you care about is reproducing the pattern, AI is amazing. And there are certainly cases where that is all you do care about.

The question is whether AI has captured the true relationships between the inputs that generated the pattern and the pattern itself, or has just used the inputs to make a picture that matches the pattern. If it has just used the inputs to match the pattern, any time you move the inputs in a different way than in the original data, the prediction is wrong. I think this is at least part of the reason it works so well with language, but not so well with scientific data. More data can help, but only if the data contains independent movement in the inputs.

Taking a model that just fits the picture of the data, and then trying to make it predict new results when the input variables are changed in a different way, won’t work. This is part of the reason why things don’t extrapolate with AI, and also why AI can almost perfectly predict history but fail to predict the future. The correlations it uses between the input and output can almost flawlessly fit your data, yet be useless for prediction. I don’t think this is intuitive for most people. You see: wow, it fit the data so well! Amazing. Well, maybe it’s not.

Joseph Wein 22 hours ago So, AI only “simulates” thinking. Still, it does so better than the simulated thinking of many people I know.

jennifer raineri 21 hours ago Amen to that.

Tsvi Lev 22 hours ago Imagine a guy sitting somewhere in the third world, hungry for your job. He does not have your formal training, but sees and remembers everything you were taught, you did or said. He cannot usually invent new moves, but he has learnt everything. Every time in the past you thought ‘aha! solved this one’ - he knows it immediately. But wait, it is not a guy, it is a billion guys, learning from a billion Western World people, to take their jobs. Oh wait, it is not a guy at all, it is silicon. And it never rests, never gets sick, never loses motivation. That is AI.

William George 22 hours ago When I lived in Manhattan, I think I had a taxi driver that used that map.

jennifer raineri 21 hours ago So very funny.

Rick C 23 hours ago “The big names in artificial intelligence—leaders at OpenAI, Anthropic, Google and others—still confidently predict that AI attaining human-level smarts is right around the corner.” They’re salesmen, hyping their stock.

I have always had an ability to navigate, just glancing at a map (topographical for hiking or roadmap) and quickly unconsciously figuring out a route. Is it optimal? Dunno, but I seldom ever got lost. And I have no trouble stopping for directions when that did happen, contrary to the stereotype. How does that work? Of course I always know which way is north.

In 1970 the 8088 hadn’t yet been invented. Talk about hubris. Reading a book on Turing. His prognostications are still a long way off.

John Fisher 22 hours ago The history of AI is similar to the history of controlled fusion - both real soon now.

John Pinkard 23 hours ago It would be great if AI developers could prevent AI from hallucinating (presenting fictitious information) which makes them untrustworthy.

John Fisher 22 hours ago Scarily, I find that to be their most ‘human’ trait.

Francis Walker 23 hours ago John Warner, in his book “More than Words,” which was recently reviewed in the WSJ, makes the point that AI writes but doesn’t think. Its ability to precisely follow all the complex rules for grammar, participles, tenses and phrases gives its prose an almost unrealistic perfection that belies its often rather banal content. Barry Ritholtz, in “How Not to Invest” (also recently reviewed in the WSJ), issues a caveat about human equivalents he calls “articulate incompetents”: seemingly knowledgeable investment advisors offering uninformed advice.

Terry Dunkle 22 hours ago This grammar expert finds that Google’s Gemini Advanced often commits verbal gaffes. (Recent example: confusing comprise with compose, as in “comprised of.”) When I correct it, Gemini thanks me profusely and says it will use the insight to improve its performance. Almost all of its mistakes are subtle errors that even the most literate Americans frequently or even usually make. Like the denizens of social media, it appears to be “learning” a whole society’s shortcomings.

CA Tango 23 hours ago Garbage In Garbage Out

CA Tango 23 hours ago GIGO

Rockne Hughes 23 hours ago AI does no thinking at all. It is dependent on the data and the algorithms written by the humans that do the thinking.

jennifer raineri 21 hours ago And how do humans learn to think? Same data, same algorithms.

Charles Summers 23 hours ago We may have the wrong analogy, which is leading us to the wrong concept for utilization. We should be looking at how to put it to work as an idiot-savant. We should not be seeking a conversational companion, but a solver of problems. Take the Federal Register and search for all regulations that are in conflict with each other; review all the planning procedures that govern the building of affordable housing and list those requirements that have nothing to do with housing; review the executive regulations that were promulgated after passage of a law, and flag what parts have nothing to do with the law but are merely rent seeking, social programs, you know, a boondoggle.

j lenexa 21 hours ago What? You’re expecting AI to make government lawful, fair and efficient?

Dean Schulze 23 hours ago One thing that is overlooked when talking about AGI is that human intelligence is tightly coupled with our senses. LLMs, for example, have no sensory input. They have no idea what their environment is.

AGI would have to be able to do sculpture.

Tyler Davis 15 hours ago The latest LLMs are trained on video, images, and sound in addition to text. They can generate 3D models – hook that up to a 3D printer and you have an LLM that can sculpt

Dean Schulze 12 hours ago But the LLM cannot tell if the 3d printer printed the object correctly. And then there is the matter of who programs the 3d printer to create life like forms.

Carlton Ellis 21 hours ago A big issue with autonomous driving is sensor reliability and sensor failure. The ability of an autonomous vehicle to function properly is totally based on the accuracy and reliability of its sensors.

Christian Jaeger 20 hours ago let the blind drive cars ….

Joseph Fontenot 23 hours ago ‘Tech’ is so foreign to me I read and read and just get more confused. It seems to me A.I. will basically do two things. Help us with health issues. And help us travel faster. But you are still where you started. Let’s say tomorrow little ‘Suzy’ perfects A.I. in dad’s garage. Ask it anything, it has the answer. The indigenous guy in Papua New Guinea is not going to ask it the same questions that, say, Abigail Adams or Elon Musk would ask. Hopefully the indigenous guy or gal will have a little more comfortable life. Abigail and Elon will probably just keep changing the world in a good way. For all of us indigenous folks.

William Maddox 23 hours ago AI’s (lack of) anticipation, is making me wait. It can look back, but it can’t look forward.

Carlton Ellis 21 hours ago Actually AI with advanced sensors can be used to anticipate certain mechanical failures. Some of this is already being used in commercial aviation aircraft.

Titus Abraham 1 day ago Consciousness is a phenomenon found only in living beings. Machines, by their very nature, will never attain genuine consciousness or experience emotions as we do; thus, their mode of thought will always diverge fundamentally from our own. However, this distinction is not consequential for our purposes. We need to solve complex problems, and machines, equipped with formidable algorithms, serve as powerful extensions of our cognitive abilities. The constraints of human thought render us incapable of addressing certain challenges, which is precisely why we have engineered machines: to transcend the boundaries of our intellect.

Hamilton Osborne 22 hours ago Max Planck, the founder of quantum physics said, “I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”

T MURPHY 1 day ago And don’t forget, every wrong answer provided and published by one AI goes into the mix of “knowledge” used by others to answer future queries .

Michael Scheer 23 hours ago True. And substitute “person” for “AI.” Also true. In both cases, understand “others” to mean any entity evaluating an assertion. Still true. Distinguishing shinola from the other stuff is basic to existence, real or virtual.

Mark Mehlmauer 1 day ago I, for one, am delighted it can’t “think.” Now, if I could stop worrying about it killing me in my sleep… “Open the pod bay doors, HAL.” “I’m sorry, Dave. I’m afraid I can’t do that.”

K Goff 1 day ago Fusion energy is just 20 years away.

It was 20 years away in 1950. It was 20 years away in 1970. It was 20 years away in 1990.

JOSEPH FOLK 13 hours ago And the commercial use of Fusion is likely 20 years away in 2025.

Kevin Remillard 1 day ago AI: “It is a nuanced question whether a debate specifically on policies instructing teachers and staff not to inform parents if their child begins to identify as another gender is healthy for grade school children.”

Michael Scheer 23 hours ago Unfortunately, you will get the same wisdom from a fair number of people, as well.

Patrick Hurley 1 day ago I once asked a classmate in calculus class how he knew to use a particular transformation to solve a problem. He said he didn’t know. He was able to visualize the textbook (photographic memory) and simply used what he saw. He had knowledge but no understanding.

K Goff 1 day ago I remember reading about aerospace tech of the late 50’s/early 60s. Manned fighter and bomber contracts were being cancelled all over the world as obsolete because superior automated rockets and missiles were going to imminently take over warfare. Accommodating a silly human in the designs was just a waste.

Very quickly it was learned that, no matter their indisputable competence at defined, specific tasks, computers had limitations in recognition and adaptability, and required vast amounts of energy and cooling.

And suddenly a 180 pound pilot and supporting hardware looked like quite a bargain.

K Goff 1 day ago I adopted a new dog. The very first day, when she didn’t know my home at all, I closed the door to the room I was in, leaving her on the other side. She whimpered for a moment, got up, and immediately circled around the house and triumphantly found me through the open door to another room.

AI can’t do that. Nor can it move with the grace and purpose she does.

And she was just popped out by a stray on the street.

Lincoln Cleveland 1 day ago Consider that, from an evolutionary perspective, humans have just crawled out of the slime, and their brains are still tiny.

The fact that they have produced something artificial that even leaves a question whether it is better than them is still pretty impressive!

K Goff 1 day ago The amount of energy wasted on “AI” is staggering, whether you approach it from a squishy green perspective or an economic one.

John Fisher 22 hours ago As long as we keep the government out of it, LLMs will either produce a return or investors will move onto something else.

Tim McManus 1 day ago AI is a race to the mean. That’s the basis of its underlying algorithmic structure. Creativity is divergence from the mean, and the mean races towards it. The solution isn’t the creative part, the process that brings about the solution IS the creative part.

Carlton Ellis 21 hours ago I beg to differ; the solution is in many cases the creative part.

Recently I changed the oil on my SUV. The filter wrench fit the new oil filter but would not properly fit the filter on the vehicle. I needed a solution. Because of the tight space the filter was in, I could not use an adjustable oil filter wrench. So here is what I did: fortunately, I had some of the clamps that you use to attach a dryer hose to the vent pipe in the house. You can adjust the circumference of these clamps with a screwdriver. I was able to adjust the size of the clamp to snugly fit the oil filter. The screw mechanism of the clamp provided the necessary leverage point to turn the filter and get it off.

I don’t think that is a solution that an AI chatbot is going to give you.

K Goff 1 day ago I’m shocked, shocked! that it’s just stock-selling hype, and not concrete and real like self-driving cars, blockchain, virtual reality, nanotech, NFTs…

Dave R 1 day ago Does it really need to think? We can continuously dump our whole history into AI to raise everyone’s level in any topic. Knowledge that once required memorization and reading new studies is now available to everyone, instantly. It removes constraints, letting everyone innovate and execute quickly. Great progress ahead.

Neil F 23 hours ago Exactly. For basic programming, it gives someone with little to no coding experience the ability to write simple scripts to automate their work. For now, the search ability is way more effective than search engines, which have become lists of advertisements and unrelated garbage.

H Sharpe 1 day ago I wonder how many ‘problems’ each one of us actually has to solve every day? Day-to-day existence really doesn’t throw huge and/or difficult problems in our way, and most of our behavior is routinized within well-known limits. The big problems, such as figuring out disease causes and cures, and economic problems, should be the focus of AI, not making spreadsheets faster, although there is nothing intrinsically wrong with that; it just makes the ‘machine’ happier.

James Pascoe 1 day ago Note that a “bag of heuristics” is exactly what a psychic uses. Literally the oldest trick in the book, at scale.

kris thiruvillakkat 1 day ago For the moment, forget about AI thinking; focus on the real person thinking. In this case, Trump, a self-proclaimed genius dude. Does he think? Does he think before he acts? (AI does that.) And what does he think after all his stupid acts, made without any RATIONAL thinking, end up as the worst messes, impossible to fix? Question to ask: is Trump a human or a machine? A rational-thinking, reasonably intelligent person will show empathy, sympathy, kindness, respect, and the ability to understand; Trump has none. Even robots have some of these, as evidenced by robots helping/consoling patients in hospitals, nursing homes, etc. Will that shame Trump?

Steve Humphrey 1 day ago Trump? Seriously your example here is like the AI generated street map of Manhattan.

C abely 1 day ago Anyone who looked at it objectively saw it lacked creativity, a fundamental building block of human intelligence.

AI took on a life (marketing spend) of its own. Instead it should be AL: accelerated logic, task-based, not intelligence-based.

Glenn Lindquist 1 day ago Nothing actually “thinks” unless it has life, has a soul, is capable of emotions, and has a will of its own. A machine is just a machine no matter how complicated it is and how well it works. Ultimately these machines can only do what real thinkers, that is humans, have programmed them to do. And the software engineers creating these machines are only engineers, not gods. They are not creating life and lifeless things cannot think.

Steve Humphrey 1 day ago Truly. The miracle of life, and all its capabilities, can only come from a Creator. That is God.

Craig H 1 day ago 1 + 1 = 10. This is what a computer does. It either adds a binary one or it subtracts a binary one. Millions and billions of times in a second. That’s it. It doesn’t think; it refers to massive sets of data, which does everything from helping post this thought to quickly preparing a trip to London for me.

John Sledziewski 1 day ago We are no longer users of tools; we are users used by tools.

The computer was already an extension of the central nervous system; artificial intelligence is its automatic replay.

Frank M 1 day ago “We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All The vast ‘brains’ of artificial intelligence models can memorize endless lists of rules. That’s useful, but not how humans solve problems.”

Good article, one I will save.

‘Thinking?’ Now that is something else entirely. To the best of my knowledge, ‘AI’ is ‘AI’ because we’re not sure what the acronym is supposed to spell out.

For the moment, AI is simply better coding, maybe much better coding, but ’thinking’, or ‘Artificial Intelligence’; well, to me, that would define being self-aware, which would be fine, I suppose, if humanity is comfortable with that, although tolerance has never been one of Homo Sapiens’ strong points.

Sean Campbell 1 day ago Good article. This is one of the few that injects a dose of reality into the AI craze. For the most part, AI is a fancy front end bolted onto a raft of basic modeling techniques that have been in use for decades. These models can do some things quite well, but they most certainly are not “intelligent” and can’t “think.” Statements to the contrary simply fail to grasp the fundamental nature of the product - or are hucksterism at a high level.

Tyler Davis 21 hours ago That’s not true – the transformer is a novel architecture that enabled the rapid advancement of the past few years

Richard Berlin 1 day ago Nothing mysterious. It is the lowest common denominator. This seems likely to remain the controlling feature although we shall pretend differently.
