Podcasting since 2005! Listen to our latest podcast here:

Friday, January 16, 2026

MIT Technology Review on Why AI LLMs are So Strange and So Alien

Until recently, I thought that AI LLMs were just sort of fancy, souped-up search engines. Google on steroids. But then they started getting simple things wrong. And they seemed to understand that our April 1 stories were just sarcasm. There seemed to be more to them than fancy search engines.

This MIT article explains what is going on with the LLMs.  

https://www.technologyreview.com/2026/01/12/1129782/ai-large-language-models-biology-alien-autopsy/

What do you folks think of this?


3 comments:

  1. It scares me, but I do see value. Example: you return to Newark from the tropics with hookworm. What are the odds your local physician can't properly diagnose the disease? AI expands his search.

  2. Would "return from the tropics" trigger the physician to widen the search?

  3. I'm personally a little tired of every reporter confusing the fact that we don't fully understand how specific information is encoded in the weights and parameters, with the idea that we don't know how LLMs work. We do. Extremely well. They weren't given to us by aliens, they're a small breakthrough (Transformer layers) on top of a very well-trodden neural network technology.

    In previous neural nets, we had an easier time figuring out the internal encodings, simply because they were smaller. Google's DeepDream dog-snail hallucinations? Those weren't images it was generating; those were images extracted from the encodings in one of the internal layers, a side effect of the neural net's real purpose, which was to identify subjects in an image.

    The main differences are just scale and dimensionality; the transformer layer from the "Attention Is All You Need" paper was really just a successor to the RNN, but applied in a new way that exploded the number of dimensions in the network.
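    For anyone curious what that transformer layer actually computes, here is a minimal numpy sketch of the scaled dot-product attention at its core. The shapes and random inputs are made up for illustration; this is the bare mechanism, not a real model.

    ```python
    # Minimal sketch of scaled dot-product attention
    # (the core of the transformer layer). Shapes are illustrative only.
    import numpy as np

    def softmax(x, axis=-1):
        # numerically stable softmax
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # scores[i, j]: how much query position i attends to key position j
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        weights = softmax(scores, axis=-1)  # each row sums to 1
        return weights @ V                  # weighted mix of the values

    rng = np.random.default_rng(0)
    seq_len, d_k = 4, 8
    Q = rng.standard_normal((seq_len, d_k))
    K = rng.standard_normal((seq_len, d_k))
    V = rng.standard_normal((seq_len, d_k))
    out = attention(Q, K, V)
    print(out.shape)  # (4, 8)
    ```

    Unlike an RNN, nothing here is recurrent: every position looks at every other position in one matrix multiply, which is what lets the dimensionality (and parallelism) explode at scale.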
