Give your Brain a rest. Use AI.

I could ask Google :unsure:
But WTF is "Woke" ?
Too many rearranged words are gaining traction like " gaslighting", it almost sounds like a good thing, I cook with Gas.;)
I wonder if this is because of AI or people are just getting dumber?
 
AI says: The term comes from the British play Gas Light, written by Patrick Hamilton in 1938.

  • The Plot: A manipulative husband, Jack Manningham, attempts to convince his wife, Bella, that she is going insane to distract her from his criminal activities.
  • The "Gas" Part: Jack is secretly searching the attic for hidden jewels. When he turns on the gas lights upstairs, the lights in the rest of the house dim due to the shared fuel supply.
  • The Twist: When Bella notices the dimming lights, Jack insists she is imagining it. By denying her physical reality, he makes her doubt her own perceptions and sanity.
 
AI: if it works we are screwed, and if it doesn't we are screwed?
The massive capital invested in AI and data centres could have gone to something definitely useful, like rare-earth processing. Instead it appears to have gone into a great big hole, and it needs more money thrown at it to stop an implosion :unsure:
China must be rolling on the floor laughing: they have cheap open-source AI and almost all of the rare-earth mineral processing that they and the world need, while they watch the West squander such resources and slip into insignificance.
 
I asked Gemini whether a specific video claiming AI is a dead end is correct. I find the answer very enlightening.

The video by The Honest Torus presents a mathematically grounded critique of Large Language Models (LLMs), arguing that while they are incredible at mimicking patterns, they are structurally incapable of reaching the "Truth" or "General Intelligence" required for higher-level mathematics.
Here is a breakdown of the logic used in the video and whether those claims hold up to current scientific scrutiny.
1. The Core Mathematical Arguments
The "Dead End" argument usually centers on three specific mathematical and structural bottlenecks:
A. The "Probabilistic Decay" of Truth
This is the most compelling argument. LLMs are autoregressive, meaning they predict the next token T(n+1) based on the previous tokens T(1…n).
* The Logic: If an LLM has a 99% accuracy rate per token, the probability of a 100-token sequence being entirely correct is 0.99^100 ≈ 36%. For a complex mathematical proof requiring thousands of tokens, the probability of staying "on the rails" approaches zero exponentially.
* The "Dead End": Simply making models bigger (scaling) doesn't solve this; it just moves the goalpost slightly further back.
B. Lack of a "World Model"
The video argues that LLMs represent a statistical map of language, not a logical map of reality.
* The Logic: In mathematics, a "Torus" (like the channel's namesake) has specific topological properties regardless of how you describe it in English or Python.
* The "Dead End": LLMs don't "understand" the torus; they understand the words usually associated with a torus. If you ask a question that requires navigating a logical space that hasn't been written about on the internet, the model has no "ground truth" to fall back on.
C. The Error Avalanche
In a mathematical proof, a single wrong sign or a slight misstep in a lemma invalidates the entire conclusion. Because LLMs lack a "self-correction" loop (a way to verify their own logic against a formal system like Lean or Z3 during the generation process), they are prone to "hallucinating" logical leaps that look correct but are fundamentally broken.
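The "error avalanche" can be made concrete with a toy step-checker. In real systems this checking is delegated to a proof assistant (e.g. Lean) or an SMT solver (e.g. Z3); the example below is only an illustrative stand-in, where each "proof step" is checkable arithmetic and the whole chain fails at the first wrong step:

```python
# Toy illustration of the "error avalanche": a derivation is a chain of
# claimed results, and a single wrong step invalidates everything after it.
def verify_chain(start, steps):
    """Each step is (description, fn, claimed_result). Returns the index
    of the first wrong step, or None if every step checks out."""
    value = start
    for i, (desc, fn, claimed) in enumerate(steps):
        value = fn(value)
        if value != claimed:
            return i  # the avalanche starts here
    return None

steps = [
    ("double it", lambda x: 2 * x,  14),
    ("add three", lambda x: x + 3,  17),
    ("square it", lambda x: x * x, 298),  # wrong: 17**2 is 289
]
print(verify_chain(7, steps))  # -> 2, the first unverifiable step
```

An LLM generating the chain has no such loop built in: it emits the step that *looks* most plausible, which is exactly why a plausible-but-wrong "298" can sail through.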
2. Is it True? (The "Dead End" Verdict)
Technically, yes—for the current architecture.
The consensus among many top AI researchers (including Yann LeCun and Richard Sutton) aligns with The Honest Torus: Autoregressive Transformers (LLMs) alone will likely never reach AGI.
* They are "stochastic parrots" (highly sophisticated ones) that lack the ability to plan, reason in a non-linear fashion, or understand cause-and-effect.
* If your definition of "Dead End" is "this specific tech won't become a sentient God-AI," then the video is likely correct.
3. Is it Likely? (The Nuance)
While the math says LLMs are a dead end for pure intelligence, it doesn't mean they are useless.
* The Hybrid Future: Most experts believe the "dead end" will be bypassed by plugging LLMs into formal verifiers. For example, Google DeepMind’s AlphaGeometry uses an LLM to "suggest" ideas and a symbolic engine to "prove" them.
* The "Dead End" is a Pivot: Historically, every AI "dead end" (like Perceptrons in the 60s) led to a pivot that created the next breakthrough.
Summary Table
| Feature | LLM Capability | Mathematical Requirement | Verdict |
|---|---|---|---|
| Logic | Statistical Correlation | Rigorous Deduction | Fail |
| Consistency | Probabilistic (P ≈ (1 − e)^N) | Absolute (1.0) | Fail |
| Knowledge | Compressed Internet Data | Underlying Axioms | Partial |
The Honest Torus is highlighting a fundamental truth: You cannot reach the moon by building a taller and taller ladder. You eventually need a rocket. LLMs are a very, very tall ladder.
 
So you wouldn't trust AI to select targets in a bombing raid then?
Which is one of the excuses put forward by the Dept of War to explain 165 dead schoolgirls; they claim it was an AI error.
 
Certainly not. However, this is also a question of definition. The old terrain-hugging cruise missiles of the '90s already used some kind of AI, but that didn't decide the target.
The school issue is complicated, unless you believe everything you read in the news. For one, Mossad has identified multiple mass quarters of the Mullah fighters, and guess where: in hospitals, schools and mosques. That makes them legitimate targets.
Also in this case while it was a targeting error the real target was next door, so who's fault is it? Those Muslim fuckers don't give a shit about those girls, or anybody who is not in their club of brainwashed monkeys. If these girls due to a miracle developed a critical view of the Koran the Mullahs would have happily shot them themselves.
 