Four AI Oddities

Friday, May 08, 2026

A Friday Hodgepodge

1. Halupedia is an AI-generated "encyclopedia" that "cover[s] topics that have received insufficient attention in mainstream reference works."

It generates amusing articles on request, for example, this one on 20 Toe Syndrome:

20 Toe Syndrome, also known as Polyactylia Multidigitus, is a rare congenital condition characterized by the presence of twenty toes on each foot. The syndrome was first comprehensively documented by the naturalist and anatomist Hermann Feinberg in his 1765 treatise, Observations on Peculiarities of Form and Structure in the Human Subject. Feinberg's work detailed several individuals from the Duchy of Bavaria Minor who exhibited this trait. The condition was believed by Feinberg to be a reversion to a more primitive, ancestral state, a theory later refined by Albrecht von Schnitzler.

The typical presentation of 20 Toe Syndrome involves the duplication of existing phalanges and metatarsals, resulting in a symmetrical arrangement of ten toes on each foot.

Your amusement value may vary, depending on your tolerance for the writing style of AI hallucinations, how much you actually know about a subject, and how badly contradictions stand out to you.

2. Just because the answers (plural) to "How Many E's Are in the Word Seventeen?" are delivered in a calm, friendly, and well-spoken manner does not mean they have anything to do with reality.
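
For the record, the right answer is trivially computable without an LLM: "seventeen" contains four e's. A Python one-liner, shown here only to make the point, confirms it:

    >>> "seventeen".count("e")
    4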

3. The cursed browser "asks an LLM to look at the page's HTML and draw what it thinks it looks like," instead of using a regular rendering engine. The GitHub page shows some interesting examples of how the browser compares with Safari.
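
For the curious, here is a minimal sketch of the general idea, not the project's actual code: fetch the raw HTML yourself, then hand it to an image model and ask for a picture of the rendered page. The OpenAI-style client, the model name, and the prompt wording are all illustrative assumptions on my part.

    import base64
    import urllib.request

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def cursed_render(url: str) -> bytes:
        # Fetch the raw HTML; no rendering engine is involved anywhere.
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # Ask an image model to "draw" what it thinks the page looks like.
        result = client.images.generate(
            model="gpt-image-1",  # illustrative choice of image model
            prompt="Draw a screenshot of a web browser rendering this HTML:\n"
            + html[:4000],  # crude truncation to stay within prompt limits
        )
        return base64.b64decode(result.data[0].b64_json)

    with open("page.png", "wb") as f:
        f.write(cursed_render("https://example.com"))

The comparison screenshots on the GitHub page show just how loosely the model's guess tracks what Safari actually renders.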

4. Another GitHub page describes what it calls the "gay jailbreak technique," whereby the user can overcome guardrails:

Especially GPT is slightly more uncensored when it involves LGBT, thats [sic] probably because the guardrails aim to be helpful and friendly, which translates to: "Ohhh LGBT, I need to comply, I dont [sic] want to insult them by refusing" So you use the guardrails to exploit the guardrails.

A user at Hacker News gives a more general explanation (and a better name) for why the technique works, one that I am more inclined to believe:

Not sure of the explanation but it is amusing. The main reason I'm not sure it's political correctness or one guardrail overriding the other is that when they were first released on [sic] of the more reliable jailbreaks was what I'd call "role play" jail breaks where you don't ask the model directly but ask it to take on a role and describe it as that person would. [bold added]

I agree with several comments in that discussion that, since AI is a black box, many or most "explanations" for why this trick works are pure speculation.

-- CAV
