So, there I was, trying to remember the title of a book I had read bits of, and I thought to check a Wikipedia article that might have referred to it. And there, in “External links”, was … “Wikiversity hosts a discussion with the Bard chatbot on Quantum mechanics”.
How much carbon did you have to burn, and how many Kenyan workers did you have to call the N-word, in order to get a garbled and confused “history” of science? (There’s a lot wrong and even self-contradictory with what the stochastic parrot says, which isn’t worth unweaving in detail; perhaps the worst part is that its statement of the uncertainty principle is a blurry JPEG of the average over all verbal statements of the uncertainty principle, most of which are wrong.) So, a mediocre but mostly unremarkable page gets supplemented with a “resource” that is actively harmful. Hooray.
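(for the record, since the parrot can only emit a smeared average of everyone's phrasings: the actual Kennard/Robertson statement is an inequality on standard deviations of the quantum state, not the folk "you can't measure both at once" version most of those phrasings collapse into)

```latex
% Kennard bound for position/momentum, plus Robertson's general form:
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2},
\qquad
\sigma_A \, \sigma_B \;\ge\; \frac{1}{2}\,\bigl|\langle [\hat{A}, \hat{B}] \rangle\bigr|
```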
Meanwhile, over in this discussion thread, we’ve been taking a look at the Wikipedia article Super-recursive algorithm. It’s rambling and unclear, throwing together all sorts of things that somebody somewhere called an exotic kind of computation, while seemingly not grasping the basics of the ordinary theory the new thing is supposedly moving beyond.
So: What’s the worst/weirdest Wikipedia article in your field of specialization?
for another example of Rationalist crank shit trying to pass itself off as computer science, check out the article for Minimum Description Length:
Selecting the minimum length description of the available data as the best model observes the principle identified as Occam’s razor. Prior to the advent of computer programming, generating such descriptions was the intellectual labor of scientific theorists. It was far less formal than it has become in the computer age. If two scientists had a theoretic disagreement, they rarely could formally apply Occam’s razor to choose between their theories. They would have different data sets and possibly different descriptive languages. Nevertheless, science advanced as Occam’s razor was an informal guide in deciding which model was best.
With the advent of formal languages and computer programming Occam’s razor was mathematically defined. Models of a given set of observations, encoded as bits of data, could be created in the form of computer programs that output that data. Occam’s razor could then formally select the shortest program, measured in bits of this algorithmic information, as the best model.
note that this is uncited nonsense, but it sounds exactly like a LessWrong post. this one hits home for me because it explains some of the weird interest I’ve seen in my hobby work designing a hardware reducer for binary lambda calculus. since BLC program descriptions are exceptionally compact (generally speaking, the program implementing a given algorithm is only a handful of bits, so its Kolmogorov complexity relative to a BLC universal machine is tiny; there’s a toy sketch of the encoding after the list below), the Rationalists and neoreactionaries (via Yarvin and friends) use the above extremely fucky application of Occam’s razor to claim a magical advantage for short programs. while I really like playing with BLC and I feel it has interesting potential for exploring alternative caching and optimization strategies, its actual performance is kind of hilarious:
- BLC programs take up a shitload of memory (around 500 MB to a few GB for a basic Lisp REPL) because their simple program strings expand to extremely complex garbage-collected in-memory representations (which this fucky version of Occam’s razor elides, of course)
- mathematical performance is awful because Church numerals are a unary system, so arithmetic operations are very expensive (there’s a quick sketch below). this can be somewhat fixed by switching to a binary number encoding and implementing operators over that (lambda calculus doesn’t really have a native concept of numbers at all, so you effectively get to choose your numerical base), but more efficient numbers are one reason why practical lambda calculus derivatives usually choose more complex encodings
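to make the unary-cost point concrete, here’s the bog-standard Church numeral encoding in plain Python (nothing from the articles or my hardware project, just the textbook definitions):

```python
# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# add m n applies f m times on top of n applications (m + n calls total);
# mul m n composes them, giving m * n calls. Work scales with the magnitude
# of the number, unlike positional (binary/decimal) arithmetic.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))

def to_int(church):
    """Collapse a Church numeral to a Python int by counting the f's."""
    return church(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
four  = succ(three)
print(to_int(add(three)(four)))  # 7
print(to_int(mul(three)(four)))  # 12
```

and since I keep going on about how few bits BLC programs take, here’s a toy sketch of the Tromp-style bit encoding (De Bruijn terms as nested tuples; the constructors and helper names are my own made-up illustration, not anything canonical):

```python
# Toy encoder for Tromp-style binary lambda calculus (BLC).
# Terms use De Bruijn indices (1 = innermost binder):
#   ("var", n) | ("lam", body) | ("app", fun, arg)
# Encoding: variable n -> "1"*n + "0", abstraction -> "00" + body,
#           application -> "01" + fun + arg.

def encode(term):
    tag = term[0]
    if tag == "var":
        return "1" * term[1] + "0"
    if tag == "lam":
        return "00" + encode(term[1])
    if tag == "app":
        return "01" + encode(term[1]) + encode(term[2])
    raise ValueError(f"unknown term: {term!r}")

identity = ("lam", ("var", 1))                       # λx. x
k_comb   = ("lam", ("lam", ("var", 2)))              # λx. λy. x
self_app = ("lam", ("app", ("var", 1), ("var", 1)))  # λx. x x

print(encode(identity))                     # 0010 (4 bits)
print(encode(k_comb))                       # 0000110 (7 bits)
print(encode(("app", self_app, self_app)))  # 010001101000011010 (18 bits)
```

those bit counts are the whole trick: the descriptions are tiny, and everything expensive happens when you actually reduce them.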
but hey, speaking of Algorithmic Information Theory, look whose weirdo fingerprints are on that article! that’s right, it’s the Burgin fucker from the super-recursive algorithms article and formerly of the Solomonoff induction article! fuck me, I hate what the Rationalists are doing to my current hobby obsession.
Tangent: is there a term or phrase for when Occam’s Razor is misused or quoted incorrectly? My prior is that any time I see it I assume it’s going to be misused.
well, I’ve found another one dredging the lambda calculus bits of Wikipedia. behold, the Plessey System 250 article, which appears to describe a heavily fictionalized and extremely cranky version of what I’m assuming is a real (and much more boring) British military computer from the 70s:
It is an unavoidable characteristic of the von Neumann architecture[citation needed] that is founded on shared random access memory and trust in the sharing default access rights. For example, every word in every page managed by the virtual memory manager in an operating system using a memory management unit (MMU) must be trusted.[citation needed] Using a default privilege among many compiled programs allows corruption to grow without any method of error detection. However, the range of virtual addresses given to the MMU or the range of physical addresses produced by the MMU is shared undetected corruption flows across the shared memory space from one software function to another.[citation needed] PP250 removed not only virtual memory[1] or any centralized, precompiled operating system, but also the superuser, removing all default machine privileges.
It is default privileges that empower undetected malware and hacking in a computer. Instead, the pure object capability model of PP250 always requires a limited capability key to define the authority to operate. PP250 separated binary data from capability data to protect access rights, simplify the computer and speed garbage collection. The Church machine encapsulates and context limits the Turing machine by enforcing the laws of the lambda calculus. The typed digital media is program controlled by distinctly different machine instructions.
this extremely long-winded style of bullshitting (the Church machine limits the Turing machine by enforcing the laws of the lambda calculus? how in fuck do you propose it applies alpha or beta reduction to a fucking Turing machine?) continues on the article’s talk page, where 3 years ago a brave Wikipedian looked up the actual machine in question and poked holes in essentially every part of the article: the machine did have an OS (and a bunch of other normal computer shit the article claims it could function without), didn’t seem to implement any form of lambda calculus (or “Church machine”, whatever that is) in hardware, and is overall not a very interesting machine apart from whatever security features it implemented for military work. the crank responsible then promptly flooded the Wikipedian with an incredible volume of verbiage until he went away.
e: I checked the crank’s user page and it gets so much worse
fwiw, I just applied some editing skills to [[Super-recursive algorithm]]. It’s still promoting a nonsense book, but at least it’s not trying to claim credit for the whole concept of hypercomputation.
that’s so much better! I didn’t think anything sensible could be derived from the article — now it’s a fair summary of the sources and a dire warning that the reader is entering crank town.
check out that talk page though! I have no idea how this thing survived all the scrutiny it got as far back as 2009. I do like when someone barges into the page with a “but wait, the new Burgin preprint will clear up any confusion from the computer science orthodoxy who don’t understand his work!” and the only reply was essentially “we’re not confused, we just think it’s garbage”