Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Sneer inspired by a thread on the preferred Tumblr aggregator subreddit.
Rationalists found out that human behavior didn't match their ideological model; rather than abandon the model or change the ideology, they decided to replace humanity with AIs designed to behave the way they think humans should, just as soon as they can figure out a way to do that without the AIs destroying all life in the universe.
That thread gives me hope. A decade ago, a random internet discussion in which rationalist came up would probably mention “quirky Harry Potter fanfiction” with mixed reviews, whereas all the top comments on that thread are calling out the alt-right pipeline and the racism.
I have no hope. The guy who introduced me to LessWrong included what I later realised was a race science pitch. Yudkowsky was pushing this shit in 2007. This sucker just realised a coupla decades late.
david heinemeier hansson of ruby on rails fame decided to post a white supremacist screed with a side of transphobia, because now he doesn't need to pretend anything anymore. it's not surprising, he was heading this way for a while, but seeing the naked apologia for fascism is still shocking to me.
any reasonable open source project he participates in should immediately cut ties with the fucker. (i’m not holding my breath waiting, though.)
@mawhrin @BlueMonday1984 bad news, he’s just financed a coup of Rubygems…
(well, shopify did. he’s on the board. so, looks like, best guess.)
(edit: fixed wrong company name.)
@fishidwardrobe @mawhrin @BlueMonday1984 Do you mean shopify? https://joel.drapper.me/p/rubygems-takeover/
@koantig @mawhrin @BlueMonday1984 bugger, yes, editing.
I was in London last week and I can confirm it still has Big Ben, Trafalgar Square, and streets. The Tube is as shite and loud and warm and tight as ever. I even got called a cunt once. Had a blast. On a scale of Bordeaux to Beans on Toast I rank it 9/10 Britishness
Urgh, I couldn’t even get through the whole article, it’s too disgusting. What a surprise that yet another “no politics at work”-guy turns out to support fascism!
just yesterday I saw this toot and now I know why
(I mean, they probably should've bounced the guy a decade ago, but it's definitely past time for it now)
@mawhrin just casually pitching “great replacement theory” there. What a little Nazi
Starting things off with a newsletter by Jared White that caught my attention: Why “Normies” Hate Programmers and the End of the Playful Hacker Trope, which directly discusses how the public perception of programmers has changed for the worse, and how best to rehabilitate it.
Adding my own two cents, the rise of gen-AI has definitely played a role here - I'm gonna quote Baldur Bjarnason directly, since he said it better than I could:
- It's turned the tech industry from a potential political ally to environmentalism to an outright adversary. Water consumption of individual queries is irrelevant because now companies like Google and Microsoft are explicitly lined up against the fight against climate disaster. For that alone the tech should be burned to the ground.
- People in a variety of fields are watching the "AI" industry outright promise to destroy their field, their industry, their work, and their communities. Illustration, filmmaking, writers, and artists don't need any other reason to be against the tech other than the fact that the industry behind the tech is openly talking about destroying them.
- Those who fight for progressive politics are seeing authoritarians use the tech to generate propaganda, litter public institutions with LLM "accountability sinks" that prevent the responsibility of destroying people's lives from falling on individual civil servants, and efforts to leverage the centralised nature of Large Language Model chatbots into political control over our language.
This is an interesting crystallization that parallels a lot of thoughts I’ve been having, and it’s particularly hopeful that it seeks to discard the “hacker” moniker and instead specifically describe the subjects as programmers. Looking back, I was only becoming terminally online circa 1997, and back then it seemed like there was an across-the-spectrum effort to reclaim the term “hacker” into a positive connotation after the federal prosecutions of the early 90s. People from aspirant-executive types like Paul Graham to dirty hippies like RMS were insistent that being a “hacker” was a good thing, maybe the best possible thing. This was, of course, a dead letter as soon as Facebook set up at “One Hacker Way” in Menlo Park, but I’d say it’s definitely for the best to finally put a solid tombstone on top of that cultural impulse.
Also because my understanding of the defining activity of the positive-good "hacker" is that it's all too close to Zuckerberg's "move fast and break things," and I think Jared White would probably agree with me. Paul Graham was willing to embrace the term because he was used to the interactive development style of Lisp environments, but the mainstream tools have only fitfully evolved in that direction at best. When "hacking," the "hacker" makes a series of short, small iterations with a mostly nebulous goal in mind, and the bulk of the effort may actually be what's invested in the minimum viable product. The self-conception inherits from geek culture a slumped posture of almost permanent insufficiency, perhaps hiding a Straussian victimhood complex to justify maintaining one's own otherness.
In mentioning Jobs, the piece gestures towards the important cultural distinction that I still think is underexamined. If we're going to reclaim and rehabilitate even homeopathic amounts of Jobs' reputation, the thesis we're trying to get at is that his conception of computers as human tools is directly at odds with the AI promoters' (and, more broadly, most cloud vendors') conception of computers as separate entities. The development of generative AI is only loosely connected with the sanitized smiley-face conception of "hacking." The sheer amount of resources and time spent on training forecloses the possibility of a rapid development loop, and you're still not guaranteed viable output at the end. Your "hacks" can devolve into a complete mess, and at eye-watering expense.
I went and skimmed Graham’s Hackers and Painters again to see if I could find any choice quotes along these lines, since he spends that entire essay overdosing on the virtuosity of the “hacker.” And hoo boy:
Measuring what hackers are actually trying to do, designing beautiful software, would be much more difficult. You need a good sense of design to judge good design. And there is no correlation, except possibly a negative one, between people’s ability to recognize good design and their confidence that they can.
You think Graham will ever realize that we're watching the culmination of a generation of his precious "hackers" who ultimately failed at all this?
re: last line: no, he never will admit or concede to a single damn thing, and that’s why every time I remember this article exists I have to reread dabblers & blowhards one more time purely for defensive catharsis
I don’t even know the degree to which that’s the fault of the old hackers, though. I think we need to acknowledge the degree to which a CS degree became a good default like an MBA before it, only instead of “business” it was pitched as a ticket to a well-paying job in “computer”. I would argue that a large number of those graduates were never going to be particularly interested in the craft of programming beyond what was absolutely necessary to pull a paycheck.
Interesting, I'd go rhetorically more in this direction: a hack is not a solution, it's the temporary fix (or… break?) until you get around to doing it properly. On the axis where hacks are on one end and solutions on the other, genAI shit is beyond the hack. It's not even a temporary fix, it's less, functionally and culturally.
A hack can also just be a clever way to use a system in a way it wasn't designed for.
Say you put a Ring doorbell on a drone as a perimeter defense thing? A hack. See also the woman who makes bad robots.
It also can be a certain playfulness with tech. Which is why hacker is dead. It cannot survive contact with capitalist forces.
AFAIK the USA is the only country where programmers make very high wages compared to other college-educated people, in a profession anyone can enter. It's a myth that so-called STEM majors earn much more than others, although people with a professional degree often launch their careers quicker than people without (but if you really want to launch your career quickly, learn a trade or work in an extractive industry somewhere remote). So I think for a long time programmers in the USA made peace with FAANG because they got a share of the booty.
Not the only one. The former USSR and Eastern Europe as well, and it's way worse there. Typically, an SWE would earn several TIMES more than your average college-educated person. This leads to programmers being obnoxious libertarian nazi fucktards.
Hackers is dead. (Apologies to punk)
I'd say that for one reason alone: when Musk claimed Grok was from the Guide, nobody really turned on him.
Unrelated to programmers or hackers, Elon's father (CW: racism) went fully mask-off and claims Elon agrees with him. Which, considering his promotion of the UK racists, does not feel off the mark. (And he is spreading the dumb '[Africans] have an [average] IQ of 63' shit, and claims it is all genetic. Sure man, the average African needs help understanding the business end of a hammer. As I said before, guess I met the smartest Africans in the world then, as my university had a few smart exchange students from an African country. If you look at his statements it is even dumber than normal, as he says population, so that means either non-Black Africans are not included, showing just how much he thinks of himself as the other, or they are, and the Black African average is even lower.)
Regarding occasional sneer target Lawrence Krauss and his co-conspirators:
Months of waiting but my review copy of The War on Science has arrived.
I read Krauss’ introduction. What the fuck happened to this man? He comes off as incapable of basic research, argument, basic scholarship. […] Um… I think I found the bibliography: it’s a pdf on Krauss’ website? And all the essays use different citation formats?
Most of the essays don’t include any citations in the text but some have accompanying bibliographies?
I think I’m going insane here.
What the fuck?
https://bsky.app/profile/nateo.bsky.social/post/3lyuzaaj76s2o
Huh, I wonder who this Krauss guy is, haven’t heard of him.
*open wikipedia*
*entire subsection titled “Allegations of sexual misconduct”*
*close wikipedia*
Image description: Screenshot of Lawrence Krauss's Wikipedia article, showing a section called "Controversies" with subheadings "Relationship with Jeffrey Epstein" followed by "Allegations of sexual misconduct". Text at https://en.wikipedia.org/wiki/Lawrence_Krauss#Controversies
Always so many coincidences.
“As a scientist…” please stop giving the world more reasons to stuff nerds in lockers.
All of those people, Krauss, Dawkins, Harris (okay that one might’ve been unsalvageable from the start, I’m really not sure) are such a great reminder that you can be however smart/educated you want, the moment you believe you’re the smartest boi and stop learning and critically approaching your own output you get sucked into the black hole of your asshole, never to return.
Like if I had a nickel. It’s hubris every time. All of those people need just a single good friend that, from time to time, would tell them “man, what you said was really fucking stupid just now” and they’d be saved.
Clout is a proxy of power and power just absolutely rots your fucking brain. Every time a Guy emerges, becomes popular, clearly thinks “haha, but I am different, power will not rot MY brain”, five years later boom, he’s drinking with Jordan Benzo Peterson. Even Joe Fucking Rogan used to be significantly more lucid before someone gave him ten bazillion dollars for a podcast and he suffered severe clout poisoning.
Sabine Hossenfelder claims she finally got cancelled, kind of: the Munich Center for Mathematical Philosophy cut ties with her.
Supposedly the MCMP thought publicly shitting on a paper for clicks on your very popular youtube channel was antideontological. Link goes to reddit post in case you don’t want to give her views.
Sorry but who the fuck is that? Not one of our common guests here, I need a primer on her
She’s popped up once or twice, owing to how she got on a lot of normal people’s feeds as a science influencer before she couldn’t contain the crank any longer.
The commentator who thinks that USD 120k / year is a poor income for someone with a PhD makes me sad. That is what you earn if you become a professor of physics at a research university or get a good postdoc, but she aged out of all of those jobs and was stuck on poorly paid short-term contracts. There are lots of well-paid things that someone with a PhD in physics can do if she is willing to network and work for it, but she chose “rogue intellectual.”
A German term to look up is WissZeitVG but many academic jobs in many countries are only offered to people no more than x years after receiving their PhD (yep, this discriminates against women and the disabled and those with sick spouses or parents).
(sees YouTube video)
I ain’t [watchin] all that
I’m happy for u tho
Or sorry that happened
the talking point about disparaging terms for AI-users-by-choice ("I came up with a racist-sounding term for AI users, so if you say 'clanker' you must be a racist") is so fucking stupid it's gotta be some sort of op
(esp when the made-up racist-sounding term turns out to have originated with Warren fucking Ellis)
i am extremely disappointed that awful systems users have fallen for it for a moment
Side note: The way I’ve seen clanker used has been for the AIs themselves, not their users. I’ve mostly seen the term in the context of star wars memers eager to put their anti-droid memes and jokes to IRL usage.
Same here, I’ve never actually seen the term “clanker” be used in reference to a person using the AI, but the AI itself. Which to me was analogous to going to an expensive bakery and accusing the bread of ripping you off instead of the baker (or whoever was setting prices, which wouldn’t be the bread).
If there was any sort of op going on (which I don’t think there is), I’d guess it would be from the AI doomers who want people to think of these things as things with enough self-awareness that something like “clanker” would actually insult them (but, again, probably not, IMO).
The truth is that we feel shame to a much greater degree than the other side, which makes it pretty easy to divide us on these annoying trivialities.
My personal hatred of tone policing is greater than my sense of shame, but I imagine that isn't something to expect from most.
Slightly related to the ‘it is an op’ thing, did you look at the history of the wikipedia page for clanker? There were 3 edits to the page before 1 June 2025.
Angela Collier: Dyson spheres are a joke.
Spoiler: Turns out Dyson agreed.
thanks for linking this, was fun to watch
hadn't seen that saltman clip (been real busy running around pretty afk the last few weeks), but it's a work of art. despite grokking the dynamics, it continues to be astounding just how vast the gulf between fact and market vibes is
and as usual, Collier does a fantastic job ripping the whole idea a new one in a most comprehensive manner
Woke up to some hashtag spam this morning
AI’s Biggest Security Threat May Be Quantum Decryption
which appears to be one of those evolutionary "transitional forms" between grifts.
The sad thing is the underlying point is almost sound (hoarding data puts you at risk of data breaches, and leaking sensitive data might be Very Bad Indeed) but it is wrapped up in so much overhyped nonsense it is barely visible. Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.
(it also appears to be a month-old story, but I guess there’s no reason for mastodon hashtag spammers to be current 🫤)
Is there already a word for “an industry which has removed itself from reality and will collapse when the public’s suspension of disbelief fades away”?
Calling this just “a bubble” doesn’t cut it anymore, they’re just peddling sci-fi ideas now. (Metaverse was a bubble, and it was stupid as hell, but at least those headsets and the legless avatars existed.)
I would actually contend that crypto and the metaverse both qualify as early precursors to the modern AI post-economic bubble. In both cases you had a (heavily politicized) story about technology attract investment money well in excess of anyone actually wanting the product. But crypto ran into a problem where the available products were fundamentally well-understood forms of financial fraud, significantly increasing the risk because of the inherent instability of that (even absent regulatory pressure the bezzle eventually runs out and everyone realizes that all the money in those ‘returns’ never existed). And the VR technology was embarrassingly unable to match the story that the pushers were trying to tell, to the point where the next question, whether anyone actually wanted this, never came up.
GenAI is somewhat unique in that the LLMs can do something impressive in mimicking the form of actual language or photography or whatever it was trained on. And on top of that, you can get impressively close to doing a lot of useful things with that, but not close enough to trust it. That’s the part that limits genAI to being a neat party trick, generating bulk spam text that nobody was going to read anyways, and little more. The economics don’t work out when you need to hire someone skilled enough to do the work to take just as much time double-checking the untrustworthy robot output, and once new investment capital stops subsidizing their operating costs I expect this to become obvious, though with a lot of human suffering in the debris. The challenge of “is this useful enough to justify paying its costs” is the actual stumbling block here. Older bubbles were either blatantly absurd (tulips, crypto) or overinvestment as people tried to get their slice of a pie that anyone with eyes could see was going to be huge (railroad, dotcom). The combination of purely synthetic demand with an actual product is something I can’t think of other examples of, at this scale.
There are many such terms! Just look at the list of articles under “See Also” for “The Emperor’s New Clothes”. My favorite term, not listed there, is “coyote time”: “A brief delay between an action and the consequences of that action that has no physical cause and exists only for comedic or gameplay purposes.” Closely related is the fact that industries don’t collapse when the public opinion shifts, but have a stickiness to them; the guy who documented that stickiness is often quoted as saying, “Market[s] can remain irrational a lot longer than you [and I] can remain solvent.”
I don’t know if it quite applies here since all the money is openly flowing to nVidia in exchange for very real silicon, but I’m partial to “the bezzle” - referring to the duration of time between a con artist taking your money and you realizing the money is gone. Some cons will stretch the bezzle out as long as possible by lying and faking returns to try and get the victims to give them even more money, but despite how excited the victims may be about this period the money is in fact already gone.
I happened to learn recently that that’s probably not from Keynes:
it’s like there was an industry made entirely out of bullshit jobs
Is there already a word for “an industry which has removed itself from reality and will collapse when the public’s suspension of disbelief fades away”?
If there is, I haven’t heard of it. To try and preemptively coin one, “artificial industry” (“AI” for short) would be pretty fitting - far as I can tell, no industry has unmoored itself from reality like this until the tech industry pulled it off via the AI bubble.
Calling this just “a bubble” doesn’t cut it anymore, they’re just peddling sci-fi ideas now. (Metaverse was a bubble, and it was stupid as hell, but at least those headsets and the legless avatars existed.)
I genuinely forgot the metaverse existed until I read this.
It’s a financial security threat, you see
that’s why you should keep your at-risk data on quantum ai blockchain!!~
linkedin thotleedir posts directly into your mailbox? gonna have to pour one out for you
AI’s Biggest Security Threat May Be Quantum Decryption
an absolutely wild grab-bag of words. the more you know about each piece, the more surreal the sentence becomes. unintentional art!
Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.
At this point, I'm gonna chalk the refusal to stop hoarding up to ideology more than anything else. The tech industry clearly sees data not as information to be taken sparingly, used carefully, and deleted when necessary, but as Objective Reality Units™ which are theirs to steal and theirs alone.
Getting pretty far afield here, but goddamn Matt Yglesias’s new magazine sucks:
The case for affirmative action for conservatives
“If we cave in and give the right exactly what they want on this issue, they’ll finally be nice to us! Sure, you might think based on the last 50,000 times we’ve tried this strategy that they’ll just move the goalposts and demand further concessions, but then they’ll totally look like hypocrites and we’ll win the moral victory, which is what actually matters!”
@PMMeYourJerkyRecipes @BlueMonday1984
The guy from the Federalist *doesn’t* want more ideological diversity in academia, he wants *less*. But he’ll settle for more as an interim goal until he can purge the wrong-thinkers.
The case for affirmative action for conservatives
We have that already, it's called business school.
OK. So, this next thing is pretty much completely out of the sneerosphere, but it pattern matches to what we’re used to looking at: a self-styled “science communicator” mansplaining a topic they only have a reductive understanding of: Hank Green gets called out for bad knitting video
TIL Hank Green, the milquetoast BlueSky poster, also has some YouTube channel. How quaint.
I think every time I learn That Guy From BlueSky also has some other gig different from posting silly memes I lose some respect for them.
E.g. I thought Mark Cuban was just a dumb libertarian shitposter, but then it turned out he has a cuntillion dollars and also participated in a show unironically called “Shark Tank” that I still don’t 100% believe was a real thing because by god
I figured he’d be a lot better known for his YouTube career than for his bsky posting. I see his stuff all the time in my recommendations, though his style isn’t my cup of tea so I seldom watch any of them.
I haven’t seen the YouTube recommendation page in so long I wouldn’t know. Invidious my beloved <3
What’s up with all the websites that tell me “you’ve reached the limit of free articles for the month” even though I’ve literally never entered that site before in my life. Stop gaslighting me you cunts.
Anyway, here’s the archive
The limit is zero, that’s all.
I disabled javascript and the popup went away
I mean if it gets too hot he could try the traditional fiber arts dodge for internet hate and fake his own death.
Some of our younger readers might not be fully inoculated against high-control language. Fortunately, cult analyst Amanda Montell is on Crash Course this week with a 45-minute lecture introducing the dynamics of cult linguistics. For example, describing Synanon attack therapy, YouTube comments, doomscrolling, and maybe a familiar watering hole or two:
You know when people can’t stop posting negative or conspiratorial comments, thinking they’re calling someone out for some moral infraction, when really they’re just aiming for clout and maybe catharsis?
Cultish and The Age of Magical Overthinking are top shelf.
If Yud and Soares are on a book tour I want them to go on Hot Ones.
Forgive my stupidity, but I don’t know what this means. Or it’s referencing something I don’t know.
All is forgiven. Hot Ones is an internet interview show. Its core premise is that the host and interviewee conduct their interview while eating increasingly spicier chicken wings. As with any interview platform, it’s a common stop for public figures to hit up, especially on PR tours.
The show has a reputation for researching its guests well and asking insightful/deep questions. There’s also an element to it where, for some guests, as they experience spicier wings, they are unable to keep up whatever facade or persona they usually keep up in interviews.
I wasn’t making any profound commentary; I want to see Yud in pain while trying to explain alignment.
Thanks.
TBF I can’t say I’m sold on the notion of watching Yudkowsky eat spicy wings while also arguing with someone.
It’s only fair that his audience shouldn’t be the only ones suffering when that happens.
it would be incredible if yud’s one of the types to lose his focus around sauce 3, would do a real kicker to the shine of his grift
YouTube interview show where the interviewee is fed hot-sauce coated chicken wings of escalating spiciness
As an aside, my personal tolerance is such that if I ever go on there, I’m going to end up bankrupting the fuckers
someday™ I’ll get around to looking into getting a season pack from the states to here (which possibly might be distinctly non-trivial, and if it is I’ll have bother trying to figure out the logistics of it, which ugh)
An example from a local Nashville institution:
https://www.400degreeshotchicken.com/s/order?item=77#14
I’d gladly help you out with it!
will keep the offer in mind when I have the spoons and round tuits for it :)
Well, I have had some of their hot sauces before and can confirm that they are not bad to pretty good, and are at a good gift sort of price point (if you live in the US).
Wolfram has a blog post about lambda calculus. As usual, there are no citations and the bibliography is for the wrong blog post and missing many important foundational papers. There are no new results in this blog post (and IMO barely anything interesting) and it’s mostly accurate, so it’s okay to share the pretty pictures with friends as long as the reader keeps in mind that the author is writing to glorify themselves and make drawings rather than to communicate the essential facts or conduct peer review. I will award partial credit for citing John Tromp’s effort in defining these diagrams, although Wolfram ignores that Tromp and an entire community of online enthusiasts have been studying them for decades. But yeah, it’s a Mathematica ad.
In which I am pedantic about computer science (but also where I'm putting most of my sneers too, including a punchline)
For example, Wolfram’s wrong that every closed lambda term corresponds to a combinator; it’s a reasonable assumption that turns out to not make sense upon closer inspection. It’s okay, because I know that he was just quoting the same 1992 paper by Fokker that I cited when writing the esolangs page for closed lambda terms, which has the same incorrect claim verbatim as its first sentence. Also, credit to Wolfram for listing Fokker in the bibliography; this is one of the foundational papers that we’d expect to see. With that in mind, here’s some differences between my article and his.
The name “Fokker” appears over a dozen times in my article and nowhere in Wolfram’s article. Also, I love being citogenic and my article is the origin of the phrase “Fokker size”. I think that this is a big miss on his part because he can’t envision a future where somebody says something like “The Fokker metric space” or “enriched over Fokker size”. I’ve already written “some closed lambda terms with small Fokker size” in the public domain and it’s only a matter of time until Zipf’s law wears it down to “some small Fokkers”.
Also, while “Tromp” only appears once in my article, it appears next to somebody known only as “mtve” when they collaborated to produce what Wolfram calls a “size-7 lambda” known as Alpha. I love little results like these which aren’t formally published and only exist on community wikis. Would have been pretty fascinating if Alpha were complete, wouldn’t it Steve!? Would have merited a mention of progress in the community amongst small lambda terms, huh Steve!?
I also checked the BB Gauge for Binary Lambda Calculus (BLC), since it’s one of the topics I already wrote up, and found that Wolfram’s completely omitted Felgenhauer from the picture too, with that name in neither the text nor bibliography. Felgenhauer’s made about as many constructions in BLC as Tromp; Felgenhauer 2014 constructs that Goodstein sequence, for example. Also, Wolfram didn’t write that sequence, they sourced it from a living paper not in the bibliography, written by…Felgenhauer! So it’s yet another case of Wolfram just handily choosing to omit a name from a decade-old result in the hopes that somebody will prefer his new presentation to the old one.
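Since every one of these sizes depends on which encoding is doing the counting, here's a minimal sketch of the bit-cost accounting in Tromp's BLC, assuming his standard encoding; the de Bruijn representation and the names in it are mine, purely for illustration:

```python
from dataclasses import dataclass

# Throwaway de Bruijn representation of closed lambda terms
# (names are mine, for illustration only).
@dataclass
class Var:
    index: int  # 1-based de Bruijn index

@dataclass
class Lam:
    body: "Term"

@dataclass
class App:
    fun: "Term"
    arg: "Term"

Term = Var | Lam | App  # requires Python 3.10+

def blc_size(t: Term) -> int:
    """Bit length under Tromp's Binary Lambda Calculus encoding:
    a variable with index n is n ones plus a terminating zero (n + 1 bits),
    an abstraction is the prefix 00 plus its body,
    an application is the prefix 01 plus both subterms."""
    match t:
        case Var(n):
            return n + 1
        case Lam(body):
            return 2 + blc_size(body)
        case App(f, a):
            return 2 + blc_size(f) + blc_size(a)

# The identity \x.x encodes as 0010: four bits.
assert blc_size(Lam(Var(1))) == 4
# The self-application \x.x x encodes as 00011010: eight bits.
assert blc_size(Lam(App(Var(1), Var(1)))) == 8
```

Fokker size is a different metric again; I'll defer to my article for that definition.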
Finally, what's the point of all this? I think Wolfram writes these posts to advertise Mathematica (which is actually called Wolfram Mathematica and uses a programming language called Wolfram BuT DiD YoU KnOw). He also promotes his attempt at rewriting all of physics to have his logo upon it, and this blog post is a gateway to that project in the sense that Wolfram genuinely believes that staring at these chaotic geometries will reveal the equations of divine nature. Meanwhile I wrote my article in order to ~~win an IRC argument against~~ make a reasonable presentation of an interesting phenomenon in computer science directly to Felgenhauer & Tromp, and while they don't fully agree with me, we together can't disagree with what's presented in the article. That's peer review, right?

Having followed PLT stuff online for more than a quarter century now, I can state with confidence that basically everyone writing about lambda calculus online is doing it to glorify themselves.
Haven’t read the source paper yet (apparently it came out two weeks ago, maybe it already got sneered?) but this might be fun: OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws.
Full of little gems like
Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.
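The incentive is plain napkin math. A minimal sketch of my own (not the paper's notation): under binary grading an abstention scores zero while a guess scores its probability of being right, so any nonzero confidence beats saying "I don't know".

```python
def expected_binary_score(p_correct: float, abstain: bool) -> float:
    """Expected score on a benchmark that awards 1 for a correct
    answer and 0 for everything else, "I don't know" included."""
    return 0.0 if abstain else p_correct

# A wild guess with a 1% chance of being right still strictly beats
# abstaining, so a model optimized against this metric learns to
# bullshit confidently rather than admit uncertainty.
assert expected_binary_score(0.01, abstain=False) > \
       expected_binary_score(0.01, abstain=True)
```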
I had assumed that the problem was solely technical, that the fundamental design of LLMs meant that they’d always generate bullshit, but it hadn’t occurred to me that the developers actively selected for bullshit generation.
It seems kinda obvious in retrospect… slick bullshit extrusion is very much what is selling “AI” to upper management.
it’s a shitty paper but even they couldn’t avoid the point forever
Well, I’ll give them the text equivalent of a “you tried” sticker for finally admitting their automatic bullshit machines produce (gasp) bullshit, but the main sneerable thing I see is the ISO Standard OpenAI Anthropomo-
the developers actively selected for bullshit generation
every_tf2_class_laughing_at_once.wav
(Maximising lies extruded per ocean boiled was definitely what they were going for in hindsight, but it genuinely cracks me up to see them come out and just say it)
New post from tante: The “Data” Narrative eats itself, using the latest Pivot to AI as a jumping off point to talk about synthetic data.