Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Merry Christmas, happy Hanukkah, and happy holidays in general!)

  • scruiser@awful.systems · 3 points · 2 hours ago

    The PauseAI leader writes a hard takedown of the EA movement: https://forum.effectivealtruism.org/posts/yoYPkFFx6qPmnGP5i/thoughts-on-my-relationship-to-ea-and-please-donate-to

    They may be a doomer with some crazy beliefs about AI, but they’ve accurately noted that EA is pretty firmly captured by Anthropic and the LLM companies and can’t effectively advocate against them. And they accurately call out the false-balance style and unevenly enforced tone/decorum norms that stifle the EA and LessWrong forums. Some choice quotes:

    I think, if it survives at all, EA will eventually split into pro-AI industry, who basically become openly bad under the figleaf of Abundance or Singulatarianism, and anti-AI industry, which will be majority advocacy of the type we’re pioneering at PauseAI. I think the only meaningful technical safety work is going to come after capabilities are paused, with actual external regulatory power. The current narrative (that, for example, Anthropic wishes it didn’t have to build) is riddled with holes and it will snap. I wish I could make you see this, because it seems like you should care, but you’re actually the hardest people to convince because you’re the most invested in the broken narrative.

    I don’t think talking with you on this forum with your abstruse culture and rules is the way to bring EA’s heart back to the right place

    You’ve lost the plot, you’re tedious to deal with, and the ROI on talking to you just isn’t there.

    I think you’re using specific demands for rigor (rigor feels virtuous!) to avoid thinking about whether Pause is the right option for yourselves.

    Case in point: EAs wouldn’t come to protests, then they pointed to my protests being small to dismiss Pause as a policy or messaging strategy!

    The author doesn’t really acknowledge that the problems have been there since EA’s very founding, but at least they see the problems as they are now. And if they succeed, maybe they’ll help slow the waves of slop and of capital replacing workers with non-functioning LLM agents, so I wish them the best.

    • antifuchs@awful.systems · 3 points · 2 hours ago

      Impressive side fact: looks like they sent him an unsolicited email generated by a fucking LLM? A remarkable failure to read the fucking room.

      Not like that exact bullshit wasn’t attempted on the community at large by other shitheads just last year; but then, originality was never the clanker wankers’ strength.

    • V0ldek@awful.systems · 5 points · 4 hours ago

      for the creation of the shittiest widely adopted programming language since C++

      Hey! JavaScript is objectively worse, thank you very much

    • mlen@awful.systems · 4 points · 11 hours ago

      Digressing: the irony is that it’s a language with one of the best standard libraries out there. Wanna run an HTTP reverse proxy with TLS, cross-compiled for a different OS? No problem!

      Many times I’ve used it only because of that, despite it being a worse language.
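
      A minimal, hedged sketch of what that looks like using only the standard library; the upstream URL and the cert/key paths are placeholders:

      ```go
      // Stdlib-only reverse proxy with TLS. Cross-compile for another OS
      // with e.g. GOOS=linux GOARCH=arm64 go build.
      package main

      import (
          "log"
          "net/http"
          "net/http/httputil"
          "net/url"
      )

      func main() {
          // Hypothetical upstream; swap in the real backend address.
          backend, err := url.Parse("http://127.0.0.1:8080")
          if err != nil {
              log.Fatal(err)
          }
          proxy := httputil.NewSingleHostReverseProxy(backend)

          // cert.pem/key.pem are assumed to exist next to the binary.
          log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", proxy))
      }
      ```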

  • David Gerard@awful.systems (mod) · 12 points · edited · 24 hours ago

    lol, Oliver Habryka at Lightcone is sending out begging emails; I found one in my spam folder

    (This email is going out to approximately everyone who has ever had an account on LessWrong. Don’t worry, we will send an email like this at most once a year, and you can permanently unsubscribe from all LessWrong emails here)

    as declared Lightcone Enemy #1, I thank you for your attention in sending me this missive, Mr Habryka

    In 2024, FTX sued us to claw back their donations, and around the same time Open Philanthropy’s biggest donor asked them to exit our funding area. We almost went bankrupt.

    yes that’s because you first tried ignoring FTX instead of talking to them and cutting a deal

    that second part means Dustin Moskovitz (the $ behind OpenPhil) is sick of Habryka’s shit too

    If you want to learn more, I wrote a 13,000-word retrospective over on LessWrong.

    no no that’s fine thanks

    We need to raise $2M this year to continue our operations without major cuts, and at least $1.4M to avoid shutting down. We have so far raised ~$720k.

    and you can’t even tap into Moskovitz any more? wow sucks dude. guess you’re just not that effective as altruism goes

    And to everyone who donated last year: Thank you so much. I do think humanity’s future would be in a non-trivially worse position if we had shut down.

    you run an overpriced web hosting company and run conferences for race scientists. my bayesian intuition tells me humanity will probably be fine, or perhaps better off.

    • CinnasVerses@awful.systems · 3 points · 2 hours ago

      you run an overpriced web hosting company and run conferences for race scientists. my bayesian intuition tells me humanity will probably be fine, or perhaps better off.

      Someone in the comments calls them out: “if owning a $16 million conference centre is critical for the Movement, why did you tell us that you were not responsible for all the racist speakers at Manifest or Sam ‘AI-go-vroom’ Altman at another event because it’s just a space you rent out?”

      OMG the enemies list has Sam Altman under “the people who I think have most actively tried to destroy it (LessWrong/the Rationalist movement)”

      • V0ldek@awful.systems · 2 points · 2 hours ago

        More like

        Would you like to know more

        I mean, sure

        Here’s a 13,000-word retrospective

        Ah, nah fam

  • Seminar2250@awful.systems · 16 points · 3 days ago

    https://www.windowscentral.com/microsoft/windows-11/my-goal-is-to-eliminate-every-line-of-c-and-c-from-microsoft-by-2030-microsoft-bets-on-ai-to-finally-modernize-windows

    My goal is to eliminate every line of C and C++ from Microsoft by 2030. Our strategy is to combine AI *and* Algorithms to rewrite Microsoft’s largest codebases. Our North Star is “1 engineer, 1 month, 1 million lines of code”. To accomplish this previously unimaginable task, we’ve built a powerful code processing infrastructure. Our algorithmic infrastructure creates a scalable graph over source code at scale. Our AI processing infrastructure then enables us to apply AI agents, guided by algorithms, to make code modifications at scale. The core of this infrastructure is already operating at scale on problems such as code understanding.

    wow, *and* algorithms? i didn’t think anyone had gotten that far

    • rook@awful.systems · 12 points · 2 days ago

      I suppose it was inevitable that the insufferable idiocy that software folk inflict on other fields would eventually be turned against their own kind.

      https://xkcd.com/1831/

      alt text

      An xkcd comic.

      Long-haired woman: Our field has been struggling with this problem for years!

      Laptop-wielding techbro: Struggle no more! I’m here to solve it with algorithms.

      Six months later:

      Techbro: This is really hard.

      Woman: You don’t say.

    • istewart@awful.systems · 7 points · 2 days ago

      Our algorithmic infrastructure creates a scalable graph over source code at scale.

      There’s a lot going on here, but I started by trying to parse this sentence (assuming it wasn’t barfed out by an LLM). I’ve lately become dissatisfied with my own writing being too redundant and overwrought, which probably shows I’m out of practice at serious writing, but what is this future Microsoft Fellow even trying to describe here?

      at scale

    • V0ldek@awful.systems · 12 points · 2 days ago

      Ah yes, I want to see how they eliminate C++ from the Windows Kernel – code notoriously so horrific it breaks and reshapes the minds of all who gaze upon it – with fucking “AI”. I’m sure autoplag will do just fine among the skulls and bones of Those Who Came Before

    • Soyweiser@awful.systems · 8 points · 2 days ago

      They have now updated this to say it is just a research project and none of it will be going live. Pinky promise (OK, I added the pinky-promise bit).

    • YourNetworkIsHaunted@awful.systems · 8 points · 2 days ago

      So maybe I’m just showing my lack of actual dev experience here, but isn’t “making code modifications algorithmically at scale” kind of definitionally the opposite of good software engineering? Like, I’ll grant that stuff is complicated but if you’re making the same or similar changes at some massive scale doesn’t that suggest that you could save time, energy and mental effort by deduplicating somewhere?

      • Sailor Sega Saturn@awful.systems · 13 points · 2 days ago

        This doesn’t directly answer your question but I guess I had a rant in me so I might as well post it. Oops.


        It’s possible to write tools that make point changes or incremental changes with targeted algorithms in a well-understood problem space, producing safe (or probably safe) changes that get reviewed by humans.

        Stuff like turning pointers into smart pointers, reducing string copying, reducing certain classes of runtime crashes, etc. You can do a lot if you hand-code C++ AST transformations using the clang/LLVM tools.
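
        To make that concrete, here’s a toy analogue in Go, which ships AST tooling in its standard library (the real C++ work would go through the clang/LLVM APIs; oldAPI/newAPI are made-up names):

        ```go
        // Parse a snippet and mechanically rewrite every call to a
        // deprecated function: a targeted, reviewable point change.
        package main

        import (
            "go/ast"
            "go/parser"
            "go/printer"
            "go/token"
            "os"
        )

        const src = `package demo

        func f() { oldAPI(1) }
        `

        func main() {
            fset := token.NewFileSet()
            file, err := parser.ParseFile(fset, "demo.go", src, 0)
            if err != nil {
                panic(err)
            }
            // Walk the AST; rename oldAPI calls to newAPI.
            ast.Inspect(file, func(n ast.Node) bool {
                if call, ok := n.(*ast.CallExpr); ok {
                    if id, ok := call.Fun.(*ast.Ident); ok && id.Name == "oldAPI" {
                        id.Name = "newAPI"
                    }
                }
                return true
            })
            printer.Fprint(os.Stdout, fset, file) // emit the transformed source
        }
        ```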


        Of course “let’s eliminate 100% of our C code with a chatbot” is… a whole other ballgame and sounds completely infeasible except in the happiest of happy paths.

        In my experience even simple LLM changes are wrong somewhere around half the time. Often in disturbingly subtle ways that take an expert to spot. Also in my experience if someone reviews LLM code they also tend to just rubber stamp it. So multiply that across thousands of changes and it’s a recipe for disaster.

        And what about third-party libraries? Corporate code bases are built on mountains of MIT-licensed C and C++ code, but surely those won’t all switch languages. Which means they’ll have a bunch of leaf code in C++ and will either need a C++-compatible target language or have to call all the C++ code via subprocesses, the C ABI, or cross-language wrappers. The former is fine in theory, but I’m not aware of any suitable languages today. The latter can have a huge impact on performance if too much data needs to be serialized and deserialized across the boundary.
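
        For the wrapper route, a minimal sketch of what the boundary looks like from Go via cgo; legacy_add is a made-up stand-in for real legacy code:

        ```go
        package main

        /*
        // Stand-in for legacy C/C++ code exposed through a C ABI.
        int legacy_add(int a, int b) { return a + b; }
        */
        import "C"

        import "fmt"

        func main() {
            // Every C.xxx call crosses the Go/C boundary; chatty interfaces
            // that shuttle lots of data across it pay for the round trips.
            fmt.Println(int(C.legacy_add(2, 3)))
        }
        ```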

        Windows in particular also has decades of baked-in behavior that programs depend on. Any change in those assumptions and whoops, some of your favorite retro Windows games don’t work anymore!


        In the worst case they’d end up with a big pile of spaghetti that mostly works as it does today but that introduces some extra bugs, is full of code that no one understands, and is completely impossible to change or maintain.

        In the best case they’re mainly using “AI” for marketing purposes, will try to achieve their goals using more or less conventional means, will ultimately fall short (hopefully not wreaking too much havoc in the process), and will give up halfway and declare the whole thing a glorious success.

        Either way, any kind of large-scale rearchitecting that isn’t seen through to the end will leave the codebase with layers. There’s the shiny new approach (never finished), the horrors that lie just beneath (also never finished), and the horrors that lie beneath those horrors (probably written circa 2003). Any new employees start by being told about the shiny new parts. The company will keep a dwindling cohort of people in some dusty corner who have been around long enough to know how the decades of failed architecture attempts are duct-taped together.

        • froztbyte@awful.systems · 3 points · 2 days ago

          In my experience even simple LLM changes are wrong somewhere around half the time. Often in disturbingly subtle ways that take an expert to spot.

          I just want to add: sailor’s reference to “expert” here is no joke. the amount of wild and subtle UB (undefined behaviour) you get in the C family is extremely high-knowledge stuff. it’s the sort of stuff that has in recent years become fashionable to describe as “cursed”, and often with good reason

          LLMs being bad at precision and detail is as perfect an antithesis in that picture as I am capable of conceiving. so any thought of a project like this that takes LLMs (or, more broadly, any of the current generative family of nonsense) as a dependency in its implementation is just damn wild to me

          (and just in case: this post is not an opportunity to quibble about PLT and about what may be or become possible.)

        • Soyweiser@awful.systems · 7 points · 2 days ago

          Some of the horrors are also going to be load-bearing for fixes, in ways people don’t properly realize, because the space of computers that can run Windows is so vast.

          I think something like that happened with Twitter: when Musk did his bull-in-a-china-shop impression on the stack, they cut out some code that millions of Indians on old phones needed to access the Twitter app.

      • swlabr@awful.systems · 10 points · edited · 2 days ago

        The short answer is no. Outside of this context, I’d say the idea of “code modifications algorithmically at scale” is the intersection of code generation and code analysis, both of which are integral parts of modern development. That being said, using LLMs to perform large-scale refactors is stupid.

        • jaschop@awful.systems · 1 point · 12 hours ago

          I think I’m with Haunted’s intuition in that I don’t really buy code generation (as in automatic code generation). My understanding was that you build a thing that takes some config and poops out code that implements certain behaviour. But couldn’t you instead build a thing that does the behaviour directly?

          I know people who worked on a system like that, and maybe there are niches where it makes sense. It just seems like it was a software-architecture fad 20 years ago, and some systems are locked into that now. It doesn’t seem like the pinnacle of engineering to me.

          • Jonathan Hendry@iosdev.space · 1 point · 6 hours ago

            @jaschop

            “But could you not build a thing instead, that does the behaviour directly?”

            Back in the day NeXT’s Interface Builder let you connect up and configure “live” UI objects, and then freeze-dry them to a file, which would be rehydrated at runtime to recreate those objects (or copies of them if you needed more.)

            Apple kept this for a while but doesn’t really do it anymore. There were complications with version control, etc.

          • swlabr@awful.systems · 2 points · 9 hours ago

            Unfortunately, the terms “code generation” and “automatic code generation” are too broad to make any sort of value judgment about their constituents. And I think evaluating software in terms of good or bad engineering is very context-dependent.

            To speak to the ideas that have been brought up:

            “making the same or similar changes at some massive scale […] suggest[s] that you could save time, energy and mental effort by deduplicating somewhere”

            So there are many examples of this in real code bases, ranging from simple to complex changes.

            • Simple: changing variable names and documentation strings to be gender-neutral (e.g. his/hers -> their) or to use non-loaded terms (black/white list -> block/allow list). Not really something you’d bother to deduplicate, but definitely something you’d change on a mass scale with a “code generation tool”. In this case, the tool is likely just a script that performs text replacement (a minimal sketch follows below).
            • Less simple: upgrading from a deprecated API (e.g. going from add_one_to(target) to add_to(target, addend)). Anyone should try to de-dupe where they can, but at the end of the day, they’ll probably have some un-generalisable API calls that can still be upgraded automatically. You’ll also have calls that need to be upgraded by hand.
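
            The promised sketch for the simple case; the file name and term list are purely illustrative:

            ```go
            // Toy "code generation tool" that is really just text replacement:
            // swap loaded terms for neutral ones in one source file.
            package main

            import (
                "fmt"
                "os"
                "regexp"
            )

            func main() {
                re := regexp.MustCompile(`\b(blacklist|whitelist)\b`)
                repl := map[string]string{
                    "blacklist": "blocklist",
                    "whitelist": "allowlist",
                }

                src, err := os.ReadFile("config.go") // hypothetical target file
                if err != nil {
                    panic(err)
                }
                out := re.ReplaceAllStringFunc(string(src), func(m string) string {
                    return repl[m]
                })
                fmt.Print(out) // a real tool would write this back to disk
            }
            ```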

            Giving a complex example here is… difficult. Anyway, I hope I’ve been able to illustrate that sometimes you have to use “code generation” because it’s the right tool for the job.

            “My understanding was you build a thing that takes some config and poops out code that does certain behaviour.”

            This hypothetical is a few degrees too abstract. This describes a compiler, for example, where the “config” is source code and “code that does certain behaviour” is the resulting machine code. Yes, you can directly write machine code, but at that point, you probably aren’t doing software engineering at all.

            I know that you probably don’t mean a compiler. But unfortunately, it’s compilers all the way down. Software is just layers upon layers of abstraction.

            Here’s an example: a web page. (NB I am not a web dev and will get details wrong here.) You can write HTML and JavaScript by hand, but most of the time you don’t. Instead, you rely on a web framework and templates to generate the HTML/JavaScript for you. I feel like that fits the config concept you’re describing. In this case, the templates and framework (and common CSS between pages) double as de-duplication.
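
            Sticking with Go rather than an actual web stack, the shape of the idea is something like this (template and page data invented for the example):

            ```go
            // One template, many pages: the shared structure lives in the
            // template, and each page is just the "config" fed into it.
            package main

            import (
                "html/template"
                "os"
            )

            func main() {
                tmpl := template.Must(template.New("page").Parse(
                    "<html><head><title>{{.Title}}</title></head>" +
                        "<body><h1>{{.Title}}</h1><p>{{.Body}}</p></body></html>\n"))

                pages := []struct{ Title, Body string }{
                    {"Home", "Welcome!"},
                    {"About", "We generate HTML so nobody writes it twice."},
                }
                for _, p := range pages {
                    if err := tmpl.Execute(os.Stdout, p); err != nil {
                        panic(err)
                    }
                }
            }
            ```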

        • V0ldek@awful.systems · 10 points · 2 days ago

          This is like the entire fucking genAI-for-coding discourse. Every time someone talks about LLMs in lieu of proper static analysis I’m just like… Yes, the things you say are of the shape of something real and useful. No, LLMs can’t do it. Have you tried applying your efforts to something that isn’t stupid?

          • BurgersMcSlopshot@awful.systems · 8 points · 2 days ago

            If there’s one thing that coding LLMs do “well”, it’s exposing our frameworks’ need for code generation. All of the enterprise applications I have worked on in modernity were, by volume, mostly boilerplate and glue. If a statistically significant portion of a code base is boilerplate and glue, then the magical statistical machine will mirror that.

            LLMs may simulate filling this need in some cases, but of course they’re spitting out statistically mid code.

            Unfortunately, committing engineering effort to write code that generates code in a reliable fashion doesn’t really capture the imagination of money or else we would be doing that instead of feeding GPUs shit and waiting for digital God to spring forth.
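
            For the record, that boring-but-reliable kind of code generation already exists in places; a sketch using Go’s go:generate convention (this assumes the real stringer tool from golang.org/x/tools is installed):

            ```go
            // A go:generate directive invokes a deterministic tool (stringer)
            // that writes the boilerplate String() method into a generated file.
            package main

            import "fmt"

            //go:generate stringer -type=Color
            type Color int

            const (
                Red Color = iota
                Green
                Blue
            )

            func main() {
                // Before `go generate` runs this prints 1; afterwards the
                // generated String() method makes it print "Green".
                fmt.Println(Green)
            }
            ```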

  • fullsquare@awful.systems · 10 points · 3 days ago

    elsewhere on lemmy, a piece from the atlantic (be warned: they quote lasker/cremieux for some reason) on a shiny new glp-1 agonist that you can order off telegram from some random-ass chinese lab:

    The tests, insofar as they are reliable, do flag problems. According to Finnrick Analytics, a start-up that provides free peptide tests and publicly shares the results, 10 percent of the retatrutide samples it has tested in the past 60 days had issues of sterility, purity, or incorrect dosing. Two other peptide-testing labs, Trustpointe and Janoshik, have said in interviews with Rory Hester, a.k.a. PepTok on YouTube, that they see, respectively, an overall fail rate of 20 percent and a 3 to 5 percent fail rate for sterility alone across all peptides.

    isn’t dear leader EY taking this? it’s not approved yet, so it’s not available on the normal market, and because it’s a peptide it’s i.m. only. also, the side effects, not just for this one but for the entire class, include anhedonia, which must be a very rational thing to risk without medical need. chat, what’s your p(infected sore on EY’s ass)

    • saucerwizard@awful.systems · 10 points · edited · 3 days ago

      I’m running Ozempic and I haven’t noticed any anhedonia, tbf. I think Yud claimed he had tried them and that they failed to work or something.

      (fun fact: Ozempic’s going generic up here in a few months because Novo fucked up the patent application. The peptide-market thing gives me the willies.)

      • Charlie Stross@wandering.shop · 7 points · 2 days ago

        @saucerwizard Do you drink alcohol, and if so, what has semaglutide done to your desire to drink? (I know my alcohol consumption crashed by about 80% when I went on Rybelsus—the oral formulation of semaglutide, the GLP-1 agonist in Ozempic™—and it’s a common enough side-effect that it’s undergoing clinical trials as an anti-addiction medication.)

        • saucerwizard@awful.systems · 7 points · 2 days ago

          It’s gone completely out the window: anything more than a beer or two and I get nauseous. I get a free bottle of hard liquor from work every quarter (distillery) and I’m completely unable to touch the stuff now.

          I also quit marijuana entirely; the only thing remaining is nicotine (which I do consider dropping from time to time). All in all, I think it’s been a good thing, since I’m not sure I have the healthiest relationship with substances.

          • Charlie Stross@wandering.shop · 10 points · 2 days ago

            @saucerwizard I still drink *socially*, but it’s very much an “I’ll have a pint or two at the pub where I’m going to see friends”, rather than “I’m going to the pub for a drink (with friends)”. And zero inclination to drink at home, even with meals. Not that I did so regularly before, but semaglutide caused a marked loss of interest on my part.

            I was *really* worried for the first six months that it had nuked my pleasure in writing, which would have been a disaster—it’s my job—but I recovered.

      • fullsquare@awful.systems · 10 points · 3 days ago

        good for you ig. ozempic is actually small enough (and profitable enough) to make synthetically, but novo’s process is to make a linear precursor by fermentation, purify that, then tack on its side chain and N-terminal H-His-Aib- using regular peptide chemistry methods. no such luck with retatrutide tho, it has to be made entirely synthetically. the real big deal however will be a small-molecule drug that targets this receptor, because that means pills instead of injections from day 1

  • scruiser@awful.systems · 10 points · 3 days ago

    I posted last week about Eliezer hating on OpenPhil for having too-long AGI timelines. He has continued to rage in the comments and replies on his call-out post. It turns out he also hates AI 2027!

    https://www.lesswrong.com/posts/ZpguaocJ4y7E3ccuw/contradict-my-take-on-openphil-s-past-ai-beliefs?commentId=3GhNaRbdGto7JrzFT

    I looked at “AI 2027” as a title and shook my head about how that was sacrificing credibility come 2027 on the altar of pretending to be a prophet and picking up some short-term gains at the expense of more cooperative actors. I didn’t bother pushing back because I didn’t expect that to have any effect. I have been yelling at people to shut up about trading their stupid little timelines as if they were astrological signs for as long as that’s been a practice (it has now been replaced by trading made-up numbers for p(doom)).

    When we say it, we are sneering; but when Eliezer calls them stupid little timelines and compares them to astrological signs, it is a top-quality LessWrong comment! Also, a reminder that I don’t think anyone here needs: Eliezer is a major contributor to the rationalist attitude of venerating super-forecasters and super-predictors, and to promoting the idea that rational, smart, well-informed people should be able to put together super-accurate predictions!

    So to recap: long timelines are bad and mean you are a stuffy bureaucracy obsessed with credibility, but short timelines are also bad and going to burn the doomers’ credibility; clearly you should just agree with Eliezer’s views, which don’t include any hard timelines or P(doom)s! (As cringey as they are, at least hard timelines and P(doom)s commit their holders to predictions that can be falsified.)

    Also, the mention of sacrificing credibility makes me think Eliezer is willfully playing the game of avoiding hard predictions to keep the grift going (as opposed to deluding himself about reasons not to commit to a hard timeline or at least put out some firm P()s).

    • Amoeba_Girl@awful.systems · 3 points · 2 hours ago

      I believe in you Eliezer! You’re starting to recognise that the AI doom stuff is boring nonsense! I’m cheering for you to dig yourself out of the philosophical hole you’ve made!

    • V0ldek@awful.systems · 10 points · 2 days ago

      it has now been replaced by trading made-up numbers for p(doom)

      Was he wearing a hot-dog costume while typing this wtf

      • scruiser@awful.systems · 8 points · 2 days ago

        I really don’t know how he can fail to see the irony or hypocrisy in complaining about people trading made-up probabilities, but apparently he has had that complaint about P(doom) for a while. Maybe he failed to write a call-out post about it because any criticism of P(doom) could also be leveled against the entire rationalist project of trying to assign probabilities to everything with poor justification.

    • Evinceo@awful.systems · 9 points · 2 days ago

      Eliezer is a major contributor to the rationalist attitude of venerating super-forecasters and super-predictors and promoting the idea that rational smart well informed people should be able to put together super accurate predictions!

      This is a necessary component of his imagined AGI monster. Good thing it’s bullshit.

        • blakestacey@awful.systems · 9 points · 2 days ago

          And looking that up led me to this passage from Bertrand Russell:

          The more tired a man becomes, the more impossible he finds it to stop. One of the symptoms of approaching nervous breakdown is the belief that one’s work is terribly important and that to take a holiday would bring all kinds of disaster. If I were a medical man, I should prescribe a holiday to any patient who considered his work important.

    • YourNetworkIsHaunted@awful.systems · 3 points · 2 days ago

      I’m pretty sure that Atlas Shrugged is actually just cursed and nobody has ever finished it. John Galt’s speech gets two pages longer whenever you finish one.

      And I think the challenge with engaging with Rand as a fiction author is that, put bluntly, she is bad at writing fiction. The characters and their world don’t make any sense outside of the allegorical role they play in her moral and political philosophy, which means you’re not so much reading a story with thought behind it as reading a philosophical treatise that happens to take the form of dialogue. It’s a story in the same way that Plato’s Republic is a story, but the Republic at least benefits from historical context about its different speakers.