• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 24th, 2023







  • itsnotlupus@lemmy.world to Linux@lemmy.ml · raw man files?

    You can list every man page installed on your system with man -k . or just apropos . (the trailing dot is a regex that matches everything).
    But that’s a lot of random junk. If you only want “executable programs or shell commands”, only grab man pages in section 1 with apropos -s 1 .
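
    For example (your listing will differ), each line is the page name, its section, and a short description, which is why the one-liner further down only keeps the first field:

    apropos -s 1 . | head -n 3
    # bash (1)             - GNU Bourne-Again SHell
    # cat (1)              - concatenate files and print on the standard output
    # chmod (1)            - change file mode bits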

    You can get the path of a man page by using whereis -m pwd (replace pwd with your page name).
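
    For instance (the exact path varies by distro), the output is the page name, a colon, then the path; that trailing colon is why the script below trims the last character of the name:

    whereis -m pwd
    # pwd: /usr/share/man/man1/pwd.1.gz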

    You can convert a man page to html with man2html (may require apt install man2html or whatever equivalent applies to your distro).
    That tool adds a couple of useless lines at the beginning of each file, so we’ll want to pipe its output through tail -n +3 to get rid of them.
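
    To sanity-check on a single page before looping over everything (the path here is an assumption based on a typical Debian-style layout; the two stripped lines appear to be a CGI-style Content-type header plus a blank line):

    # convert one page, dropping the two header lines man2html prepends
    man2html /usr/share/man/man1/pwd.1.gz | tail -n +3 > pwd.html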

    Combine all of these together in a questionable incantation, and you might end up with something like this:

    mkdir -p tmp ; cd tmp
    apropos -s 1 . | cut -d' ' -f1 | while read page; do whereis -m "$page" ; done | while read id path rest; do man2html "$path" | tail -n +3 > "${id::-1}.html"; done
    

    List every command in section 1 and keep only its name. For each name, get a file path with whereis. For each name and file path (ignoring the rest of the line), convert the page to html and save it as a file named $id.html (the ${id::-1} bit just trims the trailing colon whereis puts after the name).

    It might take a little while to run, but then you could run firefox . or whatever and browse the resulting mess.

    Or keep tweaking all of this until it’s just right for you.




  • More appropriate tools to detect AI-generated text, you mean?

    It’s not a thing. I don’t think it will ever be a thing. Certainly not reliably, and never as a 100% certainty tool.

    The punishment for a teacher deciding you cheated on a test or an assignment? I don’t know, but I imagine it sucks. Best case, you’d probably be at risk of failing the class and potentially the grade/semester. Worst case you might get expelled for being a filthy cheater. Because an unreliable tool said so and an unreliable teacher chose to believe it.

    If you’re asking what teachers should do to defend against AI-generated content, I’m afraid I don’t have an answer. It’s akin to giving students math homework but demanding that they don’t use calculators. That could have been reasonable before calculators were a thing, but not anymore, so teachers don’t expect that rule to make sense and don’t put it on students.




  • One of my guilty pleasures is to rewrite trivial functions to be statement-free.

    Since I’d be too self-conscious to put those in a PR, I keep those mostly to myself.

    For example, here’s an XPath wrapper:

    const $$$ = (q,d=document,x=d.evaluate(q,d),a=[],n=x.iterateNext()) => n ? (a.push(n), $$$(q,d,x,a)) : a;
    

    Which you can use as $$$("//*[contains(@class, 'post-')]//*[text()[contains(.,'fedilink')]]/../../..") to get an array of matching nodes.

    If I was paid to write this, it’d probably look like this instead:

    function queryAllXPath(query, doc = document) {
        // collect every node the XPath iterator yields
        const array = [];
        const result = doc.evaluate(query, doc);
        let node = result.iterateNext();
        while (node) {
            array.push(node);
            node = result.iterateNext();
        }
        return array;
    }
    

    Seriously boring stuff.

    Anyway, since var/let/const are statements, I have no choice but to use default parameters instead, and since loops are statements as well, recursion saves the day.

    Would my quality of life improve if the lambda body could be written as => if n then a.push(n), $$$(q,d,x,a) else a ? Obviously, yes.



  • There have been efforts to build reputation systems that don’t rely on central servers, like early-day Bitcoin’s Web of Trust, which let folks rate other folks with public-key crypto, the idea being an accurate and fair trust rating for participants without the possibility of a middle-man putting their thumb on the scale.

    One problem with it is that it was still perfectly practical for bad actors to accumulate good ratings, then cash out their hard-earned reputation on a large scam, such as the “Bitcoin Savings & Trust” (to the tune of $40 million in that particular case). That quite possibly made having the system measurably worse than having none at all, since it induced participants into making faulty judgments in the first place.

    I think the main practical value of something like reddit’s karma is as an indication of account age and activity, both of which can probably be measured in other, less gamified ways.