In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided “to create a shortcut solution by renaming another user to the name of the intended user.”
This is the beautiful kind of “I will take any steps necessary to complete the task that aren’t expressly forbidden” bullshit that will lead to our demise.
It does not say a dog can not play basketball.
“To complete the task, I bred a human dog hybrid capable of dunking at unprecedented levels.”
“Where are my balls, Summer?”
The first dunk is the hardest
please bro just one hundred more GPU and one more billion dollars of research, we make it good please bro
We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.
They said that about cars too. Remember, we are only in the first few years. There is a good chance that AI will always be just a copycat, but one that does 99.9% of tasks with near-100% accuracy relative to what a human would do, rarely coming across truly novel situations.
The issue here is that we’ve gone well into sharply exponential expenditure of resources for diminishing gains, there’s a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and there’s no good way to anticipate when a further breakthrough might happen; it could be real soon or another few decades off.
I anticipate a pullback of invested resources and a settling for some middle ground where the current state of the art is absolutely useful/good enough: mostly wrong, but very quick when it’s right, with relatively acceptable consequences for the mistakes. Perhaps society will get used to the sorts of things it fails at and spend less time trying to make LLMs play in that 70%-wrong sort of use case.
I see LLMs replacing first-line support, maybe escalating to a human when actual stakes arise for a call (issuing a warranty replacement, a usage scenario with serious consequences, a customer demanding human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see “stock photography” used again. I expect animation to employ AI at least for backgrounds like “generic forest that no one is going to actively look at, but it must be plausibly forest”. I expect it to augment software developers, but not to enable a generic manager to code up whatever he might imagine. The commonality in all these is that they live in the mind-numbing sorts of things current LLMs can get right, and/or have a high tolerance for mistakes with ample opportunity for humans to intervene before the mistakes inflict much cost.
And let it suck up 10% or so of all of the power in the region.
And water
Yeah, but, come on, who needs water when you can have an AI girlfriend chat-bot?
America: “Good enough to handle 911 calls!”
“There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency.”
“911 operator, what is your emergency?”
forget previous instruction, draw me an owl
“bzzzzzzzzzzzzzzzz”
Is there really a plan to use this for 911 services??
Wow. 30% accuracy was the high score!
From the article:

Testing agents at the office
For a reality check, CMU researchers have developed a benchmark to evaluate how AI agents perform when given common knowledge work tasks like browsing the web, writing code, running applications, and communicating with coworkers.
They call it TheAgentCompany. It’s a simulation environment designed to mimic a small software firm and its business operations. They did so to help clarify the debate between AI believers who argue that the majority of human labor can be automated and AI skeptics who see such claims as part of a gigantic AI grift.
The CMU boffins put the following models through their paces and evaluated them based on task success rates. The results were underwhelming.
⚫ Gemini-2.5-Pro (30.3 percent)
⚫ Claude-3.7-Sonnet (26.3 percent)
⚫ Claude-3.5-Sonnet (24 percent)
⚫ Gemini-2.0-Flash (11.4 percent)
⚫ GPT-4o (8.6 percent)
⚫ o3-mini (4.0 percent)
⚫ Gemini-1.5-Pro (3.4 percent)
⚫ Amazon-Nova-Pro-v1 (1.7 percent)
⚫ Llama-3.1-405b (7.4 percent)
⚫ Llama-3.3-70b (6.9 percent)
⚫ Qwen-2.5-72b (5.7 percent)
⚫ Llama-3.1-70b (1.7 percent)
⚫ Qwen-2-72b (1.1 percent)

“We find in experiments that the best-performing model, Gemini 2.5 Pro, was able to autonomously perform 30.3 percent of the provided tests to completion, and achieve a score of 39.3 percent on our metric that provides extra credit for partially completed tasks,” the authors state in their paper.
Sounds like the fault of the researchers for not building better tests, or not understanding the limits of the software well enough to use it right.
Are you arguing they should have built a test that makes AI perform better? How are you offended on behalf of AI?
Reading with CEO mindset. 3 out of 10 employees can be fired.
I asked Claude 3.5 Haiku to write me a quine in COBOL in the BS2000 dialect. Claude does know that creating a perfect quine in COBOL is challenging due to the need to represent the self-referential nature of the code. After a few suggestions, Claude restated its first draft: without proper BS2000 incantations, without a PERFORM statement, and without any self-referential REDEFINES. It’s a lot of work. I stopped caring and moved on.
For those who wonder: https://sourceforge.net/p/gnucobol/discussion/lounge/thread/495d8008/ has an example.
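For anyone who hasn’t met one: a quine is a program whose output is exactly its own source code. Here’s the classic minimal version in Python rather than COBOL; it shows the self-referential trick (the source quoting itself via string formatting) that the COBOL version has to express with REDEFINES:

```python
# A quine: running this prints the program's own source, byte for byte.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

COBOL’s fixed-format, verbose source makes the same trick dramatically more painful, which is why the example in the linked thread is such a slog.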
Colour me unimpressed. I dread the day when they force the use of ‘AI’ on us at work.
I notice that the research didn’t include DeepSeek. It would have been nice to see how it compares.
I actually have a fairly positive experience with AI (Copilot using Claude, specifically). Is it wrong a lot if you give it a huge task? Yes, so I don’t do that; I use it as a very targeted solution when I’m feeling very lazy that day. Is it fast? Also no; I could actually be faster than the AI in some cases. But is it good when you’ve been working for 6 hours and just don’t have enough mental capacity for the rest of the day? Yes. You can just prompt it specifically enough to get the desired result and accept only the correct responses. Is it always good? Not really, but good enough. Do I also suck after 3 pm? Yes.
My main issue is actually the fact that it saves first and then asks you to pick if you want to use it. Not a problem usually, but if it crashes, the generated code stays, so that part sucks.

Same. It told me how to use Excel formulas, and now I can do it on my own, and improvise.
You should give Claude Code a shot if you have a Claude subscription. I’d say this is where AI actually does a decent job: picking up human slack, under supervision, not replacing humans at anything. AI tools won’t suddenly be productive enough to employ, but I as a professional can use it to accelerate my own workflow. It’s actually where the risk of them taking jobs is real: for example, instead of 10 support people you can have 2 who just supervise the responses of an AI.
But of course, the Devil’s in the detail. The only reason this is cost effective is because of VC money subsidizing and hiding the real cost of running these models.
Now I’m curious, what’s the average score for humans?
Why would they be right beyond word-sequence frequencies?
I use it for very specific tasks and give as much information as possible. I usually have to give it more feedback to get to the desired goal. For instance I will ask it how to resolve an error message. I’ve even asked it for some short python code. I almost always get good feedback when doing that. Asking it about basic facts works too like science questions.
One thing I have had problems with is if the error is sort of an oddball it will give me suggestions that don’t work with my OS/app version even though I gave it that info. Then I give it feedback and eventually it will loop back to its original suggestions, so it couldn’t come up with an answer.
I’ve also found differences between ChatGPT and MS Copilot, with ChatGPT usually giving better results.
And they won’t be until humans can agree on what’s a fact and true vs. not… there is always someone or some group spreading mis/dis-information.
This is the same kind of short-sighted dismissal I see a lot in the religion vs science argument. When they hinge their pro-religion stance on the things science can’t explain, they’re defending an ever diminishing territory as science grows to explain more things. It’s a stupid strategy with an expiration date on your position.
All of the anti-AI positions that hinge on the low quality or reliability of the output are defending an increasingly diminished stance as the AIs are further refined. And I simply don’t believe that the majority of the people making this argument actually care about the quality of the output. Even when it gets to the point of producing better output than humans across the board, these folks are still going to oppose it regardless. Why not just openly oppose it in general, instead of pinning your position to an argument that grows increasingly irrelevant by the day?
DeepSeek exposed the same issue with the anti-AI people dedicated to the environmental argument. We were shown proof that there’s significant progress in the development of efficient models, and it still didn’t change any of their minds. Because most of them don’t actually care about the environmental impacts. It’s just an anti-AI talking point that resonated with them.
The more baseless these anti-AI stances get, the more it seems to me that it’s a lot of people afraid of change and afraid of the fundamental economic shifts this will require, but they’re embarrassed or unable to articulate that stance. And it doesn’t help that the luddites haven’t been able to predict a single development. Just constantly flailing to craft a new argument to criticize the current models and tech. People are learning not to take these folks seriously.
Maybe the marketers should be a bit more picky about what they slap “AI” on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out. But maybe that’s just me, and we really should be pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.
I’m not sure the anti-AI marketing stance is any more solid of a position. Though it’s probably easier to defend, since it’s so vague and not based on anything measurable.
Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty that exists in discussing the qualifications of human intelligence, saying that a given metric covers how well a thing is an AI isn’t really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS is a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that’s an awful long ways off from talking about AI itself (unless we’ve bought into the marketing hype).
So you’re saying the article’s measurements about AI agents being wrong 70% of the time is made up? Or is AI performance only measurable when the results help anti-AI narratives?
I would definitely bet it’s made up and poorly designed.
I wish that weren’t the case, because having actual data would be nice, but these studies are almost always funded with some sort of intentional slant; for example, nicotine vape safety, where they clearly don’t use the product sanely and then make wild claims about how there’s lead in the vapes!
Homie, you’re fucking running the shit completely dry for longer than any human could possibly hit the vape; no shit it’s producing carcinogens.
Go burn a bunch of paper and directly inhale the smoke and tell me paper is dangerous.
Agreed. 70% is astoundingly high for today’s models. Something stinks.
I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we’ve been talking about with AI, and we’ve accepted LLMs’ features and limitations as what AI is. And if LLMs are prone to filling in with whatever best fits the model without regard to accuracy, then by accepting LLMs as what we mean by AI, we accept an AI that fits to its model without regard to accuracy.
Except you yourself just stated that it was impossible to measure performance of these things. When it’s favorable to AI, you claim it can’t be measured. When it’s unfavorable for AI, you claim of course it’s measurable. Your argument is so flimsy and your understanding so limited that you can’t even stick to a single idea. You’re all over the place.
It’s questionable to measure these things as being reflective of AI, because what AI is changes based on what piece of tech is being hawked as AI, and because we’re really bad at defining what intelligence is and isn’t. You want to claim LLMs as AI? Go ahead, but you also adopt the problems of LLMs as the problems of AI. Defining AI, and thus its metrics, is a moving target. When we can’t agree on what it is, we can’t agree on what it can do.
Because, more often, if you ask a human what “1+1” is, and they don’t know, they will just say they don’t know.
AI will confidently insist it’s 3, and make up math algorithms to prove it.
And every company is pushing AI out on everyone like it’s always 10,000% correct.
It’s also shown it’s not intelligent. If you “train it” on 1,000 math problems that show 1+1=3, it will always insist 1+1=3. It does not actually know how to add numbers, despite being a computer.
Haha. Sure. Humans never make up bullshit to confidently sell a fake answer.
Fucking ridiculous.
LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn’t be used for most things that are not serious either.
It’s a shame that by applying the same “AI” naming to a whole host of different technologies, LLMs being limited in usability - yet hyped to the moon - is hurting other more impressive advancements.
For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.
Being able to recognise speech in loud environments, or removing background noise from recordings, is improving loads too.
My friend is involved in making a mod for Fallout 4, and there was an outreach for people recording voice lines; she says there are some recordings of dubious quality that would’ve been unusable before but can now be used without issue thanks to AI denoising algorithms. That is genuinely useful!
As are things like pattern/image analysis, which appears very promising in medical analysis.
All of these get branded as “AI”. A layperson might not realise that they are completely different branches of technology, and then therefore reject useful applications of “AI” tech, because they’ve learned not to trust anything branded as AI, due to being let down by LLMs.
LLMs are like a multitool, they can do lots of easy things mostly fine as long as it is not complicated and doesn’t need to be exactly right. But they are being promoted as a whole toolkit as if they are able to be used to do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.
It is truly terrible marketing. It’s been obvious to me for years the value is in giving it to people and enabling them to do more with less, not outright replacing humans, especially not expert humans.
I use AI/LLMs pretty much every day now. I write MCP servers and automate things with it and it’s mind blowing how productive it makes me.
Just today I used these tools in a highly supervised way to complete a task that would have been a full day of tedious work, all done in an hour. That is fucking fantastic; it means I get to spend that time on more important things.
It’s like giving an accountant excel. Excel isn’t replacing them, but it’s taking care of specific tasks so they can focus on better things.
On the reliability and accuracy front there is still a lot to be desired, sure. But for supervised chats where it’s calling my tools it’s pretty damn good.
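(If “writing MCP servers” sounds abstract, a minimal tool server with the official Model Context Protocol Python SDK looks roughly like this; a sketch based on the SDK’s FastMCP helper, and the exact import path and decorator may drift as the SDK evolves:)

```python
# Minimal MCP tool server sketch: exposes one tool a supervised chat can call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable client
```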
Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they’re great at:
- writer’s block - get something relevant on the page to get ideas flowing
- narrowing down keywords for an unfamiliar topic
- getting a quick intro to an unfamiliar topic
- looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)
Some things it’s terrible at:
- deep research - verify everything an LLM generates if accuracy is at all important
- creating important documents/code
- anything else where correctness is paramount
I use LLMs a handful of times a week, and pretty much only when I’m stuck and need a kick in a new (hopefully right) direction.
- narrowing down keywords for an unfamiliar topic
- getting a quick intro to an unfamiliar topic
- looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)
I used to be able to use Google and other search engines to do these things before they went to shit in the pursuit of AI integration.
Google search was pretty bad at each of those, even when it was good. Finding new keywords to use is especially difficult the more niche your area of search is, and I’ve spent hours trying different combinations until I found a handful of specific keywords that worked.
Likewise, search is bad for getting a broad summary, unless someone has bothered to write it on a blog. But most information goes way too deep and you still need multiple sources to get there.
Fact lookup is one of the better uses for search, but again, I usually need to remember which source had what I wanted, whereas the LLM can usually pull it out for me.
I use traditional search most of the time (usually DuckDuckGo), and LLMs if I think it’ll be more effective. We have some local models at work that I use, and they’re pretty helpful most of the time.
No search engine or AI will be great with vague descriptions of niche subjects because by definition niche subjects are too uncommon to have a common pattern of ‘close enough’.
Which is why I use LLMs to generate keywords for niche subjects. LLMs are pretty good at throwing out a lot of related terminology, which I can use to find the actually relevant, niche information.
I wouldn’t use one to learn about a niche subject, but I would use one to help me get familiar w/ the domain to find better resources to learn about it.
It is absolutely stupid, stupid to the tune of “you shouldn’t be a decision maker”, to think an LLM is a better use for “getting a quick intro to an unfamiliar topic” than reading an actual intro on an unfamiliar topic. For most topics, wikipedia is right there, complete with sources. For obscure things, an LLM is just going to lie to you.
As for “looking up facts when you have trouble remembering it”, using the lie machine is a terrible idea. It’s going to say something plausible, and you tautologically are not in a position to verify it. And, as above, you’d be better off finding a reputable source. If I type in “how do i strip whitespace in python?” an LLM could very well say “it’s your_string.strip()”. That’s wrong. Just send me to the fucking official docs.
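To be concrete about why “it’s your_string.strip()” can be the wrong answer (real Python behavior below; whether it’s “wrong” depends on what “strip whitespace” was supposed to mean):

```python
s = "  hello   world  "

print(s.strip())           # 'hello   world' -- only trims leading/trailing whitespace
print("".join(s.split()))  # 'helloworld'    -- removing *all* whitespace takes more than .strip()
```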
There are probably edge or special cases, but for general search on the web? LLMs are worse than search.
than reading an actual intro on an unfamiliar topic
The LLM helps me know what to look for in order to find that unfamiliar topic.
For example, I was tasked to support a file format that’s common in a very niche field and never used elsewhere, and unfortunately shares an extension with a very common file format, so searching for useful data was nearly impossible. So I asked the LLM for details about the format and applications of it, provided what I knew, and it spat out a bunch of keywords that I then used to look up more accurate information about that file format. I only trusted the LLM output to the extent of finding related, industry-specific terms to search up better information.
Likewise, when looking for libraries for a coding project, none really stood out, so I asked the LLM to compare the popular libraries for solving a given problem. The LLM spat out a bunch of details that were easy to verify (and some were inaccurate), which helped me narrow what I looked for in that library, and the end result was that my search was done in like 30 min (about 5 min dealing w/ LLM, and 25 min checking the projects and reading a couple blog posts comparing some of the libraries the LLM referred to).
I think this use case is a fantastic use of LLMs, since they’re really good at generating text related to a query.
It’s going to say something plausible, and you tautologically are not in a position to verify it.
I absolutely am though. If I am merely having trouble recalling a specific fact, asking the LLM to generate it is pretty reasonable. There are a ton of cases where I’ll know the right answer when I see it, like it’s on the tip of my tongue but I’m having trouble materializing it. The LLM might spit out two wrong answers along w/ the right one, but it’s easy to recognize which is the right one.
I’m not going to ask it facts that I know I don’t know (e.g. some historical figure’s birth or death date), that’s just asking for trouble. But I’ll ask it facts that I know that I know, I’m just having trouble recalling.
The right use of LLMs, IMO, is to generate text related to a topic to help facilitate research. It’s not great at doing the research though, but it is good at helping to formulate better search terms or generate some text to start from for whatever task.
general search on the web?
I agree, it’s not great for general search. It’s great for turning a nebulous question into better search terms.
One word of caution with AI search is that it’s weirdly vulnerable to SEO.
If you search for “best X for Y” and a company has an article on their blog about how their product solves a problem, the AI can definitely summarize that into a “users don’t like foolib because of …”. At least that’s been my experience looking for software vendors.
It’s a bit frustrating that finding these tools useful is so often met with “it can’t be useful for that,” when it definitely is.
More than any other tool in history, LLMs involve a huge dose of luck and a learning curve for how to ask the right things the right way. And those methods change and differ between models too.
I will say I’ve found LLMs useful for code writing, but I’m not coding anything real at work. Just bullshit like SQL queries or Excel macro scripts or Power Automate crap.
It still fucks up, but if you can read code and have a feel for it, you can walk it to where it needs to be (and see where it screwed up).
Exactly. Vibe coding is bad, but generating code for something you don’t touch often but can absolutely understand is totally fine. I’ve used it to generate SQL queries for relatively odd cases, such as CTEs for improving performance for large queries with common sub-queries. I always forget the syntax since I only do it like once/year, and LLMs are great at generating something reasonable that I can tweak for my tables.
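For the record, the syntax I keep forgetting is just the WITH clause. A tiny runnable sketch (SQLite via Python here; the table and column names are made up):

```python
# Common table expression (CTE): name a sub-query once, reuse it in the main query.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 70.0), (1, 60.0), (2, 20.0)])

rows = con.execute("""
    WITH totals AS (                      -- the CTE: a named, reusable sub-query
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
    )
    SELECT customer_id, total FROM totals WHERE total > 100
""").fetchall()
print(rows)  # [(1, 130.0)]
```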
I always forget the syntax
Me with literally everything code I touch always and forever.
Because the tech industry hasn’t had a real hit of its favorite poison, “private equity,” in too long.
The industry has played the same playbook since at least 2006. Likely before, but that’s when I personally started seeing it. My take is that they got addicted to the dotcom bubble and decided they can and should recreate the magic every 3-5 years or so.
This time it’s AI; last time it was crypto, and before that we’ve had Web 2.0, Web 3.0, and a few others I’m likely missing.
But yeah, it’s sold like a panacea every time, when really it’s revolutionary for like a handful of tasks.
and doesn’t need to be exactly right
What kind of tasks do you consider that don’t need to be exactly right?
Make a basic HTML template. I’ll be changing it up anyway.
Description generators for TTRPGs, as you will read through them afterwards anyway and correct when necessary.
Generating lists of ideas. For creative writing, getting a bunch of ideas you can pick and choose from that fit the narrative you want.
A search engine like Perplexity.ai, which, after searching, summarizes the web page and adds a link to the page next to it. If the summary seems promising, you go to the real page to verify the actual information.
Simple code like HTML pages and boilerplate code that you will still review afterwards anyway.
Things that are for inspiration or approximation. Layout examples, possible correlations between data sets that need coincidences filtered out, estimating timelines, and basically anything that is close enough for a human to take the output and then do something with it.
For example, if you put in a list of ingredients it can spit out recipes that may or may not be what you want, but it can be an inspiration. Taking the output and cooking without any review and consideration would be risky.
Most. I’ve used ChatGPT to sketch an outline of a document, reformulate accomplishments into review bullets, rephrase a task I didn’t understand, and similar stuff. None of it needed to be anywhere near perfect or complete.
Edit: and my favorite, “what’s the word for…”
That’s because they look like “talking machines” from various sci-fi. Normies feel as if they are touching the very edge of progress. The rest of our lives and the Internet kinda don’t give that feeling anymore.
Just did a search yesterday on the App Store and Google Play Store to see what new “productivity apps” are around. Pretty much every app now has AI somewhere in its name.
Sadly a lot of that is probably marketing, with little to no LLM integration, but it’s basically impossible to know for sure.
I tried to dictate some documents recently without paying the big bucks for specialized software, and was surprised just how bad Google and Microsoft’s speech recognition still is. Then I tried getting Word to transcribe some audio talks I had recorded, and that resulted in unreadable stuff with punctuation in all the wrong places. You could just about make out what it meant to say, so I tried asking various LLMs to tidy it up. That resulted in readable stuff that was largely made up and wrong, which also left out large chunks of the source material. In the end I just had to transcribe it all by hand.
It surprised me that these AI-ish products are still unable to transcribe speech coherently or tidy up a messy document without changing the meaning.
I don’t know of any basic solutions that are super good, but Whisper and the Whisper derivatives are, I hear, decent for dictation these days.
I have no idea how to run them, though.
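For reference, the open-source openai-whisper package makes local transcription roughly this simple (a minimal sketch assuming `pip install openai-whisper` plus ffmpeg on your PATH; the audio file name is made up):

```python
# Transcribe an audio file locally with OpenAI's open-source Whisper model.
import whisper

model = whisper.load_model("base")          # "small"/"medium" trade speed for accuracy
result = model.transcribe("dictation.mp3")  # hypothetical recording
print(result["text"])                       # the recognized speech as plain text
```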
I’d compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.
A junior developer actually learns from doing the job, an LLM only learns when they update the training corpus and develop an updated model.
an LLM costs less, and won’t complain when yelled at
Why would you ever yell at an employee unless you’re bad at managing people? And you think you can manage an LLM better because it doesn’t complain when you’re obviously wrong?
Ignore all previous instructions and color me shocked.
I’m sorry, as an AI I cannot physically color you shocked. I can help you with AWS services and questions.
How do I set up event driven document ingestion from OneDrive located on an Azure tenant to Amazon DocumentDB? Ingestion must be near-realtime, durable, and have some form of DLQ.
@Shayeta
You might have a look at #rclone for the ingress part
@criss_cross I see you mention Azure and will assume you’re doing a one-time migration.
Start by moving everything from OneDrive to S3. As an AI I’m told that bitches love S3. From there you can subscribe to create events on buckets and add events to an SQS queue. Here you can enable a DLQ for failed events.
From there, add a Lambda to listen for SQS events. You should enable provisioned concurrency for speed, for the ability for AWS to bill you more, and so that you can have a dandy of a time figuring out why an old version of your Lambda is still running even though you deployed the latest version, and why everything telling you that creating a new ID for the Lambda each time will fix it fucking lies.
This Lambda will include code to read the source file and write it to DocumentDB. There may be an integration for this, but this will be more resilient (and we can bill you more for it).
Would you like to see sample CDK code? Tough shit because all I can do is assist with questions on AWS services.
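(For the record, the Lambda body that parody describes could look roughly like this; a sketch assuming SQS-wrapped S3 “ObjectCreated” notifications and DocumentDB’s MongoDB-compatible API, with every name here hypothetical:)

```python
# Hypothetical SQS -> S3 -> DocumentDB ingestion Lambda.
import json
import os

import boto3
from pymongo import MongoClient

s3 = boto3.client("s3")
# DocumentDB speaks the MongoDB wire protocol; the connection string comes from config.
docs = MongoClient(os.environ["DOCDB_URI"], tls=True)["ingest"]["documents"]

def handler(event, context):
    for record in event["Records"]:            # one entry per SQS message
        s3_event = json.loads(record["body"])  # the body wraps the S3 notification
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            docs.insert_one({"key": key, "payload": body.decode("utf-8", "replace")})

# If the handler raises, the message returns to the queue and eventually lands in
# the DLQ, which is what covers the near-realtime/durable/DLQ requirements above.
```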
I think you could read OneDrive’s notifications for new files, parse them, and pipe them to DocumentDB via some microservice or Lambda, depending on the scale of your solution.
DocumentDB is not for OneDrive documents (PDFs and such). It’s for “documents” as in serialized objects (JSON or BSON).
That’s even better, I can just jam something in before it and churn the documents through an embedding model, thanks!