• 0 Posts
  • 102 Comments
Joined 2 years ago
Cake day: June 28, 2023

  • Maybe my analogy is a bit too silly and obvious, but I think wanting a humanoid robot (rather than one designed in the way best suited to its purpose) is somewhat akin to wanting a mechanical horse rather than a car. On the one hand, this may sound like a reasonable idea if saddles, carriages, stables, and blacksmiths are already available. On the other hand, the mechanical horse is going to be a lot slower than a car and a lot more uncomfortable to ride. It will still need charging stations or gas stations (since it won’t eat oats) and dedicated repair shops (since veterinarians won’t be able to fix it). And its technology might be far more complex and harder to fix than a car’s (especially in the early models).


  • I guess both chatbots and humanoid robots are basically about the fantasy of automating human labor away effortlessly. In the past, most successful automation required a strong understanding not just of the tech but of the tasks themselves, and often a complete overhaul of processes, internal structures, and so on. In the end, there was usually still a need for human labor, just with different skill sets than before. Many people in the C-suite aren’t very good at handling these challenges, even if they want everyone to believe otherwise. This is probably why the promise of reaping all the rewards of automation without having to do the work sounds so compelling to many of them.










  • Most searchers don’t click on anything else if there’s an AI overview — only 8% click on any other search result. It’s 15% if there isn’t an AI summary.

    I can’t get over that. An oligopolistic company imposes a source on its users that is very likely hallucinating, plagiarizing, or both, and most people seem to eat it up (out of convenience or naiveté, I assume).








  • With LLMs not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, each request that gets sent has a higher cost. This changes the scaling logic in ways that don’t appear to be getting priced in or planned for in discussions of the glorious AI technocapital future

    This is a very important point, I believe. I find it ironic that the “traditional” Internet was fairly efficient precisely because many people were shown more or less the same content, which also made it easier to carry out a certain degree of quality assurance. With chatbots, all of this is being thrown overboard and extreme inefficiencies are being created, and the AI hypemongers appear to be largely ignoring that.


  • It’s quite noteworthy how often these shots start out somewhat okay at the first prompt, but then deteriorate markedly over the following seconds.

    As a layperson, I would try to explain this as follows: At the beginning, the AI is, to some extent, free to “pick” what the characters and their surroundings will look like (while staying within the constraints of the prompt, of course, even if this doesn’t always work out either).

    Therefore, the AI can basically “fill in the blanks” from its training data and create something that may look somewhat impressive at first glance.

    However, for continuing the shot, the AI is now stuck with these characters and surroundings while having to follow a plot that may not be represented in its training data, especially not for the characters and surroundings it had picked. This is why we frequently see inconsistencies, deviations from the prompt or just plain nonsense.

    If this assumption is right, I guess it might be very difficult to improve these video generators, because an unrealistic amount of additional training data would be required.

    Edit: According to other people, it may also be related to memory/hardware etc. In that case, my guesses above may not apply. Or maybe it is a mixture of both.


  • I have been thinking about the true cost of running LLMs (of course, Ed Zitron and others have written about this a lot).

    We take it for granted that large parts of the internet are available for free. Sure, a lot of it is plastered with ads, and paywalls are becoming increasingly common, but thanks to economies of scale (and a certain degree of intrinsic motivation, altruism, idealism, or vanity), it has remained viable to provide information online without charging users for every bit of it. The same appears to be true for the tools to discover said information (search engines).

    Compare this to the estimated true cost of running AI chatbots, which (according to the numbers I’m familiar with) may be tens or even hundreds of dollars a month for each user. For this price, users get unreliable slop, and this slop can only be produced from the (mostly free) information that is already available online, while creators are disincentivized from producing more of it (because search-engine-driven traffic is drying up).

    I think the math is really abysmal here, and it may take some time to realize how bad it really is. We are used to big numbers from tech companies, but we rarely break them down to individual users.

    It somehow reminds me of the astronomical cost of each Bitcoin transaction (especially compared to the tiny cost of processing a single payment through established payment systems).
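The per-user framing in the comment above can be sketched with a tiny back-of-envelope calculation. Every number below is a made-up placeholder (not a real cost estimate); the point is only to show how differently ad-supported search and LLM serving scale per user:

```python
# Back-of-envelope comparison of per-user serving costs:
# classic ad-supported search vs. an LLM chatbot.
# All figures are hypothetical placeholders, not real cost estimates.

search_cost_per_query = 0.0002    # dollars per query (assumed)
chatbot_cost_per_query = 0.05     # dollars per query (assumed)
queries_per_user_per_month = 100  # assumed usage level

search_monthly = search_cost_per_query * queries_per_user_per_month
chatbot_monthly = chatbot_cost_per_query * queries_per_user_per_month

print(f"Search:  ${search_monthly:.2f} per user per month")
print(f"Chatbot: ${chatbot_monthly:.2f} per user per month")
print(f"Ratio:   {chatbot_monthly / search_monthly:.0f}x")
```

With these assumed figures, a chatbot user costs 250 times as much to serve per month as a search user; plugging in real estimates would change the numbers, but not the shape of the comparison.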