• Technus@lemmy.zip · 5 months ago

    I ran up a roughly $5k bill over a couple of weeks by having an application log in a hot loop whenever it got disconnected from another service in the same cluster. When I wrote that code, I expected the warnings to eventually get hooked up to paging so we'd know something was broken.

    Turns out, disconnections happen regularly because ingress connections have a roughly 30-minute idle timeout by default. So it would time out, emit something like 5 GB of logs before Kubernetes noticed the container was unhealthy and restarted it, rinse and repeat.
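
    A minimal sketch of the fix, assuming capped exponential backoff is acceptable here (all names, delays, and retry counts are illustrative, not from the actual incident): reconnect with increasing delays and emit one warning per attempt, instead of logging in a hot loop:

```python
import time

def reconnect_delays(base=1.0, cap=60.0, attempts=6):
    """Capped exponential backoff schedule. Defaults are illustrative."""
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= 2

def reconnect(connect, log, sleep=time.sleep):
    """Retry `connect` with backoff, emitting one warning per attempt
    rather than one log line per loop iteration."""
    for attempt, delay in enumerate(reconnect_delays(), start=1):
        try:
            return connect()
        except ConnectionError:
            log(f"disconnected; attempt {attempt} failed, retrying in {delay:.0f}s")
            sleep(delay)
    raise ConnectionError("gave up after bounded retries")
```

    With six attempts this emits at most six warnings per outage, and the final failure is the thing that should page someone.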

    I know $5k is chump change at enterprise scale, but this was at a small scale startup during the initial development phase, so it was definitely noticed. Fortunately, the only thing that happened to me was some good-natured ribbing.

    • SpaceNoodle@lemmy.world · 5 months ago

      It was $5k worth of training, and well worth it, since you still remember the lesson.

      Reminds me of an issue while carrier-testing a to-be-released smartphone. The third party hired to do the testing would sideload an app to run the tests, but it would try to do something hinky in the background with logging, leading to an infinite retry loop trying to open a nonexistent file, which effectively doubled the device’s power consumption.
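
      The antipattern is easy to sketch; this is a guess at the shape of the bug, with a hypothetical function name and retry cap, not the actual test app's code:

```python
def read_test_config(path, max_attempts=3):
    """Bounded-retry file open. The buggy app did the equivalent of
    `while True: open(path)` on a file that never existed, spinning
    the CPU (and draining the battery) forever; capping the attempts
    turns that into a quick, cheap failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            with open(path) as f:
                return f.read()
        except FileNotFoundError:
            if attempt == max_attempts:
                return None  # give up instead of retrying forever
    return None
```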

      • Technus@lemmy.zip · 5 months ago

        > It was $5k worth of training, and well worth it, since you still remember the lesson.

        Yep.

        That’s also not the most money I’ve ever unintentionally cost an employer.

        • xmunk@sh.itjust.works · 5 months ago

          I would be frankly amazed if it was. I’ve got nearly two decades under my belt and I have some legendary fails.

    • henfredemars@infosec.pub · 5 months ago

      Years ago I was told that serverless would be cheaper than running your own servers. It seems like it’s not necessarily cheaper, just a different way of designing a solution. Would you agree with that assessment? I’ve never used serverless; every place I’ve worked needed tightly controlled data, so it was on-premises only.

      Meanwhile, I host my personal website on a dirt-cheap VPS.

      • elgordino@fedia.io · 5 months ago

        The thing with serverless is you’re paying for iowait. On a regular server, like an EC2 or Fargate instance, when one thread is waiting for a reply from a disk or network operation, the server can do something else. With serverless you only have one thread, so you’re billed for that waiting time even though it’s not actually using any CPU.

        While you’re paying for that time you can bet that CPU thread is busy servicing some other customer and also charging them.

        I like serverless for its general reliability; it’s one less thing to worry about. It’s also cheap when you start out, thanks to generous free tiers. At scale, whether it’s good value is a more complicated question.
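
        To put rough numbers on the iowait point: a sketch of the billing math, where every price, memory size, and duration is a made-up placeholder rather than a real provider rate:

```python
# All numbers below are hypothetical, chosen only to show the shape
# of wall-clock vs CPU-time billing.

GB_SECOND_PRICE = 0.0000167  # hypothetical $/GB-second
MEMORY_GB = 0.5              # function memory allocation
WALL_MS = 200                # total request duration (wall clock)
CPU_MS = 20                  # of which actual CPU work

def serverless_cost_per_request():
    # Serverless bills wall-clock duration: the 180 ms of iowait is
    # charged exactly like the 20 ms of real compute.
    return GB_SECOND_PRICE * MEMORY_GB * (WALL_MS / 1000)

def cpu_only_cost_per_request():
    # What the same request would cost if only CPU time were billed.
    return GB_SECOND_PRICE * MEMORY_GB * (CPU_MS / 1000)

# With these numbers the billed cost is 10x the pure-compute cost,
# because 90% of the billed time is spent waiting on I/O.
waste_factor = serverless_cost_per_request() / cpu_only_cost_per_request()
```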

        • henfredemars@infosec.pub · 5 months ago

          Therefore, would you agree that serverless is more about freeing up your mind as a developer and reducing the number of things you have to worry about, rather than about cost savings or scaling per se?

          In other words, is it less about better scaling and more about scaling isn’t your problem?

          • kbotc@lemmy.world · 5 months ago

            I mean, does writing in Python rather than C free up your mind? It’s just another abstraction tradeoff.

      • brandon@lemmy.world · 5 months ago

        It’s cheaper if you don’t have constant load, since you only pay for resources you’re actively using. Once you have constant load, you’re paying a premium for flexibility you don’t need.

        For example, I did a cost estimate of porting one of our high-volume, high-compute services to an event-driven, serverless architecture, and it would have been literally millions of dollars a month versus tens of thousands a month rolling our own solution on EC2 or ECS instances.

        Of course, self-hosting in our own data center is even cheaper: we can buy new hardware and run it for years for a fraction of the cost of even the most cost-effective cloud options, as long as we have the people to maintain it.
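
        That break-even can be sketched as a back-of-the-envelope comparison; both prices here are hypothetical placeholders, not quotes from any provider:

```python
# Hypothetical prices: per-request serverless vs an always-on instance.

SERVERLESS_COST_PER_MILLION = 5.00  # $ per 1M requests (invocation + duration)
INSTANCE_COST_PER_MONTH = 300.00    # $ flat rate for an always-on instance

def monthly_cost_serverless(requests_per_month):
    return SERVERLESS_COST_PER_MILLION * requests_per_month / 1_000_000

def cheaper_option(requests_per_month):
    # Below the break-even point serverless wins; above it, the
    # flat-rate instance does.
    s = monthly_cost_serverless(requests_per_month)
    return "serverless" if s < INSTANCE_COST_PER_MONTH else "instance"

# With these numbers the break-even is 60M requests/month.
```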

      • marcos@lemmy.world · 5 months ago

        When you have 0 usage, serverless can be up to 100% cheaper than a VPS.

        That difference propels its ROI to huge values on business models that only scale up to single-digit dollars a month.

        Meanwhile, the risk that you get a $100000 bill out of nowhere is always there.

      • Technus@lemmy.zip · 5 months ago

        The applications I’ve built weren’t designed for serverless deployment, so I wouldn’t know. It seems like you pay a premium for the convenience, though.

    • BradleyUffner@lemmy.world · 5 months ago

      I still have people trying to convince me that this would let us run massively complex websites with thousands of users for pennies a month.