After seeing this graph about the decline of Stack Overflow, it’s clear to me that I’m not the only one who has stopped using it in favor of LLM (large language model) alternatives. With the rise of generative AI initiatives like OverflowAI by Stack Overflow, and the potential of ChatGPT by OpenAI, the landscape of programming knowledge sharing is evolving. These AI models can generate human-like text and answer programming questions, and it’s fascinating to watch them emerge as a game-changing tool for programming problem solving. So I’m curious: which AI have you replaced Stack Overflow with? Share your favorite LLM or generative AI platform for programming in the comments below!

  • Aloso@programming.dev · +18/−3 · 1 year ago

    I do not use AI to solve programming problems.

    First, LLMs like ChatGPT often produce incorrect answers to particularly difficult questions, but still seem completely confident in their answer. I don’t trust software that would rather make something up than admit that it doesn’t know the answer. People can make mistakes, too, but StackOverflow usually pushes the correct answer to the top through community upvotes.

    Second, I rarely ask questions on StackOverflow. Most of the time, if I search for a few related keywords, Google will find an SO thread with the answer. This is much faster than writing a SO question and waiting for people to answer it; and it is also faster than explaining the question to ChatGPT.

    Third, I’m familiar enough with the languages I use that I don’t need help with simple questions anymore, like “how to iterate over a hashmap” or “how to randomly shuffle an array”. The situations where I could use help are often so complicated that an LLM would probably be useless. Especially for large code bases, where the relevant code is spread across many files or even multiple repositories (e.g. a backend and a frontend), debugging the problem myself is more efficient than asking for help, be it an online community or a language model.

    • TempestTiger@programming.dev · +7 · 1 year ago

      I might be taking over at a job for a friend who’s leaving the country. Not programming, but IT and Sec.

      I was concerned about my lack of exp.

      They told me just to use ChatGPT 'cause that’s what they do.

      They don’t even have .exe files blocked for users.

      I’m now far more concerned about the state of the networks I’ll be taking over. Going to be doing a full security audit as soon as I’m up to speed.

      TT_TT

    • ImpossibleRubiksCube@programming.dev · +2 · 1 year ago

      This is definitely my issue. I’ve experimented with LLMs for code generation, but more often than not the code will be unusable, and occasionally it will have grotesque practices like unused function parameters in it. As far as I can tell we are nowhere near an LLM capable of generating ethical code.

  • ImpossibleRubiksCube@programming.dev · +14 · 1 year ago

    I prefer the classic method from the 90s. Open up a can of turpentine or varnish, breathe deep counting back from 100, then start explaining your problem loudly and clumsily to a brick wall until it starts talking back.

  • CoderSupreme@programming.dev (OP) · +10 · 1 year ago

    I use Perplexity.ai, it uses ChatGPT + search and improves accuracy a lot. It only has 5 free uses of GPT-4 every 4 hours, but the normal ChatGPT + search is still better than any of the other LLMs I’ve tried.

  • varsock@programming.dev · +3 · 1 year ago

    There are many skills I’m an absolute beginner in. ChatGPT helps me drown out unnecessary noise and points me in the right direction. For things I’d consider myself proficient in, I ask it to write the tedious parts for me. Love it for Makefile expansions

  • Von_Broheim@programming.dev · +3 · 1 year ago

    Initially I was very impressed by ChatGPT but over the past few weeks I’m getting fed up with it. It completely ignores constraints I give it regarding library versions I use. It dreams up insane, and garbage, answers to fairly simple prompts. For more complicated stuff it’s even worse.

    My current workflow: try the top few Google results; if that fails, try ChatGPT for a few minutes; if that fails, go to the documentation and/or crawl through SO for a while; if all that fails, ask on SO.

    I tested some of the questions I had asked on Stack Overflow against ChatGPT, and the Stack Overflow answers were much better. So for real “I really am stuck here” sorts of issues I use SO.

      • YaBoyMax@programming.dev · +1 · 1 year ago

      Out of curiosity, do you use ChatGPT Plus? I’ve found that GPT-4 is worlds better at solving programming problems and is much less likely to hallucinate (as long as the question isn’t too obscure).

  • ctr1@fl0w.cc · +2 · 1 year ago

    GPT-4 in NeoVim. Definitely has taken the place of StackOverflow for me in most cases, but I still go there for especially difficult problems.

    Would rather run something on my own hardware, but I’m waiting for other models to catch up

  • shiveyarbles@beehaw.org · +2 · 1 year ago

    I noticed JetBrains ReSharper has an AI feature. I had fun asking it stupid questions; I still use Stack Overflow, though.

  • theneverfox@pawb.social · +2/−1 · 1 year ago

    That’s not how I use it at all. I don’t use it for things I can’t do, because it can’t either.

    I use it for things I could easily do, but don’t want to, or when I don’t know enough about a topic to ask.

    For example, I had it build me JSON of the top 100 Lemmy instances.
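    That kind of output could be sketched like this (the instance names and user counts below are made-up placeholders, not real Lemmy statistics, and anything a model actually produced would need to be verified by hand):

```python
import json

# Made-up sample entries; a real list generated by an LLM would need manual
# verification, since models can hallucinate instance names and numbers.
instances = [
    {"name": "lemmy.world", "monthly_users": 150000},
    {"name": "lemmy.ml", "monthly_users": 50000},
    {"name": "programming.dev", "monthly_users": 12000},
]

# Sort by user count, largest first, and emit pretty-printed JSON.
top = sorted(instances, key=lambda i: i["monthly_users"], reverse=True)
print(json.dumps(top, indent=2))
```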

    I also was having trouble customizing a markup renderer - it didn’t know how to do it, it couldn’t find anything on my situation, but I asked it how it would do it in a few different common libraries.

    I still had to figure it out myself with some trial and error, but instead of spending a day diving into how parsing, tokenization, and rendering work, it showed me what a solution might look like and defined some terms for me in context.

    Knowing what it looked like, I could guess what the library creator was thinking with the undocumented custom extension I saw in their code, and I quickly got traction

  • discusseded@programming.dev · +2/−1 · 1 year ago

    ChatGPT+, GitHub Copilot, and Copilot X. I’m pretty green yet, so maybe I’m getting more out of it than a seasoned vet would. I just feel so much more productive, and I’m learning faster since Copilot will offer ten solutions and there always seems to be one that intrigues me with its novelty.

    I like the conversational nature of ChatGPT, and I love that I’m not getting judged. I can ask the dumbest questions all day long and not sweat that it’s going to cost me the next promotion. Not to say I don’t reach out to people, but I keep the really dumb shit between me and GPT.

  • UFO@programming.dev · +2/−2 · 1 year ago

    I use AI to solve problems. Like any tool, it has limitations, e.g. complex systems cannot be completely described within the context length. As with any content (human or AI), the arguments should be considered critically and the references checked.

    That does mean I rarely take generated code as-is.

  • swordsmanluke@programming.dev · +1/−1 · 1 year ago

    I use Codeium’s CoPilot-like tool in Intellij, plus ChatGPT-4 for the occasional quick question.

    I still do all my own thinking about how to design and write the code, but for questions like “How do I convert a pandas DataFrame into a PySpark DataFrame”, it’s been great!

    One of my senior-engineer buddies likens it to working with a really fast Junior. You gotta vet all their code, but if you know what code to ask it for, it can save you some time in writing it.

  • armchair_progamer@programming.dev · +1/−2 · 1 year ago

    I replaced it with online docs, Github Issues, Reddit, and Stack Overflow.

    Many languages/libraries/tools have great documentation now; 10 years ago this wasn’t the case, or at least I didn’t know how to find/read documentation. 10 years ago Stack Overflow answers were also better; now many are obsolete simply because they’re 10 years old :).

    Good documentation is both more concise and more thorough than any Q&A or ChatGPT output, and more likely to be accurate (it certainly should be in any half-decent documentation, though sometimes it isn’t).

    If online documentation doesn’t have the answer, I try to find it on GitHub issues, Reddit, or a different forum; sometimes that forum is Stack Overflow. More recently I’ve noticed that on most questions the most upvoted answer has been edited to reflect recent changes, and even when an answer is out of date, there’s usually a comment that says so.

    Now, I never post on Stack Overflow, nor do I usually answer; there are way too many bad questions out there, most of the good ones already have answers or are really tricky, and the community still has its rude reputation. Though I will say the other stack exchange sites are much better.

    So far, I’ve only used LLMs when my question was so specific that I couldn’t search for it, and/or I had run out of options. There are issues: I don’t like writing out the full question (although I’m sure GPT works with query terms, so I’ll probably try that); GPT-4’s output is too verbose and explains basic context I already know, so it’s mostly filler; and I still have a hard time trusting GPT-4, because I’ve had it hallucinate before.

    With documentation you have the expectation that the information is accurate, and with forums you have other people who will comment if the answer is wrong, but with LLMs you have neither.