• 4 Posts
  • 44 Comments
Joined 2 years ago
Cake day: March 4, 2023


  • I have 20+ years of software development experience dealing with user requests, so I am for sure sensitive to that fact! I don’t think that current LLMs can do anything but the most superficial changes to code. But that doesn’t mean they always will: in 5-10 years, with realtime inference (e.g. 100x generations for the same prompt, allowing for much better error correction) and video support, you could have a long session (say, 1 or 2 hours) of asking questions, reviewing mockups, tweaking the requirements, etc. in order to understand the ask, and then the user would spend some time using and testing the result.
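
    A minimal sketch of the “100x generations” idea, i.e. best-of-N sampling: generate many candidates for the same prompt and let a verifier pick the winner. All names here (`generate`, `score`) are hypothetical stand-ins, not a real API.

    ```python
    from typing import Callable

    def best_of_n(prompt: str,
                  generate: Callable[[str], str],  # hypothetical LLM call
                  score: Callable[[str], float],   # hypothetical verifier: tests, linter, judge model
                  n: int = 100) -> str:
        """Sample n candidates for one prompt and keep the best one.

        The error correction comes from the verifier, not the model:
        a bad generation simply loses to a better sibling.
        """
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=score)
    ```

    In practice `score` is the expensive part (e.g. running a test suite per candidate), which is why this only becomes attractive once inference is fast and cheap.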


  • I agree that with the current state of tooling around LLMs, this is very inadvisable. But I think we can develop the right tools.

    • We can have tools that generate the context/info submitters need to understand what has been done, explain the choices they are making, discuss edge cases, and so on. This includes taking screenshots as the submitter uses the app, and a testing period (requiring X amount of time of the submitter actually using their feature and smoothing out the experience).

    • We can have tools at the repo level that scan and analyze the effect of a change. They can also isolate each submitted feature so that others can toggle it off or modify it if it’s not to their liking (see the sketch after this list). Similarly, you can have lots of LLMs impersonate typical users and try the modifications to make sure they work, putting humans in the loop at the appropriate points.

    People are already submitting LLM-generated code they don’t understand. How do we protect repos? How do we welcome these contributions while lowering the risk? I think with the right engineering effort, this can be done.
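
    A minimal sketch of the isolation idea using plain feature flags: each submitted feature lands behind a named switch, so a maintainer can turn it off without reverting the commit. Everything here (the registry, the flag name, the render functions) is hypothetical.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class FeatureFlags:
        """Registry mapping each contributed feature to an on/off state."""
        flags: dict[str, bool] = field(default_factory=dict)

        def register(self, name: str, enabled: bool = False) -> None:
            # New contributions start disabled until reviewed and tested.
            self.flags.setdefault(name, enabled)

        def enabled(self, name: str) -> bool:
            return self.flags.get(name, False)

    flags = FeatureFlags()
    flags.register("compact_layout")  # hypothetical LLM-contributed feature

    def render_page() -> str:
        if flags.enabled("compact_layout"):
            return "compact page"  # new, contributed code path
        return "classic page"      # existing, trusted code path
    ```

    The same registry is a natural hook for the simulated-user idea: enable the flag in a sandbox, let LLM “users” exercise the feature, and only enable it for everyone once they (and a human) sign off.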