An internal Microsoft memo has leaked. It was written by Julia Liuson, president of the Developer Division at Microsoft, which includes GitHub. The memo tells managers to evaluate employees based on how much they use AI tools.
Before LLMs came along, no one cared what tools I did or didn’t use at work. Hell will freeze over before I let a text predictor write code for me, even if that eventually costs me a job. I’m the kind of person who can’t stand any sort of auto-completion or other typing “help”, much less spending all my time reviewing LLM output.
LLMs are the next wave of popups here in the second quarter of the 21st century. I’ve become skilled at dismissing all the requests to let AI help me with whatever I’m actively doing. I just about lost it recently when Excel threw one at me at work. NO, I DON’T WANT YOUR HELP!
A better guided search in a help feature I don’t mind. But stop pushing it into everything; just give me a way to get to it (and make it WORK when I use it!)
even if that eventually costs me a job
I mean, it’s kind of a ‘damned both ways’ situation here, right? Lose your job if you refuse to use it; lose your job if you end up training it to do your job.
A programmer automating his job is kind of his job, though. That’s not so much the problem as the complete enshittification of software engineering that the culture surrounding these dubiously efficient and super sketchy tools seems to herald.
On the more practical side, enterprise subscriptions to the slop machines do come with assurances that your company’s IP (meaning code, and whatever else accessible from your IDE that your Copilot instance can and will ingest) and your prompts won’t be used for training.
Hilariously, GitHub Copilot now has an option to prevent it from being too obvious about stealing other people’s code, called the duplication detection filter:
If you choose to block suggestions matching public code, GitHub Copilot checks code suggestions with their surrounding code of about 150 characters against public code on GitHub. If there is a match, or a near match, the suggestion is not shown to you.
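GitHub doesn’t publish how that matching actually works, so here’s only a rough sketch of the idea as quoted: take the suggestion plus some surrounding code and suppress it on a match or near match against an index of public code. The index, window handling, and similarity threshold below are all invented for illustration:

    # Rough sketch of a "block suggestions matching public code" style filter.
    # Not Copilot's actual logic; just the idea described in the docs quote:
    # compare the suggestion plus roughly 150 characters of surrounding code
    # against public code and drop it on a match or near match.
    import difflib

    WINDOW = 150        # characters of surrounding code considered (per the docs)
    NEAR_MATCH = 0.8    # similarity ratio treated as a "near match" (arbitrary)

    def should_block(suggestion: str, preceding_code: str, public_index: list[str]) -> bool:
        """Return True if the suggestion (plus context) resembles indexed public code."""
        window = preceding_code[-WINDOW:] + suggestion
        for snippet in public_index:
            ratio = difflib.SequenceMatcher(None, window, snippet).ratio()
            if ratio >= NEAR_MATCH:
                return True
        return False

    # A suggestion that reproduces an indexed snippet nearly verbatim gets suppressed.
    index = ["def quicksort(xs):\n    if len(xs) <= 1:\n        return xs"]
    blocked = should_block(
        "def quicksort(xs):\n    if len(xs) <= 1:\n        return xs",
        "# sorting helpers\n",
        index,
    )
    print(blocked)  # True -> the suggestion is not shown to the user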
Me: single tear rolls down over my tab complete
Tab complete got worse when LLMs replaced the AST as the source of tab completions.
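For context, “the AST” here is the abstract syntax tree the editor builds from your code: classic tab completion only offered names that actually existed in the parsed source. A minimal sketch of that idea (not any particular editor’s implementation), using Python’s ast module:

    # Minimal sketch of old-school, AST-grounded tab completion: parse the
    # buffer, collect names that actually exist in the code, and offer the
    # ones matching the typed prefix. No statistical guessing involved.
    import ast

    def ast_completions(source: str, prefix: str) -> list[str]:
        """Return names defined or assigned in `source` that start with `prefix`."""
        tree = ast.parse(source)
        names = set()
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                names.add(node.name)
            elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                names.add(node.id)
        return sorted(n for n in names if n.startswith(prefix))

    buffer = "def parse_config(path):\n    parsed = {}\n    return parsed\n"
    print(ast_completions(buffer, "pars"))  # ['parse_config', 'parsed']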
That’s just it, though: it’s not going to replace you at doing your job. It’s going to replace you by doing a worse job.