Is it just a philosophical belief that AI is morally bad? Or have you actually used AI to build things and feel confident that you have explored the space enough to come to such a strong conclusion?
I have been writing code every day for over 30 years, and have been doing it professionally for over 20. I have seen fads come and go, and I have seen real developments that changed the way I work numerous times. The more experience I gain and the more projects I create with AI, the more certain I am that this is a lasting and fundamental change to how we produce software, and to how we use computers generally. I have seen AI get better, and I have seen myself get more proficient at using it to get real work done, work that has already been tested against real-world production workloads.
You can hate that it is happening, and hate the way working with AI feels, but that doesn't mean it is not providing real value for people and doing real work.
I don’t think people are wasting too much time, although I do agree most of these posts are just BS, including this one. But AI-assisted development has been a thing across a lot of companies in the world.
> Arguing in good faith
will be futile, unfortunately.
And on the ROI side, trying things out regularly, I haven’t seen positive ROI in the limited time I’ve dedicated to exploring the tools. I’ve restricted experimenting to 4 hours per month, because spending more than 2.5% of the month chasing productivity improvements that realistically seem to be 10-20% will quickly eat into those gains. After accounting for token costs, it ends up being a wash.
You can't learn how to use _anything_ by experimenting 4 hours a month.
With infinite time anything is possible, but since we live within constraints, discussing practical, real-world thresholds or evaluation methods is a worthwhile use of our time.
AI is a powerful tool. Depending on what I need, I use ChatGPT, in-IDE agents, or a platform like Devin.ai.
I use it when it helps me advance my goals. I don't when it doesn't. Sometimes it misses the mark, so I scale back, have it do a specific piece, and do the rest myself.
Sometimes I use it to analyze the code base in seconds vs minutes. Sometimes I use it to pinpoint a bug fast.
I've solved customer issues in seconds or minutes with it vs hours.
I worked on a banking app with deeply domain-specific data issues. AI was not very helpful on that team. My current work on consumer web apps means my problems are more mundane, and AI is a big accelerant.
Being an engineer also means solving problems with the right tools and the right tradeoffs. It's why I use an IDE vs Notepad, use ChatGPT for one-off scripts and "chat", and use agentic workflows for big, repetitive, or "boring" low-stakes tasks.
Let's get into the nitty-gritty on this: can you say how you did it? Because a lot of people think this is an unsolved problem.
There are a lot of little things we’ve tracked, and it’s just faster to implement things now. To be fair, everyone on my team has decade+ professional experience (many more non-professional), and we understand the limitations of AI fairly well.
> to be fair, everyone on my team has decade+ professional experience (many more non-professional), and we understand the limitations of AI fairly well.
I see this caveat appear quite often in discussions of productivity, often enough that one might conclude deep experience is central to realizing those productivity gains.
i used to:
- open the browser
- google "john repo"
- find the website
- copy the repo name
- open the terminal
- cd
- git clone
- try to find the file i want
- read the whole file to find the answer
= answer
i now do:
- "john repo question" = answer
I don't think agentic workflows are there yet, but implementing skills to call manually while working side by side with an AI is definitely nice. Our company is focused a lot on sandboxing right now and on having safe skills.
I don't think we've gotten feature development down yet, but the review skills and Grafana skills they wrote have been pretty solid.
Agents are unbelievably useful at helping take over and refactor messy codebases, though. I just started taking over this monstrous nightmare of a codebase, truly ancient code, the bulk of it written over 10+ years ago in PHP. With Claude / Codex I was able to port over the vast majority of the existing legacy storefront and lay the groundwork for centralizing the 10-20k LOC mega-controller logic into reusable repository/service patterns.
Just shit that would've taken years previously is achievable in under a month.
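For anyone unfamiliar with the repository/service split mentioned above, here's a minimal sketch of the target shape — in Python rather than the PHP of the actual codebase, with a hypothetical `Order` domain standing in for whatever the storefront really handles:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Order:
    # Hypothetical domain object; the real codebase's entities differ.
    id: int
    total_cents: int


class OrderRepository:
    """Data access only: no business rules live here.
    (In-memory for the sketch; the real one would wrap a database.)"""

    def __init__(self) -> None:
        self._rows: dict[int, Order] = {}

    def save(self, order: Order) -> None:
        self._rows[order.id] = order

    def get(self, order_id: int) -> Optional[Order]:
        return self._rows.get(order_id)


class OrderService:
    """Business logic pulled out of the mega-controller,
    depending on the repository rather than on raw queries."""

    def __init__(self, repo: OrderRepository) -> None:
        self._repo = repo

    def apply_discount(self, order_id: int, percent: int) -> Order:
        order = self._repo.get(order_id)
        if order is None:
            raise KeyError(order_id)
        order.total_cents = order.total_cents * (100 - percent) // 100
        self._repo.save(order)
        return order
```

The controller then shrinks to routing plus calls into services, which is what makes a 10-20k LOC controller tractable to migrate piece by piece.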
Everything needs an element of human touch; I wouldn't just run whatever it produces as-is, only the most vanilla things. If, let's say, I'm creating backup scripts, I meticulously outline the plan first.
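As an example of the kind of small backup step worth planning out before letting an AI touch it, a minimal sketch (the timestamped-archive naming scheme is a hypothetical choice, not anyone's actual script):

```python
import pathlib
import tarfile
import time


def backup(src: str, dest_dir: str) -> pathlib.Path:
    """Create a timestamped .tar.gz of src inside dest_dir and return its path."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # arcname keeps only the final path component inside the archive
        tar.add(src, arcname=pathlib.Path(src).name)
    return archive
```

The point of outlining first is deciding things like the naming scheme, retention, and restore procedure yourself, rather than accepting whatever the model picks.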
Or maybe the only people left opposing AI are so hardcore against it they form their identity (username) around it