Personally I think it's a bit more nuanced than senior vs junior (though it is very hard for juniors right now). What I've seen a lot of hunger for is people with a track record of getting their hands dirty and getting things solved. I'm very much a "builder" type dev that has more fun going from 0-v1 than maintaining and expanding scalable, large systems.
From the start of the last tech boom through the post-pandemic hiring craze, I increasingly saw demand for people who were in the latter category and fit nicely in a box. The ability to "do what you must to get this shipped" was less in demand. People cared much more about leetcode performance than an impressive portfolio.
The current moment reminds me a lot of 2008 in terms of the job market and what companies are looking for. From 2008-2012 a strong portfolio of projects was the signal most people looked for. Back then being an OSS dev was a big plus (in the last decade I not infrequently found it to be a liability; better to study leetcode than actually build something).
Honestly, a lot of senior devs lose this ability over time. They get comfortable with the idea that as a very senior hire you don't have to do all that annoying stuff anymore. But the teams I see hiring are really focused on staying lean and getting engineers who are comfortable wearing multiple hats and working hard to get things shipped.
Maintaining and expanding is more challenging, which is why I’ve grown to prefer it. Greenfield-and-then-leaving is too easy; you don’t learn the actually valuable lessons. Since experience shows that projects won’t stay in the nice greenfield world, building them can feel like doing illusory work: you know the real challenges are yet to come.
Nearly all of the teams I've joined had problems they didn't know how to solve and often had no previously established solution. My last gig involved exploring some niche research problems in LLM space and leveraging the results to get our first round of funding closed, this involved learning new code bases, understanding the research papers, and communicating the findings publicly in an engaging way (all to typical startup style deadlines).
I agree with your remarks around "greenfield" if it just involves setting up a CRUD webapp, but there is a wide space of genuinely tricky problems to solve out there. I recall a maintainer-style coworker of mine, who described himself much as you describe yourself, telling me he was terrified of the type of work I had to do, because when you started you didn't even know if there was a solution.
I have equal respect for people such as myself and for people that you describe, but I wouldn't say it is more challenging, just a different kind of challenge. And I do find the claim "you don't learn the actually valuable lessons" to be wildly against my experience. I would say most of my deep mathematical knowledge comes from having to really learn it to solve these problems, and more often than not I've had to pick up on an adjacent, but entirely different field to get things done.
"when you started you didn't even know if there was a solution."
Regardless of what the problem is, as long as I know _nobody knows if there is a solution_, it's an instant sugar rush.
You are free to bang your head against a stone wall for months trying to crack the damn thing.
OFC you need to deliver in the end. And that requires a ruthless "worse is better" mentality: ship something, anything. Preferably the smallest package that can act as an MVP that you know you can extend _if this is the thing_ people want.
Because that's the other side of the coin: if the solution is not known, people are also not aware whether the solution has true value or is just a guess.
So in any case you have to rush to the mvp.
Such joy!
Of course the mvp must land, and must be extensible.
But these types of MVPs are not a slam dunk.
The combined requirements, a) must ship within limited time and b) nobody knows what the solution is, do require a certain mindset.
I've found new hires to be more successful when they join, get some easy wins, and then find their own problems to solve. But maybe it's just an artifact of working at large companies where most of the day-to-day stuff is figured out.
(d) although the initial statement seems credible, the problem is actually ill-defined and under-specified, and therefore not solvable as originally stated.
Example: our start-up plans to "fix health care"
Definitely it's a trap. If you are a purist it's nigh impossible. But if you ruthlessly 80/20 it, most stakeholders will be pleasantly surprised.
I have no clue why I end up in these situations but I sure do like them.
I do realize this could sound like perpetual "not invented here" syndrome, but technical implementation of modeling aspects for 3D and computational geometry is such scarce talent that you actually get to do novel stuff for your business.
The last time this happened I designed & implemented the core modeling architecture and led the implementation effort for our new map feature[0]
[0] See section "Stunning new building facades add practical value" in https://www.mapbox.com/blog/detailed-architecture-and-new-de...
It's kind of like when the FAA does crash investigation -- a stunning amount of engineering and process insights have been generated by such work to the benefit of all of us.
Trust me, you get plenty of experience in this as a founding engineer in a startup.
Many of these comments make me wonder how many people here have actually worked at an early-stage startup in a lead role. You learn a lot about what's maintainable and scalable, what breaks and what doesn't, in the process of rapidly iterating on a product to find your market.
(For readers: I don't think there's anything wrong with that, but it does mean that certain perspectives are overrepresented here and may not be reflective of the broader industry.)
The idea that this means "you don’t learn the actually valuable lessons" is completely baffling to me.
Most people I've known with founding-engineer experience or similar leave not because it isn't challenging, but because it's exhausting.
Increasingly I've realized that the HN community and I are not even speaking the same language.
Even in areas where startups aren't literally creating new product categories, like the foundation model providers, the edge a startup has over a more established business is the speed at which it can provide value. What's the point of buying from CoolCo when you could go with L&M Inc., which has thousands of headcount working on your feature? The value prop of CoolCo is that it can roll out a feature in the time it takes L&M to make a detailed specification and a quarterly planning doc breaking down the roadmap and the order of feature implementation.
Now be part of the team of folks that keeps that application running for 10, 20, 30 years. Now be part of the transition team to the new app with the old data. Those tasks will also teach you a lot about system stability, longevity, and portability... lessons that can only be learned with more time than a startup has.
The technical challenges are _very_ different between these environments. In a small company you have to deal with technical breakages all the time, but you don't really have systems-level problems.
Takeoff systems aren't analogous to prototype development. I don't know how you'd build a prototype plane that's feasible to take to market without having deep knowledge of how planes are built.
Early design decisions matter. And you don't get to that realisation without dealing with legacy systems where some upstart made terrible decisions that you're now responsible for.
“Technologist flavor of NTSB investigator.”
One of the guys had a very strong opinion that the ideal architecture was something as abstracted and object oriented as possible with single function classes, etc. I let him run with that. The other guy got frustrated with his sub-team's inability to write code to spec in a language they'd never used before and where they were trying to build some new features they didn't clearly understand. He developed a strong feeling that TDD was the most efficient path forward: he owns the PRD and design, so he just created test stubs and told the remote team to "just write code that passes the test" even if they didn't understand the function of the block.
So, after a few months where did we end up:
1. The "abstract everything" architect's team had an extremely fragile and impossible to maintain codebase because it was impossible for any outsider to tell what was going on.
2. The "just pass the damn tests" guy had a team that had quickly ramped on a new language and they had a codebase that was incomplete (because they were building it like a Lego project) but that everyone could understand because the code blocks generally stood on their own.
What was the next step: to shut down the guy who abstracted everything and force him to drive a quick & dirty rewrite that would be more maintainable, and to also start a major refactoring of the "Lego" team's code because it was so fragmented that it also was fragile and unsuited for production.
I saw this as a terrific learning experience for all involved and I was able to get away with it because the stakes were pretty low and we had time to experiment (part of the ultimate objective was upskilling the team), but the more important lessons were these:
1. Docs matter. Take the time to write clear & detailed specs first, because you'll be forced to think of edge cases and functionality you didn't originally consider, and it provides a basis for system design, too.
2. Architecture & design matter. Adhering too close to any single paradigm is probably a mistake, but it takes experience on the team to understand where the compromises are and make the best decision for that system.
That second point will not stop being true with the advent of agentic assisted software development. Like others have said, my expectation in the job market is that pay will continue to be depressed for junior hires as employers reset expectations and generally just want folks who can instruct fleets of agents to do the actual coding. Senior staff will become increasingly critical and their jobs will be painful and difficult, because it'll be assumed they can (and will be willing to) do design & code reviews of artifacts originated by agents.
What I am going to be most interested in is what happens in the SRE/Sysadmin world over the next few years as more AI-generated code hits prod in organizations that don't have adequate review & oversight functions.
You kind of answered the question yourself. Humans write the tests and then go tell the AI to write the solution that passes the tests.
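To make that handoff concrete, here's a minimal sketch of spec-as-tests in Python. The function name and behavior (`normalize_sku`) are invented for illustration, not from the thread: the spec owner writes the tests first as the contract, and whoever (or whatever) writes the body only has to satisfy them.

```python
# Hypothetical "the human owns the tests" handoff.
# The spec author commits the tests; the implementer fills in the body.

def normalize_sku(raw: str) -> str:
    """The part handed to the remote team (or the AI): make the tests pass."""
    return raw.strip().upper().replace(" ", "-")

# Written first by the spec owner, before any implementation exists:
def test_strips_whitespace():
    assert normalize_sku("  ab12 ") == "AB12"

def test_spaces_become_dashes():
    assert normalize_sku("ab 12") == "AB-12"

if __name__ == "__main__":
    test_strips_whitespace()
    test_spaces_become_dashes()
    print("all tests pass")
```

The tradeoff the anecdote describes follows directly: each function like this is understandable on its own, but nothing forces the pieces to compose into a coherent whole.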
Maybe you're just a really really good engineer and product thinking hybrid!
You learn a ton of valuable lessons going from 0 to v1. And a ton of value is created. I guess I'm unclear how you're defining "actually valuable" here.
This is evident in my personal experience: I am often the one who sees scaling and maintenance issues long before they happen. But of course the parent would claim this is impossible.
Edit: a legacy vibe coder
If v1 is successful and attracts a lot of users, it will have to have features added and maintained.
Doing that in ways that do not produce "legacy code" that will have to be thrown away and rewritten in a few years is a very different skill than getting v1 up and running, and can easily be what decides if you have a successful business or not.
When you are going from “1” to stable, there is some breathing room because you have a 1 that works, mostly. Sort of. Dealing with it may be a slow slog of sordid substitutions, but the pressure is different.
Going from 0 to 1 may involve working 80+ hour weeks with little sleep and enormous stress. It may mean meeting deadlines that make or break the product in a mad rush to fulfill a contract that saves or dooms the company. It may mean taking minutes to decide designs when days or months of consideration would have been more appropriate. And it may mean getting a lot of things wrong, but hopefully not so wrong that a version 2 is impossible.
As a final note: often v1 has substantial problems, that’s true. But sometimes it’s actually not that bad, and v2 fails because it was trying to shove the tech du jour (k8s, cough cough) where it wasn’t needed so someone could get that shiny architect promotion.
The original punchline ("you don’t learn the actually valuable lessons.") was just a bit too sharp, so you even edited in a pseudo-clarification which actually just repeats that punchline but in a softer way. Masterful!
How times have changed
Almost invariably after submitting, I see how I could clarify and/or expand on my thoughts, so I often do end up editing.
In my experience separating the roles out is silly if you're an engineer yourself. We do this a lot and that leads to silly mentalities. Greenfield developer vs maintenance engineer, MVP engineer vs Big Tech dev, FOSS hacker vs FOSS maintainer. Each of those dichotomies speaks to cultural differences that we humans amplify for no reason.
In truth the profession needs both and an engineer that can do both is the most effective. The sharpest engineers I've worked with over the years can hack new, greenfield stuff and then go on to maintaining huge projects. Hell Linus Torvalds started out by creating Linux from scratch and now he's a steward of the kernel rather than an author!
One of the tricks of HN is the 'delay' setting. https://news.ycombinator.com/item?id=231024
> There's a new field in your profile called delay. It's the time delay in minutes between when you create a comment and when it becomes visible to other people. I added this so that when there are rss feeds for comments, users can, if they want, have some time to edit them before they go out in the feed. Many users edit comments after posting them, so it would be bad if the first draft always got shipped.
I've got mine set to 2. It gives me a little bit of time for the "oh no, I need to fix things" or "I didn't mean to say that" moments before everyone else can see it.
AI makes it look like these developers can do the same job the Americans did building the product to begin with. Even if things fall apart in the end, that won’t stop attempts to reduce the cost of maintenance by an order of magnitude.
Staff or principals that have a tenure of majority greenfield development are extremely dangerous to companies IMO. Especially if they get hired in a nontraditional tech company, like utilities, banking, or insurance.
And if your entire career is nothing but maintenance and sustaining projects, you'll never know what decisions it takes to build a greenfield application that lives long enough to become a graybeard.
You'll think you do because you see all the mistakes they made, but you'll only have cynical reasons for why those mistakes get made, like "they don't care, they just make a mess and move on to the next job" or "they don't bother learning the tools/craft deeply enough, it's all speed for them".
-
To indulge myself in the niceness a bit: I don't think you write comments like the one above if you've done both, yet having done both feels like an obvious requirement to be a well-rounded Staff/Principal.
Most maintenance work suffers because of decisions made at the 0 to 1 stage. And most greenfield work fails entirely, never maturing to the maintenance stage.
So both sides have to do something right in the face of challenges unique to their side. And having familiarity with both is extremely valuable for technical leadership.
When working at larger orgs on legacy projects (which I have also done) you think "what sort of idiot did this?"
Then when you're the one tasked with getting a project shipped in two weeks that most reasonable engineers would argue needs two months, you start having to make strategic decisions at 2am about which maintainability issues will block the growth of the product on the way to funding and which ones can be fixed before 5pm by someone who will think you're an idiot in 3 years.
But to reword it: if you think the reason 0 to 1 work is typically a duct-taped mess is because of a lack of experience or understanding from greenfield devs, you'll probably fail at 0 to 1 work yourself.
Not that a noob developer who's great at selling has never landed 0 to 1 work, crapped out a half-working mess, and left with a newly padded resume... but maintenance work misses out on by far the most volatile and unpredictable stage of a software project, with its own hard lessons.
The duct-taped nature of 0 to 1 work is usually a result of the intersection of fickle humans and software engineering, not a lack of knowledge.
-
People in maintenance can do things like write new tests against the production system to ensure behavior stays the same... but what happens when, one quarter into a two-quarter project, it turns out some "stakeholder" wasn't informed and wants to make changes to the core business logic that break half the invariants you designed the system around? And then it turns out you can't do that, because legal pushed back. And then a few weeks later they came to an agreement, so now we want a bit of A and a bit of B?
Or you're in consumer and there's a new "must have" feature for the space? Maybe you'd like to dismiss it as "trend chasing", but that'll just doom your project in the market, because it turns out following trends is a requirement for people to look at everything else you've built.
Or worst of all, you know that quality engineering of the system will take 8 weeks, and there's a hard deadline on someone else's budget of 4 weeks, and you can of course decline to ship it, but then you'll need a new job. (and I know, you'll say "Gladly, I take pride in my engineering!", but again, you're probably going to end up maintaining a project that only survived by doing exactly what you quit over)
tl;dr it's Yin and Yang: you can't have one without the other, and you need to have a bit of the other side in you whenever you're working in the capacity of either to be a good technical leader.
You'll figure out what you should have built after it's been used in prod for a while. Possibly years.
How many offers did you receive? Companies have also adopted your strategy: interviewing candidates "to see what's out there" - there's a job I interviewed for that's still open after 10 months.
When I was doing a lot of hiring we wouldn't take the job posting down until we were done hiring people with that title.
It made a couple people furious because they assumed we were going to take the job posting down when we hired someone and then re-post a new listing for the next person.
One guy was even stalking LinkedIn to try to identify who was hired, without realizing that many engineers don't update their LinkedIn. Got some angry e-mails. There are some scary applicants out there.
Sometimes a specific job opening needs to stay open for a long time to hire the right person, though. I can recall some specific job listings we had open for years because none of the people we interviewed really had the specific experience we needed (though many falsely claimed it in their applications, right up until we began asking questions).
If you need to wait YEARS to hire someone with some specific experience, I can guarantee that you really didn't need that person. You're doing this just to check some specific artificial goal that has little to do with the business.
There's a difference between "critically needing" and "would benefit from."
If you can find the specialist who's done what you're doing before at higher scale and help you avoid a lot of pain, it's awesome. If not, you keep on keeping on. But as long as you don't start spending too much on the search for that candidate, it's best to keep the door open.
There is no requirement that every job opening needs to be urgently filled.
You keep repeating this like it means the job opening shouldn't exist at all. Not all job openings are for urgent demands that must be filled right away or not exist at all.
Option 1) Hire someone sub-standard and deal with either an intense drag on the team while they came up to speed or worst case having to manage them out if they couldn't cut it.
Option 2) Give up the requisition which looked like an admission that we didn't really "need" the position, and also fails to help with senior management and director promotions tied to org size.
This always seemed pathological to me and I would have loved to have the ability to build a team more slowly and intentionally. Don't let all this criticism get to you.
I've worked in specialized fields where it takes YEARS for the right candidate to even start looking for jobs. You need to have the job listings up and ready.
This was extremely true when we were working on things that could not be done remote (literal physical devices that had to be worked on with special equipment in office).
Engineers aren't interchangeable cogs.
> I can guarantee that you really didn't need that person.
So what? There are many roles where we don't "need" someone, but if the right person is out there looking for a job we want to be ready to hire them.
Engineers aren't cogs, but they are able to travel, and you can hire them by means other than full-time employment. So I suspect that's probably what you were meant to do in your situation.
Nothing about this was mission critical or even all that important, or you would have found a way to solve the problem (or you did, and it wasn't a problem to begin with). I'm in a field where people often want to hire me for some special thing like this, but it often turns out most of my life would be spent idle, because no one company has enough demand for me. I can consult instead and be busy all year, or I can take a job for someone who's OK with me being idle 80% of my time. I prefer the former for multiple reasons, but this is an example of why hiring for specialized roles that aren't mission critical is often not the thing you should be doing.
I don't know why you assumed that. We had teams. We just wanted to grow them.
We weren't sitting there waiting.
I don't know where you're getting these ideas. We weren't hiring people to repair a backlog of devices. Warranty and repair work typically goes to the contract manufacturer, for what it's worth.
Companies like to grow and develop more products. You need more people.
If this is true then those shouldn't even be public job postings. That sort of critical position is for headhunters
Why? Not everyone is on LinkedIn or has an updated profile.
Some of the best candidates I've hired were people who were in other states who were planning to move, but waiting for the right job opportunity to come up.
We also used recruiters.
Why does it make people so angry that we posted job listings for real jobs that we were really hiring for?
If only we had listened to HN comments and given up instead
I recommend the article "Up or Out: Solving the IT Turnover Crisis" [0] which gives a reasonable argument for doing exactly that.
Notes:
0 - https://thedailywtf.com/articles/up-or-out-solving-the-it-tu...
Imagine working on Voyager II .. or some old-ass banking software that still runs RPG (look it up, I'll wait), or trying to hire someone to do numerical analysis for the genesis of a format that supersedes IEEE float .. or .. whatever.
There are many applications for extremely specific skillsets out there. Suggesting otherwise is, in my opinion, clearly unwise.
There's a lot of anger in this thread at companies for making obvious choices.
If the perfect applicant happens to be looking for a job and it can save us the time and churn of switching someone internally, then yes: I would prefer to hire that person.
> The whole hiring angle you describe seems silly in terms of process and expectations
I think the silly part of this thread is all the comments from people who think they know how to operate a company better than the people who were in it, despite knowing nothing about it.
Elsecomment and on Reddit, you'll see the attitude that their years of experience should be sufficient assurance for their prospective employer that they can pick up whatever other technologies are out there.
This is often coupled with the "you shouldn't need to learn new things outside of your 9-5."
Here, you are presenting a situation where a company would rather promote from within (counter job hopping culture) and would penalize someone who is not learning about new things that their current employer isn't using in the hiring process.
---
And you've mentioned it elsecomment too - it's about the risk. Hiring an individual who isn't familiar with the technology and hasn't shown the ability to learn new material is riskier than hiring one who is either familiar with it professionally or has demonstrated the ability to learn new technologies.
That runs counter to the idea of the "best" candidate being the one who is most skilled; rather, the "best" candidate is the one who is the least risky hire.
I think we could all be a little more mindful of that in hiring. That waiting for perfection is itself a fallacy for all these reasons and plenty more.
I screen hundreds of resumes a week when hiring. I know this very well.
Hiring the wrong person can easily be a net negative to the team. Hiring too fast and desperately hiring anyone who applies is doubly bad because it occupies limited headcount and prevents you from hiring the right person when they become available.
Building teams is a long game.
So if you don't have a job opening posted on the day they're sending out applications, you may miss your shot to hire them.
“We’re making do, but we’re kind of figuring out X as we go. That’s working for now, but the problems keep getting knottier as we grow and change—it works, but it’s expensive in terms of avoidable mistakes.
Nothing’s on fire, but if we ever got the chance, we’d value authentic expertise in this niche. But if it’s just ‘I could probably figure that out,’ we’ve already got plenty of that internally.”
Where a good hire ends up helping those internal people as they develop experience and expertise, and one that’s not right is worse than none at all.
That still takes a long time if random Senior Engineer X who's looking on LinkedIn is only 10% of the way there for what you'd need for a very specialized role.
It's a small engineering org, allegedly head-hunting one principal engineer for the whole org, so it's a single opening. 10 months later they are still hunting for their special snowflake.
> I can recall some specific job listings we had open for years because none of the people we interviewed really had the specific experience we needed
This is exactly what I mean. If you can go for years without filling a role, it's non-essential, and you are, in effect, "seeing what's out there". More and more companies are getting very picky on mundane roles, such as insisting on past experience in specific industries: "Oh, your extensive experience in low-latency comms is in telecoms? We prefer someone who's worked in TV broadcast, using these niche standards specifically, even though your knowledge is directly transferable. We don't want to waste 5 days on training."
For example, your company might need a full-time network admin once its network grows to a certain size and complexity. You won’t hit that level for three years but you’d hire the perfect person now if you found them even though they might be spending a lot of idle time scrolling Hacker News for the first year or two. At 5x the growth rate, you’d need that person within less than a year, and you might be less picky about whether they are coming from a TV or telecom shop.
More specialized.
If we wanted to train someone, we'd start with an internal candidate who was familiar with the other parts of the job and then train them on this one thing.
Hiring an outsider who doesn't know the subject matter and then teaching them is less efficient and more risky. It was better to have someone in the team learn the new subject as an incremental step and then backfill the simpler work they were doing.
If your hiring model is hiring multiple people through one posting, you will probably get a lot fewer angry ex-candidates being weird (because they think you've lied to them since the posting is still up) by sending out rejections that skip that implication and just get the "we're no longer interested in you for this role" message across.
Nicer/more corporate language for both, of course.
No, this isn't possible unless you delay rejection letters until you hire someone.
We send letters as soon as the decision is made not to continue with that candidate.
Honestly it would be cruel to string them along any longer.
On the hiring side, at least in tech: interviewing really sucks. It's a big time investment from multiple people (HR, technical interviewers, managers, etc).
I'm not saying it's impossible that companies are interviewing for fun, but it seems really unlikely to me anyone would want to do interviews without seriously intending to hire someone.
I know it sucks. I've sat on the other side of the interviewing desk many times, and the charade wastes everyone's time - the candidates' most of all, because no one values it.
> I'm not saying it's impossible that companies are interviewing for fun, but it seems really unlikely to me anyone would want to do interviews without seriously intending to hire someone.
It sounds like you've never had to deal with the BS that is headcount politics, which happens more at larger organizations due to more onerous processes. Upper management (director, VP) can play all sorts of games to protect a headcount buffer[1], and everyone down the chain has to waste their time pretending to be hiring just because the division heads want to "maximize strategic flexibility" or however they phrase it.
1. Which is reasonable, IMO. Large companies are not nimble when reacting to hiring needs. The core challenge is the conflicting goals thrust on senior leadership reporting to the C-suite: avoiding labor shocks and maximizing profitability; the former requires redundancy, but the latter, leanness.
I am on the interviewing and screening side and understand what you're saying. I also empathize with the people I routinely reject who don't understand why they were rejected. It's hard to see why you might not be a right fit for a role.
> it seems really unlikely to me anyone would want to do interviews without seriously intending to hire someone.
I keep seeing this accusation thrown around and like you, I have a hard time seeing this. On the flip side, looking at it from the eyes of many disenchanted candidates, I can see how a theory like this is appealing and self-reinforcing.
I've been running the same job ad for 2 years now, as a recruiter for a big Canadian bank. I've been laughed at for having ridiculously unrealistic standards. I've been accused of running ghost ads.
I'm in the process of hiring the 13th person using this same job ad for new and existing teams that need a very particular type of engineer.
Most prefer a greenfield project.
My friends who are "book smart" and leetcode geniuses are struggling. They're my friends, but they come off a bit "off" at first glance, the stereotypical nerd vibe. They're all really struggling since they can't sell themselves properly and lack the interpersonal skills.
Large companies tend to over specialize and that’s where I see the “I’m a builder” types fall apart. That takes away agency, lowers skills, and removes variety from work. That’s when it stops being fun to me.
I would hope most people with the builder archetype are otherwise fine to keep building and maintaining.
A few years ago, when interest rates were 0% and companies were hiring at an unsustainable rate, I got a lot of criticism for cautioning engineers against non-coding roles. I talked to a lot of people who dreamed of moving into pure architect roles where they advised teams of more junior engineers about what to build, but didn't get involved with building or operating anything.
I haven't kept up with everyone but a lot of the people I know who went that route are struggling now. The work is good until a company decides to operate with leaner teams and keeps the people committing code. The real difficulties start when they have to interview at other companies after not writing much code for 3 years. I'm in a big Slack for career development where it's common for "Architect" and "Principal Engineer" titled people to be venting about how they can't get past the first round of interviews (before coding challenges!) because they're trying to sell themselves as architects without realizing that companies want hands-on builders now.
I'm no AI booster but I think this is the exact scenario where AI-driven development is going to allow those non-coding developers to shine. They still remember how code works (and have probably still done PR review from time to time) so they're well placed to write planning documents for an AI agent and verify its output.
I left to a startup where I write code and design architecture. I even had a former coworker tell me "wow you're willing to do stuff like that at this point in your career?"
The Pick-Up Artist's Guide to Tech Interviewing, you should be writing.
The first 100 subscribers get a 50% off discount the month of March, you should be announcing on LinkedIn and Tiktok, and making passive income.
The rest of us experienced people with proven track records have to learn algorithms on the weekends despite having white hair.
Did you get any offers yet? It seems the issue is not lack of interviews but lack of offers. Many companies are looking for a goldilocks candidate and are happy to pass on anyone that doesn't match their ideal.
Semi related, holy hell do companies have a lot of interview rounds these days. It seems pretty standard to spread 5-6 Teams calls over the course of a month. I get that these are high salary, high impact roles and you want to get it right. But this feels really excessive. And I'm not talking about FAANG tech giants here. It's everyone, from startups to random midsize insurance companies.
Most resumes are not very good. Beyond the obvious problems like typos, there is a lot of bad advice on the internet that turns resumes into useless noise. Screen a lot of resumes and you'll get tired of seeing "Boosted revenue by 23% by decreasing deploy times by 64%." This communicates nothing useful and we all know that revenue going up 23% YoY was not attributable to this single programmer doing anything at all.
Often I'll get candidates into interviews and they light up telling me about impressive things they did at a past job with enough detail to convince me they know the subject, but their resumes are garbage because they've followed too many influencers.
So try to work on your resume first. Try different resumes. Rewrite it and see what makes interviewers take notice and what they ignore. The most common mistake is to write a resume once and then spam it to 100 jobs. I know it's not fun to change the resume or invest time into applying for a job that may not respond, but you know what else isn't fun? Applying to 100 jobs and not getting any responses because every hiring manager has 20 tailored resumes in their inbox ahead of yours.
Having a simple but clear LinkedIn profile helps. Many scoff at this, but it works. You don't have to read LinkedIn's social media feed or do anything with the site. Just set it up and leave it for them to find.
GitHub portfolios and other things have low relative value at most companies. There are some exceptions where someone will look at it and it might tip the balance in your favor, but it's a small optimization. You need to perfect the resume first, get a LinkedIn that looks decent second, and only then think about the ancillary things.
I'm putting more time into cleaning up my LinkedIn profile since that's been my most reliable route into hiring pipelines (other than referrals and networking).
This is the "quantify everything" mantra career coaches have been repeating for decades. As the story goes, no company is going to care that you refactored the FooBar library in order to make bugs in the DingDang module easier to fix. You have to only write down things that moved some quantifiable business needle like revenue or signups, even if the link to that needle is tenuous. Obviously, this ends up penalizing hard working, talented devs who don't happen to be working in areas where wins are easily quantifiable.
It's the useless quantification that turns resumes into noise, combined with making claims that you changed revenue by yourself.
> You have to only write down things that moved some quantifiable business needle like revenue or signups, even if the link to that needle is tenuous. Obviously, this ends up penalizing hard working, talented devs who don't happen to be working in areas where wins are easily quantifiable.
Every hiring manager knows this game and sees right through it. You can't read 1,000 resumes claiming "Increased revenue by 13% by" followed by something that clearly was not the reason revenue increased 13% without becoming numb to it.
Nobody believes these.
The somewhat useful quantifications are things like "Reduced cloud spend by 50% by implementing caching". This can spark a conversation about how they diagnosed the issue, made a transition plan, ensured it was working, and all of the other things we want to hear about.
This is a person whose code you're going to be reviewing and whose documentation you're going to be reading.
If there are typos and poor formatting in the resume (which they've had the leisure of reviewing and correcting themselves), what does that say about the quality of the code or documentation they're going to write under a time constraint?
Are you going to be faced with the decision of having code with variables that have spelling errors and documentation that is grammatically or factually incorrect go through because of the time pressure?
The resume itself is a demonstration of the applicant's ability to pay attention to the details that matter in software development without showing a single line of NDAed code.
Everyone has seemingly adopted the FAANG playbook for interviewing that doesn’t really select for people who like getting their hands dirty and building. These kinds of interviews are compliance interviews: they’re for people who will put in the work to just pass the test.
There are so many interviews I’ve been in where if I don’t write the perfect solution on the first try, I’ll get failed on the interview. More than ever, I’m seeing interviewers interrupt me during systems or coding interviews before I have a chance to dig in. I’ve always seen a little bit of this, but it seems like the bar is tightening, not on skill, but on your ability to regurgitate the exact solution the interviewer has in mind.
In the past I’ve always cold applied places and only occasionally leaned on relationships. Now I’m only doing the latter. Interviewees are asked to take on risk asymmetrically compared to employers.
and sure, lots of people can't get a call back too, but starting the process means nothing
should say how many offers you got, that's a better way to normalize it
You've been interviewing forever. You're the well practiced pickup artist of job searching. Of course you'll be getting the call backs over the other 1000 applicants who don't have the same experience level applying. You "just know" how to read between the lines and tailor a resume, whip up a cover letter, etc whereas they're making mistakes.
And there's also the advantage of having a current job, instead of an increasingly larger jobless gap that not only decreases your chances over time, but also contributes to the cycle of "less chance -> wider gap -> increased anxiety -> less chance".
Fumble the first few months due to a combination of a lack of interviewing practice and job postings that never intended to hire anyway, or that are looking for someone who checks literally every box on their shopping list, all while still dragging you through a 4-8 week journey, and suddenly your position is not that good anymore.
The majority of engineers, in my hiring experience, failed very simple tests pre-AI. In a world where anyone can code, they're no better than previously non-technical people. The CS degree is no longer protection.
The gap between average and the best engineers now, though, is even higher. The best engineers can visualize the whole architecture in their head, and describe exactly what they want to an AI - their productivity is multiplied, and they rarely get slowed down.
While this could be done by junior or senior, I think junior usually has the slight advantage in being more AI-native and knowing how to effectively prompt and work with AI, though not always.
AI has fundamentally broken the education system in a way that will take decades for it to fully recover. Even if we figure out how to operate with AI properly in an educational setting in such a way that learners actually still learn, the damage from years of unqualified people earning degrees and then entering academia is going to reverberate through the next 50 years as those folks go on to teach...
That time when you got to internalise through first hand experience what good & bad look like is when you built the skill/intuition that now differentiates competent LLM wielding devs from the vibers. The problem is that expectations of juniors are inevitably rising, and they don't have the experience or confidence (or motivation) to push back on the 'why don't you just AI' management narrative, so are by default turning to rolling the dice to meet those expectations. This is how we end up with a generation of devs that truly don't understand the technology they're deploying and imho this is the boringdystopia / skynet future that we all need to defend against.
I know it's probably been said a million times, but this kinda feels like global warming, in that it's a problem that we fundamentally will never be able to fix if we just continue to chase short term profit & infinite growth.
I would say that baptism by fire _is_ where the quality of an academic education comes from, historically at least. They are the same picture.
In my experience, target schools are the only universities now that can make their assignments too hard for AI.
When my university tried that, the assignments were too hard for students. So they gave up.
If you didn't have high information density in essays you were torn into. AI was a disadvantage due to verboseness.
Most people dropped the class and prof went on sabbatical.
Education and training and entry level work build judgement.
AI is either the next wheel or abysmal doom for future generations. I see both and neither at the same time.
In a corporate environment where navigating processes, politics, and other non-dev tasks takes significantly longer than actual coding, AI is just a slightly better Google search. And trust me, all these non-dev parts are still growing, and growing fast. It's useful, but it's not elevating people beyond their true levels in any significant way (I guess we can agree that e.g. number of lines produced per day ain't a good metric, rather material for some Dilbert-esque comic on a Friday afternoon).
> I'll usually turn off AI tools for a good chunk of the day just to make sure I don't get too rusty.
same, but it's hard to do when $work has set a quota on AI usage and # of AI-related PRs every month...

I think this must be part of it. I see so many posts about people burning a thousand dollars in AI credits building a small app, and I have no idea why. I use the $20 Claude plan and I rarely run out of usage, and I make all kinds of things. I just describe what I want, do a few back-and-forths of writing out the architecture, and Claude does it.
I think the folks burning thousands of dollars of credits are unable to describe what they want.
Basically, yes. I bought 'business tier' and I know about webdev but I'm somewhere between intern and junior, so I do a lot of discussing. One session is "I want [functionality and constraints], ask me relevant major design questions" then implementation, then me investigating and asking for fixes.
> I think the folks burning thousands of dollars of credits are unable to describe what they want.
my related question whenever i hear a story like that: are they just filthy rich, or do they have any plan to make that money back?

But juniors don't (usually) have the knowledge to assess if what the AI has produced is ok or not. I agree that anybody (junior or senior) can produce something with AI; the key question is whether the same person has the skills to assess (e.g., to ask the right questions) that the produced output is what's needed. In my experience, junior + AI is just a waste of money (tokens) and a nightmare to take accountability for.
I perceive the AI itself as a very fast junior that I pair program with. So you basically need the seniority to be able to work with a "junior ai".
The bar for human juniors is now way higher than it used to be.
What do you think that is now? How does someone signal being 'past the bar'? If I hand wrote a toy gaussian splat renderer is that better than someone who used AI to implement a well optimized one with lots of features in vulkan?
I very much follow the pattern of having the whole architecture in my head and describe it to the AI which generates the appropriate code. So now the bottlenecks are all process related: availability of people to review my PRs, security sign offs on new development, waiting on CI builds and deployments, stakeholder validation, etc. etc.
Did you consider that tech whiteboard / leetcode interviews are unnaturally stressful environments? Have you gone through a mid-to-difficult technical appraisal yourself lately? Try it out just to get an idea of how it feels on the other side...
I always asked a simple question like here is an array full of objects. Please filter out any objects where the "age" property is less than 20, or the "eye color" property is red or blue. It was meant more as a sanity check that this person can do basic programming than anything else.
Tons and tons of people failed to make basically any progress, much less solve the problem, despite saying that they worked programming day to day in that language. For a mid level role I would filter out a good 8 or 9 out of ten applicants with it.
I would consider it a non-leetcode type of question since it did not require any algorithm tricks or any optimization in time/space.
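For reference, the whole exercise fits in a few lines. A sketch in JavaScript, with property names guessed from the description above:

```javascript
// Screening-question sketch: filter OUT any objects where age < 20,
// or where eye color is red or blue (i.e. keep everything else).
// The object shape and property names here are assumptions.
function screenObjects(people) {
  return people.filter(
    (p) => p.age >= 20 && p.eyeColor !== "red" && p.eyeColor !== "blue"
  );
}

// Example run with made-up data:
const kept = screenObjects([
  { age: 25, eyeColor: "green" },
  { age: 15, eyeColor: "green" }, // dropped: too young
  { age: 30, eyeColor: "red" },   // dropped: red eyes
]);
console.log(kept); // only the 25-year-old with green eyes remains
```

The point of a question like this is exactly that it is a one-liner around `Array.prototype.filter`; anyone programming in the language day to day should produce something equivalent in a couple of minutes.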
Nowadays that kind of question is trivial for AI so it doesn't seem like the best test. I'm not hiring right now, but when I do I'm not sure what I will ask.
You're assuming the question has to even be that difficult. I've proctored sessions for senior-level webdev roles where the questions were akin to "baby's first React component" -- write a component that updates a counter when you click a button. So many candidates (who purported to be working with React for years) would fail, abysmally. Not like they were just making small mistakes; I didn't even care about best practices -- they just needed to make it work. So many failed. Lot of frauds out there.
There are so many software engineering candidates who literally cannot write the simplest code. I even had someone actually say "I don't really write code at my current job, I'm more of a thought leader." Bzzzzzt.
I've always prepared what I called level 1, level 2, and level 3 questions ready for candidates. But, I almost never even got to level 2, and never in 20 years of interviewing got to my level 3 questions.
I've been around the block for over 3 decades. I've had a number of high level positions across both IC and management tracks. These days I'm very hands on keyboard across a number of clients. If you asked me to write a basic for loop or if statement, there's a small chance I'd flub the exact syntax if writing on a whiteboard. Both because I bounce between languages all day and wires get crossed on the fly, but also the standard interview pressure type arguments. Whereas if the test is "does this person understand what a for loop is and how it works?", then yes, I can easily demonstrate I do.
In real life I'm not going to take an interview where there's not already that degree of trust, so if that question comes up something is already wrong. But I'm sure there are interviewers in the world who'd fail someone for that.
One of the worst guys took 20 minutes, with me having to coach him through it the entire time. It was a true exercise in patience, but I don't mind helping people learn new things. When he got his rejection email, he actually complained to the recruiter because he thought he did really well. Dude...
Half of the people I screen fail it. It's crazy.
https://blog.codinghorror.com/why-cant-programmers-program/
Most interviewees failed fizzbuzz, and that was 20 years ago.
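For anyone who hasn't seen it, FizzBuzz is small enough to state in full. A minimal JavaScript version (trivial by design; the linked article's point is that most applicants still couldn't write it):

```javascript
// Classic FizzBuzz: for 1..100, print "Fizz" for multiples of 3,
// "Buzz" for multiples of 5, "FizzBuzz" for multiples of both,
// and the number itself otherwise.
function fizzbuzz(n) {
  if (n % 15 === 0) return "FizzBuzz";
  if (n % 3 === 0) return "Fizz";
  if (n % 5 === 0) return "Buzz";
  return String(n);
}

for (let i = 1; i <= 100; i++) {
  console.log(fizzbuzz(i));
}
```

The test was never about the code itself; it was a floor check that filtered out a surprising share of applicants.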
It’s been well over a decade that I’ve had to do the coding interview monkey dance and I actually turned down an offer where I did pass a coding interview because I found it insulting and took a job for slightly less money where the new to the company director was interested in a more strategic hire (2016). That was the same thing that happened before in 2014 and after in 2018 - a new manager/director/CTO looking for a strategic hire.
In fact, even my job at BigTech (AWS ProServe, as a full time blue badge RSU earning employee) as a customer facing consultant specializing in app dev was all behavioral, as was my next full time job as a staff consultant in 2023.
I’m 51 years old and was 40 in 2014. If I’m still trying to compete based on my ability to reverse a b tree on the whiteboard even at 40, I have made some horrible life decisions.
(Well actually I did make a horrible life decision staying at my second job too long until 2008 and becoming an expert beginner. But that’s another story)
I can never get over how this became a thing. Was listening to a Brian Cox video on YouTube the other night (something about his voice helps me sleep). He said "I don't memorize formulas, it's easy to look them up."
If you ever need to reverse a b tree (in 30+ years of writing code, I never have) it's easy to look that up. It tells me nothing about your ability as a developer of real software that you spent time memorizing trivia before an interview.
It's a contrived scenario, but the whole point is that it measures min(a,b) where `a` is your ability to think, and `b` is your ability to prepare (and memorize answers ahead of time). (I'd personally try to find ways to measure `a` instead of `b`, maybe by asking questions people wouldn't have heard before.)
So much of tech hiring cargo culting has been built up around leetcode and other coding problems, puzzles, and more. We all pay lip service to systems thinking and architecture, but I question if even those are testing the correct things for the modern era.
And then what happens in a year when the models can handle that as well?
Let them use their preferred setup and AI to the full extent they want, and evaluate their output and their methodology. Ask questions of "why did you choose X over Y", especially if you're skeptical, and see their reasoning. Ask what they'd do next with more time.
It's clear when a candidate can build an entire working product, end-to-end, in <1 day vs. someone who struggles to create a bug-free MVP and would take a week for the product.
In addition to the technical interview, hiring them on a trial basis is the absolute best if possible.
Taste and technical understanding of goals and implementation to reach those goals is the biggest differentiator now. AI can handle all the code and syntax, but it's not great at architecture yet - it defaults to what's mid if not otherwise instructed.
I do feel like there's something *different* about the required skillset now, and it's not something that all engineers have, even experienced ones. But I can't put my finger on what exactly it is. If I'm right though, classic interview techniques won't select for it because they were never intended to do so.
Either the machines exterminate us or we become glorified pets.
Hope the AIs prefer us to cats (even though that's a long shot).
An amateur with a chess engine that blunders 10% of the time will hardly play much better than if they didn't use it. They might even play worse. Over the course of a game, those small probabilities stack up to make a blunder a certainty, and the amateur will not be able to distinguish it from a good move.
However, an experienced player with the same broken engine will easily beat even a grandmaster since they will be able to recognise the blunder and ignore it.
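The "stack up to a certainty" claim is easy to check with a back-of-envelope calculation, assuming independent moves and a 40-move game (both simplifications I'm adding, not part of the original analogy):

```javascript
// If the engine blunders on 10% of moves, independently, the chance
// of at least one blunder over a 40-move game is 1 - 0.9^40.
const perMoveBlunderRate = 0.1;
const movesPerGame = 40; // assumed typical game length

const atLeastOneBlunder = 1 - Math.pow(1 - perMoveBlunderRate, movesPerGame);
console.log(atLeastOneBlunder.toFixed(3)); // prints "0.985"
```

So under these assumptions a blunder is a near certainty in every game, which is the amateur's problem: they can't tell which move it was, while the experienced player can.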
I often find myself asking LLMs "but if you do X won't it be broken because Y?". If you can't see the blunders and use LLMs as slot machines then you're going to spend more money in order to iterate slower.
I guess? I don't really see why that would be the case. Being a senior is also about understanding the requirements better and knowing how/what to test. I mean we're talking about prompting text into a textarea, something I think even an "old timer" can do pretty well.
I'm not sure why junior engineers would be any better at that though, unless it's just that they're approaching it with less bias and reaping beginners luck.
Before gen AI, I used to give candidates at my company a quick one-hour remote screening test with a couple of random "FizzBuzz"-style questions. I would usually paraphrase the question so a simple Google search would not immediately surface the answer, and 80% of candidates failed at coding a working solution, which was very much in line with the article. Post gen AI, that test effectively dropped to a 0% failure rate, so we changed our selection process.
[1] https://blog.codinghorror.com/why-cant-programmers-program/
I'd go a step further and say the engineers who, unprompted, discover requirements and discuss their own designs with others have an even better time. You need to effectively communicate your thoughts to coding agents, but perhaps more crucially you need to fit your ever-growing backyard of responsibilities into the larger picture. Being that bridge requires a great level of confidence and clear-headedness and will be increasingly valued.
I should have a credential I have to maintain every few years, one or two interviews, and that should get me a job.
We have a lot of people where if you gave them clear requirements, they could knock out features and they were useful for that, but I have an army of agents that can do that now for pennies. We don't need that any more. We need people who have product vision and systems design and software engineering skills. I literally don't even care if they can code with any competency.
Btw, if you think that copying and pasting a jira ticket into claude is a skill that people are going to pay you for, that is also wrong. You need to not just be able to use AI to code, you need to be able to do it _at scale_. You need to be able to manage and orchestrate fleets of ai agents writing code.
It's almost impossible to screen for "high performers" though. When interviewing, you just don't know who you are getting, short of like, they can solve your leetcode questions well and they had good answers to pretty high-level "work experience" questions.
So I don't think this can be true on the hiring side. Maybe on choosing who they let go when cutting down the workforce, they can look at general performance reviews and such, but I doubt it plays a role in hiring.
That's not true? leetcode is crap, but usually you can learn a lot about a person from how they approach problems and on what kind of questions they ask.
I graduated 9 months ago. In that time I've merged more PRs than anyone else, reduced mean time to merge by 20% on a project with 300 developers with an automated code review tool, and in the past week vibe coded an entire Kubernetes cluster that can remotely execute our builds (working on making it more reliable before putting it into prod).
None of this matters.
The companies/teams like OpenAI or Google Deepmind that are allegedly hiring these super juniors at huge salaries only do so from target schools like Waterloo or MIT. If you don't work at a top company your compensation package is the same as ever. I am not getting promoted faster, my bonus went from 9% to 14% and I got a few thousand in spot bonuses.
From my perspective, this field is turning into finance or law, where the risk of a bad hire due to the heightened skill floor is so high that if you DIDN'T go to a target school you're not getting a top job no matter how good you are. Like how Yale goes to Big Law at $250k while non T14 gets $90k doing insurance defence and there's no movement between the categories. 20-30% of my classmates are still unemployed.
We cannot get around this by interviewing well because anyone can cheat on interviews with AI, so they don't even give interviews or coding assessments to my school. We cannot get around this with better projects because anyone can release a vibe coded library.
It appears the only thing that matters is pedigree of education because 4 years of in person exams from a top school aren't easy to fake.
People are posting about pull requests, use of AIs, yada yada. But they never tell us what they are trying to produce. Surely this should be the first thing in the post:
- I am developing an X
- I use an LLM to write some of the code for it ... etc.
- I have these ... testing problems
- I have these problems with the VCS/build system ...
Otherwise it is all generalised, well "stuff". And maybe, dare I say it, slop.
edit: to clarify, I'm using recc which wraps the compiler commands like distcc or ccache. It doesn't require developers to give up their workspace.
Right now I'm using buildbarn. Originally, I used sccache but there's a hard cap on parallel jobs.
In terms of how LLMs help, they got me through all the gruntwork of writing jsonnet and dockerfiles. I have barely touched that syntax before so having AI churn it out was helpful to driving towards the proof of concept. Otherwise I'd be looking up "how do I copy a file into my Docker container".
AI also meant I didn't have to spend a lot of time evaluating competing solutions. I got sccache working in a day and when it didn't scale I threw away all that work and started over.
In terms of where the LLM fell short, it constantly lies to me. For example, it mounted the host filesystem into the docker image so it could get access to the toolchains instead of making the docker images self-contained like it said it would.
It also kept trying not to do the work, e.g. it randomly decides in the thinking tokens "let's fall back to a local caching solution since the distributed option didn't work", then spams me with checkmark emojis and claims in the chat message that the distributed solution is complete.
A decent amount of it is slop, to be honest, but an 80% working solution means I am getting more money and resources to turn this into a real initiative. At which point I'll rewrite the code again but I'll pay closer attention now that I know docker better.
> The goal is to transparently replace dedicated developer workstation
Isn't there a less convoluted way of making the best engineers leave? I am half serious here. If you want your software to run slow, IT could equally well install corporate security software on developer laptops. Oops, I did it again. Oh well, in all seriousness, I have never seen any performance problem being solved by running it on Azure's virtualization. I am afraid you are replacing the hardware layer with a software layer of ungodly complexity, which you can be sure will be functionally incomplete.

Are you sure they don't have to fix the build pipeline first? Tens of thousands of vCPUs for a single compilation run, or to accommodate 100 developers who try to compile their own changes?
Sorry, I wasn't clear. I am not virtualizing the workspace. I'm using `recc` which is like `distcc` or `ccache` in that it wraps the compiler job. Every developer keeps their workstation. It just routes the actual `clang` or `gcc` calls to a Kubernetes cluster which provides distributed build and cache.
> Isn't there a less convoluted way of making the best engineers leave?
We have 7000+ compiler jobs in a clean build because it is a big codebase. People are waiting hours for CI.
I'm sure that drives attrition and bringing that down to minutes will help retain talent.
> Tens of thousands of vCPUs for a single compilation run, or to accommodate 100 developers who try to compile their own changes?
Because it uses remote execution, it will ideally do both. My belief is that an individual developer launching 6000 compiler jobs because they changed a header will smooth out over 300 developers that generally do incremental builds. Likewise, this'll eliminate redundant recompilation when git pulling since this also serves as a cache.
It happens when someone modifies a widely included header file. Which there are a lot of thanks to our use of templates. And this is just our small team of 300 people.
> Have you thought about splitting that giant thing in smaller chunks?
Yes. We've tried, but it's not scaling. Unfortunately, we've banned tactics like pImpl and dynamic linking that would split up the codebase, unless they're profiled not to be on a hot path. Speed is important because I'm writing tests for a semiconductor fab, and test time there is more expensive than at any other kind of factory on Earth.
I tried stuff like precompiled headers but the fact only one can be used per compilation job meant it didn't scale to our codebase.
It sounds like you have a job, right out of college, but you're griping about not getting promoted faster. People generally don't get promoted 9 months into a job.
I'm reading your post and I am genuinely impressed by what you claim to have done. At the same time I am confused about what you would like to achieve within the first year of your professional career. You seem to be doing quite well, even in this challenging environment.
I am in great fear of ending up on the wrong side of the K shaped recovery.
Everyone is telling me I need to be exceptional or unemployed because the middle won't exist in 2 years.
I want to secure the résumé that gives me the highest possibility of retaining employment if there's a sudden AI layoff tomorrow. A fast career trajectory catches HR's eye even if they don't understand the technicals.
How many juniors are OpenAI or GDM going to hire in a year? Probably double digits at max, so the chances are super slim, and they are by nature allowed to be as picky as they want.
That being said, I do agree this industry is turning into finance/law, but that won’t last long either. I genuinely can’t foresee what happens if/when AGI/ASI is really here; once it starts coming up with ideas to better itself, there will be no incentive to hire any human for a large sum anymore, except maybe a single-digit number of individuals on earth.
Because AI accelerates the rate of knowledge gain, this gets even faster.
> The retreat challenged the narrative that AI eliminates the need for junior developers. Juniors are more profitable than they have ever been. AI tools get them past the awkward initial net-negative phase faster. They serve as a call option on future productivity. And they are better at AI tools than senior engineers, having never developed the habits and assumptions that slow adoption.
> The real concern is mid-level engineers who came up during the decade-long hiring boom and may not have developed the fundamentals needed to thrive in the new environment. This population represents the bulk of the industry by volume, and retraining them is genuinely difficult. The retreat discussed whether apprenticeship models, rotation programs and lifelong learning structures could address this gap, but acknowledged that no organization has solved it yet.
Someone who jumps higher than expected when the boss demands it?
Someone who works 996 in the office?
Or someone who knows what they’re doing?
I think this is bigger than any individual. It’s just a matter of time before you’re let go. There’s no loyalty from companies at all. Not when they’re seeing higher than expected profits and are still cutting huge percentages of staff every year. There’s no strategy or preference to it. I don’t think this has to do with how you or I perform on the job.
Most people I’ve talked to lately who are still employed are watching out for their job to get cut.
I would not mind switching, but 1. I don't see interesting positions, 2. they don't pay well, and 3. they might not even want me.
It might also be just my niche, but finding a good position feels completely impossible for me.
I am doing cross-platform mobile development, and I'm wondering how I could transition into backend development; I've even started considering decentralized finance…
3.5 years ago was peak ZIRP hiring craziness.
It wasn't a normal reference point.
My resume isn't bad on paper either. It's not FAANG coded, but it's decent experience.
They're just as capable of typing prompts into AI, but what they don't have is good judgement of what good work/code looks like, so what's the point of asking a junior engineer to do something vs asking the LLM directly?
Nobody is gonna lose money because some script that generates yaml for the build process every hour nested three loops instead of two. Intern, AI, junior dev, junior dev telling an intern how to use AI, doesn't matter. If it works for the week it'll work for the decade. If someone needs to pick it apart and fix something in a year it'll either take no time because they know enough to do it easily or it'll be a good low stakes learning exercise for a junior.
Everyone wants to think their stuff is important but 99.9% of code is low stakes support code either in applications or in infrastructure around them.
Wouldn't the assumption be the opposite, in that AI is magnifying the decision making of the engineer and so you get more payback by having the senior drive the AI?
I suspect a lot of it will be enforcing best practices via agents.md/claude.md to create a more constrained environment.
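As a sketch of what that kind of constraints file can look like (the filename follows the agents.md convention, but every rule, command, and threshold below is illustrative, not a standard):

```markdown
# AGENTS.md — conventions this repo's coding agents must follow

## Hard constraints
- Never commit directly to `main`; open a PR from a feature branch.
- Every change must keep the existing test suite green; run the project's
  test command before declaring a task done.
- Do not add new dependencies without calling them out in the PR description.

## Style
- Match the surrounding code's formatting; do not reformat unrelated files.
- Prefer small, pure functions for business logic; flag any function that
  grows past roughly 40 lines.

## Review discipline
- Summarize each change set in plain English before finalizing, so a human
  reviewer can check intent against diff.
```

The point is less any individual rule than that the agent reads this file on every task, so the senior engineer's judgement gets applied consistently instead of being re-prompted each time.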
Juniors seem to split into two camps: those who trust everything the AI says, and those who review every step of the implementation. It's extremely hard to guide the AI while you are still learning the basics, even though Opus 4.6 is a very powerful model.
Quite often the AI guesses accurately and you save the time you'd have spent crafting the perfect prompt. Recently, my PM shared a nigh-on incomprehensible hand-scribbled diagram on Slack (which, in fairness, was more or less a joke). I uploaded it to Gemini with the prompt "WTF does this diagram mean?". Even without a shred of context, it figured out that it was some kind of product feature matrix and produced a perfect three-paragraph summary.
I've never really seen the value in the planning phase as you're free to just throw away whatever the AI produces and try again with a different prompt. That said, I don't pay for my tokens at work. Is planning perhaps useful as a way of reducing total token usage?
This could simply be a matter of style however.
Being able to clearly describe a problem and work with the AI to design a solution, prioritise what to put the AI to work on, set up good harnesses so the quality of the output is kept high, figure out what parallelises well and what’s going to set off agents that are stepping on each others toes… all of this needs experience and judgement and delegation and project organisation skills.
AI is supercharging tech leads. Beginners might be able to skill up faster, but they’re not getting the same results.
For average to low-performing intermediates/seniors... there's not much difference in output between them and a good junior at this point. Claude really raised the skill floor for software development.
I find it easier to get a reasonably smart senior to use AI in a good way, than to train a junior in what thinking to do, and what to outsource, learning basics about good design, robustness and risk analysis. The tools aren't the problem per se, it's more about how people use them. Bit of a slippery slope.
That's just my anecdotal experience from not a whole lot of data though. I think the industry will figure it out once things calm down a bit. Right now, I usually make the bet to get one senior rather than two juniors. Quite different to my strategy from a few years ago.
This is the K-shaped economy playing out. It's a signal that the American middle class is hollowing out. Bad, very bad.
While I could buy that hiring managers believe this, it's not actually true.
The gulf between the quality of what a sr developer can do with these tools and what a jr can do is huge. They simply don't know what they don't know and effective prompting requires effective spec writing.
A rando jr vibe coder can churn out code like there's no tomorrow, but that doesn't mean it's actually right.
This is also where the microservices pattern fits in well, because each individual unit is so small that no design is needed.
I’ll bite: why? Genuine question, not a weird gotcha.
what an interesting way to say most programmers find it extremely difficult to get a job. you sound like you have some kind of insight, but is there anything notable about jobs drying up for people who aren't cheap enough or who aren't valuable enough? that's just how jobs dry up. anytime it's a bad job market for workers it'll be like that.
it is a great way to frame the coming tech crash. it allows whoever remains to fancy themselves as top talent.
i argue that's a good outcome. Seniors who aren't high performers should not command high salaries. I think the anomaly that is the post-covid boom is warping salary expectations vs difficulty of job (and the competency required for it).
Seniors have much more advantage right now in using AI than Juniors. Seniors get to lean in on their experience in checking AI results. Juniors rely on the AI's experience instead, which isn't as useful.
Before that role, I spent two years at another government contractor working on various govt. applications doing UX research, design, and front-end UI development. Overall, I’ve had a 17-year career in UX Research, Design and Development, starting at an ad agency in 2009.
From 2016 to 2022, I worked hard in government projects and enjoyed collaborating with great, close-knit coworkers and receiving consistently positive client feedback. From 2022 to 2026, things changed as the company grew—my role narrowed to UX research and design while newer hires handled UI development. I often felt underutilized and raised it, but management assured me I was doing well. With little direction from my last manager, I focused on staying visible to the client by monitoring user chats, identifying UX issues, and proposing design solutions that the client appreciated and the development team implemented.
Looking at where the tech industry is now—with thousands laid off from government IT and the broader tech sector flooding the job market, creating rising competition, constant pressure to work harder (Elon wants us to work as hard as Chinese workers do) and AI rapidly reshaping creative and development roles—I’m not very interested in that level of stress. I worked hard for many years and enjoyed it, but I value MY LIFE and MY HEALTH more than participating in the current “battle royale” environment in tech.
Overall, now with AI I feel graphic & web design, as well as front-end web development, is a stupid career! It was a nice run: bought two houses from it, worked remotely, and when things were slow worked from wherever in the lower 48. And now... in April I'm starting nursing school, and I'm not young (20 years of work left in me). Roll with the punches here, yet the punches are gonna punch hundreds of thousands to millions in the face... not sure how this is any good for an economy and society, but here we are! If you are like me, sell your house and stash the money away to buy houses when the crash from AI happens!
I suspect that nursing is an excellent choice.
I got pushed out, and slapped with the Dead Fish of SV Ageism. It was brutal, and I got pissed off.
But in the long run, it's been the best thing that ever happened to me. I would have liked the extra ten years of salary and saving, but I'm not entirely sure that I would have survived it.
Truth is, when I was part of larger orgs/enterprise I definitely saw some folks who were dead weight, and I don’t mean to be harsh, a few of these knew they weren’t contributing and were being malicious in that sense.
Similarly, I wonder how many high performers now are taking multiple jobs thanks to remote work and exposing the mid to low performers. Like some kind of developer hypergamy taking place.
I've been looking again this year and the landscape has changed drastically. Specialization is the name of the game, I have a good amount of experience working with Growth initiatives and I've been getting good responses from roles that are looking for either Growth or Design engineers, roles that were not as prevalent years ago.
That sounds good for many of us (and don’t we all like to think we’re top candidates here on HN…) but is there any data to back this up? Or it just anecdata (not to dismiss anecdata, still useful info).
That is pretty context sensitive. You're correct that there's no real deep AI use expertise broadly understood to exist at this point (unless you're Steve Yegge?), but if people think they can toss out the engineers with experience in the systems that have been around a while, with junior developers "guiding" changes — that's likely a good way for a business to fall on its sword.
People with experience and/or credentials desired by companies in areas of growth (i.e. AI) are always in high demand
This is tautological.
Apparently over a third have been affected in my domain. Which is crazy. Pretty much everyone in my immediate band has been hit at some point. Those that weren't were usually around 5-8 years above me. So basically a different generational band altogether.
If intermediates were being pushed out they would just take junior roles to have something
Companies really don’t like hiring Juniors in general
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
What we really need is the -10X engineer ;)
Alas, his job would entirely consist of debloating the slop everyone else is pumping out "at inference speed".
You can be a great unblocker and team lead, work well within cross-cutting areas and with interdepartmental stakeholders, and have a history of strong technical performance.
And yet it's nebulous whether that makes you a high performer to those hiring. I'm seeing 'culture fit' as a common reason people aren't getting hired again. That was out of vogue for a good while.
I've noticed a huge tightening of the rope around that sort of thing.
I can't tell you how many times I've passed all the tests, all the interview things, get to the final round with the team and the rejection email comes in despite having good conversations. By all accounts, I believe any person would say the interview went well.
My peers are reporting the same things.
I haven't found that to be true. Unless by "top candidates" you mean people working at actual AI companies such as Alphabet/Meta/OpenAI/Anthropic. If you're an AI-user and not an AI scientist it's bad out there, even for senior+ developers who previously worked in "FAANG".
HN user: not in my experience!
It's pretty depressing. I'd take just about anything at the moment. I understand desperation going into a job interview isn't ideal either.
It feels like I'm in a hole.
As was foretold in Tyler Cowen's 2013 book "Average Is Over".
In it he argued that the modern economy will undergo a permanent shift where "average" performance no longer guarantees a stable, middle-class life.
He predicted that the economy will split into two distinct classes: a high-earning elite (roughly 10–15% of the population) who thrive by collaborating with technology, and a larger group (85–90%) facing stagnant wages and fewer opportunities.
AI summary of the other key points of that book:
The "Man + Machine" Advantage: Success will belong to those who can effectively use smart machines. Cowen uses Freestyle Chess (teams of humans and computers) as an analogy, noting that human intuition combined with machine processing power consistently outperforms either working alone.
The Power of Conscientiousness: In a world of abundant information, the scarcest and most valuable traits will be self-motivation, discipline, and the ability to focus.
Hyper-Meritocracy: Advanced data and machine intelligence make it easier for employers to measure an individual's exact economic value. This leads to extreme salary inequality as top performers are identified and rewarded more precisely.
A New Social Contract: Cowen predicts a future where individuals must be more self-reliant. He suggests society will move toward lower-cost living models for the non-elite, featuring cheaper housing and "bread and circuses" in the form of low-cost digital entertainment and online education.
EDIT: Notice how we're basically already here: Netflix is cheap, YT is free, Khan Academy and MIT OCW is free, Coursera/Udemy/etc. are cheap.
Stagnant vs. Dynamic Sectors: The economic divide is worsened by "low accountability" sectors like education and healthcare, where productivity is hard to measure and costs continue to rise, unlike tech-driven sectors that see rapid gains.
Unfortunately, this one hasn't aged well. Human+Computer is now consistently outperformed by Computer alone in the chess world. Also, the name Freestyle Chess is now used for Chess960, the chess variant where starting positions are randomized. It has nothing to do with computer chess now!
Happy to be on the high-end ^^.
Tell me about all the junior developers you've hired (it's none)
This is probably the dumbest take I've heard. Juniors are the most likely to make mistakes with AI, because they don't know the pitfalls of what they're doing.