The vertical axis is not test type. It is "would you run the test". At the bottom are deterministic, fast tests for something completely unrelated to what you are working on - but they are so easy/fast you run them anyway, just in case. As you move up you get tests that you more and more want to avoid running: tests that take a long time, tests that randomly fail when nothing is wrong, tests that need some setup, tests that need some expensive license (I can't think of more right now, but I'm sure there are).
You want to drive everything down as far as possible, but there is value in the tests that sit higher, so you won't get rid of them. Just remember: as soon as you cross the "I would run this test but I'm skipping it for now because it is annoying" line, you need a separate process to ensure the test is eventually run. You are trading speed now against the risk that the test finds something later, when it is 10x harder to fix - when a test is run all the time you know what caused the failure and can go right there, while later you have done several things since and have forgotten the details. 10x is an estimate; depending on where in your process you put it, it could be 100 or even 1000 times harder.
I’ve had quite a bit of success in helping my dev teams to own quality, devising and writing their own test cases, maintaining test pipelines, running bug hunts, etc. 90% of this can be attributed to treating developers as my customer, for whom I build software products which allow them to be more productive.
Looks like you've never worked with a decent QA team and don't understand the full scope of quality management. They have plenty of creative tasks that don't overlap with other roles.
Well, sort of maybe, but it's not always economical. For a normal web app - yeah I guess. Depends on the complexity of the software and the environment / inputs it deals with.
And then there's exploratory testing, where I always found a good QA invaluable. Sure, you can automate that to some degree. But someone who knows the software well and tries to find ways to make it behave unexpectedly is also valuable.
I would agree that solid development practices can handle 80% of the overall QA though, mainly regression testing. But those last 20%, well I think about those differently.
Yes, I agree. We do this too. Findings are followed by a post-mortem-like process:
- fix the problem
- produce an automated test
- evaluate why the feature wasn't autotested properly
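The "produce an automated test" step above is the classic regression-test pattern: first capture the finding as a failing test, then fix, so the bug can't quietly return. A tiny sketch with an entirely hypothetical bug (the function and the failure are invented for illustration):

```python
# Hypothetical finding: parse_price("1,000") used to raise because of the
# thousands separator. The fix strips separators before converting.
def parse_price(text):
    return int(text.replace(",", ""))

# Regression test written straight from the bug report, kept forever.
def test_parse_price_handles_thousands_separator():
    assert parse_price("1,000") == 1000
    assert parse_price("42") == 42

test_parse_price_handles_thousands_separator()
```

The third step - asking why the case wasn't covered in the first place - is what turns one fix into a systematic improvement of the test suite.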
What do you define as "normal"? I can't think of anything harder to test than a web app.
Even a seemingly trivial static HTML site with some CSS on it will already have inconsistencies across every browser and device. Even if you fix all of that (unlikely), you still haven't done your WCAG compliance, SEO, etc.
The web is probably the best example case for needing a QA team.