1. Any given system has a finite number of findable vulnerabilities.
2. All findable vulnerabilities are fixable (if not in software then with a new hardware revision).
3. Fixing a vulnerability while keeping the same intended functionality introduces, on average, fewer than one new findable vulnerability.
4. It is possible to cease adding new features to a system and, from that point forward, focus only on fixing vulnerabilities.
If all four are true, then perfect security seems possible, in some sense. I think some vulnerabilities might not be fixable if you count things like users being tricked into revealing their passwords. But if you restrict the definition of a vulnerability to some narrower meaning that still captures most of what people mean by a computer vulnerability, then I think all four statements are probably true.
Perfect security might be nearly impossible in practice, because the remaining vulnerabilities get more difficult to find and fix over time, but I think we should expect the discovery of new vulnerabilities to eventually become arbitrarily slow in a hypothetical system that prioritized security above all else.
If you imagine a vulnerability scanner as fast and convenient as a linter, it would be much cheaper to write secure code in the first place. Probably not perfectly secure code, but secure enough to keep finding exploits expensive.
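To make the linter analogy concrete, here is a minimal sketch of what such a workflow could look like: a pre-commit hook that runs a security scanner over staged files and blocks the commit on any finding, the same way a lint failure would. The choice of scanner (the real Python tool bandit) and the hook wiring are assumptions for illustration, not part of the argument above.

```python
#!/usr/bin/env python3
"""Hypothetical pre-commit hook that treats a security scanner like a linter.

A sketch, not a finished tool: it assumes the Python scanner `bandit`
is installed and that blocking a commit on any finding is acceptable.
"""
import subprocess
import sys


def staged_python_files() -> list[str]:
    # Ask git for files added or modified in the index.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=AM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [p for p in out.split() if p.endswith(".py")]


def scan_is_clean(paths: list[str]) -> bool:
    # bandit exits non-zero when it reports issues, so a security
    # finding can be treated exactly like a lint failure.
    return subprocess.run(["bandit", "-q", *paths]).returncode == 0


if __name__ == "__main__":
    files = staged_python_files()
    if files and not scan_is_clean(files):
        print("Security findings detected; fix them before committing.")
        sys.exit(1)
```

Saved as `.git/hooks/pre-commit` and made executable, something like this moves the cost of catching a whole class of bugs to the moment the code is written, rather than after it ships.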