I was in college at the time and doing odd freelance jobs to make some money. Unbeknownst to my clients, I was writing their website backends in Swift, using buildpacks on Heroku to get them hosted.
It was a fun time for me, but I will admit that last year I went ahead and rewrote one of those sites entirely in good ol' TypeScript. I love Swift, but anything outside the Apple ecosystem with it just seems like it hasn't hit critical mass yet.
Even today, with the fancy Swift 6.3, the experience of using Swift for anything other than apps for Apple platforms is very painful. There is also the question of trust - I don't think anyone would voluntarily introduce Apple "The Gatekeeper" in parts of their stack unless they're forced to do it.
Even Apple does not use Swift on the server (AFAIK) so why would you?
https://www.swift.org/blog/swift-at-apple-migrating-the-pass...
You could have easily fact-checked before forming an opinion, but the buffoon down there agreeing with you is even worse.
Exactly true - they've created all these "working groups" of open-source volunteers to care for Android / Server / Wasm / ... all while being constrained "as an Apple product". Of course the end result is crappy
I wrote an eBook on Swift several years ago but rarely update it anymore. Count me as one of the many developers who for a while thought Swift would take over the world. At least Swift is a fun language to use, and now with LLM coding tools, writing macOS/iOS/iPadOS apps is fairly easy.
In comparison, e.g. Scala 2 -> Scala 3 was an absolute nightmare—it just didn't have the same vocal wailing from maintainers in the community (or, I suppose, a fraction of Python's popularity to begin with).
I can't speak for node.js specifically but who gives a shit
> Long-term source code compatibility is a very useful feature for open source
Sure, until you need affordable maintainers. Maintainability must be balanced against tolerance for bad software. Cf. the insane maintenance cost of Perl scripts
Swift just wasn't doing the same things. And even if it did, Swift would compete with other languages that were understood as "a better Python", like Julia. Even then, Swift only came to Linux in 2016, Windows in 2020, and FreeBSD less than a year ago with WWDC 2025.
I think it doesn't help that the mid 2010s saw a burst of Cool and New languages announced or go mainstream. Go, Julia, Rust, TypeScript, Solidity, etc. along with Swift. I think most of us only have space to pick up one or two of these cool-and-new languages every few years.
In 2015-2017 you could interop with C; C++ support wasn't added until very recently.
I do agree with you though and I am not sure what the exact reasoning is, but Swift is definitely an Apple ecosystem language despite the random efforts to gain traction elsewhere.
Why could it?
> it was simple enough - very fast - could plug into the C/C++ ecosystem. Hence all the numeric stuff people were doing in Python powered by C++ libraries could've been done with Swift.
Half a dozen languages fit this description.
> the server ecosystem was starting to come to life, even supported by IBM.
No, not at all. Kitura and Vapor (a fitting name) were just toys that no serious player ever touched.
But I don't know why I'd pick Swift on the server when Rust is better in almost every dimension, with a thriving and more community-driven ecosystem.
That's the problem.
I assume the server side usage is not zero, but not enough to reach a critical mass, you're probably right there.
Swift for TensorFlow was a cool idea in that time …
NVIDIA, AMD, and Intel have now doubled down on giving Python (via GPU JITs) and Julia the same capabilities as their CUDA, ROCm, and SYCL offerings for C++.
With Julia and Python having their 1.0 long behind them.
https://youtu.be/ovYbgbrQ-v8?si=tAko6n88PmpWrzvO&t=1400
--- start quote ---
Swift has turned into a gigantic super complicated bag of special cases, special syntax, special stuff...
We had a ton of users, it had a ton of internal technical debt... the whole team was behind, and instead of fixing the core, what the team did is they started adding all these special cases.
--- end quote ---
And then if that's the case, how were they not ready to solve the many problems that a big organization would run into? And all the schedule constraints that come with it?
That's true, but only partly true. It already was a gigantic super complicated bag of special cases right from the start.
Rob Rix noted the following 10 years ago:
Swift is a crescendo of special cases stopping just short of the general; the result is complexity in the semantics, complexity in the behaviour (i.e. bugs), and complexity in use (i.e. workarounds).
https://www.quora.com/Which-features-overcomplicate-Swift-Wh...
Me, 2014:
Apple's new Swift language has taken a page from the C++ and Java playbooks and made initialization a special case. Well, lots of special cases actually. The Swift book has 30 pages on initialization, and they aren't just illustration and explanation, they are dense with rules and special cases
https://blog.metaobject.com/2014/06/remove-features-for-grea...
Of course, that doesn't mean that it didn't get worse. It got a lot worse. For example (me again, 2020):
I was really surprised to learn that Swift recently adopted Smalltalk keyword syntax ... Of course, Swift wouldn't be Swift if this weren't a special case of a special case, specifically the case of multiple trailing closures, which is a special case of trailing closures, which are weird and special-casey enough by themselves.
https://blog.metaobject.com/2020/06/the-curious-case-of-swif...
Oh, and Function Builders (2020, also me):
A prediction I made was that these rules, despite or more likely because of their complexity, would not be sufficient. And that turned out to be correct: as predicted, people turned to workarounds, just like they did with C++ and Java constructors.
https://blog.metaobject.com/2020/04/swift-initialization-swi...
So it is true that it is now bad and that it has gotten worse. It's just not the case that it was ever simple to start with. And the further explosion of complexity was not some accidental thing that happened to what was otherwise a good beginning. That very explosion was already pretty much predetermined in the language as it existed from inception and in the values that were visible.
From my exchange with Chris regarding initializers:
"Chris Lattner said...
Marcel, I totally agree with your simplicity goal, but this isn't practical unless you are willing to sacrifice non-default initializable types (e.g. non-nullable pointers) or memory safety."
Part of my response:
"Let me turn it around: Chris, I totally agree with your goal of initializable types, but it is just not practical unless you are willing to sacrifice simplicity, parsimony and power (and ignore the fact that it doesn't actually work)."
Simplicity is not the easy option. Simplicity is hard. Swift took the easy route.
[...] when you first attack a problem it seems really simple because you don't understand it. Then when you start to really understand it, you come up with these very complicated solutions because it's really hairy. Most people stop there. But a few people keep burning the midnight oil and finally understand the underlying principles of the problem and come up with an elegantly simple solution for it. But very few people go the distance to get there.
-- Steve Jobs (borrowed and adapted from Heinlein)
https://blog.metaobject.com/2014/04/sophisticated-simplicity...
Just IMO, but... no. To me a "could have easily" requires n-1 things to have happened, and 1 thing not happening. Like, we "could have easily" had a nuclear exchange with the USSR, were it not for the ONE Russian guy who decided to wait for more evidence. https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alar...
But even in '15-'17, there were too many people doing too many things with Python by then (the big shift to data orientation started in the mid/late 90s, which paved the way to ML and massive Python usage).
The 'n' was large, and not nearly all of the 'n' things were in Swift's favor then.
Again, IMO.
It is also there in Ada, C#, Java, Python, Common Lisp,....
Even if the languages started tiny, complexity eventually grows on them.
C23 + compiler extensions is quite far from where K&R C was.
Scheme R7 is quite far from where Scheme started.
Go's warts are directly related to ignoring history of growing pains from other ecosystems.
And then of course the case that proves the opposite: Clojure. Sure, new ideas appear, but the core language is more or less unchanged since it was introduced, rock solid, and decades-old projects still run just fine, although usually a bit faster.
Also its market share adoption kind of shows it.
First of all: Clojure is not "done". The latest commits were 3 months ago - https://github.com/clojure/clojure. Secondly, the language is intentionally not a batteries-included PL. The core is meant to be a stable, minimal substrate. Most of the action happens in libraries and tools - core.async, spec & malli, babashka, nbb, etc. Check the activity in the Clojurians Slack. It's a small but unusually vibrant community; every single day there are news, announcements, updates, etc. It is done-ness in the good sense - like a well-designed tool that doesn't need to keep changing its handle.
> market share adoption kind of shows it
NuBank being the world's largest digital bank and running Clojure at scale is not "adoption"? Besides, there's Apple, Cisco, and tons of smaller companies running on it.
> there is hardly anything being done
They are making a documentary https://www.youtube.com/watch?v=JJEyffSdBsk Please don't say: "well, there are documentaries about dinosaurs" or something. I've been using Clojure for over ten years - in different teams, companies, industries. For my own projects and professionally. I've heard about it "dying" back then. I keep hearing about it dying every year, and I promise you, nothing like that is (even remotely) happening. Yes, the hype is gone (was it ever real?), but the language, community, library ecosystem, tooling - all of that is only getting better.
There's no "killing" of Lisp. As long as programming languages remain relevant, there will always be some Lisp-dialect around. It probably never will become mainstream, yet it never completely disappears. There's no killing of Lisp, because it would be like killing "graph theory" or something. Graph theory doesn't need a Fortune 500 company funding it to remain true. Similarly, a small community keeping a Lisp dialect alive is all it takes - and there will always be people drawn to the clarity you get when you strip a language down to its lambda-calculus bones and see the whole thing fit in your head at once.
Rich Hickey has made this point himself - Clojure isn't trying to be the most popular language, it's trying to be correct about certain things. And correctness doesn't go out of fashion.
The latter is a bit tautological, since the size of the language grammar is itself a measure of complexity.
The complexity would be to grow like Common Lisp; instead, it is up to Clojure folks to write the Java, C#, or JavaScript code, and therein lies the complexity.
If (or when? I haven't checked recently) a decent and well-thought-out LLVM backend emerges for it, ideally without new underlying complexity seeping through, the market share might expand overnight.
And as for C++, while some complexity is certainly unavoidable, rigorous complexity control is desperately needed. Ideally the same way the Bell Labs folks did when they initially conceived Go from Algol 68, C, and similar (before or after joining Google; I couldn't tell), and the way Rich Hickey did when he initially designed Clojure.

Some people manage the complexity using style guides and clang-tidy checks, which is great in that it doesn't need lengthy language-committee decisions. But that approach hasn't been enough to make code _sufficiently_ safe; every now and then an enterprising engineer or team finds a way to abuse a feature in a way that produces unsafe or unpredictable results. Rust is a bit better and solves a few of the common problems, but sadly the list of potential issues (of using Rust in a codebase at scale; the engineers' faults, not Rust's) is long and growing.

My verdict is that we need both complex and simple LLVM languages, ideally co-designed to have no interop problems by design, allowing some logic to be expressed in the simple parts and some in the complex parts. Or better, a 3-tier design would be nearly perfect: an expressive config language, a glue-and-research language, and a core-building-blocks language. I think a Clojure-style language can be designed to achieve all three.
I think the way of classical programming languages is behind us, unless AI implodes and we are back to programming without it.
Yes, that's one approach to avoiding ever growing complexity, maybe the other languages should try it sometime ;)
With that said, everything around Clojure keeps improving and getting better. While the language doesn't have static types, clojure.spec offers something that is even better than static typing (imo), and doesn't even require any changes to the core language. Something else other mainstream languages could learn too.
In theory we only need parentheses, prefix operators and a REPL, but mainstream never went down that route.
Anyway the complexity then ends up being custom DSLs and macros.
Swift was feeling pretty exciting around ~v3. It was small and easy to learn, felt modern, and had solid interop with ObjC/C++.
...but then absolutely exploded in complexity. New features and syntax thrown in make it feel like C++. 10 ways of doing the same thing. I wish they'd kept the language simple and lean, and wrapped additional complexity as optional packages. It just feels like such a small amount of what the Swift language does actually needs to be part of the language.
I've been using C# since the first release, in the 2003/4 timeframe.
Aside from a few high-profile language features like LINQ, generics, and `async/await`, the syntax has grown, but the key additions have made the language simpler to use and more terse. Tuples and destructuring, for example. Spread operators for collections. Switch expressions and pattern matching. These are mostly syntactic affordances.
You don't have to use any of them; you can write C# exactly as you wrote it in 2003...if you want to. But I'm not sure why one would forgo the improved terseness of modern C#.
Next big language addition will be discriminated unions and even that is really "opt-in" if you want to use it.
I was excited for DU until I saw the most recent implementation reveal.
https://github.com/dotnet/csharplang/blob/main/proposals/uni...
Compared to the beauty of Swift:
https://docs.swift.org/swift-book/documentation/the-swift-pr...
Very useful for reducing boilerplate and we can do some interesting things with it. One use case: we generate strongly typed "LLM command" classes from prompt strings.
Now, having someone dive into incremental code generators today, with the best practices for not slowing down Visual Studio during editing, that is a different matter.
They are naturally useful as a user; as a provider, Microsoft could certainly improve the experience.
I would remove the distinction between value types and reference types at the type level. This has caused so many bugs in my code. This distinction should be made where the types are used not where they are defined.
I would remove everything related to concurrency from the language itself. The idea to let code execute on random threads without any explicit hint at the call site is ridiculous. It's far too complicated and error prone, which is why Swift designers had to radically change the defaults between Swift 6.0 and 6.2 and it's still a mess.
I would remove properties that are really functions (and of course property wrappers). I want to see at the call site whether I'm calling a function or accessing a variable.
I would probably remove async/await as well, but this is a broader debate beyond Swift.
And yes you absolutely do have to know and use all features that a language has, especially if it's a corporate language where features are introduced in order to support platform APIs.
But a lot of what you said, except for the concurrency and property-wrapper stuff, largely exists for Obj-C interop. The generated interface is more readable, and Swift structs act like const C structs. It's nice.
* If you're in a team (or reading code in a third-party repo) then you need to know whatever features are used in that code, even if they're not in "your" subset of the language.
* Different codebases using different subsets of the language can feel quite different, which is annoying even if you know all the features used in them.
* Even if you're writing code entirely on your own, you still end up needing to learn about more language features than you need to for your code in order that you can make an informed decision about what goes in "your" subset.
To answer your question: I would immediately get rid of guard.
Also, I think the complexity and interplay of structs, classes, enums, protocols and now actors is staggering.
internal should definitely go though.
    // Swift
    guard let foo = maybeFoo else {
        print("missing foo")
        return false
    }

    // Kotlin
    val foo = maybeFoo ?: run {
        print("missing foo")
        return false
    }
Unless there's a use case for guard I'm not thinking of.

Focusing on the keywords rather than the macros, I think the rest of them have legitimate use cases, though they're often misused, especially fileprivate.
2. On top of that many of the features in the language exist not because they were carefully designed, but because they were rushed: https://news.ycombinator.com/item?id=47529006
I'm not 100% sure, but I think the Swift doc you linked is missing at least a dozen keywords, so the truth probably lies in the middle.
NumPy, SciPy, pandas, and PyTorch are what drove the mass adoption of Python over the last few years. No language feature could touch those libraries. I now know how the C++/Java people felt when JS started taking over. It's a nightmare to watch a joke language (literally; Python being named for Monty Python) become the default simply because of platform limitations.
Since 5.10 it's been worth picking back up if you're on macOS.
No way something that compiles as slowly as Swift dethrones Python.
Edit: Plus Swift goes directly against the Zen of Python
> Explicit is better than implicit.
> Namespaces are one honking great idea -- let's do more of those!
coupled with shitty LSP support (even to this day) makes code even harder to understand than when you `import *` in Python.
Edit 2: To expand a little on how shitty the LSP support is, for those who don't work with Swift: any trivial iOS or macOS project that builds fine in Xcode can have a bunch of SourceKit-LSP (the official Swift LSP) errors because it fails to resolve frameworks/libraries. The only sane way to work with Swift in VS Code or derivatives I've found is to turn off SourceKit diagnostics altogether and only keep swiftc diagnostics. Even with the swift-lsp plugin in Claude Code, there's a routine baseline of SourceKit errors being ignored. So you have symbols without explicit namespaces, and the LSP simply can't resolve lots of them, so no lookup for you. Good luck.
This must have pushed Chris Lattner towards making Mojo both interpreted and compiled at the same time.
That's funny. To me magic is implicit by definition and Python strikes me as a very magical language compared to something like Java that is way more explicit.
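A tiny sketch of the kind of implicit behavior Python allows (the `Lazy` class here is made up purely for illustration): `__getattr__` runs only when normal attribute lookup fails, so attribute access can be intercepted with nothing visible at the call site.

```python
class Lazy:
    # Called only when ordinary attribute lookup fails;
    # the caller can't tell this attribute never existed.
    def __getattr__(self, name):
        return f"conjured {name}"

obj = Lazy()
print(obj.anything)  # prints "conjured anything", though 'anything' was never defined
```

Java has no equivalent that's invisible at the use site, which is roughly the "way more explicit" point above.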
The Zen of Python is how we got crap like argparse where arguments are placed in the namespace instead of a dict.
If you had some feature flag args, you'd keep accessing them via the dict? Highly unlikely...
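For what it's worth, the Namespace-vs-dict gap in argparse is one `vars()` call wide; a minimal sketch (flag names here are arbitrary examples):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
parser.add_argument("--name", default="world")

args = parser.parse_args(["--verbose"])

# attribute access on the Namespace object argparse hands back
assert args.verbose is True

# vars() exposes the same parsed arguments as a plain dict
opts = vars(args)
assert opts == {"verbose": True, "name": "world"}
```

So the Namespace default mostly matters for ergonomics, not capability.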
I do, though, think Swift had/has(?) a chance to dethrone Rust in the non-garbage collected space. Rust is incredibly powerful but sometimes you don't really need that complexity, you just need something that can compile cross-platform and maintain great performance. Before now I've written Rust projects that heavily use Rc<> just so I don't have to spend forever thinking about lifetimes, when I do that I think "I wish I could just use Swift for this" sometimes.
You're right, though, that Swift remains Apple's language and they don't have a lot of interest in non-Apple uses of it (e.g. Swift SDK for Android was only released late last year). They're much happier to bend the language in weird ways to create things like SwiftUI.
I think Go has already taken that part of the cake.