When someone argues that an opinion is subjective (in a negative sense), they need to show that the opinion does not rely on facts but is instead based on... nothing (feelings).
I offered a very easy way to numerically assess the negative impact of poor language design choices made by Haskell's designers. It's not about what I "feel" about the language: in Java, you write a three-word program, and you usually get a unique interpretation. In Haskell, you write a three-word program, and you get 9 (nine) possible interpretations. It's impossible for a human to examine nine interpretations simultaneously and figure out which of them are valid and might fit the context. So, reading a Haskell program takes longer and requires more effort than reading a Java program.
Of course, Haskell programmers find ways to adapt to their misfortune. They try to avoid pathological cases (e.g. writing four-word programs, let alone five!), they memorize a lot of acronyms and non-typographical symbols that they later use to prune the search for a possible meaning of the program. They invent conventions on top of the bare language design that constrain the search space of possible programs to make their task easier.
It's absolutely possible that after layers of conventions and a long time spent memorizing various acronyms and symbols, Haskell programmers catch up to the speed of programmers in other languages: after all, the superficial difficulties with the language might seem like a small price to pay for access to the language's riches that lie beyond the surface. The language's grammar rules cannot account for the entirety of the performance of the programmers who choose to write in the language.
This situation is very similar to the "universal" (claimed, but not in practice) mathematical language, which is extremely difficult to read, write, edit, typeset... yet the tradition of using it prevails and the overwhelming majority of mathematicians use, and prefer using the "universal" mathematical language even though much saner alternatives exist.
I see OP's point. Haskell feels (or felt, I admit I haven't been keeping up the last 15 years) needlessly obtuse sometimes, like how people love to invent new infix operators all the time.
I couldn't disagree more. Yes, there is more upfront work in understanding Haskell code. But it's very dense. Once you understand the patterns, you can read it much more quickly. Just like map/filter/fold are harder to understand than a for-loop, but once you do understand them, you can immediately see what kind of iteration is applied. The for-loop can do all kinds of crazy index manipulation that you always have to digest from scratch.
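To make the point concrete, here's a minimal sketch (function and names are made up for illustration): the combinator version names the iteration pattern up front, so the reader sees "select, transform, accumulate" at a glance.

```haskell
import Data.List (foldl')

-- Hypothetical example: double the even numbers, then sum them.
-- filter names the selection, map the transformation, foldl' the
-- accumulation; no index bookkeeping to re-derive from scratch.
sumDoubledEvens :: [Int] -> Int
sumDoubledEvens xs = foldl' (+) 0 (map (* 2) (filter even xs))
```

For example, `sumDoubledEvens [1..5]` selects [2,4], doubles them to [4,8], and sums to 12.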
> And then it's also spiced by the most bizarre indentation rules invented by men.
Again, quite surprised by this criticism. The rule is extremely simple: inner expressions must be indented more. You're free to decide by how much. That's why there are many "styles" out there. Maybe that's what you mean by bizarre. But it's not like the language is forcing weird constraints on you. If anything, the constraints are too lax. Any other language with non-mandatory indentation allows that as well. In general, I really don't understand why more languages don't do mandatory indentation. You only need curly braces and semicolons if you want the option to write a whole if/else/while/... statement on one line. But nobody does that.
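A small sketch of that rule (the function names are just illustrative): inner expressions only have to sit further right than the construct they belong to; the exact amount is a style choice.

```haskell
-- Both definitions are legal: the case alternatives merely have to
-- be indented more than the start of the enclosing declaration.
narrow :: Int -> Int
narrow x =
 case x of
  0 -> 1
  _ -> x * 2

wide :: Int -> Int
wide x =
        case x of
                0 -> 1
                _ -> x * 2
```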
Not to support the parent comment, which I disagree with, but if you use multi-line let-bindings, those require that you indent not just more than the previous line, but exactly as much as the first token after the let keyword on the previous line. It's a very strange rule, all the more surprising because it's inconsistent even with the rest of the language. It is totally avoidable if, like I think most experienced Haskellers do, you just prefer 'where', but people more familiar with procedural code usually lean into using 'let' everywhere because it feels more familiar.
I think the strange indentation used to be required in more places - I vaguely remember running into it a lot more when I started with Haskell 20 years ago, but that was also just when I was new to the language. These days I just keep ‘let’ to a bare minimum, so it doesn’t bother me. One thing that made Elm frustrating was that it disallowed ‘where’ clauses, forcing you to deal with this weird edge case all the time.
    let
        f = 9
        fo = 10
        foo = 123
    in f+fo+foo

vs.

    let
            f = 9
            fo = 10
            foo = 123
    in f+fo+foo

(both are accepted, since with `let` on its own line you're free to pick the indentation), but with the first binding on the same line as `let` you have to write:

    someValue = let f = 9
                    fo = 10
                    foo = 123
                in f+fo+foo

rather than:

    someValue = let f = 9
        fo = 10
        foo = 123
        in f+fo+foo
I think it used to be the case that it had to be indented past the `=` or the `let` even if it wasn't on the same line. Note also that `in` has to be indented past `someValue`, but doesn't need to be indented as far as `let`. This is fine:
    someValue = let
            f = 9
            fo = 10
            foo = 123
        in f+fo+foo
So, it is possible to land on sane indentation, but the parser is much pickier than, e.g., Python's offside rule, so it takes some trial and error for new users to find it, and it can be frustrating if you're just temporarily modifying an expression to quickly try something out. I honestly think it would be less surprising if the parser just disallowed writing the first binding on the same line as the `let` entirely, treating it only as a block, but some people (bewilderingly) do seem to prefer to write their code with the excessive indentation (I'd imagine with editor support, rather than manually maintaining the spacing).
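For completeness, the "just prefer 'where'" style mentioned above sidesteps the alignment question entirely; a minimal sketch:

```haskell
-- The bindings hang off the definition in a where clause, so there
-- is no let/in alignment to get wrong.
someValue :: Int
someValue = f + fo + foo
  where
    f = 9
    fo = 10
    foo = 123
```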
[proceeds to agree on all points]
Not even sure what to tell you... Have more introspection?
Syntax highlighting? Please take a look at https://play.haskell.org/
I am completely baffled by this comment. Are you missing the parenthesized function calls by any chance? If so then I can relate a bit.
For background: my first time in college, I was studying typography. An integral part of this trade is figuring out what is easier for people to read by answering questions such as: what is the best line length, what number of columns per page is best, what number of ascender elements per typeface is best, considering letter frequencies and coincidences, and so on.
It also comes with the editing part, as in the trade of taking a manuscript (a text intended to be published) and making sure that the text meets certain reader expectations in terms of consistency, clarity, and structure. This, obviously, includes the use of punctuation, but it's more about the language structure: things like adjective order or anaphora usage, etc.
Programming languages can be judged by the same rules because, at the end of the day, we read them and need to interpret them. People have particular strengths and weaknesses when it comes to reading: we can remember an anaphora's anchor for only so long, we can hold only so many "variables" in fast-to-access memory, we can only do so many levels of adverb-phrase nesting, and so on.
Haskell was designed by someone completely oblivious to human reading abilities. It's very demanding and straining when it comes to extracting structure from text, in the same way that, in English, you'd struggle to extract structure from so-called "garden path" sentences, which are intentionally obfuscated. I don't believe Haskell is intentionally obfuscated; instead, I attribute the poor performance to a lack of awareness on the part of the author.
To convey the same point by means of example: Haskell is almost uniquely bad in that given a program
    A B C

the programmer can't tell if the program is actually A(B, C), or B(A, C), or C(A, B), or A(B(C)), or A(C(B)), or (A(B))(C), or (B(C))(A), or (B(A))(C), or (C(B))(A). There's absolutely no reason a language should offer these kinds of puzzles, especially in such a large quantity as Haskell does. Removing this "feature" would make the language a lot easier to work with.
0: OK, there are some additional non-ASCII Unicode symbols, but everything but string literals should be kept ASCII IMO.
What do you mean, "can't tell"? If I see this in Python

    (A)(B)(C)

how do I know which of your 9 it means? Well, I'm a Python programmer, so I know that it means

    A(B)(C)

which is the function A applied to B, which returns a function that gets applied to C. If you're a Haskell programmer, you know that it means the same thing. I grant you that it is odd to those who are unfamiliar, and it took me quite a while to get used to it, but it's much better to write that way in Haskell when writing programs that use higher-order functions.
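To spell that out with a minimal sketch (made-up names): Haskell's grammar fixes exactly one reading, because function application is left-associative.

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- `add 1 2` parses as `(add 1) 2`: there is exactly one reading,
-- not nine.
threeWords :: Int
threeWords = add 1 2

-- The intermediate `add 1` is itself a function (partial application),
-- which is why the left-associative reading is the useful one.
increment :: Int -> Int
increment = add 1
```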
But that is a choice. Because of this, I prefer not to use complex function compositions and lenses, and instead split complex expressions into a bunch of let bindings, etc. So you can also write very readable code in Haskell.
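A small illustration of that style (a made-up example): naming the intermediate pieces instead of composing everything inline.

```haskell
-- Instead of a one-liner pipeline, name each step with a binding.
-- Assumes a nonzero sum, for illustration only.
normalize :: [Double] -> [Double]
normalize xs =
  let total = sum xs
      scale x = x / total
  in map scale xs
```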
syntax-case is the general purpose construct to use. syntax-rules is a restricted, easy-things-should-be-easy construct.
It is pretty awful to write things like that.
* Prolog (and, by extension, Erlang).
* Pascal.
* Java 5 and earlier (and Go, as it's almost a Java's twin).
These languages somehow manage to hit the sweet spot of enough regularity and enough diversity, with few unexpected syntax constructs (e.g. Pascal and Java have the "dangling else" problem, but it's manageable compared to the problems introduced by optional statement delimiters in Go or JavaScript, for example). In every case, a programmer must program defensively against these sorts of language "pathologies".
To give some examples of questionable or outright bad design decisions:
* In Common Lisp (and Scheme as well as a number of similar languages) there's a problem with identifying the open parenthesis that will be closed by typing the closing parenthesis. Programmers must invent tools and techniques to manage this problem.
* In C++, there's a laughable (or at least there was, for a long time) rookie "whoopsie" when it comes to ">>" in nested templates being read as the right-shift infix operator. And the "solution" offered by the language designers makes you think they were just... lazy (add a space).
Here are also examples of some (perhaps, accidentally) good decisions:
* Kebab-case in many languages of the Lisp family. In Latin script, the position of the hyphen at the middle of the lower-case letter height is a better choice than, e.g., the underscore (which is touted to be "not a typographic character"). Same reason why, e.g., in traditional Hebrew, hyphens are at the height of a capital letter (Hebrew doesn't have lower-case letters, and the shape of the letters is better suited for hyphens at the top rather than the middle).
* Clojure as well as Racket (afaik, deliberately) introduced more kinds of parenthesis-like delimiters to make it easier to guess which expression is being terminated by the currently typed delimiter.
* * *
Note that this is a "superficial" metric, because languages are also valuable for concepts they are able to express both in terms of program logic as well as program application to the hardware it manages; the ability to process, modify, generate, analyze the language automatically; the ability to constrain the language to a desired subset of all available operations... Incorporating all of these into a single metric seems like mission impossible :)
Are you mixing tabs and spaces? Maybe an example here would help.
>overloading of literals (heaven, why???)
No, this is important, so that default strings don't have to be something crummy. Even C++ got on this bandwagon.
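Concretely (a minimal sketch, with a toy type standing in for Text/ByteString), an overloaded string literal elaborates to `fromString` at whatever `IsString` type the context expects:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.String (IsString (..))

-- Toy wrapper type; in practice this would be Text or ByteString.
newtype Name = Name String deriving (Eq, Show)

instance IsString Name where
  fromString = Name

-- The literal below is really `fromString "Ada"`, so it can be a
-- Name (or Text, ByteString, ...) instead of the linked-list String.
ada :: Name
ada = "Ada"
```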
>and requiring parenthesis around function arguments both for definition and for application.
??? Again, an example would be helpful. Usually the complaint with Haskell is that people don't use enough parenthesis.
>The execution model is great
...I thought lazy execution was widely agreed to be the worst part of Haskell.
This is not what "rules" means. Rules aren't about what I do. Rules are about what the language treats as legal or illegal. I don't write in Haskell at all because I don't like it and have no use for it, but Haskell's rules don't change because of that: they are still mind-bogglingly complex when it comes to telling the programmer whether the next line is indented the right amount or not. None of that complexity is necessary; it could've been totally avoided if the language used statement delimiters.
> No, this is important, so that default strings don't to have to be something crummy.
My argument is that to get a little accidental convenience you sacrificed a huge amount of routine convenience. The mental load of having to distrust a string when you see it is just not worth the accidental convenience of writing a prepared statement and making it appear as if it was a string. In other words, you are the guy who traded a donkey for three beans, but the beans didn't sprout into a huge ladder that took you to the giant's castle. You just made a very watery soup and that was that.
> Again, an example would be helpful.
Look up the example I gave in the adjacent reply.
> I thought lazy execution was widely agreed to be the worst part of Haskell.
It's good because it's unique and, when it fits the purpose, it's useful for that particular purpose and nigh irreplaceable, because it is unique. It's worth having for the sake of research, to understand how languages can be designed and what tools or techniques can be discovered on this path. This is said from the perspective that Haskell is not the end product, but rather research attempting to study how languages can work and what concepts they can develop.
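One classic illustration of laziness fitting the purpose: define an infinite structure and pay only for the part you consume.

```haskell
-- An infinite list; laziness means nothing is computed until demanded.
evens :: [Int]
evens = filter even [1 ..]

-- Forcing only the prefix we need terminates immediately.
firstFive :: [Int]
firstFive = take 5 evens
```

Here `firstFive` evaluates to [2,4,6,8,10] even though `evens` is conceptually infinite.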
I mean, it does. White-space sensitive syntax is entirely opt-in when you chose to omit delimiters. Here's an explicit delimiter example:
    let {
      a = 1;
      b = 2
    } in a + b