Writing a program and proving a theorem are the same act. (Curry–Howard–Lambek.) In a well-behaved language, every program is a proof of something and every proof is a program. The match is exact for simply typed languages and leaks a bit once you add general recursion (an infinite loop “proves” anything in Haskell), but the underlying identity is real. Lambek added the third leg: these are also morphisms in a category. [1]
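To make the types-as-propositions reading concrete, here is a minimal Haskell sketch (my own toy examples, not from the comment): each type is read as a proposition, and any term inhabiting it counts as a proof.

    -- "A and B implies B and A": the proof is literally the swap program.
    swapProof :: (a, b) -> (b, a)
    swapProof (x, y) = (y, x)

    -- Transitivity of implication is just function composition.
    transProof :: (a -> b) -> (b -> c) -> (a -> c)
    transProof f g = g . f

    -- The leak mentioned above: general recursion inhabits every type,
    -- so as a logic, Haskell "proves" anything. This typechecks at any a.
    bottom :: a
    bottom = bottom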
Algebra and geometry are one thing wearing different costumes. (Stone duality and cousins.) A system of equations and the shape it cuts out aren’t merely related; they’re the same object seen from opposite sides. Grothendieck rebuilt algebraic geometry on this idea, with schemes (so you can do geometry on the integers themselves) and étale cohomology (topological invariants for shapes with no actual topology). His student Deligne used that machinery to settle the Weil conjectures in 1974. Wiles’s Fermat proof lives in the same world, though it leans on much more than the categorical foundations. [2]
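A toy instance of that equations-equal-shapes identity (my illustration, standard textbook material, not from the comment): the complex unit circle and its coordinate ring determine each other.

    % Geometry: the solution set.  Algebra: the ring of polynomial
    % functions on it.
    \[
      V = \{ (x, y) \in \mathbb{C}^2 : x^2 + y^2 = 1 \}
      \quad\longleftrightarrow\quad
      A = \mathbb{C}[x, y] / (x^2 + y^2 - 1)
    \]
    % By Hilbert's Nullstellensatz, points of V are exactly the maximal
    % ideals of A, and maps of varieties correspond (contravariantly) to
    % ring homomorphisms. This is the duality Grothendieck generalized.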
[0] https://en.wikipedia.org/wiki/Yoneda_lemma
[1] https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
We should strive to name all things by their function, not by their inventor or discoverer, IMO. But people like their ribbons.
Anyways, the discussion begins with these people, who all use the name to reference the paper that contains the result. As the discussion expands, it remains centered on this group, and you have to talk _with_ them and not at them, so you use the name they do. This usage slowly spreads, until eventually it gets written into a textbook, taught to grad students, then to undergrads, and it becomes hopeless to change the name.
I share the frustration with naming; we could come up with much better names for things now. But until we give stipend bonuses for good naming, the experts will never care to do so. Still, I wholeheartedly disagree that the problem as a whole can be reduced to "people like their ribbons". Naming something after yourself is so gauche it would not be tolerated in my field, at least. The other professors would create a better name simply out of spite for your greed.
https://math.stackexchange.com/questions/823289/abstract-non...
Sometimes a proof in category theory is trivial, yet we have no lower-dimensional or concrete intuition for why it's true. This whole state of affairs is called abstract nonsense.
Nobody seems to care or notice. I'm watching in disbelief as nobody points out that the article is full of inaccuracies. See my sibling thread for a (very) incomplete list, which should disqualify this as serious reading: https://news.ycombinator.com/item?id=47814213
My conclusion can only be that this is useless for the general practitioner, since even wrong mathematics gets appreciated the same as correct mathematics.
I don't know. I finished my graduate studies in math a few years ago, and pretty much every textbook by well-known mathematicians was packed with errors. I just stopped caring so much about inaccuracies. Every math book is going to have them. Human beings are imperfect, and great mathematicians are no exception. I'd just download the errata from the uni website and keep it open while reading.
imo, this is a problem with how it's taught! Order theory is super useful in programming. The main challenge, beyond breaking past that barrier of perceived "pointlessness," is getting away from the totally ordered / "Comparator" view of the world. Preorders are powerful.
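Two toy Haskell examples (mine, not the commenter's) of useful order relations that escape the Comparator view:

    -- Compare lists by length: reflexive and transitive, but not
    -- antisymmetric ([1,2] and [3,4] are each <= the other while being
    -- different lists), so it's a genuine preorder, not a partial order.
    leqLen :: [a] -> [a] -> Bool
    leqLen xs ys = length xs <= length ys

    -- Divisibility on positive integers: reflexive and transitive, but
    -- not total (4 and 6 are incomparable), so no Comparator captures it.
    divides :: Int -> Int -> Bool
    divides a b = b `mod` a == 0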
Thinking in preorders also gives us a different way to think about what correct means when we test. For example, state machine transitions can sometimes be viewed as a preorder, and if you can squeeze a system into that shape, complicated tests reduce to asserting that <= holds. It usually takes a lot of thinking, because it IS far from the daily routine, but by the same rationale, forcing it into your daily routine makes it familiar. It lets you look at tests and go "oh, I bet that condition expression can be modeled as a preorder on [blah]"
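A minimal sketch of that testing idea, under assumptions of my own (a hypothetical linear order-fulfilment machine; `step` stands in for whatever transition logic you are actually testing):

    -- States of a hypothetical order-fulfilment machine, declared in
    -- pipeline order so that fromEnum gives each state's rank.
    data OrderState = Created | Paid | Shipped | Delivered
      deriving (Eq, Show, Enum, Bounded)

    -- The preorder: s `leq` t when t is at or beyond s in the pipeline.
    leq :: OrderState -> OrderState -> Bool
    leq s t = fromEnum s <= fromEnum t

    -- The transition logic under test (stand-in for real business code).
    step :: OrderState -> OrderState
    step Created   = Paid
    step Paid      = Shipped
    step Shipped   = Delivered
    step Delivered = Delivered

    -- The whole complicated test collapses to a single assertion:
    -- stepping never moves backwards in the preorder.
    prop_monotone :: Bool
    prop_monotone = all (\s -> s `leq` step s) [minBound .. maxBound]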