(kevinlynagh.com)
Day 400: Having thoroughly described a universal theory of everything, we set out to build an experimental apparatus in orbit at a Lagrange point capable of detecting a universal particle which acts as a mediator for all observable forces in the known universe.
Like, if you stay focused, is it even really a side project?
Which is why my 2d top-down sprite-based rpg now has a 3d procedural animation engine, a procedural 3d character generator with automagic rigging, a population simulator that would put Europa Universalis to shame if I ever get around to finishing it (ha!), a pixel art editor, a 2d procedural animation engine using active ragdolls...
You might wonder why a 2d game needs 3d procedural animation, well...
The scope creeps in mysterious ways
To produce better looking assets for the 2D top down world?
That is already something people would call a project.
You could achieve things yourself if you tried!
Any advice on how to mitigate this?
If it helps anything at all: It's normal. At this point, you've already proven you're smart and knowledgeable. Now, the universe wants to see if you can also finish what you've started. That's the main thing a PhD proves: That you can take an incredibly interesting topic and then do all the boring stuff that they need you to do to be formally compliant with arbitrary rules.
Focus on finishing. Reduce the scope as much as possible again. Down to your core message (or 3-4 core messages, I guess, for paper-based dissertations).
Listen to the feedback you get from your advisor.
You got this!
When I did my MSc thesis he told me it was a pretty good PhD. (Before giving me a month's work in corrections.) I didn't understand back then, but I understand now. It was small, replicable, and novel (still is)! Just replicate three times and be done with it. You've proven your mastery. Now start something serious.
My professor once told me he presented at a small conference where everybody in the audience had a PhD in mathematics, and maybe 2 of the 50 or so people could follow along. The point he was trying to make is that at some point the people in the audience were no longer really interested in what was being presented, because it is difficult to just follow along with some really niche topic.
He discussed this topic and how generally it's left to those who are more notable in a field to ask the 'dumb' questions everyone else is afraid to ask. And such questions often need to be asked to get the audience on board and open the floodgates with areas of niche research - the speaker themself is often too far into the rabbit hole to discern the difference between opaque and obvious.
So it stands to reason that at smaller conferences this would be a big problem, with fewer thought leaders in attendance whose reputations are intact enough that they wouldn't mind looking foolish.
in my field this would be terrible advice. instead you need to be doing something that your audience actually will give a shit about.
If there is something interesting enough to qualify, then reduce the scope as much as possible. It should go without saying that you shouldn’t throw out the interesting bit.
But there's some things to remember that are incredibly important
- a paper doesn't *prove* something, it suggests it is *probably* right
- under the conditions of the paper's settings, which aren't yours
- just because someone had X outcome before doesn't mean you won't get Y outcome
- those small details usually dominate success
- sometimes a seemingly throwaway one-liner sentence is what you're missing
- sometimes the authors don't know and the answer is 5 papers back that they've been building on
- DO NOT TREAT PAPERS AS *ABSOLUTE* TRUTH
- no one is *absolutely* right, everyone is *some* degree of wrong
- other researchers are just like you, writing papers just like you
- they also look back at their old papers and say "I'm glad I'm not that bad anymore"
- a paper demonstrating your idea is a positive signal, you're thinking in the right direction
As soon as you start treating papers as "this is fact" you tend to over-generalize the results. But the details dominate, so you just kill your own creativity. You kill your own ideas before you know whether they're right or wrong. More importantly, you don't know how right or how wrong. Combine that with the publish-or-perish paradigm and I think we've got significant coverage. People don't even consider diving deeper into things and are encouraged to take the route of "assume the paper is correct" because that's the fastest way to push out research. But if the foundation is shaky, then everything built on it is shaky too.
That's where the harder, more formal fields like math and physics differ. They have no issues pushing out papers that may have errors in them, because the process is to attack works as hard as possible. Then whatever is left is where you build again. You definitely have people take advantage of this, like Avi Loeb publishing about aliens, but it is realistically a small price to pay. And hey, even Loeb's work still contributes. If at some point it actually is aliens, then there's existing work that can be built upon. And when it continues to not be aliens, there's existing work to build on; really his problem is more that the papers just end up concluding "and this is why we can't rule out aliens!" (-__-)
Anyways, long story short, my advice is to just remember that you, and everybody else, are blubbering idiots, and it is an absolute fucking miracle that a bunch of mostly hairless apes can even communicate, let alone postulate about the cosmos. At the end of the day we're all on the same team, seeking truth. Truth matters more than our egos, and if we start to forget how dumb we are then we'll only hinder our pursuit of truth.
Acknowledge it is normal? Attempt to buy deeper into the delusion ("Yeah, my work is awesome and unique!")? Use stimulants to force enthusiastic days every once in a while?
Uhh... unless you plan to stay in academia? Then, this is a terrible idea.
You’ll almost never see a PhD thesis that has anything particularly interesting, novel or directly applicable to the sciences.
This is definitely the wrong way of going about a research project, and I have rarely seen anyone approach research projects this way. You should read two or at most three papers and build upon them. You only do a deep review of the research literature later in the project, once you have some results and you have started writing them down.
Replicating existing results doesn't meet those criteria, so unknowingly repeating someone's work is an existential crisis for PhD students. It can mean that you worked for 4-6 years on something that the committee then can't or won't grant a doctorate for, effectively forcing you to start over.
Theoretically, your advisor is supposed to help prevent this as well by guiding you in good directions, but not all advisors are created equal.
It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
> It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
Or are they trying to require enough rigor and discipline so that out of 100,000 people who want to be the next Einstein, the process washes out the 99,000 who aren't willing or able to do more than throw out half-baked 'creative' ideas and expect the world to pick them up and run with them.
There's only finite attention and money for funding research, so you gotta do SOMETHING to filter out the larpers who want to take it and faff around.
I think at this point the system has eaten its own tail a bit, but there's good reason to require some level of "show me" before getting given the money to run your own research.
Moreover, I am not suggesting you don't look at other papers at all. But google scholar and some quick skimming of abstracts and papers you find should suffice to check if someone has already done the work. If you start fully reading more than a handful of papers, your ideas are already locked in by what others have done, and it becomes way harder to produce something novel.
"Impediment to action advances action. What stands in the way becomes the way."
I had a coworker who was always diplomatic about code changes he felt could be improved, but when he felt he was nitpicking he would say: "It's better than it was." It allowed him to provide criticism while also giving permission to go ahead even if there were minor things that weren't perfect. I strongly endorse this kind of attitude.
nit: this could be changed to XYZ
vs we should use XYZ here
where it was understood nits could be ignored if you didn't feel it was an urgent thing vs a preference. What I am describing would be something higher level, more like a comment on approach, or an observation that there is some high-level redundancy or opportunity for refactoring. Something like "in an ideal world we would offload some of this to an external cache server instead of an in-memory store, but this is better than hitting the DB on every request".
That kind of observation may come up in top-level comment on a code review, but it might also come up in a tech review long before a line of code has been written. It is about extending that attitude to all aspects of dev.
The trick to overcoming this is not to aim for "clean" but for "cleaner than before".
Just keep chipping away at it, whether it is a messy codebase or a messy kitchen.
The other saying I use is "completion, not perfection". That helps me in yard work especially. I'm not going for the cover shot of "Better Homes and Gardens"; I just need the lawn to be cut.
The sand blows in endlessly. You don’t aim for a pristine, sandless land. But you can’t ignore it or it takes over.
I’ll just pick up a few things and ferry them towards their “home.” Or go do a small amount of yard work. Etc.
So "better" means "more specialized" more often than it means "more optimized". I don't say it is a bad thing per se, but it is best to keep in mind that there are two types of improvement, fixes and specializations, because the latter is a commitment.
I always thought perfectionism meant extremely high achievements (for too great of a cost). But it can also be quitting without any progress because you can't accept anything less than perfect (which may or may not be achievable). Perfectionism can be someone procrastinating on a large task.
I don't think it holds in 100% of situations but I do think if you're going to make an error one way or the other, I'd rather do something smaller and release too early than do something bigger and waste time.
I worked on a team that built high precision industrial machinery. The team and the project manager decided to delay shipping because there were still problems. We delayed, fixed the problems, and the machine worked really well and was used for at least a decade. If we'd shipped it too soon we would have had to try to fix it at a remote site, and it would likely have suffered from problems.
With most products you want to figure out what is your MVP (minimal viable product) and what is the quality level your customers expect. If you ship something less than that it's probably not a good tradeoff. If you build too much and ship too late that's also not a good tradeoff. When shipping increments they also need to be appropriately sized and with the right quality level.
It's in a field that I have little experience with (Information Retrieval). So there is obviously prior art that I could learn from or even integrate with.
This article motivates me further to learn things by focusing on building my own and peek into prior art as I go, when I'm stuck or need ideas.
Recently a Clojure documentary came out and the approach of Rich Hickey was seemingly the opposite: Deep research of prior art, papers, other languages over a long period of time.
However, he also mentioned that he made other languages before. So the larger story starts earlier, by making things and learning from practice.
Maybe that's also the bigger lesson: Don't overthink, start by making the thing. But later when you learned a bunch of practical lessons and maybe hit a wall or two, then you might need that deeper research to push further.
That was also on my mind thanks to the documentary. Then I followed up with "Simple Made Easy" and "Hammock Driven Development", and it makes me want to learn Clojure.
Clojure documentary on CultRepo channel: https://www.youtube.com/watch?v=Y24vK_QDLFg
Simple Made Easy: https://www.youtube.com/watch?v=SxdOUGdseq4
Hammock Driven Development: https://www.youtube.com/watch?v=f84n5oFoZBc
Sometimes you just want to button-mash through, rushing about carefree.
Other times, you want to go entirely stealth, wandering around, trying to find the best path, wasting an hour or more on a level you could have button-mashed in 5 minutes.
Both are fine.
See also "Why does the C++ standard ship every three years" (as opposed to shipping when the desired features are ready):
https://news.ycombinator.com/item?id=20428703 (2019-07-13, 220 comments)
Great explanation for what I see when I mess around with coding LLMs. The natural human instinct of “this feels complicated, let me think about it some more” is suppressed. So far all the gains from the stunning initial speed have been cancelled out later in the project, arising from the over-engineered complexity baked into the code.
My ability to get this right is often a matter of how well I know the domain. If I don't know the domain as well as I think I do, I fall into a lot of rework. If I know the domain better than I imagine, then I waste my time with a baby-step process when I could have run. All of this is a big judgement call, and I have "regrets" in both directions.
Don't fall prey to sunk cost fallacy. Just because you spent hours researching a PhD level topic doesn't mean you now have to use it in your project, if it's not quite the right application.
I think the author is indeed someone who just really enjoys learning and doing all sorts of things, so the rabbitholing is part of the fun that tickles their brain.
You start with a simple goal → then research → then keep expanding scope → and never ship.
The people who actually finish things do the opposite: lock scope early, ignore “better ideas”, ship v1.
Most projects don’t fail due to lack of ideas, they fail because they never converge.
The real problem is avoidance, when cuts are warranted and you don't want them, so you ... hide, often by working hard on something else.
The solution is to value your time. Most don't, so (self-) managers instead need to dangle other opportunities: finish this so you can do that. You can't take candy from a baby without trouble; instead, you trade for something else.
This resonates hard. LLMs enable true perfectionism, the ability to completely fulfil your vision for a project. This lets you add many features without burning out due to fatigue or boredom. However (as the author points out), most projects' original goal does not require these complementary features.
Maybe I lack imagination or curiosity, but it makes it difficult to come up with an idea and follow it through.
Prototype a minority of the time. Research a majority of the time. At some point the ratio flips, as research fades out and production increases.
Organizations and Conferences:
1. Insist on doing everything through “channels.” Never permit short-cuts to be taken in order to expedite decisions.
2. Make “speeches.” Talk as frequently as possible and at great length. Illustrate your “points” by long anecdotes and accounts of personal experiences.
3. When possible, refer all matters to committees for “further study and consideration.” Attempt to make the committees as large as possible – never less than five.
4. Bring up irrelevant issues as frequently as possible.
5. Haggle over precise wordings of communications, minutes, resolutions.
6. Refer back to matters decided upon at the last meeting and attempt to re-open the question of the advisability of that decision.
7. Advocate “caution.” Be “reasonable” and urge your fellow-conferees to be “reasonable” and avoid haste which might result in embarrassments or difficulties later on.
8. Be worried about the propriety of any decision – raise the question of whether such action as is contemplated lies within the jurisdiction of the group or whether it might conflict with the policy of some higher echelon.
Managers and Supervisors:
1. Demand written orders.
2. “Misunderstand” orders. Ask endless questions or engage in long correspondence about such orders. Quibble over them when you can.
3. Do everything possible to delay the delivery of orders. Even though parts of the order may be ready beforehand, don’t deliver it until it’s completely ready.
4. Don’t order new working materials until your current stocks have been virtually exhausted, so that the slightest delay in filling your order will mean a shutdown.
5. Order high-quality materials which are hard to get. If you don’t get them argue about it. Warn that inferior materials will mean inferior work.
6. In making work assignments, always sign out the unimportant jobs first. See that important jobs are assigned to inefficient workers with poor equipment.
7. Insist on perfect work in relatively unimportant products; send back for refinishing those which have the least flaws. Approve other defective parts whose flaws are not visible to the naked eye.
8. Make mistakes in routing so that parts and materials will be sent to the wrong place in the plant.
9. When training new workers, give incomplete or misleading instructions.
10. To lower morale, and with it production, be pleasant to inefficient workers; give them undeserved promotions. Discriminate against efficient workers; complain unjustly about their work.
11. Hold meetings when there is critical work to be done.
12. Multiply paperwork in plausible ways. Start duplicating files.
13. Multiply the procedures and clearances involved in issuing instructions, making payments, and so on. See that three people have to approve everything where one would do.
14. Apply all regulations to the last letter.
Kids these days just want to use prefab libraries and frameworks with a million dependencies doing god knows what and written by randos.
(Unrelated to how commenters these days just want an excuse to use the term "peanut smut".)
This technique is called out in the CIA simple field sabotage manual.
do you want to learn a new skill? do you want to scratch a very specific personal itch for just yourself? do you want to solve problems for others as well? do you want to build a startup/business around the idea?
all of these necessitate different approaches and strategies to research and coding. scratching an itch? maybe fully vibe coding is fine. want to learn? ditch the vibes and write by hand and ignore prior art. want to build a business? do some actual market research first and decide if this is something you actually want to pursue.
this post was a good reminder for me to identify the why as early on as possible and to be ok with just building something for myself without always having to monetize a side project which, for me, just zaps all joy from it.
Project where the sole user is you in your kitchen? Sure, hack it together.
Project where you actually want other people to use the product? A research phase matters and helps here.
Consider what the goal is and the amount of effort to invest typically becomes more evident.
Basically, you will end up dependent on the massive complexity of a compiler due to the syntax complexity, and, as the cherry on top, thanks to ISO you'll get feature creep creating a cycle of planned obsolescence every 5 to 10 years.
Oh, sorry, "they" called that "innovation".