I've been calling these things product primitives. I can't remember where I heard that term, but it refers to things like...
Blocks in Notion. Messages and conversations in Telegram. Frames and layers in Figma. Tweets in Twitter. Cells and sheets in Excel. Tools and layers in Photoshop. Commands in a CLI.
I think what makes for good product design is having a very small number of primitives. A bad product doesn't know what its primitives are. Or it has a very large number of primitives. It feels like everything in the product is some unique thing that works in its own unique way. So users have to learn a ton of different top-level primitives/concepts. It's confusing and intimidating and hard to teach. Ideally you just want one or two or three main primitives.
The complexity/power in an app comes from choosing powerful primitives that have depth, that are composable, etc. You can do a lot with Notion blocks. You can do a lot with Excel cells. You can do a lot with a CLI command. You can do a lot with a Minecraft block. There's depth there.
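The depth of the CLI-command primitive is easy to make concrete: each command does one small thing, and the pipe composes them into something none could do alone. A minimal sketch (the file name and contents are illustrative, not from the article):

```shell
# Four small primitives -- sort, uniq, sort, head -- composed with pipes
# to find the most frequent line in a file.
f=$(mktemp)
printf 'apple\nbanana\napple\ncherry\napple\n' > "$f"
sort "$f" | uniq -c | sort -rn | head -n 1   # prints the count and word, "3 apple"
```

The same handful of primitives recombine into log analysis, deduplication, histograms, and so on, which is exactly the "depth" being described.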
Seems like there's quite a bit more to it: https://outliner.tana.inc/learn
Small but not too small. Case in point: shell scripting (POSIX shell, Bash), where the scripting constructs were modelled as commands rather than introduced as separate concepts. We all know the result (a hot, slow mess).
Shell scripting is a victim of its own success: it is _so easy_ to get started that most users get value out of knowing the first one percent and never bother to actually learn the rest.
There aren't many who have read the Bash manual, or know what zsh can do that Bash cannot, etc.
"Shell scripting is a hot, slow mess" is the same hot slow mess that you get wherever the barrier to entry is extremely low (e.g. early PHP, early JavaScript/frontend development, game development with a game engine where you can just click around in the editor, etc).
We started with the second two points: our core technology was a sampler enabling arbitrary hierarchical Bayesian graph models for sparse data; our constraint was tractable, CPU-bound compute. The piece that took us longest to discover was that our end products need to be separate from our underlying technology.
We were given that advice in various words from many people even before we started but some lessons need to be lived to be learned.
Won't this lead to premature abstraction and application of design patterns everywhere? I mean, sure, of course you should do separation of concerns, keep your business domain layer clean of persistence/network/UI/… concerns etc. But your domain layer will still be very much tied to your product. There is no way around that.
So while you may have a few concepts that serve as interfaces between the two layers, how the layers themselves evolve should be decoupled.
This is more to disable its competitors than anything.
A one-pager forces you to find the foundational value and state it simply - no fooling yourself with a multitude of prospects and complexity.
The separability requirement makes explicit the need for the foundation to stand on its own. You can't lean on the branches prematurely, as if features were solid ground.
The single defining constraint forces you to conceive of and recognize the single most fundamental functionality - its shape, its abilities, its character.
The most elegant solutions typically arise not from unbounded degrees of freedom, but from building with a specific constraint in mind.
I think that this goes with point 1: composing the one pager helps define those constraints.
I have no hard data to back it up, but in my experience, projects that take the time to put everyone on the same page conceptually (even if it's a one-pager: high level, here's what we are and are not doing) end up succeeding far more often than projects that wing it. The wing-it projects always end up disappointing everyone who had opinions but never bothered to articulate them.
I’m gonna go do these…
- An advanced ranking algorithm
- Moderated contribution and discussion
I don't know... none of the examples makes sense to me. Especially:
> Google has Kubernetes
I mean, yeah, and? Google was originally a product built around PageRank, the core tech, wasn't it?
The biggest product of the century thus far, LLMs, is the core tech.
I don't doubt these rules have helped the author, but readers should be mindful when heeding them.
I would not say LLMs are products. It's still the early-adopter stage, and it's going to be skewed on HN -- a large portion of people here evangelize the virtues of digging through an electronics store's parts bin to finish off a PCB they made in their garage, then running an obscure version of Linux on it for work. That's a lot of tech kludged together, not a product -- not in the context of this article anyway. Same for the current state of LLMs. It's a tech waiting for a product to make it useful for the general population. For developers, GitHub's Copilot is probably the closest: it bundles LLM tech with their existing tech (GitHub), creating a product you don't have to piece together yourself if you don't want to.
The internet was a tech that was first played with like we play with LLMs now. It was the web browser -- a product that leveraged a core tech -- that made it widely usable. Large parts of the population have no idea the internet is not their web browser (or now apps that access that web through a different interface).
I read a quote from the new Apple CEO on AI that I think highlights the tech vs. product separation, and why Apple is where it is: "We never think about shipping technology. We always think about 'how can we leverage technology to ship amazing products'."
https://www.tomsguide.com/computing/macbooks/i-interviewed-j...
In the past, I worked in teams, building much more ambitious projects, and these rules would likely not apply.