Red LEDs were invented/discovered in the 1920s and became commercially successful as indicators in the 1960s. Optical fibers were invented around the 1920s and became a commercial success in the 1980s.
Certain things just take time. Do not dismiss a good physical effect; they are much rarer than so-called good ideas.
Do you just think Google hates money, or does this only work for hover cars?
With the help of “remote assistance”, that is. Which is probably one of the reasons for the limited rollout.
“Feasible” is doing some heavy lifting there. The whole point of the comment you replied to is that it can take a long time for some new physical technique to become commercially feasible.
I know flying cars are some sort of futuristic trope, yet I cringe every time I see it. They always assume magical infinite power. In the real world, the reason we do not have flying cars is the same reason you don't use a drone as a coat hanger at home: it is just more practical to use a mechanical solution that holds your coat indefinitely, with no energy use and no noise or heat emissions, and it is much cheaper.
Lifting stuff against gravity is not free, but a piece of wood, a brick, or a rubber wheel does a pretty good job of it. Magnets are another way to do it, but that means you need even more complicated roads.
We are living on a warming planet where only the naive and the evil pretend that energy use is something only the poor have to think about. We all have to think about it.
In short, if a tech takes 40 years to be commercialised, it would have to have been invented some time in the '80s.
To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
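A quick sanity check of that figure (assuming 1 EB = 10^18 bytes and a 30-day month):

```python
# Back-of-envelope check: sustained rate needed to read 1 EB in one month.
exabyte_bits = 1e18 * 8          # 1 EB = 10^18 bytes = 8e18 bits
month_seconds = 30 * 24 * 3600   # 30-day month = 2,592,000 s
rate_bps = exabyte_bits / month_seconds
print(f"{rate_bps / 1e12:.2f} Tbps")  # ~3.09 Tbps sustained
```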
Or maybe RAEND
Massive storage that takes a month to fully read is acceptable in a wide variety of use cases. If it's cheaper than hard drives, it'll get a huge number of users.
Reading a whole floppy disk took around 30 seconds, for example. A whole CD took 5 minutes. My whole 1 TB SSD takes 10 minutes.
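Those full-read times line up with typical sustained speeds for each device (the capacities and speeds below are ballpark assumptions, not measurements):

```python
# Rough full-device read times at typical sustained read speeds (ballpark figures).
devices = {
    # name: (capacity in bytes, sustained read speed in bytes/s)
    "floppy (1.44 MB)": (1.44e6, 50e3),   # ~50 KB/s  -> ~0.5 min
    "CD (700 MB)":      (700e6, 2.4e6),   # ~16x drive -> ~5 min
    "SSD (1 TB)":       (1e12, 1.7e9),    # ~1.7 GB/s  -> ~10 min
}
for name, (capacity, speed) in devices.items():
    print(f"{name}: {capacity / speed / 60:.1f} min")
```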
Perhaps the need for read/write speed is bounded (before the processor, etc. becomes the limiting factor), while more capacity is only limited by price. Or maybe increasing storage density inherently means a tradeoff with I/O speed (AFAIK, NAND flash needs to erase a whole block just to rewrite a single page? Atom-scale interactions have side effects)
But I 100% agree with your main point about possibility vs productionisation.
I'm not familiar enough with the space to know how this idea rates compared to alternative options at similar levels of development: the density is obviously extreme (but probably not the biggest advantage), and it makes sense to me that the underlying physics could work robustly, but the practicalities of how you read and write seem pretty difficult. I think the paper kind of glosses over this: read caching and defect mapping could be trickier than it implies, and accessing the tape from both sides also seems like it will make the engineering more difficult.
I do not think that any new memory-device principles have been invented since WWII. Already by 1940, John Vincent Atanasoff, the inventor of DRAM, had enumerated almost all the principles that can be used to make a memory device.
Atanasoff's first DRAM was made with discrete capacitors; five years later von Neumann proposed using iconoscope cathode-ray tubes instead, which were used for a few years before being replaced by magnetic-core memories. The Intel company was formed for the commercialization of the first (1-kbit) DRAM integrated circuit, made with MOS transistors.
The memory described in TFA is in principle equivalent to a memory made with mechanical toggle switches, or latching relays with mechanical latching, where the two stable states are maintained by elastic forces and you can toggle the state by applying a large enough force to the switch.
Reducing a mechanical bistable device to the size of a few atoms reaches the possible limit of memory density. As described in the parent article, this device should be able to store information safely and switch its state quickly.
The difficulties are not in the memory cell itself, but in enabling fast and accurate reading and writing. While the memory cell itself may have the minimum size permitted by the atomic structure, there is no way to miniaturize the reading and writing interfaces to the same extent so that they could be incorporated into each memory cell, as in an SRAM cell.
Therefore the only solution that can preserve the high cell density is to have a read/write head that is shared by a great number of cells, i.e. one that must be moved in order to access different cells.
So the memory, at least within some block, must have mechanical access, which means it must be implemented as a tape or a disc. Multiple heads could be used to increase the read/write speed, as with magnetic memories.
So I do not think that there is much to criticize in this paper; it makes sense, and it identifies a new material that is suitable for implementing a known kind of memory cell at the atomic scale, even if it is unlikely that a practical memory based on this concept will become possible any time soon.
Microsoft has worked for many years on their glass memory devices, which have much more important advantages, and they are still far from being able to sell such devices, mainly due to the cost of the required lasers. There is a chicken-and-egg problem: the lasers are very expensive because they are produced in very small quantities, and they cannot be incorporated into a device intended for mass production precisely because they are too expensive.
I don’t think this would bother the average enterprise in the least. We used to have entire rooms dedicated to tape libraries that housed dozens of tape drives and thousands of tapes each.
The read and write speeds are absolutely critical, but having to utilize multiple devices isn't anything new at all.
Of course. Wouldn't you expect that for a fairly mature technology you'd get tons of false starts from competing tech before eventually getting one breakthrough that completely changed everything? You could have written a comment perfectly analogous to your paragraph above about how AI and neural networks never really amounted to much for 50-60 years until, all of a sudden, they did. And even if you think AI may currently be overhyped, it's undeniable that in the past 5 years AI has had an effect on society probably much greater than all the previous history of AI put together.
I prefer to read this academic paper as "Oh, this is a really interesting approach, I wonder what its limitations are" vs. interpreting it as a "this new storage tech will change the world!!!" announcement. I feel like the first approach leads to generally more curiosity, while the second just leads to cynicism and jadedness.
From that, you might be able to draw useful conclusions. Well...you'd also need correction factors for how profitable the hype itself was, over time, in the various scientific & technical fields.
The business model would be selling db access to VCs, R&D managers, and other folks making decisions about real money.