It took 15 if not 20 years to commercialize even such an obvious, low-tech thing as the radio telegraph, which can literally be built from common household supplies. And that happened some 35 years after Maxwell predicted electromagnetic waves theoretically.

Red LEDs were invented (or discovered) in the 1920s and became commercially successful as indicators in the 1960s. Optical fibers were invented around the 1920s and became a commercial success in the 1980s.

Certain things just take time. Do not dismiss a good physical effect; they are much rarer than so-called good ideas.

reply
It doesn't take long to commercialize feasible new tech in this day and age. If someone invented an electromagnetic hovercar tomorrow, it would be available for sale next week, and regulations would follow after.
reply
Waymo has cars that drive themselves and are dramatically safer than human drivers in most conditions, and yet they're only in select cities.

Do you just think Google hates money, or does this only work for hovercars?

reply
> Waymo has cars that drive themselves

With the help of “remote assistance”, that is. Which is probably one of the reasons for the limited rollout.

reply
I don't know the costs and logistics of such an operation. Maybe you do?
reply
> It doesn't take long to commercialize feasible new tech

“Feasible” is doing some heavy lifting there. The whole point of the comment you replied to is that it can take a long time for some new physical technique to become commercially feasible.

reply
What advantage would hovering have?
reply
No street infrastructure needed to drive anywhere (kinda).
reply
Ok, and where does the energy to consistently keep a weight in the air come from, and is it really worth spending?

I know flying cars are some sort of futuristic trope, yet I cringe every time I see it. They always assume magical infinite power. In the real world, the reason we do not have flying cars is the same reason you don't use a drone as a coat hanger at home: it is just more practical to use a mechanical solution that holds your coat for infinite time, without any energy use or noise/heat emissions, and it is much cheaper.

Holding stuff up against gravity is not free, but a piece of wood, a brick, or a rubber wheel does a pretty good job of it. One way to do it is magnets, but that means you need even more complicated roads.

We are living on a warming planet where only the naive and the evil pretend that energy use is something only the poor have to think about. We all have to think about it.

reply
Smoother ride; no need for wheels, so no road friction and fewer parts that wear; no need for shock absorbers either; no need for roads cleared of snow and ice, which would make them both more practical and safer... if we're talking Star Trek hovering, not noisy rotor-blade / hovercraft shit with rotating parts that waste a ton of energy.
reply
The only technologies that are commercialised quickly today are the ones that can be commercialised quickly. The ones that can't won't be for decades yet.

In short, if a tech takes 40 years to be commercialised, anything hitting the market today would have been invented some time in the '80s.

reply
It feels a little disjointed to compare this to old tech. Computing-tech iteration cycles and adoption rates seem more relevant than things from the dawn of communications technology.
reply
Communication technologies have been evolving for billions of years
reply
> who cares if it can store an exabyte if it takes all month to read it

To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
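The arithmetic checks out under decimal units and a 30-day month (both assumptions, since the comment doesn't fix either):

```python
# Sustained rate needed to read one exabyte (1e18 bytes, decimal)
# in a 30-day month.
bytes_total = 1e18
month_s = 30 * 24 * 3600
bits_per_s = bytes_total * 8 / month_s
print(f"{bits_per_s / 1e12:.2f} Tbps")
```

which comes out to roughly 3.09 Tbps, matching the ">3 Tbps" figure.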

reply
Plus, just put 32 of them in a striped RAID if you really need to read an exabyte a day
reply
*RAED

Or maybe RAEND

reply
RAVED is more likely. These things aren't cheap.
reply
But if you need 1 EB, waiting a whole month for it isn't great. You'd be better off with 720 1 PB devices taking an hour in parallel.
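A quick sanity check of the parallelism, assuming decimal units, a 30-day month, and each small device sustaining the same per-byte throughput as the big one:

```python
# One device holds 1 EB and takes a month to read; how long does a
# 1 PB device take at the same throughput, and how many are needed?
EB, PB = 1e18, 1e15
month_s = 30 * 24 * 3600

throughput = EB / month_s            # bytes/s of the big device
one_pb_read_s = PB / throughput      # full read of one 1 PB device
print(f"throughput:       {throughput / 1e9:.0f} GB/s")
print(f"1 PB full read:   {one_pb_read_s / 3600:.2f} hours")
print(f"devices for 1 EB: {EB / PB:.0f}")
```

A month is ~720 hours, so 720 parallel streams at the single-device rate do move an exabyte in about an hour, which is where the figure comes from; with exactly 1 PB per device you'd need ~1,000 of them, each finishing in ~0.72 hours.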
reply
Yes, it causes problems in this increasingly narrow situation.

Massive storage that takes a month to fully read is acceptable in a wide variety of use cases. If it's cheaper than hard drives, it'll get a huge number of users.

reply
It's notable that 'time to read/write entire device' has been creeping up for any storage device you can buy off the shelf for the past ~40 years.

Reading a whole floppy disk took around 30 seconds, for example. A whole CD took 5 minutes. My whole 1 TB SSD takes 10 minutes.
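Those full-device read times can be reproduced from rough sustained transfer rates (the rates below are my assumptions for typical drives of each era, not figures from the comment):

```python
# name, capacity (bytes), assumed sustained rate (bytes/s)
devices = [
    ("1.44 MB floppy", 1.44e6, 60e3),   # ~60 kB/s
    ("700 MB CD, 24x", 700e6,  3.6e6),  # 24x ~= 3.6 MB/s
    ("1 TB SATA SSD",  1e12,   1.7e9),  # ~1.7 GB/s sequential
]
for name, capacity, rate in devices:
    minutes = capacity / rate / 60
    print(f"{name:15s} {minutes:6.1f} min")
```

which gives roughly 0.4, 3.2, and 9.8 minutes respectively, the same order as the 30 s / 5 min / 10 min figures above.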

reply
Interesting, this is my first time consciously thinking about this trend.

Perhaps the need for read/write speed is bounded (before the processor, etc., becomes the limiting factor), while capacity is limited only by price. Or maybe increasing storage density inherently trades off against I/O speed (AFAIK, NAND flash needs to rewrite lots of data just to make a single write? Atom-scale interactions have side effects).

reply
A modern hard drive (36 TB @ 280 MB/s) can take more than a day to read in full. If you treat a bank of tapes as one device, this gets even more extreme.
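The 36 TB figure is easy to check (decimal units; the 280 MB/s is assumed flat across the whole surface, which real drives don't quite manage):

```python
# Full sequential read of a 36 TB drive at 280 MB/s sustained.
capacity = 36e12        # 36 TB, decimal
rate = 280e6            # 280 MB/s
hours = capacity / rate / 3600
print(f"{hours:.1f} hours (~{hours / 24:.1f} days)")
```

about 35.7 hours, i.e. roughly a day and a half.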
reply
In long term archival use cases this is less of an issue. Especially if it’s many exabytes we’re talking about, needing to be stored for decades.

But I 100% agree with your main point about possibility vs productionisation.

reply
Well, yeah. It takes a heck of a long time to pull something out of the lab, let alone out of theory, into the real world, and there are a ton of ways it can die along the way. But you do need people pursuing these things to actually get something into production, or there really would never be any progress. To me, this reaction feels like a bit of a misunderstanding of why it's worth discussing these ideas at all: it's not meant to be a forecast of where technology is definitely going, it's a potential direction that some people think is worth pursuing, and even if the odds are low for any given idea, that doesn't make them worthless. (I've worked for nearly 10 years to turn something that 'worked in the lab' when I joined into an actual product, for example, and it's still not quite standing on its own feet in production yet.)

I'm not familiar enough with the space to know how this idea rates compared to alternative options at similar levels of development: the density is obviously extreme (but probably not the biggest advantage), and it makes sense to me that the underlying physics could work robustly, but the practicalities of how you read and write seem pretty difficult (and I think the paper kind of glosses over this: read caching and defect mapping could be trickier than it implied. Accessing the tape from both sides also seems like it will make the engineering more difficult).

reply
I have no idea if this is practical but I remember when flash memory was this suspicious semi-science fiction thing too. There are probably some people on this site that remember the same for DRAM. There have been loads of things in between that didn't make it. Some of them were semi-crackpot, some actually went into production like bubble memory and Optane. Few of them have met the sweet spot of the market in a way that let them move from a niche to a dominant form of memory, but still I wouldn't discount that it's possible to invent a new form of memory that will take over the world!
reply
Most kinds of memory devices are based on old principles of making a memory device, which are applied to new materials.

I do not think that any new memory-device principles have been invented since WWII. Already by 1940, John Vincent Atanasoff, the inventor of DRAM, had enumerated almost all the principles that can be used to make a memory device.

Atanasoff's first DRAM was made with discrete capacitors; five years later von Neumann proposed using iconoscope cathode-ray tubes instead, which were used for a few years before being replaced by magnetic-core memories. Intel was formed to commercialize the first (1 kbit) DRAM integrated circuit made with MOS transistors.

The memory described in TFA is in principle equivalent to a memory made with mechanical toggle switches, or latching relays with mechanical latching, where the two stable states are maintained by elastic forces and you can toggle the state by applying a large enough force to the switch.

Reducing a mechanical bistable device to the size of a few atoms reaches the possible limit of memory density. As described in the parent article, this device should be able to store information safely and to switch its state quickly.

The difficulties are not in the memory cell itself, but in enabling fast and accurate reading and writing. While the memory cell itself may have the minimum size permitted by the atomic structure, there is no way to miniaturize any kind of read/write interface to the same extent, so that it could be incorporated into the memory cell as in an SRAM cell.

Therefore the only solution that can preserve the high cell density is a read/write head shared by a great number of cells, i.e. one that must be moved in order to access different cells.

So the memory, at least within a block, must have mechanical access, which means it must be implemented as a tape or a disc. Multiple heads could be used to increase the read/write speed, as with magnetic memories.

So I do not think there is much to criticize in this paper: it makes sense, and it identifies a new material suitable for implementing a known kind of memory cell at the atomic scale, even if it is unlikely that a practical memory based on this concept will become possible any time soon.

Microsoft has worked for many years on its glass memory devices, which have much more important advantages, and it is still far from being able to sell them, mainly due to the cost of the required lasers. There is a chicken-and-egg problem here: the lasers are very expensive because they are produced in very small quantities, and they cannot be incorporated into a device intended for mass production because they are too expensive.

reply
Basically, you just ignore the hyped-up press releases; they accompany most semi-cool/exciting papers. The scientists probably know this isn't going to be some new storage technology that becomes widespread, but it's just part of the game to sell the story like this, and the administration wants it.
reply
> You probably don't want to have to need a separate device to read and a device to write.

I don’t think this would bother the average enterprise in the least. We used to have entire rooms dedicated to tape libraries that housed dozens of tape drives and thousands of tapes each.

Read and write speeds are absolutely critical, but having to use multiple devices isn't anything new at all.

reply
It doubles design, development, and manufacturing cost, potentially doubling your supply chain. It's not a problem for the consumer.
reply
Used to? We absolutely still do. LTO is a widely used format, and as far as I'm aware, it is "picking up more steam" each year.
reply
In terms of capacity, LTO sales are increasing. In terms of tape count and drive count, there's been a steady decline.
reply
I don't think there are public numbers; no doubt IBM knows. If that's true, though, I expect the trend to reverse this year.
reply
> Every year or so there's a new article about some new spectacular storage medium. Crystals, graphene, lasers, quartz, holograms, whatever. It never materializes.

Of course. Wouldn't you expect that, for a fairly mature technology, you'd get tons of false starts from competing tech before eventually getting one breakthrough that completely changed everything? You could have written a comment perfectly analogous to the paragraph above about how AI and neural networks never really amounted to much for 50-60 years until, all of a sudden, they did (and even if you think AI is currently overhyped, it's undeniable that in the past 5 years AI has had an effect on society probably greater than all of its previous history put together).

I prefer to read this academic paper as "Oh, this is a really interesting approach, I wonder what its limitations are" rather than interpreting it as a "this new storage tech will change the world!!!" announcement. The first approach leads to more curiosity; the second just leads to cynicism and jadedness.

reply
In fairness, I assume any headline that emphasizes some excessively large storage density is, at best, something useful for archiving and not a replacement for an SSD. If they were targeting latency, they would lead with those numbers, not the density.
reply
On every article like this, someone points this out. Not hard to do, but it sure is reliable.
reply
The hard work would be maintaining a database of ideas that were similarly hyped over the past (say) couple of centuries - including details on if/when each idea worked out, fell out of hype-space, or was proven useless.

From that, you might be able to draw useful conclusions. Well...you'd also need correction factors for how profitable the hype itself was, over time, in the various scientific & technical fields.

The business model would be selling db access to VC's, R&D managers, and other folks making decisions about real money.

reply
Very large, fast, read-only memory now has an incredible use case: NN weights.
reply
The fact that most of the world's data is still stored on little spinny disks, considering how many times in the last 40 years we've seen this story, is criminal.
reply
Aren't lasers driving the current 32TB+ HDD tech?
reply
Yeah, but that wasn't a straight upgrade either. HAMR has all sorts of tradeoffs.
reply