Demonstrating that this stuff is possible isn't the hard part, it seems. Productionizing it is. You need exceedingly fast read and write speeds: who cares if it can store an exabyte if it takes all month to read it, or if you produce data faster than you can write it? It has to be durable under adverse conditions. It has to be practical to manufacture the medium and the drives. You probably don't want to need a separate device to read and another to write. By the time most of these problems are worked out, most of these technologies aren't a whole lot better than existing tech.
Stick this on the "Wouldn't it be nice if graphene..." pile.
Red LEDs were invented/discovered in the 1920s and became commercially successful as indicators in the 1960s. Optical fibers were invented around the 1920s and became a commercial success in the 1980s.
Certain things just take time. Do not dismiss a good physical effect; they are much rarer than so-called good ideas.
Do you just think Google hates money, or does this only work for hover cars?
With the help of “remote assistance”, that is. Which is probably one of the reasons for the limited rollout.
“Feasible” is doing some heavy lifting there. The whole point of the comment you replied to is that it can take a long time for some new physical technique to become commercially feasible.
In short, if a tech takes 40 years to be commercialised, it would have been invented sometime in the '80s.
To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
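The arithmetic checks out; a quick sketch, assuming a 30-day month and the decimal exabyte:

```python
EXABYTE_BITS = 1e18 * 8        # 1 EB = 1e18 bytes
MONTH_SECONDS = 30 * 86400     # 30-day month

rate_bps = EXABYTE_BITS / MONTH_SECONDS
print(f"~{rate_bps / 1e12:.2f} Tbps")   # just over 3 Tbps
```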
Or maybe RAEND
Massive storage that takes a month to fully read is acceptable in a wide variety of use cases. If it's cheaper than hard drives, it'll get a huge number of users.
Reading a floppy disk took around 30 secs for example. A whole CD took 5 mins. My whole 1TB SSD takes 10 mins.
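The effective full-device read rates those times imply, taking ballpark capacities (1.44 MB floppy, 700 MB CD) as assumptions:

```python
# (size_bytes, read_seconds); capacities are ballpark assumptions
devices = {
    "floppy (1.44 MB)": (1.44e6, 30),
    "CD (700 MB)":      (700e6, 5 * 60),
    "SSD (1 TB)":       (1e12, 10 * 60),
}
rates = {name: size / secs for name, (size, secs) in devices.items()}
for name, rate in rates.items():
    print(f"{name}: {rate / 1e6:,.2f} MB/s")
```

Each generation gains a few orders of magnitude of bandwidth alongside the capacity, which is exactly the scaling an exabyte medium would also need.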
But I 100% agree with your main point about possibility vs productionisation.
I don’t think this would bother the average enterprise in the least. We used to have entire rooms dedicated to tape libraries that housed dozens of tape drives and thousands of tapes each.
The read and write speeds are absolutely critical, but having to utilize multiple devices isn’t anything new at all.
Of course, wouldn't you expect that, for a fairly mature technology, you'd get tons of false starts from competing tech before eventually getting one breakthrough that completely changed everything? You could have written a comment perfectly analogous to your paragraph above about how AI and neural networks never really amounted to much for 50-60 years until, all of a sudden, they did. (And even if you think AI is currently overhyped, it's undeniable that in the past 5 years AI has had an effect on society probably much greater than all the previous history of AI put together.)
I prefer to read this academic paper as "Oh, this is a really interesting approach, I wonder what its limitations are" vs. interpreting it as a "this new storage tech will change the world!!!" announcement. I feel like the first approach leads to generally more curiosity, while the second just leads to cynicism and jadedness.
(I'm also unclear how the bit is supposed to actually flip under the applied electric charge without the fluorine and carbon having to pass through each other.)
Technical note, because it's jargon:
"Real" means position = A * sin(w * t)
"Imaginary" means position = A * exp(w * t)
(because exp(w * i * t) = cos(w * t) + i * sin(w * t))
If you compute an ammonia molecule with all the atoms in a plane z = 0 (instead of the usual pyramidal shape), then the N in the center is in an unstable equilibrium, and the N does not make small vibrations like z = A * sin(w * t).
It makes a big "imaginary" vibration like z = A * exp(w * t) that is exponential for a short time while z is almost 0; then the approximations no longer apply, and it reaches the z of the usual equilibrium shape.
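A toy numeric sketch of this jargon, in arbitrary units: a positive potential curvature k gives a real frequency and oscillation, while a negative curvature (as at the flat ammonia geometry, or a transition state) gives a purely imaginary frequency and exponential runaway:

```python
import cmath

m = 1.0                              # mass (arbitrary units)
for k in (1.0, -1.0):                # potential curvature: well vs. saddle point
    w = cmath.sqrt(k / m)            # normal-mode frequency w = sqrt(k/m)
    if w.imag == 0:
        print(f"k = {k:+}: w = {w} is real -> z ~ sin(w t), small oscillations")
    else:
        print(f"k = {k:+}: w = {w} is imaginary -> z ~ exp(|w| t), runs away")
```

This is why quantum chemistry codes report "one imaginary frequency" as the fingerprint of a transition state.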
The AFM mechanism described as “tier 1” (very strong LLMism, btw) is somewhat optimistic but realistic. The fields needed are large compared to usual values in solid state devices, but I’d guess achievable with an AFM. But “tier 2” is vague and completely speculative. Some random things I noted:

- Handwaving that (not exact quote) “the read controller is cached. No need to read the same bit twice”. Cached with what?? If this miraculous technology can achieve 25 PB/s, what can possibly hope to cache it? More generally, it’s a strange thing to point out.
- Some magic and completely handwaved MEMS array that converts an 8um spot size laser beam into atomic-resolution 2D addressing? In my opinion this is the biggest sin of the manuscript. What I understood to be depicted is just fundamentally physically impossible.
- A general misunderstanding of integrated electronics, and dishonest benchmarking: comparing real memory technologies being sold at scale right now vs theoretical physical bounds on an untested idea. Also no mention of existing magnetic tape as far as I can tell.
- Constantly pulling out specific numbers or estimates with no citation and insufficient justification. Too many examples to even count.
I’m sorry for the harsh language; I wouldn’t use it for a usual review. But in my opinion this needs a very heavy toning down and a complete rewrite, and is unfit for a proper review. Final remark: electronics is, and will always fundamentally be, intrinsically denser than optics. Some techniques “described” here, if they were possible, would have been applied to existing optical tech (e.g. phase-change materials in Blu-ray).
> Once a region of tape has been read, the controller stores the result. Subsequent operations reference the cache rather than re-interrogating the physical medium. Re-reading a known bit is unnecessary; the controller already holds its state
However, earlier, the paper claims:
> The transformer architectures underpinning modern large language models are bandwidth-limited, not compute-limited [1–3]. The energy consumed moving data between DRAM, NAND flash, and processor cache already exceeds the energy consumed by arithmetic in datacenter AI accelerators [2]. This is not an optimization problem. It is a materials problem [emphasis mine].
as part of a longer rant about the AI "memory wall" in the very first section. If we open with a long spiel about how memory is expensive in material cost and energy cost and this material is a solution for that then what are we caching the read in? On that note, what kind of computer engineer thinks about cache on the order of individual bits on a medium?
And, as you point out, 25 PB/s is a lot. Around 1000x that of a typical on-die SRAM cache, I think.
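Rough sanity check on that ratio; the ~25 TB/s figure for aggregate on-die cache bandwidth is my own ballpark assumption, not from the paper:

```python
claimed_bw = 25e15    # 25 PB/s, the rate discussed above (bytes/s)
sram_bw = 25e12       # ~25 TB/s aggregate on-die cache bandwidth (ballpark assumption)
print(f"claimed / cache ~ {claimed_bw / sram_bw:.0f}x")
```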
A while later, the author speaks of using atomic force microscopy to read the data back. AFM scan sizes are, in practice, as I understand it, on the order of square micrometers. I think this whole paper is an AI-driven, as you put it, 'fever dream', enabling an author to put forth 60 pages of sciencey claims and sciencey math without -- as far as I can tell -- any concrete and novel scientific result of any kind. AI-driven reality warps are not new; the difference is that nowadays AIs are good enough at sounding smart to get past the barriers of a typical smart person who might want to be fooled or make a show of being open-minded. Later on, the author proposes using "shaped femtosecond IR pulses" -- without further elaboration -- to address single atoms! IR wavelengths are on the order of a micrometer at minimum!
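A quick Abbe-limit estimate of how far a ~1 um IR beam is from atomic resolution (the numerical aperture is an assumed best case):

```python
# Abbe diffraction limit: d = wavelength / (2 * NA)
wavelength_nm = 1000.0        # ~1 um, the short end of the IR
numerical_aperture = 1.4      # good oil-immersion objective (assumed best case)
spot_nm = wavelength_nm / (2 * numerical_aperture)

atom_spacing_nm = 0.25        # typical interatomic distance
print(f"focused spot ~{spot_nm:.0f} nm, ~{spot_nm / atom_spacing_nm:.0f}x an atomic spacing")
```

Even under generous assumptions, a far-field IR spot is roughly three orders of magnitude wider than a single atom.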
The caching comment refers to the Tier 1 controller holding a bitmap of bits it has already scanned — standard practice in any scanning probe system. It's not competing with the storage medium for capacity.
Tier 2 is explicitly labeled speculative. The paper's validation target is Tier 1: one C-AFM scan, one voltage pulse, existing equipment.
The core contribution is not the architecture — it's the physics: a verified transition state for C-F pyramidal inversion at 4.6 eV (B3LYP) and 4.8 eV (CCSD(T)), one imaginary frequency, barrier below bond dissociation. That's standard computational chemistry, not handwaving. The architecture sections are forward-looking by design.
The fluorine passes between two carbon neighbors through a C-C gap of 2.64 Å at the transition state — not through any atom. This is pyramidal inversion, the same mechanism as ammonia, but with a 4.6 eV barrier instead of 0.25 eV.
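A rough Arrhenius estimate of what those barriers mean for spontaneous flips; the 1e13 Hz attempt frequency is a standard ballpark assumption, and this ignores tunneling (which in fact dominates for ammonia):

```python
import math

K_B_EV = 8.617e-5                 # Boltzmann constant in eV/K
ATTEMPT_HZ = 1e13                 # typical attempt frequency (assumption)

def hop_rate(barrier_ev, temp_k=300.0):
    """Classical Arrhenius rate for thermally crossing a barrier."""
    return ATTEMPT_HZ * math.exp(-barrier_ev / (K_B_EV * temp_k))

for name, ea in (("ammonia inversion", 0.25), ("C-F inversion", 4.6)):
    print(f"{name} ({ea} eV): ~{hop_rate(ea):.1e} events/s at 300 K")
```

The 0.25 eV barrier is crossed constantly at room temperature, while 4.6 eV gives an essentially zero classical rate, which is the argument for non-volatility.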
Magnetic tape comparison is in Table 2.
How is this lost on people? Everything that contains the slightest hint of "AI slop" is instantly panned anywhere it appears, and yet people such as Ilia Toli appear to be entirely oblivious to this.
It's tragic. There is at least a non-zero chance that this work is a world changing breakthrough. It's clear, based on his engagement with comments here, that he at least believes this. And yet the first thing the guy does with it is debase it all using a clanker.
It boggles the mind.
We're seeing this throughout academe, in courts with both lawyers and judges, and among lawmakers and journalists. Several times a week, one or another of these makes another headline for misapplying "AI". It seems that the work for which we are all expected to have the highest regard is coming from people who are completely witless: both unaware of how transparent this is and unaware of the consequences.
You have to be deeply ensconced inside an impenetrable bubble to do that to yourself.
I largely agree with your point, but I’m afraid you are the one in the bubble. Detecting AI writing is a rare skill, not the norm. It’s glaringly obvious to those of us who use AI a lot, but it’s not that obvious to the average person.
To the point of absurdity in cases – I’ve seen loads of people who hate AI complain about AI online, not realising that the account they are talking to is nothing but a simple spam bot.
Do not match your communication style to nonsense articles.
"The physics is mine — thirteen years of it, starting from the 2013 paper. I use AI for editing, as I use a calculator for arithmetic. The transition state, the barrier, the molecular model, the fluorine uniqueness argument — all computed on my workstation. The tone criticism is heard and will be addressed in revision. The calculations don't change with the prose."
This is NOT about "prose." You're missing the point. Badly. And damn that's frustrating.
Read carefully and inculcate: Do not use LLM to write anything you expect to be taken seriously. This is not negotiable. It doesn't matter if all your peers and colleagues are doing exactly that. It doesn't matter that this is your first experience with such a reaction: it's not a fluke. DO. NOT. DO. IT.
Am I getting through?
And what’s the reason for going solo vs a research university, where I assume this type of research could be significantly sped up?
Edit: https://www.mathgenealogy.org/id.php?id=61429 It looks quite unrelated
https://www.researchgate.net/publication/258423577_Data_Stor...
Clearly they have been working on this for over a decade.
My experience has been that research became much more fulfilling after finishing my PhD. I got more research independence, the level of work I was expected to do increased, and as a bonus, my salary almost tripled. It was like having the world open up, and starting to really experience being a scientist without my PI protecting me.
I was curious about their decision because if you're taking on the opportunity cost of a PhD, it's probably because you enjoy research; but if you enjoy research, you wouldn't keep going back to the starting point. So, without additional context, it seemed like they just wanted the credentials.
I think it was also worth asking because universities often want to know why you want another PhD, since from their perspective, spending that funding on someone with no PhD potentially creates a new researcher (vs spending it on an existing researcher). So, if they managed to get into a PhD program again, they probably had a good reason.
Their response about different countries is an explanation (especially from an immigration angle), it's not like I'm asking them to lay out all their personal circumstances behind the decision in detail.
If I'm paying for "free" education with my tax euros, I might as well use it.
I had thought for a while about a way to store data that makes use of an idea that I had for sub-diffraction limited imaging inspired by STED microscopy.
First, an overview of STED. You have a "donut"-shaped laser (or toroidal laser) that is fired at a sample. This laser has an inner hole that is below the diffraction limit. It is used to deplete the ability of the sample to fluoresce, and then, immediately after, a second laser is shone on the same spot. The parts of the sample depleted by the donut laser don't fluoresce, so you only see the donut hole fluoresce. This allows you to image below the diffraction limit.
My idea was to apply this along with a layer in the material that exhibits sum frequency generation (SFG). The idea is that you can shine the donut laser with frequency A and a gaussian laser with frequency B at the same spot. When they interact in the SFG material you get some third frequency C as a result of SFG. Then, below that material would be a material that doesn't transmit frequencies C and A.
Then what you'd be left with after the light shines through those two layers is some amount of light at frequency B. The brightness inside the hole versus outside of it would depend on how much of the light at frequency B converts into frequency C. Sum frequency generation is a very inefficient process, with only some tiny portion of the light participating. But my thinking is that if laser B is significantly less bright than laser A, then most of the light from laser B will participate in sum frequency generation, mixing with laser A, so that you'll be left with only a tiny bit of laser B outside of the hole. That gives you a nice contrast ratio for the light at frequency B between the hole and the surroundings, which then allows you to image whatever is below these layers below the diffraction limit.
In my idea the final layer is some kind of optical storage medium that can be read/written by the laser below the diffraction limit. Obviously aiming this would be hard :) My idea was that it would be some kind of spinning disk, but I never really got to that point.
You're comparing to current memory technologies, but there are also some optical technologies like AIE-DDPR, which presumably is (a lot?) less dense but has layers (I noticed you're also discussing a volumetric implementation). Would devices based on your technology be simpler/faster? (I guess optical disks don't intend to replace high-speed memory.) What about access times?
I'm sure you could take this material and write a couple papers out of it, but right now this is a 60 page word document with commentary on a variety of topics from memory market economics to quantum computing.
It's full of self-congratulatory language like "The transition is not an incremental improvement within the existing paradigm; it obsoletes the paradigm and the infrastructure built around it". Alright, I'm happy to believe that this work is important. But this is not the neutral tone of a scientific article, it reads like ad copy for a new technology.
I'm sure there's interesting physics in there, but it needs a serious editing effort before it could be taken seriously by a journal.
Big discoveries will speak for themselves.
Smells like laziness to me.
Does that mean a scanning tunneling microscope is the I/O mechanism? That's been demoed for atom-level storage in the past. But it's too slow for use.
fluorographane -> Fluorographene
Can't find a single page about fluorographane
https://en.wikipedia.org/w/index.php?search=fluorographane&t...
But this
https://nowigetit.us/pages/d7f94fd0-e608-47f9-8805-429898105...
> A scanning-probe prototype already constitutes a functional non-volatile memory device with areal density exceeding all existing technologies by more than five orders of magnitude.
Are we supposed to read all these stories as lies?
Now it doesn’t say that this is easy to produce, but if those claims are true, it doesn’t really matter if it is very expensive.
It doesn’t say, either, whether the stuff can withstand real-world conditions.
It’s annoying not to be able to trust whether solutions like these are viable or not.
[1]: https://www.tampabay.com/archive/1991/06/23/holograms-the-ne...
Research can be interesting but so often none of it goes anywhere, it's just hype and there's a reproducibility crisis in academia. Look at the decades wasted on academic fraud and appeals to authority with Alzheimer's research [1].
Most of this media is the academic equivalent of "doctors HATE this guy".
Or, to imply guilt by association by first constructing a false stereotype of research in one field, and then applying it to an instance of research in another field?
The price of the 50 kWh unit I had put into my house was very low.
Sodium-ion is still ramping up, but it is commercially available. That straight-up wasn't possible a few years ago, until the electrode breakthroughs.
It was under subsidy, but I got about double what I was going to get about 6 months prior. There are 50 kWh units going on AliExpress for about $12k AUD outright, so I think there's been another step down in per-cell costs which is trickling through.
I'm waiting for a price cut to make outright purchases a bit more affordable, but with a wholesale electricity service plan, adding another, say, 100 kWh probably works out.
I have hopes for sodium-ion cells; they should be way more shippable and presumably a better fit for residential power.