The underlying text (1911 edition) is public domain, but the structured version here — the parsing, reconstruction, and linking — is something I put together for this site. Right now there isn’t a bulk download available. I’m considering exposing structured access (API or dataset) in some form, but haven’t decided exactly how that will work yet.
If you have a specific use case in mind (especially for training), I’d be interested to hear more.
Separately, I've fine-tuned the Gemma 4 model[2]; it was very quick (just 90 seconds), so I think it could be interesting to train it to talk like the 1911 Encyclopedia Britannica.
I would use the entries as training data and train the model to write in the same style. There isn't a specific use case; I just think it would be interesting. For example, I could see how it writes about modern concepts in the style of the 1911 Britannica.
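As a rough sketch of what that data prep might look like: each entry could be turned into a prompt/completion pair for supervised fine-tuning. The entry structure, prompt wording, and JSONL format here are all assumptions (adapt to whatever the fine-tuning tool actually expects), and the sample entries are placeholders, not real corpus data.

```python
import json

# Placeholder entries; the real data would come from the parsed
# 1911 Britannica corpus (not available as a bulk download).
entries = [
    {"title": "ABACUS", "text": "ABACUS, an instrument for performing arithmetical calculations..."},
    {"title": "AARDVARK", "text": "AARDVARK, the Dutch name for the African ant-bear..."},
]

def to_training_example(entry):
    """Turn one encyclopedia entry into a prompt/completion pair.
    The exact schema is an assumption; most fine-tuning tools accept
    some variant of this shape."""
    return {
        "prompt": (
            f"Write an encyclopedia entry on {entry['title'].title()} "
            "in the style of the 1911 Encyclopedia Britannica."
        ),
        "completion": entry["text"],
    }

def write_jsonl(entries, path):
    """Write one JSON object per line, the common fine-tuning format."""
    with open(path, "w", encoding="utf-8") as f:
        for entry in entries:
            f.write(json.dumps(to_training_example(entry)) + "\n")
```

For "modern concepts in 1911 style", the prompts at inference time would just name topics the encyclopedia never covered, while the completions seen during training teach the register.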
[1] https://stateofutopia.com/encyclopedia/
[2] To talk like a pirate! https://www.youtube.com/live/WuCxWJhrkIM
Another reason would be to be able to keep running and using it even if the main site were to go down for whatever reason, or to operate a mirror of it for redundancy (linking back to the original, of course).