I think there’s a difference between doomsday framing and preparedness.
Offline access and local models aren’t about assuming collapse—they’re about treating knowledge as infrastructure instead of something implicitly guaranteed.
If current frontier online LLMs are made inaccessible due to a local or global cataclysmic event, running models locally will be the least of your concerns.
This isn’t prepping for anything; it’s cosplaying as a vault dweller.
P.S. Having TED talks as part of the “educational” curriculum of this project is probably the biggest circle jerk imaginable.
Doomsday may not be the end of the world. It may simply be living in a country where you're being unjustifiably bombed by a foreign government led by a delusional sociopath, and access to information sources becomes limited.
I'm a fan of "civilization in a box" kinds of projects. However, the ZIM file format leaves a lot to be desired in 2026. I've been exploring a refreshed, alternative approach: https://github.com/stazelabs/oza
I do think having an LLM as an optional "sidecar" is a useful approach. If you can run a meaningful Ollama instance alongside your content, great!
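To make that concrete, a sidecar query is only a few lines against Ollama's default local HTTP endpoint. Rough sketch in Python; the model name and the excerpt file are placeholders for whatever you've actually pulled and archived:

    # Ask a locally running Ollama model a question, pasting a snippet of
    # the offline content into the prompt as context. Assumes Ollama is
    # listening on its default port and the named model has been pulled.
    import requests

    def ask_local(question, context):
        prompt = f"Answer using only this excerpt:\n{context}\n\nQuestion: {question}"
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3.2", "prompt": prompt, "stream": False},
            timeout=120,
        )
        r.raise_for_status()
        return r.json()["response"]

    print(ask_local("How do I purify water?", open("water_purification.txt").read()))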
This is really cool. Having offline Wikipedia + local LLMs in a single bundle is a great combo for emergency preparedness. Do you have any benchmarks on how it performs on lower-end hardware? Curious about minimum specs.
Not sure how good of an idea a Steam Deck would be for this. If you can't access Wikipedia, I imagine a replacement for its unprotected glass screen would be harder to come by if you drop it.
It might be an interesting idea given that the Steam Deck has a reasonable amount of RAM/GPU. The main issue for a knowledge base might be the lack of a physical keyboard, though.
I like this idea! I don't need the LLM bits, and want it to run on an old Android tablet I have lying around. Can anyone recommend similar software where I can get wikipedia / street maps / useful tutorial videos nicely packaged for offline use?
Missing a chance to note (or configure for?) installation on a Raspberry Pi --- that'd make an affordable option to leave powered down, but ready to go in an EMI-shield/Faraday Cage.
They specifically state that they’re aiming for a “fatter” model that expects higher-end hardware, and other projects like Internet in a box already target rpi-style devices.
The ones mentioned in this thread all use Kiwix for offline Wikipedia, OSM for maps, and Khan Academy for educational videos. It looks like internet-in-a-box is aimed at working well on low-powered devices, whereas nomad expects beefy hardware and includes local AI. Not sure how WROLPi differs from internet-in-a-box.
Maybe it's like linux distros: all based on the same software, but optimized for different use-cases or preferences.
I mean, technically they use Kolibri for educational videos and exercises. A lot of them do come from Khan Academy, but we do a lot of work to make an offline first education platform, and also bring in a huge swathe of other open educational resources.
I get the hate on AI for many reasons (hype, resource greediness, threat to civilization, etc), but having a local LLM that could help guide and reason about the data within seems like a win, especially if it's optional.
See, I really want this in a simpler format. Like a single-file embedded database on my filesystem that I can point one or a few tools at, for my model to use when it needs to.
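Something like a single SQLite file with an FTS5 index would get most of the way there. Rough sketch (table and column names are invented, and it assumes your Python build ships SQLite with FTS5, which most do):

    # One file on disk ("knowledge.db") holding a full-text index that a
    # local model, or any other tool, can query.
    import sqlite3

    con = sqlite3.connect("knowledge.db")
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS articles USING fts5(title, body)")
    con.execute("INSERT INTO articles VALUES (?, ?)",
                ("Water purification",
                 "Boiling water for one minute kills most pathogens..."))
    con.commit()

    # Full-text lookup that could be exposed to the model as a "tool".
    query = ("SELECT title, snippet(articles, 1, '[', ']', '...', 12) "
             "FROM articles WHERE articles MATCH ? ORDER BY rank")
    for title, snippet in con.execute(query, ("purify OR boil",)):
        print(title, "->", snippet)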
In the meantime, Wikipedia ships Wikidata, which uses RDF dumps (probably 8x larger than they need to be).
https://www.wikidata.org/wiki/Wikidata:Database_download
There is room for a third option leveraging commercial columnar database research.
https://adsharma.github.io/duckdb-wikidata-compression/
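For a rough idea of what the columnar route buys you, here's a toy sketch that stores triples in DuckDB and writes them out as zstd-compressed Parquet. The sample rows and file names are invented; the real size win would come from dictionary/zstd encoding on the highly repetitive predicate and entity columns:

    # Toy illustration: a handful of (subject, predicate, object) triples
    # stored columnar in DuckDB, then exported as compressed Parquet.
    import duckdb

    con = duckdb.connect("wikidata_slice.duckdb")
    con.execute("CREATE TABLE IF NOT EXISTS triples(subject TEXT, predicate TEXT, object TEXT)")
    con.execute("""
        INSERT INTO triples VALUES
        ('Q42', 'P31',  'Q5'),      -- Douglas Adams: instance of human
        ('Q42', 'P106', 'Q36180'),  -- occupation: writer
        ('Q64', 'P31',  'Q515')     -- Berlin: instance of city
    """)
    con.execute("COPY triples TO 'triples.parquet' (FORMAT PARQUET, COMPRESSION ZSTD)")
    print(con.execute("SELECT count(*) FROM triples").fetchone())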
> Offline access and local models aren’t about assuming collapse—they’re about treating knowledge as infrastructure instead of something implicitly guaranteed.
That feels more like resilience than pessimism.
AlexNet -> Transformers -> ChatGPT -> Claude Code -> Small LMs serving KBs
Large LLMs could have a role in efficiently producing such KBs.
>What is Project N.O.M.A.D.? Node for Offline Media, Archives, and Data
That's the first header and the first sentence of the first paragraph, right next to the tagline "Knowledge That Never Goes Offline", and I'm confused.
To "go offline" means for something that was once accessible "online" to become inaccessible. ("Offline" is an adverb.)
Meanwhile, an "offline" thing is one which is usable even without ever being "online". ("Offline" is an adjective.)
So it becomes:
> "Knowledge That Never [Becomes Inaccessible]"
> "Node for [Accessible-Without-Connection] Media, Archives, and Data"
But definitely confusing to put them right next to each other like that. You'd think a copyeditor would flag it or something.
>Knowledge That Never Goes Offline
Means
>Knowledge That Never becomes inaccessible to you
While the next offline means you can access it even if you don't have access to a wider network.
At least that's how I would read it.
https://github.com/ligi/SurvivalManual
https://internet-in-a-box.org/
https://wrolpi.org/
I used it on a long train trip. There was no internet due to drone attacks, and with Kiwix I could browse pre-downloaded wikis.
Whatever I think might be useful later, I capture through the web clipper extension. [0]
[0]: https://obsidian.md/clipper