These are based on the Gemma 3n architecture, so E2B only needs about 2 GB of memory for text-to-text generation:

https://ai.google.dev/gemma/docs/gemma-3n#parameters

You can think of the per-layer embeddings as a vector database, so in theory you can serve them directly from disk.
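A minimal sketch of that serve-from-disk idea, using a memory-mapped lookup table (the file name, vocabulary size, and embedding width here are made up for illustration, not Gemma 3n's actual values):

```python
import numpy as np

# Hypothetical per-layer embedding table: one file per layer,
# vocab_size rows of dim-wide float16 vectors.
vocab_size, dim = 32_000, 128

# Write a dummy table to disk (stand-in for a real PLE file).
table = np.memmap("ple_layer0.bin", dtype=np.float16, mode="w+",
                  shape=(vocab_size, dim))
table[42] = 1.0
table.flush()

# Serving: memory-map the file read-only. Only the pages backing
# the token ids actually looked up get paged in from disk, so the
# whole table never has to fit in RAM.
served = np.memmap("ple_layer0.bin", dtype=np.float16, mode="r",
                   shape=(vocab_size, dim))
token_ids = [42, 7]
embeddings = served[token_ids]  # fancy indexing copies just these rows
print(embeddings.shape)  # (2, 128)
```

This is essentially what "serving from disk" means in practice: the OS page cache acts as the hot tier, and cold embedding rows stay on disk until a token id touches them.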
