You can fine-tune local models using your own data. Unsloth has a guide at https://unsloth.ai/docs/get-started/fine-tuning-llms-guide.

I'm currently experimenting with Tobi's QMD (https://github.com/tobi/qmd) to see how it performs with local models only on my Obsidian Vault.

reply
Right, the technical know-how about fine-tuning isn't the problem here; the issue is getting sufficiently high-quality session logs without basically giving away my private data for free.

Today, I can use even the small models from OpenAI and Anthropic to get valuable sessions, but if I wanted to actually use those sessions for fine-tuning a local model, I'd have to start sending the data I want to fine-tune on to OpenAI and Anthropic. Considering it's private data I'm not willing to share, that's a hard no.

So then my options are basically to use stronger local models to get valuable sessions I can use for fine-tuning a smaller model. But if those "stronger local models" actually worked well enough in practice to give me those good sessions, I'd just use them directly; as it is, I'm unable to get anything good enough to serve as a basis for fine-tuning, even from the biggest ones I can run.

reply
Models are lossy, so fine-tuning can only take you so far with small models. What we need is reasonably capable local models with a huge context window, plus a method to make efficient use of tokens and cram as much info as possible into the context before the output quality degrades.
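The "cram as much in as fits" part is basically a packing problem. A toy sketch (the relevance scores and whitespace token counting here are stand-ins; a real setup would score with a retriever and count with the model's tokenizer):

```python
# Greedy context packing: rank candidate snippets by a relevance score
# and pack as many as fit under a token budget. Token cost is crudely
# approximated by whitespace splitting.

def pack_context(snippets, budget):
    """snippets: list of (score, text); returns texts packed greedily."""
    packed, used = [], 0
    for score, text in sorted(snippets, key=lambda s: -s[0]):
        cost = len(text.split())  # crude token estimate
        if used + cost <= budget:
            packed.append(text)
            used += cost
    return packed

snippets = [
    (0.9, "meeting notes from tuesday about the deploy"),
    (0.2, "random shopping list"),
    (0.7, "error log excerpt from the failing job"),
]
print(pack_context(snippets, budget=12))
```

Greedy packing isn't optimal (a high-score snippet can crowd out two medium ones that would fit together), but it is cheap and usually good enough for this.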
reply
This is exactly what we're working on. Is there any application in particular you're most interested in?

> I'm struggling collecting actual data I could use for fine-tuning myself,

Journalling or otherwise writing is by far the best way to do this IMO, but it doesn't take very much audio to do an accurate voice-clone. The hard thing about journalling is that it can actually be really biased away from the actual "distribution" of you, whether by being more aspirational or emotional, or less rigorous/precise with language.

What I'm starting to do is save as many of my prompts as possible, because I realized a lot of my professional writing was there and it was actually pretty valuable data (especially paired with outputs and knowledge of what went well and what didn't) for finetuning on my own workloads. The second is assembling/curating a collection of tools and products that I can drop into each new context with LLMs and also use for finetuning them on my own needs. Unlike "knowledge repositories", these both accurately model my actual needs and work, and don't really require me to do anything unnatural.
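The logging itself can be very simple. A minimal sketch of saving prompt/output pairs plus the went-well signal as JSONL; the field names here are illustrative (a common chat-style shape), not any particular pipeline's schema:

```python
import json

# Append one prompt/response pair per line (JSONL), the shape most
# fine-tuning pipelines accept. Field names here are illustrative.
def log_exchange(path, prompt, response, outcome):
    record = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ],
        "meta": {"outcome": outcome},  # e.g. "went well" / "needed rework"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_exchange("prompts.jsonl", "Summarize the Q3 report", "Here is a summary...", "went well")
```

The `meta.outcome` field is the part that makes this more than a chat log: it lets you later filter to only the exchanges worth training on.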

The other thing I'm about to start doing is "natural" in a certain sense but kinda weird: basically recording myself talking to my computer (verbalizing my thoughts more so they can be embedded alongside my actions, which may be much sparser from the computer's perspective) and making screen recordings of my sessions as I work. This is something I've had to look into building more specialized tools for, because it creates too much data to save all of it. But there are small models, transcoding libraries, and pipelines you can use for audio/temporal/visual segmentation and transcription to compress the data back down into tokens and normal-sized images.
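As a concrete example of that compression step, assuming ffmpeg as the transcoding tool (my choice here, not prescribed above): downsample audio to what speech models expect, and keep only scene-change frames from a screen recording instead of every frame.

```python
# Sketch of the "compress recordings back down" step. Builds ffmpeg
# command lines; paths and the scene threshold are placeholders.

def audio_cmd(src, dst):
    # 16 kHz mono is what most speech/transcription models expect
    return ["ffmpeg", "-i", src, "-ar", "16000", "-ac", "1", dst]

def keyframe_cmd(src, out_pattern, threshold=0.3):
    # keep a frame only when the scene changes enough, drop the rest
    return ["ffmpeg", "-i", src, "-vf", f"select='gt(scene,{threshold})'",
            "-vsync", "vfr", out_pattern]

# run with e.g.:
#   subprocess.run(audio_cmd("session.wav", "session_16k.wav"), check=True)
#   subprocess.run(keyframe_cmd("session.mp4", "frames/%04d.png"), check=True)
```

The downsampled audio then goes to a local transcription model, and the surviving keyframes are the "normal-sized images" that get embedded alongside the transcript.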

This is basically creating a semantic search engine of yourself as you work. Kinda weird, but IMO it's much weirder that your computer can actually talk back and learn about you now. With 96GB you can definitely do it, BTW. I successfully finetuned an audio workload on gemma 4 2b yesterday on a 16GB Mac mini; with 96GB you could do a lot.
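The retrieval side of that "search engine of yourself" has a simple shape. A toy version with bag-of-words cosine similarity standing in for a real embedding model, just to show the index/query loop end to end:

```python
import math
from collections import Counter

# Toy semantic search over your own transcripts. A real setup would use
# an embedding model; word-count vectors stand in here.

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs, k=1):
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

transcripts = [
    "debugging the flaky deploy pipeline again",
    "notes from the dentist appointment",
    "thinking out loud about the deploy rollback strategy",
]
print(search("deploy pipeline", transcripts))
# -> ['debugging the flaky deploy pipeline again']
```

Swap `embed` for a local embedding model and `docs` for your transcribed chunks and the rest of the loop stays the same.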

> letting LLMs write docs and add them to a "knowledge repository"

I think what you actually want is to send them to go looking for stuff for you, or to actively seek out "learning" about something for their own role/purposes. That way they can embed the useful information and better retrieve it when they need it, or produce traces grounded in positive signals (e.g. having access to this piece of information or tool, or applying this technique or pattern, measurably improves performance at something in-distribution to whatever you have them working on) that they can use in fine-tuning themselves.

reply
I think maybe you're misunderstanding the issue here. I have loads of data, but I'm unwilling to send it to 3rd parties, so that leaves me with gathering/generating the training data locally, but none of the models are good/strong enough for that today.

I'd love to "send them to go looking for stuff for you", but local models aren't great at this today, even with beefy hardware, and since that's about my only option, that leaves me unable to get sessions to use for the fine-tuning in the first place.

reply
Right, that's exactly the situation I'm in too and "send them to go looking for stuff for you" without it going off the rails is the problem we've been working on.

Basically you need a squad of specialized models to do this in a mostly-structured way that ends up looking kind of like a crawling or scraping/search operation. I can share a stack of about 5-6 that are working for us directly if you want. I want to keep the exact stack on the DL for now, but you can check my company's recent GitHub activity to get an idea of it. It's basically a "browser agent" where gemma or qwen guide the general navigation/summarization but mostly focus on information extraction and normalization.
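The "mostly-structured" part is what keeps it on the rails: the models only fill in the extract/normalize step inside an otherwise ordinary crawl loop. A rough sketch with the model call stubbed out (the function names and the crawl shape are my illustration, not the actual stack):

```python
from collections import deque
from urllib.parse import urljoin

def fetch(url):
    # placeholder network fetch; a real version drives a browser
    return f"<html>{url}</html>"

def extract(html):
    # placeholder for the small-model step: return normalized fields
    # plus any links worth following
    return {"summary": html[:40]}, []

def crawl(seeds, max_pages=10):
    frontier, seen, records = deque(seeds), set(), []
    while frontier and len(records) < max_pages:
        url = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        fields, links = extract(fetch(url))
        records.append({"url": url, **fields})
        frontier.extend(urljoin(url, link) for link in links)
    return records

print(crawl(["https://example.com"]))
```

The frontier queue, dedup set, and page budget are fixed code; the model never decides the control flow, only what to keep from each page and which links are worth queueing.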

The other thing I've done, which obviously not everybody is going to want to do, is create emails and browser profiles for the browser agents (since they basically work when I'm not on the computer, but need an identity to navigate the web) and run them on devices that don't have the keys to the kingdom. I also give them my phone number and their own (via an endpoint they can only call me from). That way if they run into something they have a way to escalate it, and I can do limited steering out of the loop. Obviously this is way more work than is reasonable for most people right now, though, so I'm hoping to show people a proper batteries-included setup for it soon.

Edit: Based on your other comment, I think maybe what you're really looking for most is "personal traces". Right now that's something we're working on with https://github.com/accretional/chromerpc (which uses the lower-level Chrome DevTools Protocol rather than Puppeteer to basically fully automate web navigation, either through an LLM or prescriptive workflows). It would be very simple to set up automation to take a screenshot and save it locally every Xm or in response to certain events, and generate traces for yourself that way, if you want. That alone provides a pretty strong base for a personal dataset.
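A hedged sketch of that lazy baseline, using stock headless Chrome's `--screenshot` flag rather than chromerpc itself (whose API I haven't reproduced here); paths and the interval are placeholders:

```python
import datetime
import subprocess

# Periodically capture a page and save it under a timestamped name,
# as a minimal "personal traces" collector.

def trace_path(now, prefix="traces/shot"):
    return f"{prefix}-{now.strftime('%Y%m%d-%H%M%S')}.png"

def capture(url, path, chrome="google-chrome"):
    # headless Chrome writes a PNG of the rendered page and exits
    subprocess.run([chrome, "--headless", "--disable-gpu",
                    f"--screenshot={path}", url], check=True)

# e.g. every 5 minutes:
# while True:
#     capture("https://example.com", trace_path(datetime.datetime.now()))
#     time.sleep(300)
```

The event-driven version is the same idea with `capture` hooked to whatever signal you care about instead of a timer.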

reply
> that ends up looking kind of like a crawling or scraping/search operation

Sure, but what I'm talking about is that the current SOTA models are terrible even for specialized small use cases like what you describe, so you can't just throw a local model at that task and get useful sessions out of it that you can use for fine-tuning. If you want distilled data or similar, you (obviously) need to use a better model, but currently there is none that provides the privacy guarantees I need, as described earlier.

All of those things come once you have something suitable for the individual pieces, but I'm trying to say that none of the current local models come close to solving the individual pieces, so all that other stuff is just a distraction until you have that in place.

reply
Understood. I guess I'm saying "soon", but definitely agreed it's not "now" yet. I will say though, with 96GB, in a couple of months you're going to be able to hold tons of Gemma 4 LoRA "specialists" in memory at the same time, and I really think it will feel like a whole new world once these are all getting trained and shared and adapted en masse. And also, you could set up personal traces now if you want. Nobody can make you, but in its laziest form it can be literally just taking screenshots of your screen periodically as you work, and that'll have applications soon.
reply
> And also, you could set up personal traces now if you want. Nobody can make you, but in its laziest form it can be literally just

But again, you're missing my point :) I cannot, since the models I could generate useful traces from are run by platforms I'm not willing to hand over very private data to, while from the local models I could use, I cannot get useful traces.

And I'm not holding out hope for agent orchestration; people haven't even figured out how to reliably get high-quality results from a single agent yet, even less so with a fleet of them. Better to realistically temper your expectations a bit :)

reply
Couldn't you create synthetic data based on your entries using local models? Or would that defeat the purpose of fine-tuning it?
reply
Yeah, I suppose, but how do I get sufficiently high quality synthetic data without sending the original data to OpenAI/Anthropic, or by using local models when none of them seem strong enough to be able to generate that "sufficiently high quality synthetic data" in the first place?
reply
You could do something like rent GPU time yourself and use it to run a higher-quality local model (e.g. one of the Chinese "close to frontier" ones). Not guaranteed to preserve privacy of course, but it at least avoids directly sending the data to OpenAI/Anthropic.
reply