Ask HN: Is anyone doing anything cool with tiny language models?
For some reason, I’m slightly biased towards tiny, self-hosted, run-on-CPU LMs. As long as high accuracy isn’t a requirement, some of these use cases are pretty nifty.
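For anyone curious what "self-hosted, run-on-CPU" looks like in practice, here's a minimal sketch of calling a local model through Ollama's /api/generate endpoint (the ollama tag below is the only hint in the post; the model name "qwen2.5:0.5b" and the prompt are just placeholder examples, not anything the post mentions).

```python
# Minimal sketch: query a locally hosted tiny LM via Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and a small model
# such as "qwen2.5:0.5b" has already been pulled -- both are assumptions,
# not details from the original post.
import json
import urllib.request

def ask_local_lm(prompt: str, model: str = "qwen2.5:0.5b") -> str:
    """Send one prompt to a local model and return its full (non-streamed) reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Low-stakes task where a tiny model is usually good enough.
    print(ask_local_lm("Suggest three short titles for a note about local LLMs."))
```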
Tags: local ai llms ollama via:hn