Showcase / question: a board-proven offline language runtime on ESP32-C3, and whether some language capability may eventually move beyond low-bit dense inference #485
Alpha-Guardian started this conversation in Show and tell
Hi BitNet folks,
I wanted to share a small but unusual language-runtime project that may still be relevant to the broader question of how far language capability can be compressed and deployed, even though it does not follow the low-bit dense model path.
We built a public demo line called Engram and deployed it on a commodity ESP32-C3.
Current public numbers:

Host-side benchmark capability:
- LogiQA = 0.392523
- IFEval = 0.780037

Published board proof:
- LogiQA (642 items) = 249 / 642 = 0.3878504672897196
- host_full_match = 642 / 642 (sketched below)
- 1,380,771 bytes
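To make the host_full_match figure concrete, here is a minimal sketch of how such a number could be computed: replay each of the 642 items, capture the board's answer, and compare it byte-for-byte against the host-side reference answer. This is my own illustration, not code from the Engram repo; the function names, toy data, and exact-string-match assumption are all mine.

```c
/* Hedged sketch: computing a host_full_match figure like 642/642.
 * Assumes two aligned answer lists, one from the host run and one
 * captured from the board's output; contents here are toy stand-ins. */
#include <stdio.h>
#include <string.h>

static int count_full_matches(const char *host[], const char *board[], int n) {
    int matches = 0;
    for (int i = 0; i < n; i++)
        if (strcmp(host[i], board[i]) == 0) /* exact byte-for-byte match */
            matches++;
    return matches;
}

int main(void) {
    /* Toy stand-ins for the 642 per-item answers. */
    const char *host[]  = { "B", "D", "A" };
    const char *board[] = { "B", "D", "A" };
    int n = 3;
    printf("host_full_match = %d / %d\n",
           count_full_matches(host, board, n), n);
    return 0;
}
```

Under that reading, 642 / 642 would mean the board reproduced the host outputs exactly on every item, which is a reproducibility claim rather than an accuracy claim (accuracy is the separate 249 / 642 LogiQA figure).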
Important scope note: this is not presented as unrestricted, open-input native LLM generation on an MCU.
The board-side path is closer to a flash-resident, table-driven runtime. So this is not a standard low-bit dense inference stack; it is closer to a task-specialized language runtime whose behavior has been crystallized into a compact executable form under severe physical constraints.
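For readers unfamiliar with the pattern, here is a minimal sketch of what "flash-resident, table-driven" could mean on an ESP32-C3: a sorted const table mapping a normalized input key to a precomputed response, binary-searched at runtime. On the ESP32-C3, const data is served from memory-mapped flash, so the table costs essentially no RAM. Every name and table entry below is illustrative and assumed; this is not Engram's actual layout.

```c
/* Minimal sketch of a flash-resident, table-driven language runtime,
 * assuming the board maps a normalized input key to a precomputed
 * ("crystallized") response. Illustrative only. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *key;      /* normalized task input */
    const char *response; /* precomputed output */
} entry_t;

/* Sorted by key so it can be binary-searched; const places it in
 * flash/rodata rather than RAM on typical ESP32 toolchains. */
static const entry_t TABLE[] = {
    { "greet",  "Hello from the board." },
    { "status", "All subsystems nominal." },
};
static const int TABLE_LEN = sizeof TABLE / sizeof TABLE[0];

static const char *lookup(const char *key) {
    int lo = 0, hi = TABLE_LEN - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int cmp = strcmp(key, TABLE[mid].key);
        if (cmp == 0) return TABLE[mid].response;
        if (cmp < 0)  hi = mid - 1;
        else          lo = mid + 1;
    }
    return NULL; /* out-of-table input: refuse rather than guess */
}

int main(void) {
    const char *r = lookup("status");
    printf("%s\n", r ? r : "(no entry)");
    return 0;
}
```

A design like this trades open-ended generation for determinism: out-of-table inputs are refused outright, which is also why the exact-match board proof above is even possible.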
Repo:
https://github.com/Alpha-Guardian/Engram
I'm posting here because BitNet seems to represent one of the clearest public efforts to push language-model efficiency to a much more radical point while still staying within the dense-model family.
What I'd be curious about is how systems like this should be thought of relative to the low-bit dense inference path.
If this direction is relevant to your team, I’d be glad to compare notes.