
KrisWilson/LocalSensei


Local Sensei

A background daemon that takes screenshots on your command, sends them to a local VLM (hosted on your NPU/CPU/GPU), has the VLM describe them, then sends that reflection to a local coding LLM (hosted on your GPU).

Generates working code and automatically copies it to your clipboard <3

Showcase

Requirements:

  • Linux with X11/Wayland
  • xbindkeys or other key-to-command mapping software
  • Ollama
  • At least 15 GB of free space (depends on the selected LLM model)
  • Recommended with Tilda or Yakuake (a drop-down terminal)
  • Python 3 with the libraries in requirements.txt

Example usage, GPU (12 GB VRAM):

Bind Client.py to a key with xbindkeys to trigger the action.
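A minimal ~/.xbindkeysrc entry might look like this; the key combination and install path are assumptions, adjust both to your setup:

```
# ~/.xbindkeysrc — run LocalSensei's Client.py on Ctrl+Alt+S
"python3 /path/to/LocalSensei/Client.py"
    control+alt + s
```

After editing the file, restart xbindkeys so it picks up the new binding.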

  • Tried an NPU+GPU combo, but:
  • [OpenVINO NPU/CPU] Gemma 3 4B is unusable for recognizing code from an IDE
  • [OpenVINO NPU/CPU] the same goes for InternVL and Phi-3.5
  • The best option is to put only GLM-OCR into VRAM; it won't run on the CPU/NPU (it isn't compiled for OpenVINO on Intel yet)

For LLM GPU:

For VLM GPU:
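Whatever models you settle on, fetching them for Ollama follows the same pattern. The model names below are placeholders, not the project's actual picks:

```shell
# Hedged example — substitute the LLM/VLM models you actually chose.
ollama pull qwen2.5-coder:7b   # coding LLM, runs on the GPU
ollama pull llava:7b           # VLM that describes the screenshot
ollama list                    # confirm both models are available locally
```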
