Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU
26 points by abi | 2 comments on Hacker News.
I spent the last few days building a nicer ChatGPT-like interface for using Mistral 7B and Llama 3 fully within the browser (no dependencies or installs). I've used the WebLLM project by MLC AI for a while to interact with LLMs in the browser when handling sensitive data, but I found their UI quite lacking for serious use, so I built a much better interface around WebLLM. I've been using it as a therapist and coach, and it's wonderful knowing that my personal information never leaves my local machine.

It should work on desktop with Chrome or Edge. Other browsers are adding WebGPU support as well; see the GitHub repo for details on how to get it working in other browsers.

Note: after you send the first message, the model will be downloaded to your browser cache. That can take a while depending on the model and your internet connection. On subsequent page loads, the model should be loaded from the IndexedDB cache, so it should be much faster.

The project is open source (Apache 2.0) on GitHub. If you like it, I'd love contributions, particularly around making the first load faster.

GitHub: https://ift.tt/IyWuMax
Demo: https://secretllama.com
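Since the app depends on WebGPU, a page like this typically checks for support before attempting to load a model. A minimal sketch of such a check, assuming the standard `navigator.gpu` surface (the `webgpuSupported` helper name is hypothetical, not from the project):

```javascript
// Hypothetical feature check: WebGPU-capable browsers (e.g. Chrome, Edge)
// expose a `gpu` object on `navigator`; browsers without support leave it
// undefined. The `nav` parameter lets the check run outside a browser too.
function webgpuSupported(nav) {
  return typeof nav !== "undefined" && nav !== null && "gpu" in nav;
}

// In the page itself you would call it as:
//   if (!webgpuSupported(navigator)) { /* show an "unsupported browser" message */ }
```

This only detects the API's presence; a real page would still need to request an adapter and handle failure, since `navigator.gpu` can exist on hardware that cannot actually provide one.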
Friday, May 3, 2024