AI-powered app (with open-source LLMs like Llama) built with Elixir, Phoenix, LiveView, and TogetherAI
TL;DR: two processes, one for the LiveView and another that handles the streaming HTTP call. The LiveView sends the prompt and its pid (process id) to the handler, which in turn spawns a separate process that makes the HTTP call and sends the chunks of LLM output to the LiveView as they arrive. When the last chunk arrives, the LiveView is notified that text generation has finished.
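A minimal sketch of that flow is shown below. The module names (`MyApp.TogetherClient`, `MyAppWeb.ChatLive`), the use of the Req library, the endpoint URL, the model name, and the `{:chunk, text}` / `:done` message shape are all assumptions for illustration, not necessarily what the article uses.

```elixir
defmodule MyApp.TogetherClient do
  @moduledoc "Spawns a process that streams an LLM completion back to the caller as messages."

  # Assumed Together AI completions endpoint; adjust to the API you target.
  @url "https://api.together.xyz/v1/completions"

  # The caller (a LiveView) passes its pid; each streamed chunk is sent
  # back as {:chunk, text}, followed by :done when the stream ends.
  def stream_completion(prompt, caller) when is_pid(caller) do
    Task.start(fn ->
      Req.post!(@url,
        json: %{model: "meta-llama/Llama-3-8b-chat-hf", prompt: prompt, stream: true},
        auth: {:bearer, System.fetch_env!("TOGETHER_API_KEY")},
        into: fn {:data, data}, {req, resp} ->
          # SSE parsing is simplified here: the raw chunk is forwarded as-is.
          send(caller, {:chunk, data})
          {:cont, {req, resp}}
        end
      )

      send(caller, :done)
    end)
  end
end

defmodule MyAppWeb.ChatLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, response: "", generating?: false)}
  end

  def render(assigns) do
    ~H"""
    <form phx-submit="generate">
      <input type="text" name="prompt" />
      <button disabled={@generating?}>Ask</button>
    </form>
    <pre><%= @response %></pre>
    """
  end

  # The form submit sends the prompt and this LiveView's pid to the handler.
  def handle_event("generate", %{"prompt" => prompt}, socket) do
    MyApp.TogetherClient.stream_completion(prompt, self())
    {:noreply, assign(socket, response: "", generating?: true)}
  end

  # Chunks arrive as plain process messages and are appended to the assign,
  # so the UI re-renders as the LLM produces text.
  def handle_info({:chunk, text}, socket) do
    {:noreply, update(socket, :response, &(&1 <> text))}
  end

  # Last chunk arrived: flip the flag so the UI knows generation finished.
  def handle_info(:done, socket) do
    {:noreply, assign(socket, generating?: false)}
  end
end
```

Because the streaming work runs in a separate process, the LiveView stays responsive and simply reacts to incoming messages in `handle_info/2`.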
https://dev.to/azyzz/ai-powered-app-with-llms-with-elixir-phoenix-liveview-and-togetherai-4ei1