#SemanticKernel – 📎Chat Service demo running Llama2 LLM locally in Ubuntu

Hi! Today’s post is a demo of how to interact with a local LLM using Semantic Kernel. In my previous post, I wrote about how to use LM Studio to host a local server. Today we will use Ollama in Ubuntu to host the LLM. Ollama is an open-source language model platform designed for… Continue reading #SemanticKernel – 📎Chat Service demo running Llama2 LLM locally in Ubuntu
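The full post is behind the link, but as a rough idea of what talking to an Ollama-hosted model looks like, here is a minimal sketch. It assumes Ollama's default local REST endpoint (`http://localhost:11434/api/generate`) and that the `llama2` model has been pulled with `ollama pull llama2`; the post itself uses Semantic Kernel's chat service in .NET, while this is a plain-HTTP Python illustration:

```python
import json
import urllib.request

# Ollama's default local endpoint (port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt: str, model: str = "llama2") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON reply instead of chunked tokens.
    return {"model": model, "prompt": prompt, "stream": False}


def ask(prompt: str, model: str = "llama2") -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server):
# print(ask("Why is the sky blue?"))
```

Semantic Kernel would wrap this same endpoint behind its chat-completion abstraction; the raw request above shows what travels over the wire either way.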

#SemanticKernel – 📎Chat Service demo running Phi-2 LLM locally with #LMStudio

Hi! It’s time to go back to AI and .NET, so today’s post is a small demo of how to run an LLM (large language model; Phi-2 in this demo) locally, and how to interact with the model using Semantic Kernel. LM Studio I’ve tested several products and libraries to run LLMs locally… Continue reading #SemanticKernel – 📎Chat Service demo running Phi-2 LLM locally with #LMStudio
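Again, the details are in the linked post, but the gist is that LM Studio's local server exposes an OpenAI-compatible chat-completions API (by default on port 1234). A minimal Python sketch of a request to it, assuming the server is running with Phi-2 loaded:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions protocol;
# port 1234 is its default (configurable in the Local Server tab).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(user_message: str) -> dict:
    """Build an OpenAI-style chat-completions body for the locally loaded model."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }


def chat(user_message: str) -> str:
    """POST the chat request to LM Studio and return the assistant's reply."""
    body = json.dumps(build_chat_request(user_message)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


# Usage (requires LM Studio's local server running with a model loaded):
# print(chat("Hello Phi-2!"))
```

Because the endpoint is OpenAI-compatible, Semantic Kernel's standard OpenAI chat-completion connector can be pointed at this local URL instead of the cloud service, which is what makes the local-model setup in the post work.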