🦙 Harnessing Local AI: Unleashing the Power of .NET Smart Components and Llama2

Hi! .NET Smart Components are an amazing example of how to use AI to enhance the user experience in something as popular as a combobox. .NET Smart Components also support the use of local LLMs, so in this post I'll show how to configure these components to use a local Llama 2 inference server. The…… Continue reading 🦙 Harnessing Local AI: Unleashing the Power of .NET Smart Components and Llama2

📎 Extending #SemanticKernel using OllamaSharp for chat and text completion

Hi! In previous posts I shared how to chat with a Llama 2 model hosted locally with Ollama. (view post). And then I also found OllamaSharp (nuget package and repo). OllamaSharp is a .NET binding for the Ollama API, making it easy to interact with Ollama using your favorite .NET languages. So, I…… Continue reading 📎 Extending #SemanticKernel using OllamaSharp for chat and text completion
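To give a feel for what OllamaSharp wraps, here is a rough sketch of the underlying REST call to Ollama's documented `/api/chat` endpoint. The endpoint path, port, and payload fields follow Ollama's API; the `build_chat_request` helper name is mine, used purely for illustration:

```python
import json

# Ollama's default local endpoint for chat completions
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, messages: list) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": messages,   # e.g. [{"role": "user", "content": "..."}]
        "stream": False,        # ask for one JSON reply instead of a stream
    }

body = build_chat_request("llama2", [{"role": "user", "content": "Hello!"}])
print(json.dumps(body))
# POST this body to OLLAMA_URL with any HTTP client; the reply's
# "message" field carries the assistant's response.
```

Libraries like OllamaSharp save you from hand-rolling this payload and response handling in .NET code.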

#SemanticKernel – 📎Chat Service demo running Llama2 LLM locally in Ubuntu

Hi! Today's post is a demo on how to interact with a local LLM using Semantic Kernel. In my previous post, I wrote about how to use LM Studio to host a local server. Today we will use Ollama in Ubuntu to host the LLM. Ollama is an open-source language model platform designed for…… Continue reading #SemanticKernel – 📎Chat Service demo running Llama2 LLM locally in Ubuntu
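When Ollama hosts a model locally, its `/api/generate` endpoint streams the answer as NDJSON: one JSON object per line, each carrying a `response` fragment, with `done` set to true on the final line. This is a small sketch of consuming such a stream; the `sample` lines are simulated data shaped like Ollama's output, not a real server response:

```python
import json

def collect_stream(ndjson_lines):
    """Concatenate the 'response' fragments from Ollama's streamed
    /api/generate output (one JSON object per line, last has done=True)."""
    text = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(text)

# Simulated stream, shaped like Ollama's NDJSON output:
sample = [
    '{"model":"llama2","response":"Hello","done":false}',
    '{"model":"llama2","response":" world","done":false}',
    '{"model":"llama2","response":"","done":true}',
]
print(collect_stream(sample))  # -> Hello world
```

Semantic Kernel connectors do this accumulation for you, which is what makes the chat service demo so compact.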