⚠️ Disclaimer: This blog post was created with the help of AI tools — because we like to code and let our machine friends do some heavy lifting.
Hi!
In this post, we’ll explore how to use TransformersSharp to bring Hugging Face models directly into a C# console application — no Python required. We’ll generate text with a Qwen model, and even create images from text using the kandinsky-2.2 model.
And the best part? It all runs locally on CPU only (GPU support coming soon!), and the image generation pipeline takes ~10 lines of code. Let’s go! 💥
Spoiler: video here
🧪 Details
We start by showcasing two text generation scenarios using a local model:
🔤 Scenario 1: Text Generation with Qwen
In this first part, we use a small Qwen model to generate text from a prompt using the TransformersSharp library.
👉 Code Sample
// Docs: https://tonybaloney.github.io/TransformersSharp/
using TransformersSharp;
using TransformersSharp.Pipelines;

// Create a text generation pipeline from a local Qwen model
var pipeline = TextGenerationPipeline.FromModel("Qwen/Qwen2.5-0.5B", TorchDtype.BFloat16);
Console.WriteLine("✅ Pipeline created successfully");
Console.WriteLine();

var messages = new List<Dictionary<string, string>>
{
    new() {
        { "role", "user" },
        { "content", "Tell me a story about a brave knight." }
    }
};

var results = pipeline.Generate(messages, maxLength: 100, temperature: 0.7);
foreach (var result in results)
{
    foreach (var kvp in result)
    {
        Console.WriteLine($"{kvp.Key}: {kvp.Value}");
    }
}
🧠 Scenario 2: Text Generation using IChatClient
Next, we integrate with the .NET AI abstractions by using the IChatClient interface from Microsoft.Extensions.AI. This adds more structure and composability to your AI calls.
👉 Code Sample
using Microsoft.Extensions.AI;
using TransformersSharp;
using TransformersSharp.MEAI;

// Wrap the local pipeline in an IChatClient
var model = "Qwen/Qwen2.5-0.5B";
var client = TextGenerationPipelineChatClient.FromModel(model, TorchDtype.BFloat16, trustRemoteCode: true);

var response = await client.GetResponseAsync("tell me a story about kittens");
Console.WriteLine(response.Text);
🖼️ Scenario 3: Text-to-Image with kandinsky-2.2
Now for the fun part: let’s turn a piece of text into an image using the kandinsky-2.2 model. This model is available on Hugging Face, and using the text_to_image pipeline in TransformersSharp, you can call it in just a few lines.
👉 Code Sample
using TransformersSharp;
using TransformersSharp.Pipelines;

var model = "kandinsky-community/kandinsky-2-2-decoder";
// Run on CPU for now; GPU support is coming soon
var pipeline = TextToImagePipeline.FromModel(model: model, device: "cpu");

var result = pipeline.Generate(
    "A pixelated image of a beaver in Canada",
    numInferenceSteps: 30,
    guidanceScale: 7.5f,
    height: 256,  // 256x256 output
    width: 256);

File.WriteAllBytes("image.png", result.ImageBytes);
The output should be similar to this one:

In a previous post, we also covered how to do the same with an MCP server that acts as a bridge between the C# app and the underlying AI tooling, invoking tools (like the text-to-image pipeline) from an external service.
✅ Conclusion
TransformersSharp is a game changer — especially for .NET developers who want to leverage Hugging Face models without diving deep into Python or jumping between runtimes.
We’re running local models, invoking them from a C# console app, and even creating beautiful AI-generated images. All with a modern, .NET-native feel. ✨
GPU support is coming soon — but even on CPU, this is a great way to prototype locally with powerful AI tools.
🔗 Resources
- 💻 Sample Code: https://github.com/elbruno/TransformersSharp/
- 🧪 Text-to-Image Pipeline Docs: https://github.com/elbruno/TransformersSharp/blob/main/docs/pipelines/text_to_image.md
- 📚 TransformersSharp Library (by @tonybaloney): https://tonybaloney.github.io/TransformersSharp/
- 🎨 kandinsky-2.2 on Hugging Face: https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder
Happy coding!
Greetings
El Bruno
More posts in my blog ElBruno.com.
More info in https://beacons.ai/elbruno