
⚠️ This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the 🤖 in C# are 100% mine.
Hi!
Quick 5-minute video recap here, so you can skip the code and the blog post 😅
🔥 Introduction
In my last video, I played around with TransformersSharp, a super cool .NET library that lets us use Hugging Face transformer models… locally.
Well — things escalated quickly!
In today’s post (and video), we go one step further. I figured out how to use my local GPU to generate images with TransformersSharp. And not just that — I ran a side-by-side benchmark between CPU and GPU (including an NVIDIA Tesla T4 GPU 🚀).
Let’s dive in.
⚙️ Details
The setup includes:
- The kandinsky-2.2 decoder model from Hugging Face
- The TransformersSharp text-to-image pipeline
- Runs in three environments:
- 🖥️ CPU (my dev machine)
- 💻 Local GPU (with CUDA support)
- 🧪 Cloud GPU (NVIDIA Tesla T4)
We run a process that generates 20+ images from prompts like:
"cyberpunk octopus hacking a satellite"
"robot chef preparing ramen in Tokyo"
The generated images are saved locally, and for each run we measure the following (a minimal benchmark sketch follows the list):
- Average generation time
- Memory usage
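Here is a minimal sketch of that benchmark loop, assuming the same TextToImagePipeline API shown in the sample code below. The prompt list, the output file names, and the memory readout are my own illustrative placeholders, not code from the repo:

// Minimal benchmark sketch (top-level program, .NET implicit usings assumed,
// plus the TransformersSharp using directives from the repo sample).
using System.Diagnostics;

var prompts = new[]
{
    "cyberpunk octopus hacking a satellite",
    "robot chef preparing ramen in Tokyo"
};

var pipeline = TextToImagePipeline.FromModel(
    model: "kandinsky-community/kandinsky-2-2-decoder",
    torchDtype: TorchDtype.Float16,
    device: "cuda");   // or "cpu"

var timings = new List<double>();

for (var i = 0; i < prompts.Length; i++)
{
    // Time each image generation individually
    var sw = Stopwatch.StartNew();
    var result = pipeline.Generate(
        prompts[i],
        numInferenceSteps: 30,
        guidanceScale: 0.75f,
        height: 256,
        width: 256);
    sw.Stop();

    timings.Add(sw.Elapsed.TotalSeconds);
    File.WriteAllBytes($"image_{i:D2}.png", result.ImageBytes);
}

Console.WriteLine($"Average generation time: {timings.Average():F2} s");
Console.WriteLine($"Managed memory after run: {GC.GetTotalMemory(true) / (1024.0 * 1024.0):F1} MB");

This only reports managed memory; for the full picture (native PyTorch allocations, GPU VRAM) you would watch the process in Task Manager or nvidia-smi.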
📦 Sample Code
// Load the Kandinsky 2.2 decoder as a text-to-image pipeline on the GPU
var model = "kandinsky-community/kandinsky-2-2-decoder";
var pipeline = TextToImagePipeline.FromModel(
    model: model,
    torchDtype: TorchDtype.Float16,
    device: "cuda");

// Generate a 256x256 image from the prompt
var result = pipeline.Generate(
    "A pixelated image of a squirrel in Canada",
    numInferenceSteps: 30,
    guidanceScale: 0.75f,
    height: 256,
    width: 256);

// GetImageName is a small helper in the sample repo that builds a file name from the device type
var imageName = GetImageName(pipeline.DeviceType);
File.WriteAllBytes(imageName, result.ImageBytes);
Console.WriteLine($"Image saved to: {imageName}");
You can configure the pipeline to use CUDA if your machine supports it, or run purely on CPU. Just make sure the underlying PyTorch/CUDA environment is set up accordingly.
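For example, here is a hedged sketch of switching between CPU and GPU at startup. It assumes the pipeline accepts the "cpu" and "cuda" device strings used above; the USE_GPU environment variable and TorchDtype.Float32 are my own assumptions, not confirmed API:

// Pick the device at runtime; USE_GPU is an illustrative environment variable.
var useGpu = Environment.GetEnvironmentVariable("USE_GPU") == "1";

var pipeline = TextToImagePipeline.FromModel(
    model: "kandinsky-community/kandinsky-2-2-decoder",
    // Float16 mainly pays off on GPU; Float32 on CPU assumes the TorchDtype enum exposes it
    torchDtype: useGpu ? TorchDtype.Float16 : TorchDtype.Float32,
    device: useGpu ? "cuda" : "cpu");

Console.WriteLine($"Running on: {pipeline.DeviceType}");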
More advanced code examples and GPU execution setup are in the repo below ⬇️
🧠 Conclusion
TransformersSharp continues to amaze — and the ability to run Hugging Face models locally (and quickly!) from .NET is a game changer for devs working on AI-infused apps.
Adding GPU support opens up new possibilities — you don’t need a supercluster to build fast, fun, and intelligent image-generation apps in C#. Just bring your .NET skills and let the rocket 🚀 (or snail 🐌) do the rest.
🔗 Resources
- 🧪 Sample Code Repository: https://github.com/elbruno/TransformersSharp/
- 📄 Text-to-Image Pipeline Docs: https://github.com/elbruno/TransformersSharp/blob/main/docs/pipelines/text_to_image.md
- 🧠 TransformersSharp Main Page: https://tonybaloney.github.io/TransformersSharp/
- 🎨 Kandinsky-2.2 Model (Hugging Face): https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder
Happy coding!
Greetings
El Bruno
More posts in my blog ElBruno.com.
More info at https://beacons.ai/elbruno