A couple of days ago, I wrote about how I like Portainer as a visual interface to manage Docker (see references). In one of the comments, Marco also suggested an option to automatically update all my containers.
As I explained in my post, the basic steps for this are:
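For reference, the manual process usually looks something like this sketch (the image and container names here are placeholders for your own):

```shell
# Pull the latest version of the image from the registry
docker pull myimage:latest

# Stop and remove the running container
docker stop mycontainer
docker rm mycontainer

# Recreate the container from the updated image, repeating the
# original run options (ports, volumes, environment, etc.)
docker run -d --name mycontainer -p 8080:80 myimage:latest
```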
Super easy, but tedious if you need to do this several times. Let’s meet Watchtower.
Watchtower has the ability to “watch” running Docker containers on either the same local or a remote host, check if there is a newer image in the remote registry, and then update the container with the new image using the same configuration options it was instantiated with.
I gave this a try, and I found that it was very easy: it’s just a single command.
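In my case it looked something like this (check the Watchtower docs for the current image name; `containrrr/watchtower` is the one I’m assuming here):

```shell
# Run Watchtower as a container; it needs access to the Docker
# socket to inspect, stop, and recreate the other containers.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```

From that point on, Watchtower polls the registry periodically and recreates any container whose image has a newer version.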
And there is a personal bonus here. I’ve been looking for an excuse to start playing around with Go, and this is it. I think I’ll use updating my containers as the next phase of my Go learning.
I was chatting about the new features and use cases supported by the brand-new (and still in preview) Cognitive Services Anomaly Detector, and I realized that we can use the service with local containers instead of the cloud service. However, there is something different about this service.
Why use containers?
Azure Cognitive Services allow developers to easily add cognitive features—such as object detection, vision recognition, and language understanding—into their applications without having direct AI or data science skills or knowledge. Containerization is an approach to software distribution in which an application or service is packaged so that it can be deployed in a container host with little or no modification.
First, I recommend reading the article [Getting started with Azure Cognitive Services in containers]; then I’ll highlight two main advantages of using containers:
You get better control of your internet usage; all the HTTP calls are performed inside your intranet.
You get better data governance.
Related to the second point, I’ll quote the sample scenario from the official launch post:
For example, let’s look at a typical hospital system that works with patients. After many years of taking care of patients, they have numerous doctor’s notes, intake records, or other files that they want to process and derive insights about key trends. Using Cognitive Services containers, they can process all of these files, index millions of documents and find commonalities, and improve the patient experience while keeping the data in-house. Another example would be a large manufacturing plant that has limited connectivity where they want to track assets on the edge using remote sensors and cameras, using AI to predict maintenance needs.
As a downside, you need to manage your own Docker instances, which may require some extra work. However, it’s likely that you are already doing this, so it’s “just more containers”.
How about Anomaly Detector Containers?
After this small introduction, when I started to use the Anomaly Detector service I found that, if you want to use Anomaly Detector with containers, you must first complete and submit the Anomaly Detector Container Request form to request access to the container.
The form requests information about you, your company, and the user scenario for which you’ll use the container. After you’ve submitted the form, the Azure Cognitive Services team reviews it to ensure that you meet the criteria for access to the private container registry.
Once you have submitted your request and it has been approved, you can start to use the Anomaly Detector service locally with containers. And just a reminder: even if your data requests are not going to hit an Azure HTTP endpoint, your containers still need to be able to connect to Azure.
The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn’t connect to Azure within the allowed time window, the container continues to run but doesn’t serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can’t connect to the billing endpoint within the 10 tries, the container stops running.
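As a sketch, the billing values are passed as environment variables when you start the container. The image name and the bracketed values below are placeholders; use the endpoint and key from your Azure resource and the image from the private registry you were approved for:

```shell
# Start the Anomaly Detector container with its billing configuration.
# {ENDPOINT_URI} and {API_KEY} come from your Azure Cognitive Services
# resource; {CONTAINER_IMAGE} is the image from the private registry.
docker run --rm -p 5000:5000 --memory 4g --cpus 1 \
  -e Eula=accept \
  -e Billing={ENDPOINT_URI} \
  -e ApiKey={API_KEY} \
  {CONTAINER_IMAGE}
```

With those values in place, the container can phone home to the billing endpoint on the 10-to-15-minute cadence described above while serving all detection requests locally.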
I’m going to be part of one of the most amazing developer events in NA: CodeMash (http://www.codemash.org/). It will be my first time in Ohio, and also it will be an amazing opportunity to network and have some face-to-face chats with some amazing people. (Just look at the Speaker List)
I was also lucky enough to host a session on Artificial Intelligence with Cognitive Services at the enterprise level. The latest announcements about containers and Cognitive Services are ready just in time for this!
How a PoC at home can scale to Enterprise Level using Custom Vision APIs
It all started with a DIY project to use Computer Vision for security cameras at home. A custom Machine Learning model is the core component used to analyze pictures to detect people, animals and more in a house environment. The AI processing is performed at the edge, in dedicated hardware and the collected information is stored in the cloud.
The same idea can be applied to several CCTV scenarios, like parking lots, train stations, malls and more. However, moving this into enterprise scale brings a set of challenges, which are going to be described and explained in this session.
Today I have the pleasure of speaking again with Juan Luis Guerrero (@JuanLuElGuerre), and this time he will share with us some of his experiences from a new adventure in Sevilla. This new adventure is worth sharing because he is using a lot of cool new technologies, and he moved past the “Hello World” stage a long time ago. So, JuanLu will share with us some of his experiences using, configuring, deploying, and testing .NET Core in different environments like Windows Core, Linux, and more.
We will also speak about other products and tools like Sonar, XUnit instead of MSTest, Test Manager, Linux and Docker, and more.
Today I’m talking again with Juan Luis Guerrero (@JuanLuElGuerre), this time so he can tell me how his new adventure in Sevilla is going. What makes this experience interesting is that he is using many “new” technologies and has his own particular point of view on them. In other words, he left the “Hello World” stage a long time ago and is experiencing first-hand the use, configuration, and deployment of solutions like .NET Core on Linux.
Along the way we cover other topics such as Sonar, XUnit instead of MSTest, Test Manager, Linux and Docker, why not Linux and Windows Core, and many more.
In this episode, Pedro walks us through the basics we need to know to start working with containers. These days it’s a real buzzword in developer circles, so this is a good introduction to:
What is a container?
Why are big players like Amazon, Azure, and Google betting heavily on them?
What is Docker?
How do you orchestrate a group of containers?
And many other topics we touched on during the podcast, such as container scaling, the truth about Git inside a container, and more. I hope you enjoy it; I have to say I learned a lot in just over an hour!