Posts

Showing posts from June, 2025

Access host machine service from within docker container

To access a service running on a local port of the host machine from within a Docker container, use the special hostname host.docker.internal. This solution works on the following installation types of Docker:

- Docker Engine on Linux
- Docker Desktop on Linux, Mac and Windows
- Colima on Mac

Example using docker:

docker run --add-host=host.docker.internal:host-gateway --rm alpine/curl -fsSL http://host.docker.internal:8080/health-check

Example using docker compose:

networks:
  main:
    name: my-docker-compose-network
services:
  curl:
    container_name: curl
    image: "alpine/curl"
    extra_hosts:
      - host.docker.internal:host-gateway
    command: curl -fsSL http://host.docker.internal:8080/health-check
    networks:
      main:
        aliases:
          - curl
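If you have no service to test against yet, a quick way to smoke-test the setup is to start a throwaway listener on the host and query it from a container. The sketch below assumes python3 is available on the host; port 8080 only matches the examples above, and the built-in http.server is a stand-in for a real service (it serves a directory listing at /, not a /health-check endpoint).

# Host side: start a throwaway HTTP server on port 8080 (assumes python3 is installed).
python3 -m http.server 8080

# In another terminal: reach the host service from a container via the host gateway.
# Querying / here because the throwaway server has no /health-check endpoint.
docker run --add-host=host.docker.internal:host-gateway --rm \
  alpine/curl -fsSL http://host.docker.internal:8080/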

Download and run Hugging Face AI models in Ollama

This article is a step-by-step guide to downloading an AI model from the Hugging Face repository and running it with Ollama. Ollama can create a model locally from either the GGUF format or the SafeTensors format, and this article covers both. Creating a model from GGUF is highly reliable because the GGUF file packages the model data together with a template file suitable for the AI model.

Running a model using GGUF:

Example repository with a GGUF format model: https://huggingface.co/sergbese/gemma-3-isv-translator-v5-gguf-bf16

Create a download directory:

mkdir gemma-3-isv-translator-v5-gguf-bf16

Download the file into the directory:

cd gemma-3-isv-translator-v5-gguf-bf16
wget "https://huggingface.co/sergbese/gemma-3-isv-translator-v5-gguf-bf16/resolve/main/gemma-3-finetune-2.BF16.gguf?download=true" -O ./gemma-3-finetune-2.BF16.gguf

Create an Ollama model:

echo 'FROM ./gemma-3-finetune-2.BF16.gguf' > Modelfile
ollama create gemma-3-isv-translator-v5-gguf-bf16:latest

Run and verify the ...
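A minimal sketch of the run-and-verify step, assuming the model name created above and a standard Ollama installation; the prompt text is only an example:

# List local models to confirm the create step succeeded.
ollama list

# Start an interactive session with the new model (Ctrl+D to exit).
ollama run gemma-3-isv-translator-v5-gguf-bf16:latest

# Or send a single prompt non-interactively.
ollama run gemma-3-isv-translator-v5-gguf-bf16:latest "Hello"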