Posts

Colima - Drop-in replacement for Docker Desktop on Mac and Linux

Colima (Container on Linux Machines) is an open source alternative and drop-in replacement for Docker Desktop on Mac and Linux: https://github.com/abiosoft/colima It is simple to set up and run containers without the need for sudo or root access. Images can be pulled from both Docker Hub (https://hub.docker.com) and Amazon's public registry (https://gallery.ecr.aws). Amazon's registry is more permissive with pull rates for both clients and non-clients of AWS. This article shows the setup and the basics of running containers on Mac. Installation commands for Linux can vary based on the distribution type, hence please refer to the official documentation for up-to-date steps at https://github.com/abiosoft/colima/blob/main/docs/INSTALL.md

Installation on Mac:

brew install docker
brew install docker-buildx
brew install docker-compose
brew install jq
brew install colima

Start colima in the foreground with default options:

colima start -f --network-address

Start colima with explicit options 4
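As a sketch of what starting colima with explicit options can look like: the flags below (--cpu, --memory, --disk) are standard colima flags, but the resource values are illustrative assumptions for this example, not recommendations from the post.

```shell
# Illustrative resource values -- adjust for your machine.
CPUS=4
MEMORY_GB=8
DISK_GB=60

CMD="colima start --cpu ${CPUS} --memory ${MEMORY_GB} --disk ${DISK_GB} --network-address"
echo "${CMD}"

# Only run the command when colima is actually installed:
command -v colima >/dev/null 2>&1 && ${CMD} || true
```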

git submodules

git submodules allow you to link one or more related git projects, so that you can build and execute commands in the dependent projects in a seamless way, both on development machines and CI servers. This post explains some basic commands required for git repositories containing git submodules.

Add a new submodule:

git submodule add <git_repo_url> <submodule_path>
git add .
git commit -m "Added Submodule"
git push

Remove an existing submodule:

git submodule deinit <submodule_path>
git rm <submodule_path>
rm -rf .git/modules/<submodule_path>
git add .
git commit -m "Removed Submodule"
git push

Pull changes to submodules in an already cloned repository:

git pull --recurse-submodules
git submodule update --init --recursive

Pull changes to submodules in parallel: The -j parameter specifies the maximum number of parallel jobs used to pull the submodules. In the example below, -j8 means up to 8 submodules will be pulled in parallel. The will spe
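The submodule add and parallel pull commands can be exercised end to end with purely local repositories. The sketch below uses made-up temp paths and names; the protocol.file.allow setting is needed on newer git versions for local-path submodules.

```shell
set -e
TMP=$(mktemp -d)

# Create a "library" repo to be used as a submodule.
git init -q "$TMP/lib"
git -C "$TMP/lib" -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"

# Create the parent repo and add the library as a submodule.
git init -q "$TMP/app"
git -C "$TMP/app" -c protocol.file.allow=always submodule add "$TMP/lib" vendor/lib
git -C "$TMP/app" -c user.email=demo@example.com -c user.name=demo \
  commit -q -m "Added Submodule"

# Clone the parent, fetching submodules with up to 8 parallel jobs.
git -c protocol.file.allow=always clone -q --recurse-submodules -j8 \
  "$TMP/app" "$TMP/clone"

ls "$TMP/clone/vendor/lib"
```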

Install Docker Engine and Compose on Linux machines (for developers)

Install docker:

Know the Linux architecture type using: uname -m
mkdir -p $HOME/tools/docker_engine
Download the latest static binary from https://download.docker.com/linux/static/stable/ to $HOME/tools/docker_engine
sudo groupadd docker
sudo gpasswd -a ${USER} docker
newgrp docker
tar -xvzf $HOME/tools/docker_engine/docker-24.0.5.tgz -C $HOME/tools/docker_engine
chmod -R a+rwx $HOME/tools/docker_engine/*
ls -1 $HOME/tools/docker_engine/docker | (while read line; do sudo ln -sfn $HOME/tools/docker_engine/docker/$line /usr/bin/$line; done;)

Start the docker service: sudo dockerd
Start the docker service in the background: sudo nohup dockerd > /dev/null 2>&1 & exit
Kill the docker service running in the background: sudo ps -Aef | grep dockerd | grep -v grep | tr -s ' ' | cut -d' ' -f2 | xargs sudo kill -9
Clean up dockerd logs and data: sudo rm -rf /var/lib/docker

Install docker compose:

mkdir -p $HOME/tools/docker_compose
Download the latest version from https://gith
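The kill one-liner works by extracting the PID from the second column of ps -Aef output. A small sketch of just that field extraction (the sample line is fabricated for the demo), plus a shorter alternative on systems where pgrep is available:

```shell
# Fabricated sample line in "ps -Aef" format: UID PID PPID C STIME TTY TIME CMD
LINE="root      1234     1  0 10:00 ?        00:00:01 dockerd"

# Squeeze repeated spaces, then take the 2nd field (the PID).
PID=$(echo "$LINE" | tr -s ' ' | cut -d' ' -f2)
echo "$PID"

# Shorter alternative where procps is installed (assumption):
#   pgrep -x dockerd | xargs -r sudo kill
```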

Kibana KQL cheat sheet

Search logs containing the exact text: message: "my search term"
Search logs containing wildcard text: message: *search*
Search logs with AND, OR and NOT: ((message: "text 1" OR "text 2") AND (message: *text3*) AND NOT (message: "text 4"))
Search logs where the message field exists: message: *
Search logs where the message field does not exist: NOT message: *
Search logs by level: (level: "INFO" OR level: "WARN" OR level: "ERROR" OR level: "DEBUG" OR level: "TRACE")
Search logs by kubernetes pod name: kubernetes.pod.name: example-service-name-*
Search logs by kubernetes container name: kubernetes.container.name: "example-container-name"

Common fields: @timestamp, message, level, kubernetes.pod.name, kubernetes.container.name

Running unit or integration tests in a specific order with junit platform suite

The intention of this example is not to encourage running unit or integration tests in a specific order in the build. Instead, this is a powerful and handy way to reproduce a random build failure due to tests clashing with each other or a lack of isolation between tests. The fundamental idea is to temporarily create a junit platform suite and run the tests in the same order which caused the build failure, so that the failure can be reproduced in a consistent and deterministic way. Once the failure can be reliably reproduced, the clashing test can be easily narrowed down. In order to create this sample suite, we need the following test dependency:

group id: org.junit.platform
artifact id: junit-platform-suite
version: <<latest>>

Create a sample JUnit suite and run it using your favourite IDE.

package com.harishkannarao.test;

import org.junit.platform.suite.api.SelectClasses;
import org.junit.platform.suite.api.Suite;

@Suite
// order of the test classes is very important. E.g ExampleTest runs firs
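Once the suite exists it can also be run outside the IDE. A sketch, assuming a Maven project with the surefire plugin on the classpath and a hypothetical suite class named ExampleSuite (both names are assumptions for this example):

```shell
# "ExampleSuite" is a hypothetical suite class name for this sketch.
SUITE="ExampleSuite"

CMD="mvn test -Dtest=${SUITE}"
echo "${CMD}"

# Only execute when Maven is available:
command -v mvn >/dev/null 2>&1 && ${CMD} || true
```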

Docker save and load images offline

On some rare occasions, it was beneficial for me to save (export) a docker image as a local file and load (import) the local file as a docker image. Hence, I am posting the commands to save, load and tag images to enable offline working with docker. In order to save the image, first we need to pull it from a docker registry (Docker Hub by default, or any other registry).

Pull docker image: docker pull postgres:10.6
Save (export) docker image: docker save -o /tmp/postgres-10.6.tar postgres:10.6
Load (import) docker image: docker load -i /tmp/postgres-10.6.tar
See the loaded image: docker images postgres:10.6
Create a new tag from the loaded image: docker tag postgres:10.6 org.example/postgres:10.6
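The save/load steps can be stitched into a small helper that derives the tar file name from the image tag. A sketch; mapping ':' and '/' to '-' with sed is my own convention for this example, not from the post.

```shell
IMAGE="postgres:10.6"

# Turn the image tag into a safe file name: ':' and '/' become '-'.
FILE_NAME="$(echo "${IMAGE}" | sed 's![:/]!-!g').tar"
TAR="/tmp/${FILE_NAME}"
echo "${TAR}"

# Run the actual save/load round trip only when a docker daemon is reachable:
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker pull "${IMAGE}"
  docker save -o "${TAR}" "${IMAGE}"
  docker load -i "${TAR}"
fi
```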

minikube as a drop-in replacement for Docker Desktop (Mac and Windows)

For a very long time, development-time dependencies like postgres (database) and kafka (message broker) were run using docker desktop (Docker for Mac / Docker for Windows). However, due to recent changes to licensing, docker desktop is no longer free for all organisations. As a developer, if you are in an organisation that can't provide Docker Desktop, then minikube can be used as a drop-in replacement for Docker Engine, so that your dependency scripts using the docker cli can continue to work. The following explains the bare minimal commands needed to run minikube, which will work if you are running dependencies using the docker cli via:

Shell scripts
Maven docker plugin
Gradle docker plugin
TestContainers
IDE (Intellij / Eclipse / VS Code) plugin

Install minikube: https://minikube.sigs.k8s.io/docs/start/
Start minikube: minikube start
Stop minikube: minikube stop
Delete minikube cluster: minikube delete
Delete all minikube clusters: minikube delete --all
See minikube dashboard: minikube dash
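The step that lets docker-cli-based scripts keep working unchanged is pointing the CLI at minikube's daemon for the current shell. A sketch (minikube docker-env is a real minikube subcommand; "minikube" is its default profile name):

```shell
# Point this shell's docker CLI at minikube's docker daemon, if minikube exists.
command -v minikube >/dev/null 2>&1 && eval "$(minikube -p minikube docker-env)" || true

# Show which daemon the CLI will now talk to (prints "unset" when not set):
echo "${DOCKER_HOST:-unset}"
```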