Monday 6 August 2018

Docker issue with "no such file or directory"

Problem


Trying to run a Docker container fails with a mysterious error:
container_linux.go:247: starting container process caused "exec: \"./twit_test\": stat ./twit_test: no such file or directory"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"./twit_test\": stat ./twit_test: no such file or directory".
ERRO[0000] error getting events from daemon: context canceled
Makefile:18: recipe for target 'run' failed
make: *** [run] Error 127
The container could not be started on a remote server, while on the local machine everything worked fine.

Background


Command:

docker run \
    --name=twit-test \
    --rm \
    -p 5000:7077 \
    -v /home/tamara/data:/app \
    -e TWITTER_CONS_KEY=${TWITTER_CONS_KEY} \
    -e TWITTER_CONS_SECRET=${TWITTER_CONS_SECRET} \
    quay.io/tamarakaufler/twit-test:v1alpha1


Dockerfile is located in ~/programming/go/src/..../go-twit/examples

After some testing and poking around, I located the offending line, which was the volume binding. Bind-mounting (a funny word) the host directory /home/tamara/data over the container's /app shadows whatever the image put into /app, including the twit_test binary. On my local machine the host directory happened to contain the binary, so everything worked; on the remote server it did not, hence the "no such file or directory" error. Maybe this is a well known fact but it was not to me. Wiser now.
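One possible fix (a sketch, assuming the application can read its data from a subdirectory such as /app/data) is to mount the host directory somewhere that does not shadow the binary:

```shell
# Mount the host data under a subdirectory instead of over /app itself,
# so the image's /app (and the twit_test binary in it) stays visible.
docker run \
    --name=twit-test \
    --rm \
    -p 5000:7077 \
    -v /home/tamara/data:/app/data \
    quay.io/tamarakaufler/twit-test:v1alpha1
```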

Wednesday 1 August 2018

Eclectic collection of helpful Docker and Kubernetes commands

Docker


docker build -f gcd-service/Dockerfile -t quay.io/tamarakaufler/gcd-service:$GCD_IMAGE_TAG .

docker login quay.io -u tamarakaufler -p $QUAY_PASS

docker push quay.io/tamarakaufler/gcd-service:$GCD_IMAGE_TAG

docker ps | grep "postgres" | awk '{print $1}' | xargs docker stop

docker ps | grep "-service" | awk '{print $1}' | xargs docker rm -f

docker run --name=decrypt-incremental --rm -v $PWD:/data quay.io/tamarakaufler/decrypt-incremental:v1alpha1 -f=/data/test4.txt

Kubernetes


kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar

kubectl delete pods $(kubectl get pods | grep 'fe-deployment' | awk '{print $1}')

kubectl port-forward $(kubectl get pods --selector=app=kube-prometheus-grafana -n monitoring --output=jsonpath="{.items..metadata.name}") -n monitoring 3000

kubectl get nodes -o json | grep name
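Grepping the JSON works, but kubectl can extract fields directly with a jsonpath query; a sketch listing just the node names:

```shell
# Print only the node names, without parsing the raw JSON output.
kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
```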


Tuesday 29 May 2018

Bootstrapping a database for use with containerized applications (PostgreSQL, MongoDB etc)

PROBLEM

You are developing a dockerized microservice that requires access to a dockerized database with a more complex custom setup (a new user with particular permissions, a new database with tables etc).

SOLUTION

Docker allows additional initialization to be performed while the database is bootstrapping. This works by extending the base registry database image with init scripts placed in the /docker-entrypoint-initdb.d directory. Docker first runs the default docker-entrypoint.sh script, then all scripts under /docker-entrypoint-initdb.d.

Creating custom PostgreSQL image


Dockerfile

FROM postgres:alpine
ADD ./init/* /docker-entrypoint-initdb.d/

Directory layout:

.
├── mongodb
│   └── init
├── nats
├── networking.md
├── postgres
│   ├── Dockerfile
│   ├── init
│   │   └── 01-author-setup.sh
│   ├── Makefile
│   └── README.md
└── README.md

Our local ./init directory contains the following bash script, which creates a database, a custom user granted privileges on that database and (optionally) a table.

01-author-setup.sh

#!/usr/bin/env bash

# Credits: based on https://medium.com/@beld_pro/quick-tip-creating-a-postgresql-container-with-default-user-and-password-8bb2adb82342

# This script is used to initialize postgres after it has started running,
# to provide the database(s) and table(s) expected by a connecting
# application.

# In this case, postgres is used by an author-service microservice,
# which expects access to:

#   - database called publication_manager
#   - within it a table called author

#   * db user with appropriate privileges to the database

set -o errexit

PUBLICATION_MANAGER_DB=${PUBLICATION_MANAGER_DB:-publication_manager}
AUTHOR_DB_TABLE=${AUTHOR_DB_TABLE:-authors}
AUTHOR_DB_USER=${AUTHOR_DB_USER:-author_user}
AUTHOR_DB_PASSWORD=${AUTHOR_DB_PASSWORD:-authorpass}
POSTGRES_USER=${POSTGRES_USER:-postgres}

# By default POSTGRES_PASSWORD is an empty string. For security reasons it is advisable
# to set it when starting the container:
#
#   docker run --rm -e POSTGRES_PASSWORD=mypass -p 5432:5432 -d --name author_postgres author_postgres
#   psql -h localhost -p 5432 -U postgres

#       Note that unlike MySQL, psql does not provide a flag for supplying the password;
#       the password is entered interactively.
#       The PostgreSQL image sets up trust authentication locally, so no password is required
#       when connecting from localhost (inside the same container). I.e. psql in this script,
#       which runs after Postgres starts, does not need to authenticate.

POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-}

# Debug ----------------------------------------------------
echo "==> POSTGRES_USER ... $POSTGRES_USER"
echo "==> POSTGRES_DB ... $POSTGRES_DB"
echo "==> PUBLICATION_MANAGER_DB ... $PUBLICATION_MANAGER_DB"
echo "==> AUTHOR_DB_USER ... $AUTHOR_DB_USER"
echo "==> AUTHOR_DB_PASSWORD ... [$AUTHOR_DB_PASSWORD]"
echo "==> AUTHOR_DB_TABLE ... $AUTHOR_DB_TABLE"
echo "==> POSTGRES_PASSWORD = [$POSTGRES_PASSWORD]"
# ----------------------------------------------------------

# Environment variables that need to be set.
#   Defaults are provided above, but we want to ensure the defaults
#   are not accidentally removed from this file, causing a problem.
readonly REQUIRED_ENV_VARS=(
  "PUBLICATION_MANAGER_DB"
  "AUTHOR_DB_USER"
  "AUTHOR_DB_PASSWORD"
  "AUTHOR_DB_TABLE")

# Main execution:
# - verifies all environment variables are set
# - runs SQL code to create user and database
# - runs SQL code to create table
main() {
  check_env_vars_set
  init_user_and_db

  # Leave commented out if the author-service uses the gorm AutoMigrate feature:
  #   AutoMigrate creates extra columns (xxx_unrecognized, xxx_sizecache)
  #   based on the proto message, which are required for proto message
  #   transactions to work with the table.
  # init_db_tables
}

# ----------------------------------------------------------
# HELPER FUNCTIONS

# Check if all of the required environment
# variables are set
check_env_vars_set() {
  for required_env_var in "${REQUIRED_ENV_VARS[@]}"; do
    if [[ -z "${!required_env_var}" ]]; then
      echo "Error:
    Environment variable '$required_env_var' not set.
    Make sure you have the following environment variables set:
      ${REQUIRED_ENV_VARS[@]}
Aborting."
      exit 1
    fi
  done
}

# Perform initialization in the already-started PostgreSQL
#   - create the database
#   - set up user for the author-service database:
#         this user needs to be able to create a table,
#         to insert/update and delete records
init_user_and_db() {
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
     CREATE DATABASE $PUBLICATION_MANAGER_DB;
     CREATE USER $AUTHOR_DB_USER WITH PASSWORD '$AUTHOR_DB_PASSWORD';
     GRANT ALL PRIVILEGES ON DATABASE $PUBLICATION_MANAGER_DB TO $AUTHOR_DB_USER;
EOSQL
}

#   - create database tables
init_db_tables() {
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" "$PUBLICATION_MANAGER_DB" <<-EOSQL
    CREATE TABLE $AUTHOR_DB_TABLE(
    ID             CHAR VARYING(60) PRIMARY KEY NOT NULL,
    FIRST_NAME     CHAR VARYING(40) NOT NULL,
    LAST_NAME      CHAR VARYING(60) NOT NULL,
    ADDRESS        CHAR(100),
    COUNTRY        CHAR(70),
    EMAIL          CHAR(70),
    PASSWORD       CHAR VARYING(50),
    TOKEN          TEXT
);
EOSQL
}

# Executes the main routine. Command-line arguments are passed
# through for completeness, although none are used here.
main "$@"
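The script relies on two bash parameter-expansion features: ${VAR:-default} to supply defaults, and ${!name} indirection to check each variable listed in REQUIRED_ENV_VARS by name. A minimal standalone sketch (variable names here are illustrative):

```shell
#!/usr/bin/env bash
set -o errexit

# ${VAR:-default}: use the environment value if set, else the default.
DB_NAME=${DB_NAME:-publication_manager}
echo "DB_NAME=$DB_NAME"

# ${!name}: indirect expansion - expands the variable whose name is
# stored in "name". This is how check_env_vars_set inspects each entry.
REQUIRED=("DB_NAME" "DB_USER")
DB_USER=author_user
for name in "${REQUIRED[@]}"; do
  if [[ -z "${!name}" ]]; then
    echo "Error: $name not set" >&2
    exit 1
  fi
  echo "$name is set to ${!name}"
done
```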

Makefile

build:
	docker build -t author-postgres .

run:
	docker run --rm -p 5432:5432 --network=pm-net-bridge --name author-postgres author-postgres

The docker run command above also attaches the running postgres container to a custom bridge network, so that a microservice attached to the same bridge can connect to the database.
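The bridge network itself has to exist before either container can attach to it; a sketch (the network and database names follow the post, the author-service image name is illustrative):

```shell
# Create the custom bridge network once.
docker network create pm-net-bridge

# Database and microservice attach to the same bridge. Containers on a
# user-defined bridge can reach each other by container name, so the
# service can use author-postgres:5432 as its database address.
docker run --rm -d -p 5432:5432 --network=pm-net-bridge --name author-postgres author-postgres
docker run --rm -d --network=pm-net-bridge --name author-service author-service
```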

LINKS


https://github.com/tamarakaufler/grpc-publication-manager
https://github.com/tamarakaufler/go-calculate-for-me


Saturday 12 May 2018

Dockerization of Go web applications using alpine base image on Ubuntu 16.04

Using the attractively small alpine image as a base for dockerizing a Go web application on an Ubuntu host does not work if the application binary is built with the traditional:

GOOS=linux GOARCH=amd64  go build


Dockerfile

FROM alpine:latest

RUN mkdir /app
WORKDIR /app
COPY service /app
EXPOSE 50051
ENTRYPOINT ["./service"]

The image builds without a problem, but running a container:

docker run -p 50051:50051 author-service

results in the following error:

standard_init_linux.go:178: exec user process caused "no such file or directory"

The reason lies in the alpine image missing dependencies that a Go application needs by default. In particular, the Go net package by default requires cgo, producing a binary dynamically linked against C libraries (glibc), which alpine does not provide (it ships musl instead).

SOLUTION


a)
On some systems (like Ubuntu) it is possible to use the network without cgo, so we can use the CGO_ENABLED flag to change the default:


CGO_ENABLED=0 GOOS=linux GOARCH=amd64  go build .

b)
Alternatively, it is possible to enforce the pure Go implementation of the net dependencies, netgo:

GOOS=linux GOARCH=amd64  go build -tags netgo -a -v .
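To check whether a build produced a binary that will run on alpine, one can inspect it with ldd (assuming the binary is named service): a dynamically linked binary lists its shared libraries, while a statically linked one reports "not a dynamic executable".

```shell
# After a plain "go build": ldd prints the glibc shared objects the
# binary needs at runtime - these are missing on alpine.
# After CGO_ENABLED=0 or "-tags netgo": ldd reports the binary is
# not a dynamic executable, so it runs on alpine without glibc.
ldd ./service
```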

c)
If for any reason you don't want to use either of the above, just use debian instead of alpine as the base image.