
Saturday, 23 January 2021

dpkg: package is in a very inconsistent state

Somehow the docker-ce package on my Ubuntu 16.04 machine got into a broken state. I could not update it, I could not delete it. Running this and other package-management commands

sudo dpkg --configure -a

resulted in:

.....

dpkg: error processing package docker-ce (--purge):
 package is in a very bad inconsistent state; you should
 reinstall it before attempting a removal
Errors were encountered while processing:
 docker-ce

.....

Below are the steps that helped me solve the problem:

A) If you get an error message about not being able to acquire a lock, run the following commands to find which processes are holding the particular lock(s):

sudo lsof /var/lib/dpkg/lock-frontend
sudo lsof /var/lib/dpkg/lock
sudo lsof /var/lib/apt/lists/lock
sudo lsof /var/cache/apt/archives/lock

Then kill the process(es) with

sudo kill -9 <process_id>

and remove the associated lock(s)

sudo rm /var/lib/dpkg/lock-frontend
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock

If no processes are listed, just remove the lock(s).
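
If you want to script this step, a minimal sketch (my own, not part of the original fix) relies on lsof exiting non-zero when nothing holds the file, so only unheld locks get removed:

for lock in /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock \
            /var/lib/apt/lists/lock /var/cache/apt/archives/lock; do
    # lsof succeeds only if some process still holds the lock
    sudo lsof "$lock" || sudo rm -f "$lock"
done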

B) Find the package (docker-ce) location using

ls -l /var/lib/dpkg/info | grep docker-ce

and do a backup of the package files

sudo mv /var/lib/dpkg/info/docker-ce.* /tmp/

          

C) Cleanup the inconsistency

sudo dpkg --remove --force-remove-reinstreq docker-ce  

D) Reinstall docker-ce

sudo dpkg --configure -a
sudo apt-get install docker-ce
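
To double-check the package is healthy again, dpkg itself can report on it (standard dpkg options, nothing specific to this fix):

dpkg -l docker-ce     # the status column should read ii (installed, configured)
sudo dpkg --audit     # lists half-configured/broken packages; should print nothing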


Links

https://itsfoss.com/could-not-get-lock-error/

https://stackoverflow.com/questions/48431372/removing-broken-packages-in-ubuntu

https://stackoverflow.com/questions/55700980/cant-uninstall-docker-ce-and-cant-install-any-new-programs

https://askubuntu.com/questions/979293/docker-ce-post-installation-subprocess-never-finishes

Wednesday, 9 October 2019

Running containerized traffic lights in Raspberry Pi Kubernetes cluster

My cluster setup


The incentive behind this exercise was to understand how to set up deployment of an application talking to Raspberry Pi GPIOs when running the application in a container within a Kubernetes cluster.

Hardware


  • four Raspberry Pi 4 Model B (4GB of memory)
  • 4 microSD cards (3 x 32GB, 1 x 64GB):
    • Samsung EVO Plus 32 GB microSDHC 
    • SanDisk Ultra 32GB microSDHC Memory Card 
  • 5 port USB power supply (a PoE hat is currently too expensive at £18 per board)
  • 5 port Ethernet switch (no PoE functionality):
    • TP-Link LS1005G 5-Port Desktop/Wallmount Gigabit Ethernet Switch
  • 4 USB cables (the best speed you can afford)
  • 4 Ethernet cables
    • Multi Cable SLIM FLAT 1m Cat6 RJ45 Ethernet Network Patch Lan cable
  • 2 traffic lights
  • 3 lights:
    • 3-Piece Set KY-009 5050 3-Colour SMD RGB LED Module for Arduino
  • 3 GPIO header extensions:
    • T-Type GPIO Extension Board Module with 40 Pin Rainbow Ribbon Cable
  • Cluster tower:
    • MakerFun Pi Rack Case for Raspberry Pi 4 Model B,Raspberry Pi 3 B+ Case with Cooling Fan and Heatsink, 4 Layers Acrylic Case Stackable

Software

  • Balena Etcher
  • Raspbian lite (at the time of writing buster)
  • k3s or full blown k8s
    • k3s lightweight kubernetes setup:
      • either manually
      • using k3sup
    • k8s set up using kubeadm
You will not need it for this project, but if you want to set up a Raspberry Pi Go development environment, you can download a Go binary for the ARM architecture. The uname -a command tells you what architecture the board has:

armv6l, armv7l .......... go1.13.1.linux-armv6l.tar.gz (at the time of writing), for Raspbian OS

arm64 ....................... go1.13.1.linux-arm64.tar.gz, for other 64-bit OSes

There are articles documenting the setup of a Raspberry Pi Kubernetes cluster (see Links), so I shall not detail how I went about it here. Perhaps YARPCA in the future. One thing I found I needed to do to be able to access the cluster locally using the kubectl command was to set the KUBECONFIG environment variable, even though I had the right (Raspberry Pi cluster) context in the default file ~/.kube/config. This was not obvious from the Kubernetes documentation.
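
For the record, something along these lines was enough (assuming the cluster context lives in the default file):

export KUBECONFIG=$HOME/.kube/config
kubectl config current-context   # should print the Raspberry Pi cluster context
kubectl get nodes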

I tried three ways of installing Kubernetes: using kubeadm, k3sup and manual installation of k3s. I now have two clusters, one with the full blown Kubernetes container management system, one with the lighter k3s.

Each of the Raspberry Pi boards has a traffic light or an LED light (3 changing colours, ie it connects to 3 pins + ground) connected to its GPIOs.

Fun with traffic lights


The traffic-lights Go code, Dockerfile and kubernetes manifests can be found on github.

main.go
package main

import (
 "fmt"
 "os"
 "os/signal"
 "syscall"
 "time"

 rpio "github.com/stianeikeland/go-rpio/v4"
)

func main() {
 fmt.Printf("Starting traffic lights at %s\n", time.Now())

 // Opens memory range for GPIO access in /dev/mem
 if err := rpio.Open(); err != nil {

  fmt.Printf("Cannot access GPIO: %s\n", time.Now())

  fmt.Println(err)
  os.Exit(1)
 }

 // Get the pin for each of the lights (refers to the bcm2835 layout)
 redPin := rpio.Pin(2)
 yellowPin := rpio.Pin(3)
 greenPin := rpio.Pin(4)

 fmt.Printf("GPIO pins set up: %s\n", time.Now())

 // Set the pins to output mode
 redPin.Output()
 yellowPin.Output()
 greenPin.Output()

 fmt.Printf("GPIO output set up: %s\n", time.Now())

 // Clean up on ctrl-c and turn lights out
 c := make(chan os.Signal, 1)
 signal.Notify(c, os.Interrupt, syscall.SIGTERM)
 go func() {
  <-c

  fmt.Printf("Switching off traffic lights at %s\n", time.Now())

  redPin.Low()
  yellowPin.Low()
  greenPin.Low()

  os.Exit(0)
 }()

 defer rpio.Close()

 // Turn lights off to start.
 redPin.Low()
 yellowPin.Low()
 greenPin.Low()

 fmt.Printf("All traffic lights switched off at %s\n\n", time.Now())

 // Let's loop now ...
 for {
  fmt.Println("\tSwitching lights on and off")

  // Red
  redPin.High()
  time.Sleep(time.Second * 2)

  // Yellow
  redPin.Low()
  yellowPin.High()
  time.Sleep(time.Second)

  // Green
  yellowPin.Low()
  greenPin.High()
  time.Sleep(time.Second * 2)

  // Yellow
  greenPin.Low()
  yellowPin.High()
  time.Sleep(time.Second * 2)

  // Yellow off
  yellowPin.Low()
 }

}


I went through several stages to learn and fully understand each scenario and to ensure all was working as it was supposed to.


Stage 1 - running the application directly on Raspberry Pi


After setting up the application dependencies using Go modules, I compiled a binary for the ARM architecture:

    GOOS=linux GOARCH=arm GOARM=7 go build -o trafficlights_arm7 .

Then I transferred it to each of the boards to test the pins. The IP addresses are set to be static in my router, eg the master is 192.168.1.92 etc. The hostnames are fixed as well; my master is raspberrypi-k3s-a.

    scp trafficlights_arm7 pi@192.168.1.92:.
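
To push the binary to all four boards in one go, a small loop helps; the addresses other than the master's 192.168.1.92 are hypothetical placeholders for your nodes' static IPs:

for ip in 192.168.1.92 192.168.1.93 192.168.1.94 192.168.1.95; do
    scp trafficlights_arm7 pi@$ip:.
done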

I then sshed into each Raspberry Pi node and ran the traffic lights application

   ./trafficlights_arm7

All confirmed as working satisfactorily, I embarked on stage two, containerizing the traffic lights and running them on the Raspberry Pi nodes in containers.


Stage 2 - running the application in a container on Raspberry Pi


Dockerfile
FROM golang:1.13.1-buster as builder
WORKDIR /app
COPY . .

ENV GOARCH arm
ENV GOARM 7
ENV GOOS linux
RUN ["go", "build", "-o", "trafficlights", "."]

FROM scratch
WORKDIR /app
COPY --from=builder /app/trafficlights /app
CMD ["/app/trafficlights"]


I am using a two-stage image build to keep the final Docker image lightweight.
I create the image by running the following command in the root of the traffic-lights git repo:

     docker build -t "forbiddenforrest/traffic-lights:0.1.0-armv7" .

I then log into my Docker registry (docker login) and push the image

     docker push forbiddenforrest/traffic-lights:0.1.0-armv7

Now I can ssh into one of my Raspberry Pi nodes and run a container based on the pushed traffic-lights image. The docker command is available when using the full version of Kubernetes. When using k3s, docker is not available out of the box, as k3s uses containerd. docker can be installed separately (sudo apt-get install docker.io).
Manipulating the traffic lights from a containerized application did not prove straightforward. The container needs access to the Raspberry Pi node hardware, which is not given by default. I searched for a solution and came across some, but none worked for me. So I first needed to understand better how things work on the Raspberry Pi side and on the Docker side, then apply a spot of trial and error.

The result that worked for me:

     docker run --rm -it --device /dev/mem --device /dev/gpiomem  forbiddenforrest/traffic-lights:0.1.0-armv7

alternatively

     docker run --rm -it --device=/dev/mem --device=/dev/gpiomem  forbiddenforrest/traffic-lights:0.1.0-armv7

alternatively

     docker run --rm -it --device=/dev/mem:/dev/mem \
--device=/dev/gpiomem:/dev/gpiomem forbiddenforrest/traffic-lights:0.1.0-armv7


This (based on what I found when researching) did not work:

     docker run --rm -it --privileged forbiddenforrest/traffic-lights:0.1.0-armv7

     docker container run --rm -it --privileged --device=/dev/mem:/dev/mem --device=/dev/gpiomem:/dev/gpiomem -v /sys:/sys  forbiddenforrest/traffic-lights:0.1.0-armv7

     docker container run --rm -it --privileged  -v /sys:/sys forbiddenforrest/traffic-lights:0.1.0-armv7


All confirmed working, I moved to the next stage of running traffic-lights in a pod.

Stage 3 - running the application in a Pod in Raspberry Pi Kubernetes cluster


traffic_lights_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: traffic-lights
  labels:
    app: traffic-lights
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 997
    fsGroup: 15
  containers:
  - name: traffic-lights
    image: forbiddenforrest/traffic-lights:0.1.0-armv7
    securityContext:
      privileged: true

The securityContext needs to be set correctly both for the pod and the containers running in that pod:

runAsUser ..... 1000 (the pi user)
runAsGroup ... 997  (group of /dev/gpiomem, the special character file which mirrors the memory associated with the GPIO device)
fsGroup .......... 15    (group of /dev/mem, the special character file which mirrors the main memory. Some volumes (storage) are owned and are writable by this GID.)

from
pi@raspberrypi-k3s-a:~ $ ls -l /dev/mem
crw-r----- 1 root kmem 1, 1 Oct  6 23:17 /dev/mem
pi@raspberrypi-k3s-a:~ $ ls -l /dev/gpiomem 
crw-rw---- 1 root gpio 247, 0 Oct  6 23:17 /dev/gpiomem
pi@raspberrypi-k3s-a:~ $ cat /etc/group |grep mem
kmem:x:15:
pi@raspberrypi-k3s-a:~ $ cat /etc/group |grep gpio
gpio:x:997:pi

Dealing with pods, we are now fully dealing with the cluster: when the pod is created, the traffic-lights application will be scheduled on one of the worker nodes. If the pod is killed and a new one created, it will be scheduled by kubernetes on whichever node suits it. The master node is tainted so it is not considered for scheduling.
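
The node a pod landed on can be checked directly, as the wide output includes a NODE column:

kubectl get pods -o wide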

To have visual proof of the scheduling, I created a bash script (below) for creating and deleting pods. Each of the worker nodes is connected to a light. When a pod gets deployed on a node, the lights connected to that particular Raspberry Pi board start working.

kubectl apply -f traffic_lights_pod.yaml
sleep 6
kubectl delete pod traffic-lights
kubectl apply -f traffic_lights_pod.yaml
sleep 6
kubectl delete pod traffic-lights
kubectl apply -f traffic_lights_pod.yaml
sleep 6
kubectl delete pod traffic-lights
kubectl apply -f traffic_lights_pod.yaml
sleep 6
kubectl delete pod traffic-lights

It took about 10 seconds to remove a deleted pod. If a pod is created directly in the cluster, kubernetes will not recreate it automatically when it is deleted. Let's deploy the traffic-lights using a Kubernetes Deployment - my last stage, Stage 4.


Stage 4 - running the application as a Deployment in Raspberry Pi Kubernetes cluster


traffic_lights_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traffic-lights
  labels:
    app: traffic-lights
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traffic-lights
  template:
    metadata:
      name: traffic-lights
      labels:
        app: traffic-lights
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 997
        fsGroup: 15
      containers:
      - name: traffic-lights
        image: forbiddenforrest/traffic-lights:0.1.0-armv7
        securityContext:
          privileged: true

The Deployment template contains the same Pod specification, without the apiVersion and kind information. The Deployment prescribes that there should be one pod running at any time.
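
Though not needed here, scaling the Deployment is a one-liner; with more replicas (and pods landing on different workers) several boards would run their lights at once:

kubectl scale deployment traffic-lights --replicas=2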

To see which worker the traffic-lights application is running on, I have the following script:

start_stop_deploy.sh
#!/bin/bash
echo "kubectl apply -f traffic_lights_deploy.yaml"
echo "then 10 rounds of pod deletion"
echo ""
echo "Start ..."
sleep 5
kubectl apply -f traffic_lights_deploy.yaml

for i in 1 2 3 4 5 6 7 8 9 10
do
    echo "round $i"
    echo "creating pod at `date`"
    sleep 4
    echo "deleting pod at `date`"
    kubectl delete pod -l 'app=traffic-lights'
done
kubectl delete deployments.apps/traffic-lights

echo "End ..."

It takes about 5 seconds for a new pod to start when one is deleted. A new pod is already in place while kubernetes is tidying up the deleted one.


And now a bit of cinematography



Monday, 6 August 2018

Docker issue with "no such file or directory"

Problem


Trying to run a docker container fails with a mysterious error:
container_linux.go:247: starting container process caused "exec: \"./twit_test\": stat ./twit_test: no such file or directory"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"./twit_test\": stat ./twit_test: no such file or directory".
ERRO[0000] error getting events from daemon: context canceled
Makefile:18: recipe for target 'run' failed
make: *** [run] Error 127
The container could not be started on a remote server, while on the local machine all worked fine.

Background


Command:

docker run \
    --name=twit-test \
    --rm \
    -p 5000:7077 \
    -v /home/tamara/data:/app \
    -e TWITTER_CONS_KEY=${TWITTER_CONS_KEY} \
    -e TWITTER_CONS_SECRET=${TWITTER_CONS_SECRET} \
    quay.io/tamarakaufler/twit-test:v1alpha1


Dockerfile is located in ~/programming/go/src/..../go-twit/examples

After some testing and poking around, I located the offending line, which was the volume binding. From what I can see, the local volume to be bind-mounted (a funny word) must be at the level where the command is run or lower, ie the same directory or its subdirectory. Maybe this is a well known fact but it was not to me. Wiser now.
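
For illustration, a binding that follows this rule, run from the data directory's parent; the container path /data is an arbitrary choice for this sketch:

cd /home/tamara
docker run --rm -v "$PWD/data:/data" quay.io/tamarakaufler/twit-test:v1alpha1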

Wednesday, 1 August 2018

Eclectic collection of helpful Docker and Kubernetes commands

Docker


docker build -f gcd-service/Dockerfile -t quay.io/tamarakaufler/gcd-service:$(GCD_IMAGE_TAG) .

docker login quay.io -u tamarakaufler -p $QUAY_PASS

docker push quay.io/tamarakaufler/gcd-service:$GCD_IMAGE_TAG

docker ps | grep "postgres" | awk '{print $1}' | xargs docker stop

docker ps | grep "-service" | awk '{print $1}' | xargs docker rm -f

docker run --name=decrypt-incremental --rm -v $PWD:/data quay.io/tamarakaufler/decrypt-incremental:v1alpha1 -f=/data/test4.txt

kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar


Kubernetes


kubectl delete pods $(kubectl get pods |grep 'fe-deployment'|awk '{print $1;}')

kubectl port-forward $(kubectl get  pods --selector=app=kube-prometheus-grafana -n  monitoring --output=jsonpath="{.items..metadata.name}") -n monitoring  3000

kubectl get nodes -o json | grep name


Tuesday, 29 May 2018

Bootstrapping a database for use with containerized applications (PostgreSQL, MongoDB etc)

PROBLEM

Developing a dockerized microservice that requires access to a dockerized database with a more complex custom setup (a new user with particular permissions, a new database with tables etc).

SOLUTION

Docker allows you to perform additional initialization while the database is bootstrapping. It works by extending the base registry database image with init scripts placed in the /docker-entrypoint-initdb.d directory. Docker first runs the default docker-entrypoint.sh script, then all scripts under the /docker-entrypoint-initdb.d directory.

Creating custom PostgreSQL image


Dockerfile

FROM postgres:alpine
ADD ./init/* /docker-entrypoint-initdb.d/
.
├── mongodb
│   └── init
├── nats
├── networking.md
├── postgres
│   ├── Dockerfile
│   ├── init
│   │   └── 01-author-setup.sh
│   ├── Makefile
│   └── README.md
└── README.md

Our local ./init dir contains the following bash script, which creates a database, a custom user with privileges granted on the created database and (optionally) a table.

01-author-setup.sh

#!/usr/bin/env bash

# Credits: based on https://medium.com/@beld_pro/quick-tip-creating-a-postgresql-container-with-default-user-and-password-8bb2adb82342

# This script is used to initialize postgres, after it started running,
# to provide the database(s) and table(s) expected by a connecting
# application.

# In this case, postgres is used by a author-service microservice,
# which expects access to:

#   - database called publication_manager
#   - within it a table called author

#   * db user with appropriate privileges to the database

set -o errexit

PUBLICATION_MANAGER_DB=${PUBLICATION_MANAGER_DB:-publication_manager}
AUTHOR_DB_TABLE=${AUTHOR_DB_TABLE:-authors}
AUTHOR_DB_USER=${AUTHOR_DB_USER:-author_user}
AUTHOR_DB_PASSWORD=${AUTHOR_DB_PASSWORD:-authorpass}
POSTGRES_USER=${POSTGRES_USER:-postgres}

# By default POSTGRES_PASSWORD is an empty string. For security reasons it is advisable
# to set it up when we start running the container:
#
#   docker run --rm -e POSTGRES_PASSWORD=mypass -p 5432:5432 -d --name author_postgres author_postgres
#   psql -h localhost -p 5432 -U postgres

#       Note that unlike in MySQL, psql does not provide a flag for providing a password.
#       The password is provided interactively.
#       The PostgreSQL image sets up trust authentication locally, so password is not required
#       when connecting from localhost (inside the same container). Ie. psql in this script, 
#       that runs after Postgres starts, does not need the authentication. 

POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-}

# Debug ----------------------------------------------------
echo "==> POSTGRES_USER ... $POSTGRES_USER"
echo "==> POSTGRES_DB ... $POSTGRES_DB"
echo "==> PUBLICATION_MANAGER_DB ... $PUBLICATION_MANAGER_DB"
echo "==> AUTHOR_DB_USER ... $AUTHOR_DB_USER"
echo "==> AUTHOR_DB_PASSWORD ... [$AUTHOR_DB_PASSWORD]"
echo "==> AUTHOR_DB_TABLE ... $AUTHOR_DB_TABLE"
echo "==> POSTGRES_PASSWORD = [$POSTGRES_PASSWORD]"
# ----------------------------------------------------------

# What environment variables need to be set up.
#   Environment variable defaults are set up in this case, 
#   however we want to ensure the defaults are not accidentally
#   removed from this file causing a problem.
readonly REQUIRED_ENV_VARS=(
  "PUBLICATION_MANAGER_DB"
  "AUTHOR_DB_USER"
  "AUTHOR_DB_PASSWORD"
  "AUTHOR_DB_TABLE")

# Main execution:
# - verifies all environment variables are set
# - runs SQL code to create user and database
# - runs SQL code to create table
main() {
  check_env_vars_set
  init_user_and_db

  # Keep commented out if the author-service uses the gorm AutoMigrate feature:
  #   the gorm AutoMigrate feature creates extra columns (xxx_unrecognized, xxx_sizecache)
  #   based on the proto message, which are required for proto messages transactions
  #   to work with the table
  # init_db_tables
}

# ----------------------------------------------------------
# HELPER FUNCTIONS

# Check if all of the required environment
# variables are set
check_env_vars_set() {
  for required_env_var in "${REQUIRED_ENV_VARS[@]}"; do
    if [[ -z "${!required_env_var}" ]]; then
      echo "Error:
    Environment variable '$required_env_var' not set.
    Make sure you have the following environment variables set:
      ${REQUIRED_ENV_VARS[@]}
Aborting."
      exit 1
    fi
  done
}

# Perform initialization in the already-started PostgreSQL
#   - create the database
#   - set up user for the author-service database:
#         this user needs to be able to create a table,
#         to insert/update and delete records
init_user_and_db() {
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
     CREATE DATABASE $PUBLICATION_MANAGER_DB;
     CREATE USER $AUTHOR_DB_USER WITH PASSWORD '$AUTHOR_DB_PASSWORD';
     GRANT ALL PRIVILEGES ON DATABASE $PUBLICATION_MANAGER_DB TO $AUTHOR_DB_USER;
EOSQL
}

#   - create database tables
init_db_tables() {
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" "$PUBLICATION_MANAGER_DB" <<-EOSQL
    CREATE TABLE $AUTHOR_DB_TABLE(
    ID             CHAR VARYING(60) PRIMARY KEY NOT NULL,
    FIRST_NAME     CHAR VARYING(40) NOT NULL,
    LAST_NAME      CHAR VARYING(60) NOT NULL,
    ADDRESS        CHAR(100),
    COUNTRY        CHAR(70),
    EMAIL          CHAR(70),
    PASSWORD       CHAR VARYING(50),
    TOKEN          TEXT
);
EOSQL
}

# Executes the main routine with environment variables
# passed through the command line. Added for completeness 
# as not used here.
main "$@"

Makefile

build:
 docker build -t author-postgres .

run:
 docker run --rm  -p 5432:5432 --network=pm-net-bridge --name author-postgres author-postgres

The docker run command above also attaches the running postgres container to a custom bridge network so that a microservice attached to the same bridge can connect to the database.
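
The pm-net-bridge network must exist before make run; if it does not, it can be created first:

docker network create --driver bridge pm-net-bridge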

LINKS


https://github.com/tamarakaufler/grpc-publication-manager
https://github.com/tamarakaufler/go-calculate-for-me


Saturday, 12 May 2018

Dockerization of Go web applications using alpine base image on Ubuntu 16.04

Using the attractively small alpine image as a base for dockerizing a Go web application on an Ubuntu host does not work if the application binary is built using the traditional:

GOOS=linux GOARCH=amd64  go build


FROM alpine:latest

RUN mkdir /app
WORKDIR /app
COPY service /app
EXPOSE 50051
ENTRYPOINT ["./service"]

The image builds without a problem, but running a container:

docker run -p 50051:50051 author-service

results in the following error:

standard_init_linux.go:178: exec user process caused "no such file or directory"

The reason behind this lies in the alpine OS missing dependencies that the Go application by default needs. In particular it is the Go net package, which by default requires cgo to enable dynamic runtime linking to C libraries - and alpine does not provide the expected C libraries.

SOLUTION


a)
On some systems (like Ubuntu) it is possible to use the network without cgo, so we can use the CGO_ENABLED flag to change the default:


CGO_ENABLED=0 GOOS=linux GOARCH=amd64  go build .

b)
Alternatively, it is possible to enforce using the Go implementation of the net dependencies, netgo: 

GOOS=linux GOARCH=amd64  go build -tags netgo -a -v .
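
Either way, you can check the binary on the build host before baking the image; a statically linked binary is what alpine needs (service being the binary name used in the Dockerfile above):

file service    # should report ... statically linked ...
ldd service     # should report: not a dynamic executable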

c)
If you for any reason don't want to use either of the above, just use debian instead of alpine.


Sunday, 13 August 2017

Setting up Brother printers HL-4570CDW (works for DCP-J315W too) on Ubuntu 17.04


  1. Download drivers from http://support.brother.com/g/b/downloadlist.aspx?c=us&lang=en&prod=hl4570cdw_all&os=128
    1. Either download the Driver Install Tool OR
    2. the two debian packages (LPR printer driver and CUPS wrapper printer driver)
  2. Install either through the Install tool 
    1. there are detailed instructions
    2. the installation includes download of 2 drivers
  3. OR by installing the printer driver first, then the CUPS wrapper driver
  4. Do the printer network configuration (through the printer panel)
  5. Make the printer available for use:
    1. System Settings -> Printers -> Add printer -> Network Printer -> Find Network Printer
    2. Automatic discovery should show you the printer. If more appear, choose the one with AppSocket/JetDirect.
Troubleshooting

If it happens that you cannot discover your network printer, try restarting cups:

sudo /etc/init.d/cups restart

Wednesday, 10 May 2017

Dockerization of a Golang application

There are several possible ways to write a Dockerfile for a Golang application. Which one to choose depends on the dependencies the application needs.

If the application uses just built-in packages, it is possible to have a very minimalistic Dockerfile.


             FROM scratch

             COPY ./goaws /

             CMD ["/goaws"]


Using scratch means we are not basing our image on anything; there is no operating system. This is possible if we do not need to perform any shell operations that require cooperation of the OS, like mkdir etc.

-------------------------------------------------------------------------------------------------

If there are non built-in dependencies, there are several possible approaches.

a) The first approach bases the new Docker image on an official golang one, downloads all the dependencies, builds the Go binary and starts the application.

There are several gotchas to watch out for:

The application dependencies, whether internal (packages provided within the application) or external (3rd party packages from the Go public repository), need to be in the right place for the build to be able to find them. The paths searched depend on the $GOPATH and $GOROOT environment variables. In the golang:1.8 image OS, the $GOPATH is /go. I created a /go/src/github.com/tamarakaufler/goaws directory, which allows the Go compiler to find all internal packages during the build. The go get command will install the external packages and we are good to go (excuse the pun).


             FROM golang:1.8

             RUN mkdir -p /go/src/github.com/tamarakaufler/goaws

             COPY . /go/src/github.com/tamarakaufler/goaws/

             RUN go get github.com/ghodss/yaml

             RUN go get github.com/gorilla/mux

             WORKDIR /go/src/github.com/tamarakaufler/goaws

             RUN go build .

             CMD ["./goaws"]


b) Another option is to create an intermediate image based on an official golang one, install various 3rd party packages that your applications need and copy over internal packages. Then:

             FROM my_golang

             COPY ./goaws /

             CMD ["/goaws"]

where my_golang's Dockerfile is:

           FROM golang:1.8

           RUN mkdir -p /go/src/github.com/tamarakaufler/goaws
           COPY ./app/conf/config.go /go/src/github.com/tamarakaufler/goaws/app/conf/
           COPY ./app/router/router.go /go/src/github.com/tamarakaufler/goaws/app/router/

           RUN go get github.com/ghodss/yaml
           RUN go get github.com/gorilla/mux

Saturday, 6 May 2017

Dockerization of an sftp service

Synopsis

As part of building a web service, I wanted to have some of the microservice dependencies dockerized for easy setup and deployment. The three applications my web service depends on are:

  • mongodb
  • rabbitmq
  • sftp

This post is about dockerizing sftp. The container will provide sftp-only user accounts and the users will be restricted to their home directory. The former is done through disabling login (in the Dockerfile), the latter by chrooting to the user's home directory (in the sshd_config file).


FROM ubuntu:latest

The latest version of ubuntu is our starting point.


RUN apt-get update && \
    apt-get -y install openssh-server


Sftp (SSH File Transfer Protocol) is a separate protocol packaged with SSH, so we install the ssh server.


RUN mkdir /var/run/sshd

The privilege separation directory, /var/run/sshd, must be present, otherwise the container will exit immediately after starting.


COPY sshd_config /etc/ssh/sshd_config

The default ssh configuration is adjusted for sftp purposes (https://github.com/tamarakaufler/go_loyalty_scheme_service/tree/master/dockerized/sftp).


RUN groupadd sftpusers

All sftp users will be part of this group.

 
 

RUN adduser  --quiet --disabled-password sftp_loyalty

When creating a new sftp user, the --disabled-password option is provided so that the following command, which sets the password, does not run into problems.


RUN echo "sftp_loyalty:BIGSeCrEt" | chpasswd sftp_loyalty

RUN usermod -g sftpusers sftp_loyalty && \
    usermod -s /bin/nologin sftp_loyalty && \
    chown root:sftp_loyalty /home/sftp_loyalty && \
    chmod 755 /home/sftp_loyalty


This assigns the sftp user to the correct group and disables normal login.


RUN mkdir /home/sftp_loyalty/uploads && \
    chown sftp_loyalty:sftp_loyalty /home/sftp_loyalty/uploads && \
    chmod 755 /home/sftp_loyalty/uploads

EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]


Starts the ssh server. I originally tried using:   service sshd start
but that did not work, preventing the container from starting.

Update:  Providing the full path:

             /usr/sbin/service ssh start

works
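
To try the whole thing out, build and run the image, then connect from the host; the image name and host port below are illustrative:

docker build -t sftp-loyalty .
docker run -d -p 2222:22 --name sftp-loyalty sftp-loyalty
sftp -P 2222 sftp_loyalty@localhost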

-----------------------------------------------------------------------------------------------------------------
sshd_config (based on the default /etc/ssh/sshd_config)

  1. Deleted the original line:
    1. Subsystem sftp /usr/lib/openssh/sftp-server
  2. Added at the end of the default sshd_config file:
    1. Subsystem sftp internal-sftp

       Match Group sftpusers
           # chroot sftp users to their home directory
           ChrootDirectory %h
           ForceCommand internal-sftp
           X11Forwarding no
           AllowTcpForwarding no
           PasswordAuthentication yes


https://github.com/tamarakaufler/go_loyalty_scheme_service (when it becomes public)


References


https://www.vultr.com/docs/setup-sftp-only-user-accounts-on-ubuntu-14
https://github.com/atmoz/sftp
https://docs.docker.com/engine/examples/running_ssh_service/

Thursday, 4 May 2017

Xsane - Failed to open device error

PROBLEM


Unable to open xsane due to an error:

Failed to open device 'brother3:bus4:dev2' Invalid argument

BACKGROUND


Ubuntu 16.04 (upgraded from Ubuntu 14.04)
Printer/scanner:       Brother DCP-J315W

SOLUTION


1. Open /etc/udev/rules.d/60-libsane.rules
2. Add the following 2 lines at the end of the device entries (just before "# The following rule..."):
         # Brother scanners
         ATTRS{idVendor}=="04f9", ENV{libsane_matched}="yes" 
3. Restart the OS.

 NOTE


  • idVendor is the same for all Brother printers

Thursday, 27 April 2017

Problem with github not labelling the repo language correctly

GitHub uses the Linguist library to classify a repo. It does so by comparing percentages of the various language files (it goes by extensions, perhaps among other things). So if the repo contains more HTML files than Go files, which was my case, then it is classified as an HTML repo. Not good.

Solution


Add a new file to the repo called .gitattributes with the following content:

*.html linguist-language=Go


Alternatively, for a nodejs repo:

*.html linguist-language=JavaScript
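
Linguist also honours a vendored attribute, which removes the matched files from the language statistics altogether instead of relabelling them:

*.html linguist-vendored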


Cannot sign into my blogger - admin console obstacle

So I wanted to add a new post to my Google blog (this one) after a longish break. It turned out I could not. At least, not easily. I was doing this on my new laptop and, unlike on my previous one, I had also used my work Gmail account here. This, unexpectedly, complicated things. I could not sign into Blogger even when I was signed out of my work Gmail account. Weird but true.

It turns out that Google gets confused by the two accounts if they are used in the same browser. You are on your Blogger page and click on Sign in. It shows you both of your Google logins. You choose your private one, because that one is associated with your blog. Then you are told that you cannot log into the Admin console with your ordinary Gmail.

What!!

Happy ending


  1. Sign out of all your Google accounts. Well, I did that to be on the safe side
  2. Use another browser (I encountered the problem in Firefox, so I used Chrome)
  3. Sign into Google+ using your private Gmail
  4. Voila! Going to my blog page showed I was already logged in. And - I was logged in when I visited my blog in Firefox too.

Saturday, 7 May 2016

Why the order of the Docker commands in the Dockerfile matters

The Docker image is built from directives in the Dockerfile. Docker images are actually built in layers and each layer is part of the image's filesystem. A layer either adds to or replaces the previous one. Each Docker directive, when run, results in a new layer and the layer is cached, unless Docker is instructed otherwise. When the container starts, a final read-write layer is added on top, allowing changes to the container to be stored. Docker uses the union filesystem to merge different filesystems and directories into the final one.

When the Docker image is being built, it uses the cached layers. If a layer was invalidated (one of the invalidation criteria is the checksum of the files present in the layer, so a layer is invalidated when a file has changed since the last build), Docker reruns all directives from the one whose cached layer was invalidated onwards, to recreate up-to-date layers. This makes Docker efficient and fast, and it also means the order of commands is important.

Example:


WRONG

WORKDIR /var/www/my_application            container directory where the RUN, CMD will be run

COPY .  /var/www/my_application            puts all our application files, including package.json, into their resting place in the container
RUN ["npm", "install"]                     installs application nodejs dependencies - rerun whenever the COPY layer above is invalidated
RUN ["node_modules/bower/bin/bower.js", "install", "--allow-root"]   installs application frontend dependencies - rerun whenever the COPY layer above is invalidated
RUN ["node_modules/gulp/bin/gulp.js"]      runs the task runner - rerun whenever the COPY layer above is invalidated


CORRECT


WORKDIR /var/www/my_application            container directory where the RUN, CMD will be run

COPY package.json  /var/www/my_application this layer will only be invalidated if package.json changes
RUN ["npm", "install"]                     installs application nodejs dependencies

COPY bower.json  /var/www/my_application   this layer will only be invalidated if the bower.json  changes
RUN ["node_modules/bower/bin/bower.js", "install", "--allow-root"]  installs application frontend dependencies    

COPY .  /var/www/my_application            puts all our application files into their resting place in the container

RUN ["node_modules/gulp/bin/gulp.js"]      runs task runner

Wednesday, 4 May 2016

Dockerizing a nodejs application requiring AWS authentication

Background


To dockerize an application, ie run an application in a Docker container, means to run the application in a light, portable environment (the container) that contains everything needed to run the application, without having to set up the environment/dependencies the application requires, whether locally or on a Virtual Machine.

Docker works in a client/server mode. We need to have a Docker daemon (server) running in the background and we use the Docker CLI (command line interface, client) to send Docker directives (to build an image, run it, inspect containers, inspect the image, log into the container and more) to the Docker daemon. Docker directives can either be run by a root/sudoer, or it is possible to create a docker group, add one's user to that group and run the docker CLI as that user.

A Docker container provides a stripped-down version of the Linux OS. A Docker image provides the application dependencies.
We say we load the image into the container and run the application, for which the image was built, in the container.

The order of the Docker commands in the Dockerfile matters:

The image is built based on directives found in the Dockerfile. Docker images are built in layers. Each layer is a part of the image's filesystem. It either adds to or replaces the previous layer. Each Docker directive, when run, results in a new layer and the layer is cached, unless Docker is instructed otherwise. When the container starts, a final read-write layer is added on top, allowing changes to the container to be stored. Docker uses the union filesystem to merge different filesystems and directories into the final one.

When the Docker image is being built, it tries to use the previously cached layers. If a layer was invalidated (one of the invalidation criteria is the checksum of the files present in the layer, so a layer is invalidated when a file has changed since the last build), Docker reruns all directives from the one whose cached layer was invalidated onwards, to recreate up-to-date layers. This makes Docker efficient and fast, but it also means the order of commands is important.

Example:


WRONG

WORKDIR /var/www/my_application             container directory where the RUN, CMD will be run

COPY .  /var/www/my_application             puts all our application files, including package.json, into their resting place in the container
RUN npm install                             installs application nodejs dependencies - rerun whenever the COPY layer above is invalidated
RUN ["node_modules/bower/bin/bower.js", "install", "--allow-root"]   installs application frontend dependencies - rerun whenever the COPY layer above is invalidated
RUN ["node_modules/gulp/bin/gulp.js"]       runs the task runner - rerun whenever the COPY layer above is invalidated

CORRECT

WORKDIR /var/www/my_application            container directory where the RUN, CMD will be run

COPY package.json  /var/www/my_application this layer will only be invalidated if package.json changes
RUN npm install                            installs application nodejs dependencies

COPY bower.json  /var/www/my_application   this layer will only be invalidated if bower.json changes
RUN ["node_modules/bower/bin/bower.js", "install", "--allow-root"]  installs application frontend dependencies

COPY .  /var/www/my_application            puts all our application files into their resting place in the container

RUN ["node_modules/gulp/bin/gulp.js"]      runs the task runner


Essential commands:

Command to create the docker image:

[sudo] docker build -t image-name  .

[sudo] docker run [ -t -i ] image-name [ ls -l ]

    -t ............ creates a pseudo terminal with stdin and stdout
    -i ............ interactive
    image-name .... image to run in the container
    ls -l ......... command to run interactively in the container

Commands to run the image in a container:

[sudo] docker run ubuntu /bin/echo 'Hello everybody'
[sudo] docker run -t -i ubuntu /bin/bash     (creates an interactive bash session that allows us to explore the container)

Terminology


  • docker container ..........  provides basic Linux operating system
  • docker image ...............  loading the image into the container extends the container basics with the required dependencies to provide the desired functionality, eg running a web application, running, populating and maintaining a database etc
  • Dockerfile ..................... contains docker instructions/commands for creating the image
  • Docker hub/registry ...... Docker Engine (providing the core docker technology) makes it possible for people to share software through uploading created images

Requirements

  1. download the docker software to be able to run the docker daemon and the CLI binary, which allows you to work with docker containers and docker images
  2. create a docker image, which, after loading into a Docker container, sets up the container for running an application
  3. run the image in the docker container. There is a public docker registry (or it is possible to have a private one), from where you can either download an already existing image and use it as is, or use an existing image as a basis for creating a customized one.

Download and install the Docker

  1. installation instructions for Linux based Operating Systems:
    1. https://docs.docker.com/engine/installation/

Implement the application


                   TODO

Dockerize the application


Create a docker image build file called Dockerfile


FROM node:argon
ENV appDir ${appDir:-/var/www/}

RUN mkdir -p ${appDir}
WORKDIR ${appDir}

COPY package.json ${appDir}/
RUN ["npm", "install"]

COPY bower.json ${appDir}/
RUN ["./node_modules/bower/bin/bower", "install", "--allow-root"]

COPY . ${appDir}
RUN ["./node_modules/gulp/bin/gulp.js"]

CMD ["npm", "start"]

EXPOSE 9090

FROM node:argon
we shall base our custom image on a suitable existing image from the Docker registry/hub. We could use the ubuntu image and download and install a particular nodejs version ourselves as part of the image build, but it is more convenient to use an image already created for a particular use. For other nodejs targeted images, see the Docker hub.

ENV appDir ${appDir:-/var/www/}
we are setting an environment variable determining our application root.

RUN mkdir -p ${appDir}
docker RUN directive executes a shell command.

WORKDIR ${appDir}
docker WORKDIR command decides the place where the subsequent RUN docker directive will be executed.

COPY package.json ${appDir}/
copies a file from our local directory, where we shall be running the docker build command, into the application root directory in the docker container

RUN ["npm", "install"]
installs application dependencies into the container ${appDir}/node_modules

COPY . ${appDir}
copies the application files into the application root directory in the docker container.

RUN ["./node_modules/bower/bin/bower", "install", "--allow-root"]
root will be the user running the commands (unless we specify another user with the docker USER directive) and bower complains if we run install as root unless we specifically allow it. The relative path ./node_modules/bower/bin/bower needs to be used as bower is not installed globally in the node:argon image. We downloaded bower as a nodejs dependency and therefore we have access to its binary in the node_modules directory.

RUN ["./node_modules/gulp/bin/gulp.js"]
now run the default gulp task

CMD ["npm", "start"]
there can be only one CMD directive in the Dockerfile. It either starts the application, as in our case, or sets the command line arguments for the application command to run with, if an ENTRYPOINT (which executable to run after the image is loaded into the container) is specified.

Example:

                # Default Memcached run command arguments
                CMD ["-u", "root", "-m", "128"]

                # Set the entrypoint to the memcached binary
                # (the exec form is needed for the CMD arguments to be appended)
                ENTRYPOINT ["memcached"]

EXPOSE 9090
port on which the application is running in the container


Create .dockerignore file to exclude files from being added to the image

.git
.gitignore
.gitattributes
node_modules
bower_components
Dockerfile
.dockerignore
*.bak
*.orig
*.md
*.swo
*.swp
*.js.*

node_modules and bower_components dependencies will be downloaded and installed during the image build

Create a docker image of your application (the command must be run in the directory containing the Dockerfile):

           [sudo] docker build -t [username/]image-name .

Run the application in the docker container

           [sudo] docker run  -v "/home/tamara/.aws:/root/.aws"  -p 3000:9999 -it image-name

               -v ... binds a local directory to the container
               -p ... the application port exposed by the image, 9999, is accessible locally/from outside of the container on 3000

The application communicates with an AWS S3 bucket. The AWS authentication resides in the .aws subdirectory of the user running the application in the form of the file /home/tamara/.aws/credentials. The dockerized application is run by root in the container (unless we dictate otherwise by using the docker USER command), so we bind, at runtime, the local /home/tamara/.aws to the container directory /root/.aws.
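
An alternative I have not used here: the AWS SDKs also read credentials from environment variables, which docker run can pass through from the host without a volume binding:

           [sudo] docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -p 3000:9999 -it image-name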

Literature

Docker documentation
How To Create Docker Containers Running Memcached