
Saturday, 7 May 2016

Why the order of the Docker commands in the Dockerfile matters

A Docker image is built from the directives in the Dockerfile. Docker images are built in layers, and each layer is part of the image's filesystem: a layer either adds to or replaces the previous one. Each Docker directive, when run, results in a new layer, and the layer is cached unless Docker is instructed otherwise. When the container starts, a final read-write layer is added on top to store changes made to the container. Docker uses a union filesystem to merge the different filesystems and directories into the final one.

When the Docker image is being built, Docker reuses the cached layers where it can. If a layer was invalidated (one of the invalidation criteria is the checksum of the files present in the layer, so a file changing between builds invalidates it), Docker reruns all directives from the invalidated one onwards to recreate up-to-date layers. This caching makes Docker efficient and fast, but it also means the order of the directives is important.

Example:


WRONG

WORKDIR /var/www/my_application            container directory where the RUN, CMD directives will be run

COPY .  /var/www/my_application            puts all our application files, including package.json, into their resting place in the container - any changed file invalidates this layer
RUN ["npm", "install"]                     installs application nodejs dependencies - rerun whenever the COPY layer above is invalidated
RUN ["node_modules/bower/bin/bower.js", "install", "--allow-root"]   installs application frontend dependencies - also rerun on every file change
RUN ["node_modules/gulp/bin/gulp.js"]      runs the task runner - also rerun on every file change


CORRECT


WORKDIR /var/www/my_application            container directory where the RUN, CMD will be run

COPY package.json  /var/www/my_application this layer will only be invalidated if package.json changes
RUN ["npm", "install"]                     installs application nodejs dependencies

COPY bower.json  /var/www/my_application   this layer will only be invalidated if bower.json changes
RUN ["node_modules/bower/bin/bower.js", "install", "--allow-root"]  installs application frontend dependencies    

COPY .  /var/www/my_application            puts all our application files into their resting place in the container

RUN ["node_modules/gulp/bin/gulp.js"]      runs the task runner

Wednesday, 4 May 2016

Dockerizing a nodejs application requiring AWS authentication

Background


To dockerize an application, i.e. to run it in a Docker container, means to run the application in a light, portable virtual machine (the container) that contains everything the application needs. This removes the need to set up the application's environment and dependencies, whether locally or on a virtual machine, before being able to run it.

Docker works in a client/server mode. A Docker daemon (the server) runs in the background and the Docker CLI (the command line interface, the client) sends it directives: to build an image, run it, inspect containers, inspect the image, log into the container and more. Docker commands can either be run by root/a sudoer, or it is possible to create a docker group, add one's user to that group and run the docker CLI as that user.
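A typical sequence for setting up the docker group, as a sketch (assuming a standard Linux installation; a re-login is needed for the group change to take effect):

           sudo groupadd docker             # create the docker group (it may already exist)
           sudo usermod -aG docker $USER    # add the current user to the group
           docker run hello-world           # after logging out and back in, verify the daemon is reachable without sudo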

A Docker container provides a stripped-down version of the Linux OS; a Docker image provides the application dependencies.
We say we load the image into the container and run the application, for which the image was built, in the container.

The order of the Docker commands in the Dockerfile matters:

The image is built in cached layers, one per directive, and the order of the directives determines how often those layers are invalidated and rebuilt. This is explained in detail, with a WRONG/CORRECT example, in the post above, "Why the order of the Docker commands in the Dockerfile matters".


Essential commands:

Command to create the docker image:

[sudo] docker build -t image-name  .

Command to run the docker image in a container:

[sudo] docker run [ -t -i ] image-name [ ls -l ]

  • docker run ...... runs the container
  • -t .............. creates a pseudo terminal with stdin and stdout
  • -i .............. interactive - keeps stdin open
  • image-name ...... image to run in the container
  • ls -l ........... command to run interactively in the container

Examples:

[sudo] docker run ubuntu /bin/echo 'Hello everybody'
[sudo] docker run -t -i ubuntu /bin/bash             (creates an interactive bash session that allows one to explore the container)

Terminology


  • docker container ..........  provides the basic Linux operating system
  • docker image ...............  loading the image into the container extends the container basics with the dependencies required to provide the desired functionality, eg running a web application, or running, populating and maintaining a database
  • Dockerfile ..................... contains the docker instructions/directives for creating the image
  • Docker hub/registry ...... the place where people share software by uploading created images (Docker Engine provides the core docker technology)

Requirements

  1. download the docker software to be able to run the docker daemon and the CLI binary, which allow working with docker containers and images
  2. create a docker image which, after loading into a Docker container, sets up the container for running an application
  3. run the image in the docker container. There is a public docker registry (or it is possible to have a private one) from which an already existing image can either be downloaded and used as is, or used as the basis for creating a customized one (see the sketch below).
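For example, a sketch (the image name is illustrative):

           [sudo] docker search node        # find candidate images in the public registry
           [sudo] docker pull node:argon    # download an image to use as is, or as a base for a customized one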

Download and install Docker

  1. installation instructions for Linux based Operating Systems: https://docs.docker.com/engine/installation/

Implement the application


                   TODO

Dockerize the application


Create a docker image build file called Dockerfile


FROM node:argon
ENV appDir ${appDir:-/var/www/}

RUN mkdir -p ${appDir}
WORKDIR ${appDir}

COPY package.json ${appDir}/
RUN ["npm", "install"]

COPY bower.json ${appDir}/
RUN ["./node_modules/bower/bin/bower", "install", "--allow-root"]

COPY . ${appDir}
RUN ["./node_modules/gulp/bin/gulp.js"]

CMD ["npm", "start"]

EXPOSE 9090

FROM node:argon
we base our custom image on a suitable existing image from the Docker registry/hub. We could use the ubuntu image and download and install a particular nodejs version ourselves as part of the image build, but it is more convenient to use an image already created for a particular use. For other nodejs targeted images, see Docker hub.

ENV appDir ${appDir:-/var/www/}
sets an environment variable determining our application root, defaulting to /var/www/ when appDir is not already set.

RUN mkdir -p ${appDir}
the docker RUN directive executes a shell command.

WORKDIR ${appDir}
the docker WORKDIR directive sets the directory in which the subsequent RUN and CMD directives will be executed.

COPY package.json ${appDir}/
copies a file from our local directory, where we shall be running the docker build command, into the application root directory in the docker container

RUN ["npm", "install"]
installs application dependencies into the container $APP_DIR/node_modules

COPY bower.json ${appDir}/
copies bower.json into the application root directory so the frontend dependencies can be installed before the rest of the application files are copied.

RUN ["./node_modules/bower/bin/bower", "install", "--allow-root"]
root is the user running the commands (unless we specify another user with the docker USER directive) and bower complains when install is run as root unless we specifically allow it. The relative path ./node_modules/bower/bin/bower has to be used because bower is not installed globally in the node:argon image; we downloaded bower as a nodejs dependency, so its binary is available in the node_modules directory.

COPY . ${appDir}
copies the application files into the application root directory in the docker container.

RUN ["./node_modules/gulp/bin/gulp.js"]
now runs the default gulp task.

CMD ["npm", "start"]
there can be only one CMD directive in the Dockerfile. It either starts the application, as in our case, or, if ENTRYPOINT is specified (the executable to run after the image is loaded into the container), it sets the default command line arguments that executable runs with.

Example:

                # Default Memcached run command arguments
                CMD ["-u", "root", "-m", "128"]

                # Set the entrypoint to memcached binary
                ENTRYPOINT memcached

EXPOSE 9090
the port on which the application listens inside the container


Create a .dockerignore file to exclude files from being added to the image

.git
.gitignore
.gitattributes
node_modules
bower_components
Dockerfile
.dockerignore
*.bak
*.orig
*.md
*.swo
*.swp
*.js.*

node_modules and bower_components are excluded because these dependencies are downloaded and installed during the image build (the npm install and bower install steps above)

Create a docker image of your application (the command must be run in the directory containing the Dockerfile)

           [sudo] docker build -t [username/]image-name .

Run the application in the docker container

           [sudo] docker run -v "/home/tamara/.aws:/root/.aws" -p 3000:9999 -it image-name

  • -v ... binds a local directory to a directory in the container
  • -p ... the application port exposed by the image, 9999, is made accessible locally/from outside the container on port 3000

The application communicates with an AWS S3 bucket. The AWS authentication resides in the .aws subdirectory of the home directory of the user running the application, in the form of the file /home/tamara/.aws/credentials. The dockerized application is run by root in the container (unless we dictate otherwise with the docker USER directive), so at runtime we bind the local /home/tamara/.aws to the container directory /root/.aws.
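For reference, the credentials file uses the standard AWS CLI/SDK ini format (the values below are placeholders):

           [default]
           aws_access_key_id = YOUR_ACCESS_KEY_ID
           aws_secret_access_key = YOUR_SECRET_ACCESS_KEY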

Literature

Docker documentation
How To Create Docker Containers Running Memcached

Friday, 8 April 2016

How to pass parameters to a gulp task within the gulpfile

There is plenty of information out there on how to pass a parameter on the command line when running a gulp task. The example here shows how to pass a parameter from one task to another within the gulpfile.

Tested on Ubuntu 15.04 using node@4.4.2.

'use strict';
/*
 * performs a build task of uglifying and concatenating txt files in the top subdirectories
 * the built files are deposited in build directory of each top subdirectory
 *
 * the challenge was in passing a variable to a task
 *
 * works with node@4.4.2, gulp (CLI version 1.2.1, Local version 3.9.1)
 * 
 */
const gulp = require('gulp');

const fs = require('fs');
const path = require('path');
const concat = require('gulp-concat');
const uglify = require('gulp-uglify');
const notify = require('gulp-notify');
const _ = require('lodash');
const async = require('async');

// Function variable containing the processing task
// ------------------------------------------------
//      Process task needs to be performed on every given subdirectory.
//      For that the full path, that dynamically changes, is required.
//      The task is wrapped in a function, that accepts a subdirectory parameter,
//      which is then available to the task.
//      
const subdirTask = (subdirPath, cb) => {
    // if the result is to live in ./build/${subdirectory} => get the subdirectory from the path
    //      const basename = path.basename(subdirPath);
    
    // Define the processing task
    // --------------------------
    gulp.task('process', () => {
      // selects files to act on
      return gulp.src(`${subdirPath}/*.txt`)
        // does what needs to be done 
        .pipe(uglify())
        .pipe(concat('all.txt'))
        // puts the result in the specified place
        .pipe(gulp.dest(`${subdirPath}/build`))
        // notifies that work was done
        .pipe(notify(`... Finished processing ${subdirPath}`));
    });
    // Now run the task
    // ----------------
    gulp.start('process');
    console.log(`\t\t\t\tprocessing ${subdirPath}`);
    cb();
};

// The main building task
// ----------------------
//      gets the parent directory
//      gets its top level subdirectories
//      asynchronously runs the actual processing task
//
gulp.task('build', () => {
    console.log('Hello world! Build me up!');

    // Get parent directory
    // --------------------
    let baseDir = process.env.baseDir || `${__dirname}/test`;
    baseDir = path.isAbsolute(baseDir) ? baseDir : path.join(__dirname, baseDir);

    console.log(`PARENT DIRECTORY = ${baseDir}`);

    // Asynchronously get the directory content
    // ----------------------------------------
    fs.readdir(baseDir, (err, items) => {

        // retrieve only top level subdirectories
        //      use node-dir to get all nested directories
        let subdirs = _.filter(items, item => fs.statSync(`${baseDir}/${item}`).isDirectory());
        // get subdirectory full paths
        subdirs = _.map(subdirs, item => `${baseDir}/${item}` );

        console.log('PROCESSED SUBDIRS:');
        console.log(subdirs);

        // Asynchronously run the process task for each subdirectory
        // ---------------------------------------------------------
        //      async parameters:
        //          subdirs ...... array of subdirectories to work on
        //          subdirTask ... function variable, contains the work to be done
        //                              accepts two input parameters:
        //                                  1) subdirectory (provided implicitly by async)
        //                                  2) callback (provided as the third parameter to async.each)
        //          callback ..... callback function, the second parameter for subdirTask                         
        //          
        async.each(subdirs, subdirTask, (err) => {
            if (err) {
                console.error(err);
            } else {
                console.log(`Processing finished`);
            }
        });
        
    });
});

// default task is run when the gulp command is issued on its own
// --------------------------------------------------------------
gulp.task('default', ['build']);
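To run it, a sketch (the parent directory path is illustrative; when the baseDir environment variable is not set, the build falls back to ./test):

           gulp                                      # runs the default task, which runs build
           baseDir=/home/user/projects gulp build    # processes the top subdirectories of the given parent directory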

Wednesday, 30 March 2016

Make sure your plants don't die with Raspberry Pi and a moisture sensor


Credit goes to all the articles on this topic that I came across and from which I learnt.

#!/usr/bin/python
# Script for sending email notifications when the earth moisture changes
#
# Moisture sensor   Raspberry Pi
#      VCC        3V3  
#      GND        GND
#      DO         G12      
#  
# If you attach an LED to G12 as well, the LED will light up when there is not enough moisture;
# ie when the pin state is HIGH/True   
#---------------------------------------------------

import RPi.GPIO as GPIO                    # To get access to GPIO pins on the Raspberry Pi
from smtplib import SMTP_SSL, SMTPException # For sending email notifications
import time                                # For the sleep function

# Configuration
#---------------------------------------------------
# GPIO
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

# GPIO pin to connect the digital output from the moisture sensor to
PIN = 12
# Set the GPIO pin to receive input from the moisture sensor
GPIO.setup(PIN, GPIO.IN)
# GPIO will ignore changes within BOUNCETIME ms, ie will check roughly every BOUNCETIME ms
#      ie acts as a timer - reduces the chances of the callback being run multiple times
BOUNCETIME = 500

# email
smtp_host = "mail.btinternet.com"          # SMTP provider host
smtp_port = 465                            # SMTP provider port (corresponds either to SMTP_SSL or to SMTP)
smtp_username = "john.doe@btinternet.com"  # Login for the SMTP provider
smtp_password = "BIGSECRET"                # Password to login to the SMTP provider

smtp_sender    = "john.doe@btinternet.com" # This is the FROM email address
smtp_receivers = ['john.doe@btinternet.com','jane.doe@btinternet.com']  # These are the TO email addresses
# Prepare email messages
#  Triple quotes preserve line breaks in the string. 
# There MUST be an extra empty line after the subject line, otherwise the received email body is empty 
#---------------------------------------------------

# No moisture is detected
alert_message = """From: <Your friendly Raspberry Pi>
To: """ + ', '.join(smtp_receivers)
alert_message = alert_message + "\n"
alert_message = alert_message + """Subject: Moisture Sensor Notification - ALERT

Warning, no moisture detected! Plant is on its death bed !!!
"""

# Moisture is detected
thanks_message = """From: <Your friendly Raspberry Pi>
To: """ + ', '.join(smtp_receivers)
thanks_message = thanks_message + """Subject: Moisture Sensor Notification - THANK YOU

Thank you! Your care is much appreciated. The plant will live :)
"""

# This is our sendEmail function
#---------------------------------------------------

def sendEmail(smtp_message):
 print("Trying to send a message with %s, %s...\n" % (smtp_host, smtp_port))
 try:
  smtp_ssl = SMTP_SSL(smtp_host, smtp_port)
  smtp_ssl.login(smtp_username, smtp_password)
  smtp_ssl.sendmail(smtp_sender, smtp_receivers, smtp_message)         
  print "Successfully sent email"

 except SMTPException:
  print "Error: unable to send email"
 
 print("-----------------------------------------")

# Logic for sending an email
#---------------------------------------------------
# Callback function called when the specified GPIO PIN goes HIGH (rising edge, ie moisture lost)
#---------------------------------------------------

def alertThem(PIN):
 pin_state = GPIO.input(PIN)
 print("Pin %d state = %d (%s)" % (PIN, pin_state, time.asctime(time.localtime(time.time()))))

 if pin_state:
  print "LED off => near death experience"
  sendEmail(alert_message)
 else:
  print "LED on => all well and moist"
  #sendEmail(thanks_message)

# Watch the GPIO pin for rising edges (LOW -> HIGH)
# The pin state is checked and the callback is triggered;
# changes within the BOUNCETIME period are ignored (ie checks roughly every BOUNCETIME ms)
#---------------------------------------------------
GPIO.add_event_detect(PIN, 
                      GPIO.RISING, 
                      bouncetime=BOUNCETIME, 
                      callback=alertThem)

def loop():
 # Waiting for 0.5s makes sure running the script will not make the CPU 100% busy
 time.sleep(0.5)

# ==================================================
# Now run forever
#---------------------------------------------------
if __name__ == '__main__':
    try:
        print('Press Ctrl-C to quit.')
        print("Local current time : %s" % time.asctime(time.localtime(time.time())))

        alertThem(PIN)

        while True:
            loop()
    except KeyboardInterrupt:
        pass
    finally:
        GPIO.cleanup()

Monday, 28 March 2016

Little playtime with a Bubble machine and Raspberry Pi - now you see them, now you don't

Tested with Python 2.7.9.

# Playtime with a Bubble machine
#-----------------------------------------------------------------------------------
#
# The user has a predetermined time to have fun with the Bubble machine.
# He needs to provide the password. If it is correct, he will be able
# to enjoy the bubbles for a limited time, until he is asked for the password again.
# 
# If the password is correct, a green LED lights up and the Bubble machine
# starts churning. Before the password is provided, or if it is wrong, a red LED 
# is lit and there are no bubbles.
#
#-----------------------------------------------------------------------------------
# Bubble machine is connected to:
# ground (GND)
# 5V pin
# G25 pin
# (for more information see: http://blog.web-zazen.co.uk/2014/12/controlling-bubble-machine-with-arduino.html)
#
# LEDs are connected:
# long wire through a 10 kOhm resistor:  
#  green LED: to G18 
#  red LED  : to G20 
# short wire to ground (GND)
#-----------------------------------------------------------------------------------
#-----------------------------------------------------------------------------------

import RPi.GPIO as gpio
import time

# Setup
# Raspberry Pi pins
#-----------------------------------------------------------------------------------
gpio.setmode(gpio.BCM)
gpio.setup(18, gpio.OUT)   # Green LED
gpio.setup(20, gpio.OUT)   # Red LED
gpio.setup(25, gpio.OUT)   # Bubble machine

#  timeout
#-----------------------------------------------------------------------------------
playtime = 30     # Playtime length
nowtime = time.time()    # Starting now
stoptime = nowtime + playtime

# Playtime !!
#-----------------------------------------------------------------------------------
while ((stoptime-nowtime) > 0):
 print "You have %d seconds left" % (stoptime-nowtime)

 gpio.output(18, gpio.LOW)  # Starting with switched off green LED
 gpio.output(20, gpio.HIGH)  #   switched on red LED
 gpio.output(25, gpio.HIGH)  #  switched off Bubbles

 password = raw_input("Show you are the privileged one. What is the password? ")

 # Authentication
 # successful
        #---------------------------------------------------------------------------
 if password == "bubbles":
  gpio.output(18, gpio.HIGH) # Green LED on
  gpio.output(20, gpio.LOW) # Red LED off
  gpio.output(25, gpio.LOW) # Bubbles blowing!

  time.sleep(5)   # ... for 5 seconds we are happy

 # failed
        #---------------------------------------------------------------------------
 else:
  gpio.output(18, gpio.LOW) # Green LED off
  gpio.output(20, gpio.HIGH) # Red LED on
  gpio.output(25, gpio.HIGH) # No bubbles

  print("You failed the test!")

 nowtime = time.time()
 
# Clean up - switch everything off
#-----------------------------------------------------------------------------------
print "No time left, buster"
gpio.output(18, gpio.LOW)  # Ending with switched off green LED
gpio.output(20, gpio.LOW)  #        switched off red LED
gpio.output(25, gpio.HIGH)  #       switched off Bubbles

# Close the channels to avoid a message about channels being already in use
#-----------------------------------------------------------------------------------
gpio.cleanup()

Saturday, 19 March 2016

Go implementation of Fibonacci - two ways

There are various ways to implement Fibonacci, or other series, in Go. Some are common to different languages, like a recursive function or a closure. One approach is Go specific, taking advantage of Go's concept of channels. I shall show and explain the closure based and channel based implementations.

Closure based implementation

Closures are functions that "close over" the scope of the parent namespace and, as a result, have access to and remember variables in the parent scope even after the parent function has finished running. Closures are commonly used to implement, for instance, an incrementing counter, as in the sketch below.
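A minimal counter sketch (illustrative code, not part of the Fibonacci program below):

package main

import "fmt"

// counter returns a closure that remembers n between calls
func counter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	next := counter()
	fmt.Println(next(), next(), next()) // prints: 1 2 3
}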


Channel based implementation

Go introduced the concept of channels, which serve for communication between goroutines, the Go approach to concurrency. If a channel is created without specifying its capacity, it is unbuffered and therefore blocking: a send blocks until a receiver is ready, and a receive blocks until a value is sent. The sketch below shows this blocking hand-off.
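A minimal sketch of an unbuffered channel hand-off between two goroutines:

package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered: a send blocks until a receiver is ready

	go func() {
		ch <- "hello from a goroutine" // blocks here until main receives
	}()

	fmt.Println(<-ch) // receives the value, letting the goroutine finish
}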


Full program


package main

import (
	"fmt"
	"time"
)

func main() {
	max := 15

	fmt.Println("================== FIBONACCI: CLOSURE implementation ==================")
	start := time.Now()
	fmt.Printf("%s\n", start)

	f := fib_closure()

	for n := 0; n < max; n++ {
		fmt.Printf(">>> %d\n", n)
		fmt.Println(f(n))
	}
	end := time.Now()
	fmt.Printf("Calculation finished in %s \n", end.Sub(start))

	fmt.Println("================== FIBONACCI: CHANNEL implementation ==================")
	start = time.Now()
	fmt.Printf("%s\n", start)

	c := fib_chan()

	for n := 0; n < max; n++ {
		fmt.Printf(">>> %d\n", n)
		fmt.Println(<-c)
	}
	end = time.Now()
	fmt.Printf("Calculation finished in %s \n", end.Sub(start))
}

func fib_closure() func(int) int {
	i, j := 1, 1

	return func(n int) int {
		switch {
		case n == 0 || n == 1:
			return 1
		default:
			i, j = j, i+j
		}

		return i
	}
}

func fib_chan() chan int {
	c := make(chan int)

	go func() {
		for i, j := 0, 1; ; i, j = i+j, i {
			c <- i
		}
	}()

	return c
}


Explanation


Shared by both approaches:


We decide how many Fibonacci numbers we want calculated:

        max := 15

We are also interested in the performance of each approach, so we shall do some benchmarking using start := time.Now(), end := time.Now() and the subtraction end.Sub(start).

Fibonacci - closure implementation


func fib_closure() func(int) int {

i, j := 1, 1 ..... for 0 and 1, the fibonacci number is 1

return func(n int) int { .... We return a closure that on each subsequent invocation remembers the values of i and j.

switch {
case n == 0 || n == 1:
return 1
default:
i, j = j, i+j  ........ The previous state is remembered (i, j) and the new one calculated (j, i+j)
}

return i
}
}

Then doing the calculations: 

fmt.Println("================== FIBONACCI CLOSURE implementation ==================")
start := time.Now()
fmt.Println("%s", start)

f := fib_closure() ............ We initialize the state by running the fib_closure()  and creating a reference to its                                                        return function/closure.

for n := 0; n < max; n++ {
fmt.Printf(">>> %d\n", n)
fmt.Println(f(n)) ..... The closure f(n) remembers calculations of < n
}
end := time.Now()
fmt.Printf("Calculation finished in %s \n", end.Sub(start))


Fibonacci - channel implementation


func fib_chan() chan int {
c := make(chan int)

go func() {
for i, j := 0, 1; ; i, j = i+j, i { ..... the logic of the fibonacci calculation
c <- i  ................................. the result is sent to the channel
}
}()

return c
}

Then doing the calculations:

fmt.Println("================== FIBONACCI: CHANNEL implementation ==================")
fmt.Println("%s", start)

start = time.Now()
c := fib_chan() ................ Go channel which returns the Fibonacci result sent to it

for n := 0; n < max; n++ {
fmt.Printf(">>> %d\n", n)
fmt.Println(<-c) ...... the channel returns received the calculated value
}
end = time.Now()
fmt.Printf("Calculation finished in %s \n", end.Sub(start)) 


Performance comparison


On different runs, I got the following runtimes:

Closure - Channel

      222 - 140 us
      253 - 168 us
      255 - 175 us

The variation between runs is due to whatever else the OS was doing at the time the program ran. The closure implementation is roughly 1.5 times slower. (The recursive approach is the slowest of all.)
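For more reliable numbers than one-off timings, the standard Go testing package can be used. A sketch (assuming fib_closure and fib_chan from the program above sit in the same package; run with go test -bench=.):

// fib_test.go
package main

import "testing"

// BenchmarkClosure measures pulling successive numbers from the closure
func BenchmarkClosure(b *testing.B) {
	f := fib_closure()
	for n := 0; n < b.N; n++ {
		f(n)
	}
}

// BenchmarkChan measures pulling successive numbers from the channel
func BenchmarkChan(b *testing.B) {
	c := fib_chan()
	for n := 0; n < b.N; n++ {
		<-c
	}
}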

USB memory stick - read-only problem all of a sudden

OS: Ubuntu 15.10
CPU: 64bit


PROBLEM


I used a memory stick to transfer my ssh files. I copied a directory with the files onto the stick, then took the stick out. From then on it was not possible to create new files and directories on the stick, to delete existing ones (even though the menu showed the action as a possibility), or to change their permissions. The message I got said I was dealing with a read-only file system.

SOLUTION


After various attempts to remedy the situation, I decided to reformat the memory stick. The stick originally came with some files that I wanted to keep. There was also an old directory of mine and the new one I had added last. I created a backup directory on my Desktop and tried to copy over the files I wanted to keep. All were copied successfully apart from the last one, which, on the attempt to copy, gave an input/output error.

I concluded the problem was caused by me taking the memory stick out after adding the ssh directory, without ejecting the drive first.

I reformatted the stick (fat32), copied back the original files and directories, added the ssh directory and then ejected the stick. The stick is writable and all files and directories are healthy.
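For reference, reformatting from the command line looks roughly like this - a sketch, the device name is illustrative; always confirm it with lsblk first, because formatting the wrong device destroys its data:

           lsblk                            # identify the stick, eg /dev/sdb1
           sudo umount /dev/sdb1            # unmount it before formatting
           sudo mkfs.vfat -F 32 /dev/sdb1   # create a fresh fat32 filesystem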