All posts by Germán Podestá


Docker for JavaScript front-end apps

Introduction

We all know that the Docker registry is fully populated with user-contributed images that can handle a wide variety of scenarios. However, it is also true that not all of these images were created following the same standards as the [official images]( You can find more info at the end of this article. Here we provide a base image for both building and developing front-end applications using bower and gulp, while trying to follow Docker best practices.

Getting started

The overall idea is to create a Dockerfile in the base folder of the project that inherits from our predefined Docker image. An example could be:
FROM devecoop/gulp-bower:node-0.10.38-onbuild
EXPOSE 3000
In this particular case we added `EXPOSE 3000` to indicate the port where our application will be running.

Development workflow

With this approach you can use Docker to improve the development flow, and you can reuse the same configuration to build and deploy the front-end app. So, the first time, you must build the image with:
$ docker build -t my-gulp-bower-image .
Then create the container; this time we mount the project folder inside the container so we can work with the files from outside.
$ docker run -it -v $PWD:/usr/src/app my-gulp-bower-image serve
You can also bind port 3000 to the host computer (in our case it wasn't necessary; we used the container IP for testing). As you can see, we used 'serve' as the command to run in the container: the container is configured to run gulp followed by the specified command, so in our case it started the development server inside Docker (you can check how it works in detail in the GitHub repo listed at the end of this article). Now, suppose another developer has added some dependencies to bower, for example. We can run:
$ docker exec -it <container-name> bash
To enter the container and run `bower install`, for example. Alternatively, we can rebuild the image and create the container again, or better, use docker-compose to automate this. Take note that Docker's exec command is not available in all Docker versions.

Deployment workflow

In this case we only need to specify `build` at the end of the docker command:
$ docker run --rm -it -v $PWD:/usr/src/app my-gulp-bower-image build
In our case the built application ends up in the dist/ directory, so we only need to copy that to the production server. In addition, we added `--rm` to tell Docker to discard the container once the process is finished.

Possible enhancements and references

Of course, this guide could be improved by adding docker-compose to automate the flow even further. We could also go deeper and integrate the sample Dockerfile into a Yeoman generator.
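As a sketch of that docker-compose idea, the manual build/run flags above could be encoded in a minimal compose file. The service name `app` is an assumption, and the file is hypothetical; it simply mirrors the `docker build` / `docker run -v $PWD:/usr/src/app ... serve` commands we typed by hand:

```shell
# Generate a minimal, hypothetical docker-compose.yml (v1 format) mirroring
# the manual build/run flow: build from the local Dockerfile, run "serve",
# mount the project folder, and publish the exposed port.
cat > docker-compose.yml <<'EOF'
app:
  build: .
  command: serve
  volumes:
    - .:/usr/src/app
  ports:
    - "3000:3000"
EOF
```

With this in place, `docker-compose up` would build the image and start the development server in one step.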
  • GitHub Repo with the dockerfiles
  • Guidelines that the official image creators must comply with
  • Best Practices on writing dockerfiles.

Password management with clipperz

Clipperz is a very useful password manager: it has a lot of features and runs all its strong cryptographic algorithms in the browser. However, it is quite difficult to install on your own server because there is no complete guide for that.

First download the program and some utilities

$ git clone
$ cd password-manager
$ sudo apt-get install python-git

Edit 'clipperz/backend/php/properties/' to correct the paths:

{
    "request.path": "/php/index.php",
    "dump.path": "/php/dump.php",
    "": "false"
}

Build the application

$ ./scripts/build install --backends php --frontends beta

Install mysql, php and apache:

$ sudo apt-get install php5-mysql mysql-server apache2 libapache2-mod-php5

Now we have to create the database:

$ mysql -u root -p

> CREATE DATABASE clipperz;
> GRANT ALL PRIVILEGES ON clipperz.* TO 'clipperz'@'localhost' IDENTIFIED BY 'clipperz';
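If you prefer to keep the database setup reproducible, the same statements (database, user and password names as above) can be saved to a small SQL script and applied non-interactively:

```shell
# Write the setup statements to a script; apply it later with:
#   mysql -u root -p < clipperz.sql
cat > clipperz.sql <<'EOF'
CREATE DATABASE clipperz;
GRANT ALL PRIVILEGES ON clipperz.* TO 'clipperz'@'localhost' IDENTIFIED BY 'clipperz';
EOF
```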

Copy the installation and update the config with your database details

$ cp -R target/php /var/www
$ vi /var/www/php/configuration.php

Go to http://clipperz_host/php/setup/index.php, click on 'POG me up' and then proceed.

$ cd /var/www
$ cp php/beta/index.html ./

Remove the ability to access the database via the web:

$ rm -fr php/setup

Quick guide to installing Tryton with the Argentinian localization

In the following article we describe the tools we have used and modified at Devecoop to run the Python platform consisting of the Tryton client, the Tryton server and the database. We based our work on the tools created by Nantic to ease the installation, and we also include the Argentinian localization.

By the end of this guide we will have Tryton working, with a large number of modules and tools included, for example:

- Trytond: the Tryton server.
- Tryton: the Tryton client.
- Sao: the web client.
- Proteus: a useful library for testing and generating test data.
- Official modules.
- Available modules for Argentina.

Please note that we have used Ubuntu 12.04.

Let's start

The first step is to update the package index:

$ apt-get update

Then install the packages below, which we will need to clone the different modules:

$ apt-get install mercurial
$ apt-get install git

And other useful libraries:

LXML is a library for processing XML and HTML

$ sudo apt-get install libxml2-dev libxslt1-dev

LDAP is a standard protocol

$ sudo apt-get install libldap2-dev libsasl2-dev libssl-dev


Quilt is used to manage series of patches:

$ sudo apt-get install quilt

Needed for account_invoice_ar:

Swig is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages

$ sudo apt-get install swig


M2Crypto is also needed for account_invoice_ar; after installing it system-wide, copy it into the virtualenv:

$ sudo apt-get install python-M2crypto
$ cp -r /usr/lib/python2.7/dist-packages/M2Crypto* [inside the virtualenv]/.virtualenvs/tryton_env/lib/pythonX.X/site-packages/

Tryton uses Postgres as its database engine; below are the needed packages:

$ sudo apt-get install postgresql postgresql-contrib pgadmin3 postgresql-server-dev-all

The system package installation is complete; now we will create the directory for the new project:

$ mkdir proyecto_tryton
$ cd proyecto_tryton

Within the new directory we have to clone the following repositories, which include the Argentinian localization, tasks and utils. That means many useful commands that we will use with 'invoke' (a Python library for creating scripts).

$ hg clone config
$ hg clone tasks
$ hg clone utils

Needed for account_invoice_ar:

hg clone
cp -r pyafipws [inside the virtualenv]/.virtualenvs/tryton_env/lib/pythonX.X/site-packages

With the command 'invoke -l' all the available commands will be displayed.

Many of the repositories that we have cloned need some dependencies, which we download with 'pip' inside an independent environment created with virtualenvwrapper:

$ sudo apt-get install virtualenvwrapper
# Close and reopen the terminal
$ sudo apt-get install python-dev
$ mkvirtualenv nombre_del_entorno

We probably have an old 'pip' version that can cause problems during the installation, so let's update 'pip' first:

$ pip install pip -U
$ pip install -r tasks/requirements.txt
$ pip install -r config/requirements.txt

Then we have to create a file called 'local.cfg' in the main directory, to which 'config/local.cfg' will be a symbolic link:

$ touch local.cfg

Now we are able to execute the 'bs' tasks that will clone all the modules specified within 'project/config/':

$ invoke clone --config config/base.cfg
$ invoke clone --config config/core.cfg
$ invoke clone --config config/tryton-ar.cfg
$ invoke bs.create_symlinks

The following text must be copied into 'trytond.conf'; this will be the server configuration (user and password are examples):

#This file is part of Tryton.  The COPYRIGHT file at the top level of
#this repository contains the full copyright notices and license terms.

# This is the hostname used when generating tryton URI
#hostname =

# Activate the json-rpc protocol
jsonrpc = *:8000
#ssl_jsonrpc = False

# Configure the path of json-rpc data
#jsondata_path = /var/www/localhost/tryton

# Activate the xml-rpc protocol
#xmlrpc = *:8069
#ssl_xmlrpc = False

# Activate the webdav protocol
#webdav = *:8080
#ssl_webdav = False

# Configure the database type
# allowed values are postgresql, sqlite, mysql
db_type = postgresql

# Configure the database connection
## Note: Only databases owned by db_user will be displayed in the connection dialog
## of the Tryton client. db_user must have create permission for new databases
## to be able to use automatic database creation with the Tryton client.
db_host = localhost
db_port = 5432
db_user = tryton
db_password = tryton
#db_minconn = 1
#db_maxconn = 64

# Configure the postgresql path for the executable
#pg_path = None

# Configure the Tryton server password
admin_passwd = admin

timezone = America/Argentina/Buenos_Aires

# Configure the path of the files for the pid and the logs
#pidfile = False
#logfile = False

#privatekey = server.pem
#certificate = server.pem

# Configure the SMTP connection
#smtp_server = localhost
#smtp_port = 25
#smtp_ssl = False
#smtp_tls = False
#smtp_password = False
#smtp_user = False

# Configure the path to store attachments and sqlite database
data_path = /var/lib/tryton

# Allow to run more than one instance of trytond
#multi_server = False

# Configure the session timeout (inactivity of the client in sec)
#session_timeout = 600

# Enable psyco module
# Need to have psyco installed
#psyco = False

# Enable auto-reload of modules if changed
#auto_reload = True

# Prevent database listing
#prevent_dblist = False

# Enable cron
# cron = True

# unoconv connection
#unoconv = pipe,name=trytond;urp;StarOffice.ComponentContext

# Number of retries on database operational error
# retry = 5

We are almost finished; we have to create the user for the database:

sudo su postgres
createuser --pwprompt --superuser tryton

To check database access, open the file '/etc/postgresql/9.1/main/pg_hba.conf' and check whether there is a line like this:

local    all    all    md5

If there is no such line, add it.
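That check-then-append step can be scripted. The sketch below operates on a sample copy so it is safe to try anywhere; on a real system you would point CONF at the pg_hba.conf path above (which requires root) and reload PostgreSQL afterwards:

```shell
# Demonstrated on a sample copy; on a real server set
#   CONF=/etc/postgresql/9.1/main/pg_hba.conf  (needs root)
CONF=pg_hba.conf.sample
printf 'local   all   postgres   peer\n' > "$CONF"   # stand-in for the existing file

# Append the md5 line only if it is not already present
grep -qE '^local[[:space:]]+all[[:space:]]+all[[:space:]]+md5' "$CONF" || \
  echo 'local    all    all    md5' >> "$CONF"
```

After editing the real file, reload the server (e.g. `sudo service postgresql reload`) so the change takes effect.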

Now we are prepared to run the server:

./ start

This command will show us the logs in real time. It also allows you to stop or restart the server, specify the database, and other features:


./ stop
./ krestart

The Nantic guide is

Documenting directory trees with tree

I had to document the directory hierarchy of our running servers. It occurred to me to use the 'tree' command to generate a txt from the hierarchy tree that can then be added to our wiki. Plus, you can add a brief description for each directory by hand.

The tree command can generate the directory hierarchy starting from a specific directory. It can print to the screen, generate a text file, or generate an HTML file. For example:


├── bin
├── games
├── include
├── lib
├── lib32
├── local
├── sbin
├── share
└── src

9 directories

To install it with apt:

$ sudo apt-get install tree

To copy the output to a text file you can use the -n option (to deactivate the color escape characters) and -o to indicate a file name:

$ tree -d -L 1 -n -o fhs.txt /

You can generate HTML with the -H option:

$ tree -H -d -L 1 -n -o fhs.html /

You can specify a pattern of files to include with the -P option, and you can also specify several directories to search. Don't forget the single quotes around the -P pattern to prevent bash from expanding it:

$ tree -P '*.list' sources.list.d/ /etc/apt/
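The quotes matter because the shell, not tree, would otherwise expand the glob before the command ever sees it. A quick demonstration with echo (the `demo` directory and file names are just for illustration):

```shell
# The shell expands unquoted globs before the command runs
mkdir -p demo
touch demo/a.list demo/b.list
echo demo/*.list      # expanded by the shell: prints "demo/a.list demo/b.list"
echo 'demo/*.list'    # quoted: prints the literal pattern "demo/*.list"
```

The same happens with `tree -P`: unquoted, tree would receive an already-expanded file list instead of the pattern it expects.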