Category Archives: development

Exploring Loopback – Part 2.

Imagen-Devecoop-Loopback

Hi! In this new post we will add another model through code, see the relationships, and play a little with datasources.

In the previous post we created a todo model using the wizard.

Now we will create another model, category, this time manually. There are many ways to do this; we will look at two of them:

  1. through code
  2. through a JSON schema

1 - Creating a model manually through code

As seen in the previous post, models are generally extended from a base model (PersistedModel), so we have to do the following:

  1. Create a new file, for instance “create-category.js”, inside the boot (server/boot) folder, so the model is initialized when the application starts. You can copy the content from other files in that same folder. The files inside the “boot” folder are scripts that are executed in order when the application starts, after the bootstrapper runs (https://github.com/strongloop/loopback-boot).

These scripts are usually used for configuration, model creation, and test data creation.

Method 1 - using inheritance.

Adding these lines we create the model:

  var Model = server.loopback.Model;          // the base Model class
  var Category = Model.extend('categories');  // our new model, named "categories"

Here you can find the code.

Method 2 - using configurations.

The model can be created like this also:

  var config = {
    dataSource: 'db',
    public: true
  };
  server.model('Category', config);

Here you can find the code, and here you can see the configuration options that can be applied.

Method 3 - using the createModel method:

var Category = server.loopback.createModel('Category');

Here you can find the code.

This is the recommended way to create a model manually. To learn more about createModel, we can read the apidocs.
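Note that a model created this way still has to be attached to the app and a datasource; a minimal sketch, assuming the default in-memory datasource named 'db':

var Category = server.loopback.createModel('Category');
// Attach the new model to the app and expose it over REST
server.model(Category, { dataSource: 'db', public: true });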

I also include here this answer by Raymond Feng, co-founder of LoopBack, if you want to know what he thinks about these different methods.

Now that we have our model created when the server starts, we can use it by creating a file “category.js” inside “common/models”, with this content:

module.exports = function(Category) {
  
};

We will see later what else can be done inside this file; for now, leave it empty.

We will review another way of defining a model.

2 - Defining a model manually via JSON schema

To begin with, let’s add a “name” field.

Create “category.json” inside the common/models folder:

  {
    "name": "category",
    "plural": "categories",
    "base": "PersistedModel",
    "idInjection": true,
    "options": {
      "validateUpsert": true
    },
    "properties": {
      "name": {
        "type": "string",
        "required": true
      }
    },
    "validations": [],
    "relations": {},
    "acls": [],
    "methods": []
  }

Then create “category.js”, just like we did before.

Finally, we add this into “model-config.json”:

  "category": {
      "dataSource": "db",
      "public": true
    }

You can see this code here.

Validations

So we said that inside “category.js” we could do other things. What we do in this file is add behaviour to the model: remote methods, hooks, business logic, validations, and more.

For now, let’s take a look at validations, and how LoopBack makes them really easy. Our model has a name, and we want it to be unique. To achieve this, let’s add in category.js the following:

  module.exports = function(Category) {
    Category.validatesUniquenessOf('name', {message: 'the name must be unique'});
  };

You can find the code here.

You can test it: if we already have an “Example” and we try to add another “Example”, the explorer will display:

Imagen-Duplicate

Validation is a big topic. For those interested, here’s a very good article about all the validation methods provided by the framework.
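For instance, a couple more of the built-in validators could be chained in the same file; a quick sketch (the length limits here are made up):

  module.exports = function(Category) {
    Category.validatesUniquenessOf('name', {message: 'the name must be unique'});
    // Also require the name to be present and reasonably sized
    Category.validatesPresenceOf('name');
    Category.validatesLengthOf('name', {min: 3, max: 50});
  };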

Relationships

So far, we haven’t gone too deep. Here’s when the interesting part begins.

With our new model we want to categorize our ToDos: group them, search them, etc. Let’s say a “todo” can have zero, one, or more categories; the todo model has a relationship with the category model.

We can define relationships in the JSON schema. For this example we’ll use hasAndBelongsToMany, since a todo can have many categories and a category can include many todos.

To add this relationship, we go to todo.json and add:

  "relations": {
    "categories": {
      "type": "hasAndBelongsToMany",
      "model": "category"
    }
  }

Here is the list of all possible relationships.

Let’s see an example:

Add some categories and todos first, like we did in the previous post.

  • I created 3 categories: House, Animals, Car

Imagen-GET-Categories

  • And I created 2 items: Clean bathroom, Buy food

Imagen-GET-Todo

And to add a category to an item, I send a PUT request to the endpoint /todos/{id}/categories/rel/{fk}. Here you can see all the HTTP methods.

Let’s add category “House” (fk: category id 1) to the item “Clean bathroom” (id: todo id 1):

Imagen-PUT-Categories-TODO

And we get:

Imagen-PUT-Categories-TODO-Response

To verify, make a GET request to /todos/{id}/categories

Imagen-GET-Categories-TODO

We can see we now have “House”

Imagen-GET-Categories-TODO-Response
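Outside the explorer you can do the same linking from the command line; for example with curl, assuming the default /api prefix:

  $ curl -X PUT http://localhost:3000/api/todos/1/categories/rel/1
  $ curl http://localhost:3000/api/todos/1/categories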

Filters

Another great LoopBack feature is the different ways to query our data. In this post we will see the most common one, the “where” filter.

If we want to search for the category with name “House”, we do this in the GET to categories:

Imagen-GET-Categories-TODO-Filter

We get all those that match “House” exactly. We can also search for the ones starting with “A”:

{"where" : {"name":{"like":"A"}}}

and we get “Animals”.

Imagen-GET-Categories-Filter-Like-Response
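Outside the explorer, the same filter travels as a query-string parameter on the raw endpoint; for example, again assuming the default /api prefix:

  $ curl 'http://localhost:3000/api/categories?filter={"where":{"name":{"like":"A"}}}'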

We can do this for all the models. Try it yourself: how do you get the list of all the pending ToDos?

We will see more on filters in the next part, when we integrate the client.

Persistence

Right now it’s all great, but we lose the data as soon as the server stops! We need to persist our data. LoopBack can connect to all the most popular databases. Let’s see how easy it is to connect to a NoSQL database like MongoDB, and to a relational database like MySQL.

Persistence with MongoDB

Let’s connect to my database of choice, MongoDB, through the CLI.

First, of course, we need Mongo installed, version 2.6 or higher; the downloads are here.

Then we install the mongo connector, using npm:

  $ npm install loopback-connector-mongodb --save

And add our new datasource:

  $ slc loopback:datasource

We get the familiar wizard:

? Enter the data-source name: todoMongo
? Select the connector for todoMongo: 
  PostgreSQL (supported by StrongLoop) 
  Oracle (supported by StrongLoop) 
  Microsoft SQL (supported by StrongLoop) 
❯ MongoDB (supported by StrongLoop) 
  SOAP webservices (supported by StrongLoop) 
  REST services (supported by StrongLoop) 
  Neo4j (provided by community) 
(Move up and down to reveal more choices)

Let’s review the choices; they are self-descriptive:

  • ? Enter the data-source name: Name of the data source, any name you want, in this case, todoMongo
  • ? Select the connector for todoMongo: The connector name, in this case MongoDB; you can see it is supported by StrongLoop. There are many more to choose from.

In datasources.json we will find the new one:

{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "todoMongo": {
    "name": "todoMongo",
    "connector": "mongodb"
  }
}

It’s all set. Before using the app, do the following:

  1. Configure the connection:
  "todoMongo": {
    "name": "todoMongo",
    "connector": "mongodb",
    "host": "127.0.0.1", 
    "database": "todoDB", 
    "username": "", 
    "password": "", 
    "port": 27017 
  }

  2. Tell the models to use the new datasource, changing the dataSource field in model-config.json, like this:
  "todo": {
    "dataSource": "todoMongo",
    "public": true
  },
  "category": {
    "dataSource": "todoMongo",
    "public": true
  }

Here you’ll find the code so far.

Go back to the explorer and add some categories. Now if you quit the server, start it again, and GET the categories, you will see they were persisted.

Persistence with MySQL

This works much the same as with Mongo.

Make sure you have MySQL 5.0 or higher; downloads are here.

Then follow the same steps as we did with Mongo: install the connector, configure the models and the connection, and that’s it.
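The commands mirror the Mongo ones, and the resulting entry in datasources.json would look something like this (database name and credentials here are just examples):

  $ npm install loopback-connector-mysql --save
  $ slc loopback:datasource

  "todoMySQL": {
    "name": "todoMySQL",
    "connector": "mysql",
    "host": "127.0.0.1",
    "port": 3306,
    "database": "todoDB",
    "username": "root",
    "password": ""
  }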

Here you can find the connector documentation if needed.

That’s it for this part, I hope you enjoyed it, I sure did.

Next time we will integrate the client with our application.

Exploring Loopback – Part 1.

Imagen-Devecoop-Loopback

Hi, this is the first part of a series of articles related to the Loopback framework.

LoopBack, as its page says, is a JavaScript framework based on Express, so if you know Express it should be easy to understand and apply your knowledge.

LoopBack is an open source Node.js framework built on top of Express optimized for building APIs for mobile, web, and other devices. Connect to multiple data sources, write business logic in Node.js, glue on top of your existing services and data, connect using JS, iOS & Android SDKs

In my last project we chose to use this powerful and interesting framework. Among its features we find:

  • Easy-to-use CLI wizard.
  • Built-in API Explorer.
  • Several features for model creation, relationships, and ACLs.
  • It’s isomorphic, sharing the same API between client and server side.

The best way to show you the potential is using it, so here we go…

Step 1 - Installation:

First we need to verify that we have Node and npm installed; if you need help, check this awesome post by Ale.

Easy, we have an npm package, so run the following command:

$ npm install -g strongloop

(Yes, it says strongloop; that’s the company that develops LoopBack, recently acquired by IBM.)

Once we have it installed, let’s get to work!

The first thing we need to do is create a new project. As I’m someone who forgets the things I have to do (and I have too much to do), I think we can create something easy and productive: a TODO list.

How do we do that? With our easy-to-use CLI wizard:

$ slc loopback

In general the CLI wizard guides us by asking questions; in this case we have the following (yes, it’s a Yeoman generator):

[?] Enter a directory name where to create the project: todo-app
[?] What's the name of your application? todo-app

Here we can see what we just did.

Step 2 - Creating our model

Our next step is creating our todo model, which is going to have a text field with string type and a boolean field to know if it is completed or not.

$ cd todo-app
$ slc loopback:model
? Enter the model name: todo
? Select the data-source to attach todo to: db (memory)
? Select model's base class: PersistedModel
? Expose todo via the REST API? Yes
? Custom plural form (used to build REST URL): todos
Let's add some todo properties now.

What did we just choose?

  • Select the data-source to attach todo to: db (memory): The memory option means that when we stop the app, we lose all saved data. In the next posts we will see how to use different datasources.
  • Select model’s base class: PersistedModel: PersistedModel is the base model of built-in models, except for Email. It provides all the standard create, read, update, and delete (CRUD) operations and exposes REST endpoints for them.
  • Expose todo via the REST API? Yes: We can use the API Explorer.

So far we’ve created the model; now we need to add the properties I mentioned:

Enter an empty property name when done.
? Property name: text
   invoke   loopback:property
? Property type: string
? Required? Yes
Let's add another todo property.
Enter an empty property name when done.
? Property name: completed
   invoke   loopback:property
? Property type: boolean
? Required? Yes

To finish, we just press ctrl+c.

Two files were added: todo.js and todo.json. The todo.json file is where we define properties, fields, relationships, permissions, etc. The todo.js file is where we are going to create the remote methods, hooks, and any code related to the model.

This is our todo.json file:

{
  "name": "todo",
  "plural": "todos",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "text": {
      "type": "string",
      "required": true
    },
    "completed": {
      "type": "boolean",
      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}

And todo.js:

module.exports = function(Todo) {

};

Additionally, the model and its datasource were registered in the model-config.json file.
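That entry simply points the model at a datasource and exposes it, like this:

"todo": {
  "dataSource": "db",
  "public": true
}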

Here we can see what we just did.

So, what do we do next? How about running what we just built? We can run the app like this:

$ node .
Browse your REST API at http://0.0.0.0:3000/explorer
Web server listening at: http://0.0.0.0:3000/

If we go to http://0.0.0.0:3000/explorer or http://localhost:3000/explorer we will see two models, todo and Users (the exposed models are the ones that have the public property set to true).

Now it’s your turn to test the API (I will only show a couple of simple examples). LoopBack already created the API for us, with the most common operations: POST, GET, PUT, find, exists, etc.

Adding a new item:

First we open the accordion for POST /todos (http://localhost:3000/explorer/#!/todos/create).

We have three separate groups in there: Response Class, Parameters, and Response Messages.

Response Class has two visualization options: Model, which shows us how the properties are defined (type, required or not, etc.), and Model Schema, a JSON with default values.

In Parameters we have a textarea called value, where we can add the item we want (write the JSON by hand, or click on the Model Schema in the Data Type column on the right to set it as the parameter value).

In both cases we can set the content type too.
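For our todo model, the JSON we paste into value would look something like this:

{
  "text": "Clean the kitchen",
  "completed": false
}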

Finally, we have the Response Messages, which show us the request URL and some data from the response, like the body, the status code, and the headers.

Imagen-Making-POST

In this example we add one item, “Clean the kitchen” :(. After we click on “Try it out!”, we get the following response:

Imagen-POST-RESPONSE

To see all the added items we can follow the same steps with the GET option. We can also use filters here, but we will leave that for later.

Imagen-GET

That was a first look at what LoopBack offers: we created a REST API without adding a single line of code.

In the next part we will see how to integrate the client and connect with a database.

docker

Docker for javascript front-end apps.

Introduction

We all know that the Docker registry is full of user-contributed images that can handle a wide variety of scenarios. It is also true, though, that not all of these images were created following the same standards as the official images (https://registry.hub.docker.com/search?q=library&f=official); you can find more info at the end of this article. In this article we intend to provide a base image for both building and developing front-end applications using bower and gulp, while trying, as much as we can, to follow the best practices for Docker.

Getting started

The overall idea is to create a Dockerfile in the base folder of the project that inherits from our predefined Docker image. An example could be:
FROM devecoop/gulp-bower:node-0.10.38-onbuild

EXPOSE 3000
In this particular case we added `EXPOSE 3000` to indicate the port where our application will be running.

Development workflow

With this approach you can use Docker to improve the development flow, and you can also use the same configuration to build and deploy the front-end app. So, the first time you run it you must build the image with:
$ docker build -t my-gulp-bower-image .
And then create the container, but this time we are going to mount the project folder inside the container so we can work with the files from outside.
$ docker run -it -v $PWD:/usr/src/app my-gulp-bower-image serve
You can also bind port 3000 to the host computer (in our case it wasn't necessary; we used the container IP for testing). As you can see, we used 'serve' as the command to run in the container; the container is configured to run gulp plus the specified command, so in our case we started the development server inside Docker (you can check in detail how it works in https://github.com/Devecoop/docker-gulp-bower/blob/master/node-0.10.38/docker-entrypoint.sh).

Now suppose another developer has added some dependencies to bower, for example. We can run:
$ docker exec -it <container-id> bash
to enter the container and run `bower install`. Alternatively, we can rebuild the image and create the container again, or better, use docker-compose to automate this. Note that Docker's exec command is not available in all Docker versions.

Deployment workflow

In this case we only need to specify build at the end of the docker command:
$ docker run --rm -it -v $PWD:/usr/src/app my-gulp-bower-image build
In our case the built application ends up in the dist/ directory, so we only need to copy that to the production server. In addition, we added `--rm` to tell Docker to discard the container once the process is finished.

Possible enhancements and references

Of course, this guide could be improved by adding docker-compose to automate the flow even more. We could also go deeper and integrate the sample Dockerfile into a Yeoman generator.
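As a hint of what that could look like, a minimal docker-compose sketch using the v1 syntax of the time (the service name is made up):

app:
  build: .
  command: serve
  volumes:
    - .:/usr/src/app
  ports:
    - "3000:3000"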
  • GitHub repo with the Dockerfiles: https://github.com/devecoop/docker-gulp-bower
  • Guidelines that official image creators must comply with: https://github.com/docker-library/official-images/blob/master/README.md
  • Best practices for writing Dockerfiles: https://docs.docker.com/articles/dockerfile_best-practices/

Nginx as an automatic reverse proxy

Nginx is a nice piece of software, an elegant webserver that keeps things simple (although it has given me some headaches). In this post I'll show you how to set up a reverse proxy for any hostname on your internal/external network. A practical use case could be the following:
[PC] <-VPN-> [VPN TERMINATION POINT] <--> [HOST A.INTRANET.LOCAL]
                                     <--> [HOST B.INTRANET.LOCAL]
                                     <--> [HOST C.INTRANET.LOCAL]
Let's say we are working remotely and have a VPN connection that can only reach a single Linux box (the VPN termination point), but we need to browse other hosts on the internal network, e.g. A.INTRANET.LOCAL. The solution to this problem is simple, but we need to make some assumptions:
  • The intranet has an internal DNS server capable of resolving INTRANET.LOCAL subdomains.
  • The websites we want to access are all accessible via hostname.
All we need to do is install nginx. On Ubuntu/Debian it is as simple as:
$ sudo apt-get install nginx
Then put the following inside the /etc/nginx/sites-enabled/default file:  
server {
    listen      80;
    server_name localhost;
    access_log  /tmp/nginx.access.log;

    location / {
        resolver 10.47.4.109;
        proxy_pass $scheme://$host;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Let's explain the tricky parts a little bit:
  • resolver 10.47.4.109: This is necessary because nginx does not use the standard DNS resolution method (a.k.a. resolv.conf), so we need to configure a DNS server. In this case 10.47.4.109 is the intranet DNS server.
  • proxy_pass $scheme://$host: This is simple, it redirects every incoming request to the same host it was originally intended for. The special variable $scheme contains the protocol (http, https) and the $host variable the hostname.
  • proxy_set_header Host $host: This sets the Host header on the request, necessary for any webserver listening on that hostname to process its virtualhost directives.
  • proxy_set_header X-Forwarded-For $remote_addr: This attaches the original remote address to the request.
Note: this configuration, as it is, will only work for websites listening on port 80; you may have to adjust the listen port to accommodate other requirements.
WARNING: One has to be very careful implementing this solution, as the nginx configuration will act as a proxy for *any* host on the internet. You need to make sure it is not exposed to the outside world, and be aware that anyone knowing the IP address inside the intranet will be able to use it, so you are encouraged to take security measures.
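One simple measure is to restrict who can use the proxy with nginx's allow/deny directives; a sketch, assuming the VPN clients live in 10.8.0.0/24:

location / {
    allow 10.8.0.0/24;   # example VPN subnet, adjust to your network
    deny  all;
    resolver 10.47.4.109;
    proxy_pass $scheme://$host;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
}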

Creating a bot for slack

Slack is one of the coolest and most versatile IM platforms available today; we use it all the time here at Devecoop as our primary channel for communication. One of the greatest things it has is its integration capabilities with 3rd party services (i.e.: Twitter, GitHub, Bitbucket, CircleCI, etc.), which can be exploited right out of the box without too much hassle. In this post I will show you how you can create your own bot, using Slack's outgoing webhooks feature.

Let's get to work.

As all we need on our side is a small webserver listening to requests from Slack, we are going to write it using Python and its BaseHTTPServer module:

#!/usr/bin/python

import cgi
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

PORT_NUMBER = 3000

class SlackBotHandler(BaseHTTPRequestHandler):

    def do_something(self, text):
        # Append the received text to a log file
        with open('slacklog.txt', 'a') as log:
            log.write(text + "\n")

    def do_POST(self):
        content_len = int(self.headers.getheader('content-length', 0))
        post_body = self.rfile.read(content_len)
        user_name = 'unknown'  # fallback in case parsing fails
        try:
            postvars = cgi.parse_qs(post_body, keep_blank_values=1)
            user_name = postvars.get('user_name')[0]
            text = postvars.get('text')[0]
            text = text.strip("!save <").rstrip(">")

            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            self.do_something(text)

            payload = '{"text" : "Your data has been saved, %s!"}' % user_name
        except:
            payload = '{"text" : "Sorry %s, could not save your data!"}' % user_name

        # Send the JSON message back to Slack
        self.wfile.write(payload)
        return


try:
    # Create a web server and define the handler to manage
    # incoming requests
    server = HTTPServer(('', PORT_NUMBER), SlackBotHandler)
    print 'Started httpserver on port', PORT_NUMBER
    server.serve_forever()

except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()

What this small script does is pretty simple. It fires up an HTTP server that listens on port 3000; when it receives a POST, the do_POST method handles it, writing the text it receives to a text file and returning a simple message to Slack. All you need to do is execute this script and it will start serving and listening for Slack events. You need a server with a public IP address for this to work; alternatively, you can use a service like Heroku. Let's say our hostname is example.com, so the URL we have to put on Slack would be "http://example.com:3000".

The second part is far simpler. On the Slack website, go to the upper left corner and click on the dropdown link with the name of your team, then click on "Configure integrations" -> "Outgoing webhooks" -> "Add Outgoing Webhooks Integration". The fields you need to fill in are self-explanatory; the most important ones are the channel(s) where you want your hook to get called, the word(s) that will fire the hook, and the URL(s) it will call when matching those words.

In our case we will use this bot on any channel and configure the hook to be fired when hitting the "!save" keyword, and the URL, as we previously mentioned, will be "http://example.com:3000".

All we need to do then is press save, go to a channel, write a line using the magic "!save" word, and enjoy our fresh new bot!

Note: in order to add more security, each hook we create generates a token; we should use this value if we want our server to reject anything that does not carry this token in the body.
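A minimal sketch of that check, to be placed in do_POST right after parsing postvars (the token value here is a made-up placeholder):

SLACK_TOKEN = 'your-hook-token-here'  # the token Slack generated for this hook
if postvars.get('token', [''])[0] != SLACK_TOKEN:
    self.send_response(403)
    self.end_headers()
    return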

OS X notifications for Emacs-jabber

Where I am working they have a jabber-based chat service to communicate with each other and of course I use the excellent emacs-jabber to chat from within Emacs.

The (huge) problem was that my colleagues were complaining that I wasn't noticing some of their messages! That happened when I switched, for instance, from Emacs to Firefox, because Emacs wasn't notifying me of new messages. I could have switched to Adium, but I like to keep everything related to work in a single environment.

Here are some quick step-by-step instructions to get OS X notifications for every new message received in Emacs-jabber.

1. We're going to use terminal-notifier to send notifications from Emacs to OS X. Although it comes with prebuilt binaries, I highly recommend Homebrew to install it and many other common packages (ack, wget, mysql, python, etc.). Give it a try, you will love it. Once installed, you can test it with a simple example such as:

/usr/local/bin/terminal-notifier -sender org.gnu.Emacs -title 'Message title' -message 'This is the message content!'

A notification just like this one should appear:

Screen Shot 2015-04-05 at 20.02.22

(Please refer to terminal-notifier itself for more details on its usage)

2. Now the only pending task is to tell Emacs to send notifications via a shell command executing terminal-notifier. Add the following lines to your .emacs :

(defun msg-via-notifier (title msg)
  (shell-command (format "/usr/local/bin/terminal-notifier -sender org.gnu.Emacs -title '%s' -message '%s'" title msg)))

(defun notify-jabber-message (from buf text proposed-alert)
  (msg-via-notifier from text))

(add-hook 'jabber-alert-message-hooks 'notify-jabber-message)

3. That's it!

You can also test the newly added msg-via-notifier function (after restarting Emacs) by generating an example notification from Emacs:

(msg-via-notifier "Notification title" "Notification content")

Setup multiple scenarios for e2e testing on django with django-remote-scenario

When we are testing an application (black box), either manually or in an automated fashion, we often need to create different sets of data for the different scenarios of each feature we want to test (check out this link: http://dannorth.net/introducing-bdd/). That is where django-remote-scenario comes to the rescue!

I wrote this tool because I needed to do e2e testing for an Angular application with a Django backend. I needed to create different sets of data, so a third party application could retrieve each one of them via REST services at will. The idea is simple: create a "scenario file" for each scenario you want to test, and django-remote-scenario will translate it into a URL that can be called remotely to load the data into the database, ready to be consumed.

Quickstart

Install django-remote-scenario::

pip install django-remote-scenario

Then add it to an existing Django project:

INSTALLED_APPS = [
    ...
    'django_rs',
]

You need to add the django_rs urls to your project's url file like this:

urlpatterns = patterns('',
    ...
    url(r'^drs/', include('django_rs.urls')),
)

To create custom scenarios, just create a directory inside your app named "scenarios", then add as many files as scenarios you want to implement, and create an __init__.py file to import them. Inside each of those files you need to implement a main() function setting up the models you want to create for the scenario; you can create them by hand or use something like django-dynamic-fixture.

Note: your scenario is not limited to creating new models; you may mock specific parts of the environment as well.
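A scenario file could look something like this minimal sketch (the Item model and its field are made up for illustration):

# demoapp/scenarios/scenario_1.py
from demoapp.models import Item

def main():
    # Create the objects this scenario needs
    for name in ['one', 'two', 'three', 'four']:
        Item.objects.create(name=name)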

Once everything is ready, start the server this way; this will enable the dynamic loading of scenarios:

python manage.py rune2eserver initial_data.json

Note: you need to pass an initial fixture file with the barebones of your data.

It is also possible to pass a specific settings file for testing purposes, in case you want to run the tests against a different database, for example:

python manage.py rune2eserver initial_data.json --settings=demoproject.test_settings

To start using it, just go to the following URL:

http://127.0.0.1:8000/drs/[APPLICATION]/[SCENARIO]

After doing that, the database will be populated with the data you provided in your scenario. Take into account that every time you call a scenario, all other data in the database is erased, except for the data in your initial_data fixture files, which are loaded again, and the data you passed as a parameter when you ran the command.

Inside this repository you will find a demo Django project preconfigured with a simple scenario that sets up four objects. Use it like this:

First run the server:

$ python manage.py rune2eserver initial_data.json --settings=demoproject.test_settings

Then go to your browser and setup a scenario:

http://127.0.0.1:8000/drs/demoapp/scenario_1

You may also pass a parameter to avoid flushing the database on a specific call:

http://127.0.0.1:8000/drs/demoapp/scenario_1/?flush=0

Later you can see the results at the following URL:

http://127.0.0.1:8000/demoapp/

A quick guide to installing Tryton with the Argentinian localization

In the following article we will show you which tools we have used and modified at Devecoop to run the Tryton platform, consisting of the Tryton client, the Tryton server, and the database. We based our work on the tools created by Nantic (http://www.tryton-erp.es/) to ease the installation, and we also include the Argentinian localization.

By the end of this guide we will have Tryton working, with a large number of modules and tools included, for example:

- Trytond: the Tryton server.
- Tryton: the Tryton client.
- Sao: the web client.
- Proteus: a useful library to run tests and generate test data.
- Official modules.
- Available modules for Argentina.

Please note that we used Ubuntu 12.04 as the operating system.

Let's start

The first step is to update the package index:

$ apt-get update

Then install the packages below, which we will need to clone the different modules:

$ apt-get install mercurial
$ apt-get install git

And other useful libraries:

LXML is a library for processing XML and HTML

$ sudo apt-get install libxml2-dev libxslt1-dev

LDAP is a standard protocol

$ sudo apt-get install libldap2-dev libsasl2-dev libssl-dev

Quilt

$ sudo apt-get install quilt

Needed for account_invoice_ar:

Swig is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages

$ sudo apt-get install swig

M2crypto

$ sudo apt-get install python-M2crypto
cp -r /usr/lib/python2.7/dist-packages/M2Crypto* [inside the virtualenv]/.virtualenvs/tryton_env/lib/pythonX.X/site-packages/

Tryton uses Postgres as its database engine; below are the needed packages:

$ sudo apt-get install postgresql postgresql-contrib pgadmin3 postgresql-server-dev-all

The system package installation is complete; now we will create the directory of the new project:

$ mkdir proyecto_tryton
$ cd proyecto_tryton

Within the new directory we have to clone the following repositories, which include the Argentinian localization, tasks, and utils; that is, many useful commands that we will use with 'invoke' (a Python library to create scripts).

$ hg clone https://bitbucket.org/vdichiera/tryton-config config
$ hg clone http://bitbucket.org/nantic/tryton-tasks tasks
$ hg clone https://bitbucket.org/nantic/nan_tryton_utils utils

Needed for account_invoice_ar:

hg clone https://code.google.com/p/pyafipws
cp -r pyafipws [inside the virtualenv]/.virtualenvs/tryton_env/lib/pythonX.X/site-packages

With the command 'invoke -l' all the available commands will be displayed.

Many of the repositories we have cloned need dependencies, which we download with 'pip' inside an isolated environment created with virtualenvwrapper:

$ sudo apt-get install virtualenvwrapper
# Close and reopen the terminal
$ sudo apt-get install python-dev
$ mkvirtualenv tryton_env

We probably have an old 'pip' version, which can cause problems during the installation, so let's first update 'pip':

$ pip install pip -U
$ pip install -r tasks/requirements.txt
$ pip install -r config/requirements.txt

Then we have to create a file called 'local.cfg' in the main directory, to which 'config/local.cfg' will have a symbolic link:

$ touch local.cfg

Right now we are able to execute the 'bs' (bootstrap) tasks, which will clone all the modules specified within 'project/config/':

$ invoke clone --config config/base.cfg
$ invoke clone --config config/core.cfg
$ invoke clone --config config/tryton-ar.cfg
$ invoke bs.create_symlinks

The following text must be copied into 'trytond.conf'; this will be the server configuration (user and password are examples):

#This file is part of Tryton.  The COPYRIGHT file at the top level of
#this repository contains the full copyright notices and license terms.
[options]

# This is the hostname used when generating tryton URI
#hostname =

# Activate the json-rpc protocol
jsonrpc = *:8000
#ssl_jsonrpc = False

# Configure the path of json-rpc data
#jsondata_path = /var/www/localhost/tryton

# Activate the xml-rpc protocol
#xmlrpc = *:8069
#ssl_xmlrpc = False

# Activate the webdav protocol
#webdav = *:8080
#ssl_webdav = False

# Configure the database type
# allowed values are postgresql, sqlite, mysql
db_type = postgresql

# Configure the database connection
## Note: Only databases owned by db_user will be displayed in the connection dialog
## of the Tryton client. db_user must have create permission for new databases
## to be able to use automatic database creation with the Tryton client.
db_host = localhost
db_port = 5432
db_user = tryton
db_password = tryton
#db_minconn = 1
#db_maxconn = 64

# Configure the postgresql path for the executable
#pg_path = None

# Configure the Tryton server password
admin_passwd = admin

timezone = America/Argentina/Buenos_Aires

# Configure the path of the files for the pid and the logs
#pidfile = False
#logfile = False

#privatekey = server.pem
#certificate = server.pem

# Configure the SMTP connection
#smtp_server = localhost
#smtp_port = 25
#smtp_ssl = False
#smtp_tls = False
#smtp_password = False
#smtp_user = False

# Configure the path to store attachments and sqlite database
data_path = /var/lib/tryton

# Allow to run more than one instance of trytond
#multi_server = False

# Configure the session timeout (inactivity of the client in sec)
#session_timeout = 600

# Enable psyco module
# Need to have psyco installed http://psyco.sourceforge.net/
#psyco = False

# Enable auto-reload of modules if changed
#auto_reload = True

# Prevent database listing
#prevent_dblist = False

# Enable cron
# cron = True

# unoconv connection
#unoconv = pipe,name=trytond;urp;StarOffice.ComponentContext

# Number of retries on database operational error
# retry = 5

Almost done; we have to create the database user:

sudo su postgres
createuser --pwprompt --superuser tryton

To check database access, open the file '/etc/postgresql/9.1/main/pg_hba.conf' and check whether there is a line like this:

local    all    all    md5

If there isn't, add a new one.

Now we are prepared to run the server:

./server.py start

This command will show us the logs in real time. It also allows you to stop or restart the server, specify the database, and other features.

Example:

./server.py stop
./server.py krestart

The Nantic guide is at http://www.tryton-erp.es/posts/crear-un-entorno-tryton-con-las-herramientas-nantic.html.

E2E tests with django-casper

We often need to test our "javascript rich" Django applications, and the infamous TestClient provided with Django is not enough in these cases. This is where django-casper comes to the rescue.

First, a brief introduction. JavaScript has a great package named PhantomJS, a headless WebKit browser (yep, no need to open FF/Chrome for testing à la Selenium!). CasperJS is a library on top of it that eases testing. From the CasperJS website:

CasperJS is an open source navigation scripting & testing utility written in Javascript for the PhantomJS WebKit headless browser and SlimerJS (Gecko). It eases the process of defining a full navigation scenario and provides useful high-level functions, methods & syntactic sugar for doing common tasks such as:

  • defining & ordering browsing navigation steps
  • filling & submitting forms
  • clicking & following links
  • capturing screenshots of a page (or part of it)
  • testing remote DOM
  • logging events
  • downloading resources, including binary ones
  • writing functional test suites, saving results as JUnit XML
  • scraping Web contents

django-casper is a sort of wrapper for CasperJS, allowing us to run javascript/casper tests from Django's built-in test command. This is great not only to facilitate the development process, but also to take advantage of Django's own test runner to create mocks, stubs, fixtures, etc. for our front-end testing.

Installation:

We will need running Python and Node instances with Django and CasperJS installed, respectively. If you don't have them already on your box, here is a quick howto for GNU/Linux.

Node

Install nvm

$ curl https://raw.github.com/creationix/nvm/v0.4.0/install.sh | sh

From nvm, install latest node version

$ nvm install latest

Then open a new terminal, or source ~/.zshrc, ~/.bashrc, or whatever rc file your shell uses.

Create virtualenv

Create a new virtualenv instance. If you don't have virtualenv installed, please refer to the official documentation: http://virtualenv.readthedocs.org/en/latest/virtualenv.html. Note: you can skip this step and install Django globally, but it is not recommended.

$ virtualenv django-casper && source django-casper/bin/activate

Now we are ready to install casperjs and django

Installing django

$ pip install django

Installing casperjs

$ npm install -g casperjs

Installing django-casper

It is possible to install django-casper from pip (pip install django-casper), but we are going to use the following method in order to get the example code for the tests.

$ git clone git@github.com:dobarkod/django-casper.git
$ cd django-casper
$ python setup.py install

Using it

Django-casper comes with a testproject that implements some example tests.

$ cd testproject
$ python manage.py test testapp

This should run all the tests included in the application.

Where are my tests?!!

Tests are divided in two parts: the Django part, where the backend stuff is prepared (fixtures, backend mocks, etc.), and the casper part, where the actual tests are written. Let's see an example:

$ vim testapp/tests.py

from casper.tests import CasperTestCase
import os.path

from django.contrib.auth.models import User


class CasperTestTestCase(CasperTestCase):

    def test_that_casper_integration_works(self):
        self.assertTrue(self.casper(
            os.path.join(os.path.dirname(__file__),
            'casper-tests/test.js')))

In this file we have one test that in turn calls the casper library, passing the casper test it should run, in this case casper-tests/test.js. Inside this Django test we could add new data to the database and check the result from the casper-tests/test.js test.
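For instance, we could seed users from the Django side before the casper script runs; a sketch added to the same tests.py (the username and the login.js script are made up):

class LoginTestCase(CasperTestCase):

    def test_that_seeded_user_can_log_in(self):
        # Objects created here are visible to the casper-driven browser
        User.objects.create_user('alice', 'alice@example.com', 'secret')
        self.assertTrue(self.casper(
            os.path.join(os.path.dirname(__file__),
            'casper-tests/login.js')))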

Let's see the content of the casper-tests/test.js file:

casper.test.comment('Casper+Django integration example');
var helper = require('../../../casper/jslib/djangocasper.js');

helper.scenario('/',
  function() {
    this.test.assertSelectorHasText('em', 'django-casper',
      "There's a mention of django-casper on the page");
  },
  function() {
    this.click('a');
    this.test.assertSelectorHasText('#messages p', 'Good times!',
      "When the link is clicked, a message is added to the page");
  }
);

helper.run();

We can observe that the test opens the root "/" page (helper.scenario('/', ...)), then asserts the content of the page in the first function. The second function clicks on a link on the page and asserts that a message is added to the page.

These are basic tests; I encourage you to give it a try. Also take a look at django-dynamic-fixture, a library to create dynamic test data for your Django tests.

Happy Testing!

Useful links

django-casper: https://github.com/dobarkod/django-casper

django-dynamic-fixture: https://github.com/paulocheque/django-dynamic-fixture

casperjs: http://casperjs.org/

virtualenv: http://www.virtualenv.org/en/latest/

nvm: https://github.com/creationix/nvm

Unignoring files in bazaar

Bazaar is a great tool to quickly start versioning a Python project. For example, if you have this one in particular:

my_project 
 - __init__.py 
 - my_module.py 
 - my_module.pyc 
 - main.py 
 - library.so

To start versioning, at the root level execute the following commands:

$ bzr init
$ bzr add
$ bzr commit -m "Initial commit"

Bazaar, by default, will ignore all .pyc files, so we don't have to worry about committing them by mistake. .pyc files are not the only ones ignored by default: Bazaar will also ignore vim buffer files (*.swp), dynamically linked libraries (.so), and some others too. So, what if we need to "unignore" some of these default patterns?

Here is what we should do

Just create a .bzrignore file at the top level of the project, and add the pattern you want to unignore preceded by a ! mark. For example, if we want to start versioning all .so files, we just need to add the following pattern:

!*.so

If we check our repository status now, it will say:

unknown:
  library.so

Now we can add this file and start versioning it.
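For example:

$ bzr add library.so
$ bzr commit -m "Start versioning library.so"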

Happy "bazaaring"!