My devops stack: gitlab-ce, AWS spot instances and S3.

Wow, it has been nearly a month since the last post... time passes so fast.

This "how-to" is the continuation of the "My 'cheap' devops stack" (https://danybmx.github.io/blog/blog/2018/my-cheap-devops-stack.html)

In the first post of the series we saw how to install and configure gitlab-ce with a docker registry and gitlab-ci with a gitlab-ci-runner, all on docker. In this second post we will see how to use AWS spot instances to run our CI jobs and S3 as a cache between them.

This was done following the official GitLab documentation (https://docs.gitlab.com/runner/configuration/runner_autoscale_aws/).

The code for this post was on

First step: create an AWS account and an IAM user.

We need an AWS account; if you don't have one, you can create it on the AWS website (https://portal.aws.amazon.com/billing/signup).

Once we have it, we should create an access key for our account (https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html).
I think the best option here is to create a new IAM user (https://console.aws.amazon.com/iam/home?#/users) for our account with the permissions that GitLab needs:

  • AmazonEC2FullAccess
  • AmazonS3FullAccess

Well, at this point we have the Access key ID and its Secret access key, which we will use from GitLab to access our AWS account, create the spot instance requests, manage them, and also manage the S3 buckets.
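
If you prefer the command line over the web console, here is a minimal sketch with the AWS CLI (assuming it is installed and configured with credentials that can manage IAM; the user name gitlab-runner is just an example):

# create the IAM user that gitlab will use
aws iam create-user --user-name gitlab-runner
# attach the two managed policies listed above
aws iam attach-user-policy --user-name gitlab-runner --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-user-policy --user-name gitlab-runner --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# generate its Access key ID / Secret access key pair
aws iam create-access-key --user-name gitlab-runner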

Second step: create the S3 bucket

Go to the S3 service on AWS (https://s3.console.aws.amazon.com/s3/home?region=us-east-1) and create the bucket (I named mine gitlab-ci-runners-cache, to match the runner config below).
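
If you want a CLI alternative here too, a hedged sketch (the bucket name matches the one used in the runner config below; adjust the region to yours):

# us-east-1 needs no location constraint
aws s3api create-bucket --bucket gitlab-ci-runners-cache --region us-east-1
# for any other region, also pass the constraint, e.g.:
# aws s3api create-bucket --bucket gitlab-ci-runners-cache --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1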

Third step: configure the runner to run on AWS spot instances

In the previous post we configured a volume in the docker-compose.yml for the ci_runner instance; that volume now holds the config.toml file of the runner we created there.

Now we should modify it so the runner launches AWS spot instances. For that, edit the config.toml file to something like this:

  • runner/config.toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "aws"
  url = "https://git.dpstudios.es/"
  token = "{{RUNNER_TOKEN}}"
  executor = "docker+machine"
  limit = 1
  [runners.docker]
    image = "alpine"
    privileged = true
    disable_cache = true
  [runners.cache]
    Type = "s3"
    ServerAddress = "s3.amazonaws.com"
    AccessKey = "{{YOUR_AWS_IAM_USER_ACCESS_KEY}}"
    SecretKey = "{{YOUR_AWS_IAM_USER_SECRET_KEY}}"
    BucketName = "gitlab-ci-runners-cache"
    BucketLocation = "{{YOUR_AMAZON_REGION}}"
    Shared = true
  [runners.machine]
    IdleCount = 0
    IdleTime = 600
    MachineDriver = "amazonec2"
    MachineName = "gitlab-docker-machine-%s"
    OffPeakTimezone = ""
    OffPeakIdleCount = 0
    OffPeakIdleTime = 0
    MachineOptions = [
      "amazonec2-access-key={{YOUR_AWS_IAM_USER_ACCESS_KEY}}",
      "amazonec2-secret-key={{YOUR_AWS_IAM_USER_SECRET_KEY}}",
      "amazonec2-region={{YOUR_AMAZON_REGION}}",
      "amazonec2-vpc-id={{YOUR_DEFAULT_VPC_ID}}",
      "amazonec2-use-private-address=false",
      "amazonec2-tags=runner-manager-name,aws,gitlab,true,gitlab-runner-autoscale,true",
      "amazonec2-security-group=docker-machine-scaler",
      "amazonec2-instance-type=m3.medium",
      "amazonec2-request-spot-instance=true",
      "amazonec2-spot-price=0.10",
      "amazonec2-block-duration-minutes=60"
    ]

Remember to replace all the YOUR_* "variables" and save the file. You can get the RUNNER_TOKEN from your current config.toml file.

The ci-runner will reload your config automatically, so the next build will request a spot instance in AWS and run the job on it. Finally, it will store the cache in S3 and that's all!
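
If you want to force a reload or double-check that the runner still talks to GitLab after editing the file, a couple of commands (assuming the service is called ci_runner, as in the docker-compose.yml from the previous post):

# restart the runner container so it re-reads config.toml
docker-compose restart ci_runner
# verify the registered runners against GitLab
docker-compose exec ci_runner gitlab-runner verify
# and follow its logs while the next build runs
docker-compose logs -f ci_runner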

Regards!! I hope this was useful for someone!

My 'cheap' devops stack

This past week I heard "Spot Instances" a few times while my workmates were talking about our CI environment. When I arrived home, I started reading about AWS Spot Instances, and well... for CI they're pretty awesome. The thing is that AWS offers the instances that are not in use at a lower price; the instances are the same as the ones you create on-demand except for one thing: AWS can claim and stop them with only two minutes' notice. That last part doesn't matter much for e2e tests or build tasks if you can retry them at another moment.

So I started putting together things I had learned lately, and finally the idea of running GitLab + Docker registry + CI-runner came up. I want to run all of this on a small 15€ VPS that I rent for my personal projects. Those projects are small, but some of them are in "production" and I don't want to hurt performance just because I made some changes and the CI starts running tests, packaging jars, building docker images... Here is where the spot instances will "save my life": the ci-runner will only manage the spot-instance request and the build status, while the "heavy" load will be done in AWS.

I spent a long afternoon but finally I got everything working!! Until a new problem popped up... cache between instances... If the CI runs a build on an instance that eventually shuts down due to inactivity, the next time the CI runs it has to download all the node/java dependencies again, and that's a little bit slow. So what can I do? Configure S3 as cache storage!

I'll split this "how-to" into three posts. This is the first, and by the end of it we will have a running gitlab-ce with a docker-registry and a gitlab-runner, all on docker!

In the next post I'll show how to integrate it with AWS Spot instances and S3 storage.

In the last post we will see how to deploy docker images with ansible (my way, which maybe isn't the best).

Dependencies

First of all, you should have these tools installed on your VPS or computer (a rough install sketch follows the version listing below):

  • docker-engine: just docker :P
  • docker-compose: will help us manage the docker containers and keep them all connected.
  • docker-machine: will allow the gitlab-ci-runner to connect to AWS instances and register them as docker machines.

Here are my current versions:

daniel@vps:/home/gitlab$ docker -v
Docker version 17.12.1-ce, build 7390fc6
daniel@vps:/home/gitlab$ docker-compose -v
docker-compose version 1.8.0, build unknown
daniel@vps:/home/gitlab$ docker-machine -v
docker-machine version 0.14.0, build 89b8332
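
In case you are missing any of them, a rough install sketch for a Debian/Ubuntu VPS (that distro is an assumption; check the official docs for yours, and expect the versions to differ from mine above):

# docker-engine, via the official convenience script
curl -fsSL https://get.docker.com | sh
# docker-compose, from the distro repositories (or: pip install docker-compose)
sudo apt-get install -y docker-compose
# docker-machine: download the release binary from github.com/docker/machine/releases,
# place it on your PATH (e.g. /usr/local/bin/docker-machine) and chmod +x it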

Step 1. Introduction to the docker-compose file

I really want to thank this guy, github.com/sameersbn, for creating these pretty nice docker images and a github.com/sameersbn/docker-gitlab/blob/master/docker-compose.yml that does all the work... It's quite easy to work with well-documented projects like this.

The docker-compose.yml file and code that I will show below are published on github.com/danybmx/my-cheap-devops. You can go there and do a FF to this post!

Gitlab

Well, as I said, that guy created a docker-compose.yml that does everything, so I just copied it and cleaned it up to fit my needs.

If you want to remove the docker registry, just remove the registry service and the REGISTRY_* environment values from the gitlab service.

You should replace {{YOUR_IP}} in this docker-compose.yml with your host machine IP or public IP, as you prefer.

This docker-compose file basically starts 5 containers to run gitlab-ce with the ci-runner and the docker-registry.

  • sameersbn/redis

    • There is not much to comment on here; the configuration is basically the image and a volume to persist the data.

  • sameersbn/postgresql

    • As in the previous service, here we configure another volume to persist the data and also set the following environment values (replace with your own):

      • DB_USER=gitlab
      • DB_PASS=RM4L6X6An4wpLKQE
      • DB_NAME=gitlabhq
      • DB_EXTENSION=pg_trgm
    • This image will create this user and the database the first time it runs, so we should use the same values in the gitlab service.

  • sameersbn/gitlab (this is the big one :P)

    • As in the previous service, we configure one volume for persisting data
    • We set two port bindings in this case, but only because this setup is prepared to work on your computer. In production I don't like to expose ports on the host, so I prefer to put a proxy in front (maybe I'll write a future post about this)
    • We declare that this container depends on redis and postgresql so docker doesn't start it if the database fails. If we use the registry, we add it to this list too
    • Environment vars, there are a lot!
      • DB_*: Fill it up with the previous data
      • REDIS_*: Fill it up with the previous data
      • GITLAB_HOST: The host that will expose gitlab over the network (the domain or host IP)
      • GITLAB_PORT: HTTP Port of gitlab on the GITLAB_HOST that is exposed to the network
      • GITLAB_SSH_PORT: This is the ssh port on the GITLAB_HOST exposed to the network; it allows you to connect to the gitlab instance over SSH (to use git through ssh)
      • GITLAB_SECRETS_*: gitlab uses these internally to encrypt/decrypt; replace them with your own values (see the note right after this list for a quick way to generate them) :)
      • GITLAB_ROOT_EMAIL: This will be the admin login
      • GITLAB_ROOT_PASSWORD: This will be the admin password
  • gitlab/gitlab-runner the ci-runner

    • Here we set two volumes: one to have access to the config.toml file and another to share the host machine's docker socket with it. The latter is needed if you want to use dind (docker inside docker).

  • registry

    • Expose port 5000 to allow communication between containers on the same network (docker-compose creates a default network for its services)
    • Bind host port 9001 to port 5000 to allow access from outside (as with gitlab, IMHO this is ugly in production)
    • We create two volumes here, one is for persisting the registry repository and the other is for the certificates that should be shared with gitlab service.
    • We need to set following environment variables to link it with gitlab:

      • REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: Path where images will be stored
      • REGISTRY_AUTH_TOKEN_REALM: Address to the gitlab jwt authentication
      • REGISTRY_AUTH_TOKEN_SERVICE: This should be "container_registry"
      • REGISTRY_AUTH_TOKEN_ISSUER: The certificate's issuer
      • REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: Path to the root certificate
      • REGISTRY_STORAGE_DELETE_ENABLED: This allows deleting images
    • To run the registry, these other environment variables should be added to the gitlab service:

      • GITLAB_REGISTRY_ENABLED: Should just be true
      • GITLAB_REGISTRY_HOST: The host under which the Registry will run and which users will use to reach it.
      • GITLAB_REGISTRY_PORT: The port on which the external Registry domain will listen.
      • GITLAB_REGISTRY_API_URL: The internal API URL under which the Registry is exposed.
      • GITLAB_REGISTRY_KEY_PATH: Path to the certificate key (remember the shared volumes?)
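
By the way, the GITLAB_SECRETS_* values should be long random strings (64 characters, if I remember the sameersbn image README correctly). A quick, hedged way to generate them on the shell, assuming pwgen or openssl is installed:

# with pwgen (the tool the image README suggests)
pwgen -Bsv1 64
# or with openssl (64 hex characters)
openssl rand -hex 32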

Step 2. Generate certificates

I've created a bash script for generating the needed certificates; it can be found at github.com/danybmx/my-cheap-devops/create-registry-certificates.sh.

You can fill in whatever data you want in the script, but don't worry too much about it; it will only be used for internal communication between the registry and gitlab.

$ sh create-registry-certificates.sh
Generating a 2048 bit RSA private key
..................+++
..................................................+++
writing new private key to 'registry.key'
-----
Signature ok
subject=/C=ES/ST=PO/L=Vigo/O=Registry/OU=Registry/CN=registry
Getting Private key

Ensure that the certs folder is in the same folder as the docker-compose.yml file and that's all!
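
If you'd rather not use the script, roughly the same pair can be generated by hand with openssl; a sketch (the file names are an assumption, match them to whatever the docker-compose.yml volumes expect):

mkdir -p certs
# private key + certificate signing request
openssl req -nodes -newkey rsa:2048 -keyout certs/registry.key -out certs/registry.csr -subj "/C=ES/ST=PO/L=Vigo/O=Registry/OU=Registry/CN=registry"
# self-sign the certificate with that key
openssl x509 -req -in certs/registry.csr -signkey certs/registry.key -out certs/registry.crt -days 3650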

Step 3. Run instances!

Now you just need to run the containers and wait until gitlab is available! How? Just run:

$ docker-compose up -d
$ docker-compose logs -f

We run up with -d to start in detached mode and then follow the logs with -f. This allows you to Ctrl-C without stopping the containers.

(Don't confuse that -f with the global -f of docker-compose itself, which lets you choose a config file instead of the default one.)

Now, go to localhost:9000/! It will show you a 502 at the beginning; you should wait and refresh until it works. This is just because gitlab is still starting.
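
If you prefer the terminal over hitting refresh, a small sketch to watch the HTTP status until the 502 goes away (curl and watch assumed available):

# prints the HTTP status code every 5 seconds; wait until it stops being 502
watch -n 5 'curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9000/'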

After a while, you should see your own gitlab-ce login page! The login info is the one you set in the docker-compose file (GITLAB_ROOT_EMAIL and GITLAB_ROOT_PASSWORD), or you can just register an account without admin permissions.

Gitlab login page

Create a test project and that's all: you have gitlab running, and also the registry if you chose that option!

Gitlab project docker registry

Step 4. Register a gitlab-ci-runner

Well, we launched the whole stack but we didn't register any ci-runner on gitlab yet, so let's go ahead.

First of all, log in to your gitlab as admin and go to /admin/runners (localhost:9000/admin/runners).

Once there, you will see a registration token; copy it.

Go to the terminal, and navigate to the path where the docker-compose files are. Once there, run the following command:

docker-compose exec ci_runner gitlab-runner register

This will execute the gitlab-runner register command inside the runner container, which will prompt you for some data in order to register the runner (a non-interactive variant is sketched right after the list below).

  1. Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/)
    • Here you should write the internal http url to reach gitlab; in our case it should be http://gitlab, since gitlab is the name of the service.
  2. Please enter the gitlab-ci token for this runner:
    • Just paste the token you copied on the website.
  3. Please enter the gitlab-ci description for this runner:
    • A description to identify the runner in gitlab; I keep the default.
  4. Please enter the gitlab-ci tags for this runner (comma separated):
    • I keep this blank too
  5. Whether to lock the Runner to current project [true/false]:
    • Here I put false; if it's true, the runner will only run for a specific project.
  6. Please enter the executor: docker, shell, ssh, docker-ssh+machine, docker-ssh, parallels, virtualbox, docker+machine, kubernetes:
    • docker
  7. Please enter the default Docker image (e.g. ruby:2.1):
    • alpine
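
As a side note, the same registration can be done non-interactively, which is handy if you ever want to script it. A sketch with the answers above passed as flags (the token placeholder follows the same {{...}} convention used earlier):

docker-compose exec ci_runner gitlab-runner register \
  --non-interactive \
  --url http://gitlab \
  --registration-token {{YOUR_REGISTRATION_TOKEN}} \
  --description "docker runner" \
  --executor docker \
  --docker-image alpine \
  --locked=false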

That's all, refresh the website and you'll have a new ci-runner waiting!

Step 5. Create a test pipeline

Just clone the test-project that you've created:

git clone http://localhost:9000/{{user}}/test-project.git
cd test-project

Create a .gitlab-ci.yml file in it with the following content:

image: alpine:latest

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
  - echo "I'm building"

test:
  stage: test
  script:
  - echo "I'm testing"

deploy:
  stage: deploy
  script:
  - echo "I'm deploying"

Commit and push the change:

git add .gitlab-ci.yml
git commit -m "First ci-runner test!"
git push -u origin master

Go to the website and watch the pipeline pass (or not... in which case you'll have to debug a little bit hehe).

Gitlab success pipeline

Now you should play with the .gitlab-ci.yml options and adapt it to your project. Maybe when I finish this post series I'll show what I do in my personal builds.

In the next post we will see how to use AWS Spot instances as machines to launch our tests/builds and how to configure S3 as cache. Go to AWS and create your account!

Hello jbake and travis

Yesterday I published this blog on GitHub. It was so easy: just download jbake, run it, change some templates and I was able to start writing my first post...

I found out about jbake when trying to contribute something to the VigoJUG. They use jbake for their website and the first time I saw it, it felt strange... When I need to create a simple static website I just download any PHP framework, install it and create every page; if I need to do something dynamic, I just use MySQL or SQLite. That's fine, but you need hosting with PHP and also MySQL or SQLite. With jbake you can have a pseudo-dynamic website powered by markdown files in just a few steps, and the best part is that you end up with a static HTML site that can be published anywhere, including GitHub Pages.

The problem here is that you need to run ./gradlew clean bake on your computer and then do a git commit -m "xxx" and a git push. That's a little bit tedious since you need to have Java (not a problem at all if you do it on your computer, but it's a good excuse haha), so I went to the vigojug/vigojug.github.io repository to see how they handle this, and I realised that they use travis-ci to bake the site and then push it to another branch.

It was interesting, so I tried it myself and after some mistakes (: it was working!

Here are the steps that I followed:

1. Create travis-ci project.

Go to the travis-ci.com website and sign in with GitHub. Once logged in, click on the plus sign that appears on the left sidebar and activate the repository that you want.

2. Create a deployment key to allow travis-ci to push to the repository.

For this, just run ssh-keygen and follow the instructions; store the generated files securely, but not in the repository.
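
For example, something along these lines generates the pair in the current directory without a passphrase (the file name blog-travis matches what appears in the later steps):

# produces blog-travis (private key) and blog-travis.pub (public key)
ssh-keygen -t rsa -b 4096 -C "travis deploy key" -N "" -f blog-travis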

What's the problem now? Well, shipping this key to GitHub is risky, but with travis you can encrypt the key and configure travis to decrypt it before starting the build process.

Just run the following commands:

# Install travis-cli
gem install travis
# Login in travis-cli
travis login
# Encrypt the rsa-key
travis encrypt-file {your-key-file} -r {github_user}/{github_repo_name}

This will show you something like this:

$ travis encrypt-file blog-travis -r danybmx/blog
encrypting blog-travis for danybmx/blog
storing result as blog-travis.enc
storing secure env variables for decryption

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

    openssl aes-256-cbc -K $encrypted_xxxxxxxxxxxx_key -iv $encrypted_xxxxxxxxxxxx_iv -in blog-travis.enc -out blog-travis -d

Pro Tip: You can add it automatically by running with --add.

Make sure to add blog-travis.enc to the git repository.
Make sure not to add blog-travis to the git repository.
Commit all changes to your .travis.yml.

Just follow the instructions shown and that's done!

3. Add the deployment key to GitHub repo.
  • Go to github.com and navigate to the Settings of your project, then click Deploy keys on the sidebar.
  • Click Add deploy key on the right and fill it with the content of the .pub file that was generated in the previous step.
4. Set up gradle to do all the things!

I need to build the site with jbake and then push the generated code to a different branch (gh-pages in my case). Fortunately, gradle has plugins for everything and I can use the following plugins to get this done.

  • jbake-gradle-plugin: Adds the bake task to generate the site from gradle.
  • grgit: This is a library that allows gradle to use git directly.
  • gradle-git-publish: Provides the gitPublishPush task, along with others that will help us commit and push the content of a directory to a remote branch.

    You can check how to configure all those plugins in the github.com/danybmx/blog/blob/master/build.gradle.

    Finally, I've created a custom task that runs the clean task, followed by the bake task and finally the gitPublishPush that will push the generated content to the gh-pages branch.

5. Create the .travis.yml file.

In this case the travis file is quite simple and short, so I'll paste it here, but you can also find it at github.com/danybmx/blog/blob/master/.travis.yml.

language: java
jdk:
- oraclejdk8
before_install:
- openssl aes-256-cbc -K $encrypted_1ecfa12135cc_key -iv $encrypted_1ecfa12135cc_iv -in blog-travis.enc -out blog-travis -d
- export COMMIT_MESSAGE=$(git log -1 --pretty=%B)
script:
- ./gradlew bakeAndPush -Dorg.ajoberstar.grgit.auth.ssh.private=./blog-travis
  • language: We need to set a language; Java, in this case, is enough.
  • jdk: We define the JDK version (jbake shows an error with jdk9... so I keep jdk8 here).
  • before_install:
    • We need to decrypt the blog-travis.enc file, which is the rsa key used to push to GitHub, using the command that travis-cli showed in step 2.
    • I also create an environment variable with the message of the last commit to use it as the commitMessage of the gitPublish gradle plugin.
  • script: Here we should define the commands that travis will launch for test/deploy.
    • In this case, ./gradlew bakeAndPush is enough, and -Dorg.ajoberstar.grgit.auth.ssh.private=./blog-travis just indicates where the rsa key that we use as the deployment key is (see ajoberstar.org/grgit/grgit-authentication.html).
6. PUSH!

We've finished! The only thing we need to do now is push the changes to master and wait to see how travis-ci does the rest. If it fails, just iterate a bit to fix the problems :P

More things that we can improve.

Pushing directly to master is not a good practice... maybe using another branch like dev or source that only merges into master if travis finishes successfully would be a good idea. For now, I will keep it this way just because this is not an "important" project.

I hope this can be useful for someone, bye!!

A new attempt...

Well, this is not my first attempt to create a blog, it's more like my third... but this last year was a new beginning in my life as far as work is concerned. I quit my job at the end of 2016 and started at a new company. I was so scared; I had spent more than 10 years working in the "same" company, working the way I wanted because I had no team... It was only me and two more people, but I handled all the technical part.

I had created an invoicing system from scratch in PHP for that company, which had to work the way they wanted: manage stock, generate stickers, calculate the OEM products needed to create final products... and a lot more features. I also got the opportunity to do some graphic design, photography, animation and even 3D. It was pretty nice, I learned a lot, but... I missed something: a team.

So, mid-2016, I fixed a mate's website (a WordPress that had been "hacked") and he told me, "I know a company that is looking for a JS developer". I was interested, so I contacted the company... They were looking for a Java developer, not JS... and well, my knowledge of Java was pretty low. Anyway, I went to the technical test and it wasn't too bad... I have to say that it was my first job interview and I was so nervous, but they allowed me (and everyone) to use the internet, which helped me a lot. In the end I passed the test, but I turned down the job because I hadn't realised it was for a Junior position and the salary was much lower than at my current job (well... I think the main reason was that I was really scared, and I should say it was a good salary for a Junior job offer).

3 months later, the company got in touch with me again (THANKS): they had a job offer for a Senior developer and arranged an interview with somebody from the team I would join. I went, and she explained to me what they do, how they work and how the company takes care of its employees. It was just amazing and helped me a lot to make my decision... a few days later I told my company that I would leave in about 15 days. Those days felt really long to me... I was scared of how things would go... it would be a change from the freedom of writing programs in the language I wanted, without documentation (because there was never time), and working alone, to something completely different: a well-organised team, focused on Oracle products and Java (JVM), something more serious...

In late 2016 I started my new job, and it was quite nice! I joined a team that helps me a lot and teaches me something every day, keeps everything well documented, and although there were many new concepts, they made them really easy to understand.

Now, after more than one year in the company, I'm still really impressed by how they take care of their employees: we have ping pong (I'm still so bad), breakfasts from time to time where someone can talk about any topic, training sessions, playful days... just awesome. And they also support the VigoJUG (Vigo Java Users Group), where anyone can attend the talks (one each month) or even give them and, of course, go for a few beers afterwards with the best engineers I've ever met.

And that helped me understand the value of the community, and that we have a lot of programming meetups in Vigo! At the moment I only go to the VigoJUG, but there are groups for everything: Python, PHP, JS... it's just amazing! Thank you guys for your big effort!

So, this third blog attempt is just because I want to give something back to the community from which I get so much, share the things I do in my small projects and... practice my English! :P I'm sure you didn't spot any mistakes in this text! haha. I don't know, maybe something ends up being useful for somebody.


Older posts are available in the archive.