My adventures in the world of GitLab CI

Laravel, Docker, Continuous Integration, GitLab, PHP


I'll start this article with an important disclaimer - I hadn't used GitLab's continuous integration before this, so this post can be seen as a kind of intro.

How does the whole thing work?

In general, every repository is already connected to the GitLab CI system. The only thing you need to do to run the CI itself is to commit a .gitlab-ci.yml file. From then on, a new CI job is started on every commit. The job is described in the .gitlab-ci.yml file and, generally speaking, consists of a Docker image of your choosing on which a series of commands is executed. One nice option GitLab gives you is that you can enter names and values of variables in your account for later use in the .gitlab-ci.yml file. At the time of writing, the variables are entered through Project -> Settings -> Pipelines -> Secret Variables. The GitLab CI documentation covers the details.
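To make that structure concrete, a minimal .gitlab-ci.yml could look something like this (the image, job name and variable name are illustrative, not from my actual setup):

```yaml
image: alpine:latest        # the Docker image the job runs in

build_job:                  # an arbitrary job name
  script:
    - echo "Deploying as $DEPLOY_USER"   # $DEPLOY_USER would be a Secret Variable
```

Commit a file like this and the pipeline starts running on the next push.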

What exactly did I do?

In this first encounter with GitLab's CI implementation I had a fairly simple task: on every commit, a PHP and Laravel based project had to be built and then uploaded to my shared hosting account at SuperHosting. Following various guides and documentation, I eventually managed to do it with this content in my .gitlab-ci.yml file.

    image: tetraweb/php:7.1

    before_script:
        - mv -f php.ini /usr/local/etc/php/conf.d/phpi.ini
        - docker-php-ext-enable mcrypt pdo_mysql intl zip bz2
        - cd laravel
        - composer install
        - composer dump-autoload
        - php artisan clear-compiled
        - php artisan cache:clear
        - php artisan route:clear
        - php artisan view:clear
        - php artisan config:clear
        - php artisan vendor:publish --force
        - php artisan optimize
        - cd ..
        - mkdir -p ~/.ssh
        - eval $(ssh-agent -s)
        - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\tServerAliveInterval 30\n\tServerAliveCountMax 6\n\tPort XXXX\n" > ~/.ssh/config'

    deploy:                 # the job name - any name works here
        script:
            - ssh-add <(echo "$MY_SSH_PRIVATE_KEY")
            - rsync -a -q -z /builds/GITLAB_ACC/REPO_NAME/some_laravel_dir/ USER@HOST_NAME:some_laravel_dir
            - rsync -a -q -z /builds/GITLAB_ACC/REPO_NAME/your_www_dir/ USER@HOST_NAME:your_www_dir

Now a little explanation of what's what

With image, you choose which Docker image the CI job uses. There are a lot of images available - you can find them through Google, on Docker Hub, or in some guide or documentation where someone recommends a good one. It's very important to choose your image carefully, because different images come with different pre-installed software, scripts and configurations, and there are a lot of images with very poor documentation that say little about what's actually inside. My recommendation is to use something popular, well documented and, of course, appropriate for the project. In the case of tetraweb/php, the nice thing is that everything I need comes pre-installed, including the PHP extensions. This means I don't have to wait for software to install and PHP extensions to compile on every run, which was actually the case with other images.

The before_script section is a list of preparatory steps that are executed before the task itself. A brief description of the steps in my case is:

  • I set up my own php.ini
  • I activate the php extensions that I need. Here I use a bash script written specifically for the purpose and included in the docker image I am using
  • cd into the Laravel folder
  • I run a bunch of composer and laravel artisan commands to set up the project
  • cd back into home folder
  • I set up the SSH settings - disable the strict host key verification, set a keep-alive interval and set the preferred connection port
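The last step can be reproduced locally to see the config file it actually writes (printf is used here instead of the CI file's echo -e purely for portability; XXXX is the placeholder port from the file above, and /tmp/ci-ssh-demo is just a scratch directory for the demo):

```shell
# Reproduce the before_script SSH setup in a scratch directory
mkdir -p /tmp/ci-ssh-demo
printf 'Host *\n\tStrictHostKeyChecking no\n\tServerAliveInterval 30\n\tServerAliveCountMax 6\n\tPort XXXX\n' > /tmp/ci-ssh-demo/config
# Show the resulting SSH client configuration
cat /tmp/ci-ssh-demo/config
```

On the CI runner the same content goes to ~/.ssh/config, so the rsync-over-SSH steps later connect without an interactive host-key prompt.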

This is the script section - the CI task itself. The steps are:

  • I add my private ssh key
  • I start the file upload
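The ssh-add line uses bash process substitution to feed the secret variable to ssh-add as if it were a key file, so the key never lands on disk. A minimal illustration of the same mechanism, with a dummy variable standing in for $MY_SSH_PRIVATE_KEY and cat standing in for ssh-add:

```shell
# Process substitution: <(cmd) exposes cmd's output as a readable file path.
MY_SECRET="not-a-real-key"     # stand-in for $MY_SSH_PRIVATE_KEY
cat <(echo "$MY_SECRET")       # cat reads the value as if from a file
# prints: not-a-real-key
```

ssh-add reads the "file" the same way, which is why the private key can live entirely in a GitLab Secret Variable.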

Difficulties that I encountered and how I overcame them

I originally decided to upload via an FTP client, so that I wouldn't have to meddle with my SuperHosting account. Big mistake: slow, difficult and problematic. I tried lftp and ncftp. There was one pretty strange problem - not all directories and files were uploaded, only part of them, even though I used the recursive flag. I guess it was some kind of timeout, but I really had no desire to debug it, so I just upgraded my account and switched to SSH, which was way better. This is the moment to thank SuperHosting for upgrading my account with SSH access free of charge.

Another problem I encountered was with building the project. PHP was failing with some kind of error, but not showing it - unfortunately, the Docker images I tested came with display_errors=0. In the end, it turned out that all .ini files placed in /usr/local/etc/php/conf.d/ are loaded automatically. I put the php.ini file from my computer into the repository, removed the extension and extension_dir directives, added one line to the .gitlab-ci.yml file telling it to copy the configuration file into the conf.d folder, and everything was fixed. Even the error disappeared, which was great.
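For illustration, the relevant part of such a php.ini can be as small as this (an assumed minimal fragment, not my exact config):

```ini
; Dropped into /usr/local/etc/php/conf.d/ via the mv line in before_script.
; extension and extension_dir directives removed, as described above.
display_errors = On
error_reporting = E_ALL
```

With display_errors on, the previously hidden build error shows up directly in the CI job log.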


GitLab surprised me - I was expecting a much worse free-tier CI implementation. Aside from the problems I faced, which were entirely caused by my own ignorance, everything went well. Currently, each commit uploads the new version directly to the hosting.


Drop us a line at [email protected] or right here and let us know how we can help you out.