Welcome to 2019! Time for a fresh look at building a minimalistic Rails API app suitable for something like a microservice and deploying it to a cloud service. The foremost objectives are low dependencies and sticking to defaults wherever possible. This article aims to set you up just enough to get you rolling on your own.

Set up Ruby

Start by setting up Ruby locally. I recommend rbenv over rvm. Follow their instructions to set up both rbenv and ruby-build.

Use rbenv to install the latest version of Ruby:

cd empty/local/folder/for/app
echo '2.5.1' > .ruby-version # Replace 2.5.1 with whatever the latest version is
rbenv install

Initialize your application

Run rails new. Do not overwrite .ruby-version when prompted.

# Use this if you don't want to bother with persistence:
rails new --api -O -M -C -J .

# Use if you want persistence:
rails new --api --database=postgresql .

# To customize further, see:
rails new --help

This assumes you have plenty of Git experience. Initialize Git in your codebase, decide where your codebase will live remotely (Github, Gitlab, private repo), and set up a remote. You'll need it so your production/live app can fetch the latest code.

git init .
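
Then add the remote and push; the URL here is a placeholder for wherever you decided the code will live:

git remote add origin git@gitlab.com:you/your-app.git
git push -u origin master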

Set up deployment

For an expensive but zero-configuration option, Google Cloud's App Engine gets you up and running very quickly if you have more money than time. You'd create a new App Engine instance with the Ruby runtime. See Google's tutorial for how to do this.

For a cheap but higher-configuration option, Linode offers a $5 price point for a very decent VPS. Here is a brief overview of how to manually provision a simple and cheap cloud server, perfect for a hobby project.

  1. Head over to https://manager.linode.com/linodes/add and select Nanode 1GB.

  2. Create a root password and deploy the distro you're most comfortable with. This guide is written for Fedora 28.

  3. Boot the instance.

  4. Go to "Remote Access" and copy the SSH command, something like ssh root@123.123.123.123. Log in with that root password.

  5. Set your hostname: echo "whatever-you-want" > /etc/hostname

  6. Point whatever domain you want to use at your instance.

  7. Set your own hostname for reverse DNS. Find the settings back in the Linode manager under Remote Access > Reverse DNS.

  8. Create a deploy user: adduser deploy

  9. Allow deploy user to sudo without a password: visudo, then add deploy ALL=(ALL) NOPASSWD:ALL under ## Allow root to run any commands anywhere.

  10. Switch to deploy user: su deploy.

  11. Create an authorized_keys file:

    cd && mkdir .ssh && chmod 700 .ssh && touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys
    vim .ssh/authorized_keys
    
  12. On your own machine, cat ~/.ssh/id_rsa.pub and copy the output into the remote's .ssh/authorized_keys file you just created.

  13. Disable root ssh login: sudo vim /etc/ssh/sshd_config and change PermitRootLogin to no.

  14. Test the SSH connection. On your own machine, add an entry to ~/.ssh/config for the new box, and run ssh (your new alias).
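
    An entry might look something like this (the alias and key path are placeholders):

    Host myapp
      HostName 123.123.123.123
      User deploy
      IdentityFile ~/.ssh/id_rsa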

  15. Install Git, gcc, and the libraries needed to build Ruby:

    sudo dnf install -y git openssl-devel readline-devel zlib-devel && sudo dnf groupinstall -y "C Development Tools and Libraries"
    
  16. Install rbenv and ruby-build, for example:
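
    A plain Git checkout, per rbenv's README, works fine when run as the deploy user:

    git clone https://github.com/rbenv/rbenv.git ~/.rbenv
    git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
    echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(rbenv init -)"' >> ~/.bashrc

    Log out and back in, then run rbenv install with the version from your .ruby-version.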

  17. Open ports 80 and 443:

    sudo firewall-cmd --zone=FedoraServer --add-service=http --permanent && sudo firewall-cmd --complete-reload && sudo firewall-cmd --list-all-zones
    sudo firewall-cmd --zone=FedoraServer --add-service=https --permanent && sudo firewall-cmd --complete-reload && sudo firewall-cmd --list-all-zones
    
  18. Set up and configure the mina gem. Make sure you read its Getting Started guide and copy over first-time items such as master.key to the shared directory.

    Add master.key to Mina's shared_files so it's not lost on deploy:

    set :shared_files, fetch(:shared_files, []).push('config/master.key')
    

    Put the app under /srv:

    set :deploy_to, '/srv/app'
    

    Symlink from your home directory for convenience:

    cd && ln -s /srv/app ./app
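
    For reference, the top of config/deploy.rb ends up looking roughly like this (mina init generates the full template; the values below are placeholders):

    require 'mina/rails'
    require 'mina/git'
    require 'mina/rbenv'

    set :application_name, 'app'
    set :domain, 'your-domain.com'
    set :deploy_to, '/srv/app'
    set :repository, 'git@gitlab.com:you/your-app.git'
    set :branch, 'master'
    set :user, 'deploy'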
    
  19. Configure Puma at config/puma.rb.

    Define Puma log locations in the remote server's ~/.bashrc. Setting them as variables allows Puma's config file to be easily used in development too:

    export PUMA_LOGFILE_OUT=/srv/app/shared/log/puma.log
    export PUMA_LOGFILE_ERR=/srv/app/shared/log/puma.err
    

    Add them to Puma's config:

    if ENV.has_key?("PUMA_LOGFILE_OUT")
      stdout_redirect ENV.fetch("PUMA_LOGFILE_OUT"), ENV.fetch("PUMA_LOGFILE_ERR"), true
    end
    

    Add pids and sockets to Mina's shared_dirs in config/deploy.rb:

    set :shared_dirs, fetch(:shared_dirs, []).push('pids', 'sockets')
    

    Point Puma at those directories which will exist only in production:

    app_dir = File.expand_path("../..", __FILE__)
    
    if File.exist?(File.join(app_dir, 'pids'))
      pidfile "#{app_dir}/pids/puma.pid"
      state_path "#{app_dir}/pids/puma.state"
    end
    
    if File.exist?(File.join(app_dir, 'sockets'))
      bind "unix://#{app_dir}/sockets/puma.sock"
    end
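
    The rest of config/puma.rb can stay close to the Rails default; something like this works in both development and production:

    threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
    threads threads_count, threads_count
    port ENV.fetch("PORT") { 3000 }
    environment ENV.fetch("RAILS_ENV") { "development" }
    plugin :tmp_restart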
    

    You should now be able to mina deploy. As a sidenote, mina can rollback too -- mina rollback.

  20. Create a systemd service for Puma

    /etc/systemd/system/puma.service:

    [Unit]
    Description=Puma Rails Server
    After=network.target
    
    [Service]
    Type=simple
    User=deploy
    Environment=RAILS_ENV=production
    Environment=PORT=80
    ExecStart=/bin/bash -lc 'cd /home/deploy/app/current && rbenv exec bundle exec puma -C config/puma.rb'
    ExecStop=/bin/bash -lc 'cd /home/deploy/app/current && rbenv exec bundle exec pumactl -S /home/deploy/app/current/pids/puma.state stop'
    TimeoutSec=15
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
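
    After saving the unit file, reload systemd and start the service:

    sudo systemctl daemon-reload && sudo systemctl start puma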
    
  21. Configure Nginx.

    Make Nginx run under the deploy user; that makes permissions a lot easier. The user directive lives in the main config file:

    /etc/nginx/nginx.conf:

    user deploy;
    

    Set up your app-specific Nginx configuration by creating a conf.d file:

    /etc/nginx/conf.d/your_app.conf:

    upstream app {
      server unix:/srv/app/shared/sockets/puma.sock fail_timeout=0;
    }
    
    server {
      listen 443 ssl http2;
      listen [::]:443 ssl http2;
    
      server_name your-domain.com;
      root /srv/app/current/public;
    
      try_files $uri/index.html $uri @app;
    
      location @app {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
      }
    }
    
    server {
      listen 80;
      listen [::]:80;
    
      server_name your-domain.com;
    
      location / {
        return 301 https://your-domain.com$request_uri;
      }
    }
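
    Check the configuration and restart Nginx to pick up the changes:

    sudo nginx -t && sudo systemctl restart nginx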
    
  22. Start your app on bootup: sudo systemctl enable puma && sudo systemctl enable nginx.

  23. Allow non-root users to bind to lower numbered ports such as 80 and 443:

    sudo setcap 'cap_net_bind_service=+ep' "$(rbenv prefix)/bin/ruby"
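
    Note that the capability attaches to that specific binary, so re-run this after installing a new Ruby version. You can verify it took effect with getcap:

    getcap "$(rbenv prefix)/bin/ruby"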
    
  24. Set up free SSL via EFF's Certbot. Pretty self-explanatory to get it set up. Note that it will add some lines to your_app.conf Nginx configuration.

  25. Reboot your instance entirely to test that it starts all services correctly.

Create space for private configuration

Experience tells me to always avoid checking in sensitive private configuration like API keys and passwords.

If you took the Linode approach, simply define them as environment variables in /home/deploy/.bashrc:

export FOO=bar

An exception to this is Rails' encrypted credentials (config/credentials.yml.enc). By design, this is an encrypted YAML file versioned in Git. Rails creates it on app initialization, and it's decrypted by a separate key, config/master.key, which is ignored via .gitignore. You can remove master.key and instead provide its contents in an environment variable called RAILS_MASTER_KEY on deployment. Google "rails credentials:edit" for more information.
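
To edit the encrypted credentials locally:

EDITOR=vim bin/rails credentials:edit

On the server, supply the key via the environment, e.g. in /home/deploy/.bashrc (the value below is a placeholder):

export RAILS_MASTER_KEY=0123456789abcdef0123456789abcdef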

Google Cloud App Engine doesn't have a private space for environment variables. One solution is to leverage the dotenv-rails gem. Install the gem for all groups, despite the docs limiting it to development and test:

gem 'dotenv-rails'

Check in a .env file in the root of your app:

export FOO=bar

Copy this file to somewhere temporary, and change the contents to:

export FOO=baz

You have just created a variable whose value is "bar" in non-production and "baz" in production.

Now, go to the big Google Cloud menu, and navigate to Storage > Storage > Browser. Click into the bucket automatically created for your application. Upload the copied .env file, as well as your Rails master.key (in config/) to this bucket.

Open up config/environments/production.rb, and add this:

# Load ENV on GCP
require 'google/cloud/storage' # provided by the google-cloud-storage gem
require 'ostruct'

storage = Google::Cloud::Storage.new
bucket = storage.bucket("#{ENV.fetch('GOOGLE_CLOUD_PROJECT')}.appspot.com")

# Download .env from the bucket and load its variables into an OpenStruct
path = Rails.root.join('tmp', 'env').to_s
bucket.file('.env').download(path)

config.env = OpenStruct.new(Dotenv.load(path))

# Download master.key so Rails can decrypt credentials.yml.enc
path = Rails.root.join('config', 'master.key').to_s
bucket.file('master.key').download(path)

On app startup, this copies .env and master.key from your app's bucket to tmp/env and config/master.key, respectively. It also loads the variables in .env into an OpenStruct that lives at Rails.application.config.env.

That makes the variable FOO available as Rails.application.config.env.FOO, for instance. You have to access all configuration that way rather than via ENV['FOO'], which is a bit of a drawback, but the benefit is having somewhere to put this stuff on GCP.

Supercharge Minitest

Minitest doesn't support running tests at line numbers out of the box. Unless you prefer something like Guard, I highly recommend setting up the m gem, which remedies this.

group :test do
  gem 'm'
  gem 'spring-commands-m'
end

Set up a binstub, which runs under spring:

bundle
spring stop
bundle exec spring binstub m
bundle binstubs bundler --force

You are now able to run tests such as:

bin/m test/foo/bar/baz_controller_test.rb:123

BTW, if you are using vim-vroom, configure as:

let g:vroom_use_binstubs = 1
let g:vroom_test_unit_command = 'm'

Set up JSON test helpers

Assuming your app serves JSON, a nice way to define responses is with JBuilder. Assist your dev cycle by printing the full JSON output when a test fails and by exposing a json variable to test against. Place this in test/test_helper.rb, within the ActiveSupport::TestCase class definition:

# Parse the response body into an OpenStruct for convenient assertions
def json
  @json ||= JSON.parse(response.body, object_class: OpenStruct)
end

# When a test fails, print the full response body to aid debugging
def after_teardown
  if !passed? && respond_to?(:response) && response.present?
    puts JSON.pretty_generate JSON.parse(response.body)
  end
rescue JSON::ParserError
  nil
ensure
  super
end

That lets you write a test such as:

test '#foo' do
  o = foos(:my_fixture_name)
  get some_route_helper_url(o, format: :json)

  assert_response :ok

  assert_kind_of OpenStruct,  json
  assert_equal o.id,          json.id
  assert_equal o.property,    json.property
end

Configure CORS

CORS is necessary if you call your app from a page running JavaScript served under a different domain. If you plan on doing that, there's a lot to know about CORS. Just to get your feet wet, set a couple of headers controller-wide in Rails:

class ApplicationController < ActionController::API
  before_action do
    headers['Access-Control-Allow-Origin'] = '*'
    headers['Access-Control-Allow-Headers'] = 'origin, content-type, accept, user-agent'
  end
end

That allows any origin (caller of your API) to make HTTP requests to your app. You'll want to set this to the actual originating domain name in production.
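
One low-effort way to do that is to read the allowed origin from the environment; ALLOWED_ORIGIN here is a hypothetical variable you'd export on the server:

before_action do
  # ALLOWED_ORIGIN is a hypothetical env variable; fall back to * outside production
  headers['Access-Control-Allow-Origin'] = ENV.fetch('ALLOWED_ORIGIN', '*')
  headers['Access-Control-Allow-Headers'] = 'origin, content-type, accept, user-agent'
end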

Browsers also sometimes make what's called a preflight request to your endpoints to discover their capabilities. Preflight requests use the OPTIONS HTTP verb. Support this by adding a route at the collection level for each of your API resources:

match 'objects(*path)', controller: :objects, action: :options, via: :options
resources :objects

That allows any resourceful route starting with objects/ to also be called via the OPTIONS verb. Route it to a simple, bodyless action that specifies which verbs your resource supports:

class ObjectsController < ApplicationController
  def options
    response.headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, PATCH, DELETE'
    head :ok
  end
end

This simplistic approach allows any caller to make any request using any of the core resourceful verbs to any of the core resourceful routes. This is just the beginning of proper CORS handling, however. A mature app should use a gem such as rails_http_options for more robust CORS handling. As another option, rack-cors provides middleware that keeps this logic out of your controllers:

config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    resource '*', headers: :any, methods: %i(get post delete put patch)
  end
end

Version your API

You will iterate on your API design. For best handling of breaking changes, wrap all controllers with an API version:

# routes.rb
namespace :api1, path: 'api/v1' do
  # ...
end

# any controller
module Api1
  class ObjectsController < ApplicationController
    # ...
  end
end

Autoload lib

By default you have to explicitly require anything you add under lib/. Traditional thinking mandates that anything in lib/ should be application-agnostic, but I suggest moving past that: it's a great spot for classes that aren't quite models, for service classes, or for anything else that needs a home to keep controllers skinny. Add this to config/application.rb:

config.autoload_paths << Rails.root.join('lib')
config.eager_load_paths << Rails.root.join('lib')
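
With that in place, a class like this hypothetical one is picked up automatically, as long as the constant name matches the file name:

# lib/report_generator.rb
class ReportGenerator
  def self.call(records)
    records.map(&:to_h)
  end
end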

Avoid reinventing authentication

I strongly advise minimizing time spent on boilerplate features. Authentication has to be the most boilerplate of the boilerplate. How many implementations exist across the whole internet? A staggering number.

There are many OmniAuth options with the big players (Google, Facebook, etc.); however, keep in mind it can be difficult to remember which third party you authenticated with for a particular site. Those using password managers may prefer a standard username/password scheme.

For that, the Devise gem has always been a huge time saver. devise_token_auth exposes authentication functionality as endpoints for a single-page app, and has pretty decent documentation. Take a look through it here:

https://devise-token-auth.gitbook.io/devise-token-auth/

I've spent some time working out the basics for using Devise Token Auth with a Vue app. If you are in this boat too, have a look at Integrating devise_token_auth with a Vue app.

Build your app

That should give you plenty of development momentum for your next big idea.

You're likely building a single-page app, and if that's true, here's a shameless plug for Vue. At the latest RubyConf, I was saddened not to run into a single developer building in Vue; everyone seems to be on React. I'll just leave you with this question: after so much effort to separate them, are you truly on board with bringing back mixed HTML and JS? Vue pushes a three-section design of HTML, JS, and CSS that really makes too much sense.

So onward and upward, go forth and prosper, or whatever cutesy idiom you want. And remember, cozy up to the defaults first. Wait until you outgrow them, then upgrade. In that spirit, I'll leave you with some links:

For an example app that inspired this blog post, check out one of my side projects:

https://gitlab.com/mcnelson/dev-activity
