Kubernetes and Istio High Level Overview


I have been using Kubernetes in production for a while now, and I would like to record my own mental model and understanding of it in this article, and maybe talk a little bit about my experience with it.

Learning Kubernetes is kind of like learning how to ride a bike. At first, it’s extremely complicated, but after you get the hang of it, it becomes extremely powerful and can get you pretty far along the DevOps path. Not sure if that analogy made any sense, but I’ll just stick with it for now.

I believe the first hurdle to learning Kubernetes is in understanding the theory behind it. There are many books and courses out there that go into much greater detail, but my goal for this article is to distill the information into a concise overview of the purpose and high level architecture of Kubernetes, enough that hopefully even a non-dev would understand. I have on occasion received questions about what Kubernetes is from project managers and other business people. And I recall somewhat struggling to provide a clear answer. So hopefully, writing this will help clear up my own understanding of this new and increasingly popular technology.

As a disclaimer, I do not claim to be a pro on this topic, but I do have experience setting up and managing Kubernetes clusters on GCP/GKE for real clients for over two years. Previously, my experience consisted of deploying websites to standard web hosting services like DreamHost and PaaS like Heroku. With that said, let’s start with “What is Kubernetes?”

Container Orchestration

Kubernetes is “Container Orchestration”. What does that mean exactly? Well, Kubernetes is a tool that allows you to describe the desired state of your infrastructure in configuration files (aka manifest files). With these files, you can then tell Kubernetes to go and make it happen. I write the configuration files that specify which containers I want to run and how many copies of each. I send them off to Kubernetes, and it magically schedules the containers to start running on my cluster, so long as I have enough resources (CPU and memory) to run them.
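As a rough sketch of what such a manifest looks like (the names and image tag here are placeholders, not from a real project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-service
spec:
  replicas: 2                 # how many copies (pods) to keep running
  selector:
    matchLabels:
      app: backend-service
  template:
    metadata:
      labels:
        app: backend-service
    spec:
      containers:
        - name: backend-service
          image: gcr.io/my-project/backend-service:1.0.0  # placeholder image
          resources:
            requests:
              cpu: 100m       # resources Kubernetes uses for scheduling
              memory: 128Mi
```

You would hand this to the cluster with `kubectl apply -f deployment.yaml`, and Kubernetes takes it from there.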

How does the magic happen? Luckily for us, most of this has been abstracted away by the creators of Kubernetes, and it’s not something we really need to know to get started, although it is probably good to know. In a nutshell, the Kubernetes master server talks to all of the connected nodes, each of which has a kubelet process running on it. That, to be honest, is about as much as I know. I have never needed to dig into the internals, and I’ve gotten this far without doing so.

I should also explain the term “cluster”, since it gets thrown around a lot when speaking about Kubernetes. A cluster is simply a group of nodes connected to the same Kubernetes network. A node is simply a compute instance. And a compute instance is simply a unit of compute resources (CPU and memory) that you use to run apps, services, or whatever else you might want to run in the cloud.

So when we send our configuration to Kubernetes and tell it, “Hey, Kubernetes master, I want this backend-service container running!”, essentially what happens is the Kubernetes master sends a message to a node with available resources to spin up a “Pod” to run your “backend-service” container. The pod pulls the image from a container registry and spins up a Docker container. Assuming everything goes well, the pod reports back to the master with a status update saying everything is good to go. Then, you as the developer can see the result of the deployment by issuing a `kubectl get pod` command. If it says “Running”, then you know the deployment succeeded. The status will always be up to date thanks to Kubernetes’ health check mechanisms, so you’ll always know whether a pod is alive or dead.

Deployments, Pods, and Services

Deployment is another word that gets thrown around a lot, along with service and pod. Fortunately, these are the three main Kubernetes objects you’ll need to get familiar with. You can think of a deployment as a set of instructions for Kubernetes to deploy a specific container to your cluster. A pod is a unit of deployment, which typically runs a single container. Sometimes there can be more than one container, as in the case of an Istio sidecar container, but that is outside the scope of this article. And a service is what you use to expose a group of pods to the rest of your network.
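To make the service object concrete, here is a hedged sketch of a manifest that exposes pods labeled `app: backend-service` inside the cluster (all names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend-service   # send traffic to pods carrying this label
  ports:
    - port: 80             # port the service listens on inside the cluster
      targetPort: 8080     # port the container itself listens on
```

Other pods in the cluster can then reach these containers at `http://backend-service` without knowing which node they landed on.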

As long as a deployment for a container exists, Kubernetes will work to make sure that container is always up and running. If the pod with the container ever fails, Kubernetes will restart it. This brings us to Kubernetes’ self-healing capability: if you delete a pod on purpose, it will automatically respawn as long as the deployment for it exists. The analogy I use here is that a deployment is like a magical scroll (the manifest file) that summons a minion (the pod) to help you achieve your mission. The number of minions (pods) that get summoned is defined by the replicas option in the file.

Ingress and Istio

So once we’ve exposed the service to our private network, how do we expose it to the world? For this, we introduce the ingress. The ingress allows traffic from the outside world to flow through to our services, routing it based on host and path. As far as I know, the best practice is to use a third-party solution for the ingress functionality. Initially, I started with ingress-nginx, but I have transitioned to Istio, a full service mesh solution with powerful traffic management, monitoring, and tracing features. Istio also seems to be growing in popularity alongside Kubernetes. It is even included as an option during the GKE installation process, which is evidence of its adoption in the community.

Hopefully, this text diagram will suffice in demonstrating the request flow:

Traffic -> Gateway -> Virtual Service -> Service -> Pod

It might also be worth noting that when Istio is installed on your Kubernetes cluster on GKE, the gateway is automatically bound to a GCP load balancer and assigned an IP address. You can also assign a static IP, which I may write about in a future post. The Gateway and Virtual Service are Istio-specific features, but for all intents and purposes, they are what provide the ingress functionality in this example. The analogy I like to use for the ingress is that it is very similar to an application router. Think of a React router or a Ruby on Rails router. Essentially, the router is responsible for accepting a request, which is usually represented by a URL, routing it to the proper place, whether that be a controller action or a page, and then returning the response.
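A hedged sketch of the Gateway and Virtual Service pair for the request flow above might look like this (the hostnames and service names are placeholders, and the exact schema may vary with your Istio version):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway    # bind to Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend-service
spec:
  hosts:
    - "example.com"
  gateways:
    - my-gateway
  http:
    - match:
        - uri:
            prefix: /api     # route based on path, like an app router
      route:
        - destination:
            host: backend-service   # the Kubernetes service name
            port:
              number: 80
```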


So why do we have to go through all this in order to get a simple app running? How is this simpler than running `git push heroku master`? I certainly asked myself this when I started using K8s. I believe it just comes down to your project requirements. Do you need a microservice architecture where containers can speak to each other within your private network? Do you want granular control over the resource allocation for each service running in your infrastructure? Does your project require the orchestration of multiple containers running simultaneously in your cluster? Do you need to be able to add more compute resources to your cluster on demand or with autoscaling? Or perhaps you need to “scale your infrastructure without scaling your DevOps team”? If you answered yes to any of these, then Kubernetes is probably a good choice. In general, Kubernetes is more suitable for big, complex projects. For hobby projects, I’d stick with Heroku or DreamHost.

I hope that helps to clarify some of the concepts around K8s and Istio, and didn’t add to the confusion. Please let me know if there is anything incorrect or if there’s a better way to think about these concepts. I am still learning about this topic and would love to hear what you think.

How to deploy a Vue.js app to Heroku


This is the quickest way that I have found to deploy a Vue + webpack app that was generated by the vue-cli to Heroku. The idea is to have Heroku build the app and serve it with express in minimal time.

Step 1. Add these postinstall and start hooks to your package.json scripts section:

"postinstall": "npm run build",
"start": "node server.js"

Step 2. Create a server.js file in the root of your app:

var express = require('express')
var path = require('path')
var serveStatic = require('serve-static')

var app = express()
app.use(serveStatic(path.join(__dirname, 'dist')))

var port = process.env.PORT || 5000
app.listen(port)
console.log('server started ' + port)

Step 3. Configure Heroku to install dev dependencies:

This step is required to get the necessary packages installed to run and serve the build on Heroku, since all the required packages were placed into devDependencies when the app was generated.

$ heroku config:set NPM_CONFIG_PRODUCTION=false

The alternative to this step would be to move all the devDependencies into dependencies.
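For example, the package.json might then look something like this (the package names and versions are illustrative; your generated app will differ):

```json
{
  "scripts": {
    "postinstall": "npm run build",
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.0",
    "serve-static": "^1.13.0",
    "vue": "^2.5.0",
    "webpack": "^3.6.0"
  }
}
```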



7 Tips for Recent Coding Bootcamp Grads


1. Keep working on interesting projects and finish what you start. Continue pushing to github. Projects must get progressively more challenging and impressive.

2. Build a portfolio website and post all of your projects there. Having a site makes it easier to sell your skills during interview.

3. Practice difficult programming problems. Do “code katas” on codewars.com, topcoder, project euler etc. This will help with those interview questions.

4. Take online courses. I recommend code school, pluralsight, and coursera. Then apply what you learn in projects.

5. Read a lot of programming books.

6. Watch screencasts. These are useful when you’re working on projects and need to figure out how to implement a specific feature.

7. Build your “brand” and online presence. Start a blog and write about programming stuff. Fill in linkedin with projects and clients. Get testimonials. Etc.

Convert Exported Evernote HTML Files Into Text Files


I’ve been using Evernote for a while, and I think it’s great. It makes it easy to flip through notes and has a super useful full text search feature. While they do have a way to import text files, they don’t have a way to export to text. Here is a ruby script I wrote to convert their exported HTML files into text:

require 'nokogiri'

# Directory containing the exported Evernote HTML files
notes_dir = File.expand_path(ARGV[0])
Dir.chdir(notes_dir)

Dir.mkdir("evertextfiles") unless File.directory?("evertextfiles")
evertextfiles_dir = File.expand_path("evertextfiles")

Dir.glob("*.html") do |filename|
  new_filename = filename.sub(".html", ".txt")
  html = Nokogiri::HTML(IO.read(filename))
  text = html.at('body').inner_text
  File.open(File.join(evertextfiles_dir, new_filename), "w") do |new_file|
    new_file.puts text
  end
end

To run it:

ruby evertext.rb path/to/notes

Lessons Learned Building a Rails App With Wildcard Subdomains


I recently launched my first software-as-a-service web app called MicroSweepstakes, which allows you to build a simple sweepstakes website and have it hosted on a subdomain of microsweepstakes.com. This follows the same pattern as other do-it-yourself site builders like Squarespace, Weebly, WordPress, and Wix. There were a few key things I learned in the process.

Some DNS services support wildcard domains

I first bought a domain on DreamHost, but quickly learned that DreamHost did not support wildcard domains. According to the forums, you can activate wildcard domains, but you have to use their VPS services and then go through the process of contacting DreamHost support. I did initially try to work with support to get the wildcard domains set up and pointed at my Digital Ocean IP, but eventually gave up due to the hassle. I ended up purchasing another domain on DNSimple, and the process was fairly simple. The wildcard domain I needed worked right off the bat. The annual expense for DNSimple ended up being $50+, after all of their service fees, WHOIS protection, and domain renewal, as opposed to DreamHost’s $10.95/year. The lesson here is to figure out if you need wildcard subdomains before purchasing your domain. Some providers support it. Some don’t.

Two types of SSL certificates

There are two types of SSL certificates you can purchase through DNSimple/Comodo: a single subdomain SSL certificate, which runs for $20/year, and a wildcard subdomain SSL certificate, which runs for $100/year. Without thinking it through carefully, I purchased the single subdomain SSL certificate, assuming that I would need it only for the checkout page on my web app. By purchasing the single subdomain SSL certificate, I added another layer of complexity to my multiple subdomain app. Because some pages required SSL and some didn’t, I had to work the logic into my Rails app. I figured it would have been easier to just purchase the wildcard SSL certificate and make everything work over https instead of a mix of http and https. So that is exactly what I did: I purchased the $100 SSL certificate. Then I realized that because my app supports custom domains, which allow users to point their own domain at my app, the wildcard SSL certificate for microsweepstakes.com was essentially useless: each custom domain would need its own SSL certificate for the entire site to run on https. I decided to go back to using the single subdomain SSL certificate only on the user dashboard, and to disable SSL on any custom domain parts of the app. The lesson learned here is to think carefully about how to structure the secure parts of an app before rushing into purchasing SSL certificates.

Working with subdomains locally

There are a few strategies for working with subdomains locally: Pow and lvh.me. With Pow, you can run your app on a .dev top-level domain locally, for example microsweepstakes.dev. With lvh.me, you can run your subdomain on lvh.me:3000. There are a few things to consider. With lvh.me, byebug and/or pry work out of the box: you just run your server and watch the log output, which stops at any byebug/pry breakpoint. The only downside is that it requires an internet connection to work. Pow, on the other hand, doesn’t require an internet connection. The downsides are that it is difficult to set up and byebug/pry does not work out of the box. You’ll have to connect to byebug/pry through the command line, and it made the app slow when connected. I would work with lvh.me as long as I have an internet connection, which is 99% of the time.

Rails route constraints

Rails has some useful features for dealing with subdomains. In your routes file, you can use lambdas to add constraints to your routes. Here is an example:

# microsite routes for custom domains
get '', to: 'microsite#home', constraints: lambda { |r| !(r.host =~ /microsweepstakes\.com|microsweepstakes\.dev|lvh\.me/) }
get ':permalink', to: 'microsite#show', constraints: lambda { |r| !(r.host =~ /microsweepstakes\.com|microsweepstakes\.dev|lvh\.me/) }
post 'create_entry', to: 'microsite#create_entry', constraints: lambda { |r| !(r.host =~ /microsweepstakes\.com|microsweepstakes\.dev|lvh\.me/) }

# microsite routes for microsite domain
get '', to: 'microsite#home', constraints: lambda { |r| r.subdomain.present? && r.subdomain != 'www' }
get ':permalink', to: 'microsite#show', constraints: lambda { |r| r.subdomain.present? && r.subdomain != 'www' }
post 'create_entry', to: 'microsite#create_entry', constraints: lambda { |r| r.subdomain.present? && r.subdomain != 'www' }

Each lambda places a condition on its route by inspecting the request object, so you can route requests based on the subdomain and/or host name.
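If the lambdas start to repeat themselves, Rails also accepts any object that responds to matches?(request) as a constraint. Here is a hedged sketch of that pattern; the class name and reserved-subdomain list are my own, not from the actual MicroSweepstakes code:

```ruby
# A reusable route constraint that matches tenant subdomains
# (any subdomain except reserved ones like "www").
class TenantSubdomainConstraint
  RESERVED = %w[www].freeze

  def matches?(request)
    subdomain = request.subdomain.to_s
    !subdomain.empty? && !RESERVED.include?(subdomain)
  end
end

# In config/routes.rb you could then write:
#
#   constraints TenantSubdomainConstraint.new do
#     get '', to: 'microsite#home'
#     get ':permalink', to: 'microsite#show'
#   end
```

This keeps the routes file readable and gives you one place to test the subdomain logic.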


Working with dynamic subdomains and SSL certificates can get extremely complicated with many gotchas. Think carefully about your SSL needs before spending your money. Make sure your DNS provider supports wildcard domains before registering, if you need wildcard subdomains. Use lvh.me to work with subdomains. Use lambdas to place constraints on routes.

SSL Certificate, Nginx, Ruby on Rails


Prerequisites: you have Google Apps set up and can already receive e-mail at admin@yoursite.com.

Purchase an SSL certificate from DNSimple.

Follow instructions sent to admin@yoursite.com for domain control validation.

In production.rb, change the force_ssl option to true.

config.force_ssl = true

Download your certificates from DNSimple. It can take up to an hour before they appear.

Rsync your SSL certificates to your server.

$ rsync -av www_yoursite_com.pem deploy@yoursite.com:~/ssl/
$ rsync -av www_yoursite_com.key deploy@yoursite.com:~/ssl/

On the server, move the certs into /etc/nginx/ or wherever you want to put them.

$ sudo mv ~/ssl/www_yoursite_com.pem /etc/nginx/
$ sudo mv ~/ssl/www_yoursite_com.key /etc/nginx/

Edit your Nginx configuration found in /etc/nginx/sites-available.

The `proxy_set_header X-Forwarded-Proto https;` directive is necessary to prevent an infinite redirect loop.

upstream app {
  server unix:/tmp/unicorn.yoursite.sock fail_timeout=0;
}

server {
  listen 443;

  ssl on;
  ssl_certificate /etc/nginx/www_yoursite_com.pem;
  ssl_certificate_key /etc/nginx/www_yoursite_com.key;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

  server_name www.yoursite.com yoursite.com;
  root /var/www/yoursite/current/public;
  try_files $uri/index.html $uri @app;

  location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app;
  }

  error_page 500 502 503 504 /500.html;
  client_max_body_size 4G;
  keepalive_timeout 10;
}

Restart Nginx.

$ sudo /etc/init.d/nginx restart




Ruby on Rails, Sending Email with sendmail on Ubuntu VPS



In config/environments/production.rb, set the Action Mailer options:

config.action_mailer.delivery_method = :sendmail
config.action_mailer.default_url_options = { host: "yourdomain.com", port: 25 }

Install sendmail

$ sudo apt-get install sendmail

Configure hosts file correctly:

$ nano /etc/hosts

And make sure the line looks like this: 127.0.0.1 localhost localhost.localdomain yourhostnamehere

Run hostname if you don’t know your hostname.

Reload /etc/hosts so that the previous changes take effect. `sudo /etc/init.d/networking restart` works but seems to be deprecated, so better to use:

$ sudo /etc/init.d/networking stop
$ sudo /etc/init.d/networking start

Run the sendmail config and answer ‘Y’ to everything:

$ sudo sendmailconfig



Ruby on Rails, Nginx, Unicorn, Postgresql, Capistrano


This is a step-by-step guide for anyone who is trying to learn the basics of setting up Ruby on Rails, Nginx, Unicorn, and Postgresql to run on a VPS and use Capistrano to automate deployment. For this tutorial, we will use Digital Ocean as the VPS provider because it is cheap, fast, and simple to use.

Software versions used in this tutorial:

Ruby 2.1.3
Rails 4.2.0
Postgresql 9.3.6
Ubuntu 14.04 x64
Nginx 1.4.6
Unicorn 4.8.3
Capistrano 3.4.0

For reference, here is the git repo of this deploydemo app:


Step 1: Preparation.

Spin up a Digital Ocean droplet with Ubuntu 14.04 x64.

Create a new repository on GitHub or Bitbucket. You’ll need this for Capistrano to pull from during deploys.

Step 2: Create a simple Rails app.

Create a new rails app. We will set the database to postgresql.

user@local $ rails new deploydemo -d postgresql

Commit and push app to git origin.

user@local $ cd deploydemo
user@local $ git init
user@local $ git add .
user@local $ git commit -m "initial commit"
user@local $ git remote add origin git@github.com:travisluong/deploydemo.git
user@local $ git push origin master

Run bundle to install dependencies.

user@local $ bundle

Create postgresql database for development.

user@local $ createdb deploydemo_development

Create a Rails scaffold.

user@local $ bin/rails g scaffold post title content:text
user@local $ bin/rake db:migrate

Set the root to the scaffold index in config/routes.rb.

root 'posts#index'

Commit the changes.

user@local $ git add .
user@local $ git commit -m "scaffold"

Step 3: Install and configure Capistrano and Unicorn.

Add these gems to Gemfile.

gem 'unicorn'

group :development do
  gem 'capistrano-rails'
  gem 'capistrano-rvm'
  gem 'capistrano3-unicorn'
end

Run bundle.

user@local $ bundle

Install Capistrano. This will create some files.

user@local $ cap install

In config/deploy.rb, set the application, repo_url, deploy_to with your settings. You can uncomment linked_files, and linked_dirs. You might also want to set format to pretty and log level to info to get rid of some unimportant (failed) messages from Capistrano.

set :application, 'deploydemo'
set :repo_url, 'git@github.com:travisluong/deploydemo.git'
set :deploy_to, '/var/www/deploydemo'
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', 'public/system')
set :format, :pretty
set :log_level, :info

Add require statements in Capfile. This will make it work with RVM and add extra rake tasks to Capistrano.

require 'capistrano/rails'
require 'capistrano/rvm'
require 'capistrano3/unicorn'

Add server to config/deploy/production.rb. You can find the server IP from your Digital Ocean dashboard. You’ll also want to set the unicorn_rack_env to production, since the gem uses “deployment” environment by default for some reason.

server '162.xxx.xxx.xx', user: 'deploy', roles: %w{app db web}
set :unicorn_rack_env, -> { "production" }

Commit changes.

user@local $ git add .
user@local $ git commit -m "capistrano"

Create the unicorn configuration file in config/unicorn/production.rb.

working_directory '/var/www/deploydemo/current'
pid '/var/www/deploydemo/current/tmp/pids/unicorn.pid'
stderr_path '/var/www/deploydemo/log/unicorn.log'
stdout_path '/var/www/deploydemo/log/unicorn.log'
listen '/tmp/unicorn.deploydemo.sock'
worker_processes 2
timeout 30

before_fork do |server, worker|
  old_pid = '/var/www/deploydemo/current/tmp/pids/unicorn.pid.oldbin'
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end

Here is what this file is doing:

  1. Set the working directory of the app.
  2. Set the unicorn pid file location. That contains the id of the unicorn process.
  3. The unicorn log files.
  4. The socket file that connects to Nginx.
  5. The number of workers each master process will spawn.
  6. The amount of time a request is given before Unicorn kills the process.
  7. A before fork that kills the old process when a new process is started, so we can achieve zero downtime deploys.

Commit changes and push to origin.

user@local $ git add .
user@local $ git commit -m "unicorn"
user@local $ git push origin master

Step 4: Create a deploy user on your VPS.

ssh in to remote server.

user@local $ ssh root@162.xxx.xxx.xx

Create a deploy user and give sudo privileges. You can leave all the fields blank, except for password.

root@remote $ adduser deploy
root@remote $ adduser deploy sudo

Switch to deploy user.

root@remote $ sudo su deploy

Step 5: Set up all of the SSH keys.

cd into home and make a .ssh directory.

deploy@remote $ cd
deploy@remote $ mkdir .ssh

Copy your ssh key from local over to remote for password-less login.

user@local $ cat ~/.ssh/id_rsa.pub | ssh -p 22 deploy@162.xxx.xxx.xx 'cat >> ~/.ssh/authorized_keys'

Follow the instructions in the link below to add an ssh key to GitHub for your remote VPS. You need to do this so that Capistrano can pull the application from GitHub on deploys. The commands in this guide should be run on the VPS as your deploy user.


Step 6: Install packages and dependencies.

Run update.

deploy@remote $ sudo apt-get update

Install packages.

deploy@remote $ sudo apt-get install -y curl git-core build-essential zlib1g-dev libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libcurl4-openssl-dev libxml2-dev libxslt1-dev python-software-properties

Install node.js for JavaScript runtime.

deploy@remote $ sudo apt-get install -y nodejs

Install Postgresql. libpq-dev is needed for building the pg gem later.

deploy@remote $ sudo apt-get install -y postgresql postgresql-contrib libpq-dev

Install nginx. If you navigate to your IP in your browser, you should see an nginx welcome page.

deploy@remote $ sudo apt-get install -y nginx

Step 7: Set up the postgres user and create production database.

Create the production database.

deploy@remote $ sudo -u postgres createdb deploydemo_production

Set up password for “postgres” user.

deploy@remote $ sudo -u postgres psql
postgres=# \password postgres
postgres=# \q

Step 8: Install RVM, ruby, and bundler.

Add a line to your .gemrc to turn off document generation. Document generation takes way too long.

deploy@remote $ echo "gem: --no-document" >> ~/.gemrc

Install rvm and ruby.

deploy@remote $ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
deploy@remote $ \curl -sSL https://get.rvm.io | bash -s stable
deploy@remote $ source /home/deploy/.rvm/scripts/rvm
deploy@remote $ rvm install 2.1.3

Install bundler.

deploy@remote $ gem install bundler

Step 9: Create the directories and shared files needed for Capistrano deployment.

Make /var/www directory in remote.

deploy@remote $ sudo mkdir /var/www

Change the owner of /var/www to deploy.

deploy@remote $ sudo chown deploy /var/www

Make the shared config directory.

deploy@remote $ mkdir -p /var/www/deploydemo/shared/config

Make log directory for unicorn log.

deploy@remote $ mkdir -p /var/www/deploydemo/log

Create the shared database.yml file that will be shared between releases.

deploy@remote $ sudo vim /var/www/deploydemo/shared/config/database.yml
production:
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  database: deploydemo_production
  username: postgres
  password: password
  host: localhost

Run rake secret to generate a secret key. You will put that in the shared secrets.yml file.

user@local $ bin/rake secret
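For what it’s worth, my understanding is that the output of rake secret is essentially 64 random bytes hex-encoded into a 128-character string, which you can reproduce with Ruby’s standard library:

```ruby
require 'securerandom'

# 64 random bytes, hex-encoded: a 128-character secret,
# the same shape as the output of `rake secret`.
secret = SecureRandom.hex(64)
puts secret.length  # 128
```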

Create the shared secrets.yml file and put in the secret key you generated in the last step.

deploy@remote $ sudo vim /var/www/deploydemo/shared/config/secrets.yml
  secret_key_base: 94d04182d80fe4ea1ec41b6839b019a02e8a3f8cfa0696ee3b5281d5512473c8483334b23f31bd7fcdf3914263d0719c819494613e3d6ffb1792a45b6277da66

Add the RAILS_ENV variable to .bashrc so Unicorn can load the right environment.

deploy@remote $ echo 'export RAILS_ENV=production' >> ~/.bashrc
deploy@remote $ source ~/.bashrc

Run cap production deploy:check to make sure all your files and directories are in place. Everything should be successful.

user@local $ cap production deploy:check

Step 10: Configure and restart Nginx.

Back up the default file. I put it in the home directory for now.

deploy@remote $ sudo mv /etc/nginx/sites-available/default ~

Create a new nginx default file with these settings. Note that the socket is the same one specified in the Unicorn configuration.

deploy@remote $ sudo vim /etc/nginx/sites-available/default
upstream app {
  server unix:/tmp/unicorn.deploydemo.sock fail_timeout=0;
}

server {
  listen 80;
  server_name localhost;
  root /var/www/deploydemo/current/public;
  try_files $uri/index.html $uri @app;

  location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app;
  }

  error_page 500 502 503 504 /500.html;
  client_max_body_size 4G;
  keepalive_timeout 10;
}

Restart nginx.

deploy@remote $ sudo service nginx restart

Step 11: Deploy the app.

Run cap production deploy. The first deploy will take a while since it has to run bundle and install all dependencies.

user@local $ cap production deploy

Start the unicorn workers.

user@local $ cap production unicorn:start

You should see your simple CRUD app when you go to your IP in the browser.


I hope you have found this guide helpful. If you see any improvements that can be made to this guide or have any questions, feel free to contact me.


If you’re having issues with symlinks, deleting the “current” symlink and running deploy again might help.

If there’s an error, look at the Capistrano output. There’s usually some information that can help you debug the problem.

Use the cap -T command to see all Capistrano commands.

Use ps aux | grep unicorn to check unicorn processes. Capistrano has some commands to stop and restart unicorn workers, but use kill [pid] to kill the process manually if needed.


How to Set Up a Wildcard Subdomain on Heroku


Recently, I needed to get subdomains working on Heroku for a multitenant application. It might be slightly more expensive, but I highly recommend DNSimple over other services for a few reasons. DNSimple supports ALIAS records, which allow you to use the root domain. They also support wildcard subdomains. And things just work: no need to contact support to set up wildcard domains. DNSimple makes DNS simple. Outlined below is the general process for getting the DNS set up.

Step 1: Learn how to work with subdomains in Rails.

Follow instructions in railscast http://railscasts.com/episodes/123-subdomains-revised.

Step 2: Deploy your heroku app.

Step 3: Add the subdomain.

$ heroku domains:add *.your-domain.com

Step 4: Sign up for DNSimple account.

Step 5: Register a domain.

Click add domain.

Make sure “register or transfer this domain” is checked, otherwise you might end up adding a domain that isn’t even registered and wonder why things aren’t working.

Register the domain if it is available.

Step 6: Add Heroku service in DNSimple.

Click services and add heroku service.

Enter your heroku app.

Click on your domain.

Click on DNS.

Click Manage records.

Add a CNAME record for *.your-domain.com pointing to your-app.herokuapp.com.

Ruby on Rails Dynamic Forms


I needed to create a dynamic form where users can add additional fields to save additional properties onto a model. I followed the instructions on this RailsCast: http://railscasts.com/episodes/403-dynamic-forms

Everything worked fine except for one thing: I was on Rails 4, so I needed to add all of the nested attributes to the parameter whitelist. You also have to add id and _destroy as permitted parameters for the remove link to work.


def product_params
  params.require(:product).permit(:name, fields_attributes: [:name, :field_type, :required, :id, :_destroy])
end