<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Fideloper</title>
    <link>https://fideloper.com</link>
    <description>Coding, servers, and business.</description>
    <item>
      <title>Wildcard Subdomains: Multi-tenancy in Nginx</title>
      <link>https://fideloper.com/wildcard-subdomains-nginx-multitenancy</link>
      <description>Set up Nginx for multi-tenancy (wildcard subdomains) with a little special configuration I like to add.</description>
      <content:encoded><![CDATA[<iframe src="https://www.youtube.com/embed/B42obkAG3jU?si=oWDIOkqG65-Yo7ko" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p>Here we use LetsEncrypt (certbot) with the Cloudflare DNS plugin to generate a free, auto-renewing TLS certificate to use with Nginx.</p>
<p>Then we configure Nginx to use that TLS certificate and create a configuration to support multi-tenancy in our applications.</p>
<p>We use a special configuration to capture the value of the subdomain so we can pass it off to our PHP application (or do anything we want, like use dynamic app locations for local development - as described here: <a href="https://www.youtube.com/watch?v=SPHxW1C4G6I">https://www.youtube.com/watch?v=SPHxW1C4G6I</a> ).</p>
<p>Useful Links:</p>
<p>Install certbot: <a href="https://certbot.eff.org/">https://certbot.eff.org/</a>
Certbot challenge types: <a href="https://letsencrypt.org/docs/challenge-types/">https://letsencrypt.org/docs/challenge-types/</a>
My site: fideloper.com
My newsletter: <a href="https://fideloper.ck.page/">https://fideloper.ck.page/</a></p>
<h2>Install and Configure Certbot</h2>
<p>We can start by installing/configuring Certbot. In our case, we'll use the Cloudflare plugin to manage DNS.</p>
<p>Why do we need to manage DNS? Certbot requires a DNS challenge for wildcard certificates (instead of the HTTP challenge used for non-wildcard domains).</p>
<pre><code class="language-bash"># Install certbot on Ubuntu as per
# https://certbot.eff.org/instructions?ws=nginx&amp;os=snap

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot

sudo snap set certbot trust-plugin-with-root=ok
sudo snap install certbot-dns-cloudflare
</code></pre>
<p>We'll run things as user root, so we'll configure a location to store credentials for the Cloudflare API.</p>
<p>You'll need to generate an API token in Cloudflare (you can lock them down to be specific to managing one domain's DNS). Docs on <a href="https://certbot-dns-cloudflare.readthedocs.io/en/stable/">that are here</a>.</p>
<p>Create file <code>/root/.secrets/cloudflare.ini</code> and add something like:</p>
<pre><code class="language-ini">dns_cloudflare_api_token = 0123456789abcdef0123456789abcdef01234567
</code></pre>
<p>Then we can generate our wildcard certificate!</p>
<pre><code class="language-bash"># Optionally add --force-renewal if a
# certificate was already generated and
# you want to overwrite it
sudo certbot certonly \
    --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
    --post-hook &quot;service nginx reload&quot; \
    --non-interactive \
    --agree-tos \
    --email your-email-here \
    -d *.your-domain.tld \
    -d your-domain.tld
</code></pre>
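<p>As a sketch of where this ends up, here's roughly what the Nginx side might look like - the certificate paths are certbot's defaults, while the web root, PHP-FPM socket, and <code>TENANT</code> parameter name are illustrative:</p>
<pre><code class="language-nginx">server {
    listen 443 ssl;

    # Capture the subdomain into the $tenant variable
    server_name ~^(?&lt;tenant&gt;.+)\.your-domain\.tld$;

    # Certbot writes the wildcard certificate here by default
    ssl_certificate     /etc/letsencrypt/live/your-domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.tld/privkey.pem;

    root /var/www/app/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Hand the captured subdomain off to the PHP application
        fastcgi_param TENANT $tenant;
        include fastcgi_params;
    }
}
</code></pre>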
<p>Check out the video for more details on all of this.</p>
]]></content:encoded>
      <pubDate>Tue, 05 Dec 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>TLS Certificates in 30 Seconds</title>
      <link>https://fideloper.com/tls-certs-in-30-seconds</link>
      <description>We'll see the fastest, easiest way to set up an SSL (TLS) certificate using certbot.</description>
      <content:encoded><![CDATA[<iframe src="https://www.youtube.com/embed/RbJLplHRKCM?si=_v9yc72KdJttDhp-" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p>Let's see how to install, setup, and configure LetsEncrypt (certbot) with Nginx to get an SSL certificate in something like 30 seconds.</p>
<p>This will help you get and configure a TLS certificate that auto-renews itself via LetsEncrypt - you never have to think about it again!</p>
<p>When you install certbot, it will add a systemd timer. This timer periodically checks if the certificate needs renewing, and if so, does it! Configuration in /etc/letsencrypt keeps information about the certificates installed on the server, including post-renewal hooks (like running &quot;service nginx reload&quot;).</p>
<p>In our case, we'll use certbot's one-line command to obtain the certificate. We'll make sure Nginx is configured to allow requests to a .well-known directory. Finally, we see how Nginx should be configured to use the generated TLS certificates (thanks to H5BP Nginx server configs for making it so easy).</p>
<p>Here's some resources:</p>
<ul>
<li>LetsEncrypt <a href="https://letsencrypt.org/">https://letsencrypt.org/</a></li>
<li>Certbot <a href="https://certbot.eff.org/">https://certbot.eff.org/</a></li>
<li>H5BP Nginx: <a href="https://github.com/h5bp/server-configs-nginx">https://github.com/h5bp/server-configs-nginx</a></li>
<li>Video on setting up H5BP with Nginx: <a href="https://www.youtube.com/watch?v=d6kfuPo3Cnw&amp;ab_channel=ChrisFidao">https://www.youtube.com/watch?v=d6kfuPo3Cnw&amp;ab_channel=ChrisFidao</a></li>
</ul>
<h2>The Setup</h2>
<p>We can install and configure Certbot pretty easily:</p>
<pre><code class="language-bash"># Install certbot on Ubuntu as per 
# https://certbot.eff.org/instructions?ws=nginx&amp;os=snap
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
</code></pre>
<p>Then, assuming Nginx is up and running with a site on port 80, and nothing is blocking the <code>/path/to/web-root/.well-known</code> directory from serving files:</p>
<pre><code class="language-bash"># Optionally add --force-renewal if a
# certificate was already generated and
# you want to overwrite it
sudo certbot certonly --webroot \
    -w /var/www/app/public \
    -d someapp.xyz \
    -d www.someapp.xyz \
    --post-hook &quot;service nginx reload&quot; \
    --non-interactive \
    --agree-tos \
    --email your-email-here
</code></pre>
<p>Then you can configure Nginx to use the new TLS certificate! See the video above for a few details on that.</p>
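<p>A minimal sketch of that configuration, assuming the <code>someapp.xyz</code> domain and webroot from the command above, and certbot's default certificate paths:</p>
<pre><code class="language-nginx">server {
    listen 443 ssl;
    server_name someapp.xyz www.someapp.xyz;

    # Certbot's default output locations
    ssl_certificate     /etc/letsencrypt/live/someapp.xyz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/someapp.xyz/privkey.pem;

    root /var/www/app/public;
    index index.html;
}

server {
    listen 80;
    server_name someapp.xyz www.someapp.xyz;

    # Keep the ACME HTTP challenge reachable over port 80 for renewals
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/app/public;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
</code></pre>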
]]></content:encoded>
      <pubDate>Tue, 05 Dec 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Production-Ready Nginx</title>
      <link>https://fideloper.com/production-ready-nginx</link>
      <description>We'll see how Nginx default configuration falls short, and what to do about it.</description>
      <content:encoded><![CDATA[<iframe src="https://www.youtube.com/embed/d6kfuPo3Cnw?si=So_aJceMu2dVVlTN" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p>Fixing Nginx's default configuration.</p>
<p>Nginx's default configuration is <em>fine</em>, but could use some help!</p>
<p>The quickest way to improve Nginx's default configuration is to use <a href="https://github.com/h5bp/server-configs-nginx">H5BP's Nginx configuration</a>.</p>
<h2>The Problems with Defaults</h2>
<p>There are a few &quot;problems&quot; (things we can improve) with the default configuration.</p>
<ol>
<li>No caching for easily-cached static assets</li>
<li>No HTTP security headers/configuration in place</li>
<li>TLS encryption defaults could use improvement</li>
</ol>
<p>Let's fix that!</p>
<h2>Using H5BP</h2>
<p>I basically just blow away the default Nginx configuration and use H5BP's:</p>
<pre><code class="language-bash">sudo mv /etc/nginx /etc/nginx.old
git clone https://github.com/h5bp/server-configs-nginx.git /etc/nginx
</code></pre>
<p>Files in <code>/etc/nginx/conf.d</code> are loaded - your site configurations go here. There are templates in there for you to use!</p>
<p>The main thing to check out is <code>h5bp/basic.conf</code>, which then loads other configuration files. This is the default set of configuration loaded - but there is more there to check out and optionally use!</p>
<p>The defaults provide a great set of unobtrusive security settings, static-file caching, support for LetsEncrypt (certbot) challenges, and more.</p>
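<p>A site configuration in <code>/etc/nginx/conf.d</code> can then pull those defaults in with a single include (a sketch; the domain and paths are illustrative):</p>
<pre><code class="language-nginx"># /etc/nginx/conf.d/example.com.conf
server {
    listen 80;
    server_name example.com;

    root /var/www/example.com/public;

    # Load H5BP's default set of security headers,
    # caching rules, and related settings
    include h5bp/basic.conf;
}
</code></pre>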
<p>Check out the video for a ton more details!</p>
]]></content:encoded>
      <pubDate>Tue, 05 Dec 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Multi-Tenant Local Dev</title>
      <link>https://fideloper.com/multi-tenant-local-dev-nginx-dnsmasq</link>
      <description>We'll use `dnsmasq` and `nginx` on MacOS to setup a multi-tenant local dev environment. This lets us map subdomains of a `.test` domain to the same code base.</description>
      <content:encoded><![CDATA[<iframe src="https://www.youtube.com/embed/SPHxW1C4G6I?si=JCcLcV_4CVec0Ttk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p>Your app may need to allow users/teams/tenants/whatever the ability to have their own subdomain. If your domain is <code>myapp.com</code>, this means letting tenants use <code>foo.myapp.com</code> to access their account.</p>
<p>This affects your code base quite a bit, but what I want to show you is a server setup for this. I'll be doing videos on this for development and production, so keep an eye on the <a href="https://www.youtube.com/@fideloper">Youtube channel</a> for more.</p>
<p>In this article (and video!) we're concentrating on local development. We'll set up dnsmasq and Nginx so any subdomain of a local test domain (e.g. <code>myapp.test</code>) will point to your one codebase.</p>
<p>Here's a quick rundown of what's covered in the video - there are 3 steps to this:</p>
<ol>
<li>Installing and configuring <code>dnsmasq</code></li>
<li>Making your computer use <code>dnsmasq</code> for DNS resolution</li>
<li>Installing and configuring <code>nginx</code></li>
</ol>
<h2>dnsmasq</h2>
<p>The tl;dr on installing and configuring <code>dnsmasq</code> with <a href="https://brew.sh">homebrew</a> is this:</p>
<pre><code class="language-bash">brew install dnsmasq

echo &quot;address=/test/127.0.0.1&quot; \
    | sudo tee /opt/homebrew/etc/dnsmasq.d/test.conf

# use sudo here
sudo brew services start dnsmasq
</code></pre>
<p>We install <code>dnsmasq</code>, and then configure it so that domains ending in <code>.test</code> resolve to <code>127.0.0.1</code>. Then we use <code>brew services</code> to start <code>dnsmasq</code>. If you already have dnsmasq, you want to run <code>restart</code> instead of just <code>start</code>.</p>
<p>Make sure <code>dnsmasq</code> is started with <code>sudo</code>, as it needs elevated permissions to do what it's doing.</p>
<p>To test this, you can run:</p>
<pre><code class="language-bash">dig foo.test @127.0.0.1
</code></pre>
<p>The <code>ANSWER</code> section should tell you that <code>foo.test</code> resolves to <code>127.0.0.1</code>. Using <code>@127.0.0.1</code> tells the <code>dig</code> command to use the domain name server (<code>dnsmasq</code>!) running at <code>127.0.0.1</code>.</p>
<h2>Using dnsmasq</h2>
<p>We need to make our computer (MacOS in this case) use dnsmasq for its DNS server (in addition to the regular DNS servers it uses). To do that, we'll create a file <code>/etc/resolver/test</code>.</p>
<blockquote>
<p><strong>Warning:</strong> This change may not work instantly. You may need to wait a bit or even restart your computer.</p>
</blockquote>
<p>Here are the commands to run:</p>
<pre><code class="language-bash">sudo mkdir -p /etc/resolver

echo &quot;nameserver 127.0.0.1&quot; \
    | sudo tee /etc/resolver/test
</code></pre>
<p>That file just contains <code>nameserver 127.0.0.1</code>, which tells the operating system to find a nameserver running locally at <code>127.0.0.1</code>.</p>
<p>To test this, run the same <code>dig</code> command as above, but without the <code>@127.0.0.1</code> part. The <code>dig</code> command shouldn't need that anymore, since the OS now knows to check localhost for a domain name server first.</p>
<pre><code class="language-bash">dig foo.test
</code></pre>
<h2>Nginx</h2>
<p>Lastly, we can install and configure <code>Nginx</code>.</p>
<pre><code class="language-bash">brew install nginx

# no sudo here
brew services start nginx

# &lt;Add configuration here, see below&gt;

nginx -t
nginx -s reload
</code></pre>
<p>In this case, we install <code>nginx</code> and then start it <strong>without</strong> sudo. This makes it run as our current user, allowing Nginx to read (and write to, if needed) files owned by our current user. This is good as our code bases are likely owned by our current user.</p>
<p>The video explains these in depth. For now, I'll just write 2 Nginx config files for you to use. One lets you use <code>*.myapp.test</code> for your <code>myapp</code> code base. The second configuration is a generic catch-all: any <code>*.test</code> domain you use will map to a directory on your filesystem, allowing you to use any domain you want, knowing it will reach the app in a folder of the same name.</p>
<p>The multi-tenant setup in file <code>/opt/homebrew/etc/nginx/servers/a-srv.conf</code>:</p>
<pre><code class="language-nginx">server {
    listen 80;

    server_name ~^(?&lt;tenant&gt;.+)\.myapp\.test$;

    root /Users/&lt;you&gt;/srv/myapp/public;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param TENANT $tenant;
        include fastcgi_params;
    }
}
</code></pre>
<p>The generic catch-all setup in file <code>/opt/homebrew/etc/nginx/servers/x-srv.conf</code>:</p>
<pre><code class="language-nginx">server {
    listen 80;

    server_name ~^(?&lt;app&gt;.+)\.test$;

    root /Users/&lt;you&gt;/srv/$app/public;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
</code></pre>
<p>Notes:</p>
<ol>
<li>The filenames are alphabetically ordered so the multi-tenant setup is loaded first</li>
<li>The directory mapping of <code>domain -&gt; code base directory</code> for <code>*.test</code> domains assumes a document root of <code>public</code> within the code base, which may be Laravel specific</li>
</ol>
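<p>Note 1 relies on Nginx including <code>servers/*.conf</code> in lexical order, so the first matching regex <code>server_name</code> wins. A quick illustration of that ordering, using a hypothetical temp directory in place of <code>/opt/homebrew/etc/nginx/servers</code>:</p>

```shell
# Stand-in for /opt/homebrew/etc/nginx/servers
demo=$(mktemp -d)
touch "$demo/a-srv.conf" "$demo/x-srv.conf"

# Globs expand in lexical order, so the specific multi-tenant
# vhost (a-srv.conf) is read before the catch-all (x-srv.conf)
ls "$demo"
```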
<h2>I'm not a Laravel dev</h2>
<p>Here are Nginx configurations to use if you're not a Laravel developer, and your app listens for HTTP requests (e.g. Node, Go).</p>
<pre><code class="language-nginx"># /opt/homebrew/etc/nginx/servers/a-srv.conf
server {
    listen 80;

    server_name ~^(?&lt;tenant&gt;.+)\.myapp\.test$;

    root /Users/&lt;you&gt;/srv/myapp/public;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ @app;
    }

    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Tenant $tenant;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8000;
    }
}

# /opt/homebrew/etc/nginx/servers/x-srv.conf
server {
    listen 80;

    server_name ~^(?&lt;app&gt;.+)\.test$;

    root /Users/&lt;you&gt;/srv/$app/public;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ @app;
    }

    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-App $app;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8000;
    }
}

</code></pre>
<p>These proxy over HTTP instead of FastCGI. They use a &quot;named&quot; location block <code>@app</code> but the principles are otherwise the same.</p>
<h2>More Details</h2>
<p>There's more details and description in the <a href="https://www.youtube.com/watch?v=SPHxW1C4G6I">Youtube video</a>, definitely check it out!</p>
<h2>Use Laravel Herd!?</h2>
<p>You can get this setup (but even better!) using <a href="https://herd.laravel.com">Laravel Herd</a>. If you're a Laravel developer, consider just using that.</p>
]]></content:encoded>
      <pubDate>Tue, 31 Oct 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Understanding Nginx Try Files</title>
      <link>https://fideloper.com/nginx-try-files</link>
      <description>Nginx's `try_files` directive is seemingly simple, but actually has some surprising depth!</description>
      <content:encoded><![CDATA[<iframe src="https://www.youtube.com/embed/VPrBA2iZe1c?si=y9RQJb3e5wYf9bme" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p>The nginx <code>try_files</code> directive is actually interesting! Not, like, <em>amazingly</em> interesting - but it has more depth than appears at first glance.</p>
<p>First, Nginx <em>almost</em> doesn't need <code>try_files</code>. Without it, Nginx could serve static files just fine:</p>
<pre><code class="language-nginx">server {
    listen 80;
    server_name _;

    root /var/www/html/public;
    index index.html index.htm;
}
</code></pre>
<p>If we support PHP, we could have something like this:</p>
<pre><code class="language-nginx">server {
    listen 80;
    server_name _;

    root /var/www/html/public;
    index index.html index.htm;

    location ~ \.php$ {
        # pass off to PHP-FPM via fastcgi
    }
}
</code></pre>
<p>That actually works for static files and <strong>the home page</strong> of our PHP application. Once we introduce a path into our URI (e.g. <code>example.com/foo/bar</code>), it breaks. This is where <code>try_files</code> comes in.</p>
<h2>Adding <code>try_files</code></h2>
<p>The <code>try_files</code> directive runs through each option it is given, in order, attempting to find a file that exists on the server.</p>
<pre><code class="language-nginx">server {
    listen 80;
    server_name _;

    root /var/www/html/public;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
}
</code></pre>
<p><strong>For a given URI, this will do the following:</strong></p>
<p>The <code>$uri</code> option tells <code>try_files</code> to find the URI as given as a file on the disk drive relative to the <code>root</code>, which is <code>/var/www/html/public</code> in this case.</p>
<p>1️⃣ A URI of <code>/css/app.css</code> will search in <code>/var/www/html/public/css/app.css</code>.</p>
<p>2️⃣ A URI of <code>/foo/bar</code> will have 2 behaviors - one if the directory exists, and one if it does not.</p>
<p>First, the <code>$uri/</code> option tells <code>try_files</code> to treat the URI as a directory and see if a directory exists. If the URI relates to an existing directory, Nginx needs to figure out what file to serve from that directory.</p>
<p>That's where the <code>index</code> directive comes into play. Since Nginx only knows that a directory exists, we need <code>index</code> to tell Nginx which files to try to serve from that directory (if they exist). The first matched file &quot;wins&quot; and is served.</p>
<p>3️⃣ If the given URI matches neither an existing file nor directory, then <code>try_files</code> goes to the fallback URI - <code>/index.php?$query_string;</code>.</p>
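<p>The three steps above can be sketched as a toy shell script - an illustration of the resolution order only, not how Nginx is actually implemented:</p>

```shell
# Stand-in for: root /var/www/html/public
root=$(mktemp -d)
mkdir -p "$root/css" "$root/foo/bar"
echo 'body {}' > "$root/css/app.css"

# Mimic: try_files $uri $uri/ /index.php?$query_string
resolve() {
    uri="$1"
    if [ -f "$root$uri" ]; then
        echo "serve file $uri"            # 1: $uri matched an existing file
    elif [ -d "$root$uri" ]; then
        echo "check index in $uri/"       # 2: $uri/ matched a directory
    else
        echo "fallback /index.php"        # 3: neither matched
    fi
}

resolve "/css/app.css"   # serve file /css/app.css
resolve "/foo/bar"       # check index in /foo/bar/
resolve "/some/route"    # fallback /index.php
```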
<h2>But other location blocks?</h2>
<p>The <code>location /</code> block, and the <code>try_files</code> directive within it, actually work together with other <code>location</code> blocks! Here's a slightly more complete configuration file:</p>
<pre><code class="language-nginx">server {
    listen 80;
    
    server_name _;
    root /var/www/html/foobar.com;

    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # pass off to PHP-FPM via fastcgi
    }

    location ~* \.(?:css(\.map)?|js(\.map)?|jpe?g|png|gif|ico)$ {
        expires    7d;
        access_log off;
        log_not_found off;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
</code></pre>
<p>If the <code>try_files</code> directive resolves/finds a file that is a static asset (css, js, image), then the third location block actually handles the request. That means both <code>location / {}</code> and <code>location ~* \.(&lt;stuff&gt;)$ {}</code> blocks are relevant to such a request!</p>
<p>The same is true for when PHP files are used - the <code>location ~ \.php$ {}</code> block is used:</p>
<ol>
<li>When <code>index</code> resolves to <code>index.php</code> in a directory</li>
<li>When the fallback <code>/index.php?$query_string</code> is used</li>
<li>When a PHP file given by the URI exists as given</li>
</ol>
<p>In these cases, the &quot;matched&quot; (used?) PHP file found by <code>try_files</code> is handled by the <code>location ~ \.php$ {}</code> block, which passes the request off to PHP-FPM. This is why a 404 error (whether for a static file or a non-existent application route) is generally returned from the PHP application. All real &quot;can't find a file on the disk&quot; cases are passed off to <code>/index.php</code>, and therefore the request is sent to the PHP application proxied via FastCGI in this configuration.</p>
]]></content:encoded>
      <pubDate>Tue, 10 Oct 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>PHP is Weird, Stateless, and Beautiful</title>
      <link>https://fideloper.com/php-weird-stateless-and-beautiful</link>
      <description>PHP is stateless, much like HTTP. This makes it weird but is one of its great strengths.</description>
      <content:encoded><![CDATA[<iframe src="https://www.youtube.com/embed/ECuD_dGvxyY?si=zqemBEO-T8ZkYDiu" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p>PHP is, historically, stateless.</p>
<p>This is mostly a result of <a href="https://stackoverflow.com/a/13200206">HTTP being stateless</a> as well. Each HTTP request has no knowledge of any request before it.</p>
<p>PHP is much the same. Under &quot;traditional&quot; process models, it rebuilds its entire world on each request! There's no global state.</p>
<h2>Language Comparison</h2>
<p>In the video, we compare other popular languages and see how it's possible to create a global variable that increases in value with each web request. In other words, there's global state that you need to worry about.</p>
<p>PHP is different - even global variables are &quot;reset&quot; to their initial value on each request. (This is not to be confused with super globals such as <code>$_GET</code>, <code>$_POST</code>, <code>$_SERVER</code>, and so on).</p>
<h2>Pros and Cons</h2>
<p>This makes PHP much easier to use - the mental model of what your code is doing is much simpler when there's no state that might be accidentally saved between web requests.</p>
<p>However it also means we need to reload a lot of code on each web request. This is alleviated by PHP's opcache, but it's still not as efficient as having the framework/code loaded already when accepting a new web request.</p>
<p>Additionally, PHP can't make use of things like connection pooling - each web request instead needs to make a whole new connection to databases, caches, and other external services.</p>
<p>This, luckily, is mostly just fine - PHP is still fast!</p>
<h2>Newer PHP models</h2>
<p>There are newer ways to run PHP - using Swoole or RoadRunner, for example, we can run PHP as a long-running process. This behaves more like other programming languages where we need to worry about global state, but we get the benefit of not having to reload our code/framework on each request.</p>
<p>Laravel has made using this simple via <a href="https://laravel.com/docs/10.x/octane">Laravel Octane</a>. However it's still not the mainstream way to run PHP! Given how large PHP is, I'm not sure it ever will be. It's good, but not a silver bullet - everything is a tradeoff.</p>
]]></content:encoded>
      <pubDate>Sun, 01 Oct 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Nginx Unit with Laravel and PHP</title>
      <link>https://fideloper.com/nginx-unit-with-laravel-and-php</link>
      <description>I came across Nginx Unit recently. Turns out, it's really cool! We can get rid of PHP-FPM, and run our apps more efficiently. Let's see how, and go over the pros and cons.</description>
      <content:encoded><![CDATA[<iframe src="https://www.youtube.com/embed/-ek0-mLZDbo?si=-ExJ7DdPCzz5bCyV" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p><a href="https://unit.nginx.org/">Nginx Unit</a> is a &quot;Universal Web App Server&quot; brought to you by Nginx. This is a web server that can &quot;directly&quot; communicate with your code base, helping you pass off HTTP requests to your code in a way it can understand.</p>
<p>It supports a bunch of languages and has a module for each supported language. That lets it treat each language specially, as needed.</p>
<p>For PHP support, it has a PHP module that creates PHP processes, similar(ish) to PHP-FPM, but without needing PHP-FPM. Getting rid of PHP-FPM sounds really rad to me, so I decided to see how it works.</p>
<h2>Install PHP</h2>
<p>We'll install PHP the usual way on any Ubuntu server - using the <code>ppa:ondrej/php</code> repository. This is basically the de-facto way to use PHP on Ubuntu servers. It lets us get the latest PHP and install multiple versions of PHP on the same server (if so desired).</p>
<p>One issue with Unit: It expects the system-set default PHP version to be used. Luckily, we can recompile its PHP module ourselves (<a href="https://github.com/nginx/unit/issues/625">see here</a>) to use our <code>ppa:ondrej/php</code>-installed PHP.</p>
<p>First, we'll just install PHP as usual.</p>
<pre><code class="language-bash">sudo add-apt-repository -y ppa:ondrej/php
sudo apt-get install -y php8.2-dev php8.2-embed \
                   php8.2-bcmath php8.2-cli php8.2-common php8.2-curl \
                   php8.2-gd php8.2-intl php8.2-mbstring php8.2-mysql php8.2-pgsql \
                   php8.2-redis php8.2-soap php8.2-sqlite3 php8.2-xml php8.2-zip
curl -sLS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/bin/ --filename=composer
</code></pre>
<p>Three things to note:</p>
<ol>
<li>We need <code>php8.2-dev</code> (the <code>-dev</code> version) of PHP to get certain PHP &quot;header&quot; files, allowing us to later compile Unit's PHP module</li>
<li>We need <code>php8.2-embed</code>, as this is the <a href="https://en.wikipedia.org/wiki/Server_application_programming_interface">SAPI</a> that unit uses to spin up PHP processes.</li>
<li>We do NOT need <code>php-fpm</code>, and so we don't install it</li>
</ol>
<p>Everything else is all the usual &quot;stuff&quot; for a PHP installation, typically seen on a Forge server.</p>
<h2>Install Nginx Unit</h2>
<p>We can install Nginx Unit as per their docs - nothing special to do there!</p>
<p>I used Ubuntu 22.04, and followed <a href="https://unit.nginx.org/installation/#ubuntu-2204">their instructions</a> for that system:</p>
<pre><code class="language-bash">sudo curl --output /usr/share/keyrings/nginx-keyring.gpg  \
      https://unit.nginx.org/keys/nginx-keyring.gpg

echo &quot;deb [signed-by=/usr/share/keyrings/nginx-keyring.gpg] https://packages.nginx.org/unit/ubuntu/ jammy unit
deb-src [signed-by=/usr/share/keyrings/nginx-keyring.gpg] https://packages.nginx.org/unit/ubuntu/ jammy unit
&quot; | sudo tee /etc/apt/sources.list.d/unit.list

sudo apt-get update
sudo apt-get install -y unit
</code></pre>
<p>I only installed <code>unit</code>, opting NOT to install <code>unit-dev</code> or <code>unit-php</code> yet, as we'll need to compile the PHP module ourselves.</p>
<h2>Manually Build Unit's PHP Module</h2>
<p>Next we <em>manually</em> (so uncultured) re-build Unit's PHP module to work with <code>ppa:ondrej/php</code>. This was taken from the aforementioned <a href="https://github.com/nginx/unit/issues/625">GitHub issue</a> which helpfully pointed out how we can make this work.</p>
<p>Run the following as user <code>root</code>:</p>
<pre><code class="language-bash">cd /opt

# Latest version of Unit as of this writing
VERSION=&quot;1.31.0&quot;
curl -O https://unit.nginx.org/download/unit-$VERSION.tar.gz
tar xzf unit-$VERSION.tar.gz
cd unit-$VERSION

./configure --prefix=/usr --state=/var/lib/unit --control=unix:/var/run/control.unit.sock \
    --pid=/var/run/unit.pid --log=/var/log/unit.log --tmp=/var/tmp --user=unit --group=unit \
    --tests --openssl --modules=/usr/lib/unit/modules --libdir=/usr/lib/x86_64-linux-gnu
./configure php --module=php82 --config=php-config8.2
make php82
make install php82-install

# Restart Unit so it picks up the new module
systemctl restart unit

# Check logs to ensure PHP module is loaded
cat /var/log/unit.log
</code></pre>
<p>We compiled the PHP module ourselves, which uses the currently-installed PHP version (taken from the <code>ppa:ondrej/php</code> repository). Success!</p>
<h2>Create a Laravel App</h2>
<p>In our case, we'll just directly create a new Laravel application on our server:</p>
<pre><code class="language-bash">mkdir -p /var/www
cd /var/www
composer create-project laravel/laravel html

# Ensure files are owned by &quot;unit&quot;, the user created
# by unit, so PHP can write to log files, etc
chown -R unit:unit /var/www/html
</code></pre>
<p>Easy enough!</p>
<h2>Configure Unit</h2>
<p>We can now configure Unit to run our application.</p>
<p>Create <code>/var/www/unit.json</code> with:</p>
<pre><code class="language-json">{
    &quot;listeners&quot;: {
        &quot;*:80&quot;: {
            &quot;pass&quot;: &quot;routes&quot;
        }
    },

    &quot;routes&quot;: [
        {
            &quot;match&quot;: {
                &quot;uri&quot;: &quot;!/index.php&quot;
            },
            &quot;action&quot;: {
                &quot;share&quot;: &quot;/var/www/html/public$uri&quot;,
                &quot;fallback&quot;: {
                    &quot;pass&quot;: &quot;applications/laravel&quot;
                }
            }
        }
    ],

    &quot;applications&quot;: {
        &quot;laravel&quot;: {
            &quot;type&quot;: &quot;php&quot;,
            &quot;root&quot;: &quot;/var/www/html/public/&quot;,
            &quot;script&quot;: &quot;index.php&quot;,
            &quot;processes&quot;: {}
        }
    }
}
</code></pre>
<blockquote>
<p>See the video for a full explanation of this configuration.</p>
</blockquote>
<p>Then tell Unit to use this configuration to run the application:</p>
<pre><code class="language-bash">sudo curl -X PUT --data-binary @unit.json --unix-socket \
       /var/run/control.unit.sock http://localhost/config/
</code></pre>
<p>Head to your application (listening on port 80 currently)! Test it out with <code>curl localhost</code>.</p>
<h2>Control API</h2>
<p>You can use the <a href="https://unit.nginx.org/controlapi/">Control API</a> to retrieve and update configuration.</p>
<p>Let's GET our configuration:</p>
<pre><code class="language-bash">curl -X GET \
    --unix-socket /var/run/control.unit.sock \
    http://localhost/config/
</code></pre>
<h2>Pros of Unit</h2>
<p>We can remove <code>PHP-FPM</code>! This is great, as it makes throwing our apps into a container a LOT simpler.</p>
<p>Unit also seems to be more efficient - I could NOT get it to break using <code>ab</code> to send 100 requests at a time, with 10,000 total requests.</p>
<h2>Cons of Unit</h2>
<p>There are some trade-offs!</p>
<p>First, we can't switch PHP versions without re-compiling the Unit PHP module. This means it will be hard (impossible?) to run multiple versions of PHP at the same time while using Unit.</p>
<p>You might also need another HTTP layer in front of Unit (Nginx, Cloudflare, Cloudfront, fly.io's HTTP layer, etc). It turns out that Unit either makes some standard-ish configuration hard, or can't do it at all. Some examples:</p>
<ol>
<li>Gzip compression isn't (yet?) a thing for Unit</li>
<li>It makes protecting dot files or other routes difficult</li>
<li>It's not easy to set cache headers for static assets</li>
</ol>
]]></content:encoded>
      <pubDate>Tue, 12 Sep 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>How Nginx and PHP-FPM turn a web request into code</title>
      <link>https://fideloper.com/how-nginx-and-php-fpm-turn-a-web-request-into-code</link>
      <description>Let's see how an HTTP request gets turned into code that Laravel can run!</description>
      <content:encoded><![CDATA[<iframe src="https://www.youtube.com/embed/lh4RnczaATI?si=2JhaZpxpn70xDKVn" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p>Let's see how an HTTP request gets from your web server into your PHP/Laravel code base.</p>
<p>The moving parts are:</p>
<ol>
<li><strong>Nginx</strong> - receives web request, sends it to PHP-FPM via FastCGI</li>
<li><strong>PHP-FPM</strong> - Takes FastCGI request, spins up processes of PHP and runs your code</li>
<li><strong>Laravel</strong> - Takes PHP <a href="https://www.php.net/manual/en/language.variables.superglobals.php">super globals</a>, along with <a href="https://www.php.net/manual/en/wrappers.php.php"><code>php://input</code></a> stream, and creates an HTTP <code>Request</code> class</li>
</ol>
<h2>Nginx</h2>
<p>Nginx is configured for Laravel/PHP applications with the following config:</p>
<pre><code class="language-nginx">server {
    server_name app.chipperci.com;
    root /home/forge/app.chipperci.com/public;

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
</code></pre>
<p>This is taken from Laravel Forge, with a bunch of items removed for brevity. What we see above are the parts we care about.</p>
<p>The first <code>location / {}</code> uses <code>try_files</code> to attempt to match the request URI to a static file or directory within the configured <code>root</code>, which is <code>/home/forge/app.chipperci.com/public</code> in our example. If it finds a static file, it serves it! Otherwise, it falls back to running <code>index.php</code>.</p>
<p>If the file found ends in <code>.php</code> (or we're using the fallback <code>index.php</code> file), then we eventually end up in the <code>location ~ \.php$ {}</code> block. This passes the request off to PHP-FPM via the FastCGI protocol.</p>
<p>It first splits the request path at the <code>*.php</code> file. This lets PHP get the correct URI of the request - usually everything AFTER <code>index.php</code>.</p>
<p>We also <code>include fastcgi_params</code> which is the information that is populated in the <code>$_SERVER</code> PHP super global.</p>
<p>Finally we pass the request off to <code>PHP-FPM</code>, which in this case is listening via a unix socket file at <code>/var/run/php/php8.2-fpm.sock</code>.</p>
<h2>PHP-FPM</h2>
<p>PHP-FPM is written in C, and is hard to parse (in other words, I have no idea what's going on in there). A lot of the logic for taking a request and spinning up PHP is in the <a href="https://github.com/php/php-src/blob/master/sapi/fpm/fpm/fpm_request.c">fpm_request.c file</a>. The basics, however, are that PHP-FPM manages processes and runs our PHP application through its &quot;entrypoint&quot;, the <code>index.php</code> file.</p>
<p>PHP-FPM populates the PHP super globals, and any streams needed such as <code>php://input</code> (that will be the body of the HTTP request, if there is one).</p>
<h2>Our Code</h2>
<p>Your framework likely abstracts the HTTP request into a class that represents the HTTP request itself. In Laravel, that's  <a href="https://github.com/illuminate/http/blob/master/Request.php">this class</a>,  which extends the underlying <a href="https://github.com/symfony/http-foundation/blob/6.3/Request.php">Symfony HTTP request here</a>.</p>
<p>Laravel uses the method <a href="https://github.com/symfony/http-foundation/blob/6.3/Request.php#L300C28-L300C45"><code>createFromGlobals()</code></a>, which creates an HTTP request instance based off of the global information (super globals) and gets the body of the HTTP request from <code>php://input</code>.</p>
<p>Then Laravel can use that to match against registered routes, run controller code, and generally do all the magic it does for us!</p>
]]></content:encoded>
      <pubDate>Tue, 05 Sep 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Configure and Troubleshoot PHP-FPM</title>
      <link>https://fideloper.com/configure-and-troubleshoot-php-fpmconfigure-and-troubleshoot-php-fpm</link>
      <description>PHP-FPM's default configuration is likely not optimized for your server. See why and how to fix it!</description>
      <content:encoded><![CDATA[<iframe style="aspect-ratio: 16 / 9; width: 100%" src="https://www.youtube.com/embed/vohsuhwWvpw?si=HPMji1Gq3ToADtL1" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
<p>PHP-FPM usually comes with a particular setting that's set too low. This results in Gateway errors when you get too much traffic, often before the server even runs out of resources.</p>
<p>Let's see what that setting is, and how to rectify it.</p>
<h2>What is PHP-FPM</h2>
<p>PHP-FPM is an application gateway. It sits between Nginx (the web server) and your code base. Nginx will get an HTTP request, and is then configured to &quot;proxy&quot; the requests off to PHP-FPM using the <a href="http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html">FastCGI protocol</a>.</p>
<p>When PHP-FPM receives a request from Nginx, it spins up a process (if one isn't already running), which runs an instance of your PHP application (calling the <code>index.php</code> file, or whatever your entrypoint is, with all the needed data - <code>$_SERVER</code>, <code>$_GET</code>, <code>$_POST</code>, the body of the request, all that good stuff).</p>
<h2>The Problem with PHP-FPM</h2>
<p>The &quot;problem&quot; (if we want to call it that) is that PHP-FPM has a setting, <code>max_children</code>. This is the number of processes it's willing to spin up. Since each child process handles a single HTTP request at a time, in serial, the only way to handle concurrent requests is by spinning up more processes. Eventually we can hit the max number of processes configured (<code>max_children</code>), leading to Gateway errors (PHP-FPM refuses to handle the request - it does not queue it).</p>
<p>This can happen prematurely - before the server is actually out of resources (RAM, CPU). However it often coincides with your database being overloaded. In that case, requests may not respond in a timely manner, causing a pile-up of pending HTTP requests, eventually leading to <code>max_children</code> being hit and the dreaded Gateway error.</p>
<p>This has always bugged me - sometimes it's a self-inflicted limit (<code>max_children</code> set too low) and other times it's disguising a real issue (e.g. database overload).</p>
<h2>Debugging PHP-FPM Issues</h2>
<p>If you hit a Gateway error, the place to look to see what's happening is the PHP-FPM log. This is often in <code>/var/log</code> on your Linux server. On Debian/Ubuntu servers, it's <code>/var/log/php8.2-fpm.log</code> (adjust as needed for your PHP version).</p>
<p>You'll see errors saying &quot;You're going to hit your max limit soon&quot;, and perhaps errors like &quot;max_children reached&quot;. If you see those, you know you should probably increase your PHP-FPM's configured <code>max_children</code>.</p>
<h2>How to Configure PHP-FPM</h2>
<p>To configure the correct number of <code>max_children</code>, you need to know how many concurrent requests your server can handle.</p>
<p>I typically do a calculation like this:</p>
<pre><code>floor(RAM available / RAM used per request)
</code></pre>
<p>If I have a 4gb RAM server, I'll perhaps allocate 3gb of that to the web server. If each PHP web request takes 100mb, that's <code>3072 / 100</code>, or roughly 30 concurrent requests that I'll allow the server to handle. That means I can set <code>max_children</code> to <code>30</code>!</p>
<p>The amount of free RAM goes down a lot when you're competing for resources within the server. If your database is on the same server, then you should allocate less RAM to PHP for serving web requests.</p>
<h2>One Important Tip</h2>
<p>The best tip I can give you for optimizing this is to get your database off of your web server, and onto its own dedicated server. This gives your database the resources it likely needs, and frees up your web server to dedicate a LOT more resources to serving web requests.</p>
]]></content:encoded>
      <pubDate>Thu, 31 Aug 2023 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>X. Load Balancing</title>
      <link>https://fideloper.com/golang-proxy-load-balancing</link>
      <description>Remember when I called a reverse proxy "basically just a load balancer"? Ours doesn't balance any load. Let's fix that!</description>
      <content:encoded><![CDATA[<p>Remember when I called a reverse proxy &quot;basically just a load balancer&quot;? Ours doesn't balance any load. Let's fix that!</p>
<h2>A Quick Review</h2>
<p>We use a Mux to match the incoming request to a Target. The Mux can match incoming requests against the domain used, the port requested on, the URI, and more.</p>
<p>Currently, our <code>Target</code> object lets us define a single upstream (backend) server via the <code>AddTarget()</code> method.</p>
<p>What we want is to allow a Target to have <strong>multiple upstreams</strong>. The Target can decide how to distribute incoming requests amongst those upstream servers (aka Load Balancing).</p>
<h2>Watch Your Language</h2>
<p>We've been calling an upstream server a bunch of things interchangeably so far (upstream, backend, target). We'll need to firm this language up a bit.</p>
<p><strong>So, our firmed up language:</strong></p>
<ol>
<li><code>Mux</code>: Matches an incoming request to a <strong>Target</strong></li>
<li><code>Target</code>: For a matched request, route the request to an available <strong>Upstream</strong></li>
<li><code>Upstreams</code>: A collection of backend servers (each being an &quot;upstream&quot;) a <strong>Target</strong> might send a request to</li>
</ol>
<p>The part that's new here is that we're allowing a Target to have multiple Upstreams. The Target will be responsible for deciding which Upstream to send a request to.</p>
<h2>Refactoring Targets</h2>
<p>So we need the ability to send to multiple Upstreams for a matched Target.</p>
<p>Before we do that, let's organize the code a bit more.</p>
<p>I decided the <code>Target</code> struct should be the object responsible for choosing which Upstream to send to. Since we'll be adding logic to our <code>Target</code> struct, let's refactor a bit to put the Target &quot;stuff&quot; into its own file.</p>
<p>We'll add file <code>target.go</code>.</p>
<pre><code>.
├── go.mod
├── go.sum
├── main.go
└── reverseproxy
    ├── listener.go
    ├── reverseproxy.go
    └── target.go
</code></pre>
<p>Then we take <code>type Target struct {...}</code> out of <code>reverseproxy.go</code> and plop it into <code>target.go</code>:</p>
<pre><code class="language-go">// File target.go

package reverseproxy

import (
    &quot;github.com/gorilla/mux&quot;
    &quot;net/url&quot;
    &quot;sync&quot;
)

type Target struct {
    router   *mux.Router
    upstream *url.URL
}
</code></pre>
<p>So far, so good. We didn't really change anything yet!</p>
<h3>Multiple Upstreams</h3>
<p>We're going to add to the <code>Target</code> to accomplish two things:</p>
<ol>
<li>Allow for multiple upstreams</li>
<li>Be able to load balance amongst available upstreams</li>
</ol>
<p>First, we'll change the <code>upstream</code> property to be <code>upstreams</code> (plural) and just make it a slice of <code>*url.URL</code>'s. We'll add some other items as well to help with load balancing.</p>
<p>Then we can add a method to the <code>Target</code> struct to help us select an upstream server. We'll hard code a round-robin strategy - no need to abstract different strategies for now.</p>
<p>Here's the updated struct and its shiny, new <code>SelectUpstream()</code> method:</p>
<pre><code class="language-go">package reverseproxy

import (
    &quot;github.com/gorilla/mux&quot;
    &quot;net/url&quot;
    &quot;sync&quot;
)

type Target struct {
    router       *mux.Router

    // NOTE: New properties here:
    upstreams    []*url.URL
    lastUpstream int
    lock         sync.Mutex
}

// SelectUpstream will load balance amongst available
// targets using a round-robin algorithm
func (t *Target) SelectUpstream() *url.URL {
    count := len(t.upstreams)
    if count == 1 {
        return t.upstreams[0]
    }

    t.lock.Lock()
    defer t.lock.Unlock()

    next := t.lastUpstream + 1
    if next &gt;= count {
        next = 0
    }

    t.lastUpstream = next

    return t.upstreams[next]
}
</code></pre>
<p>We added method <code>SelectUpstream()</code>. This will just return a target of our choosing. The <code>Target</code> struct now has property <code>upstreams</code> (plural), which replaced <code>upstream</code> (singular).</p>
<p>Our <code>SelectUpstream()</code> method returns the first upstream (<code>*url.URL</code>) if we only defined one. No load balancing in that case!</p>
<p>Otherwise we do some boring logic to ensure we loop through the given upstreams without accidentally panicking with an <strong>index out of range</strong> error.</p>
<p>We track the last upstream that we sent a request to via <code>lastUpstream</code>, and we use a Mutex to safely increment said <code>lastUpstream</code> variable for when requests are coming in concurrently.</p>
<blockquote>
<p>You can also use an atomic integer for that, which would be a bit faster. However, <a href="https://stackoverflow.com/questions/47445344/is-there-a-difference-in-go-between-a-counter-using-atomic-operations-and-one-us">this SO answer</a> scared me off of them - though an atomic might still be preferred here.</p>
</blockquote>
<p>Not too bad, logic-wise!</p>
<h2>Defining the Upstreams</h2>
<p>Let's next update the easy thing - we'll change our <code>main.go</code> file to define one or more upstreams when we create a <code>Target</code>.</p>
<p>Instead of just passing a string <code>&quot;http://localhost:8000&quot;</code>, we'll pass slices of strings <code>[]string{&quot;http://localhost:8000&quot;, ...}</code>:</p>
<pre><code class="language-go">// Plenty of stuff omitted for brevity

func main() {
    r := &amp;reverseproxy.ReverseProxy{}

    // Handle URI /foo
    a := mux.NewRouter()
    a.Host(&quot;fid.dev&quot;).Path(&quot;/foo&quot;)
    // Add a single upstream
    r.AddTarget([]string{&quot;http://localhost:8000&quot;}, a)

    // Handle anything else
    // Add multiple upstreams
    r.AddTarget([]string{
        &quot;http://localhost:8001&quot;,
        &quot;http://localhost:8002&quot;,
        &quot;http://localhost:8003&quot;,
    }, nil)
}
</code></pre>
<p>Whereas before we would send <code>AddTarget</code> a <code>string</code>, now we send a slice of strings <code>[]string</code>. This way we can define one or more upstream servers.</p>
<p>Also not too bad, logic-wise!</p>
<h2>Directing Requests</h2>
<p>Now we're finally ready to update our code and actually do the load balancing.</p>
<p>This is a change in <code>reverseproxy.go</code>. Since the Director is responsible for directing where an incoming request is proxied to, it seems like the right place to add our load balancing logic.</p>
<p>We'll update the <code>Director</code> function:</p>
<pre><code class="language-go">// Director returns a function for use in http.ReverseProxy.Director.
// The function matches the incoming request to a specific target and
// sets the request object to be sent to the matched upstream server.
func (r *ReverseProxy) Director() func(req *http.Request) {
    return func(req *http.Request) {
        for _, t := range r.targets {
            match := &amp;mux.RouteMatch{}
            if t.router.Match(req, match) {
                // Call our new SelectUpstream method
                upstream := t.SelectUpstream()
                targetQuery := upstream.RawQuery

                // Send requests to that selected upstream
                req.URL.Scheme = upstream.Scheme
                req.URL.Host = upstream.Host
                req.URL.Path, req.URL.RawPath = joinURLPath(upstream, req.URL)
                if targetQuery == &quot;&quot; || req.URL.RawQuery == &quot;&quot; {
                    req.URL.RawQuery = targetQuery + req.URL.RawQuery
                } else {
                    req.URL.RawQuery = targetQuery + &quot;&amp;&quot; + req.URL.RawQuery
                }
                if _, ok := req.Header[&quot;User-Agent&quot;]; !ok {
                    // explicitly disable User-Agent so it's not set to default value
                    req.Header.Set(&quot;User-Agent&quot;, &quot;&quot;)
                }
                break
            }
        }
    }
}
</code></pre>
<p>Instead of referencing <code>t.upstream</code>, we had our target select an upstream and directed the request to that!</p>
<p>The relevant part:</p>
<pre><code class="language-go">// Call our new method
upstream := t.SelectUpstream()

// Direct our request to the given upstream
req.URL.Scheme = upstream.Scheme
req.URL.Host = upstream.Host
req.URL.Path, req.URL.RawPath = joinURLPath(upstream, req.URL)
</code></pre>
<p>Still not too bad! We're cruising along here.</p>
<h2>We Did It!</h2>
<p>If we build and run the reverse proxy, we'll see that requests are bounced between the 3 backend servers <code>localhost:8001-8003</code> for incoming requests.</p>
<p>The separate backend for requests matching <code>fid.dev/foo</code> continues to work and send to the one backend server <code>localhost:8000</code>.</p>
<p>If we make requests to something else (without any upstreams being present), the error messages will show the load balancing working:</p>
<pre><code>2022/11/22 18:57:55 http: proxy error: dial tcp [::1]:8002: connect: connection refused
2022/11/22 18:57:57 http: proxy error: dial tcp [::1]:8003: connect: connection refused
2022/11/22 18:57:58 http: proxy error: dial tcp [::1]:8001: connect: connection refused
2022/11/22 18:57:58 http: proxy error: dial tcp [::1]:8002: connect: connection refused
2022/11/22 18:57:58 http: proxy error: dial tcp [::1]:8003: connect: connection refused
... and so on ...
</code></pre>
<p>What else are we missing here? <strong>Health checks!</strong></p>
<p>Let's see how to test our upstreams' health next. We'll start with &quot;passive&quot; health checks.</p>
]]></content:encoded>
      <pubDate>Tue, 22 Nov 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>IX. Graceful Shutdown</title>
      <link>https://fideloper.com/golang-proxy-graceful-shutdown</link>
      <description>Our proxy isn't very graceful. If we turn it off, it cuts off all current connections. But Go handles this for us! We just need to orchestrate it. Let's see how.</description>
<content:encoded><![CDATA[<p>If we shut down our program (currently via <code>sigint</code>, aka <code>ctrl+c</code>), it will cut off any current connections. Go's <code>http.Server</code> actually handles graceful shutdowns! We just need to orchestrate it properly with our whacky setup.</p>
<p>We'll do just that, giving connections up to 10 seconds to finish their request before shutting down forcefully.</p>
<p>Let's start a little backwards this time, here's an updated <code>main.go</code> file.</p>
<p>Read the <a href="#">previous article</a> for more context. What we do here is call a new <code>Stop()</code> method on our <code>ReverseProxy</code> after we receive an interrupt signal:</p>
<pre><code class="language-go">package main

import (
    &quot;log&quot;
    &quot;os&quot;
    &quot;os/signal&quot;

    &quot;github.com/fideloper/someproxy/reverseproxy&quot;
    &quot;github.com/gorilla/mux&quot;
)

func main() {
    proxy := &amp;reverseproxy.ReverseProxy{}

    // Let's assume we have 2 backends to proxy to
    // localhost:8000 (for our fun API requests)
    // and localhost:8001 (for everything else)

    // Match requests to &quot;localhost/api&quot; and &quot;localhost/api/*&quot;
    r := mux.NewRouter()
    r.Host(&quot;localhost&quot;).PathPrefix(&quot;/api&quot;)
    proxy.AddTarget(&quot;http://localhost:8000&quot;, r)

    // Catch-all for all other requests
    proxy.AddTarget(&quot;http://localhost:8001&quot;, nil)

    // Listen for http://
    proxy.AddListener(&quot;:80&quot;)

    // Listen for https://
    proxy.AddListenerTLS(&quot;:443&quot;, &quot;keys/fid.dev.pem&quot;, &quot;keys/fid.dev-key.pem&quot;)

    if err := proxy.Start(); err != nil {
        log.Fatal(err)
    }

    // Shutdown when we receive Ctrl+c (interrupt)
    c := make(chan os.Signal, 1)
    
    // We'll accept graceful shutdowns when quit via SIGINT (Ctrl+C).
    // SIGKILL, SIGQUIT (Ctrl+\) and SIGTERM will not be caught.
    signal.Notify(c, os.Interrupt)

    // Block until we receive our signal.
    &lt;-c

    proxy.Stop() // This is the only new thing here
}
</code></pre>
<p>The addition of <code>proxy.Stop()</code> is all we changed.</p>
<h2>Adding the Stop Method</h2>
<p>Next we'll edit <code>reverseproxy.go</code> and add that <code>Stop()</code> method. Let's first talk about what it will do!</p>
<p>It turns out that <code>http.Server</code> has a <code>Shutdown()</code> method. This gracefully closes the listeners, then waits for any active connections to finish - by default it will wait <em>indefinitely</em> for them to disconnect.</p>
<p>To make sure &quot;indefinitely&quot; isn't forever, we can pass it a <code>context.Context</code> with a timeout. We'll set that timeout to 10 seconds (I randomly chose 10 seconds).</p>
<p>After 10 seconds, any connections that refuse to finish are forcefully cut and the server will shut down.</p>
<p>It looks a bit like this:</p>
<pre><code class="language-go">// For a given HTTP server listening for connections
srv := &amp;http.Server{}

srv.Serve(someListener)

// ...

// We can later shut it down gracefully, with a 10 second deadline
ctx, cancel := context.WithTimeout(context.Background(), time.Second * 10)
defer cancel()

srv.Shutdown(ctx)
</code></pre>
<p><strong>The <code>Shutdown()</code> method is blocking</strong>, so it will take up to 10 seconds to complete.</p>
<h3>But ... multiple servers!</h3>
<p>We need to handle shutting down multiple servers, so our <code>Stop()</code> function is just a tad more complex than the above logic. Here's the updated <code>Stop()</code> method in <code>reverseproxy.go</code>:</p>
<pre><code class="language-go">// Stop will gracefully shut down all listening servers
func (r *ReverseProxy) Stop() {
    // A context that times out in 10 seconds
    ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
    defer cancel()

    // A waitgroup allows us to block until all
    // goroutines are finished - when all servers
    // have finished shutting down.
    var wg sync.WaitGroup

    for _, srv := range r.servers {
        // This prevents a nasty and extremely common bug
        // Google for &quot;golang range loop variable re-use&quot;
        srv := srv

        // Tell the WaitGroup to wait for
        // +1 things to finish
        wg.Add(1)
        go func() {
            // Tell the WaitGroup we finished
            // one of the things
            defer wg.Done()

            // Wait up to 10 seconds for the
            // server to shutdown
            if err := srv.Shutdown(ctx); err != nil {
                log.Println(err)
                return
            }
            log.Println(&quot;A listener was shutdown successfully&quot;)
        }()
    }

    // Block until all servers are shut down
    wg.Wait()
    log.Println(&quot;Server shut down&quot;)
}
</code></pre>
<p>Let's cover what's going on there.</p>
<p>First we create a context with a 10 second timeout. We defer the <code>cancel()</code> function so it runs at the end of <code>Stop()</code>.</p>
<p>We have multiple servers to shut down, and we want them to shut down in parallel (not in serial!). Therefore, we'll run each <code>Shutdown</code> call in a goroutine.</p>
<p>This creates a race condition - the function will just finish immediately if we run those in a goroutine, and it's possible our entire app shuts down before the shutdown calls are complete.</p>
<p>To ensure we wait for all servers to finish shutting down, we'll use a <code>sync.WaitGroup</code>, which helps us orchestrate that. The call to <code>wg.Wait()</code> blocks further execution until <code>wg.Done()</code> is called for each <code>wg.Add(1)</code>. If we set up 3 servers, we need to make sure all 3 are done shutting down.</p>
<p>This will now gracefully shut down each server! If a connection is doing something that takes longer than 10 seconds, the context will time out and the server will shut down forcefully. In that case, you'll see log output similar to this:</p>
<pre><code>2022/10/08 13:42:04 A listener was shutdown successfully
2022/10/08 13:42:14 context deadline exceeded
2022/10/08 13:42:14 A listener was shutdown successfully
2022/10/08 13:42:14 Server shut down
</code></pre>
<p>One server shut down immediately (it didn't have any active connections), but the other had a long-running connection that exceeded the 10 second limit. It was shut down forcefully.</p>
<p>And voilà, our reverse proxy now shuts down gracefully!</p>
]]></content:encoded>
      <pubDate>Fri, 18 Nov 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>VIII. Multiple Listeners</title>
      <link>https://fideloper.com/golang-proxy-multiple-listeners</link>
      <description>So far we've just listened on port 80 for connections. Let's allow our proxy to listen on any port, and support TLS connections.</description>
      <content:encoded><![CDATA[<p>So far the proxy only listens for incoming requests on port 80. That's dumb.</p>
<p>It would be way more useful if we could listen for both <code>http://</code> and <code>https://</code> (TLS) connections, or even listen on custom ports.</p>
<p>To do that, we need to create multiple <em>listeners</em>, where each listener is a network socket to use to listen for requests (such as <code>0.0.0.0:443</code>).</p>
<p>We'll run <code>srv.Serve(listener)</code> once for each listener defined.</p>
<h2>Expanding on Listeners</h2>
<p>Previously we started an HTTP server by just passing it a <code>string</code> address. This address is converted to a <code>net.Listener</code> and used to listen for connections:</p>
<pre><code class="language-go">srv := &amp;http.Server{Addr: &quot;:80&quot;, Handler: r.proxy}
</code></pre>
<p>We want to be able to listen on multiple addresses of our choosing.</p>
<p>To accomplish this, we can build a Listener concept into the code.</p>
<p>Let's start by defining a new type: <code>type Listener struct</code>. This will contain things we need to listen and serve <code>http://</code> or <code>https://</code> connections.</p>
<p>We can create a new file <code>listener.go</code> for this - Here's the updated project layout:</p>
<pre><code>.
├── go.mod
├── go.sum
├── main.go
└── reverseproxy
    └── listener.go
    └── reverseproxy.go
</code></pre>
<p>File <code>listener.go</code> contains a new <code>Listener</code> struct and some methods to make it convenient to use:</p>
<pre><code class="language-go">package reverseproxy

import &quot;net&quot;

type Listener struct {
    Addr    string
    TLSCert string
    TLSKey  string
}

// Make creates a net.Listen object to be used
// when starting an http server
func (l *Listener) Make() (net.Listener, error) {
    return net.Listen(&quot;tcp&quot;, l.Addr)
}

// ServesTLS tells us if we should be serving TLS
// connections instead of unsecured connections
func (l *Listener) ServesTLS() bool {
    return len(l.TLSCert) &gt; 0 &amp;&amp; len(l.TLSKey) &gt; 0
}
</code></pre>
<p>The <code>Listener</code> struct contains an Address compatible with <code>net.Listen()</code>, and optional fields for a TLS certificate/key.</p>
<p>Method <code>Make()</code> generates a <code>net.Listener</code> for us (or an error, if our address <code>Addr</code> is invalid).</p>
<p>Method <code>ServesTLS()</code> returns a boolean - essentially saying this listener is meant to serve TLS connections if the certificate and key fields are set.</p>
<p>Now that we've abstracted the concept of a Listener, we can use it!</p>
<h2>Implementing Multiple Listeners</h2>
<p>We need to update our <code>ReverseProxy</code> object to make use of multiple listeners.</p>
<p>Let's see what that looks like. Here are the changes within <code>reverseproxy.go</code>:</p>
<pre><code class="language-go">type ReverseProxy struct {
    listeners []Listener
    proxy     *httputil.ReverseProxy
    servers   []*http.Server
    targets   []*Target
}

// AddListener adds a listener for non-TLS connections on the given address
func (r *ReverseProxy) AddListener(address string) {
    l := Listener{
        Addr: address,
    }

    r.listeners = append(r.listeners, l)
}

// AddListenerTLS adds a listener for TLS connections on the given address
func (r *ReverseProxy) AddListenerTLS(address, tlsCert, tlsKey string) {
    l := Listener{
        Addr:    address,
        TLSCert: tlsCert,
        TLSKey:  tlsKey,
    }

    r.listeners = append(r.listeners, l)
}

// Start will listen on configured listeners
func (r *ReverseProxy) Start() error {
    r.proxy = &amp;httputil.ReverseProxy{
        Director: r.Director(),
    }

    for _, l := range r.listeners {
        listener, err := l.Make()
        if err != nil {
            // todo: Close any listeners that
            //       were created successfully
            //       before one returned error
            return err
        }

        srv := &amp;http.Server{Handler: r.proxy}

        r.servers = append(r.servers, srv)

        // TODO: Handle unexpected errors from our servers
        if l.ServesTLS() {
            go func() {
                if err := srv.ServeTLS(listener, l.TLSCert, l.TLSKey); err != nil &amp;&amp; !errors.Is(err, http.ErrServerClosed) {
                    log.Println(err)
                }
            }()
        } else {
            go func() {
                if err := srv.Serve(listener); err != nil &amp;&amp; !errors.Is(err, http.ErrServerClosed) {
                    log.Println(err)
                }
            }()
        }
    }

    return nil
}
</code></pre>
<p>Let's review what's going on!</p>
<p>Our <code>ReverseProxy</code> struct gets two new fields:</p>
<ol>
<li>One is a slice of <code>Listener</code> objects.</li>
<li>The other is a slice of <code>*http.Server</code> objects.</li>
</ol>
<p>Each listener needs its own HTTP server to accept and handle connections, so we create one server per network socket we're listening on.</p>
<blockquote>
<p>A network socket is a combination of a network interface (address) and a port that a program binds to in order to listen for connections.</p>
</blockquote>
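<p>As a reminder from the previous section, the <code>Listener</code> type we're appending here can be quite small. Here's a minimal sketch (the field names match the code above; I'm assuming <code>Make()</code> simply calls <code>net.Listen</code>, since TLS is layered on afterwards via <code>ServeTLS</code>):</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net&quot;
)

// Listener describes a socket we want to accept connections on.
// TLSCert/TLSKey are file paths; empty strings mean plain HTTP.
type Listener struct {
    Addr    string
    TLSCert string
    TLSKey  string
}

// ServesTLS reports whether this listener was configured with a certificate.
func (l *Listener) ServesTLS() bool {
    return l.TLSCert != &quot;&quot; &amp;&amp; l.TLSKey != &quot;&quot;
}

// Make binds the address and returns the resulting net.Listener.
// TLS isn't handled here; srv.ServeTLS takes care of that later.
func (l *Listener) Make() (net.Listener, error) {
    return net.Listen(&quot;tcp&quot;, l.Addr)
}

func main() {
    plain := Listener{Addr: &quot;127.0.0.1:0&quot;} // :0 asks the OS for a free port
    fmt.Println(plain.ServesTLS())          // false

    tls := Listener{Addr: &quot;127.0.0.1:0&quot;, TLSCert: &quot;cert.pem&quot;, TLSKey: &quot;key.pem&quot;}
    fmt.Println(tls.ServesTLS()) // true

    ln, err := plain.Make()
    if err != nil {
        panic(err)
    }
    defer ln.Close()
    fmt.Println(ln.Addr().Network()) // tcp
}
</code></pre>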
<p>We add two methods (which is a bit more consistent with the stdlib's HTTP methods) - one for regular connections, and one for TLS connections. These just add to our slice of listeners.</p>
<p>The <code>Start()</code> method used to run this:</p>
<pre><code class="language-go">srv := &amp;http.Server{Addr: &quot;:80&quot;, Handler: r.proxy}
srv.ListenAndServe()
</code></pre>
<p>But now we have multiple listener objects! We'll need a server for each listener, and we'll use the <code>Serve</code> / <code>ServeTLS</code> methods instead of <code>ListenAndServe</code>.</p>
<p>Additionally, <code>ListenAndServe()</code> / <code>Serve()</code> / <code>ServeTLS()</code> are all blocking. That means these all need to run in a goroutine, so they can run concurrently.</p>
<p>For each listener, we create an <code>http.Server</code>, passing it our <code>http.ReverseProxy</code> handler. Each server listens on the address defined by its <code>Listener</code> object. TLS listeners get the TLS treatment.</p>
<p>We also handle (which is to say, &quot;log&quot;) errors here, whereas before we just ignored them.</p>
<p>Here's a fun fact: the <code>Serve</code> methods <em>always</em> return an error. If the error is <code>ErrServerClosed</code>, it just means the server was shut down properly and will no longer accept new connections. All other errors are &quot;true&quot; errors.</p>
<p>Our <code>Start()</code> method now only returns an error if creating one of the listeners fails. If a <code>Serve</code> method returns an unexpected error later on, we just log it.</p>
<p>I didn't go through the hoops of using an <a href="https://www.tutorialspoint.com/how-to-handle-errors-within-waitgroups-in-golang">error channel</a> to handle those errors (yet?).</p>
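<p>For the curious, here's a sketch of the error channel approach (not the proxy's actual code): each server goroutine pushes unexpected errors onto a shared channel, and the caller receives from it. The <code>runServer</code> helper and its names are mine:</p>
<pre><code class="language-go">package main

import (
    &quot;errors&quot;
    &quot;fmt&quot;
    &quot;net&quot;
    &quot;net/http&quot;
)

// runServer starts srv in a goroutine and reports unexpected
// errors on the shared channel instead of just logging them.
func runServer(srv *http.Server, errs chan&lt;- error) {
    go func() {
        if err := srv.ListenAndServe(); err != nil &amp;&amp; !errors.Is(err, http.ErrServerClosed) {
            errs &lt;- err
        }
    }()
}

func main() {
    errs := make(chan error, 1)

    // Occupy a port first so the server below fails to bind,
    // deterministically producing an error for the demo.
    ln, err := net.Listen(&quot;tcp&quot;, &quot;127.0.0.1:0&quot;)
    if err != nil {
        panic(err)
    }
    defer ln.Close()

    srv := &amp;http.Server{Addr: ln.Addr().String()}
    runServer(srv, errs)

    // Block until a server reports a real error. In the proxy we
    // could select on this channel alongside a shutdown signal.
    fmt.Println(&quot;server error:&quot;, &lt;-errs)
}
</code></pre>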
<h2>Running the Server</h2>
<p>We can now update our <code>main.go</code> file to try to run this.</p>
<pre><code class="language-go">package main

import (
    &quot;log&quot;

    &quot;github.com/fideloper/someproxy/reverseproxy&quot;
    &quot;github.com/gorilla/mux&quot;
)

func main() {
    proxy := &amp;reverseproxy.ReverseProxy{}

    // Let's assume we have 2 backends to proxy to
    // localhost:8000 (for our fun API requests)
    // and localhost:8001 (for everything else)

    // Match requests to &quot;fid.dev/api&quot; and &quot;fid.dev/api/*&quot;
    r := mux.NewRouter()
    r.Host(&quot;fid.dev&quot;).PathPrefix(&quot;/api&quot;)
    proxy.AddTarget(&quot;http://localhost:8000&quot;, r)

    // Catch-all for all other requests
    proxy.AddTarget(&quot;http://localhost:8001&quot;, nil)

    // Listen for http://
    proxy.AddListener(&quot;:80&quot;)

    // Listen for https://
    proxy.AddListenerTLS(&quot;:443&quot;, &quot;keys/fid.dev.pem&quot;, &quot;keys/fid.dev-key.pem&quot;)

    if err := proxy.Start(); err != nil {
        log.Fatal(err)
    }
}
</code></pre>
<p><strong>Wait! TLS keys!</strong> I used <a href="https://github.com/FiloSottile/mkcert">mkcert</a> to generate some TLS keys (and a local CA to authenticate them) for me. The domain I used is <code>fid.dev</code>. I edited my <code>/etc/hosts</code> file so <code>fid.dev</code> pointed to <code>127.0.0.1</code>.</p>
<p>I should probably have just used something like <code>fid.local</code> or the more grim <code>fid.localhost</code>. Oh well!</p>
<p>In any case, <code>mkcert</code> Just Worked™ for me as described in its readme (on macOS Monterey). I placed the keys it generated in a new directory named <code>keys</code>.</p>
<h3>We gotta fix a thing</h3>
<p>OK, if you try to run this... it will just exit immediately.</p>
<p>Remember how we ran our servers in goroutines? Yeah, there's nothing <em>blocking</em> anymore, so the program just ends (and goroutines get shut down).</p>
<p>We need a way to keep our program running. To do that, we can listen (and wait) for an interrupt signal (<code>SIGINT</code>), which is roughly Windows/Mac/Linux compatible.</p>
<p>This gives us the ability to block (keep the servers running) until we hit <code>ctrl+c</code>. Based on some random Stack Overflow answer, <code>os.Interrupt</code> is the only signal that will work on Windows.</p>
<p>With some additions, <code>main.go</code> now looks like this:</p>
<pre><code class="language-go">package main

import (
    &quot;log&quot;
    &quot;os&quot;
    &quot;os/signal&quot;

    &quot;github.com/fideloper/someproxy/reverseproxy&quot;
    &quot;github.com/gorilla/mux&quot;
)

func main() {
    proxy := &amp;reverseproxy.ReverseProxy{}

    // Let's assume we have 2 backends to proxy to
    // localhost:8000 (for our fun API requests)
    // and localhost:8001 (for everything else)

    // Match requests to &quot;localhost/api&quot; and &quot;localhost/api/*&quot;
    r := mux.NewRouter()
    r.Host(&quot;localhost&quot;).PathPrefix(&quot;/api&quot;)
    proxy.AddTarget(&quot;http://localhost:8000&quot;, r)

    // Catch-all for all other requests
    proxy.AddTarget(&quot;http://localhost:8001&quot;, nil)

    // Listen for http://
    proxy.AddListener(&quot;:80&quot;)

    // Listen for https://
    proxy.AddListenerTLS(&quot;:443&quot;, &quot;keys/fid.dev.pem&quot;, &quot;keys/fid.dev-key.pem&quot;)

    if err := proxy.Start(); err != nil {
        log.Fatal(err)
    }

    // Shutdown when we receive Ctrl+c (interrupt)
    c := make(chan os.Signal, 1)
    
    // We'll accept graceful shutdowns when quit via SIGINT (Ctrl+C)
    // SIGKILL, SIGQUIT (Ctrl+\) or SIGTERM will not be caught.
    signal.Notify(c, os.Interrupt)

    // Block until we receive our signal.
    &lt;-c
}
</code></pre>
<p>Now our program will run indefinitely until we send it <code>SIGINT</code> (interrupt).</p>
<p>When we hit <code>ctrl+c</code>, the server will shut down. It will also cut off any connections! We'll next see how to do graceful shutdowns.</p>
]]></content:encoded>
      <pubDate>Fri, 11 Nov 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>VII. Multi Host Reverse Proxy</title>
      <link>https://fideloper.com/golang-reverse-proxy-multiple-hosts</link>
      <description>Golang has a Single Host Reverse Proxy. We'll expand on it by making it multi-host.</description>
      <content:encoded><![CDATA[<p>Golang comes with a <code>SingleHostReverseProxy</code>. Let's do the obvious thing and give it the ability to handle multiple hosts.</p>
<p>It would be neat if we could configure the proxy to match some request parameters in order to decide which target (upstream/backend server - I need to decide on a vocabulary) to send the request to.</p>
<p>I noticed that Traefik leveraged <a href="https://github.com/gorilla/mux">gorilla/mux</a> for this. Let's do some theft and &quot;pull a Traefik&quot; (only worse, because I'm not a team of experts).</p>
<h2>Matching Routes</h2>
<p>Digging into the Gorilla Mux package, we find the <a href="https://github.com/gorilla/mux/blob/3cf0d013e53d62a96c096366d300c84489c26dd5/mux.go#L127"><code>Match()</code> method</a>.</p>
<p>That <code>Match()</code> method matches a given request to a registered route. Perfect!</p>
<blockquote>
<p>Gorilla/Mux wants us to pass a handler to use when a route is matched, but we'll ignore the <code>Handler</code> stuff. Traefik doesn't ignore that feature, but our reverse proxy is different, so we do. We just want the matching for now!</p>
</blockquote>
<p>We can do something like this:</p>
<pre><code class="language-go">import &quot;github.com/gorilla/mux&quot;

// Register a route to match a request where
// hostname is &quot;localhost&quot; and path is &quot;/foo&quot;
a := mux.NewRouter()
a.Host(&quot;localhost&quot;).Path(&quot;/foo&quot;)

// Match an *http.Request to
// my Mux's registered route
match := &amp;mux.RouteMatch{}
if a.Match(req, match) {
    // Do something if we have a match
}
</code></pre>
<p>Our goal is to add some possible &quot;targets&quot; (see, I decided on a word!) to our <code>ReverseProxy</code> and have the <code>Director</code> match an incoming request to the desired upstream target.</p>
<h2>Implementing Gorilla Mux</h2>
<p>First, we grab <code>gorilla/mux</code> for our project via <code>go get -u github.com/gorilla/mux</code>.</p>
<p>Then, back in <code>reverseproxy.go</code>, we edit some stuff:</p>
<pre><code class="language-go">type ReverseProxy struct {
    proxy  *httputil.ReverseProxy
    targets []*Target
}

type Target struct {
    router   *mux.Router
    upstream *url.URL
}
</code></pre>
<p>The <code>ReverseProxy</code>'s <code>Target</code> has become <code>targets</code> (lower case, no longer exported). We won't add targets ourselves directly, but instead use a new <code>AddTarget()</code> method:</p>
<pre><code class="language-go">// AddTarget adds an upstream server to use for a request that matches
// a given gorilla/mux Router. These are matched via Director function.
func (r *ReverseProxy) AddTarget(upstream string, router *mux.Router) error {
    url, err := url.Parse(upstream)

    if err != nil {
        return err
    }

    if router == nil {
        router = mux.NewRouter()
        router.PathPrefix(&quot;/&quot;)
    }

    r.targets = append(r.targets, &amp;Target{
        router:   router,
        upstream: url,
    })

    return nil
}
</code></pre>
<p>The method <code>AddTarget()</code> is added to the <code>ReverseProxy</code> struct.</p>
<p>One notable bit of logic is that if we pass <code>nil</code> for the <code>router</code> parameter, we create a catch-all router via <code>router.PathPrefix(&quot;/&quot;)</code>.</p>
<p>After we register some targets, we need our <code>Director</code> function to spin through the registered targets and find a match. If a target matches, the request is sent to that upstream.</p>
<p>The <code>gorilla/mux</code> lib has the matching function; we just (ab)use it.</p>
<pre><code class="language-go">// Director returns a function for use in http.ReverseProxy.Director.
// The function matches the incoming request to a specific target and
// sets the request object to be sent to the matched upstream server.
func (r *ReverseProxy) Director() func(req *http.Request) {
    return func(req *http.Request) {
        // Check each target for a match
        for _, t := range r.targets {
            // We don't actually use the match variable
            // but we need to make it to satisfy the
            // Gorilla/Mux Match() method
            match := &amp;mux.RouteMatch{}
            // The magic is here ✨
            if t.router.Match(req, match) {

                // This is all stdlib Director method stuff.
                // We adjusted it to use our matched target.
                targetQuery := t.upstream.RawQuery

                req.URL.Scheme = t.upstream.Scheme
                req.URL.Host = t.upstream.Host
                req.URL.Path, req.URL.RawPath = joinURLPath(t.upstream, req.URL)
                if targetQuery == &quot;&quot; || req.URL.RawQuery == &quot;&quot; {
                    req.URL.RawQuery = targetQuery + req.URL.RawQuery
                } else {
                    req.URL.RawQuery = targetQuery + &quot;&amp;&quot; + req.URL.RawQuery
                }
                if _, ok := req.Header[&quot;User-Agent&quot;]; !ok {
                    // explicitly disable User-Agent so
                    // it's not set to default value
                    req.Header.Set(&quot;User-Agent&quot;, &quot;&quot;)
                }
                break // First match wins
            }
        }
    }
}
</code></pre>
<p>As the comments suggest, the <code>Director</code> function now spins through the registered Targets, and uses Gorilla/Mux to match them. It directs the proxy to the first matched upstream Target.</p>
<p>We don't handle the case of a catch-all fallback here (if no matches are found). We'll assume we're responsible developers for now and define the fallback in our <code>main.go</code> file.</p>
<h2>Tying it Together</h2>
<p>Now that that's all set up, let's go back to that <code>main.go</code> file and use our new feature!</p>
<pre><code class="language-go">package main

import (
    &quot;github.com/fideloper/someproxy/reverseproxy&quot;
    &quot;github.com/gorilla/mux&quot;
)

func main() {
    proxy := &amp;reverseproxy.ReverseProxy{}

    // Let's assume we have 2 backends to proxy to
    // localhost:8000 (for our fun API requests)
    // and localhost:8001 (for everything else)

    // Match requests to &quot;localhost/api&quot;
    // and &quot;localhost/api/*&quot;
    r := mux.NewRouter()
    r.Host(&quot;localhost&quot;).PathPrefix(&quot;/api&quot;)
    proxy.AddTarget(&quot;http://localhost:8000&quot;, r)

    // Catch-all for all other requests
    proxy.AddTarget(&quot;http://localhost:8001&quot;, nil)

    proxy.Start()
}
</code></pre>
<p>Here we register 2 targets into our reverse proxy. One captures anything starting with (or equal to) <code>/api</code>, as long as the request was made to hostname <code>localhost</code>. These requests will go to upstream server <code>localhost:8000</code>.</p>
<p>The 2nd target is a catch-all route (see, we're responsible!) that sends anything else to <code>localhost:8001</code>.</p>
<blockquote>
<p>Note that <code>proxy.AddTarget()</code> can return an error, but I'm ignoring those for brevity.</p>
</blockquote>
<p>If we start this up, we'll see it works! You don't even need to have backend servers to test this - we'll see some logging for failed requests.</p>
<pre><code class="language-bash"># Assuming our proxy server is running...

curl localhost/api/foo

curl localhost/whatever
</code></pre>
<p>I didn't have any upstream servers running, so these curl requests receive a <code>502 Bad Gateway</code> response. But, we'll see the following logged from our reverse proxy:</p>
<pre><code>2022/10/07 21:32:09 http: proxy error: dial tcp [::1]:8000: connect: connection refused
2022/10/07 21:32:13 http: proxy error: dial tcp [::1]:8001: connect: connection refused
</code></pre>
<p>They're attempting to send to the correct locations! Port <code>8000</code> for our request to <code>localhost/api/foo</code> and port <code>8001</code> for any other request (<code>localhost/whatever</code> in our case).</p>
<p>Now we can target multiple backend hosts, using parameters of our choosing!</p>
]]></content:encoded>
      <pubDate>Sat, 05 Nov 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>VI. My Own Reverse Proxy</title>
      <link>https://fideloper.com/golang-create-own-reverse-proxy</link>
      <description>We'll start to create our own Reverse Proxy, onto which we'll eventually be adding a bunch of features.</description>
      <content:encoded><![CDATA[<p>Let's get started making our own proxy. We'll just steal the contents of <code>NewSingleHostReverseProxy()</code>, instantiate our own <code>ReverseProxy</code>, and get that working.</p>
<pre><code class="language-go">package main

import (
    &quot;log&quot;
    &quot;net/http&quot;
    &quot;net/http/httputil&quot;
    &quot;net/url&quot;
    &quot;strings&quot;
)

func main() {
    // Define the backend server we proxy to
    target, err := url.Parse(&quot;http://localhost:8000&quot;)

    if err != nil {
        log.Fatal(err)
    }

    // Stolen from `httputil.NewSingleHostReverseProxy()`
    targetQuery := target.RawQuery
    director := func(req *http.Request) {
        req.URL.Scheme = target.Scheme
        req.URL.Host = target.Host
        req.URL.Path, req.URL.RawPath = joinURLPath(target, req.URL)
        if targetQuery == &quot;&quot; || req.URL.RawQuery == &quot;&quot; {
            req.URL.RawQuery = targetQuery + req.URL.RawQuery
        } else {
            req.URL.RawQuery = targetQuery + &quot;&amp;&quot; + req.URL.RawQuery
        }
        if _, ok := req.Header[&quot;User-Agent&quot;]; !ok {
            // explicitly disable User-Agent so it's not set to default value
            req.Header.Set(&quot;User-Agent&quot;, &quot;&quot;)
        }
    }

    // Create the proxy object
    proxy := &amp;httputil.ReverseProxy{Director: director}

    // Listen for http:// connections on port 80
    srv := http.Server{Addr: &quot;:80&quot;, Handler: proxy}

    // Start the server
    srv.ListenAndServe()
}

// helper functions

func singleJoiningSlash(a, b string) string {
    // snip
}

func joinURLPath(a, b *url.URL) (path, rawpath string) {
    // snip
}
</code></pre>
<p>We literally just ripped out the <code>NewSingleHostReverseProxy()</code> method directly. Unfortunately, it calls some unexported helper functions, so we ripped those out too. They're mine now.</p>
<p>All they do is add slashes to URLs in a way that ensures a slash exists if needed, and that double slashes do not exist. You can find them in stdlib <a href="https://go.dev/src/net/http/httputil/reverseproxy.go"><code>http.httputil.reverseproxy.go</code></a>.</p>
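<p>The shorter of the two, <code>singleJoiningSlash</code>, is small enough to show here (this is roughly the stdlib version; <code>joinURLPath</code> builds on it to also handle <code>RawPath</code>):</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;strings&quot;
)

// singleJoiningSlash joins two path segments with exactly one
// slash between them, mirroring the stdlib httputil helper.
func singleJoiningSlash(a, b string) string {
    aslash := strings.HasSuffix(a, &quot;/&quot;)
    bslash := strings.HasPrefix(b, &quot;/&quot;)
    switch {
    case aslash &amp;&amp; bslash:
        return a + b[1:] // both have a slash: drop one
    case !aslash &amp;&amp; !bslash:
        return a + &quot;/&quot; + b // neither has one: add one
    }
    return a + b // exactly one of them has it
}

func main() {
    fmt.Println(singleJoiningSlash(&quot;/base/&quot;, &quot;/path&quot;)) // /base/path
    fmt.Println(singleJoiningSlash(&quot;/base&quot;, &quot;path&quot;))   // /base/path
    fmt.Println(singleJoiningSlash(&quot;/base/&quot;, &quot;path&quot;))  // /base/path
}
</code></pre>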
<p>Instantiating a new <code>httputil.ReverseProxy</code> object lets us keep all the fancy logic of the Reverse Proxy, but then modify / customize what we need later.</p>
<h2>Some Refactoring</h2>
<p>Let's do something fun. First, I hate having this mess of code in the <code>main</code> namespace. Let's make our own module and tuck away some of this trash. It'll make this a tad harder to write about, but the examples will be simpler.</p>
<p>Here's the new project layout:</p>
<pre><code>.
├── go.mod
├── go.sum
├── main.go
└── reverseproxy
    └── reverseproxy.go
</code></pre>
<p>The first thing we're going to do is wrap the <code>httputil.ReverseProxy</code> into our own <code>ReverseProxy</code> struct. This helps us tuck code away into our own modules, and then later we can more easily add some functionality to it.</p>
<p>File <code>reverseproxy.go</code> can have this:</p>
<pre><code class="language-go">package reverseproxy

import (
    &quot;net/http&quot;
    &quot;net/http/httputil&quot;
    &quot;net/url&quot;
    &quot;strings&quot;
)

type ReverseProxy struct {
    Target *url.URL
    proxy  *httputil.ReverseProxy
}

// Start will listen on configured listeners
func (r *ReverseProxy) Start() error {
    r.proxy = &amp;httputil.ReverseProxy{
        Director: r.Director(),
    }

    // Hard-coding port 80 for now
    srv := &amp;http.Server{Addr: &quot;:80&quot;, Handler: r.proxy}

    return srv.ListenAndServe()
}

// Director returns a function for use in http.ReverseProxy.Director.
// The function matches the incoming request to a specific target and
// sets the request object to be sent to the matched upstream server.
func (r *ReverseProxy) Director() func(req *http.Request) {
    return func(req *http.Request) {
        targetQuery := r.Target.RawQuery
        req.URL.Scheme = r.Target.Scheme
        req.URL.Host = r.Target.Host
        req.URL.Path, req.URL.RawPath = joinURLPath(r.Target, req.URL)
        if targetQuery == &quot;&quot; || req.URL.RawQuery == &quot;&quot; {
            req.URL.RawQuery = targetQuery + req.URL.RawQuery
        } else {
            req.URL.RawQuery = targetQuery + &quot;&amp;&quot; + req.URL.RawQuery
        }
        if _, ok := req.Header[&quot;User-Agent&quot;]; !ok {
            // explicitly disable User-Agent so it's not set to default value
            req.Header.Set(&quot;User-Agent&quot;, &quot;&quot;)
        }
    }
}

// Helper functions moved here

func singleJoiningSlash(a, b string) string {
    // snip
}

func joinURLPath(a, b *url.URL) (path, rawpath string) {
    // snip
}
</code></pre>
<p>So, I created my own <code>ReverseProxy</code> and added a <code>Start()</code> method. The <code>Start()</code> method has some code smells in it, but it's simple to see what's going on so we'll keep it.</p>
<p>I also added a <code>Director()</code> method onto my proxy struct. This generates a <code>Director</code> function for us, which is used by <code>httputil.ReverseProxy</code>. For now, we just copied, pasted, and tweaked the stdlib <code>Director</code> method to get it working. There are only minor tweaks here. Specifically, we don't pass it a <code>target *url.URL</code>, but instead use the <code>Target *url.URL</code> that's defined in our own <code>ReverseProxy</code> struct.</p>
<p>We also moved the helper functions into this file, and I'm still hiding their <a href="https://go.dev/src/net/http/httputil/reverseproxy.go">boring contents</a> for brevity (<code>// snip!</code>).</p>
<p>The <code>main.go</code> file, which uses our <code>ReverseProxy</code>, instantiates and starts the server:</p>
<pre><code class="language-go">package main

import (
    &quot;github.com/fideloper/someproxy/reverseproxy&quot;
    &quot;log&quot;
    &quot;net/url&quot;
)

func main() {
    target, err := url.Parse(&quot;http://localhost:8000&quot;)

    if err != nil {
        log.Fatal(err)
    }

    proxy := &amp;reverseproxy.ReverseProxy{
        Target: target,
    }

    proxy.Start()
}
</code></pre>
<p>Simple! But there's no <em>features</em> yet. Let's add some!</p>
]]></content:encoded>
      <pubDate>Fri, 28 Oct 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>V. Single Host Reverse Proxy</title>
      <link>https://fideloper.com/golang-single-host-reverse-proxy</link>
      <description>Check out the stdlib Reverse Proxy! It does a lot of work for us.</description>
      <content:encoded><![CDATA[<p>It's time to get to more interesting things. The first reverse proxy we'll make is not of our making at all.</p>
<p>It turns out that the stdlib has a decent implementation of a reverse proxy: <code>proxy := httputil.NewSingleHostReverseProxy(backend)</code>. As the name implies, it just handles a single host for the backend. It's not very load-balancer-y but it is very reverse-proxy-y.</p>
<p>Let's take a quick look at how to use it:</p>
<pre><code class="language-go">package main

import (
    &quot;log&quot;
    &quot;net/http&quot;
    &quot;net/http/httputil&quot;
    &quot;net/url&quot;
)

func main() {
    // The backend is another HTTP service listening
    // for requests. Our reverse proxy will send (proxy)
    // requests to it.
    backend, err := url.Parse(&quot;http://localhost:8000&quot;)

    if err != nil {
        log.Fatal(err)
    }

    // The proxy is a Handler - it has a ServeHTTP method
    proxy := httputil.NewSingleHostReverseProxy(backend)

    // We listen for requests on port 80
    srv := http.Server{Addr: &quot;:80&quot;, Handler: proxy}

    srv.ListenAndServe()
}
</code></pre>
<p>This reverse proxy accepts http, http/2, TLS, and gRPC connections. There's a bunch of stuff going on in there! You can check out the <a href="https://go.dev/src/net/http/httputil/reverseproxy.go"><code>http.httputil.reverseproxy.go</code></a> file from stdlib to see more.</p>
<p>Here's a few things it handles:</p>
<ol>
<li>HTTP <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Trailer">Trailer headers</a> (needed for gRPC in conjunction with http/2)</li>
<li><a href="https://nathandavison.com/blog/abusing-http-hop-by-hop-request-headers">Hop-by-hop headers</a></li>
<li><a href="https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88">HTTP streaming</a></li>
<li><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Upgrade">Upgrading HTTP connections</a> (hijacking the TCP connection for Websocket use)</li>
</ol>
<p>If we run the above code and send requests to <code>http://localhost</code>, it will attempt to proxy any request to <code>localhost:8000</code>. It will return <code>502 Bad Gateway</code> if you don't have any web server listening on <code>localhost:8000</code>.</p>
<h2>Digging In</h2>
<p>Let's see what the <code>NewSingleHostReverseProxy</code> is doing.</p>
<p>Its main job is to return an instance of <code>httputil.ReverseProxy</code>. The <code>ReverseProxy</code> isn't itself limited to proxying to a single host, but Golang's stdlib only offers this &quot;example&quot; of a quick single-upstream reverse proxy. More on that later.</p>
<p>If we take a look at the <code>httputil.ReverseProxy</code> struct, there's a few interesting things. Let's take a look at the 2 most interesting.</p>
<h3>Director</h3>
<p>First, the <code>Director</code> function.</p>
<pre><code class="language-go">type ReverseProxy struct {
    // Director must be a function which modifies
    // the request into a new request to be sent
    // using Transport. Its response is then copied
    // back to the original client unmodified.
    // Director must not access the provided Request
    // after returning.
    Director func(*http.Request)

    // snip
}
</code></pre>
<p>The <code>Director</code> function takes the <em>incoming</em> HTTP request (well, a copy of it) and modifies it. It's modified in a way that makes it ready to be sent to the backend/upstream server. Part of this is setting the request's host, scheme, etc. to those of the upstream server so the request works when it's sent there.</p>
<p>The modified request is eventually sent over to the backend server, but that's not the Director's responsibility. The <code>Director</code> is just responsible for modifying the incoming request.</p>
<p>For example, if our proxy (running locally) is sent a request via <code>curl http://localhost:80</code>, and we've configured a backend/upstream server (&quot;target&quot;) of <code>http://localhost:8000</code>, then the Director will take the incoming request copy, and set the Host to <code>localhost:8000</code>. Later, the configured <code>Transport</code> will use that request information to connect to the upstream server and send that request.</p>
<h3>Default Director Function</h3>
<p>Looking at the <code>NewSingleHostReverseProxy()</code> method, we see that all it does is define a Director function and returns a new <code>ReverseProxy</code> object with that Director.</p>
<pre><code class="language-go">func NewSingleHostReverseProxy(target *url.URL) *ReverseProxy {
    targetQuery := target.RawQuery
    director := func(req *http.Request) {
        req.URL.Scheme = target.Scheme
        req.URL.Host = target.Host
        req.URL.Path, req.URL.RawPath = joinURLPath(target, req.URL)
        if targetQuery == &quot;&quot; || req.URL.RawQuery == &quot;&quot; {
            req.URL.RawQuery = targetQuery + req.URL.RawQuery
        } else {
            req.URL.RawQuery = targetQuery + &quot;&amp;&quot; + req.URL.RawQuery
        }
        if _, ok := req.Header[&quot;User-Agent&quot;]; !ok {
            // explicitly disable User-Agent so it's not set to default value
            req.Header.Set(&quot;User-Agent&quot;, &quot;&quot;)
        }
    }
    return &amp;ReverseProxy{Director: director}
}
</code></pre>
<p>Pretty simple, if we continue to ignore all the Reverse Proxy code we didn't look at.</p>
<h2>Transport</h2>
<p>As mentioned, the <code>Director</code> function &quot;directs&quot; the copied request to the correct location - our backend target <code>http://localhost:8000</code>.</p>
<p>The thing that sends the request to the backend (and returns the response) is named <code>Transport</code>:</p>
<pre><code class="language-go">type ReverseProxy struct {
    // snip

    // The transport used to perform proxy requests.
    // If nil, http.DefaultTransport is used.
    Transport http.RoundTripper

    // snip
}
</code></pre>
<p>The <code>Transport</code> is of type <code>http.RoundTripper</code>, which is yet another interface:</p>
<pre><code class="language-go">type RoundTripper interface {
    // snipped a big ole comment
    RoundTrip(*Request) (*Response, error)
}
</code></pre>
<p>Since we didn't define a Transport, the <code>DefaultTransport</code> is used. The code within it is too complex to paste here - it has a bunch of responsibilities. It's <a href="https://go.dev/src/net/http/transport.go">interesting to look at</a>, I suggest you do!</p>
<p>The basics of it, however, are that it makes a round trip! It figures out where the request needs to go, sends it, and then gets the response. How it sends it involves making a TCP connection, handling TLS, and more. Receiving the response may involve waiting for a streamed response to complete.</p>
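<p>That interface is also easy to satisfy ourselves, which hints at why it's useful: anything that implements <code>RoundTrip()</code> can be set as the proxy's <code>Transport</code>. As a sketch (the <code>loggingTransport</code> type is my own invention, not stdlib), here's a wrapper that logs each round trip before delegating to <code>http.DefaultTransport</code>:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;log&quot;
    &quot;net/http&quot;
    &quot;net/http/httptest&quot;
)

// loggingTransport is a hypothetical RoundTripper that logs each
// request before handing it to another RoundTripper (usually
// http.DefaultTransport).
type loggingTransport struct {
    next http.RoundTripper
}

func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    log.Printf(&quot;proxying %s %s%s&quot;, req.Method, req.URL.Host, req.URL.Path)
    return t.next.RoundTrip(req)
}

func main() {
    // A throwaway upstream server to round-trip against.
    upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, &quot;ok&quot;)
    }))
    defer upstream.Close()

    client := &amp;http.Client{
        Transport: &amp;loggingTransport{next: http.DefaultTransport},
    }

    resp, err := client.Get(upstream.URL)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.StatusCode) // 200
}
</code></pre>
<p>Wiring it into the reverse proxy would look something like <code>&amp;httputil.ReverseProxy{Director: director, Transport: &amp;loggingTransport{next: http.DefaultTransport}}</code>.</p>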
<h3>Round Tripper</h3>
<p>The main logic for that is in the Transport's <code>roundTrip()</code> method. The <code>Transport</code> struct's <code>RoundTrip()</code> method is kinda/sorta hidden in <a href="https://go.dev/src/net/http/roundtrip.go"><code>http/roundtrip.go</code></a>, which doesn't compile in JS/WASM contexts. That's why the real logic lives in the lower-cased <code>roundTrip()</code> method - the exported <code>RoundTrip()</code> just calls it.</p>
<p>The <code>roundTrip()</code> method makes some checks, gets a persistent connection object (also defined in that same file), and then calls <code>roundTrip()</code> on that connection. The persistent connection is a connection to the upstream server.</p>
<p>This is actually the more complex logic - concurrently writing the request to the upstream server while also reading for a response that may come before the full request is even sent.</p>
<h2>We Don't Need to Care Yet</h2>
<p>In any case, there's a lot going on in there! We don't need to really care right now, but I think some advanced features might have us digging into the Transport.</p>
<p>Next, let's make our own reverse proxy! We'll start simple, and then add some nifty features on top of what the stdlib provides for us.</p>
]]></content:encoded>
      <pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>~ Reverse Proxy Time</title>
      <link>https://fideloper.com/reverse-proxy-time</link>
      <description>Let's start building a reverse proxy.</description>
      <content:encoded><![CDATA[<p>We've been talking about regular old web servers. Let's start talking about something a little more interesting: Reverse Proxies.</p>
<p>Reverse Proxies are basically load balancers (don't @ me about that definition, <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/">look it up yourself</a>).</p>
<p>A few popular ones written in Golang are <a href="https://github.com/traefik/traefik">Traefik</a> and <a href="https://github.com/caddyserver/caddy">Caddy</a> (remember I mentioned those back in the <a href="/how-golang-http-servers-work">first article</a>?).</p>
<p>Let's see if we can implement some fraction of their functionality ourselves. Onward!</p>
]]></content:encoded>
      <pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>IV. Adding Context to Requests</title>
      <link>https://fideloper.com/golang-context-http-middleware</link>
      <description>We'll use Golang Context objects to add shared data amongst our middleware and request handlers.</description>
      <content:encoded><![CDATA[<p>We're going to discuss Golang's &quot;context&quot; objects (<code>context.Context</code>). I'll assume you're at least <a href="https://p.agnihotry.com/post/understanding_the_context_package_in_golang/">passingly familiar with them</a>.</p>
<p><strong>It's useful if your request handlers can share information about a request.</strong></p>
<p>Often the request data itself (<code>http.Request</code>) has everything you need, but sometimes your application has its own data. For example - the authenticated user.</p>
<p>To help here, one common pattern is to share data through the middleware chain via a <code>context</code> object. A middleware can set some data, and the middleware (or handler) that runs after it can read that data back out.</p>
<p>Now, using contexts is generally agreed to be a good thing, <strong>but what data you save to a context is disagreed upon</strong>. The rules of thumb that I like are:</p>
<ol>
<li>For HTTP requests, only put information in a context that is specific to that request</li>
<li>Don't put data into a request context that lives longer than that one request</li>
</ol>
<p>Something that belongs in a request context: the current user, or perhaps a DB transaction used just for that request.</p>
<p>Something that doesn't belong in a request context: A Logger or DB connection (which is indeed different from a specific transaction).</p>
<h2>Context and Cloning</h2>
<p>Pontificating about programming aside, there's a few annoying things to explain about Go's context object, especially in regards to HTTP requests.</p>
<p>First, an <code>http.Request</code> object has a few pertinent methods:</p>
<ol>
<li><code>req.Context()</code> returns the request's context. If none was set on the request, it returns a new <code>context.Background()</code>.</li>
<li><code>req.WithContext(ctx)</code> returns a <em>shallow</em> copy of the request with the provided context. Requests are (should be) immutable, and contexts are definitely immutable.</li>
</ol>
<p>This means adding a context to a request nets us a copied request object with a new context on it.</p>
<p>But just what the hell is a shallow copy of a request?</p>
<p>Here's <code>WithContext(ctx)</code> from stdlib, with a bit of the relevant stdlib comments (which will change <em>after</em> Go 1.19):</p>
<pre><code class="language-go">// To change the context of a request, such as an incoming request you
// want to modify before sending back out, use Request.Clone. Between
// those two uses, it's rare to need WithContext.
func (r *Request) WithContext(ctx context.Context) *Request {
    if ctx == nil {
        panic(&quot;nil context&quot;)
    }
    r2 := new(Request)
    *r2 = *r
    r2.ctx = ctx
    return r2
}
</code></pre>
<p>Great, so I should never actually use <code>WithContext()</code>?!? I asked people smarter than me (who were <em>also</em> confused, it wasn't just me)! One of those people <a href="https://github.com/golang/go/issues/53413">went to the source to ask</a>.</p>
<p>It turns out, using <code>WithContext()</code> is just fine for our use case. We can run <code>newReq := r.WithContext(myShinyNewCtx)</code> in our middleware, and pass that along as if it is our original request.</p>
<p>Using <code>r.Clone()</code> is a &quot;deep copy&quot;. It's better suited for making a completely new copy of the request with its own &quot;lifecycle&quot;. For example, the built-in <code>httputil.NewSingleHostReverseProxy()</code> makes use of <code>Clone()</code> in order to take a received request, and then modify it as needed before passing the cloned &amp; modified request to an upstream server.</p>
<p>Here's the <code>Clone()</code> method:</p>
<pre><code class="language-go">// Clone returns a deep copy of r with its context changed to ctx.
// The provided ctx must be non-nil.
//
// For an outgoing client request, the context controls the entire
// lifetime of a request and its response: obtaining a connection,
// sending the request, and reading the response headers and body.
func (r *Request) Clone(ctx context.Context) *Request {
    if ctx == nil {
        panic(&quot;nil context&quot;)
    }
    r2 := new(Request)
    *r2 = *r
    r2.ctx = ctx
    r2.URL = cloneURL(r.URL)
    if r.Header != nil {
        r2.Header = r.Header.Clone()
    }
    if r.Trailer != nil {
        r2.Trailer = r.Trailer.Clone()
    }
    if s := r.TransferEncoding; s != nil {
        s2 := make([]string, len(s))
        copy(s2, s)
        r2.TransferEncoding = s2
    }
    r2.Form = cloneURLValues(r.Form)
    r2.PostForm = cloneURLValues(r.PostForm)
    r2.MultipartForm = cloneMultipartForm(r.MultipartForm)
    return r2
}
</code></pre>
<p>It does more stuff! I'm still not sure why a &quot;shallow&quot; copy is safe to use with Middleware while a &quot;deep&quot; copy requires explicitly copying some data. It seems to be concerned with cloning fields whose types are defined by the <code>http</code> package itself (vs &quot;standard&quot; types such as <code>string</code>, <code>bool</code>, or <code>[]string</code>).</p>
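<p>One practical difference you can observe: a <code>WithContext()</code> shallow copy shares the original request's <code>Header</code> map, while <code>Clone()</code> copies it. A quick sketch:</p>

```go
package main

import (
	"context"
	"fmt"
	"net/http/httptest"
)

// headerSharing reports what the *original* request's header says
// after mutating a shallow copy, then after mutating a deep copy.
func headerSharing() (afterShallow, afterClone string) {
	r := httptest.NewRequest("GET", "http://example.com/", nil)

	// Shallow copy: the copy shares r's Header map, so the
	// mutation leaks back into the original.
	r.Header.Set("X-Thing", "original")
	shallow := r.WithContext(context.Background())
	shallow.Header.Set("X-Thing", "changed")
	afterShallow = r.Header.Get("X-Thing")

	// Deep copy: Clone() copies the Header map, so the
	// original is untouched.
	r.Header.Set("X-Thing", "original")
	deep := r.Clone(context.Background())
	deep.Header.Set("X-Thing", "changed")
	afterClone = r.Header.Get("X-Thing")

	return afterShallow, afterClone
}

func main() {
	s, c := headerSharing()
	fmt.Println(s, c) // changed original
}
```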
<p>Anyway, let's do some contexting.</p>
<h2>Adding Context</h2>
<p>We'll stick with our example of adding information about the current authenticated user. Let's add a Middleware that &quot;adds&quot; the current user to the request's context.</p>
<pre><code class="language-go">// UserMiddleware gets the current user and adds it to a new context
func UserMiddleware(h http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := context.WithValue(r.Context(), &quot;user&quot;, &quot;fideloper&quot;)
        newReq := r.WithContext(ctx)
        h.ServeHTTP(w, newReq)
    })
}
</code></pre>
<p>Contexts are immutable, so each &quot;change&quot; requires creating a new context based off of an old one. We grab <code>r.Context()</code>, which is likely just returning <code>context.Background()</code> as mentioned earlier.</p>
<p>We pass our new request object <code>newReq</code> along in <code>h.ServeHTTP</code>, leaving the old one to die a lonely death when the garbage collector comes calling.</p>
<p>We can add this into our Middleware stack, and then we get a user object (just a <code>string</code> for now) that any other middleware/handler can retrieve.</p>
<p>Here's the whole thing:</p>
<pre><code class="language-go">package main

import (
    &quot;context&quot;
    &quot;fmt&quot;
    &quot;log&quot;
    &quot;net&quot;
    &quot;net/http&quot;
)

// Middleware is a func type that
// allows for chaining middleware
type Middleware func(http.HandlerFunc) http.HandlerFunc

// CompileMiddleware takes the base http.HandlerFunc h
// and wraps around the given list of Middleware m
func CompileMiddleware(h http.HandlerFunc, m []Middleware) http.HandlerFunc {
    if len(m) &lt; 1 {
        return h
    }

    wrapped := h

    // loop in reverse to preserve middleware order
    for i := len(m) - 1; i &gt;= 0; i-- {
        wrapped = m[i](wrapped)
    }

    return wrapped
}

// Let's define some middleware!

// LogMiddleware logs some output 
// for each request received
func LogMiddleware(h http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Printf(&quot;%s: %s&quot;, r.Method, r.RequestURI)
        h.ServeHTTP(w, r)
    })
}

// UserMiddleware gets the current user 
// and adds it to a new context
func UserMiddleware(h http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := context.WithValue(r.Context(), &quot;user&quot;, &quot;fideloper&quot;)
        newReq := r.WithContext(ctx)
        h.ServeHTTP(w, newReq)
    })
}

// RateLimitMiddleware limits how often
// a request can be made from a given client
func RateLimitMiddleware(h http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // rateLimitBreached is hand-wavy magic defined elsewhere
        if rateLimitBreached(r) {
            // 429 Too Many Requests
            w.WriteHeader(429)
            fmt.Fprint(w, &quot;Rate limit reached&quot;)
            return
        }

        h.ServeHTTP(w, r)
    })
}

func main() {
    mux := http.NewServeMux()

    // Define our middleware stack
    // These run in the order given
    stack := []Middleware{
        LogMiddleware,
        UserMiddleware,
        RateLimitMiddleware,
    }

    // Assign our actual HTTP Handler to a variable
    handleAllRequests := func(writer http.ResponseWriter, r *http.Request) {
        writer.WriteHeader(200)
        fmt.Fprint(writer, &quot;HELLO!&quot;)
        fmt.Fprintf(writer, &quot; HERE IS YOUR USER: %s&quot;, r.Context().Value(&quot;user&quot;))
    }

    // Set our handler as a &quot;wrapped&quot; handler - each middleware is called
    // before finally calling the handleAllRequests http Handler
    mux.HandleFunc(&quot;/&quot;, CompileMiddleware(handleAllRequests, stack))

    srv := &amp;http.Server{
        Handler: mux,
    }

    ln, err := net.Listen(&quot;tcp&quot;, &quot;:80&quot;)
    if err != nil {
        panic(err)
    }

    srv.Serve(ln)
}
</code></pre>
<p>In addition to creating the <code>UserMiddleware</code> and adding it to the <code>stack</code>, we updated the base handler to print out information about the current user, retrieved from the request context.</p>
<p>That's this part:</p>
<pre><code class="language-go">// Assign our actual HTTP Handler to a variable
handleAllRequests := func(writer http.ResponseWriter, r *http.Request) {
    writer.WriteHeader(200)
    fmt.Fprint(writer, &quot;HELLO!&quot;)
    fmt.Fprintf(writer, &quot; HERE IS YOUR USER: %s&quot;, r.Context().Value(&quot;user&quot;))
}
</code></pre>
<h2>But I Want Types!</h2>
<p>One thing sort of sucks: The type <code>any</code>.</p>
<p>The <code>context.WithValue</code> function accepts a value of type <code>any</code>, and <code>r.Context().Value(&quot;foo&quot;)</code> returns a value of type <code>any</code>.</p>
<p>This means Go's compiler (and our IDEs) can't enforce types, nor help us know what data is being read from or written to the context object. But we want type safety! That's why we use Go!</p>
<p><a href="https://www.calhoun.io/pitfalls-of-context-values-and-how-to-avoid-or-mitigate-them/">This article</a> (and <a href="https://blog.khanacademy.org/statically-typed-context-in-go/">this one</a>) covers some ways to get type safety. I've not settled on what I like best, <strong>but here's a stab at it</strong>.</p>
<p>First, let's assume that our context shouldn't just receive a string representing a user. We'll instead make a <code>User</code> struct and some helper functions to manage it:</p>
<pre><code class="language-go">// Still a bit contrived,
// but bear with me
type User struct {
    Username string
}

// setUser adds a user to a context, returning 
// a new context with the user attached
func setUser(ctx context.Context, u *User) context.Context {
    return context.WithValue(ctx, &quot;user&quot;, u)
}

// getUser returns an instance of User,
// if set, from the given context
func getUser(ctx context.Context) *User {
    user, ok := ctx.Value(&quot;user&quot;).(*User)

    if !ok {
        return nil
    }

    return user
}
</code></pre>
<p>These helper functions give us a type-safe way to manage getting/setting the <code>User</code> to/from our context, and give the compiler something to chew on.</p>
<p>Our <code>UserMiddleware</code> becomes this:</p>
<pre><code class="language-go">// UserMiddleware gets the current user and adds it to a new context
func UserMiddleware(h http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := setUser(r.Context(), &amp;User{
            Username: &quot;fideloper&quot;,
        })
        newReq := r.WithContext(ctx)
        h.ServeHTTP(w, newReq)
    })
}
</code></pre>
<p>The only change there is to use the <code>setUser</code> function to add a new <code>User</code> to the context (which returns a new Context - remember, contexts are immutable).</p>
<p>Then we can update our base handler to use <code>getUser</code> to retrieve the <code>User</code>. I chose to return <code>nil</code> if no user is associated, rather than an error. You do you.</p>
<pre><code class="language-go">// Assign our actual HTTP Handler to a variable
handleAllRequests := func(writer http.ResponseWriter, r *http.Request) {
    writer.WriteHeader(200)
    fmt.Fprint(writer, &quot;HELLO!&quot;)
    
    user := getUser(r.Context())

    if user != nil {
        fmt.Fprintf(writer, &quot; HERE IS YOUR USER: %s&quot;, user.Username)
        return
    }

    fmt.Fprint(writer, &quot; NO USER AUTHENTICATED&quot;)
}
</code></pre>
<p>Here's the whole thing:</p>
<pre><code class="language-go">package main

import (
    &quot;context&quot;
    &quot;fmt&quot;
    &quot;log&quot;
    &quot;net&quot;
    &quot;net/http&quot;
)

type User struct {
    Username string
}

func setUser(ctx context.Context, u *User) context.Context {
    return context.WithValue(ctx, &quot;user&quot;, u)
}

func getUser(ctx context.Context) *User {
    user, ok := ctx.Value(&quot;user&quot;).(*User)

    if !ok {
        return nil
    }

    return user
}

// Middleware is a func type that allows for chaining middleware
type Middleware func(http.HandlerFunc) http.HandlerFunc

// CompileMiddleware takes the base http.HandlerFunc h and wraps it in the given list of Middleware m
func CompileMiddleware(h http.HandlerFunc, m []Middleware) http.HandlerFunc {
    if len(m) &lt; 1 {
        return h
    }

    wrapped := h

    // loop in reverse to preserve middleware order
    for i := len(m) - 1; i &gt;= 0; i-- {
        wrapped = m[i](wrapped)
    }

    return wrapped
}

// Let's define some middleware!

// LogMiddleware logs some output for each request received
func LogMiddleware(h http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Printf(&quot;%s: %s&quot;, r.Method, r.RequestURI)
        h.ServeHTTP(w, r)
    })
}

// UserMiddleware gets the current user and adds it to a new context
func UserMiddleware(h http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := setUser(r.Context(), &amp;User{
            Username: &quot;fideloper&quot;,
        })
        newReq := r.WithContext(ctx)
        h.ServeHTTP(w, newReq)
    })
}

// RateLimitMiddleware limits how often a request can be made from a given client
func RateLimitMiddleware(h http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // rateLimitBreached is hand-wavy magic defined elsewhere
        if rateLimitBreached(r) {
            // 429 Too Many Requests
            w.WriteHeader(429)
            fmt.Fprint(w, &quot;Rate limit reached&quot;)
            return
        }

        h.ServeHTTP(w, r)
    })
}

func main() {
    mux := http.NewServeMux()

    // Define our middleware stack
    // These run in the order given
    stack := []Middleware{
        LogMiddleware,
        UserMiddleware,
        RateLimitMiddleware,
    }

    // Assign our actual HTTP Handler to a variable
    handleAllRequests := func(writer http.ResponseWriter, r *http.Request) {
        writer.WriteHeader(200)
        fmt.Fprint(writer, &quot;HELLO!&quot;)
        user := getUser(r.Context())

        if user != nil {
            fmt.Fprintf(writer, &quot; HERE IS YOUR USER: %s&quot;, user.Username)
            return
        }

        fmt.Fprint(writer, &quot; NO USER AUTHENTICATED&quot;)
    }

    // Set our handler as a &quot;wrapped&quot; handler - each middleware is called before
    // finally calling the handleAllRequests http Handler
    mux.HandleFunc(&quot;/&quot;, CompileMiddleware(handleAllRequests, stack))

    srv := &amp;http.Server{
        Handler: mux,
    }

    ln, err := net.Listen(&quot;tcp&quot;, &quot;:80&quot;)
    if err != nil {
        panic(err)
    }

    srv.Serve(ln)
}

</code></pre>
<p>In reality I'd have several files and/or some modules of my own here to manage Users, Middleware, etc. In this example, we're throwing it all into one file.</p>
<p>But now we know how to use context objects to pass data through our middleware and handlers!</p>
<h2>No More Web Servers</h2>
<p>That's a wrap on web servers. The next thing we'll look into is more fun: Reverse Proxies.</p>
]]></content:encoded>
      <pubDate>Thu, 20 Oct 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>III. Chainable Middleware</title>
      <link>https://fideloper.com/golang-chainable-middleware</link>
      <description>We'll use our knowledge of Handlers to create middleware for our Go web applications.</description>
      <content:encoded><![CDATA[<p>It turns out that Handlers provide a nice way to create a chain of HTTP middleware.</p>
<p>There's more than one way to go about this, but they all use <code>http.Handler</code>. If you google it, you'll come across examples that have you nest function calls in a way that gets grim pretty quickly.</p>
<pre><code class="language-go">// nested middleware calls get grim pretty quickly
mux.HandleFunc(&quot;/&quot;, seriously(
    who(
        wants(
            this(handler),
        ),
    ),
))
</code></pre>
<p>However, if you google &quot;golang chainable middleware&quot; (or similar), you'll find a much nicer pattern! Let's explore it, and see how Handlers help us here.</p>
<h2>Generating a Middleware</h2>
<p>Let's see a function that generates a middleware:</p>
<pre><code class="language-go">// LogMiddleware returns a function that logs
// related output for each request received.
func LogMiddleware(next http.HandlerFunc) http.HandlerFunc {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf(&quot;%s: %s&quot;, r.Method, r.RequestURI)
		next.ServeHTTP(w, r)
	})
}
</code></pre>
<p>We'll deal with <code>http.HandlerFunc</code> instead of <code>http.Handler</code> in this case. The only reason for this is that it's easier to pass a regular function (see the <a href="/golang-http-handlers">previous article on Handlers</a> if you haven't read it).</p>
<p>Function <code>LogMiddleware</code> takes an instance of <code>http.HandlerFunc</code> and returns another <code>http.HandlerFunc</code>.</p>
<p>The function we return does some logic, and then calls <code>ServeHTTP</code> on the &quot;next&quot; <code>http.HandlerFunc</code>. We know the <code>ServeHTTP</code> method is available to call thanks to the magic of <code>http.HandlerFunc</code> as described in the <a href="/golang-http-handlers">previous article</a>.</p>
<p>We can keep nesting middleware having each one call the &quot;next&quot; Handler. Let's write some code and make that idea concrete.</p>
<h2>Codify the Middleware</h2>
<p>First, we'll codify this &quot;pattern&quot; of generating Middleware functions as a <code>type</code>:</p>
<pre><code class="language-go">// Middleware is a func type that allows
// for chaining middleware
type Middleware func(http.HandlerFunc) http.HandlerFunc
</code></pre>
<p>Defining this gives us the ability to enforce types, which we'll see later.</p>
<h2>Chaining the Middleware</h2>
<p>The <code>LogMiddleware</code> function logs info about the request before calling the &quot;next&quot; middleware. Every middleware does this until the last Handler is run (or if a middleware decides to short-circuit the process and do something else, e.g. return a &quot;not authorized&quot; response).</p>
<p>Since each middleware calls the &quot;next&quot; middleware, we call this a <strong>chain of middleware</strong>.</p>
<p>That becomes a bit clearer if you see multiple middleware in use. Let's pretend we have 2 middleware:</p>
<pre><code class="language-go">// LogMiddleware logs some output for each request received
func LogMiddleware(h http.HandlerFunc) http.HandlerFunc {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf(&quot;%s: %s&quot;, r.Method, r.RequestURI)
		h.ServeHTTP(w, r)
	})
}

// RateLimitMiddleware limits how often a
// request can be made from a given client
func RateLimitMiddleware(h http.HandlerFunc) http.HandlerFunc {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Love me some hand-wavy magic
		if rateLimitBreached(r) {
			// 429 Too Many Requests
			w.WriteHeader(429)
			fmt.Fprint(w, &quot;Rate limit reached&quot;)
			return
		}

		h.ServeHTTP(w, r)
	})
}
</code></pre>
<p>We have 2 middleware, and then we have a function that actually handles the web request:</p>
<pre><code class="language-go">func(writer http.ResponseWriter, r *http.Request) {
    writer.WriteHeader(200)
    fmt.Fprint(writer, &quot;HELLO&quot;)
}
</code></pre>
<p>Let's whip up a helper function that will wrap an HTTP handler function (the thing doing the actual work of responding to a request) in our various middleware:</p>
<pre><code class="language-go">// CompileMiddleware takes the base http.HandlerFunc h 
// and wraps it in the given list of Middleware m
func CompileMiddleware(h http.HandlerFunc, m []Middleware) http.HandlerFunc {
	if len(m) &lt; 1 {
		return h
	}

	wrapped := h

	// loop in reverse to preserve middleware order
	for i := len(m) - 1; i &gt;= 0; i-- {
		wrapped = m[i](wrapped)
	}

	return wrapped
}
</code></pre>
<p>Putting it all together along with our basic web server looks like this:</p>
<pre><code class="language-go">package main

import (
	&quot;fmt&quot;
	&quot;log&quot;
	&quot;net&quot;
	&quot;net/http&quot;
)

// Middleware is a func type that allows for
// chaining middleware
type Middleware func(http.HandlerFunc) http.HandlerFunc

// CompileMiddleware takes the base http.HandlerFunc h 
// and wraps it in the given list of Middleware m
func CompileMiddleware(h http.HandlerFunc, m []Middleware) http.HandlerFunc {
	if len(m) &lt; 1 {
		return h
	}

	wrapped := h

	// loop in reverse to preserve middleware order
	for i := len(m) - 1; i &gt;= 0; i-- {
		wrapped = m[i](wrapped)
	}

	return wrapped
}

// Let's define the middleware!

// LogMiddleware logs some output for each
// request received
func LogMiddleware(h http.HandlerFunc) http.HandlerFunc {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf(&quot;%s: %s&quot;, r.Method, r.RequestURI)
		h.ServeHTTP(w, r)
	})
}

// RateLimitMiddleware limits how often a
// request can be made from a given client
func RateLimitMiddleware(h http.HandlerFunc) http.HandlerFunc {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if rateLimitBreached(r) {
			// 429 Too Many Requests
			w.WriteHeader(429)
			fmt.Fprint(w, &quot;Rate limit reached&quot;)
			return
		}

		h.ServeHTTP(w, r)
	})
}

func main() {

	mux := http.NewServeMux()

	// Define our middleware stack
	// These run in the order given
	stack := []Middleware{
        LogMiddleware,
		RateLimitMiddleware,
	}

    // Assign our base HTTP Handler to a variable
	handleAllRequests := func(writer http.ResponseWriter, r *http.Request) {
		writer.WriteHeader(200)
		fmt.Fprint(writer, &quot;HELLO&quot;)
	}

    // Set our handler as a &quot;wrapped&quot; handler.
    // Each middleware is called before finally
    // calling the handleAllRequests http Handler
	mux.HandleFunc(&quot;/&quot;, CompileMiddleware(handleAllRequests, stack))

	srv := &amp;http.Server{
		Handler: mux,
	}

	ln, err := net.Listen(&quot;tcp&quot;, &quot;:80&quot;)
	if err != nil {
		panic(err)
	}

	srv.Serve(ln)
}
</code></pre>
<p>Fairly simple, if a little verbose (just like Golang itself). Any request will be logged, then checked against a rate limit. If the rate limit is not reached, then our <code>handleAllRequests</code> handler is finally run.</p>
<p>Note that we preserve the order that middleware are run. Our base handler is run last.</p>
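<p>You can convince yourself of the ordering with a quick, self-contained trace (the <code>tag</code> helper and the &quot;A&quot;/&quot;B&quot; names are invented for illustration; <code>CompileMiddleware</code> is the same function from above):</p>

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

type Middleware func(http.HandlerFunc) http.HandlerFunc

func CompileMiddleware(h http.HandlerFunc, m []Middleware) http.HandlerFunc {
	wrapped := h
	// loop in reverse to preserve middleware order
	for i := len(m) - 1; i >= 0; i-- {
		wrapped = m[i](wrapped)
	}
	return wrapped
}

// tag returns a middleware that records its name
// before calling the next handler.
func tag(name string, trace *[]string) Middleware {
	return func(next http.HandlerFunc) http.HandlerFunc {
		return func(w http.ResponseWriter, r *http.Request) {
			*trace = append(*trace, name)
			next.ServeHTTP(w, r)
		}
	}
}

// runTrace compiles a two-middleware stack and returns
// the order in which everything actually ran.
func runTrace() []string {
	var trace []string
	base := func(w http.ResponseWriter, r *http.Request) {
		trace = append(trace, "base")
	}
	h := CompileMiddleware(base, []Middleware{
		tag("A", &trace),
		tag("B", &trace),
	})
	h.ServeHTTP(httptest.NewRecorder(), httptest.NewRequest("GET", "/", nil))
	return trace
}

func main() {
	fmt.Println(runTrace()) // [A B base]
}
```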
<h2>Quick Review</h2>
<p>The annoying part (but the part that makes this interesting for me to learn and write about) is how Golang's type system can be a bit obtuse.</p>
<p>A little review from the last article: Golang uses a <code>type HandlerFunc func(...)</code> to define a type that is itself a function. That function type provides method <code>ServeHTTP</code>. This is so
we can pass a regular function and have the <code>http</code> stdlib convert it to an <code>http.Handler</code> (which wants that <code>ServeHTTP</code> method).</p>
<p>On top of that, we abuse it a bit for our Middleware, allowing us to create a chain of <code>http.Handler</code>s (technically <code>http.HandlerFunc</code>s), each calling the next (in a specific order!) before finally calling the actual HTTP handler that returns a response.</p>
<blockquote>
<p>Sidenote: Any middleware that doesn't call <code>ServeHTTP</code> breaks the chain. This is on purpose - it can abort the chain and respond with whatever makes sense if needed. That's why middleware are often used on routes that require authentication. No use processing the request further if we know the user needs to be authenticated to perform the action.</p>
</blockquote>
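<p>Here's what that short-circuit looks like in practice. This <code>AuthMiddleware</code> is a hypothetical sketch (a real one would verify a token, not merely check that a header exists):</p>

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// AuthMiddleware breaks the chain with a 401 when no
// Authorization header is present - h.ServeHTTP never runs.
func AuthMiddleware(h http.HandlerFunc) http.HandlerFunc {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") == "" {
			w.WriteHeader(401)
			fmt.Fprint(w, "Not authorized")
			return // chain broken here
		}
		h.ServeHTTP(w, r)
	})
}

// serve runs a request (with or without auth) through the
// wrapped handler and returns the response status code.
func serve(authorized bool) int {
	handler := AuthMiddleware(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(200)
		fmt.Fprint(w, "HELLO")
	})

	req := httptest.NewRequest("GET", "/", nil)
	if authorized {
		req.Header.Set("Authorization", "Bearer something")
	}
	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, req)
	return rec.Code
}

func main() {
	fmt.Println(serve(false)) // 401
	fmt.Println(serve(true))  // 200
}
```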
<p>So, now we have middleware! <strong>Let's next see how to share information amongst our middleware.</strong></p>
]]></content:encoded>
      <pubDate>Tue, 18 Oct 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>II. It's HTTP Handlers All the Way Down</title>
      <link>https://fideloper.com/golang-http-handlers</link>
      <description>See how Golang (ab)uses Handlers when accepting HTTP requests.</description>
      <content:encoded><![CDATA[<p>The previous article covered <a href="/how-golang-http-servers-work">how HTTP servers work in Go</a>. In it, I mentioned Handlers quite a bit. Let's finally figure those out!</p>
<p>The <code>http.Handler</code> is just an interface:</p>
<pre><code class="language-go">type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}
</code></pre>
<p>The <code>http.Server</code> has a <code>Handler</code> property, which wants a <code>http.Handler</code>:</p>
<pre><code class="language-go">// From stdlib http package:
type Server struct {
    // snip
    Handler Handler // handler to invoke, http.DefaultServeMux if nil
}
</code></pre>
<p>Most properties in the <code>Server</code> struct have a long comment describing them. This one does not, but the little comment it does have tells us the server uses <code>DefaultServeMux</code> if the property is nil. That's the behavior we saw in the previous article.</p>
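<p>We can see that fallback in action. The package-level <code>http.HandleFunc</code> registers routes on <code>http.DefaultServeMux</code>, which is exactly what a nil-Handler server dispatches to (this sketch drives the mux directly instead of starting a real server):</p>

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func init() {
	// The package-level HandleFunc registers on http.DefaultServeMux.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "HELLO")
	})
}

// callDefaultMux dispatches a request straight to DefaultServeMux,
// which is what a Server with a nil Handler does under the hood.
func callDefaultMux(path string) string {
	rec := httptest.NewRecorder()
	http.DefaultServeMux.ServeHTTP(rec, httptest.NewRequest("GET", path, nil))
	return rec.Body.String()
}

func main() {
	// Equivalent to running http.ListenAndServe(":80", nil)
	// and curling "/": nil falls back to http.DefaultServeMux.
	fmt.Println(callDefaultMux("/")) // HELLO
}
```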
<p>In the example in the last article, we created a <code>ServeMux</code> and a <code>Server</code>:</p>
<pre><code class="language-go">mux := http.NewServeMux()
mux.HandleFunc(&quot;/&quot;, func(writer http.ResponseWriter, r *http.Request) {
    writer.WriteHeader(200)
    fmt.Fprint(writer, &quot;HELLO&quot;)
})

srv := &amp;http.Server{
    Handler: mux,
}
</code></pre>
<p>We passed a <code>http.ServeMux</code> to the server's Handler. That <code>ServeMux</code> object satisfies the <code>http.Handler</code> interface because it has a <code>ServeHTTP</code> method.</p>
<p>The interface doesn't care what the <code>ServeHTTP</code> method does (that's not the job of an interface). The Mux's <code>ServeHTTP</code> method happens to match an HTTP request to a user-defined <code>http.Handler</code> (yes, another Handler!) and calls <code>ServeHTTP</code> on <em>that</em> Handler.</p>
<p>That process looks a bit like the below, where the <code>ServeHTTP</code> method that's on the <code>ServeMux</code> struct matches a user-defined Handler and calls <code>ServeHTTP</code> on it:</p>
<pre><code class="language-go">// ServeHTTP dispatches the request to the handler whose
// pattern most closely matches the request URL.
func (mux *ServeMux) ServeHTTP(w ResponseWriter, r *Request) {
    // snip

    // Match the request URI to
    // a user-registered route
    h, _ := mux.Handler(r)

    // The user-defined route is
    // an instance of http.Handler
    h.ServeHTTP(w, r)
}
</code></pre>
<p>When I say &quot;user-defined handler&quot;, I mean this thing:</p>
<pre><code class="language-go">mux.HandleFunc(&quot;/&quot;, func(writer http.ResponseWriter, r *http.Request) {
    writer.WriteHeader(200)
    fmt.Fprint(writer, &quot;HELLO&quot;)
})
</code></pre>
<p>This is a bit confusing at first. The <code>ServeMux</code> satisfies the <code>Handler</code> interface, but it also then matches an incoming request to a registered handler. The registered handler <em>also</em> satisfies the <code>Handler</code> interface, and so we can call <code>ServeHTTP</code> on that handler.</p>
<p><strong>That happens a few more times!</strong> It's a whole chain of Handlers! Here's roughly what the code path is:</p>
<blockquote>
<p><strong>1:</strong> An incoming request (eventually) triggers the creation of a <code>http.serverHandler</code> (a private, unexported object)</p>
</blockquote>
<p></p>
<blockquote>
<p><strong>2:</strong> <code>http.serverHandler</code> has a <code>ServeHTTP</code> method! It's the first <code>ServeHTTP</code> method called in the &quot;chain&quot;</p>
</blockquote>
<p></p>
<blockquote>
<p><strong>3:</strong> <code>http.serverHandler</code> contains a reference to the <code>Server</code> and uses it to call the server's <code>Handler</code> (which, remember, is often an instance of <code>ServeMux</code>, although it doesn't have to be)</p>
</blockquote>
<p></p>
<blockquote>
<p><strong>4:</strong> Ours <em>is</em> a Mux, and the Mux matches the incoming request's route to a <code>http.HandlerFunc</code> (the function we provided as a handler), and calls <code>ServeHTTP</code> on that handler!</p>
</blockquote>
<p><strong>We can look at the code a bit to see that more clearly.</strong></p>
<p>The chain of Handler calls <em>roughly</em> looks like this, which I copied/pasted from stdlib, and then tweaked to make sense:</p>
<pre><code class="language-go">// A. Deep in http/server.go...

//    This is the top-level ServeHTTP call
//    serveHandler has a reference to the Server
sh := serverHandler{srv: c.server} 
sh.ServeHTTP(w, w.req)


// B. Within serverHandler.ServeHttp()...

//    A Server is a prop of the serverHandler.
//    It calls the server's Handler (ServeMux).
handler := sh.srv.Handler
if handler == nil {
    // Look, the default ServeMux!
    handler = DefaultServeMux
}

handler.ServeHTTP(rw, req)


// C. Within handler.ServeHTTP()...

//    Our handler function has been converted to a HandlerFunc
//    which gives it the ServeHTTP() method that's called here:
userDefinedHandler := mux.Handler(request)
userDefinedHandler.ServeHTTP(w, r)
</code></pre>
<p>🐢 It's Handlers all the way down.</p>
<h2>HandlerFunc</h2>
<p>In the code comments directly above I mentioned that our handler function is &quot;converted&quot; to a <code>http.HandlerFunc</code>.</p>
<p>Here's the handler function we defined previously:</p>
<pre><code class="language-go">mux := http.NewServeMux()

// We pass a regular function as a handler function
mux.HandleFunc(&quot;/&quot;, func(writer http.ResponseWriter, r *http.Request) {
    writer.WriteHeader(200)
    fmt.Fprint(writer, &quot;HELLO&quot;)
})
</code></pre>
<p>The function we passed there actually satisfies <code>Handler</code> even though it's not explicitly named <code>ServeHTTP</code> and <em>is not typed as an</em> <code>http.Handler</code>! However, we actually called <code>ServeHTTP()</code> on it, and it ran our handler code. That's weird!</p>
<pre><code class="language-go">// Somehow this runs the handler code we wrote
// even tho what we passed was a regular func()
// and didn't define something with a ServeHTTP
// method on it.
userDefinedHandler := mux.Handler(request)
userDefinedHandler.ServeHTTP(w, r)
</code></pre>
<p>The trick is the <code>http.HandlerFunc</code> &quot;conversion&quot;. <strong>Let's see how that works with an example.</strong></p>
<p>It does some whack nonsense, just bear with me.</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net&quot;
    &quot;net/http&quot;
)

// MyHandler is of type &quot;func&quot;, with a specific function signature. Weird!
type MyHandler func(http.ResponseWriter, *http.Request)

// ServeHTTP adds a function with the correct
// signature to make it satisfy http.Handler.
// Note that MyHandler `m` is *callable* as a
// function. We can, and do, call `m(w, r)`!
func (m MyHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    m(w, r)
}

func main() {
    mux := http.NewServeMux()

    // Our function gets turned into an instance of MyHandler,
    // which provides method ServeHTTP, and calls the func we
    // passed into mux.Handle() here.
    //
    // * Note that this method requires us to use mux.Handle,
    //   not mux.HandleFunc
    mux.Handle(&quot;/&quot;, MyHandler(func(writer http.ResponseWriter, r *http.Request) {
        writer.WriteHeader(200)
        fmt.Fprint(writer, &quot;HELLO&quot;)
    }))

    srv := &amp;http.Server{
        Handler: mux,
    }

    ln, err := net.Listen(&quot;tcp&quot;, &quot;:80&quot;)
    if err != nil {
        panic(err)
    }

    srv.Serve(ln)
}
</code></pre>
<p>This is a bit weird unless you're pretty familiar with Golang. The pattern was new to me.</p>
<p><strong>In Golang, you can just define your own types.</strong> Above, we defined (named) a type <code>MyHandler</code>. It's of type <code>func</code>! That function has a specific signature.</p>
<p>We then give <code>MyHandler</code> a <code>ServeHTTP</code> method! This was new to me - I'm used to creating <code>struct</code> types and adding methods to those, but here we created <code>MyHandler</code> as type <code>func</code> ... <em>and then we added a method on that</em>!</p>
<p>To repeat myself, this is the part I'm talking about:</p>
<pre><code class="language-go">// MyHandler is of type &quot;func&quot;, with a specific function signature. Weird!
type MyHandler func(http.ResponseWriter, *http.Request)

// ServeHTTP adds a function with the correct
// signature to make it satisfy http.Handler.
// Note that MyHandler `m` is *callable* as a
// function. We can, and do, call `m(w, r)`!
func (m MyHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    m(w, r)
}
</code></pre>
<p>By adding the <code>ServeHTTP</code> method, the <code>MyHandler</code> type now satisfies <code>http.Handler</code> interface.</p>
<p>What's weird is that <code>ServeHTTP</code> calls <code>m(writer, r)</code>. Turns out...you can do that. Since <code>MyHandler</code> is a <code>func</code> you can just call it like a function.</p>
<pre><code class="language-go">func (m MyHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // We can actually call &quot;m&quot; as a function. Neat!
    // &quot;m&quot; is from `func(m MyHandler)` where I named 
    // the instance of this function &quot;m&quot;
    m(w, r)
}
</code></pre>
<p>After we define all of that, we pass a regular ol' function with our handling code (status 200, string <code>&quot;HELLO&quot;</code>), but we <strong>convert</strong> that handler function to a <code>MyHandler</code>. As I've said before, <code>MyHandler</code> satisfies interface <code>http.Handler</code>, so we're effectively able to take our regular function and pretend it's an <code>http.Handler</code>.</p>
<p><strong>Fun fact: This is what the stdlib does</strong>.</p>
<p>It turns out <code>MyHandler</code> is an exact copy of the following stdlib code from <code>http/server.go</code>, which defines type <code>http.HandlerFunc</code>:</p>
<pre><code class="language-go">// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as HTTP handlers. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(ResponseWriter, *Request)

// ServeHTTP calls f(w, r).
func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) {
    f(w, r)
}
</code></pre>
<p>Note the comment from the stdlib code:</p>
<blockquote>
<p><em>HandlerFunc type is an adapter to allow the use of</em>
<em>ordinary functions as HTTP handlers</em>.</p>
</blockquote>
<p>We allowed ourselves to write an &quot;ordinary function&quot; as a handler even though we technically needed to satisfy interface <code>http.Handler</code>. Instead of creating a struct (or whatever) and adding a method <code>ServeHTTP()</code> on it, we can just pass a function to <code>mux.HandleFunc</code>.</p>
<p><strong>Being asked to pass a regular function is just syntactic sugar</strong>, and also confusing. It hides an important implementation detail of Golang's <code>http</code> module - handlers!</p>
<p>(Maybe the core team was as sick of Handlers as we are, and hid them from us.)</p>
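<p>To make that sugar concrete, here's a minimal, self-contained sketch of my own (it leans on the stdlib's <code>httptest</code> package, which we haven't used in this series, to exercise each mux without starting a real server). It registers the same handler function both ways and shows they behave identically:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net/http&quot;
    &quot;net/http/httptest&quot;
)

func main() {
    handler := func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, &quot;HELLO&quot;)
    }

    // The sugared version: pass the bare function.
    sweet := http.NewServeMux()
    sweet.HandleFunc(&quot;/&quot;, handler)

    // The de-sugared version: do the conversion ourselves.
    plain := http.NewServeMux()
    plain.Handle(&quot;/&quot;, http.HandlerFunc(handler))

    // Both muxes route the same request to the same code.
    for _, mux := range []*http.ServeMux{sweet, plain} {
        rec := httptest.NewRecorder()
        mux.ServeHTTP(rec, httptest.NewRequest(&quot;GET&quot;, &quot;/&quot;, nil))
        fmt.Println(rec.Body.String())
    }
}
</code></pre>
<p>Both muxes print <code>HELLO</code> - <code>HandleFunc</code> is literally <code>Handle</code> plus the <code>HandlerFunc</code> conversion.</p>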
<h2>What's the Point?</h2>
<p>In the first post, I mentioned asking <a href="https://caddy.community/t/trying-to-find-where-tcp-requests-turn-into-http-requests-in-code/17308">Caddy's community forum a question</a>. That question came from trying to figure out where the hell <code>ServeHTTP</code> came from. I couldn't find the complete code path.</p>
<p>The whole point of me writing this is basically my excitement in finally figuring it out.</p>
<p>The <em>really</em> nifty surprise was seeing how (ab)used the <code>http.Handler</code> type is! <strong>It's everywhere!</strong></p>
<blockquote>
<p>Sidenote - the thing we did above might actually be useful for you to explicitly use. <a href="https://go.dev/blog/error-handling-and-go#simplifying-repetitive-error-handling">Here's an example</a> of how this pattern might help with error handling in your own http applications.</p>
</blockquote>
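<p>Here's a quick sketch of that error-handling idea, loosely following the pattern from the linked Go blog post (the <code>appHandler</code> name comes from there; the <code>httptest</code> bits are mine, just to exercise it without a real server). We define our own func type whose handlers may return an <code>error</code>, then handle the error in one place inside <code>ServeHTTP</code>:</p>
<pre><code class="language-go">package main

import (
    &quot;errors&quot;
    &quot;fmt&quot;
    &quot;net/http&quot;
    &quot;net/http/httptest&quot;
)

// appHandler is a func type whose handlers can return an error.
type appHandler func(http.ResponseWriter, *http.Request) error

// ServeHTTP satisfies http.Handler and centralizes error handling.
func (fn appHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    if err := fn(w, r); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}

func main() {
    mux := http.NewServeMux()
    mux.Handle(&quot;/boom&quot;, appHandler(func(w http.ResponseWriter, r *http.Request) error {
        // No response-writing or status-code noise here;
        // we just return the error.
        return errors.New(&quot;something broke&quot;)
    }))

    rec := httptest.NewRecorder()
    mux.ServeHTTP(rec, httptest.NewRequest(&quot;GET&quot;, &quot;/boom&quot;, nil))
    fmt.Println(rec.Code)
}
</code></pre>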
<p>Not only are Handlers all over the place, they often <strong>call each other in a chain</strong>. It's almost like a design pattern! Yes, that's a hint!</p>
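<p>As a taste of that chaining, here's a minimal middleware sketch of my own (not from the stdlib): a function that takes an <code>http.Handler</code> and returns a new <code>http.Handler</code> wrapping it. The <code>httptest</code> recorder lets us run it without a server:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net/http&quot;
    &quot;net/http/httptest&quot;
)

// logware wraps any http.Handler in another http.Handler --
// the handler-calling-handler chain the text hints at.
func logware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Println(&quot;before:&quot;, r.URL.Path)
        next.ServeHTTP(w, r)
        fmt.Println(&quot;after:&quot;, r.URL.Path)
    })
}

func main() {
    inner := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, &quot;HELLO&quot;)
    })

    // Chain: logware's handler runs, then calls inner's.
    chain := logware(inner)

    rec := httptest.NewRecorder()
    chain.ServeHTTP(rec, httptest.NewRequest(&quot;GET&quot;, &quot;/&quot;, nil))
    fmt.Println(rec.Body.String())
}
</code></pre>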
<p><strong>We'll next see how we can use this knowledge in some cool ways.</strong></p>
]]></content:encoded>
      <pubDate>Wed, 12 Oct 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>I. How HTTP Servers are Put Together in Go</title>
      <link>https://fideloper.com/how-golang-http-servers-work</link>
      <description>Crack open a basic Golang HTTP server and see how it works.</description>
      <content:encoded><![CDATA[<p>I spent a week staring at the <a href="https://github.com/caddyserver/caddy">Caddy</a> and <a href="https://github.com/traefik/traefik">Traefik</a> code bases trying to understand how they were put together. While certainly a bunch of things started making sense (who spends a week staring at a code base?!), there were specific questions I couldn't find answers to.</p>
<p>Eventually I got to an understanding where I could ask a coherent question on <a href="https://caddy.community/t/trying-to-find-where-tcp-requests-turn-into-http-requests-in-code/17308">Caddy's community forum</a> about Caddy.</p>
<p>Matt (creator of Caddy) was generous enough to give me an extremely solid answer, pointing out certain parts of the stdlib <code>http</code> module that did what I was asking about. It dawned on me that my grasp on <strong>how Golang handles HTTP</strong> was more tenuous than I thought.</p>
<p>So I set about seeing what I could understand! And I wrote it down. So, here's some ċ̷͍͔o̷̯͓̓̒n̸̨͍̫͒͛t̷̛̫̞̃̀̓̄ë̴͇̜͈̗͉́͗̽ń̸̞̮͈͎t̴̝͉̹̤̖͌͗.</p>
<h2>The simplest lil' HTTP server</h2>
<p>Golang handles HTTP natively. There's a lot that goes into that - http/1, http/2, h2c, TLS, websockets, trailer headers, upgrading connections - the list goes on!</p>
<p>But we're not there yet. Let's just make a very simple web server:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net/http&quot;
)

func main() {
    http.HandleFunc(&quot;/&quot;, func(writer http.ResponseWriter, r *http.Request) {
        // Send a 200 response. This is technically
        // superfluous as it implicitly sends a 200
        // when we write any response to the writer
        writer.WriteHeader(200)

        // Here we write &quot;HELLO&quot; to the ResponseWriter
        fmt.Fprint(writer, &quot;HELLO&quot;)
    })

    http.ListenAndServe(&quot;:80&quot;, nil)
}
</code></pre>
<p>We used the function <code>http.HandleFunc</code> to pass a URI to match, and a &quot;handler function&quot;. The handler function takes an <code>http.ResponseWriter</code>, which you write response data to, and an <code>*http.Request</code> object containing the request data.</p>
<blockquote>
<p>By default you need to handle GET vs POST (etc) yourself by reading <code>r.Method</code>. Fancy libraries such as <a href="https://github.com/gorilla/mux">gorilla/mux</a> help you there.</p>
</blockquote>
<p>The <code>writer.WriteHeader</code> method just takes an HTTP status code. If we omitted it, a 200 response status would have been sent when we wrote anything to the <code>writer</code>, like our &quot;HELLO&quot; string.</p>
<p>The <code>WriteHeader()</code> method is just sending the HTTP response header, e.g. <code>HTTP/1.1 200</code>.</p>
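<p>A quick way to convince yourself of the implicit 200 (this uses the stdlib's <code>httptest</code> package to record the response, which isn't part of the original example): write a handler that never calls <code>WriteHeader</code> and check the recorded status:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net/http&quot;
    &quot;net/http/httptest&quot;
)

func main() {
    // No WriteHeader() call anywhere in this handler.
    h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, &quot;HELLO&quot;)
    })

    rec := httptest.NewRecorder()
    h.ServeHTTP(rec, httptest.NewRequest(&quot;GET&quot;, &quot;/&quot;, nil))

    // The first write to the ResponseWriter implied a 200 status.
    fmt.Println(rec.Code, rec.Body.String())
}
</code></pre>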
<p><strong>Here's something more interesting:</strong> The function passed to <code>HandleFunc</code> actually handles <em>all</em> requests. What we're passing in for a URI is technically a &quot;URI pattern&quot;. Patterns ending in a trailing slash <code>/</code> are &quot;rooted subtrees&quot;, which is <a href="https://pkg.go.dev/net/http#ServeMux">Golang's insufferable way</a> of saying that it'll match the given URI and anything after it.</p>
<p>The URI pattern <code>&quot;/&quot;</code> will handle any URI not otherwise matched. We don't have any other URI patterns defined, so it's a catch-all route!</p>
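<p>Here's a small sketch demonstrating the rooted-subtree behavior (again using <code>httptest</code>, an addition of mine, so we don't need a running server). An exact pattern like <code>/about</code> matches only that path, while <code>/</code> swallows everything else:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net/http&quot;
    &quot;net/http/httptest&quot;
)

func main() {
    mux := http.NewServeMux()

    // &quot;/about&quot; (no trailing slash) matches that exact path.
    mux.HandleFunc(&quot;/about&quot;, func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, &quot;about&quot;)
    })

    // &quot;/&quot; is a rooted subtree: it matches everything else.
    mux.HandleFunc(&quot;/&quot;, func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, &quot;catch-all&quot;)
    })

    for _, path := range []string{&quot;/about&quot;, &quot;/whatever&quot;, &quot;/&quot;} {
        rec := httptest.NewRecorder()
        mux.ServeHTTP(rec, httptest.NewRequest(&quot;GET&quot;, path, nil))
        fmt.Println(path, &quot;-&gt;&quot;, rec.Body.String())
    }
}
</code></pre>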
<h2>But why does this work?</h2>
<p>I set a catch-all handler, and then just told <code>http</code> to <code>ListenAndServe</code> on some port.</p>
<p>But those are two top-level functions defined in the stdlib <code>http</code> module. There's no obvious connection between those two actions! Shouldn't I have had to pass that handler to the server somehow?</p>
<pre><code class="language-go">// These two top-level functions from http
// don't appear to be &quot;connected&quot; in any
// way at first.
http.HandleFunc(&quot;/&quot;, func(writer http.ResponseWriter, r *http.Request) { 
    // snip
});

// In fact, we pass nil to the param
// that normally would take a handler
http.ListenAndServe(&quot;:80&quot;, nil)
</code></pre>
<p>It turns out that the <code>http</code> stdlib module has default objects and uses them if we don't define those objects ourselves.</p>
<ol>
<li>The <code>http.HandleFunc()</code> function adds a handler to the <code>http.DefaultServeMux</code> object, which is conveniently predefined</li>
<li>Deep in <code>http/server.go</code>, a <code>serverHandler{}</code> struct is created. It has a <code>ServeHTTP</code> method on it. This method checks if a handler (the mux) is defined on the <code>Server</code> object. If not, it uses <code>http.DefaultServeMux</code></li>
</ol>
<p><strong>So the simple lil' HTTP server works because of syntactic sugar.</strong> There are default objects the <code>http</code> module uses unless you explicitly define them.</p>
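<p>We can even prove the sugar to ourselves with a short sketch (the <code>httptest</code> usage is my addition): register a route via the package-level <code>http.HandleFunc</code>, then serve a request through <code>http.DefaultServeMux</code> directly. The route is there:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net/http&quot;
    &quot;net/http/httptest&quot;
)

func main() {
    // Register via the package-level function...
    http.HandleFunc(&quot;/&quot;, func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, &quot;HELLO&quot;)
    })

    // ...and serve a request through http.DefaultServeMux
    // directly: the route is registered there, showing the
    // package-level functions are sugar over this default mux.
    rec := httptest.NewRecorder()
    http.DefaultServeMux.ServeHTTP(rec, httptest.NewRequest(&quot;GET&quot;, &quot;/&quot;, nil))
    fmt.Println(rec.Body.String())
}
</code></pre>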
<h2>Explicitly defining things</h2>
<p>Let's take this exact same setup but make it more complicated. For science!</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net&quot;
    &quot;net/http&quot;
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc(&quot;/&quot;, func(writer http.ResponseWriter, r *http.Request) {
        writer.WriteHeader(200)
        fmt.Fprint(writer, &quot;HELLO&quot;)
    })

    srv := &amp;http.Server{
        Handler: mux,
    }

    ln, err := net.Listen(&quot;tcp&quot;, &quot;:80&quot;)
    if err != nil {
        panic(err)
    }

    srv.Serve(ln)
}

</code></pre>
<p>We have more going on here, but with the exact same result. We took the syntactic sugar and made it less sweet.</p>
<p>Previously, the <code>http</code> module added the route and handler function to <code>http.DefaultServeMux</code>.</p>
<p>Here we create a Mux ourselves, and then register our route against it. The <code>HandleFunc</code> method is exactly the same, but one is &quot;global&quot; to the <code>http</code> module and one is on <code>ServeMux</code> objects.</p>
<pre><code class="language-go">// this:
http.HandleFunc(&quot;/&quot;, func(writer http.ResponseWriter, r *http.Request) {
    // snip
})

// versus this:
mux := http.NewServeMux()
mux.HandleFunc(&quot;/&quot;, func(writer http.ResponseWriter, r *http.Request) {
    // snip
})
</code></pre>
<blockquote>
<p>A Mux is a <a href="https://pkg.go.dev/net/http#ServeMux">&quot;HTTP request multiplexor&quot;</a> and is responsible for matching incoming requests against a list of registered routes. It sends requests to the correct handler.</p>
</blockquote>
<p>After that, we create an instance of <code>http.Server</code>, passing it the Mux as its <code>Handler</code>.</p>
<pre><code class="language-go">srv := &amp;http.Server{
    Handler: mux,
}
</code></pre>
<p><strong>That's curious, though.</strong></p>
<p>If the <code>ServeMux</code> is a Mux, and we pass handler functions to that Mux, why is the <code>ServeMux</code> object referred to as a <code>Handler</code> within the <code>http.Server</code> object?</p>
<p>Interestingly, <code>ServeMux</code> is actually an <code>http.Handler</code>, meaning it &quot;satisfies&quot; interface <code>http.Handler</code> - it has a <code>ServeHTTP()</code> method on it:</p>
<pre><code class="language-go">// ServeMux has a method ServeHTTP()!
type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}
</code></pre>
<p>So <code>http.Server</code> just wants any <code>http.Handler</code>. It doesn't actually need to be a <code>ServeMux</code>, but we usually use one to route specific requests to the code of our choice.</p>
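<p>To drive that home, here's a tiny sketch of my own: a bare struct with no routing logic at all that satisfies <code>http.Handler</code>, handed straight to <code>http.Server</code>. The <code>httptest</code> recorder (an addition of mine) lets us poke it without listening on a port:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;net/http&quot;
    &quot;net/http/httptest&quot;
)

// notAMux is any old struct -- no routing at all --
// but it satisfies http.Handler via ServeHTTP.
type notAMux struct{}

func (n notAMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, &quot;every request gets this&quot;)
}

func main() {
    // http.Server happily accepts it; it never required a ServeMux.
    srv := &amp;http.Server{Handler: notAMux{}}

    rec := httptest.NewRecorder()
    srv.Handler.ServeHTTP(rec, httptest.NewRequest(&quot;GET&quot;, &quot;/anything&quot;, nil))
    fmt.Println(rec.Body.String())
}
</code></pre>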
<p><strong>Here's another tidbit:</strong> <code>ServeHTTP(http.ResponseWriter, *http.Request)</code> has an equivalent signature to the handler function we passed to <code>mux.HandleFunc()</code>. Suspicious! Let's table that for a hot second, but keep it in mind.</p>
<pre><code class="language-go">// Our handler:
func(writer http.ResponseWriter, r *http.Request) {
    // snip
}

// Is basically an unnamed `ServeHTTP` method:
type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}
</code></pre>
<blockquote>
<p>The <code>ServeHTTP()</code> method will come up a lot; the <code>http</code> module leans on the <code>http.Handler</code> interface pretty hard.</p>
</blockquote>
<p>Finally, we create a network listener and pass it to the server. The server takes the listener and <code>Accept()</code>s new connections on the defined network socket (port 80 on all interfaces, in this case).</p>
<p>So, all of this work boils down to a less-sweet way of doing exactly what our most basic web server did.</p>
<p>Along the way, we learned about the <code>ServeMux</code>, creating <code>http.Server</code> instances, and noticing that the mux is actually a Handler.</p>
<p>We also saw that we can create a network listener ourselves and pass it to our server.</p>
<p>I've hinted that Handlers are sort of interesting. <strong>Let's get into that next.</strong></p>
]]></content:encoded>
      <pubDate>Tue, 11 Oct 2022 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Mocking Stripe</title>
      <link>https://fideloper.com/mocking-stripe</link>
      <description>Mocking the Stripe API in your Laravel tests.</description>
      <content:encoded><![CDATA[<p>I recently was searching for a way to mock Stripe API calls with Laravel Cashier in my test suite.</p>
<p>I came across this issue, which had a <a href="https://github.com/stripe/stripe-php/issues/822">great idea</a>.</p>
<p>In this scenario, we replace the HTTP Client used by the Stripe PHP SDK so we don't make any HTTP requests. Then we do some work to return some pre-created responses (fixtures, I suppose we'll call those).</p>
<p>To create the fixtures, I went to the <a href="https://stripe.com/docs/api">Stripe API docs</a> and copied/pasted the JSON responses expected for any particular calls.</p>
<p>In my case, there were three calls to the Stripe API:</p>
<ol>
<li>Creating a customer</li>
<li>Retrieving a customer</li>
<li>Creating a Stripe Checkout session</li>
</ol>
<h2>The Test</h2>
<p>Here's what one of the tests looked like. Obviously this could be abstracted some more, but this served my purposes just fine.</p>
<p>In my case, I'm testing creating a <a href="https://stripe.com/docs/api/checkout/sessions">Stripe Session</a>, which is a required step before redirecting a user to a hosted Stripe Checkout page.</p>
<p>Here is file <code>tests/Feature/CreateStripeSessionTest.php</code>:</p>
<pre><code class="language-php">&lt;?php

namespace Tests\Feature;

use App\Models\User;

use Stripe\ApiRequestor;
use Stripe\HttpClient\ClientInterface;

use Laravel\Jetstream\Jetstream;

use Tests\TestCase;
use Illuminate\Support\Str;
use Illuminate\Foundation\Testing\WithFaker;
use Illuminate\Foundation\Testing\RefreshDatabase;

class CreateStripeSessionTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function creates_stripe_session_and_redirects()
    {
        // Create the mock HTTP client used by Stripe
        $mockClient = new MockClient;
        ApiRequestor::setHttpClient($mockClient);

        $user = User::factory()-&gt;create([
            'stripe_id' =&gt; 'cus_'.Str::random(),
        ]);

        $response = $this-&gt;actingAs($user)-&gt;post(route('create-session'), [
            'intent' =&gt; 'some-meta-data-required',
        ]);

        $response-&gt;assertRedirect($mockClient-&gt;url);
    }
}

// Mock the Stripe API HTTP Client

// Optionally extend Stripe\HttpClient\CurlClient
class MockClient implements ClientInterface
{
    public $rbody = '{}';
    public $rcode = 200;
    public $rheaders = [];
    public $url;

    public function __construct() {
        $this-&gt;url = &quot;https://checkout.stripe.com/pay/cs_test_&quot;.Str::random(32);
    }

    public function request($method, $absUrl, $headers, $params, $hasFile)
    {
        // Handle Laravel Cashier creating/getting a customer
        if ($method == &quot;get&quot; &amp;&amp; strpos($absUrl, &quot;https://api.stripe.com/v1/customers/&quot;) === 0) {
            $this-&gt;rbody = $this-&gt;getCustomer(str_replace(&quot;https://api.stripe.com/v1/customers/&quot;, &quot;&quot;, $absUrl));
            return [$this-&gt;rbody, $this-&gt;rcode, $this-&gt;rheaders];
        }

        if ($method == &quot;post&quot; &amp;&amp; $absUrl == &quot;https://api.stripe.com/v1/customers&quot;) {
            $this-&gt;rbody = $this-&gt;getCustomer(&quot;cus_&quot;.Str::random(14));
            return [$this-&gt;rbody, $this-&gt;rcode, $this-&gt;rheaders];
        }

        // Handle creating a Stripe Checkout session
        if ($method == &quot;post&quot; &amp;&amp; $absUrl == &quot;https://api.stripe.com/v1/checkout/sessions&quot;) {
            $this-&gt;rbody = $this-&gt;getSession($this-&gt;url);
            return [$this-&gt;rbody, $this-&gt;rcode, $this-&gt;rheaders];
        }        

        return [$this-&gt;rbody, $this-&gt;rcode, $this-&gt;rheaders];
    }

    protected function getCustomer($id) {
        return &lt;&lt;&lt;JSON
{
  &quot;id&quot;: &quot;$id&quot;,
  &quot;object&quot;: &quot;customer&quot;,
  &quot;address&quot;: null,
  &quot;balance&quot;: 0,
  &quot;created&quot;: 1626897363,
  &quot;currency&quot;: &quot;usd&quot;,
  &quot;default_source&quot;: null,
  &quot;delinquent&quot;: false,
  &quot;description&quot;: null,
  &quot;discount&quot;: null,
  &quot;email&quot;: null,
  &quot;invoice_prefix&quot;: &quot;61F72E0&quot;,
  &quot;invoice_settings&quot;: {
    &quot;custom_fields&quot;: null,
    &quot;default_payment_method&quot;: null,
    &quot;footer&quot;: null
  },
  &quot;livemode&quot;: false,
  &quot;metadata&quot;: {},
  &quot;name&quot;: null,
  &quot;next_invoice_sequence&quot;: 1,
  &quot;phone&quot;: null,
  &quot;preferred_locales&quot;: [],
  &quot;shipping&quot;: null,
  &quot;tax_exempt&quot;: &quot;none&quot;
}
JSON;

    }

    protected function getSession($url)
    {
        return &lt;&lt;&lt;JSON
{
  &quot;id&quot;: &quot;cs_test_V9Gq09dEmaJ2p3tydHonjbPSr3eq3mfOn52UBVbppDLVEFQfOji1uZok&quot;,
  &quot;object&quot;: &quot;checkout.session&quot;,
  &quot;allow_promotion_codes&quot;: null,
  &quot;amount_subtotal&quot;: null,
  &quot;amount_total&quot;: null,
  &quot;automatic_tax&quot;: {
    &quot;enabled&quot;: false,
    &quot;status&quot;: null
  },
  &quot;billing_address_collection&quot;: null,
  &quot;cancel_url&quot;: &quot;https://example.com/cancel&quot;,
  &quot;client_reference_id&quot;: null,
  &quot;currency&quot;: null,
  &quot;customer&quot;: null,
  &quot;customer_details&quot;: null,
  &quot;customer_email&quot;: null,
  &quot;livemode&quot;: false,
  &quot;locale&quot;: null,
  &quot;metadata&quot;: {},
  &quot;mode&quot;: &quot;subscription&quot;,
  &quot;payment_intent&quot;: &quot;pi_1DoyrW2eZvKYlo2CHqEodB86&quot;,
  &quot;payment_method_options&quot;: {},
  &quot;payment_method_types&quot;: [
    &quot;card&quot;
  ],
  &quot;payment_status&quot;: &quot;unpaid&quot;,
  &quot;setup_intent&quot;: null,
  &quot;shipping&quot;: null,
  &quot;shipping_address_collection&quot;: null,
  &quot;submit_type&quot;: null,
  &quot;subscription&quot;: null,
  &quot;success_url&quot;: &quot;https://example.com/success&quot;,
  &quot;total_details&quot;: null,
  &quot;url&quot;: &quot;$url&quot;
}
JSON;

    }
}

</code></pre>
<h2>Some Details</h2>
<p>There are a few things to note.</p>
<p>First, we tell Stripe to use our mock HTTP client (luckily it's set globally):</p>
<pre><code class="language-php">$mockClient = new MockClient;
ApiRequestor::setHttpClient($mockClient);
</code></pre>
<p>The <code>MockClient</code> class implements the <code>request()</code> method as required by the interface. Here we just &quot;sniff&quot; out the various requests sent to the API and send some fake responses.</p>
<p>To figure out the calls made for the test, I just dumped the method parameters (via Laravel's <code>dd()</code>):</p>
<pre><code class="language-php">public function request($method, $absUrl, $headers, $params, $hasFile)
{
    // Figure out what API calls Laravel Cashier is making
    // for a given test
    dd($method, $absUrl, $headers, $params, $hasFile);
}
</code></pre>
<p>Then I made mock/fake/fixture/whatever responses for those calls based on the <a href="https://stripe.com/docs/api">API reference</a> (literally just copying/pasting the JSON).</p>
<blockquote>
<p>Under the hood, Stripe takes the <code>object</code> parameter in the JSON response from their API and maps it to a PHP class. So returned JSON <code>&quot;object&quot;: &quot;checkout.session&quot;</code> becomes an instance of <code>Stripe\Checkout\Session</code>.</p>
</blockquote>
<p>The redirect my specific test asserts against is the URL returned when creating a Checkout Session. This sends the user off to a hosted Stripe Checkout page.</p>
<h2>The Official Way</h2>
<p>There's a more Technically Correct™ way to do this, although I discovered it after I had already set up the above and didn't change it.</p>
<p>Stripe has a Golang based project named <a href="https://github.com/stripe/stripe-mock">Stripe Mock</a> that runs a test version of the Stripe API. This project aims to return &quot;approximately correct API response for any endpoint&quot;.</p>
<p>To use this, you can download a binary (or run it in Docker), and then have the Stripe client object use that for its API endpoint.</p>
<p>Here's <a href="https://gilbitron.me/blog/using-stripe-mock-with-laravel">an article on using Stripe Mock, and even how to set it up in GitHub Actions</a>.</p>
<p>Locally I did something like this to play with it:</p>
<pre><code># In one terminal window
docker run --rm -it -p 12111-12112:12111-12112 \
    stripemock/stripe-mock:latest

# In another terminal window
curl -i -X POST http://localhost:12111/v1/customers \
    -H &quot;Authorization: Bearer sk_test_123&quot; \
    -d 'name=Chris Fidao' -d 'email=foo@example.com'

HTTP/1.1 200 OK
Request-Id: req_123
Stripe-Mock-Version: 0.109.0
Date: Thu, 22 Jul 2021 11:44:24 GMT
Content-Length: 589
Content-Type: text/plain; charset=utf-8

{
  &quot;address&quot;: null,
  &quot;balance&quot;: 0,
  &quot;created&quot;: 1234567890,
  &quot;currency&quot;: &quot;usd&quot;,
  &quot;default_source&quot;: null,
  &quot;delinquent&quot;: false,
  &quot;description&quot;: null,
  &quot;discount&quot;: null,
  &quot;email&quot;: &quot;foo@example.com&quot;,
  &quot;id&quot;: &quot;cus_H42rveoStCxpP4E&quot;,
  &quot;invoice_prefix&quot;: &quot;40BEC7C&quot;,
  &quot;invoice_settings&quot;: {
    &quot;custom_fields&quot;: null,
    &quot;default_payment_method&quot;: null,
    &quot;footer&quot;: null
  },
  &quot;livemode&quot;: false,
  &quot;metadata&quot;: {},
  &quot;name&quot;: &quot;Chris Fidao&quot;,
  &quot;next_invoice_sequence&quot;: 1,
  &quot;object&quot;: &quot;customer&quot;,
  &quot;phone&quot;: null,
  &quot;preferred_locales&quot;: [],
  &quot;shipping&quot;: null,
  &quot;tax_exempt&quot;: &quot;none&quot;
}
</code></pre>
<p>Pretty neat! I would use this method for testing if I had a lot of tests that hit the Stripe API.</p>
]]></content:encoded>
      <pubDate>Thu, 22 Jul 2021 12:56:13 +0000</pubDate>
    </item>
    <item>
      <title>2018</title>
      <link>https://fideloper.com/2018</link>
      <description>My 2018 year in review.</description>
      <content:encoded><![CDATA[<p>Here's my 2018 in review!</p>
<h2>Children, Time, Advice</h2>
<p>On January 1, 2018, my son was just 3 months and 6 days old.</p>
<p>One thing you learn when you have a kid is that a LOT of the business and life advice we hear is not geared towards parents. There were so many times when I muttered &quot;yeah, try that with a baby&quot; to a podcast.</p>
<p>Finding good advice through the lens of parenthood is rare. This is probably because many of the people we listen to don't have kids of their own, or they divorce that element from their advice - either way, advice that takes parenthood into account is hard to come by.</p>
<p>There's a larger point here that I'm slowly internalizing - everyone giving business and life advice has a context through which they are giving it. This means that what works for some may not work for you. This idea (although not related to children) is covered a bit in <a href="https://artofproductpodcast.com/episode-69">this excellent Art of Product episode</a>.</p>
<p>In any case, this whole section is an excuse to get to this sentence: young kids are <em>fucking hard</em> to have around, and finding time to work on my business outside of work hours (I have a job!) was a major source of stress this year. Working from home exacerbates this, but I won't give up being home during my kid's most important years.</p>
<p>Andrey Butov does a good job explaining the struggle of work and kids in <a href="https://www.hasopinions.wtf/e7cbd7e6">episode 10 of our (newish) podcast</a>.</p>
<h2>What Happened in 2018</h2>
<p>Here's what I can remember doing in 2018!</p>
<h3>Released Scaling Laravel</h3>
<p>Most of the work for <a href="https://courses.serversforhackers.com/scaling-laravel">Scaling Laravel</a> was done in 2017, and then delayed a few months after my son was born. I released the course in early 2018.</p>
<p>This was my big &quot;thing&quot; for the year, and the main driver of revenue. The next largest driver was <a href="https://courses.serversforhackers.com/shipping-docker">Shipping Docker</a>, which was released in 2017.</p>
<p>The sales cliff for courses is steep - sales drop off almost immediately after an initial launch and stay low, so the majority of revenue for a course is made up-front. Revenue from Shipping Docker was therefore low relative to Scaling Laravel.</p>
<h3>Docker in Development</h3>
<p>Docker is a technology that is still moving and changing rather quickly. My course <a href="https://shippingdocker.com">Shipping Docker</a> is about 2 years old as I write this.</p>
<p>This year I refreshed a free part of the course: <a href="https://serversforhackers.com/t/containers">Docker in Development</a>. This was one of the first things I made for the original course - it was in need of a refresh since I learned and refined quite a bit afterwards!</p>
<h3>Vessel</h3>
<p>If you're interested in seeing my current dev workflow using Docker, check out both Part I and Part II of the refreshed Docker in Development course linked above.</p>
<p>For Laravel specifically, I created <a href="https://vessel.shippingdocker.com">Vessel</a>, which essentially is this same development workflow, but specifically for Laravel (and improved upon).</p>
<p>Vessel was released in late 2017, and I've continued to work on it in 2018. The goals for this project were:</p>
<ol>
<li>Simplicity</li>
<li>Great documentation</li>
</ol>
<p><strong>I spent more time on the docs than the initial project.</strong> Good documentation is a lot of work, and I wanted to be proud of the docs I made. I really enjoyed the process of crafting them.</p>
<p>A lesson here: If you want your project to gain traction, the bar is fairly high. One way your project can stand out is through great documentation and a developer-friendly API.</p>
<p>From a business &amp; marketing point of view, this can really help your project get traction and, if it's open source, attract great pull requests. Finding a project with light or bad documentation is super common. However, great documentation and ease-of-use is quickly becoming table stakes.</p>
<h3>Tweet Tips</h3>
<p>Many of you reading this have probably seen <a href="https://twitter.com/i/moments/994601867987619840">Steve Schoger's tweets</a> promoting what eventually became Refactoring UI. Growing a Twitter audience is a great way to generate interest in your projects, and Steve really succeeded there.</p>
<p>Other than being &quot;famous&quot;, one way of having an interesting twitter account is by giving as much useful information as possible. <a href="https://twitter.com/steveschoger">Steve Schoger</a>, <a href="https://twitter.com/adamwathan">Adam Wathan</a>, and <a href="https://twitter.com/wesbos">Wes Bos</a> have really perfected this.</p>
<p>If that's something you're interested in (it's certainly not the be-all-end-all of business), combining a tweet with an image and maybe a link off to an article is a great way to help grow your audience. I make heavy use of <a href="https://carbon.now.sh">Carbon</a> to help make my own tweet tips.</p>
<p>I've done some of this myself in 2018 - Here's a few examples:</p>
<ol>
<li>Collection of <a href="https://twitter.com/i/moments/1004697240013889536">Nginx tips</a></li>
<li>MySQL <a href="https://twitter.com/fideloper/status/1074694950007267338">INFORMATION SCHEMA tip</a></li>
<li>Linux <a href="https://twitter.com/fideloper/status/1060521521674944512">human-readable disk usage tip</a></li>
<li>MySQL <a href="https://twitter.com/fideloper/status/1052948943133392896">mysqldump tip</a></li>
</ol>
<p>These are a great way to generate interest in courses - I've made a bunch of MySQL tweet tips lately to help with my upcoming <a href="https://mysqlbackups.tv">MySQL Backups</a> course.</p>
<h3>MySQL Backups</h3>
<p>My next course is going to be all about <a href="https://mysqlbackups.tv">MySQL Backups</a>. This course is in progress right now and is about 75% complete as I write this.</p>
<p>My favorite part about creating this particular course is that I found someone to help me edit my videos. This saves me an incredible amount of time and stress.</p>
<p><strong>Hiring people to help with time-intensive tasks is going to be a theme for me in 2019.</strong></p>
<p>This course is NOT going to be one of my better revenue-generating courses. I can tell by how many people have NOT signed up to the email list relative to past courses.</p>
<p>That's disappointing, but totally OK. I like the content of this course a lot - this is a woefully neglected topic. Stack Overflow is often leading people astray.</p>
<p>Thanks to having help editing my videos, and already <a href="https://courses.serversforhackers.com">having infrastructure</a> around selling and viewing the course, I'm not squandering too much opportunity cost in creating a course whose sales won't match my previous &quot;larger&quot; courses.</p>
<h3>Backops.app</h3>
<p>I've started building <a href="https://backops.app">an application</a> to help people automate their MySQL backups (and other stuff, eventually).</p>
<p>Why make an application around a course that's not my most popular? My thinking is that course popularity isn't necessarily an indication of who will pay a monthly subscription for a service. More importantly, I feel like I have some competitive advantages:</p>
<ol>
<li>I have an audience in the devops/server space</li>
<li>I can integrate tightly with providers such as Forge (and others!) to make the experience really great</li>
<li>I know there is a market for it (and its rough size) thanks to apps like Ottomatik - which, over the last year or so, was sold to someone non-technical and outside of the tech communities I (we) hang out in</li>
<li>There are shortcomings in similar apps and open source tools which I feel like I can handle better</li>
</ol>
<p>So, we'll see where that leads! The <strong>Long Slow Saas Ramp of Death</strong> is a real thing, so if I spend most of 2019 on this, I know my business revenue will drop like a rock. That might just be the risk I take this year. We'll see.</p>
<h3>Has Opinions Podcast</h3>
<p>My friend <a href="https://twitter.com/growdev">Dan</a> and I started a &quot;for-fun&quot; podcast whose name, in typical techie fashion, is based purely on a domain I had lying around - <a href="https://hasopinions.wtf">hasopinions.wtf</a>.</p>
<p>I love podcasts where the hosts shoot-the-shit and sprinkle in some interesting things about their business. That's the podcast I hope to make. It'll take time and practice to really find my voice and a focus for the pod.</p>
<p>I don't have grand plans for this in terms of business. I want to keep it fun. I don't have the time nor the inclination to make another thing feel stressful.</p>
<p>Not every aspect of your life needs to be monetized!</p>
<h3>Youtube</h3>
<p>I started moving videos from <a href="https://serversforhackers.com">Servers for Hackers</a> onto a <a href="https://www.youtube.com/serversforhackers">Youtube channel</a> to see if they'd gain any traction.</p>
<p>Based on subscriptions and comments, I think this was a good move overall! I'll likely continue to add <a href="https://serversforhackers.com/">Servers for Hackers</a> content onto Youtube.</p>
<p>This is another thing I'll be looking to hire out, as it's a bit of a tedious process. However, it's one that can be systematized through documentation or programming.</p>
<h2>Business</h2>
<p>Over the last 3-4 years, I've released about one large course a year.</p>
<p>Revenue has grown the most within the last 3 years. However, in 2018, it stayed steady with 2017 (it was just a tad lower than 2017).</p>
<p>What's that mean? I'm not sure! In one sense, it's a win, since it's so hard to find time to get any work done between the day job and having a kid. On another hand, there are people under similar constraints but making many multiples of anything I've ever made. That doesn't really mean anything, but as Rob Walling pointed out, entrepreneurs have a knack for turning any positive thing into a negative! I must be a <em>great</em> entrepreneur...</p>
<h3>Business Plans</h3>
<p>I do, of course, want to grow my revenue!</p>
<p>Continuing to grow my audience feels like the right way to grow that revenue number. This means producing more content - which is good, because it's something I like to do!</p>
<p>Finding time to put out a lot of free stuff, and make at least one more course in 2019 (after MySQL Backups) is a large part of my 2019 plans.</p>
<p>The <a href="https://backops.app">backops.app</a> app may interfere with that - we'll see!</p>
<h3>How much did I make?</h3>
<p>My side business has made more than my salary in 2017 and 2018.</p>
<p>Why don't I quit?</p>
<p>I would make way less money! Although I'm aware that this doesn't account for opportunity cost and the potential to make more if I capitalize on the &quot;free time&quot; found by not being employed.</p>
<p>In the last 3 years, my business has made &quot;significant&quot; money relative to my salary (the last 2 years made more than my salary, and the year before that, I made about 50% of my salary).</p>
<p>Because of this, my family is out of debt from student loans and vehicles, I was able to put 20% down on my mortgage (fuck off, <a href="https://www.consumerfinance.gov/ask-cfpb/what-is-private-mortgage-insurance-en-122/">PMI</a>), I'm able to save for retirement, have an emergency fund, and basically follow the good advice from <a href="https://www.reddit.com/r/personalfinance/">/r/personalfinance</a>.</p>
<p>In theory I'm in a good position to go out on my own. However, I'm planning on keeping my job for the following reasons:</p>
<ol>
<li>I have a lot of flexibility right now, and any future job will likely have less. I'm very lucky to have found/been found by UserScape.</li>
<li>I know a paycheck is coming, but also am finding time to make additional money. This alleviates a lot of &quot;Founder Stress&quot;.</li>
<li>Health care in the US is a disaster and is super unfriendly to small business. Striking out on your own is a LOT scarier thanks to the rising cost of health care and how bad the quality of the open market appears to be in terms of expense and coverage.</li>
<li>&quot;Real&quot; work gives me an avenue to learn about topics I would never come across otherwise.</li>
</ol>
<h2>2019 Plans</h2>
<p>Here's what I plan on working on in 2019:</p>
<ul>
<li>Finishing the <a href="http://mysqlbackups.tv">MySQL Backups</a> course</li>
<li>Creating a free Nginx course</li>
<li>Creating a free server course that's a bit more start-to-finish-hosting-of-an-application</li>
<li>Creating a paid Ansible course (probably? AWS is a strong contender, but would be a HUGE course)</li>
<li>Working on <a href="https://backops.app">backops.app</a></li>
</ul>
<p>Competing with all of this is time spent in the day job, raising a kid, and potentially working on a SaaS app - which is way more than just coding.</p>
<h2>Numbers to Track</h2>
<p>I haven't tracked any particular metrics over the last few years (I wish I did!). Here's a few things that I'm going to start to track, and hopefully back-fill where I can:</p>
<ol>
<li>Newsletter Subscribers</li>
<li>Twitter Followers</li>
<li>Youtube Subscribers</li>
<li>Podcast Downloads (although the pod is really just for fun)</li>
</ol>
<p>You can see they are geared towards audience growth, and not revenue. Revenue will hopefully follow, but I refuse to set a revenue goal.</p>
<p>Having a business of my own started by finding out I enjoyed helping people by digging through code and learning about servers. I want that focus to remain - in other words, <strong>I want to enjoy my work</strong>.</p>
<h2>Things I Like</h2>
<p>Here's some podcasts and resources I've really enjoyed in 2018.</p>
<ul>
<li><a href="https://justinjackson.ca/">Justin Jackson</a> has been putting out great articles and podcasts about his journey making his SaaS. The journey is hard and he's not afraid to say so.</li>
<li><a href="https://artofproductpodcast.com/">Art of Product</a> podcast - Ben and Derrick's journey to build their apps has been super interesting to follow. They're great guys making interesting products.</li>
<li><a href="http://www.tropicalmba.com/podcasts/">Tropical MBA Podcast</a> - They've recovered from their momentary period of &quot;rich founder problems&quot; (which can be forgiven since they sold their business), and are now in full swing with really interesting episodes around being an entrepreneur.</li>
<li><a href="http://bootstrapped.fm/">Bootstrapped FM</a> - Yes, it's my boss (and Andrey!). But it's the perfect &quot;shoot the shit&quot; podcast. Make more episodes, guys.</li>
</ul>
]]></content:encoded>
      <pubDate>Mon, 31 Dec 2018 01:46:30 +0000</pubDate>
    </item>
    <item>
      <title>Changing the Laravel Log File Name (and playing in the Http/Console Kernels)</title>
      <link>https://fideloper.com/laravel-log-file-name</link>
      <description>I wanted to change the default `laravel.log` file name for an application I'm working on. This is hard-coded into Laravel core, and so I had to get a bit fancy to do it. See how!</description>
      <content:encoded><![CDATA[<style>
#vimeoembed {
  position: relative;
  padding-bottom: 56.25%;
  height: 0;
  overflow: hidden;
  max-width: 100%;
  height: auto;
}

#vimeoembed iframe,
#vimeoembed section object,
#vimeoembed section embed {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}
</style>
<p>Here's a video explaining the process in Laravel 5.3:</p>
<article id="vimeoembed">
<iframe src="https://player.vimeo.com/video/197547791" width="640" height="360" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
</article>
<br />
What I had to do was edit two files:
<ol>
<li><code>app/Http/Kernel.php</code></li>
<li><code>app/Console/Kernel.php</code></li>
</ol>
<p>Within each, I overrode the <code>__construct</code> method from the parent class in order to edit the <code>$bootstrappers</code> array - the array of class names telling Laravel which classes to load and bootstrap for Http vs Console requests.</p>
<p>Each of these loaded the <code>ConfigureLogging</code> bootstrap class. This class (<code>Illuminate\Foundation\Bootstrap\ConfigureLogging</code>) hard-codes the Laravel log file name.</p>
<h3><code>App\Http\Kernel.php</code></h3>
<pre><code class="language-php">&lt;?php

namespace App\Http;

use Illuminate\Routing\Router;
use Illuminate\Contracts\Foundation\Application;
use Illuminate\Foundation\Http\Kernel as HttpKernel;

class Kernel extends HttpKernel {
    // boilerplate removed

    // over-ride parent __construct method
    public function __construct(Application $app, Router $router)
    {
        parent::__construct($app, $router);

        // Replace default logger with HelpSpot Logger
        $loggingKey = array_search('Illuminate\Foundation\Bootstrap\ConfigureLogging', $this-&gt;bootstrappers);
        $this-&gt;bootstrappers[$loggingKey] = 'App\ConfigureLogging';
    }
}
</code></pre>
<h3><code>App\Console\Kernel.php</code></h3>
<pre><code class="language-php">&lt;?php

namespace App\Console;

use Illuminate\Contracts\Events\Dispatcher;
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Contracts\Foundation\Application;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel {
     // boilerplate removed

    // over-ride parent __construct method
    public function __construct(Application $app, Dispatcher $events)
    {
        parent::__construct($app, $events);

        // Replace default logger with HelpSpot Logger
        $loggingKey = array_search('Illuminate\Foundation\Bootstrap\ConfigureLogging', $this-&gt;bootstrappers);
        $this-&gt;bootstrappers[$loggingKey] = 'App\ConfigureLogging';
    }
}
</code></pre>
<h3><code>App\ConfigureLogging</code></h3>
<p>Then we can make our own class that extends the base <code>ConfigureLogging</code> class and tweaks it as needed:</p>
<pre><code class="language-php">&lt;?php

namespace App;

use Illuminate\Log\Writer;
use Illuminate\Contracts\Foundation\Application;
use Illuminate\Foundation\Bootstrap\ConfigureLogging as BaseLoggingBootstrapper;

class ConfigureLogging extends BaseLoggingBootstrapper
{
    /**
     * Configure the Monolog handlers for the application.
     *
     * @param  \Illuminate\Contracts\Foundation\Application  $app
     * @param  \Illuminate\Log\Writer  $log
     * @return void
     */
    protected function configureSingleHandler(Application $app, Writer $log)
    {
        $log-&gt;useFiles(
            $app-&gt;storagePath().'/logs/my-app.log',
            $app-&gt;make('config')-&gt;get('app.log_level', 'debug')
        );
    }

    /**
     * Configure the Monolog handlers for the application.
     *
     * @param  \Illuminate\Contracts\Foundation\Application  $app
     * @param  \Illuminate\Log\Writer  $log
     * @return void
     */
    protected function configureDailyHandler(Application $app, Writer $log)
    {
        $config = $app-&gt;make('config');

        $maxFiles = $config-&gt;get('app.log_max_files');

        $log-&gt;useDailyFiles(
            $app-&gt;storagePath().'/logs/my-app.log', is_null($maxFiles) ? 5 : $maxFiles,
            $config-&gt;get('app.log_level', 'debug')
        );
    }

    /**
     * Configure the Monolog handlers for the application.
     *
     * @param  \Illuminate\Contracts\Foundation\Application  $app
     * @param  \Illuminate\Log\Writer  $log
     * @return void
     */
    protected function configureSyslogHandler(Application $app, Writer $log)
    {
        $log-&gt;useSyslog(
            'my-app',
            $app-&gt;make('config')-&gt;get('app.log_level', 'debug')
        );
    }
}
</code></pre>
<p>And there we have it - our log file will be named <code>my-app.log</code> instead of <code>laravel.log</code>.</p>
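<p>As a quick sanity check (a sketch - this assumes a standard Laravel 5.3 app with the classes above in place), writing any log entry should now land in the renamed file:</p>

```php
<?php

// e.g. in a route closure, or via `php artisan tinker`
Log::info('Testing the renamed log file');

// Then confirm the entry shows up in storage/logs/my-app.log
// (and that no new laravel.log has been created)
```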
]]></content:encoded>
      <pubDate>Fri, 30 Dec 2016 22:25:41 +0000</pubDate>
    </item>
    <item>
      <title>Adapters and Makers</title>
      <link>https://fideloper.com/adapters-and-makers</link>
      <description>Two personality traits exist on opposite ends of a spectrum: adapters and makers.</description>
      <content:encoded><![CDATA[<h2>Learning your Tools</h2>
<p>It's up to you to learn your tools so you can adapt them to your needs. It's not up to the tool to adapt to your needs.</p>
<!-- The former is called programming, while the latter is called opening an entitled-sounding and ultimately ignored GitHub issue. -->
<p>If available tools make it too hard to meet your needs, your alternative option is to make your own tool.</p>
<p><strong>Frustration in available tooling is my theory on how things like Rails and Laravel get created.</strong></p>
<p>That being said, I think the more interesting question is why some of us seem to always adapt available tools while others seem to always create their own tools.</p>
<h2>Personalities</h2>
<p>I know some people who have made very popular tools. I've always wished to be one of them. However, I've never reached the tipping point where I went off to make something new - I've always been able to adapt current tools to suit my needs.</p>
<p>I've noticed that those who make their own tool are different here - they get frustrated at the process of learning someone else's tool.</p>
<p>Over time, I've come to divide this distinction of personalities into <strong>adapters</strong> and <strong>makers</strong>. It should be noted that I imagine these to be on a spectrum, not binary positions.</p>
<h3>Adapters</h3>
<p>Adapters learn their tools and adapt their usage so they can complete their objectives.</p>
<p>Adapters tend to work through the frustrations that come with learning a new tool, either because they enjoy the learning process or through sheer determination not to be defeated (apologies if &quot;sheer determination&quot; is a tad hyperbolic).</p>
<p>One thing of note, however: It is common for the ideal solution to suffer compromise due to limitations in available tools. You know this is happening when you need to explain why your code won't be a stakeholder's (your?) exact vision. This is a trade-off of not spending the time on making something bespoke.</p>
<h3>Makers</h3>
<p>Makers are less willing to suffer the frustrations that come with learning a tool. They have a vision of how something should work. While learning a tool, they see how compromised that vision becomes. Rather than pushing through, they make their own tool.</p>
<p>They may spend countless hours on this. Many of us would see this as a waste of time; indeed, it ultimately might be.</p>
<p>However, those with extraordinary (read that as &quot;above average&quot;, not &quot;god-like&quot;) vision may make something great.</p>
<p>This, perversely, often involves some deep learning. While makers may not have the perseverance for deep learning of someone else's tool (we call this &quot;human nature&quot;), they are more willing to shoulder that burden in building their own thing.</p>
<blockquote>
<p>While not necessarily directly related to building a tool, this seems to be a trait shared amongst entrepreneurs.</p>
</blockquote>
<h2>Glory</h2>
<p>It's easy to glorify the makers. The very successful ones are few and enjoy the fruits (and labors) of minor to major internet celebrity.</p>
<p>Conversely, successful adapters are more likely to have a better job relative to their less-successful peers in adaptation. This might mean doing better within a company, or doing better in finding new employment. I imagine this is due to a network effect, where their success positively affects the success of what they work on, and thus they themselves become respected within their social/professional circles.</p>
<h2>Fear</h2>
<p>However, I fear that conflating the maker personalities with the celebrity version of success is damaging. I can just imagine the Hacker News response, each commenter desperately trying to prove how they are victoriously fitting the Maker Mold.</p>
<blockquote>
<p>Not that I give my opinions that much weight. Just as the HN crowd try to see something of themselves in others' success, I often find myself imagining my ideas as read and respected by &quot;the masses&quot;. Reality is appropriately harsher than our rosy imaginations.</p>
</blockquote>
<p>That is not my point - both of these personality traits Get Shit Done™. One, however, may be more profitable, if you win that particular lottery.</p>
<h2>Grey</h2>
<p>Cynicism aside, I don't see any one extreme of the spectrum as being more intelligent than the other. I view makers as being more stubborn by nature. I view adapters as being more easy-going by nature.</p>
<p>These are not a measure of intelligence, or even of perseverance. Instead, it's a shift in focus: where you feel most comfortable putting effort.</p>
<p>And don't forget it's a spectrum.</p>
]]></content:encoded>
      <pubDate>Tue, 21 Jun 2016 15:22:55 +0000</pubDate>
    </item>
    <item>
      <title>Laravel and Content Negotiation</title>
      <link>https://fideloper.com/laravel-content-negotiation</link>
      <description>Here's a little bit about content negotiation in your Laravel application.</description>
      <content:encoded><![CDATA[<p>Here's a little bit about content negotiation.</p>
<p>An HTTP client, such as your browser, or perhaps jQuery's ajax method, can set an <code>Accept</code> header as part of an HTTP request.</p>
<h2>Accept Header</h2>
<p>This header is meant to tell the server what content types it is willing to accept. From the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1">HTTP 1.1 spec, section 14.1</a>:</p>
<blockquote>
<p>The Accept request-header field can be used to specify certain media
types which are acceptable for the response.</p>
</blockquote>
<p>Such a header might look something like this:</p>
<pre><code>Accept: application/json
</code></pre>
<p>In a typical request from Chrome, we see something more like this:</p>
<pre><code>Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
</code></pre>
<p>As you can see, the Accept header can get complex.</p>
<p>The above example lists a few media types the client (Chrome) is willing to accept, and even gives them a &quot;quality&quot; factor (the <code>q</code> rating, value 0-1). This is essentially telling the server the ordered preference of content types it wants back.</p>
<p>I won't get any more into that, but you can read more about it in <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1">the HTTP spec</a>.</p>
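<p>To make that &quot;quality&quot; ordering concrete, here's a minimal, framework-free sketch of sorting an Accept header by q-value (the helper name is hypothetical, and real-world headers have more edge cases than this handles):</p>

```php
<?php

// Hypothetical helper: split an Accept header into media types,
// ordered by preference (q defaults to 1.0 per the HTTP spec)
function parseAcceptHeader($header)
{
    $types = [];
    foreach (explode(',', $header) as $part) {
        // Each part looks like "application/xml;q=0.9"
        $segments = array_map('trim', explode(';', $part));
        $type = array_shift($segments);
        $quality = 1.0;
        foreach ($segments as $param) {
            if (strpos($param, 'q=') === 0) {
                $quality = (float) substr($param, 2);
            }
        }
        $types[$type] = $quality;
    }
    arsort($types); // highest q-value first
    return array_keys($types);
}

$ordered = parseAcceptHeader('text/html,application/xml;q=0.9,*/*;q=0.8');
// $ordered[0] is 'text/html' - the client's top preference
```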
<h2>The Server</h2>
<p>It's up to the server to follow the rules of HTTP. When a request comes to our application, it's pretty easy to ignore these rules, as our frameworks generally let us return whatever we want.</p>
<blockquote>
<p>This is the &quot;negotiation&quot; part. The client says what content types it's willing to accept, and its preference. The server then can decide what it's willing/able to send back. Or ignore it, if it's a rebel. Not much of a negotiation, if you ask me.</p>
</blockquote>
<p>For example, if a request comes into your Laravel app with an <code>Accept</code> header requesting JSON, you can totally ignore it without ever realizing the HTTP client wanted something else:</p>
<pre><code class="language-php">Route::get('/foo', function()
{
    // Accept header? Whatever, bruh
    return view('foo.bar');
});
</code></pre>
<h3>Checking the Accept header:</h3>
<p>If you want to check for that header, you can do some manual stuff:</p>
<pre><code class="language-php">Route::get('/foo', function()
{
    $accept = request()-&gt;header('accept'); // application/json

    if( $accept === 'application/json' )
    {
        return ['foo' =&gt; 'bar']; // Returns JSON, thanks to Laravel Magic™
    }

    return view('foo.bar');
});
</code></pre>
<h3>It gets easier, however:</h3>
<p>Laravel provides a nice, <a href="https://github.com/laravel/framework/blob/5.2/src/Illuminate/Http/Request.php#L609-L614">easy way to check if a request &quot;wants json&quot;</a>:</p>
<pre><code class="language-php">Route::get('/foo', function()
{

    // Look at this nice, friendly global helper function
    // Hello, global function! We sincerely love you with all our &lt;3
    if( request()-&gt;wantsJson() )
    {
        return ['foo' =&gt; 'bar'];
    }

    return view('foo.bar');
});
</code></pre>
<p>If you check out the function linked above, you can see how Laravel is using the underlying Symfony HTTP classes, which handle the dirty work of knowing that HTTP requests might send down multiple accept content types:</p>
<pre><code class="language-php">public function wantsJson()
{
    $acceptable = $this-&gt;getAcceptableContentTypes();
    return isset($acceptable[0]) &amp;&amp; Str::contains($acceptable[0], ['/json', '+json']);
}
</code></pre>
<p>Note that this is grabbing the <em>first</em> of the list of acceptable content types, <strong>ordered by preference</strong>, sent in the HTTP request and testing if it is a JSON content type. It's <strong>not</strong> simply saying &quot;Yeah, I guess JSON was one of the content types you wanted&quot;.</p>
<p>If you want to see if the request will accept JSON, regardless of its stated preference, use the <a href="https://github.com/laravel/framework/blob/5.2/src/Illuminate/Http/Request.php#L683-L686"><code>acceptsJson()</code> method</a>.</p>
<h3>Other Content Types</h3>
<p>I suggest taking a look at some of the shortcuts the Laravel <code>Request</code> class has - you can check for <a href="https://github.com/laravel/framework/blob/5.2/src/Illuminate/Http/Request.php#L616-L645">other content types with the <code>accepts()</code> method</a>, for example, or use the <a href="https://github.com/laravel/framework/blob/5.2/src/Illuminate/Http/Request.php#L653-L676"><code>prefers()</code> method</a> to see which is the most preferred content type.</p>
<h2>Middleware, or something</h2>
<p>This is ripe for some sort of middleware or other functionality which can automatically decide to return JSON or HTML (or hey, even XML!). If you use RESTful routing controllers, that might be a nice addition, similar to how Rails lets you set a return type.</p>
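<p>A rough sketch of what that middleware could look like (the class name and view name here are hypothetical - this just leans on the same <code>wantsJson()</code> check from earlier, and assumes the controller returns a plain array or similar data):</p>

```php
<?php

namespace App\Http\Middleware;

use Closure;

// Hypothetical middleware: let the controller return raw data,
// then negotiate the response format based on the Accept header
class NegotiatesContent
{
    public function handle($request, Closure $next)
    {
        $response = $next($request);

        // If the client prefers JSON, return the original data as JSON
        if ($request->wantsJson()) {
            return response()->json($response->original);
        }

        // Otherwise, wrap the data in an (assumed) HTML view
        return response()->view('data.show', [
            'data' => $response->original,
        ]);
    }
}
```

You'd register it like any other middleware, and your RESTful controllers could then stay blissfully unaware of the Accept header.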
]]></content:encoded>
      <pubDate>Wed, 06 Jan 2016 22:27:13 +0000</pubDate>
    </item>
    <item>
      <title>Audio Gear for Video Casting Meatbags</title>
      <link>https://fideloper.com/gear-for-video-casting-meatbags</link>
      <description>Since I'm doing a lot of video casting lately, I've invested in some decent gear. Here's how I came about and decided on what gear to get!</description>
      <content:encoded><![CDATA[<p>Since I'm doing a lot of <a href="https://serversforhackers.com/">video casting lately</a>, I've invested in some decent gear. It is, after all, <a href="https://www.youtube.com/watch?v=XEL65gywwHQ">a write off</a>.</p>
<h2>Software</h2>
<p>This article is primarily about the hardware I use, but you might as well know about the software too.</p>
<p>For editing screencasts, I started out recording using QuickTime (the one that comes with your Mac). I had a copy of Adobe Premiere from previous employment, and so tried that to edit the videos. That shit is complicated. You can figure it out, but it's harder to do simple things than it should be. I never aspired to having &quot;proficiency in Adobe Premiere&quot; on any resume, so I looked for alternatives.</p>
<p>That led me to <a href="http://www.telestream.net/screenflow/overview.htm">Screenflow</a>, which I can't recommend enough. It strikes a really good balance between control (aka complexity) and sensible defaults (aka ease of use). For example, exporting a video in Screenflow is super easy for sending right to Youtube or Vimeo. If you export an edited video to disk, you get a lot of options, but not so many that you feel like you need to be an expert to wade through them.</p>
<p>Editing sound and video is intuitive. Adding effects, adjusting volume, hiding the mouse, showing keys hit, adding annotations and similar popular use cases are all fairly easy to handle.</p>
<p>I haven't used it, but I'd also check out <a href="https://www.techsmith.com/camtasia.html">Camtasia</a> if you're looking for alternatives.</p>
<p>Anyway, onward to audio stuff!</p>
<h2>First Feeble Audio Steps</h2>
<p>About a year ago, I made one or two videos using the ubiquitous white earbuds of Apple fame. As you might suspect, the quality was horrendous.</p>
<p><strong>Cost:</strong> 1.5 Apple Pricing Units. Apple seems to have a baseline of $19.99 for even the cheapest item, so I've started calling $20 an &quot;Apple Pricing Unit&quot;. It's a thing. I swear.</p>
<h2>Next Attempt</h2>
<p>Searching for better quality, I moved on to using the popular Blue Yeti (eventually also adding a pop filter). The quality is much improved, but it varied a lot depending on how close I was to the microphone. Since it comes with a desk stand, this created a few issues:</p>
<ul>
<li>To get close enough, I had to lean forward uncomfortably while speaking into it, which was <em>super</em> awkward when also typing/mousing around while recording. Picture Quasimodo using a computer for the first time.</li>
<li>As the Yeti sat on my desk, <strong>ALL</strong> desk vibrations were recorded as terrible, bassy thuds. This includes both the times I accidentally grazed the desk with my fingers, and any keyboard typing. Worse, it could even be heard after putting the Yeti on thick stacks of paper to help absorb vibrations.</li>
<li>This is a condenser mic, and therefore picks up just about all stray noises. If you ever wondered how attractive your breathing and wet, sloppy mouthing noises were, grab yourself a condenser mic. Humans are seriously a pile of disgusting wet meat.</li>
</ul>
<p><img src="https://s3.amazonaws.com/sfh-assets/meatbag_status.png" alt="please get this reference" /></p>
<p>The fix for most of the vibration issues would be a desk-mount (or floor stand) and shockmount. However, the Yeti is <em>really</em> heavy, making it a poor choice to put on most stands, which have a hard enough time keeping normal microphones in position without drooping.</p>
<p>You can see the <a href="http://www.amazon.com/Blue-Microphones-Radius-Microphone-Shock/product-reviews/B005DVF15A/ref=cm_cr_dp_see_all_summary?ie=UTF8&amp;showViewpoints=1&amp;sortBy=byRankDescending">dismal reviews for Blue's Radius shockmount on Amazon</a>, although that may mostly be fixed with the <a href="http://www.amazon.com/Blue-Microphones-RADIUS-II-Microphone/dp/B00TTQLA50/ref=sr_1_1?ie=UTF8&amp;qid=1431527502&amp;sr=8-1&amp;keywords=blue+shock+mount">Radius II mount</a>.</p>
<p><strong>Cost</strong>: $130</p>
<h2>Research</h2>
<!--
In code, there's no "wrong way" to code something - just trade offs. Typically these are between speed and long-term maintainability.

I don't believe you can call yourself an expert without knowing the trade-offs. Anyone espousing "the one true way" is over-simplifying for personal gain or does not have enough knowledge in what they're talking about. Actually, I'm sure there are other reasons why someone would do that, so don't go Well, Actually™-ing me to death, I don't actually care about your opinion on the matter.

My point, assuming I actually have one, is that what I look for when researching "stuff and things" are the trade-offs between choices. I trust Google results that explain trade-offs, rather than one-sided arguments for or against something.

Other than wanting to sound smart by interjecting my "you're not as intelligent as me because trade-offs" argument, what I'm try to say is that 
-->
<p>Researching better audio equipment should inform you of the trade-offs to decide between, rather than as a way to find &quot;the one true mic&quot;. You can buy a better microphone, but in reality, you might just be buying a &quot;different&quot; microphone.</p>
<blockquote>
<p>Although with a reasonable price increase, you likely are getting both different and better.</p>
</blockquote>
<p>One of the larger trade-offs to make between microphone types is in deciding between condenser and dynamic microphones.</p>
<p><strong>Condenser microphones</strong>, like the Yeti, pick up a LOT of background noise. This is actually &quot;good&quot;, in that they have a good range/frequency of audio they can record. From what I read, this is great for things like music. In a studio. With lots of sound proofing.</p>
<p>This is <strong>not</strong> as good when you have a dog slopping from a water bowl (gross mouthing noises from across the room!!), or a community pool outside your window (don't these people have f&amp;%^#@!$ jobs!?).</p>
<p><strong>Dynamic microphones</strong> do a great job at filtering out extraneous sounds. In fact, they do such a good job, that you may find yourself needing an amp so people can actually hear you.</p>
<p>In any case, dynamic microphones are great for podcasting / screencasting. They need to pick up voice, but don't necessarily require the frequency range desired for instruments and singing.</p>
<blockquote>
<p>Plenty of people recommend dynamics for singing. This depends on the microphone brand, quality and your needs. Everything is a trade off between underlying technology and the quality/focus of a specific product.</p>
</blockquote>
<p>What I needed was something to have great quality for voice, but not pick up every little bit of background noise.</p>
<p>Where I live, at any given hour, there's usually a gaggle of attractive, pool-side Abercrombie models just galavanting around like they don't have a care in the world.</p>
<p><img src="http://cos.h-cdn.co/assets/14/39/980x606/nrm_1411589505-abercrombie-spring-break-2001-022.jpg" alt="Actually from an Abercrombie magazine. I found it by googling 'abercrombie nude on horse', remembering a picture I saw when I was a lot younger." /></p>
<p>Reducing background noise is really important to me.</p>
<h2>Current Setup</h2>
<p>I landed on the <a href="http://www.amazon.com/Heil-Dynamic-Studio-Recording-Microphone/dp/B00PQYBRNY">Heil PR40</a>. This is based on personal recommendations and reviews, which emphasized the greatness of this mic for podcasts and voiceover - just what I do!</p>
<p>Of course, selecting the microphone is just to start. Then you realize you may need a lot more supporting equipment. Here's everything I got:</p>
<ul>
<li><a href="http://amzn.to/1HUVeMp">Heil PR40</a> ~ $330</li>
<li><a href="http://amzn.to/1RytMFZ">Heil Overhead Broadcast Boom</a> - Because I refuse to awkwardly sit forward and try to type while recording. ~ $130</li>
<li><a href="http://amzn.to/1HCgt03">Heil shockmount</a> because vibrations are a thing and they suck on any microphone. ~ $105</li>
<li><a href="http://amzn.to/1FljcKO">Shure X2U Adapter</a> - Most dynamics aren't USB, so this converts your XLR to a USB signal while providing extra &quot;Phantom +48v&quot; power, which I believe is for condenser mics rather than dynamics. This comes with a USB cable but not an XLR cable. It's also handy for testing various mics, in case you own or try one that doesn't support USB. ~ $99</li>
<li><a href="http://amzn.to/1LPVxXh">Cloudlifter CL-1</a> - I picked this up to help boost sound output, as dynamic mics can come through a bit soft. I haven't tested the mic with vs without the Cloudlifter yet. ~ $150</li>
<li><a href="http://www.amazon.com/gp/product/B0002E1P30/ref=oh_aui_detailpage_o00_s00?ie=UTF8&amp;psc=1">Mogami Studio Microphone Cable</a> - I'm wary of cable prices, but went with good/expensive cables here since I was in the mindset of &quot;investment for my business&quot;. Probably a bit of self-justification, but so far so good on quality! I guess. I don't really know. ~ $40 - $55</li>
<li><a href="http://www.amazon.com/gp/product/B0002E1P30/ref=oh_aui_detailpage_o00_s00?ie=UTF8&amp;psc=1">Mogami Studio Microphone Cable</a> - Listed here a second time because you need at least two XLR cables: one between the microphone and the Cloudlifter, and another between the Cloudlifter and the Shure XLR-to-USB converter. Don't forget that like I did :D ~ $40 - $55</li>
</ul>
<p>Here you can see just about everything. I had the Cloudlifter just hanging off the back until I got a longer cord. The Shure X2U is the black gizmo on the bottom right, under the monitor.</p>
<p><img src="https://s3.amazonaws.com/sfh-assets/podcasting.png" alt="heil pr40 and friends" /></p>
<blockquote>
<p>The Shure X2U is really nice. It lets you adjust volume and gain. To help you know what levels are good, it has a green light that blinks when sound is too low, is steady when it's just right, and turns orangey/yellowy when you're too loud.</p>
<p>What's fun is to learn that &quot;S&quot; sounds are something like 4 times as loud as any other sound we make. You can see that because anytime you make an S sound, the light on the X2U will turn orange. There are techniques to <a href="http://www.soundonsound.com/sos/may09/articles/deessing.htm">combat all this madness</a>.</p>
</blockquote>
<p>After rounding a bit, and because I ended up with 3 XLR cables (guessed wrong on the lengths I'd want), I spent ~ <strong>$940</strong> on all of this. You can spend less or much much more depending on your heart's desire.</p>
<p>I love this microphone though. Having it over my head is very nice - it's out of the way. I can move it with me, so I can sit comfortably while casting. The shockmount and pop filter really help with sound quality and in reducing vibration-based noise.</p>
<p>I still need to use my trackpad instead of my mouse when recording, as the scraping noise of my mouse comes through (I don't use a mouse pad, because I'm a rebel). Keyboard noise comes through as well, though just the tapping of the keys rather than the bassy vibrations caused by physically hitting the keyboard.</p>
<p>So that's it! One large blog post to brag about how I am writing off $940 as a business expense this year.</p>
]]></content:encoded>
      <pubDate>Wed, 13 May 2015 19:59:05 +0000</pubDate>
    </item>
    <item>
      <title>Laravel/Symfony Console Commands &amp; Stderr</title>
      <link>https://fideloper.com/laravel-symfony-console-commands-stderr</link>
      <description>When we use Symfony's Console component to write CLI commands in PHP (and you should!), we're almost always writing any output to "stdout". This isn't necessarily good. </description>
      <content:encoded><![CDATA[<p>When we use Symfony's Console component to write CLI commands in PHP (and you should!), we're almost always writing any output to &quot;stdout&quot;.</p>
<p>There are a few ways to get general output from a CLI command using Console:</p>
<pre><code class="language-php">// Run the command (Laravelish)
public function fire()
{
    echo &quot;This is sent to stdout&quot;;  // Just text

    $this-&gt;info('Some info'); // Green Text
    $this-&gt;error('An Error'); // Red Text
}

// Run the command (Symfonyish)
public function execute(InputInterface $input, OutputInterface $output)
{
    echo &quot;This is sent to stdout&quot;;  // Just text

    $output-&gt;writeln(&quot;&lt;info&gt;$string&lt;/info&gt;&quot;);   // Green Text
    $output-&gt;writeln(&quot;&lt;error&gt;$string&lt;/error&gt;&quot;); // Red Text
}
</code></pre>
<blockquote>
<p>Console commands in Laravel use Symfony's Console component under the hood. While I'll write this (mostly) in context of Laravel, this is definitely applicable to Symfony users and those not using Laravel (collectively, &quot;the haters&quot;) as well.</p>
</blockquote>
<p>All of this, even the &quot;error&quot; output, writes to &quot;Stdout&quot;. This isn't necessarily good. In fact, the default behavior can easily surprise other developers calling these commands from the CLI.</p>
<h2>Convention</h2>
<p>The following are <a href="http://unix.stackexchange.com/questions/79315/when-to-use-redirection-to-stderr-in-shell-scripts">well established *nix conventions</a> to follow for any CLI tool.</p>
<ul>
<li>Only write pertinent, needed information to Stdout</li>
<li>Write info messages (non-pertinent) to Stderr, <em>even if it's not an error message</em></li>
<li>Only determine whether the command was successful based on its exit status code. <strong>That will be 0 (success) or non-zero (failure).</strong>
<ul>
<li>Do not attempt to guess if a command failed simply because output was sent to Stderr</li>
</ul>
</li>
<li>Usually if everything works, then <em>nothing</em> is output. Simply returning a 0 exit code is generally enough.</li>
</ul>
<p>Writing important information to Stdout lets administrators send that data to log files. Writing non-essential output to Stderr lets administrators ignore it, or send it to a separate log file for errors and other informational messages.</p>
<p>Perhaps more important is that Stdout output might get piped to another process to handle (think of any time you do <code>cat /some/file | grep 'search-term'</code>). You don't want non-essential output sent to Stdout in those cases - sending it to Stderr makes the most sense.</p>
<p>Lastly, because of these conventions, it's important that your commands return 0 when they succeed and a non-zero value when they exit with an error. This is The Way™ to detect whether there's truly an error or the command operated successfully.</p>
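<p>To make these conventions concrete, here's a minimal sketch in plain PHP (no Console component). The &quot;generate an API key&quot; task and function name are made up, just to show where each kind of output belongs:</p>
<pre><code class="language-php">&lt;?php

// Plain-PHP sketch of the conventions above. The
// &quot;generate an API key&quot; task is made up for illustration.

function generateKeyCommand()
{
    // Non-essential status info goes to Stderr
    fwrite(STDERR, &quot;Status: Working...\n&quot;);

    $apiKey = 'aaabbbcccddd'; // Pretend this is the real work

    // Only the pertinent result goes to Stdout
    fwrite(STDOUT, $apiKey . PHP_EOL);

    // The return value becomes the exit code:
    // 0 for success, non-zero for failure
    return 0;
}

$exitCode = generateKeyCommand(); // then: exit($exitCode);
</code></pre>
<p>Running this with <code>php generate-key.php 2&gt;/dev/null</code> prints only the key - the status chatter can be discarded or logged separately, and the exit code still reports success.</p>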
<h2>In Practice</h2>
<p>Here's how I setup Laravel commands:</p>
<pre><code class="language-php">&lt;?php namespace Foo\Bar;

use Illuminate\Console\Command;
use Symfony\Component\Console\Output\ConsoleOutputInterface;

class MyCommand extends Command {

    # Some boilerplate omitted

    public function fire()
    {
        // Default $stdErr variable to output
        $stdErr = $this-&gt;getOutput();

        if( $this-&gt;getOutput() instanceof ConsoleOutputInterface )
        {
            // If it's available, get stdErr output
            $stdErr = $this-&gt;getOutput()-&gt;getErrorOutput();
        }

        try {
            // Some operations
             
            // Non-critical information message
            // Since we have the Symfony output object, use writeln function
            $stdErr-&gt;writeln('&lt;info&gt;Status: Working...&lt;/info&gt;');
        } catch( \Exception $e )
        {
            // Since we have the Symfony output object, use writeln function
            $stdErr-&gt;writeln('&lt;error&gt;'.$e-&gt;getMessage().'&lt;/error&gt;');
            return 1;
        }

        // Important output
        $this-&gt;info('Your new API key is: aaabbbcccddd');
        return 0;
    }
}
</code></pre>
<p>And the same in a Symfony command:</p>
<pre><code class="language-php">&lt;?php namespace Foo\Bar;

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\ConsoleOutputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class MyCommand extends Command {

    # Some boilerplate omitted

    public function execute(InputInterface $input, OutputInterface $output)
    {
        // Default $stdErr variable to output
        $stdErr = $output;

        if( $output instanceof ConsoleOutputInterface )
        {
            // If it's available, get stdErr output
            $stdErr = $output-&gt;getErrorOutput();
        }

        try {
            // Some operations
             
            // Non-critical information message
            $stdErr-&gt;writeln('&lt;info&gt;Status: Working...&lt;/info&gt;');
        } catch( \Exception $e )
        {
            $stdErr-&gt;writeln('&lt;error&gt;'.$e-&gt;getMessage().'&lt;/error&gt;');
            return 1;
        }

        // Important output
        $output-&gt;writeln('&lt;info&gt;Your new API key is: aaabbbcccddd&lt;/info&gt;');
        return 0;
    }
}
</code></pre>
<p>These classes mirror each other. The Laravel version uses some of its syntactic sugar.</p>
<p>Let's go over what's going on.</p>
<p>First, I assign a variable <code>$stdErr</code>. This gets assigned a fallback of the Output object. I'm going to use this variable later for error output regardless of whether it's used for Stdout (the default) or Stderr.</p>
<p>If the Output object happens to be an instance of <a href="https://github.com/symfony/Console/blob/master/Output/ConsoleOutputInterface.php">ConsoleOutputInterface</a>, I'll know it has the <code>getErrorOutput</code> method available. Not all Output implementations do, so this check is important. Stderr can then be assigned the Error Output object, which will write to Stderr. I can then easily differentiate output between Stderr and Stdout.</p>
<p>The rest of this is implementation of the above conventions. I write non-essential information to Stderr, but use the &quot;info&quot; formatters, as they don't need the red error styling.</p>
<p>Actual errors are also output to Stderr, but with the red output styling.</p>
<p>Important information is output to Stdout, again with the &quot;info&quot; styling.</p>
<p>Note that I return 0 or 1 (0 for success). The return value is taken by the Symfony Console component and returned as the exit code of the command. <a href="https://github.com/symfony/Console/blob/master/Command/Command.php#L259">If you define nothing, then 0 is returned</a>, even if you have output an error!</p>
<p>If this command didn't need the resulting output (for example, the new API key), I would return nothing. If I had a &quot;success&quot; message, I would actually write that to Stderr, but with the &quot;info&quot; formatting.</p>
]]></content:encoded>
      <pubDate>Tue, 05 May 2015 17:32:50 +0000</pubDate>
    </item>
    <item>
      <title>Hexagonal Architecture</title>
      <link>https://fideloper.com/hexagonal-architecture</link>
      <description>I recently gave a talk on Hexagonal Architecture at Laracon NYC. The feedback was great, but seemed to have left people wanting for some extra explanation and of course examples. This is an attempt to expand on the ideas of that presentation.</description>
      <content:encoded><![CDATA[<p>I recently <a href="https://speakerdeck.com/fideloper/hexagonal-architecture">gave a talk on Hexagonal Architecture</a> at Laracon NYC. The feedback was great, but seemed to have left people wanting for some extra explanation and of course examples. This is an attempt to expand on the ideas of that presentation.</p>
<ul>
<li><a href="http://userscape.com/laracon/2014/chrisfidao.html">Video of Talk</a></li>
<li><a href="https://speakerdeck.com/fideloper/hexagonal-architecture">Slides for Talk</a></li>
</ul>
<p>I found Hexagonal Architecture to be a good expression of how I think about code. In fact, when I wrote Implementing Laravel, I was actually espousing some ideals of Hexagonal Architecture without knowing it.</p>
<p>Hexagonal Architecture defines conceptual layers of code responsibility, and then points out ways to decouple code between those layers. It's helped clarify when, how and why we use interfaces (among other ideas).</p>
<p>Hexagonal Architecture is <strong>NOT</strong> a new way to think about programming within a framework. Instead, it's a way of describing &quot;best practices&quot; - practices that are both old and new. I use quotes because that's a bit of a loaded phrase. Best practices for me might not be best practices for you - it depends on what technical circles we engage in.</p>
<p>However, Hexagonal Architecture espouses common themes we'll always come across: decoupling of code from our framework, letting our application express itself, and using a framework as a means to accomplish tasks in our application, instead of letting it be our application itself.</p>
<h3>Beginnings</h3>
<p>The name for Hexagonal Architecture is brought to us (so far as I can tell) by Alistair Cockburn. He <a href="http://alistair.cockburn.us/Hexagonal+architecture">outlines the architecture</a> very well on his website.</p>
<p>Its intent:</p>
<blockquote>
<p>Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases.</p>
</blockquote>
<h3>Why a Hexagon</h3>
<p>The architecture takes on the shape of a hexagon. The number of sides is actually arbitrary - the point is that it has many sides. Each side represents a &quot;port&quot; into or out of our application.</p>
<p>A port can be thought of as a vector for accepting requests (or data) into an application. For example, an HTTP port (browser requests, API) can make requests on our application. Similarly, a queue worker or other messaging protocol (perhaps AMQP) can also make a request on our application. These are different ports into our application, but are also part of the &quot;request port&quot;. Other ports could include those for data access, such as a database port.</p>
<h2>Architecture</h2>
<p>Why do we even talk about Architecture?</p>
<p>We talk about architecture because we want our applications to contain two attributes:</p>
<ol>
<li>High Maintainability</li>
<li>Low Technical Debt</li>
</ol>
<p>These are, in fact, the same thing. To word this succinctly: We want our applications to be easy to work with. We want to make future changes easy.</p>
<h3>Maintainability</h3>
<p>Maintainability is the absence (reduction) of technical debt. A maintainable application is one that increases technical debt at the slowest rate we can feasibly achieve.</p>
<p>Maintainability is a long-term concept. Applications in their early form are easy to work with - they haven't yet been formed and molded by the early decisions of the developers working on them. New features and libraries are added quickly and easily.</p>
<p>However, as time goes on, applications can get harder to work on. Adding features might conflict with current functionality. Bugs might hint at systemic issues, which may require large changes in code to fix (and to help clarify overly complex code).</p>
<p>A good architecture early on in a project can help prevent such issues.</p>
<p>What kinds of maintainability are we looking for? What are measures of a highly maintainable application?</p>
<ol>
<li>Changes in one area of an application should affect as few other places as possible</li>
<li>Adding features should not require large code-base changes</li>
<li>Adding new ways to interact with the application should require as few changes as possible</li>
<li>Debugging should require as few work-arounds and &quot;just this once&quot; hacks as possible</li>
<li>Testing should be relatively easy</li>
</ol>
<p>I use the word &quot;should&quot; because there's no perfectly coded application in existence. We want to make our applications easy to work with, but trying for &quot;perfect&quot; becomes a waste of time and an over-exertion of mental energy.</p>
<blockquote>
<p>If you think you're spinning your wheels over &quot;the right way&quot; to do something, then just &quot;get it done&quot;. Come back to the problem later, or keep your code in its &quot;it just works&quot; state. There's no perfectly coded application in existence.</p>
</blockquote>
<h3>Technical Debt</h3>
<p>Technical debt is the debt we pay for our (bad) decisions, and it's paid back in time and frustration.</p>
<blockquote>
<p>Applications all incur a base-line technical debt. We need to work within the confines and limitations of our chosen persistence mechanisms, language, frameworks, tooling, teams and organizations!</p>
</blockquote>
<p>Bad architectural decisions made early on compound themselves into larger and larger issues.</p>
<p>For every bad decision, we end up making work-arounds and hacks. Some of these bad decisions aren't blatantly obvious - we may simply make a class that &quot;does too much&quot; or mixes multiple concerns.</p>
<p>Smaller, yet equally bad decisions made during development also create issues. Luckily, these don't necessarily compound like early architectural &quot;mistakes&quot; can. A solid basis reduces technical debt's rate of growth!</p>
<p>So, we want to reduce as many bad decisions as possible, especially early on in a project.</p>
<blockquote>
<p><strong>We make a discussion of architecture so that we can focus on increasing maintainability and decreasing technical debt</strong>.</p>
</blockquote>
<p>How do we make maintainable applications?</p>
<p>We make them easy to change.</p>
<p>How do we make our applications easy to change? We...</p>
<p><img src="https://speakerd.s3.amazonaws.com/presentations/de8629f0bf520131c2e20239d959ba18/slide_52.jpg?1400633450" alt="" /></p>
<p>We'll go back to this point quite a few times in the following explanations.</p>
<p><img src="https://speakerd.s3.amazonaws.com/presentations/de8629f0bf520131c2e20239d959ba18/slide_8.jpg?1400633450" alt="" /></p>
<h2>Interfaces and Implementations</h2>
<p>Let's take some time to discuss something (seemingly) basic in the world of OOP: Interfaces.</p>
<blockquote>
<p>Not all languages (notably: Python &amp; Ruby) have explicit Interfaces, however conceptually the same goals can be accomplished in such languages.</p>
</blockquote>
<p>You can think of an interface as a contract, which defines an application need. If that need can be, or must be, fulfilled by multiple implementations, then an interface can be used.</p>
<p>In other words, we use interfaces when we plan on having or needing multiple <em>implementations</em> of an <em>interface</em>.</p>
<p>For example, if our application sends notifications, we might define a notification interface. Then we can implement an SES notifier to use Amazon SES, a Mandrill notifier to use Mandrill, and other implementations for other mail systems.</p>
<blockquote>
<p>The interface ensures that particular methods are available for our application to use, no matter what implementation is decided upon.</p>
</blockquote>
<p>For example, the notifier interface might look like this:</p>
<pre><code>interface Notifier {

    public function notify(Message $message);
}
</code></pre>
<p>We know any implementation of this interface <strong>must</strong> have the <code>notify</code> method. This lets us define the <strong>interface</strong> as a dependency in other places of our application.</p>
<p>The application doesn't care which implementation it uses. It just cares that the <code>notify</code> method exists for it to use.</p>
<pre><code>class SomeClass {

    public function __construct(Notifier $notifier)
    {
        $this-&gt;notifier = $notifier;
    }

    public function doStuff()
    {
        $to = 'some@email.com';
        $body = 'This is a message';
        $message = new Message($to, $body);

        $this-&gt;notifier-&gt;notify($message);
    }
}
</code></pre>
<p>See how our <code>SomeClass</code> class doesn't specify a concrete implementation, but rather simply requires an implementation of <code>Notifier</code>. This means we can use our SES, Mandrill or any other implementation.</p>
<p>This highlights an important way interfaces can add maintainability to your application. Interfaces make changing our application notifier easier - we can simply add a new implementation and be done with it.</p>
<pre><code>class SesNotifier implements Notifier {

    public function __construct(SesClient $client)
    {
        $this-&gt;client = $client;
    }

    public function notify(Message $message)
    {
        $this-&gt;client-&gt;send([
            'to' =&gt; $message-&gt;to,
            'body' =&gt; $message-&gt;body]);
    }
}
</code></pre>
<p>In the above example, we've used an implementation making use of Amazon's Simple Email Service (SES). However, what if we need to switch to Mandrill to send emails, or even switch to Twilio, to send SMS?</p>
<p>As we've seen, we can easily make additional implementations and switch between those implementations as needed.</p>
<pre><code>// Using SES Notifier
$sesNotifier = new SesNotifier(...);
$someClass = new SomeClass($sesNotifier);

// Or we can use MandrillNotifier

$mandrillNotifier = new MandrillNotifier(...);
$someClass = new SomeClass($mandrillNotifier);

// This will work no matter which implementation we use
$someClass-&gt;doStuff();
</code></pre>
<p>Our frameworks make liberal use of interfaces in a similar fashion. In fact, frameworks are useful <em>because</em> they handle many possible implementations we developers may need - for example, different SQL servers, email systems, cache drivers and other services.</p>
<p>Frameworks use interfaces because they increase the maintainability of the framework - it becomes easier to add or modify features, and easier for us developers to extend the frameworks should we need to.</p>
<p>The use of an interface helps us properly <strong>encapsulate change</strong> here. We can simply make new implementations as needed!</p>
<h3>Going Further</h3>
<p>Now, what if we need to add some functionality around individual (or all) implementations? For example, we may need to add logging to our SES implementation, perhaps to help debug an issue we're having.</p>
<p>The most obvious way, of course, is to add code directly to the implementations.</p>
<pre><code>class SesNotifier implements Notifier {

    public function __construct(SesClient $client, Logger $logger)
    {
        $this-&gt;logger = $logger;
        $this-&gt;client = $client;
    }

    public function notify(Message $message)
    {
        $this-&gt;logger-&gt;logMessage($message);
        $this-&gt;client-&gt;send([...]);
    }
}
</code></pre>
<p>Adding the logger directly to the concrete implementation may be OK, but our implementation is now doing two things instead of one - we're mixing concerns. Furthermore, what if we need to add logging to all implementations? We'd end up with very similar code in each implementation, which is hardly DRY. A change in how we add logging means making changes in each implementation. Is there an easier way to add this functionality in a way that's more maintainable? Yes!</p>
<blockquote>
<p>Do you recognize some of the SOLID principles being implicitly discussed here?</p>
</blockquote>
<p>To clean this up, we can make use of one of my personal favorite design patterns - the <a href="http://oreilly.com/catalog/hfdesignpat/chapter/ch03.pdf">Decorator Pattern</a>. This makes clever use of interfaces in order to &quot;wrap&quot; a decorating class around an implementation in order to add in our desired functionality. Let's see an example.</p>
<pre><code>// A class wrapping a Notifier with some Logging behavior
class NotifierLogger implements Notifier {

    public function __construct(Notifier $next, Logger $logger)
    {
        $this-&gt;next = $next;
        $this-&gt;logger = $logger;
    }

    public function notify(Message $message)
    {
        $this-&gt;logger-&gt;logMessage($message);
        return $this-&gt;next-&gt;notify($message);
    }
}
</code></pre>
<p>Similar to our other Notifier implementations, the <code>NotifierLogger</code> class <strong>also</strong> implements the <code>Notifier</code> interface. We can see, however, that it doesn't actually notify anything. Instead, it accepts another Notifier implementation in its constructor, calling it &quot;$next&quot;. When run, <code>NotifierLogger</code> will log the Message data and <em>then</em> pass the message onto the real notifier implementation.</p>
<p>We can see that the logger decorator logs the message, and then passes the message off to the notifier to actually do the notifying! If you need to, you can reverse the order of these so the logging is done <em>after</em> the notification is actually sent, so you can also log the results of the sent notification, instead of simply logging the message being sent.</p>
<p>In this way, the <code>NotifierLogger</code> &quot;decorates&quot; the actual notifier implementation with the logging functionality.</p>
<p>The best part is that the consuming class (our <code>SomeClass</code> example above) doesn't care that we pass in a decorated object. The decorator also implements the expected interface, so the requirements set by <code>SomeClass</code> are still fulfilled!</p>
<p>We can chain together multiple decorators as well. Perhaps, for example, we want to wrap the email notifier with an SMS notifier that sends a text message in addition to sending an email. In that example, we're adding an additional notifier implementation (SMS) on top of an emailing implementation.</p>
<p>We aren't limited to adding additional concrete notifying implementations, as our logging example shows. A few additional examples can include updating the database, or adding in some metric gathering code. The possibilities are endless!</p>
<p>The ability to add additional behaviors, while keeping each class only doing one thing, <strong>and</strong> still giving us the freedom to add additional implementations, is very powerful - changing our code becomes much easier!</p>
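<p>Here's a compact, runnable sketch of such a chain. The <code>SmsNotifier</code> decorator and the <code>echo</code> &quot;sending&quot; stubs are hypothetical stand-ins for real clients, and plain strings are used instead of the Message object for brevity - the point is only the shape of the chain:</p>
<pre><code class="language-php">&lt;?php

interface Notifier {
    public function notify($message);
}

class EmailNotifier implements Notifier {

    public function notify($message)
    {
        echo &quot;email: {$message}\n&quot;; // stub for a real mail client
    }
}

// Decorator: sends an SMS, then delegates to the wrapped notifier
class SmsNotifier implements Notifier {

    protected $next;

    public function __construct(Notifier $next)
    {
        $this-&gt;next = $next;
    }

    public function notify($message)
    {
        echo &quot;sms: {$message}\n&quot;; // stub for a real SMS client
        return $this-&gt;next-&gt;notify($message);
    }
}

// Decorator: logs the message, then delegates onward
class NotifierLogger implements Notifier {

    protected $next;

    public function __construct(Notifier $next)
    {
        $this-&gt;next = $next;
    }

    public function notify($message)
    {
        echo &quot;log: {$message}\n&quot;; // stub for a real logger
        return $this-&gt;next-&gt;notify($message);
    }
}

// Chain: log -&gt; sms -&gt; email. The consumer still just sees a Notifier.
$notifier = new NotifierLogger(new SmsNotifier(new EmailNotifier()));
$notifier-&gt;notify('Your order shipped');
</code></pre>
<p>Calling <code>notify</code> once logs the message, sends the (pretend) SMS, and then sends the (pretend) email - and the consumer's <code>Notifier</code> type-hint never changes.</p>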
<blockquote>
<p>The Decorator Pattern is just one design pattern of <em>many</em> that make excellent use of <strong>interfaces to encapsulate change</strong>. In fact, almost all of the classic design patterns make use of interfaces.</p>
<p>Furthermore, almost all design patterns exist to make future changes easier. This is not a coincidence. Making a study of design patterns (and <em>when</em> to use them) is a critical step towards making good architectural decisions. I suggest the Head First Design Patterns book for further reading on design patterns.</p>
</blockquote>
<p>I'll repeat: <strong>Interfaces are a central way of encapsulating change.</strong> We can add functionality by creating a new implementation, and we can add behaviors onto existing implementations - all without affecting other areas of our codebase!</p>
<p>Once properly encapsulated, functionality can more easily be changed. Easily changed codebases increase application maintainability (they're easier to change) by reducing technical debt (we've invested time in making changes easier to accomplish).</p>
<p>That was quite a lot on the topic of interfaces. Hopefully that helped clarify some of the important use cases of interfaces, and gave you a taste of how some design patterns make use of them to help make our applications more maintainable.</p>
<h2>Ports and Adapters</h2>
<p>Now, finally we can begin to discuss the meat of Hexagonal Architecture.</p>
<p>Hexagonal Architecture, a layered architecture, is also called the Ports and Adapters architecture. This is because it has the concept of different ports, which can be adapted for any given layer.</p>
<p>For example our framework will &quot;adapt&quot; a SQL &quot;port&quot; to any number of different SQL servers for our application to use. Similarly, we can create interfaces at key points in our application for other layers to implement. This lets us create multiple adaptations for those interfaces as needs change, and for testing. This is also the basis for decoupling our code between layers.</p>
<blockquote>
<p>Creating interfaces for portions of our application that may change is a way to encapsulate change. We can create new implementations, or add more features around an existing implementation, as needed with strategic use of interfaces.</p>
</blockquote>
<p>Before returning to the concept of Ports and Adapters, let's go over the layers of the Hexagonal Architecture.</p>
<p><img src="https://speakerd.s3.amazonaws.com/presentations/de8629f0bf520131c2e20239d959ba18/slide_11.jpg?1400675141" alt="hexagonal architecture" /></p>
<h3>Layers: Code</h3>
<p>The Hexagonal Architecture can describe an application in multiple layers.</p>
<p>The goal of describing the architecture in layers is to make <em>conceptual</em> divisions across functional areas of an application.</p>
<p>The code within the layers (and at their boundaries) should describe how the layers communicate with each other. Because layers act as ports and adapters for the layers inside and surrounding them, describing the communication between them is important.</p>
<p>Layers communicate with each other using interfaces (ports) and implementations (adapters).</p>
<p>Each layer has two elements:</p>
<ol>
<li>The Code</li>
<li>The Boundary</li>
</ol>
<p>The <strong>code</strong> inside of a layer is just what it sounds like - actual code, doing things. Often times this code acts as adapters to ports defined in other layers, but it can also be any code we need (business logic or other services).</p>
<p>Each layer also has a <strong>boundary</strong> between itself and an outside layer. At the boundary we find our &quot;ports&quot;. These ports are interfaces that the layer defines. These interfaces define how outside layers can communicate to the current layer. We'll go into this in more detail.</p>
<h4>Domain Layer</h4>
<p>The inner-most layer is the Domain Layer. This layer contains your business logic and defines how the layer outside of it can interact with it.</p>
<p>Business logic is central to your application. It can also be described as 'policy' - rules your code must follow.</p>
<p>The domain layer and its business logic define the behavior and constraints of your application. It's what makes your application different from others. It's what gives your application value.</p>
<p>If you have an application with a lot of behavior, your application can have a rich domain layer. If your application is more of a thin layer on top of a database (many are!), this layer might be &quot;thinner&quot;.</p>
<p>In addition to business logic (the Core Domain), we often also find supporting domain logic within the Domain Layer, such as Domain Events (events fired at important points in the business logic) and use-cases (definitions of what actions can be taken on our application).</p>
<p>What goes inside of the Domain Layer is the subject of books by themselves - especially if you are interested in Domain Driven Design, which goes into much detail on how to create applications that closely match the real business processes you are codifying.</p>
<p>Some &quot;Core&quot; Domain Logic:</p>
<pre><code>&lt;?php  namespace Hex\Tickets;

class Ticket extends Model {

   public function assignStaffer(Staffer $staffer)
   {
       if( ! $staffer-&gt;categories-&gt;contains( $this-&gt;category ) )
       {
           throw new DomainException(&quot;Staffer can't be assigned to &quot;.$this-&gt;category);
       }

       $this-&gt;staffer()-&gt;associate($staffer); // Set Relationship

       return $this;
   }

   public function setCategory(Category $category)
   {
       if( $this-&gt;staffer instanceof Staffer &amp;&amp;
           ! $this-&gt;staffer-&gt;categories-&gt;contains( $category ) )
       {
           // Unset staffer if can't be assigned to set category
           $this-&gt;staffer = null;
       }

       $this-&gt;category()-&gt;associate($category); // Set Relationship

       return $this;
   }
}
</code></pre>
<p>Above we can see a <strong>constraint</strong> within the <code>assignStaffer</code> method. If the provided Staffer is not assigned a Category under which this Ticket falls, we throw an exception.</p>
<p>We also see some <strong>behavior</strong>. If the Category of the Ticket is changed, and the current Staffer is unable to be assigned a Ticket of this Category, we unset the Staffer. We do not throw an exception - instead we allow the opportunity to set a new Staffer when the Category is changed.</p>
<p>These are both examples of business logic being enforced. In one scenario, we set a constraint by throwing an error when something is set incorrectly. In another scenario, we provide behavior - once a Category is changed, users must have the opportunity to re-assign a Staffer who is able to handle that Category of Ticket.</p>
<p>Inside of the Domain layer, we may also see some supporting Domain Logic:</p>
<pre><code>class RegisterUserCommand {

	protected $email;
	
	protected $password;
	
	public function __construct($email, $password)
	{
		// Set email and password
	}
	
	public function getEmail() { ... } // return $email
	
	public function getPassword() { ... }	// return $password
}

class UserCreatedEvent {
	
	public function __construct(User $user) { ... }
}
</code></pre>
<p>Above we have some supporting (but very important) domain logic. One is a Command (aka a Use Case) which defines a way in which our application can be used. It simply takes in the data needed to create a new user. We'll see how that's used later.</p>
<p>Another is an example Domain Event, which our application might dispatch after a user is created. These represent important occurrences within the domain, and so belong in the Domain Layer. They are not the system events often found within the plumbing of our frameworks, such as &quot;pre-dispatch&quot;, often used as hooks in case framework behavior needs to be extended.</p>
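<p>As a sketch of how an entity might raise such an event (the <code>flushEvents</code> method matches its use in the Handler example further down; <code>recordEvent</code> and the property names are assumptions for illustration):</p>

```php
<?php

// A hypothetical Domain Event holding the entity it concerns
class UserCreatedEvent {
    public $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }
}

// A hypothetical entity that records Domain Events as they occur
class User {
    private $pendingEvents = [];

    public function __construct()
    {
        // Record the event now; the Application Layer dispatches it later
        $this->recordEvent(new UserCreatedEvent($this));
    }

    private function recordEvent($event)
    {
        $this->pendingEvents[] = $event;
    }

    // Return and clear pending events so each is dispatched only once
    public function flushEvents()
    {
        $events = $this->pendingEvents;
        $this->pendingEvents = [];
        return $events;
    }
}
```

<p>The entity itself never dispatches anything - it only records what happened, leaving the dispatching to an outer layer.</p>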
<h4>Application Layer</h4>
<p>Just outside of the Domain Layer sits the Application layer. This layer orchestrates the use of the entities found in the Domain Layer. It also adapts requests from the Framework Layer to the Domain Layer by sitting between the two.</p>
<p>For example, it might have a handler class handle a use-case. This handler class in the Application Layer would accept input data brought in from the Framework Layer and perform the actions needed to accomplish the use-case.</p>
<p>It might also dispatch Domain Events raised in the Domain Layer.</p>
<p>This layer represents the outside layer of the code that makes up the application.</p>
<p>Of course, you can see that outside of the Application Layer sits the &quot;Framework Layer&quot;. The Framework layer contains code that helps your application (perhaps by accepting an HTTP request or sending an email), but is <strong>not</strong> your application itself.</p>
<h4>Framework Layer</h4>
<p>The Framework Layer sits outside of the Application Layer. It contains code that your application uses but it is not actually your application. This is often literally your framework, but can also include any third-party libraries, SDKs or other code used. Think of all the libraries you bring in with Composer (assuming you use PHP). They are not your framework, but they do act in the same layer - performing tasks to handle application needs.</p>
<p>The Framework Layer implements services defined by the application layer. For example, it might implement a notification interface to send emails or SMS. Your application knows it needs to send notifications, but it may not need to care how they are sent (email vs SMS for example).</p>
<pre><code>class SesEmailNotifier implements Notifier {

	public function __construct(SesClient $client) { ... }
	
	public function notify(Message $message)
	{
		$this-&gt;client-&gt;sendEmail([ ... ]); // Send email with SES particulars
	}
}
</code></pre>
<p>Another example is an event dispatcher. Inside the Framework Layer might be code to implement an event dispatcher interface defined in the Application Layer. Again, the application knows it has events to dispatch, but it doesn't necessarily need its own dispatcher - our framework likely already has one, or we might pull in a library to handle the implementation details of dispatching events.</p>
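<p>To sketch that idea (the interface and class names here are illustrative, not from any particular framework): the Application Layer defines the port, and the Framework Layer supplies the adapter.</p>

```php
<?php

// Port: defined by the Application Layer
interface Dispatcher {
    public function dispatch(array $events);
}

// Adapter: lives in the Framework Layer; a real one would delegate to
// the framework's event system - an in-memory version stands in here
class InMemoryDispatcher implements Dispatcher {
    private $listeners = [];

    // Register a listener for a given event class
    public function listen($eventClass, callable $listener)
    {
        $this->listeners[$eventClass][] = $listener;
    }

    // Call every listener registered for each event's class
    public function dispatch(array $events)
    {
        foreach ($events as $event) {
            $class = get_class($event);
            foreach (isset($this->listeners[$class]) ? $this->listeners[$class] : [] as $listener) {
                $listener($event);
            }
        }
    }
}
```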
<p>The Framework Layer also adapts requests from the outside to our Application Layer. For example, it's responsible for accepting HTTP requests, gathering user input and routing this request/data to a controller. The Framework Layer can then call an application use-case, pass it the input data, and have the application handle the use case (rather than handling it itself inside of a controller).</p>
<p>In that way, the framework can sit between all requests made on the application externally (us, sitting there using a browser) and the application itself (Application Layer and deeper). The Framework Layer adapts raw requests into our application.</p>
<h3>Communication Between Layers: Boundaries</h3>
<p>Now that we've seen what code goes inside each layer, let's talk about an interesting part of each layer: how they communicate with each other.</p>
<p>As mentioned, each layer also defines how other layers can communicate with it. Specifically, each layer is responsible for defining how the next outside layer <em>can</em> communicate with it.</p>
<p>The tool for this is the interface. <strong>At each layer boundary, we find interfaces.</strong> These interfaces are the ports for the next layer to create adapters for.</p>
<p>We saw this in the notifier and event dispatcher examples above.</p>
<p>The Application Layer will implement interfaces (make adapters of the ports) defined in the Domain Layer. It will also contain code for other concerns it may have.</p>
<p>Let's go through each layer's boundary and see how this works.</p>
<h4>Domain Layer</h4>
<p>At the boundary of our Domain Layer, we find definitions of how the outside layer (the Application Layer) can communicate with the domain objects/entities found in the Domain Layer.</p>
<p>For example, our Domain Layer might contain a command (use case).  Above, we saw an example <code>RegisterUserCommand</code>. This command is pretty simple - you might call it a simple DTO (Data Transfer Object).</p>
<p>Our Domain Layer defines Use Cases, but its job is just to say &quot;This is how you can use me&quot;. Remember, the Application Layer is responsible for orchestrating the Domain Layer code in order to accomplish a task. So, we have communication across boundaries - the Domain Layer defines how it should be used, and the Application Layer uses those definitions (in part) to accomplish the defined use cases.</p>
<p>Our Application Layer, therefore, needs to know how to handle this command to register a user. Since we have communication across these layers, let's define an interface &quot;at the boundary&quot; of the Domain Layer:</p>
<pre><code>interface CommandBus {

    public function execute($command);
}
</code></pre>
<p>So, we've told our Application Layer how to &quot;execute&quot; a command, using a Command Bus. The Command Bus is simple - it just needs an <code>execute</code> method available so that implementations can process a Command.</p>
<p>Our Domain Layer contains this CommandBus interface, so that our Application Layer can implement the CommandBus interface. The interface is the port, and the implementations of it are the adapters to that port.</p>
<p>Cognitively, we've done a few things:</p>
<ol>
<li>Recognized that CommandBus may be processed in a few ways</li>
<li>Recognized that because of that, we may have multiple implementations of the CommandBus</li>
<li>Recognized that since our Application Layer orchestrates the use of the Domain Layer, it makes sense for the CommandBus implementation(s) to exist in the Application Layer (they'll orchestrate the use of the Commands defined in the Domain Layer).</li>
<li>Decoupled the Domain Layer from the Application Layer by using an interface, which also defines communication between the layers - Command Buses have a defined way to execute commands.</li>
</ol>
<h4>Application Layer</h4>
<p>So our Application Layer can implement a Command Bus. That's right in the middle of this layer - implementations (adapters) to other layers.</p>
<p>However, the Application Layer has communication needs of its own. The Application Layer might need to send a notification to a user. The framework has the tools to do so - it can send emails, and we can pull in libraries to send SMS messages or use other notification transports. The framework is a good place to implement our notification needs.</p>
<p>So, we have communication between layers. The Application Layer needs to send a notification, and we know it can use libraries in the Framework layer to do so. You know what's coming up: another interface!</p>
<pre><code>interface Notifier {

	public function notify(Message $message);
}
</code></pre>
<p>The Application Layer is defining how it will be communicating to the Framework Layer. In fact, it's defining how it will use the Framework Layer, without actually coupling to it. Interfaces (ports) and implementations (adapters) give us the freedom to change the adapters. We don't tie our application to the Framework Layer in this way.</p>
<p>The interface defined &quot;at the boundary&quot; of the Application Layer is defining how the Application Layer will communicate with the Framework Layer.</p>
<blockquote>
<p>This is very much conceptual and is not meant to be taken as concrete rules. If you find yourself asking &quot;What if my <strong>Domain Layer</strong> needs a third party library found in the framework?&quot;, fear not!</p>
<p>If that's a need, then define an interface and implement it using that library! You have to make your code work after all - worrying about breaking the &quot;rules&quot; from some dude or dudette on the internet won't get you anywhere!</p>
<p>The key point is to make sure to decouple concerns (hint: You're doing so by defining an interface) so functionality is easy to switch/modify later. There's no need to live and die by what I write here. There's no &quot;doing it wrong&quot;. To repeat: <strong>There's no doing it wrong</strong>. There's just varying levels of severity in how you shoot yourself in the foot.</p>
<p>A good starting place to read up on the topic of requiring <a href="https://groups.google.com/forum/#!topic/dddinphp/YGogT1NSbO0">third party libraries in your &quot;domain layer&quot;</a> is in this linked thread.</p>
</blockquote>
<h4>Framework Layer</h4>
<p>So far we've seen the boundary in our Domain Layer and in our Application Layer. These two layers both communicate with layers under our control. The Domain Layer communicates with the Application Layer. The Application Layer communicates with the Framework Layer. Who does the Framework Layer communicate with?</p>
<p>The outside world! That world is one filled with protocols - mostly TCP based protocols (such as HTTP/HTTPS). There is certainly lots of code in the framework layer (all the libraries we use), as well as some code we write ourselves, such as controller code and implementations of interfaces defined in the Application Layer.</p>
<p>What exactly is at the boundary of the Framework Layer and the &quot;layer&quot; outside of it, however? Well, more interfaces (and implementations of those interfaces), of course!</p>
<p>Most of our frameworks have code that takes care of talking to the outside world - HTTP implementations, various SQL implementations, various email implementations, and so on.</p>
<p>Luckily, for the most part, we don't have to care about the boundary between the framework layer and the outside world. That's the framework's concern; our benevolent framework creators have taken care of this for us.</p>
<p>This is, arguably, the whole point of a framework. Frameworks provide tools for us to communicate with the world outside of our application, so that we don't need to write that boilerplate code ourselves.</p>
<p>We don't usually need to add to the Framework Layer at its boundary, but of course this isn't always the case. If we're building an API, HTTP-level concerns become an issue we need to work through. This usually means implementing <a href="http://en.wikipedia.org/wiki/Cross-origin_resource_sharing">CORS</a>, HTTP caching, HATEOAS and other specifics in how our application handles HTTP-level requests - concerns that are important to our application, but aren't likely concerns of the Domain Layer or even the Application Layer.</p>
<h2>Use-Cases/Commands</h2>
<p>Earlier in this writing, I've made mention of &quot;Use Cases&quot; and &quot;Commands.&quot; Let's go deeper into what these are.</p>
<p>Hexagonal Architecture isn't just about communication between layers on the micro level (interfaces, implementations for ports and adapters). There's also a concept of the <strong>Application Boundary</strong>, a macro-level concept.</p>
<blockquote>
<p>This boundary separates our application as a whole from everything else (both framework and communication with the outside world).</p>
</blockquote>
<p>We can strictly define how the outside world can communicate with our application. We do this explicitly by creating &quot;Use Cases&quot; (also called &quot;Commands&quot;). These are essentially classes that name actions that can be taken. For example, our <code>RegisterUserCommand</code> defines that our application can register a user. An <code>UpdateBillingCommand</code> might be the code path defined for us to update a user's billing information.</p>
<blockquote>
<p>A Use Case (Command) is an explicitly defined way in which an application can be used.</p>
</blockquote>
<p>Defining Use Cases has some useful side effects. For example, we can clearly and explicitly see how our application &quot;wants&quot; to be interacted with. This can strictly follow the business logic that our application needs to perform. Use Cases are also useful for clarity amongst a team of developers. We can plan use cases ahead of time, or add them as needed, but it becomes harder to create odd logic that lives outside of use cases and doesn't fit the business logic.</p>
<h3>How do we define use cases?</h3>
<p>We saw some examples already - What we can do is create objects representing an application use case. We’ll call such an object a &quot;Command.&quot; These commands can then be processed by our application &quot;Command Bus&quot;, which will call a command &quot;Handler&quot; to orchestrate the execution of the use case.</p>
<p>So, we have three actors in command processing:</p>
<ol>
<li>A Command</li>
<li>The Command Bus</li>
<li>A Handler</li>
</ol>
<p>A Command Bus accepts a Command in its <code>execute</code> method.  It then does some logic to find and instantiate a Handler for that Command. Finally, the Handler's <code>handle</code> method is called, running the logic to fulfill the Command.</p>
<pre><code>class SimpleCommandBus implements CommandBus {

	// Other methods removed for brevity
	
	public function execute($command)
	{
		return $this-&gt;resolveHandler($command)-&gt;handle($command);
	}
}
</code></pre>
<p>Notice that we are taking coordinating logic we often see within a controller and moving it into a Handler. This is good, as we want to decouple from our Framework Layer, giving us the benefit of protecting our application from changes in the framework as much as possible (another form of maintainability), and allowing us to run the same code in other contexts (CLI, API calls, etc).</p>
<p>The main benefit of use cases is that we create an avenue to re-use code run in multiple contexts (web, API, CLI, workers, etc).</p>
<p>For example, the code to create a new user in web, API and CLI can be almost exactly the same:</p>
<pre><code>public function handleSomeRequest()
{
	try {
		$registerUserCommand = new RegisterUserCommand( 
			$this-&gt;request-&gt;username, $this-&gt;request-&gt;email, $this-&gt;request-&gt;password );
			
		$result = $this-&gt;commandBus-&gt;execute($registerUserCommand);
		
		return Redirect::to('/account')-&gt;with([ 'message' =&gt; 'success' ]);
	} catch( \Exception $e )
	{
		return Redirect::to('/user/add')-&gt;with( [ 'message' =&gt; $e-&gt;getMessage() ] );
	}
}

</code></pre>
<p>What might change between contexts is how we get user input and pass it into the command, as well as how we handle errors - but those are mostly framework-level concerns. Our application code doesn't need to care if it's being used in an HTTP browser request, an HTTP API request or any other request type.</p>
<p>That's where we see the potential of Use Cases. We can re-use them in every context our application can be used (HTTP, CLI, API, AMQP or queue messaging, etc)! Additionally, we've firmly set up a boundary between a framework and our application. The application can, potentially, be used separately from our framework.</p>
<p>That being said, we still might use a framework to implement some application level needs, such as validation, event dispatching, database access, email drivers and many other things our frameworks can do for us! The Use-Case application boundary is just one aspect of Hexagonal Architecture.</p>
<blockquote>
<p>Use Case/Command's main benefit is keeping code DRY - we can re-use the same use case code in multiple contexts (web, API, CLI, etc).</p>
</blockquote>
<p><img src="https://speakerd.s3.amazonaws.com/presentations/de8629f0bf520131c2e20239d959ba18/slide_24.jpg?1400633450" alt="" /></p>
<p>Use Cases also serve to further decouple your application from the framework. This gives some protection from framework changes (upgrades, etc) and also makes testing easier.</p>
<p>Taken to an extreme, you can potentially switch frameworks without re-coding your application. However, I consider this the <strong>edgiest edge case to ever case edges</strong>. It's neither a realistic nor a worthy goal. We want our applications to be easy to work with, not to fulfill some arbitrary metric or rare use case.</p>
<h3>Example Command, Command Bus and Handler</h3>
<p>First, our application knows it needs Commands. It also knows it needs a Bus to execute the commands. Finally, we need a Handler to orchestrate the execution of the command.</p>
<p>Commands are, in a sense, arbitrary. Their purpose is simply to exist. Their mere existence fulfills the role of defining how an application should be used. The data they demand tells us what data is needed to fulfill the command. So, we don't really need to interface a Command. They are simply a name (a description) and a DTO (data transfer object).</p>
<pre><code>class RegisterUserCommand {

    public function __construct($username, $email, $password)
    {
        // set data here
    }

    // define getters here
}
</code></pre>
<p>So, our Command used to register a new user is quite simple. All at once, we provide an explicit definition of one way our application can be used, and what data should accompany that command.</p>
<p>Our Handlers are a bit more complex. They are coupled to a Command in that they expect the data from a Command to be available. This is a spot of tight coupling. Changing some business logic may result in changing the Handler, which may result in changing the Command. As these are all concerns of the all-important business logic, this tight coupling within Domain concerns is deemed &quot;OK&quot;.</p>
<p>While Commands are simple DTOs (containing various data), Handlers have behavior, which the Command Bus makes use of. The Handlers, being in the Application Layer, orchestrate the use of Domain entities to fulfill a Command.</p>
<p>The CommandBus used to execute a command must be able to execute all Handlers, and so we'll define a Handler interface to ensure the Command Bus always has something it can work with.</p>
<pre><code>interface Handler {

    public function handle($command);
}
</code></pre>
<p>Handlers, then, must have a <code>handle</code> method, but are free to handle the fulfillment of the Command in any way they need. For our <code>RegisterUserCommand</code>, let's take a look at what its Handler might look like:</p>
<pre><code>class RegisterUserHandler implements Handler {

	// Assume $this-&gt;auth and $this-&gt;dispatcher are injected via the constructor
	public function handle($command)
	{
		$user = new User;
		$user-&gt;username = $command-&gt;username;
		$user-&gt;email = $command-&gt;email;
		$user-&gt;password = $this-&gt;auth-&gt;hash($command-&gt;password);
		
		$user-&gt;save();
		
		$this-&gt;dispatcher-&gt;dispatch( $user-&gt;flushEvents() );
		
		// Consider also returning a DTO rather than a class with behavior
		// So our &quot;view&quot; layers (in whatever context) can't accidentally affect
		// our application - it can just read the results
		return $user-&gt;toArray(); 
	}
}
</code></pre>
<p>We can see that our handler orchestrates the use of some Domain Entities, including assigning data, saving it and dispatching any raised events (if our entities happen to raise events).</p>
<blockquote>
<p>Similar to our interface example where we used a Decorator to add some extra behavior to our notifier, consider what behaviors might be useful to add to a CommandBus or Handler.</p>
</blockquote>
<p>Lastly, we'll discuss the most interesting of our three actors - the Command Bus.</p>
<p>The Command Bus can have multiple implementations. For example, we can use a synchronous Command Bus (running commands as they are received) or perhaps we can create a queue Command Bus, which runs all queued commands only when the queue is flushed. Or perhaps we choose to create an asynchronous Command Bus, which fires jobs into a worker queue, to be worked on as the jobs are received, out of band of the current user's request.</p>
<p>Since we have multiple possible implementations, we'll interface the Command Bus:</p>
<pre><code>interface CommandBus {

    public function execute($command);
}
</code></pre>
<p>We've seen a simple implementation of this already. Let's see it a bit more fleshed out:</p>
<pre><code>class SimpleCommandBus implements CommandBus {

	public function __construct(Container $container, CommandInflector $inflector)
	{
		$this-&gt;container = $container;
		$this-&gt;inflector = $inflector;
	}
	
	public function execute($command)
	{
		return $this-&gt;resolveHandler($command)-&gt;handle($command);
	}
	
	public function resolveHandler($command)
	{
		return $this-&gt;container-&gt;make( $this-&gt;inflector-&gt;getHandlerClass($command) );
	}
}
</code></pre>
<p>The <code>CommandInflector</code> can use any strategy to derive a Handler from a Command class. For example, <code>return str_replace('Command', 'Handler', get_class($command));</code> is effective. It's simple, and only requires that you keep a certain directory structure for Handlers and Commands (assuming PSR-style autoloading). How you accomplish this is up to you and your project's needs.</p>
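<p>A minimal sketch of that strategy (the <code>CommandInflector</code> interface matches its use in the constructor above; the concrete <code>ClassNameInflector</code> name is an assumption):</p>

```php
<?php

interface CommandInflector {
    public function getHandlerClass($command);
}

// Derives "RegisterUserHandler" from a RegisterUserCommand instance
// by naming convention, relying on autoloading to locate the class
class ClassNameInflector implements CommandInflector {
    public function getHandlerClass($command)
    {
        return str_replace('Command', 'Handler', get_class($command));
    }
}
```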
<p>What else might we use besides a &quot;Simple&quot; Command Bus? We might instead call it a &quot;SynchronousCommandBus&quot;, as it processes commands as they come - synchronously. This implies that we might also consider creating an AsynchronousCommandBus. Instead of processing commands directly, it might pass them into a worker queue to be processed whenever the job is reached, out of band of the current request.</p>
<p>In addition to different implementations of the Command Bus, we can also add onto our existing ones with more Decorators. For example, I find it useful to wrap some validation around a Command Bus, so that it attempts to validate the Command data before processing it.</p>
<pre><code>class ValidationCommandBus implements CommandBus {

	public function __construct(CommandBus $bus, Container $container, CommandInflector $inflector) { ... }

	public function execute($command)
	{
		$this-&gt;validate($command);
		return $this-&gt;bus-&gt;execute($command);
	}

	public function validate($command)
	{
		$validator = $this-&gt;container-&gt;make($this-&gt;inflector-&gt;getValidatorClass($command));
		$validator-&gt;validate($command); // Throws exception if invalid
	}
}
</code></pre>
<p>The <code>ValidationCommandBus</code> is a decorator - it does the validation, and then passes the Command off to another Command Bus to execute it. The next Command Bus might be another decorator (perhaps a logger?) or might be the actual Bus doing the processing.</p>
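<p>A logging decorator, for instance, might look like this (a sketch; the <code>CommandBus</code> interface is re-declared from above for self-containment, and using a plain callable as the logger is an assumption made for brevity):</p>

```php
<?php

interface CommandBus {
    public function execute($command);
}

// Decorator: logs each command, then delegates to the wrapped bus
class LoggingCommandBus implements CommandBus {
    private $bus;
    private $log;

    public function __construct(CommandBus $bus, callable $log)
    {
        $this->bus = $bus;
        $this->log = $log;
    }

    public function execute($command)
    {
        call_user_func($this->log, sprintf('Executing %s', get_class($command)));
        return $this->bus->execute($command);
    }
}
```

<p>Because it takes and implements the same <code>CommandBus</code> interface, it can wrap the validation bus, the simple bus, or any other decorator in the chain.</p>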
<p>So, combined with different types of Command Buses and possible behaviors we can add on top of our Command Buses, we have a pretty powerful way to handle our application Commands (Use Cases) being called!</p>
<p>And these ways are insulated as much as possible from layers outside of the Application. The Framework Layer (and beyond) does not dictate how the application is used - the application itself dictates its usage.</p>
<h2>Dependencies</h2>
<p>Not explicitly mentioned was the notion of dependencies. Hexagonal Architecture espouses a one-way flow of dependencies: From the outside, in. The Domain Layer (the inner-most layer) should not depend on layers outside of it. The Application Layer should depend on the Domain Layer, but not on the Framework Layer. The Framework Layer should depend on the Application Layer, but not on externalities.</p>
<p>We talked about interfaces as the primary means to encapsulate change. These let us define how communication between layers is accomplished within our application without coupling the layers together. Thinking about dependencies is another way of saying the same thing. Let's see how.</p>
<h3>Dependencies: Moving In</h3>
<p>When our data/logic is flowing &quot;in&quot;, dependencies are easier to visualize. If an HTTP request reaches our server, we need code to handle it, otherwise nothing happens. External HTTP requests require our Framework Layer to interpret a request to code. If our Framework interprets a request and routes it to a controller, the controller needs something to act upon. Without our Application Layer, it has nothing to do. The Framework Layer depends on the Application Layer. Our Application Layer needs the Domain Layer in order to have something to orchestrate - it depends on the Domain Layer in order to fulfill a request. The Domain Layer (for the most part) depends on the behavior and constraints found within itself.</p>
<p>When we talk about a request starting from the outside, and the flow of code for handling a request moving inward, dependencies are fairly easy to spot. The outside layers depend on the inside layers, but they can also be ignorant of what the inner layers are doing - they just need to know the methods to call and the data to pass. Implementation details are safely encapsulated away in their proper place. Our use of interfaces between layers has seen to that.</p>
<h3>Dependencies: Going Out (IoC)</h3>
<p>Dependencies going out are a little more complex. This describes what our application does in response to a request, both while processing it and when responding to it. Let's look at some examples:</p>
<p>Our Domain Layer will likely need database access to create some domain entities. This means our application &quot;depends&quot; on some sort of data storage.</p>
<p>Our Application Layer may need to send a notification when it finishes a task. If this is implemented as an email notification (for example, if we're using SES), then our Application Layer can be said to depend on SES for sending a notification email.</p>
<p>We can see here that, conceptually, our inner layers depend on things found in the layers outside of them! How do we invert this?</p>
<p>Of course I used the term &quot;invert&quot; on purpose. This is the point of &quot;<strong>Inversion of Control</strong>&quot;, closely related to the &quot;D&quot; (Dependency Inversion) in SOLID. Here again, our interfaces serve us well.</p>
<p>We Invert Control by using Interfaces. These allow our layers to inform other layers how they will be interacted with, and how they need to interact with other layers. It's up to the other layers to implement these interfaces. In this way, we are letting our inner layers dictate how they are used. This is Inversion of Control.</p>
<p>Our Domain Layer can define an interface for a repository class. This Repository Class will be implemented in another layer (Likely up in the Framework Layer with our Framework database classes). By using an interface, however, we are decoupling our Domain Layer from the specific persistence type used: We have the potential to change persistence models when we test, or if project needs dictate an actual technological change (lucky you, if you get to scale high).</p>
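<p>A sketch of such a repository port and one possible adapter (the names are illustrative; an in-memory adapter stands in for a real database-backed one, which is also what makes this pattern handy for testing):</p>

```php
<?php

// Port: defined in the Domain Layer, ignorant of any persistence technology
interface UserRepository {
    public function find($id);
    public function save($user);
}

// Adapter: implemented in an outer layer; here, a simple array-backed store
class InMemoryUserRepository implements UserRepository {
    private $users = [];

    public function find($id)
    {
        return isset($this->users[$id]) ? $this->users[$id] : null;
    }

    public function save($user)
    {
        $this->users[$user->id] = $user;
    }
}
```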
<p>It's a similar situation with the Notifier. Our Application Layer knows it needs to send out notifications. That's why we created the Notifier interface. Our application doesn't need to know how the notifications are sent - it just needs to define how it's to be interacted with. And so our Notifier interface, defined in the Application Layer, is implemented by an email notifier (perhaps using SES) in our Framework Layer. The Application Layer has inverted control by using an interface; it has told the outside layer how it's going to be used. The layers are decoupled, as implementations can easily be switched.</p>
<p>So, when our logic is flowing from the &quot;inside, out&quot;, we make use of our interfaces again. We employ Inversion of Control so that dependencies keep flowing in one direction. We decouple our inner layers from the outside layers, while still making use of them!</p>
<p><img src="https://speakerd.s3.amazonaws.com/presentations/de8629f0bf520131c2e20239d959ba18/slide_21.jpg?1400675141" alt="flow of dependencies" /></p>
<h2>Conclusion</h2>
<p>This covers a lot of material! What you read here is the result of a lot of code architecture study. Instead of being very specific, it's a bit on the general side - we're dealing with concepts here, instead of concrete &quot;do it this way&quot; type rules.</p>
<p>Overall, Hexagonal Architecture is a description of &quot;good&quot; code practice. It's not a specific way to go about coding applications. Its concepts work for opinionated frameworks, as well as for the &quot;no framework&quot; crowd.</p>
<p>Hexagonal Architecture is another way to look at the same old rules we're reading about as we learn more about code architecture.</p>
&lt;script async class="speakerdeck-embed" data-id="de8629f0bf520131c2e20239d959ba18" data-ratio="1.33333333333333" src="//speakerdeck.com/assets/embed.js">&lt;/script>
<iframe width="100%" height="315" src="//www.youtube.com/embed/6SBjKOwVq0o" frameborder="0" allowfullscreen></iframe>
]]></content:encoded>
      <pubDate>Sat, 28 Jun 2014 14:50:59 +0000</pubDate>
    </item>
    <item>
      <title>Vaprobash</title>
      <link>https://fideloper.com/on-vaprobash</link>
      <description>Since Ubuntu 14.04 is released, and most of the kinks are worked out, I wanted to let you know how I see Vaprobash moving forward.</description>
      <content:encoded><![CDATA[<p>Since Ubuntu 14.04 is released, and most of the kinks are worked out, I wanted to let you know how I see Vaprobash moving forward.</p>
<h2>A New Repository</h2>
<p>Vaprobash is moving onto <strong>Ubuntu 14.04 LTS</strong>, where it will stay until 16.04 LTS (if it's still relevant then).</p>
<p>Some people still need Ubuntu 12.04, and so they will still be able to use it! Going forward, there will be two development tracks (two repositories): <code>14.04</code> and <code>12.04</code>. I'll be spending most of my time on <code>14.04</code>. The <code>12.04</code> repository is now officially in &quot;maintenance&quot; mode. PRs and bug fixes are still welcome.</p>
<p>The <a href="https://github.com/fideloper/Vaprobash">fideloper/vaprobash</a> repository is now at <strong>Ubuntu 14.04 LTS</strong>. You can, as always, grab the <code>Vagrantfile</code> by using <a href="http://bit.ly/vaprobash">http://bit.ly/vaprobash</a>:</p>
<pre><code>wget -O Vagrantfile http://bit.ly/vaprobash
</code></pre>
<p>For those who need to use <strong>Ubuntu 12.04 LTS</strong> (perhaps if you need to test against php 5.3 or php 5.4), there is the <a href="https://github.com/fideloper/vaprobash12">fideloper/vaprobash12</a> repository. The <code>Vagrantfile</code> for this repository is located at <a href="http://bit.ly/vaprobash12">http://bit.ly/vaprobash12</a>:</p>
<pre><code>wget -O Vagrantfile http://bit.ly/vaprobash12
</code></pre>
<h2>The Goal</h2>
<p>Vaprobash is meant to get you a development server up and running, as quickly and painlessly as possible.</p>
<p>Its secondary goal (along with <a href="http://serversforhackers.com">serversforhackers.com</a>) is to help you learn how to set up servers yourself. To this end, Vaprobash uses bash scripts, which are reasonably clear in showing each step of installing various pieces of software onto an Ubuntu server.</p>
<p>The hope is that by not obscuring (and complicating) the install process with a provisioner of some sort (Ansible, Chef, Puppet), people can <strong>learn</strong> by seeing, copying and tinkering on their own.</p>
<p>I want people to learn enough to grow out of Vaprobash.</p>
<h2>Stability</h2>
<p>Vaprobash gets a lot of pull requests. I love this - it's a sure sign that it's simple enough for people to grasp and modify.</p>
<p>However, this has a downside: at any point in time, there's usually a bug somewhere, as well as some new feature, intermingled in the <code>develop</code> or <code>master</code> branches. This makes it hard to pinpoint a release.</p>
<p>Furthermore, PRs regularly introduce bugs; they aren't always thoroughly tested. I can understand that - provisioning a server over and over to test all the various use cases is a pain. It's a slow process, and there are a ton of variations (depending on what people choose to provision).</p>
<p>As of yet, there isn't a great testing process. Even with the magic of Docker to make relatively quick, repeatable tests, testing would still be hard. It's not enough to test that Nginx was installed; we also need to test that it's working (running), that its virtual servers are configured, and probably other factors as well. Tests could also be misleading due to settings changed by other provisioning scripts - they are not all self-contained. For example, Apache and Nginx need to be aware of whether PHP was installed.</p>
<blockquote>
<p>If anyone has a good solution for this, I'm all ears. I think some sort of Docker-based testing would be <strong>amazing</strong>.</p>
</blockquote>
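<p>The layers of checks described above can be sketched as small bash helpers - the function names here are hypothetical, purely to illustrate that a useful test has to go beyond &quot;is it installed?&quot;:</p>

```shell
# Hypothetical helpers for a Vaprobash test harness. Each check layers
# on the previous one: installed -> running -> actually serving.
check_installed() { command -v "$1" > /dev/null; }                 # binary exists?
check_running()   { pgrep -x "$1" > /dev/null; }                   # process alive?
check_http()      { curl -fso /dev/null "http://localhost:$1/"; }  # serving requests?

# A smoke test for an Nginx provision might then be:
#   check_installed nginx && check_running nginx && check_http 80
```

<p>Even checks like these leave the cross-script interactions (such as the PHP detection mentioned above) untested, which is why full-provision, Docker-based tests would be so valuable.</p>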
<h2>Going Forward</h2>
<p>Moving forward, I want to start using a <a href="http://semver.org/">tagging/versioning</a> system for both repositories. This will lend some stability to those who need it.</p>
<p>Tagging versions will allow users to &quot;freeze&quot; Vaprobash, so they get the same sets of install scripts every time they provision a new server. This means that their installations are much, much less likely to be different, even as Vaprobash changes (new features, bug fixes).</p>
<p>Each repository will soon be marked as stable with a <code>1.0.0</code> release. Then bug fixes will be pulled in readily (increasing PATCH versions) while non-breaking features will increase MINOR versions. Finally new (breaking) features will increase MAJOR versions.</p>
<p>This will be similar to <a href="https://github.com/laravel/framework/releases">Laravel</a>, where there will be branches for MAJOR.MINOR releases, and tags for all MAJOR.MINOR.PATCH releases.</p>
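<p>To make the scheme concrete, here is a small bash sketch of which component of MAJOR.MINOR.PATCH each kind of change would increment (the <code>bump</code> helper is hypothetical, just for illustration):</p>

```shell
# Illustrating the versioning rules described above:
#   bug fixes      -> PATCH
#   added features -> MINOR (resets PATCH)
#   breaking changes -> MAJOR (resets MINOR and PATCH)
bump() {
  local version=$1 kind=$2
  local major minor patch
  IFS=. read -r major minor patch <<< "$version"
  case $kind in
    patch) patch=$((patch + 1)) ;;
    minor) minor=$((minor + 1)); patch=0 ;;
    major) major=$((major + 1)); minor=0; patch=0 ;;
  esac
  echo "$major.$minor.$patch"
}

bump 1.0.0 patch   # -> 1.0.1
bump 1.0.1 minor   # -> 1.1.0
bump 1.1.0 major   # -> 2.0.0
```

<p>With tags in place, users could freeze to a release (for example, <code>git checkout 1.0.0</code> in a clone of the repository) and get identical install scripts every time they provision.</p>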
<p>PRs for new features will be merged more slowly. Perhaps use that extra time to keep testing your scripts :D</p>
]]></content:encoded>
      <pubDate>Fri, 09 May 2014 20:47:25 +0000</pubDate>
    </item>
    <item>
      <title>By the Way</title>
      <link>https://fideloper.com/by-the-way</link>
      <description>Often I see people in articles, comments, and chat espousing "the right way" to code. This is a reaction to that.</description>
      <content:encoded><![CDATA[<p>Often I see people in articles, comments, and chat espousing &quot;the right way&quot; to code. Other times I get asked questions starting with &quot;I'm implementing X just like your [article|book|comment|chat]&quot;. Both of these situations give me pause, because I think there is an important point often not stated: <strong>None of us are right, myself included</strong>.</p>
<p>This is because there's no one way to accomplish a goal in coding. I find this fortunate, as novice programmers would be screwed by the mental leaps needed to understand a senior programmer's methodologies. It would also halt progress of discovering new and useful patterns.</p>
<p>Instead, everyone is somewhere on a spectrum between &quot;programming novice&quot; and &quot;experienced programmer&quot;. The difference between novice and experienced is largely learned techniques for maintaining code.</p>
<p>We're all on our way to learning new techniques and ideas. <strong>What we write today, we'll throw out tomorrow</strong>. <a href="http://fideloper.com/how-we-code">This article</a>, and its expansion of ideas I touched on <a href="https://leanpub.com/implementinglaravel">in my Laravel book</a>, is one personal reflection of that.</p>
<p>So what's my point?</p>
<p>Instead of copying everything to the letter, take what you read and modify it to your needs. Try new and stupid things. Also, the depth of knowledge you'll find in a good book is better than anything you'll find in a blog article.</p>
<!-- And don't fucking argue on the internet. Everyone talks in definitives as if they know what the fuck they are talking about. They don't. Neither do I. -->
]]></content:encoded>
      <pubDate>Sun, 16 Mar 2014 02:37:17 +0000</pubDate>
    </item>
  </channel>
</rss>
