<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Al Hayes's Tech Blog]]></title><description><![CDATA[Software development and technology. ]]></description><link>https://blog.alanhayes.io/</link><image><url>https://blog.alanhayes.io/favicon.png</url><title>Al Hayes&apos;s Tech Blog</title><link>https://blog.alanhayes.io/</link></image><generator>Ghost 4.2</generator><lastBuildDate>Mon, 08 Dec 2025 05:56:46 GMT</lastBuildDate><atom:link href="https://blog.alanhayes.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[A Branching Strategy for Multiple Teams Working on a Single Repo]]></title><description><![CDATA[<p>Branching is one of those topics which doesn&apos;t get a lot of attention. It&apos;s easy to get accustomed to a simplified branching strategy while working on a small team. Cut out the frills, take some short-cuts, and encounter no issues doing so. Until you bring those</p>]]></description><link>https://blog.alanhayes.io/branching-strategy-multiple-teams-single-repo/</link><guid isPermaLink="false">607b37d055aeb5000151acef</guid><dc:creator><![CDATA[Alan Hayes]]></dc:creator><pubDate>Sun, 15 May 2022 15:35:41 GMT</pubDate><media:content url="https://blog.alanhayes.io/content/images/2022/05/brett-jordan-M3cxjDNiLlQ-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.alanhayes.io/content/images/2022/05/brett-jordan-M3cxjDNiLlQ-unsplash.jpg" alt="A Branching Strategy for Multiple Teams Working on a Single Repo"><p>Branching is one of those topics which doesn&apos;t get a lot of attention. It&apos;s easy to get accustomed to a simplified branching strategy while working on a small team. Cut out the frills, take some short-cuts, and encounter no issues doing so. 
Until you bring those bad habits with you to a more complex development environment. Particularly where multiple teams, each with their own independent releases, share the same repo.</p><p>I encountered this issue while working in a company where the dev team began small and grew rapidly. Too many engineers weren&apos;t aware of the importance of a solid branching strategy.</p><p>While only one team is working on a codebase, developing and releasing one feature at a time, not too much can go wrong when deploying those releases to Production. However, over time things changed. The team grew. Before we knew it, we had several teams working on the same codebase, each with their own independent releases and release schedules. This is where the poor branching strategy really began to creak and hinder us. </p><h2 id="git-flow">Git Flow</h2><p>I recommend adopting the &quot;Git Flow&quot; branching strategy (as described here: &#xA0;<a href="http://nvie.com/posts/a-successful-git-branching-model/">http://nvie.com/posts/a-successful-git-branching-model/</a>) as a <strong>general rule</strong> for <strong>single</strong> teams, no matter how small. Making this second nature to you is good practice.</p><figure class="kg-card kg-image-card"><img src="http://nvie.com/img/git-model@2x.png" class="kg-image" alt="A Branching Strategy for Multiple Teams Working on a Single Repo" loading="lazy"></figure><p>This will keep your team&apos;s features and releases organised and prevent most problems from arising. However one exception to this can arise when your dev team grows larger, to the point where <strong>multiple teams</strong> are working on separate independent releases on the same repo. 
What was once a reliable branching strategy is now becoming tricky to navigate.</p><h2 id="the-problem-release-schedules-change-">The Problem: Release Schedules Change!</h2><p>In the real world there are so many dependencies, both internal and external, that can influence a release that it is <strong>very common</strong> for release dates to change. When this happens in a larger company with multiple teams working on separate releases in the same codebase, your branching strategy will either support your business needs or buckle underneath them.</p><p>Your teams may be entirely independent with separate Business Analysts and Product Owners, or they may share a single PO. They may have excellent cross-team communication, or none at all. It doesn&apos;t matter. No amount of communication or coordination will avoid the problems caused by an ill-fitting branching strategy when a release gets postponed and another release jumps to the front of the queue. How are you going to disentangle the releases? I&apos;ve seen this scenario cause major problems multiple times, as the limitations of a poor branching strategy suddenly come into focus. </p><p>Here is a simple example with just two teams. You can imagine how much more complex and messy it would be with three, four or even more teams:</p><ul><li><strong>Monday</strong>: Team A&apos;s feature is code complete and testing has passed on Int; they have merged their feature branch into <strong><em>#develop</em></strong>, deployed to Cert, and they are ready to deploy to Prod on <strong>Wednesday</strong>. </li><li><strong>Tuesday</strong>: Team B&apos;s feature is also code complete, and in order to test it with Team A&apos;s code (which is on <em><strong>#develop</strong></em>, and going to Prod tomorrow), they have merged <em><strong>#develop</strong></em> into their feature branch, deployed it to Int, and executed tests. 
All is looking good, and they are scheduled to go to Prod on <strong>Thursday</strong>. </li><li><strong>Wednesday</strong>: Due to an external dependency, Team A&apos;s release needs to be postponed by a week. Team B is still expected to deploy their feature to Prod on <strong>Thursday</strong>. Management does not expect there to be any issue with this, as both features are unrelated. Now Team B is in a predicament. Their feature branch already includes all of Team A&apos;s unreleased code, which is already on <em><strong>#develop</strong></em>. On top of that, all of their testing has been done on a branch which has included Team A&apos;s code. Assuming they can revert Team A&apos;s code from their branch, they will need to retest everything from scratch and promote their build through the lower environments all over again. </li><li><strong>Thursday</strong>: An engineer on Team B had to do an all-nighter Wednesday night, reverting code, manually fixing conflicts, cherry-picking changes, running tests, fixing bugs, deploying (and redeploying) to Int... all in a frantic attempt to be ready for their release today. This kind of situation should be avoided at all costs. It&apos;s stressful, unsustainable and error-prone. The release will be inherently more risky than it should be. Team B faces the difficult decision whether to go ahead with the release or to postpone it, inviting the wrong kind of attention and some difficult questions from management.</li></ul><h2 id="what-s-the-root-cause-of-the-problem">What&apos;s the root cause of the problem?</h2><p>The most common cause of this mess is the habit of merging code into <em><strong>#develop</strong></em>, (a branch shared with other teams), or even <em><strong>#master</strong></em> (!), <strong>prior</strong> to a release&apos;s deployment to Production. </p><p>I&apos;ve seen teams try to have a single <strong><em>#develop</em></strong> branch shared between the teams, and this simply hasn&apos;t worked. 
The <strong><em>#develop</em></strong> branch becomes polluted with new features, added by different teams for different releases, and preparing a release branch becomes very daunting:</p><ul><li>Creating a new release branch off <strong><em>#develop</em></strong> would require reverting the other team&apos;s code changes, which is very difficult as mentioned above.</li><li>Another option is to create the release branch off <strong><em>#master</em></strong>, and then cherry-pick the desired changes from the <strong><em>#develop</em></strong> branch into the new release branch. This, as you can imagine, is fraught with error in practice. Forking release branches from #master instead of #develop is not a good idea, as over time, hot-fixes made to the release branch and changes made to develop mean that future merges often result in conflicts. </li></ul><h2 id="how-can-we-avoid-this-hell">How can we avoid this hell? </h2><p>In order to meet the business requirement of having multiple releases in development at the same time, with shifting business plans and release dates, we need a branching model which is flexible enough to cater for this.</p><ol><li>The solution is to keep all of your teams&apos; branches completely isolated from each other. </li><li>The only branch shared by all teams is <em><strong>#master</strong></em>. </li><li>Code is only ever merged into <em><strong>#master</strong></em> after it has been successfully deployed to Production. This is a &quot;<strong>first past the post</strong>&quot; model.</li><li>Changes to <em><strong>#master</strong></em> are immediately pulled into every team&apos;s develop and release branches, and any testing in progress or previously completed on these branches is restarted. </li></ol><p>That&apos;s it. Following these rules, everything is simplified. Each team&apos;s release has a straightforward, direct route to Production, unblocked by any other team, even if release dates change. 
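</p><p>As a concrete sketch of the rules above, here is the flow simulated in a throwaway repo (team names, branch names and version numbers are illustrative, not prescriptive):</p><pre><code class="language-shell"># Simulate the "first past the post" model in a scratch repo
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "base"

# Each team maintains its own isolated develop branch off master
git branch develop-team-a
git branch develop-team-b

# Team A develops a feature, then cuts a release branch from its OWN develop
git checkout -q develop-team-a
git commit -q --allow-empty -m "Team A feature"
git checkout -q -b release/team-a/1.0

# ...release/team-a/1.0 is tested on Int and Cert, then deployed to Prod...

# Only AFTER the successful Production deployment does master change.
# Master has not moved in the meantime, so this is a clean fast-forward:
git checkout -q master
git merge -q --ff-only release/team-a/1.0

# Every other team then immediately pulls master into its own branches
git checkout -q develop-team-b
git merge -q master</code></pre><p>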
</p><h2 id="multi-team-git-flow-a-k-a-alan-flow-">Multi-Team Git-Flow (a.k.a. Alan-Flow!)</h2><p>I call this branching model &quot;Multi-Team Git-Flow&quot;. It is an adaptation of the &quot;Git Flow&quot; model described above, where each team has its own isolated set of feature, develop, release and hot-fix branches. </p><p>Consider the Git-Flow diagram at the top of the page as the branching model <strong>for each team</strong>. The key thing is that each team will have their own <strong><em>#develop-&lt;team&gt;</em></strong> branch, rather than a shared <strong><em>#develop</em></strong> branch. This will mean a team can simply fork its own <strong><em>#develop-&lt;team&gt;</em></strong> to create a release branch, and we can avoid the need to <strong>cherry-pick</strong> from a shared <strong><em>#develop</em></strong> branch. </p><p>Each team effectively works in a &quot;silo&quot;, with no interaction with other teams. Teams don&apos;t, and shouldn&apos;t ever need to, make changes to other teams&apos; develop/release branches. </p><p>While the overall picture of all branches across all teams in the repo may seem quite daunting at first, an individual team need only ever concern themselves with their own branches in the repo, namely:</p><ul><li>Feature Branches: <em><strong>#feature/&lt;team&gt;/&lt;feature-name&gt;</strong></em></li><li>Development Branch: <strong>#develop-&lt;team&gt;</strong></li><li>Release Branches: <strong><em>#release/&lt;team&gt;/&lt;release-name&gt;</em></strong><br>Note: A new release branch should be created for each release by forking the <strong><em>#develop-&lt;team&gt;</em></strong> branch.</li><li>The Single Shared Master Branch: <strong><em>#master </em></strong><br>The <strong>only</strong> branch shared by all teams. The <strong>only interaction</strong> between the separate teams occurs when <strong><em>#master</em></strong> gets merged into by another team after a release is successfully deployed to Production. 
When a release is merged into <strong><em>#master</em></strong>, all other teams should be notified, so they can immediately pull master into their respective develop and release branches.</li></ul><h2 id="what-are-the-benefits">What are the Benefits?</h2><ul><li>There will be no need to revert code from develop or cherry-pick code into releases ever again. </li><li>It will no longer be possible to omit some code, or take another team&apos;s code into your release branch, by mistake.</li><li>Straightforward forking of <strong><em>#develop-&lt;team&gt;</em></strong> to create a new release branch. </li><li>Moves all merging headaches to <strong>post-PROD deployment</strong>, where the changes can be made without the added time pressure of a looming release date. This means we have a smooth route to Production for all teams&apos; releases, even if plans change and release dates shift around. (Nice!)</li><li>This means stress-free, well-tested, low-risk Prod deployments which go ahead on schedule. </li></ul><h2 id="release-checklist">Release Checklist</h2><ul><li>It is critical that <strong><em>#master</em></strong> has been diligently merged into each team&apos;s <strong><em>#develop-&lt;team&gt;</em></strong> branch and release branches <u>whenever another team has changed <strong><em>#master</em></strong>.</u> This responsibility should be owned by each team&apos;s lead and coordinated amongst the team leads.</li><li>Prior to deploying a release branch to Production, a final check should be performed to verify that <em><strong>#master</strong></em> does not contain any code changes <strong>not already included</strong> in the release branch. Pulling the master branch into the release branch should result in no new code being merged. <br><strong>Note</strong>: If there have been changes to <strong><em>#master</em></strong>, then the process was not followed correctly. After the merge, all tests should be run again on Int and Cert. 
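<br>One way to automate this final check (branch name illustrative): <em><strong>#master</strong></em> must already be an ancestor of the release branch. Simulated here in a throwaway repo, showing the failing case:<pre><code class="language-shell"># Scratch repo: another team has released, so master is ahead
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "base"
git branch release/team-b/1.0
git commit -q --allow-empty -m "Team A release, merged to master"

# The pre-deployment check: is master fully contained in the release branch?
if git merge-base --is-ancestor master release/team-b/1.0; then
    echo "OK: release branch contains all of master"
else
    echo "STOP: pull master into the release branch and retest"
fi</code></pre>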
</li><li>On successful Production deployment, merge the release or hotfix into <strong><em>#master</em></strong>. By adhering to this rule, <em><strong>#master</strong></em> always contains the code which has been most recently deployed successfully to Production.<br><strong>Note: </strong>If done correctly, you will find that this merge &quot;fast-forwards&quot; master to the commit deployed to Production.</li><li>If the Production deployment of the release fails, the deployment can be rolled back simply by deploying <em><strong>#master</strong></em>. </li><li>On change to <em><strong>#master</strong></em>, the team lead must notify the other teams of the change. </li></ul><p>That&apos;s all I have for you. I hope you find this strategy helpful. </p><h3 id="happy-branching-">Happy branching!</h3><figure class="kg-card kg-image-card"><img src="https://ik.imagekit.io/irishcelticjewellery/alanhayes-io-blog/branches.jpeg" class="kg-image" alt="A Branching Strategy for Multiple Teams Working on a Single Repo" loading="lazy"></figure>]]></content:encoded></item><item><title><![CDATA[How I built Irish Celtic Jewellery on a shoestring]]></title><description><![CDATA[<figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/1*9j7VGzz4VFsJ0M6PV4ZcOg.png" class="kg-image" alt="Irish Celtic Jewellery" loading="lazy"></figure><p>I recently decided to take on a project of designing and implementing an e-commerce website in the jewellery space, for a non-technical client who had 20+ years of domain knowledge. I worked on this project outside my day job, usually on weekends, so utilising my time efficiently was critical. 
This drove</p>]]></description><link>https://blog.alanhayes.io/how-i-built-irish-celtic-jewellery-on-a-shoestring/</link><guid isPermaLink="false">607b384455aeb5000151acfd</guid><category><![CDATA[Irish Celtic Jewellery]]></category><category><![CDATA[Startup]]></category><category><![CDATA[Getting Started]]></category><category><![CDATA[VPS]]></category><dc:creator><![CDATA[Alan Hayes]]></dc:creator><pubDate>Mon, 12 Apr 2021 19:35:00 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/1*9j7VGzz4VFsJ0M6PV4ZcOg.png" class="kg-image" alt="Irish Celtic Jewellery" loading="lazy"></figure><p>I recently decided to take on a project of designing and implementing an e-commerce website in the jewellery space, for a non-technical client who had 20+ years domain knowledge. I worked on this project outside my day job, usually on weekends, so utilising my time efficiently was critical. This drove many of the key design and architectural decisions. In this post I&#x2019;ll go through the process, the decisions I made, and why I made them.</p><h3 id="a-bit-of-my-background">A bit of my background</h3><p>I am a full stack javascript software engineer, with a primary skill stack including Node.js, Angular 11, UX, Java, GraphQL, SQL, Apache Kafka &amp; Docker. For the past 10+ years I&#x2019;ve been working in software development in an Agile environment, and for the last 8 years I&#x2019;ve been self-employed as a contractor working on both the front-end and back-end, owning critical components that house the critical data of 15+ million users. I&#x2019;m the proud founder of a failed startup, which I regard as my most valuable training, as it engrained in me a &#x201C;founder&#x2019;s mentality&#x201D; and &#x201C;just get it done&#x201D; attitude.</p><p>While it wasn&#x2019;t a success, the process of building something from nothing is a life-changing, and career-changing, experience. 
To anybody who might be considering starting a company&#x2026; just do it! Regardless of whether or not it succeeds (it&#x2019;s likely it won&#x2019;t, by the way), the experience will be a tremendous credit to you and will set you apart from your peers. The more you put into it, the more it will change you. Quite literally.</p><h3 id="the-project">The project</h3><p>Utilising these core skills, I embarked on the project. The requirements from the client were very clear:</p><ul><li>1. Quality</li><li>2. Security</li><li>3. Performant</li><li>4. On Budget (!)</li></ul><p>TL;DR: If you want to check out the finished product, the website is <a href="https://www.irishcelticjewellery.com/" rel="noopener">Irish Celtic Jewellery</a>. It offers a wide range of jewellery, all designed and hand-made by goldsmiths based in Ireland. It costs less than &#x20AC;100/month on average, including all hosting costs and all related expenses.</p><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/1*IdSJ904zsheYlyIsH5q9_A.jpeg" class="kg-image" alt="Quality Irish Celtic Jewellery" loading="lazy"></figure><h3 id="1-quality">1. Quality</h3><p>First off, there is no point in building a below-standard website in the competitive jewellery retail space. Big money has been spent building impressive websites with generous marketing budgets. With such a high-value, low-volume product, the market is very difficult to enter for a small business.</p><p>With no development budget or marketing budget to speak of, the focus instead was placed on creating a simple, well designed website. I immediately knew I would use <a href="https://getbootstrap.com/" rel="noopener">Twitter Bootstrap</a> as the foundation of the UI. It&#x2019;s by far the best and simplest UI framework to pick up and get moving fast. 
It&#x2019;s also opinionated, which I like, and for that reason is the perfect presentation layer to marry with <a href="https://angular.io/" rel="noopener">Angular</a>, my favourite ultra-opinionated UI framework.</p><p>Knowing my limits, I purchased a slick, professionally designed e-commerce &#x201C;Bootstrap&#x201D; theme for about $40. I highly recommend this approach. Just be sure to check the technology versions used in the theme. For me, that meant ensuring Bootstrap v4 and SCSS were used.</p><h4 id="monorepo-vs-polyrepo">Monorepo vs Polyrepo?</h4><p>Now before getting stuck into the UI, I had to decide on the code structure: whether to house the UI code and backend code in the one repo&#x2026; This took me longer than I thought it would. But after an aborted attempt at a <a href="https://lembergsolutions.com/blog/why-you-should-use-monorepo-and-why-you-shouldnt" rel="noopener">monorepo</a>, I decided to keep the Angular UI entirely separate from the backend. And I have been happy with that decision ever since. It helps me keep the front-end and back-end completely separate in my mind. There&#x2019;s also no extension confusion in VS Code. The monorepo concept is probably best suited to multiple apps of the same type (i.e. UI).</p><h4 id="the-frontend">The Frontend</h4><p>So, after setting up a vanilla Angular 11 project, I set up the SCSS from the Bootstrap theme, and split the theme HTML into my first Angular components. 
(The entire website ended up consisting of 30+ components.)</p><p>Rather than re-inventing the wheel, there are some excellent services around these days for handling payments and authentication.</p><p>So for payments I integrated with <a href="https://stripe.com/" rel="noopener">Stripe</a>, which provides best-in-class support for various payment methods, fraud detection and the latest <a href="https://stripe.com/en-ie/guides/strong-customer-authentication" rel="noopener">Strong Customer Authentication</a> (SCA) security standards required in Europe.</p><p>For identity authentication I chose <a href="https://auth0.com/" rel="noopener">Auth0</a> (recently acquired by Okta), which has probably the best overall service due to its ease of use, extensive API and excellent documentation.</p><h4 id="the-backend">The Backend</h4><p>For building the backend, it was easy to choose Node.js. The speed at which APIs can be rolled out with Node.js is unrivalled in my opinion, and there&#x2019;s a lot to be said for staying in the javascript ecosystem when receiving JSON payloads from front-end clients.</p><p>I chose <a href="https://github.com/koajs/koa" rel="noopener">Koa</a> as the Node.js web server, which simplifies the endpoints by allowing use of <code>async/await</code> in the API middleware. (Avoid callback hell and generators whenever you can.)</p><p>For example, this is the code for a fully functioning web server, including a single endpoint and middleware which logs the duration of requests:</p><pre><code class="language-javascript">const Koa = require(&apos;koa&apos;);
const app = new Koa();
// Timing middleware: logs the method, URL and duration of every request
app.use(async (ctx, next) =&gt; {
    const start = Date.now();
    await next();
    const ms = Date.now() - start;
    console.log(`${ctx.method} ${ctx.url} - ${ms}ms`);
});

// Response middleware: handles every request
app.use(ctx =&gt; {
    ctx.body = &apos;Hello Koa&apos;;
});
app.listen(3000);</code></pre><p><strong>Database</strong></p><p>I chose a relational database as my persistence layer, and in the world of relational databases, the best option is Postgres.</p><p><strong>Search</strong></p><p>To support search, I chose <a href="https://www.elastic.co/elasticsearch/" rel="noopener">Elastic Search</a>, which when optimized for fast-reads (rather than fast indexing) is blazingly fast. It&#x2019;s a powerful tool designed for precisely this purpose, and once configured, it <em>just works</em>. Supporting fuzzy search, search-as-you-type, you name it.</p><p><strong>Cache</strong></p><p>What about a non-persistent cache? Not everything should be writing to the database or file system. For example session context data should be quickly accessible for incoming requests. So for this purpose, I chose <a href="https://redis.io/" rel="noopener">Redis</a>.</p><p><strong>Blog</strong></p><p>Rather than paying for a hosted service, I set up my own <a href="https://ghost.org/" rel="noopener">Ghost</a> blogging instance. Ghost is a very easy to use, modern blogging platform which I highly recommend.</p><p><strong>Build &amp; Deploy Tool</strong></p><p>To efficiently build, test and deploy my code, I set up my own containerized <a href="https://www.jenkins.io/" rel="noopener">Jenkins</a> instance.</p><p><strong>Architecture</strong></p><p>So how to run all these free tools cheaply? I didn&#x2019;t want to pay to use them as hosted services, so to use them for free I needed to host them myself. However to keep costs down, I intended to run them all on the one server. Clearly the best option was to pay for a Virtual Private Server (VPS). 
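</p><p>To give a flavour of the single-host, many-containers setup described below, here is a trimmed docker-compose sketch (service names, image versions and settings are illustrative, not the site&#x2019;s actual configuration):</p><pre><code class="language-yaml">version: "3.8"
services:
  api:
    build: ./backend            # the Node.js/Koa app
    ports:
      - "3000:3000"
    depends_on: [postgres, redis, elasticsearch]
  postgres:
    image: postgres:13
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:6
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - discovery.type=single-node
volumes:
  pgdata:</code></pre><p>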
There are many VPS providers available nowadays, but I&#x2019;ve been a happy customer of <a href="https://rimuhosting.com/" rel="noopener">Rimu Hosting</a> for many years, so I chose them.</p><p>Next, I really didn&#x2019;t want to suffer the heartache of trying to install all these tools on the same machine, which is generally fraught with inter-dependency clashes. So I chose a containerized architectural approach. Running each component in a Docker container on the same host keeps each process isolated from the others, improving overall stability.</p><p>Overall I&#x2019;ve got 9 Docker containers running on a single VPS, costing &#x20AC;50/month, with the following resources:</p><ul><li>1 single-core CPU (which is insufficient; more on this later)</li><li>10GB RAM</li><li>64GB disk</li><li>100GB data transfer</li></ul><h4 id="docker-compose-vs-docker-swarm-vs-kubernetes">Docker-Compose vs Docker Swarm vs Kubernetes</h4><p>While Kubernetes is the king of container orchestration, it&#x2019;s too advanced for this use case. It&#x2019;s ideal for larger multi-server app deployments, but if you intend to run several containers on a single machine, Docker Compose will do the job nicely and simply.</p><h4 id="continuous-deployment">Continuous Deployment</h4><p>To speed up testing and deployment, I configured <a href="https://docs.github.com/en/github-ae@latest/developers/webhooks-and-events/webhooks" rel="noopener">Git web hooks</a>, which get triggered whenever my front-end code or back-end code changes. 
This causes Jenkins to automatically execute the respective jobs, which are configured to:</p><ul><li>Check out the code</li><li>Execute all unit tests</li><li>Build production-ready code</li><li>Deploy to the test environment and execute integration tests and End-to-End (e2e) tests.</li><li>Deploy to production and execute integration tests and End-to-End (e2e) tests.</li></ul><h4 id="logging-and-metrics-via-the-elk-stack-plus-beats">Logging and Metrics via the ELK Stack (plus Beats!)</h4><p>To monitor all the moving parts, I&#x2019;d recommend setting up Kibana and Logstash, which together with ElasticSearch are known as <a href="https://www.elastic.co/what-is/elk-stack" rel="noopener">The ELK Stack</a>. The simplest way to do this is via 3 Docker containers, and with a little configuration it will solve all your logging and metrics monitoring needs.</p><p>Logs from all your key applications, and other important services (e.g. web servers) in the many Docker containers, can be gathered by <strong>Filebeat</strong> and shipped to <strong>Logstash</strong>, which then parses those logs, filters them, transforms them if necessary, and indexes them in ElasticSearch, where they are searchable and viewable via Kibana.</p><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/1*dQ4tOIBg9wKeH5ie0naonQ.png" class="kg-image" alt loading="lazy"></figure><p>The same applies to server metrics, which can be shipped to Logstash by a sister app of Filebeat called (you guessed it) <a href="https://www.elastic.co/beats/metricbeat" rel="noopener"><strong>Metricbeat</strong></a>.</p><p>You will then be able to configure nice-looking dashboards in Kibana which display all your server metrics, including CPU/memory usage per Docker container.</p><h4 id="datadog">Datadog</h4><p>The premium alternative to this setup is Datadog, which has expanded its capabilities over the years such that it is now a one-stop shop for all your monitoring needs. 
However, their free offering is very limited, and it quickly gets expensive. Also, it&#x2019;s not possible to host Datadog yourself; it can only be used by setting up agents locally which ship logs and metrics to the hosted Datadog service. Maybe on the next project&#x2026;</p><h4 id="testing">Testing</h4><p>I didn&#x2019;t forget about testing. :) The most important aspect of ensuring quality is surely having good testing processes and methodologies in place. To this end, I&#x2019;ve got 3 kinds of tests:</p><p><strong>Unit tests</strong></p><p>Great at catching regressions quickly and giving confidence to code changes prior to deployment.</p><p>For Angular frontends, the standard unit test runner is Karma.</p><p>On the Node.js backend, I prefer to use Mocha, especially since it now supports <a href="https://mochajs.org/#parallel-tests" rel="noopener">parallel tests</a>, which greatly speeds things up when you&#x2019;ve got many tests.</p><p><strong>E2E tests</strong></p><p>Great at ensuring key user scenarios, which may touch several components, are actually still functional after a deployment.</p><p>On the Node.js backend, I like to use <a href="https://github.com/avajs/ava" rel="noopener">Ava</a>, which is a modern, minimalist parallel test runner.</p><p>As for the Angular frontend, the standard way to test E2E has always been <a href="https://www.protractortest.org/" rel="noopener">Protractor</a>&#x2026; however, the Google team behind Protractor <a href="https://github.com/angular/protractor/issues/5502" rel="noopener">recently announced</a> that Protractor will be officially deprecated in May 2021. So it&#x2019;s unknown which testing framework will step up as the replacement. 
My money is on <a href="https://webdriver.io/" rel="noopener">WebDriverIO</a> which I&#x2019;ve heard lots of good things about.</p><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/1*_AXabW716J0HACAk3zB0YA.png" class="kg-image" alt loading="lazy"></figure><p><strong>Performance Tests</strong></p><p>There&#x2019;s no load testing tool that reigns supreme at the moment. Two I&#x2019;ve used a lot are Gatling and Artillery.</p><p><a href="https://gatling.io/" rel="noopener"><strong>Gatling</strong></a><strong>&#x200A;&#x2014;&#x200A;</strong>Tests are written in Scala and execute in a JVM. If that is acceptable to you then perhaps choose it. Of all tools I&#x2019;ve tried Gatling generates by far the best reports (which utilise <a href="https://www.highcharts.com/" rel="noopener">Highcharts</a>)</p><p><strong>Artillery</strong>&#x200A;&#x2014;&#x200A;Runs in Node.js, tests can be coded in javascript and YML, and it avoids the need to use a JVM and Scala. It&#x2019;s simple and fast to write tests. However the visual reports it generates are poor. Years later and there has been no improvement so I would not recommend investing in it. Also, when comparing the performance of Artillery to other testing tools, it rates very poorly. 
Artillery executes requests 10x slower than Gatling.</p><p>I&#x2019;ve heard good things about <a href="https://github.com/rakyll/hey" rel="noopener">Hey</a>, <a href="https://github.com/wg/wrk" rel="noopener">Wrk</a>, and <a href="https://github.com/tsenart/vegeta" rel="noopener">Vegeta</a>, but have no experience with them yet.</p><h4 id="summary">Summary</h4><p>The tools used to create this e-commerce website ended up being:</p><ul><li>UI/UX: Twitter Bootstrap HTML &amp; SCSS theme</li><li>Frontend Logic: Angular 11</li><li>Backend Logic: Node.js</li><li>Search Engine: Elastic Search</li><li>Cache: Redis</li><li>Database: Postgres</li><li>Containers: Docker</li><li>Orchestration: Docker Compose</li><li>Continuous Delivery: Jenkins</li></ul><p>Overall I&#x2019;ve got 9 docker containers running on a single VPS, costing &#x20AC;50/month.</p><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/0*rapcXf1cL_MUZP2W.jpeg" class="kg-image" alt loading="lazy"></figure><h3 id="2-security">2. 
Security</h3><p>A major feather in the security cap of the website is the fact that <strong>no</strong> sensitive information is <strong>ever</strong> sent to the backend.</p><ul><li>Customers&#x2019; passwords are <strong>only</strong> sent to Auth0.</li><li>Sensitive credit card information is <strong>only</strong> sent to Stripe.</li></ul><h4 id="authentication">Authentication</h4><p>Once users are authenticated via Auth0, their session is verified by the website backend, which receives a JSON Web Token (JWT), the industry standard in security tokens, in each authenticated API request.</p><p>Authorization rules are then applied to ensure that the user can only access their own data (for example their order history, or the status of a recent order) and nobody else&#x2019;s data.</p><h4 id="payments">Payments</h4><p>Sensitive credit card information and billing details are sent directly from the customer&#x2019;s browser to Stripe, which is one of the largest and most secure payment platforms in the world.</p><p><a href="https://stripe.com/" rel="noopener">Stripe</a> provides best-in-class support for various payment methods, fraud detection and the latest <a href="https://stripe.com/en-ie/guides/strong-customer-authentication" rel="noopener">Strong Customer Authentication</a> (SCA) security standards required in Europe.</p><p>Once a payment has been completed successfully, the jewellery website is notified by Stripe via the <a href="https://stripe.com/docs/payments/payment-intents" rel="noopener">Payment Intents API</a>.</p><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/0*hqcOL2ggRAyI80IP.jpg" class="kg-image" alt loading="lazy"></figure><h3 id="3-performant">3. Performant</h3><p>Good performance was a requirement from the very beginning. 
Rather than cheating good design by beefing up the server resources, I tackled some key areas:</p><h4 id="webp-image-format">WebP Image Format</h4><p>Image size can be a major performance bottleneck, but there is a newer image format, &#x201C;<a href="https://developers.google.com/speed/webp/" rel="noopener">WebP</a>&#x201D;, which is thankfully widely supported these days. The format was developed by Google, supports both lossy and lossless compression, and converted images regularly see size reductions of 80% or more. Adopting this wonderful image format has been a bit tricky, however, as support for it was conspicuously absent from Safari&#x2026; that is, until <strong>September 2020</strong> when <a href="https://www.caniuse.com/webp" rel="noopener">Apple finally got their finger out</a>.</p><h4 id="image-cdn">Image CDN</h4><p>There is great benefit to be gained by using an Image Content Delivery Network (CDN). A CDN delivers cached copies of your images to clients without the requests ever reaching your web server.</p><p>I found <a href="https://imagekit.io/" rel="noopener">ImageKit</a> to be an excellent CDN service, which also provides automatic conversion to WebP format in multiple dimensions. This also meant I could support responsive images via the <code>srcset</code> attribute of the <code>img</code> tag, and deliver the best-fitting image optimized for the client&#x2019;s viewport. Hit three birds with one stone? Yes please!</p><h4 id="minification-and-tree-shaking">Minification and Tree-shaking</h4><p>Angular 11 has excellent build tooling, which results in surprisingly small minified JavaScript and CSS bundles. 
It achieves this by first removing large chunks of unused JavaScript (&#x201C;tree-shaking&#x201D;), mainly from imported libraries, and then by minifying the remaining code.</p><p>Tree-shaking and minification reduce:</p><ul><li>the JavaScript bundle size from 8 MB to 1.8 MB.</li><li>the CSS bundle size from 281 KB to 242 KB.</li></ul><h4 id="asset-compression">Asset Compression</h4><p>I configured the Nginx web server to compress all static content delivered by the website. With just a few lines in the Nginx configuration file, all static files are gzipped before being transmitted from the web server to the client&#x2019;s browser, which then decompresses them automatically.</p><p>Asset compression further reduces:</p><ul><li>the JavaScript bundle size from 1.8 MB to just 273 KB</li><li>the CSS bundle size from 242 KB to 46 KB</li></ul><p>The total size of a typical uncached gallery page, including all JS, CSS and HTML assets, as well as the WebP images, is approximately 750 KB.</p><h4 id="lighthouse">Lighthouse</h4><p>An excellent tool for testing the quality of your webpages is <a href="https://developers.google.com/web/tools/lighthouse/" rel="noopener">Lighthouse</a>, available in Chrome Dev Tools. Designed by Google, it&#x2019;s actually the engine behind Google&#x2019;s <a href="https://developers.google.com/speed/pagespeed/insights/" rel="noopener">PageSpeed Insights</a>. It can particularly shed light on issues in the following areas:</p><ul><li>Performance<br>Together with SEO, performance is probably the most important success indicator. Lighthouse provides several key metrics, such as First Contentful Paint (FCP), Largest Contentful Paint (LCP) and Time To Interactive (TTI).</li><li>Best Practices<br>Helpful advice on areas where the site deviates from established best practices.</li><li>Search Engine Optimization (SEO)<br>Extremely useful in detecting common pitfalls which can harm a page&#x2019;s ranking. 
More about SEO below.</li><li>Accessibility (a11y)<br>Accessibility should no longer be an afterthought; making your website accessible from Day 1 is highly recommended, as it is a lot easier than retrofitting a11y later on.</li></ul><h4 id="seo">SEO</h4><p>Closely tied to performance, and something which cannot be overlooked these days, is SEO. Marketing and SEO are critical to the success of an e-commerce website. However, in my case there is zero budget for marketing, so an even greater focus on SEO is required.</p><p>Since I had already decided to build the frontend using Angular, it was very important to utilize Angular&#x2019;s Server-Side Rendering (SSR) framework: Angular Universal. The main benefit of SSR is greatly improved SEO: pages are rendered on the server before the HTML is sent to the client, so the first HTML a crawler sees is already <strong>prefilled</strong> with content.</p><p>An SEO strategy can be split into two components: On-Page SEO and Off-Page SEO.</p><ul><li>On-Page SEO focuses on ensuring that the website code is error-free, fast, and has all the standard optimizations in place, e.g. <code>title</code> tags, <code>meta</code> tags, <code>alt</code> attributes on images, easily followable <code>a</code> tags, a single <code>h1</code> on each page, good content with minimal duplication, etc. There is a long list of optimizations that can be made, which will be covered in more detail in a separate post. <a href="https://developers.google.com/speed/pagespeed/insights/" rel="noopener">Google PageSpeed Insights</a> is your friend here.</li><li>The second, and more laborious, component of SEO is Off-Page SEO, which is just another way of saying <strong>link building</strong>. Gone are the days when you could pay a dodgy service to generate hundreds of low-quality links, or spam comments on blogs and forums. 
Google has become much better at detecting spammy, low-quality links and will punish a website if it suspects the administrator has engaged in &#x201C;black hat&#x201D; link-building methods.</li></ul><p>The best way to generate links is to build a vibrant, active social media presence, and to write content-rich posts on websites with a high &#x201C;Domain Rating&#x201D; (DR). A very useful tool for monitoring your link-building progress is <a href="https://ahrefs.com/" rel="noopener">Ahrefs</a>. It&#x2019;s a bit on the expensive side for a small company just getting started (&#x20AC;100/month) but worth subscribing to, at least for a month once in a while, to monitor your backlinks and search ranking for particular keywords. Site optimizations and backlink creation can take a long time before a noticeable effect is observed. Typically you will not see an impact for six months, so don&#x2019;t lose heart. It requires dedication and perseverance.</p><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/1*wGJYJx5bo0Bt9-XkZfa3Ng.png" class="kg-image" alt loading="lazy"></figure><h3 id="4-on-budget">4. On Budget</h3><p>The budget for this project was very limited. The ask was to keep expenses to approximately &#x20AC;100/month. We actually achieved this, and I&#x2019;ll tell you how.</p><p>Firstly, this kind of budget is only possible if you&#x2019;re building the website yourself, and don&#x2019;t count the cost of your own time. That key point aside, the costs were kept to the following:</p><h4 id="server-hosting">Server Hosting</h4><p>The main cost was, expectedly, the hosting charges. As I already mentioned, to keep costs down, I intended to run all the necessary components in Docker containers on a single VPS with <a href="https://rimuhosting.com/" rel="noopener">Rimu Hosting</a>.</p><p>However, since I am running nine Docker containers, I do notice a degradation in performance. I put this down to the single-core CPU. 
I asked Rimu Hosting about adding extra cores and they kindly gave me a second dedicated core free of charge!</p><p>This resulted in a hosting cost of only <strong>&#x20AC;60/month</strong> for the following:</p><ul><li>2-core CPU</li><li>10GB RAM</li><li>64GB SSD storage</li><li>100GB Data Transfer Allowance</li></ul><p>What other expenses are there?</p><ul><li>Domain renewal (&#x20AC;25/year)</li><li>SEO analysis via Ahrefs (&#x20AC;100/month). I recommend paying for this service for a single month, once every six months, which costs &#x20AC;200/year. This equates to a cost of roughly &#x20AC;16/month.</li><li>&#x201C;<em>What about SSL Certs?</em>&#x201D; I hear you ask&#x2026; Fortunately, paying for SSL certs is now a thing of the past thanks to <a href="https://letsencrypt.org/" rel="noopener">LetsEncrypt</a>, which lets you generate your own free wildcard SSL cert every 3 months and even provides a tool to renew it automatically. Just stick that in a cronjob, set it to run nightly, and forget about it; when your certs are coming up for renewal, it&#x2019;ll take care of them for you.</li><li>Auth0: Free for small companies.</li><li>Stripe: Zero monthly fee + approximately 2% of each sale.</li></ul><p><strong>So all-in, the total monthly cost is &#x20AC;78, with some headroom in the budget for boosting server resources as traffic volumes increase over time. Not bad at all!</strong></p><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/1600/0*2sdPIdN9YzFcd-kZ.jpg" class="kg-image" alt loading="lazy"></figure><h3 id="conclusion">Conclusion</h3><p>I hope you found this post helpful. I wanted to show that it is possible to build a fully operational, quality e-commerce website end-to-end, on a very modest budget.</p><p>If you&#x2019;d like to read more, please go ahead and follow me. 
I&#x2019;ll be writing several spin-off articles, diving deeper into some of the many topics raised in this post.</p><p>Happy coding!</p>]]></content:encoded></item></channel></rss>