Best Practices for Developing Medical Device Software
haskell regulated February 7, 2018

Medical device regulation makes development more complex than for typical software. Learn about best practices for sound medical device software development.

Posted By: Niklas Hambüchen

Cache CI builds to an S3 Bucket
no tags February 5, 2018

We're happy to announce a new tool, cache-s3, aimed at providing a consistent caching solution across CI tools.

Hash Based Package Downloads - part 2 of 2
haskell January 31, 2018

A plan for implementing hash-based content addressing in the Haskell build ecosystem.

FP Complete and Cardano Blockchain Audit Partnership
blockchain January 24, 2018

FP Complete's software development specialists will provide a comprehensive review of Cardano’s code and technical documentation. FP Complete's focus on FinTech is a logical fit for blockchain technology providers.

Hash Based Package Downloads - part 1 of 2
haskell January 23, 2018

Reproducible build plans are vital for many industries. Can we be doing more to make our build tools more reliable?

Weakly Typed Haskell
haskell January 2, 2018

Haskell is often described as a strongly typed programming language. Does that mean that your Haskell code is automatically strongly typed?

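To make the question above concrete, here is a small sketch (the types are illustrative, not taken from the post): code that passes raw `String`s around is effectively weakly typed, since the compiler accepts accidentally swapped arguments, while `newtype` wrappers make the same mix-up a compile error.

```haskell
-- "Weakly typed" Haskell: two String fields are interchangeable,
-- so swapping name and email still compiles.
data Contact = Contact String String

-- "Strongly typed" style: newtypes make each field distinct,
-- so swapped arguments no longer type-check.
newtype Name  = Name String
newtype Email = Email String

data Contact' = Contact' Name Email

greeting :: Contact' -> String
greeting (Contact' (Name n) _) = "Hello, " ++ n

main :: IO ()
main = putStrLn (greeting (Contact' (Name "Alice") (Email "alice@example.com")))
```

The wrappers cost nothing at runtime; they only exist to let the type checker tell the fields apart.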
Parsing command line arguments
haskell December 28, 2017

Learn about our recommendations on how to reliably parse command line arguments into commands, arguments, flags, configuration, settings, and instructions.

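As a taste of the topic, here is a minimal hand-rolled flag parser using only `base` (a sketch with illustrative option names; the post's actual recommendation may well be a dedicated parsing library):

```haskell
import System.Environment (getArgs)

-- Options our hypothetical program understands (names are illustrative).
data Options = Options
  { optVerbose :: Bool
  , optOutput  :: FilePath
  } deriving (Show, Eq)

defaultOptions :: Options
defaultOptions = Options { optVerbose = False, optOutput = "out.txt" }

-- Walk the argument list, updating defaults as flags are recognized,
-- and fail loudly on anything unexpected.
parseArgs :: [String] -> Either String Options
parseArgs = go defaultOptions
  where
    go opts [] = Right opts
    go opts ("--verbose" : rest)       = go (opts { optVerbose = True }) rest
    go opts ("--output" : path : rest) = go (opts { optOutput = path }) rest
    go _    (arg : _)                  = Left ("unrecognized argument: " ++ arg)

main :: IO ()
main = getArgs >>= print . parseArgs
```

Returning `Either` keeps the parser pure and testable; only `main` touches `IO`.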
Building Haskell Apps with Docker
haskell docker December 21, 2017

How do you build a runtime Docker image from Haskell code? This post will show you a few ways, including the newer multi-stage Docker build technique.

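For a flavor of the multi-stage technique mentioned above, here is a minimal sketch (image tags and the `my-app` binary name are illustrative; the post's exact steps may differ): the first stage builds with the full GHC toolchain, and only the resulting binary is copied into a slim runtime image.

```dockerfile
# Stage 1: build the binary with the full GHC/Stack toolchain.
FROM haskell:9.4 AS build
WORKDIR /app
COPY . .
RUN stack build --system-ghc --copy-bins --local-bin-path /app/bin

# Stage 2: copy only the binary into a small runtime image,
# so the multi-gigabyte toolchain never ships to production.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y libgmp10 && rm -rf /var/lib/apt/lists/*
COPY --from=build /app/bin/my-app /usr/local/bin/my-app
CMD ["my-app"]
```

`libgmp10` is installed because GHC-built binaries link against GMP by default.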
Announcing Stack 1.6.1 release
haskell December 7, 2017

The Stack build tool for Haskell, version 1.6.1, is now available. Come read about the new features.

Lambda Conference and Haskell Survey
no tags November 22, 2017

See Michael Snoyman discuss Haskell Monads at the Lambda World Conference in Cadiz, Spain. We also discuss how you can participate in our 2017 Haskell Survey.

Page 9 of 24
  [
  {
    "name": "blockchain",
    "slug": "blockchain",
    "path": "/categories/blockchain/",
    "permalink": "https://tech.fpcomplete.com/categories/blockchain/",
    "pages": [
      {
        "relative_path": "blog/blockchain-technology-smart-contracts-save-money.md",
        "colocated_path": null,
        "content": "<p>With the cost of goods only going up and the increased scarcity of quality workers and resources, saving money and time in your day-to-day business operations is paramount. Therefore, adopting blockchain technology into your traditional day-to-day business operations is key to giving you back valuable time, saving you money, creating less dependency on workers, and modernizing your business operations for good. There are many ways blockchain technology can help you and your business save money and resources, but one profound way is through the use of smart contracts.</p>\n<p>Smart contracts are software contracts that execute predefined logic based on the parameters coded into the system.  Smart contracts are digital agreements that automatically run transactions between parties, increasing speed, accuracy, and integrity in payment and performance. In addition, smart contracts are legally enforceable if they comply with contract law. </p>\n<p>The smart contract aims to provide transactional security while reducing surplus transaction costs. In addition, smart contracts can automate the execution of an agreement so that all parties are immediately sure of the outcome without the need for intermediary involvement. For example, instead of hiring a department to handle contract review and purchasing, your business can run smart contracts that enforce the same procedures more effectively at substantial cost savings.  In addition, your business can use smart contracts to manage your corporate documents, regulatory compliance procedures, cross-border financial transactions, real property ownership, supply management, and the chronology of ownership of your business IP, materials, and licenses. </p>\n<p>Finance and banking are prime examples of industries that have benefited from smart contract applications.  Smart contracts track corporate spending, stock trading, investing, lending, and borrowing. 
Smart contracts are also used in corporate mergers and acquisitions and are frequently used to configure or reconfigure entire corporate structures. </p>\n<p>Below is an illustration of how smart contracts work:</p>\n<p><img src=\"/images/blog/how-smart-contracts-work.png\" alt=\"CPU usage\" /></p>\n<p>As you can imagine, blockchain technology and smart contracts are still developing. They do have some roadblocks and implementational challenges. Still, these pitfalls and hassles cannot take away from the many benefits blockchain technology offers to businesses needing to save money and resources.</p>\n<p>FP Complete Corporation has direct experience <a href=\"https://www.fpblock.com\">working with blockchain technologies</a>, most recently the <a href=\"https://tech.fpcomplete.com/blog/levana-nft-launch/\">Levana NFT launch</a>, which relied on blockchain technology written by one of our engineers. Previously, one of our senior engineers released a video titled “<a href=\"https://www.youtube.com/watch?v=jngHo0Gzk6s\">How to be Successful at Blockchain Development</a>,” highlighting our expertise in this area in detail.   If you want to learn more about how we can help you with blockchain technology, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us today</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/blockchain-technology-smart-contracts-save-money/",
        "slug": "blockchain-technology-smart-contracts-save-money",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Blockchain Technology, Smart Contracts, and Your Company",
        "description": "How Blockchain Technology and Smart Contracts Can Help You and Your Company Save Money and Resources Now!",
        "updated": null,
        "date": "2022-01-16",
        "year": 2022,
        "month": 1,
        "day": 16,
        "taxonomies": {
          "tags": [
            "blockchain",
            "smart contracts"
          ],
          "categories": [
            "blockchain",
            "smart contracts"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete",
          "keywords": "blockchain, NFT, cryptocurrency, smart contracts",
          "blogimage": "/images/blog-listing/blockchain.png"
        },
        "path": "/blog/blockchain-technology-smart-contracts-save-money/",
        "components": [
          "blog",
          "blockchain-technology-smart-contracts-save-money"
        ],
        "summary": null,
        "toc": [],
        "word_count": 440,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/hedera-platform-audit.md",
        "colocated_path": null,
        "content": "<p><strong>FP Complete Publishes Results of Independent 3rd Party Audits of Hedera Platform and New Hedera Token Service</strong></p>\n<p><em>FP Complete Corporation development specialists conducted a comprehensive review of Hedera's code and technical documentation</em></p>\n<p><strong>Zug, Switzerland – February 9, 2021 –</strong> As part of its goal to deliver\ntransparency to the development community,\n<a href=\"http://www.hedera.com/\">Hedera Hashgraph</a>, the enterprise-grade public\ndistributed ledger, engaged FP Complete, an IT engineering specialist,\nto perform an independent audit of the engineering work by Hedera's\ndevelopment team on the Hedera platform, including the new Hedera Token\nService. The full completed audit reports can be found at:</p>\n<ul>\n<li><a href=\"https://hedera.com/fp-complete-hedera\">Hedera Platform</a></li>\n<li><a href=\"https://hedera.com/fp-complete-hts\">Hedera Token Service</a></li>\n</ul>\n<p>Founded by the former head of Microsoft's own in-house engineering\ntools, Aaron Contorer, FP Complete Corporation is the world's leading\nsupplier of commercial-grade tools and engineering for advanced\nprogramming languages, distributed systems, blockchain, and DevOps\ntechnologies. FP Complete performed an in-depth code review to examine\nthe Hedera software quality, focusing on robustness, security, and\nauditability.</p>\n<p>FP Complete also completed a review of Hedera's code and technical\ndocumentation, enabling the development team to use this ongoing work to\noptimize the engineering methods, tools, and coding standards used to\nimplement the Hedera network. The publication of these results\ndemonstrates the Company's commitment to technical rigor and\ntransparency.</p>\n<p>Dr. 
Leemon Baird, co-founder and Chief Scientist of Hedera Hashgraph,\ncomments: &quot;These third-party audits by FP Complete illustrate our\ncommitment to high-quality engineering, project transparency, and a\nrigorous and independent auditing process. We are pleased to be able to\npublish these audit results today and look forward to sharing additional\naudit findings as they are completed in the future.&quot;</p>\n<p>Wesley Crook, CEO of FP Complete, comments: &quot;We have worked with the\nHedera team to conduct a third-party audit of their codebase to assess\nsecurity, stability, and correctness. Our team of software, blockchain,\nand network architecture experts has provided feedback throughout the\ndevelopment process.&quot;</p>\n<hr />\n<h2 id=\"about-hedera\">About Hedera</h2>\n<p>Hedera is a decentralized enterprise-grade public network on which\nanyone can build secure, fair applications with near real-time finality.\nThe platform is owned and governed by a council of the world's leading\norganizations including Avery Dennison, Boeing, Dentons, Deutsche\nTelekom, DLA Piper, eftpos, FIS (WorldPay), Google, IBM, LG Electronics,\nMagalu, Nomura, Swirlds, Tata Communications, University College London\n(UCL), Wipro, and Zain Group.</p>\n<p>For more information, visit\nhttps://www.hedera.com, or follow us on Twitter\nat <a href=\"https://twitter.com/hedera\">@hedera</a>, Telegram at\n<a href=\"https://t.me/hederahashgraph\">t.me/hederahashgraph</a>, or Discord at\n<a href=\"https://www.hedera.com/discord\">www.hedera.com/discord</a>. The Hedera\nwhitepaper can be found at\n<a href=\"https://hedera.com/papers\">www.hedera.com/papers</a>.</p>\n<h2 id=\"about-fp-complete\">About FP Complete</h2>\n<p>FP Complete is an advanced server-side software development and DevOps\nconsulting Company. 
We specialize in helping FinTech companies solve\ntheir unique set of problems related to data and information integrity,\ndata security, architectural design, systems integration, and regulatory\ncompliance. We are recognized worldwide for our contributions to the\nfunctional programming community using the Haskell programming language.\nOur people and processes have helped countless companies increase the\nvelocity and quality of their delivered software products. From fortune\n500 biotech companies to small blockchain FinTech software companies we\nhave solved unique and complicated problems with expert results.</p>\n<p><a href=\"https://www.fpcomplete.com/\">https://www.fpcomplete.com/</a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/hedera-platform-audit/",
        "slug": "hedera-platform-audit",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Hedera Platform Audit",
        "description": "FP Complete has conducted a third party audit of the Hedera Platform and New Hedera Token Service. Check out the press release for more information.",
        "updated": null,
        "date": "2021-02-09",
        "year": 2021,
        "month": 2,
        "day": 9,
        "taxonomies": {
          "tags": [
            "blockchain"
          ],
          "categories": [
            "blockchain"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Staff",
          "blogimage": "/images/blog-listing/distributed-ledger.png",
          "image": "images/blog/hedera-platform-audit.png"
        },
        "path": "/blog/hedera-platform-audit/",
        "components": [
          "blog",
          "hedera-platform-audit"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "about-hedera",
            "permalink": "https://tech.fpcomplete.com/blog/hedera-platform-audit/#about-hedera",
            "title": "About Hedera",
            "children": []
          },
          {
            "level": 2,
            "id": "about-fp-complete",
            "permalink": "https://tech.fpcomplete.com/blog/hedera-platform-audit/#about-fp-complete",
            "title": "About FP Complete",
            "children": []
          }
        ],
        "word_count": 538,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 2
  },
  {
    "name": "devops",
    "slug": "devops",
    "path": "/categories/devops/",
    "permalink": "https://tech.fpcomplete.com/categories/devops/",
    "pages": [
      {
        "relative_path": "blog/partnership-portworx-pure-storage.md",
        "colocated_path": null,
        "content": "<p><strong>FP Complete Corporation Announces Partnership with Portworx by Pure Storage to Streamline World-Class DevOps Consulting Services with State-of-the-Art, End-To-End Storage and Data Management Solution for Kubernetes Projects.</strong></p>\n<p><strong>Charlotte, North Carolina (August 31, 2022)</strong> – FP Complete Corporation, a global technology  partner that specializes in DevSecOps, Cloud Native Computing, and Advanced Server-Side Programming Languages today announced that it has partnered with Portworx by Pure Storage  to bring an integrated solution to customers seeking DevSecOps consulting services for the  management of persistent storage, data protection, disaster recovery, data security, and hybrid  data migrations.</p>\n<p>The partnership between FP Complete Corporation and Portworx will be integral in providing FP  Complete's DevSecOps and Cloud Enablement clients with a data storage platform designed to  run in a container that supports any cloud physical storage on any Kubernetes distribution.</p>\n<p>Portworx Enterprise gets right to the heart of what developers and Kubernetes admins want:  data to behave like a cloud service. Developers and Admins wish to request Storage based on  their requirements (capacity, performance level, resiliency level, security level, access,  protection level, and more) and let the data management layer figure out all the details.  Portworx PX-Backup adds enterprise-grade point-and-click backup and recovery for all  applications running on Kubernetes, even if they are stateless.</p>\n<p>Portworx shortens development timelines and headaches for companies moving from on-prem to cloud. 
In addition, the integration between FP Complete Corporation and Portworx allows  the easy exchange of best practices information, so design and storage run in parallel.</p>\n<p>Gartner predicts that by 2025, more than 85% of global organizations will be running  containerized applications in production, up from less than 35% in 2019<sup>1</sup>. As container  adoption increases and more applications are being deployed in the enterprise, these  organizations want more options to manage stateful and persistent data associated with these  modern applications.</p>\n<p>&quot;It is my pleasure to announce that Pure Storage can now be utilized by our world-class  engineers needing a fully integrated, end-to-end storage and data management solution for our  DevSecOps clients with complicated Kubernetes projects. Pure Storage is known globally for its  strength in the storage industry, and this partnership offers strong support for our business,&quot; said Wes Crook, CEO of FP Complete Corporation.</p>\n<p>“There can be zero doubt that most new cloud-native apps are built on containers and  orchestrated by Kubernetes. Unfortunately, the early development on containers resulted in  lots of data access and availability issues due to a lack of enterprise-grade persistent storage  data management and low data visibility. 
With Portworx and the aid of Kubernetes experts like FP Complete, we can offer customers a rock-solid, enterprise-class, cloud-native development  platform that delivers end-to-end application and data lifecycle management that significantly  lowers the risks and costs of operating cloud-native application infrastructure,” said Venkat  Ramakrishnan, VP, Engineering, Cloud Native Business Unit, Pure Storage.</p>\n<div><u><strong>About FP Complete Corporation</strong></u></div>\nFounded in 2012 by Aaron Contorer, former Microsoft executive, FP Complete Corporation is known globally as the one-stop, full-stack technology shop that delivers agile, reliable, repeatable, and highly secure software. In 2019, we launched our flagship platform, Kube360®, which is a fully managed enterprise Kubernetes-based DevOps ecosystem. With Kube360, FP Complete is now well positioned to provide a complete suite of products and solutions to our clients on their journey towards cloudification, containerization, and DevOps best practices. The Company's mission is to deliver superior software engineering to build great software for our clients. FP Complete Corporation serves over 200+ global clients and employs over 70 people worldwide. It has won many awards and made the Inc. 5000 list in 2020 for being one of the 5000 fastest-growing private companies in America. For more information about FP Complete Corporation, visit its website at [www.fpcomplete.com](https://www.fpcomplete.com/).\n<p><sup>1</sup> <small>Arun Chandrasekaran, <a href=\"https://www.gartner.com/en/documents/3988395\">Best Practices for Running Containers and Kubernetes in Production</a>, Gartner, August 2020</small></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/partnership-portworx-pure-storage/",
        "slug": "partnership-portworx-pure-storage",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Complete Corporation Announces Partnership with Portworx by Pure Storage",
        "description": "FP Complete Corporation Announces Partnership with Portworx by Pure Storage to Streamline World-Class DevOps Consulting Services with State-of-the-Art, End-To-End Storage and Data Management Solution for Kubernetes Projects.",
        "updated": null,
        "date": "2022-08-29",
        "year": 2022,
        "month": 8,
        "day": 29,
        "taxonomies": {
          "tags": [
            "devops",
            "insights"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Staff",
          "keywords": "Portworx Pure Storage",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/partnership-portworx-pure-storage/",
        "components": [
          "blog",
          "partnership-portworx-pure-storage"
        ],
        "summary": null,
        "toc": [],
        "word_count": 669,
        "reading_time": 4,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/canary-deployment-istio.md",
        "colocated_path": null,
        "content": "<p>Istio is a service mesh that transparently adds various capabilities\nlike observability, traffic management and security to your\ndistributed collection of microservices. It comes with various\nfunctionalities like circuit breaking, granular traffic routing, mTLS\nmanagement, authentication and authorization policies, the ability to do\nchaos testing, etc.</p>\n<p>In this post, we will explore how to do canary deployments of our\napplication using Istio.</p>\n<h2 id=\"what-is-canary-deployment\">What is Canary Deployment</h2>\n<p>Using the canary deployment strategy, you release a new version of your\napplication to a small percentage of the production traffic. And then\nyou monitor your application and gradually expand its percentage of\nthe production traffic.</p>\n<p>For a canary deployment to be shipped successfully, you need good\nmonitoring in place. Based on your exact use case, you might want to\ncheck various metrics like performance, user experience or <a href=\"https://en.wikipedia.org/wiki/Bounce_rate\">bounce\nrate</a>.</p>\n<h2 id=\"pre-requisites\">Prerequisites</h2>\n<p>This post assumes that the following components are already provisioned or\ninstalled:</p>\n<ul>\n<li>Kubernetes cluster</li>\n<li>Istio</li>\n<li>cert-manager (optional; required if you want to provision TLS\ncertificates)</li>\n<li>Kiali (optional)</li>\n</ul>\n<h2 id=\"istio-concepts\">Istio Concepts</h2>\n<p>For this specific deployment, we will be using three specific features\nof Istio's traffic management capabilities:</p>\n<ul>\n<li><a href=\"https://istio.io/latest/docs/concepts/traffic-management/#virtual-services\">Virtual Service</a>: A Virtual Service describes how traffic flows to\na set of destinations. Using a Virtual Service you can configure how\nto route the requests to a service within the mesh. 
It contains a\nbunch of routing rules that are evaluated, and then a decision is\nmade on where to route the incoming request (or even reject it if no\nroutes match).</li>\n<li><a href=\"https://istio.io/latest/docs/concepts/traffic-management/#gateways\">Gateway</a>: Gateways are used to manage your inbound and outbound\ntraffic. They allow you to specify the virtual hosts and their\nassociated ports that need to be opened to allow the traffic\ninto the cluster.</li>\n<li><a href=\"https://istio.io/latest/docs/reference/config/networking/destination-rule/\">Destination Rule</a>: This is used to configure how a client in\nthe mesh interacts with your service. It's used for configuring TLS\nsettings of <a href=\"https://istio.io/latest/docs/reference/config/networking/sidecar/\">your sidecar</a>, splitting your service into subsets,\nthe load balancing strategy for your clients, etc.</li>\n</ul>\n<p>For canary deployments, the destination rule plays a major role, as\nthat's what we will be using to split the service into subsets and\nroute traffic accordingly.</p>\n<h2 id=\"application-deployment\">Application deployment</h2>\n<p>For our canary deployment, we will be using the following versions of\nthe application:</p>\n<ul>\n<li><a href=\"https://httpbin.org/\">httpbin.org</a>: This will be version one (v1) of our\napplication. This is the application that's already deployed, and\nyour aim is to partially replace it with a newer version of the\napplication.</li>\n<li><a href=\"https://github.com/psibi/tornado-websocket-example\">websocket app</a>: This will be version two (v2) of the\napplication, which has to be gradually introduced.</li>\n</ul>\n<p>Note that in the real world, both versions would share\nthe same code. For our example, we are just taking two arbitrary\napplications to make testing easier.</p>\n<p>Our assumption is that we already have version one of our application\ndeployed. So let's deploy that initially. 
We will write our usual\nKubernetes resources for it. The deployment manifest for the version\none application:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: apps&#x2F;v1\nkind: Deployment\nmetadata:\n  name: httpbin\n  namespace: canary\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: httpbin\n      version: v1\n  template:\n    metadata:\n      labels:\n        app: httpbin\n        version: v1\n    spec:\n      containers:\n      - image: docker.io&#x2F;kennethreitz&#x2F;httpbin\n        imagePullPolicy: IfNotPresent\n        name: httpbin\n        ports:\n        - containerPort: 80\n</code></pre>\n<p>And let's create a corresponding service for it:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: httpbin\n  name: httpbin\n  namespace: canary\nspec:\n  ports:\n  - name: httpbin\n    port: 8000\n    targetPort: 80\n  - name: tornado\n    port: 8001\n    targetPort: 8888\n  selector:\n    app: httpbin\n  type: ClusterIP\n</code></pre>\n<p>SSL certificate for the application which will use cert-manager:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: cert-manager.io&#x2F;v1\nkind: Certificate\nmetadata:\n  name: httpbin-ingress-cert\n  namespace: istio-system\nspec:\n  secretName: httpbin-ingress-cert\n  issuerRef:\n    name: letsencrypt-dns-prod\n    kind: ClusterIssuer\n  dnsNames:\n  - canary.33test.dev-sandbox.fpcomplete.com\n</code></pre>\n<p>And the Istio resources for the application:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.istio.io&#x2F;v1alpha3\nkind: Gateway\nmetadata:\n  name: httpbin-gateway\n  namespace: canary\nspec:\n  selector:\n    istio: ingressgateway\n  servers:\n  - hosts:\n    - 
canary.33test.dev-sandbox.fpcomplete.com\n    port:\n      name: https-httpbin\n      number: 443\n      protocol: HTTPS\n    tls:\n      credentialName: httpbin-ingress-cert\n      mode: SIMPLE\n  - hosts:\n    - canary.33test.dev-sandbox.fpcomplete.com\n    port:\n      name: http-httpbin\n      number: 80\n      protocol: HTTP\n    tls:\n      httpsRedirect: true\n---\napiVersion: networking.istio.io&#x2F;v1alpha3\nkind: VirtualService\nmetadata:\n  name: httpbin\n  namespace: canary\nspec:\n  gateways:\n  - httpbin-gateway\n  hosts:\n  - canary.33test.dev-sandbox.fpcomplete.com\n  http:\n  - route:\n    - destination:\n        host: httpbin.canary.svc.cluster.local\n        port:\n          number: 8000\n</code></pre>\n<p>The above resource define gateway and virtual service. You could see\nthat we are using TLS here and redirecting HTTP to HTTPS.</p>\n<p>We also have to make sure that namespace has istio injection enabled:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: v1\nkind: Namespace\nmetadata:\n  labels:\n    app.kubernetes.io&#x2F;component: httpbin\n    istio-injection: enabled\n  name: canary\n</code></pre>\n<p>I have the above set of k8s resources managed via\n<a href=\"https://kustomize.io/\">kustomize</a>. 
Let's deploy them to get the initial environment which\nconsists of only v1 (httpbin) application:</p>\n<pre data-lang=\"shellsession\" class=\"language-shellsession \"><code class=\"language-shellsession\" data-lang=\"shellsession\">❯ kustomize build overlays&#x2F;istio_canary &gt; istio.yaml\n❯ kubectl apply -f istio.yaml\nnamespace&#x2F;canary created\nservice&#x2F;httpbin created\ndeployment.apps&#x2F;httpbin created\ngateway.networking.istio.io&#x2F;httpbin-gateway created\nvirtualservice.networking.istio.io&#x2F;httpbin created\n❯ kubectl apply -f overlays&#x2F;istio_canary&#x2F;certificate.yaml\ncertificate.cert-manager.io&#x2F;httpbin-ingress-cert created\n</code></pre>\n<p>Now I can go and verify in my browser that my application is actually\nup and running:</p>\n<p><img src=\"/images/istio_httpbin_application.png\" alt=\"httpbin: Version 1 application\" /></p>\n<p>Now comes the interesting part. We have to deploy the version two of\nour application and make sure around 20% of our traffic goes to\nit. 
Let's write the deployment manifest for it:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: apps&#x2F;v1\nkind: Deployment\nmetadata:\n  name: httpbin-v2\n  namespace: canary\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: httpbin\n      version: v2\n  template:\n    metadata:\n      labels:\n        app: httpbin\n        version: v2\n    spec:\n      containers:\n      - image: psibi&#x2F;tornado-websocket:v0.3\n        imagePullPolicy: IfNotPresent\n        name: tornado\n        ports:\n        - containerPort: 8888\n</code></pre>\n<p>And now the destination rule to split the service:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.istio.io&#x2F;v1alpha3\nkind: DestinationRule\nmetadata:\n  name: httpbin\n  namespace: canary\nspec:\n  host: httpbin.canary.svc.cluster.local\n  subsets:\n  - labels:\n      version: v1\n    name: v1\n  - labels:\n      version: v2\n    name: v2\n</code></pre>\n<p>And finally let's modify the virtual service to split 20% of the\ntraffic to the newer version:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.istio.io&#x2F;v1alpha3\nkind: VirtualService\nmetadata:\n  name: httpbin\n  namespace: canary\nspec:\n  gateways:\n  - httpbin-gateway\n  hosts:\n  - canary.33test.dev-sandbox.fpcomplete.com\n  http:\n  - route:\n    - destination:\n        host: httpbin.canary.svc.cluster.local\n        port:\n          number: 8000\n        subset: v1\n      weight: 80\n    - destination:\n        host: httpbin.canary.svc.cluster.local\n        port:\n          number: 8001\n        subset: v2\n      weight: 20\n</code></pre>\n<p>And now if you go again to the browser and refresh it a number of\ntimes (note that we route only 20% of the traffic to the new\ndeployment), you will see the new application 
eventually:</p>\n<p><img src=\"/images/istio_tornado_application.png\" alt=\"websocket: Version 2 application\" /></p>\n<h2 id=\"testing-deployment\">Testing deployment</h2>\n<p>Let's do around 10 curl requests to our endpoint to see how the\ntraffic is getting routed:</p>\n<pre data-lang=\"shellsession\" class=\"language-shellsession \"><code class=\"language-shellsession\" data-lang=\"shellsession\">❯ seq 10 | xargs -Iz curl -s https:&#x2F;&#x2F;canary.33test.dev-sandbox.fpcomplete.com | rg &quot;&lt;title&gt;&quot;\n    &lt;title&gt;httpbin.org&lt;&#x2F;title&gt;\n    &lt;title&gt;httpbin.org&lt;&#x2F;title&gt;\n    &lt;title&gt;httpbin.org&lt;&#x2F;title&gt;\n&lt;title&gt;tornado WebSocket example&lt;&#x2F;title&gt;\n    &lt;title&gt;httpbin.org&lt;&#x2F;title&gt;\n    &lt;title&gt;httpbin.org&lt;&#x2F;title&gt;\n    &lt;title&gt;httpbin.org&lt;&#x2F;title&gt;\n    &lt;title&gt;httpbin.org&lt;&#x2F;title&gt;\n    &lt;title&gt;httpbin.org&lt;&#x2F;title&gt;\n&lt;title&gt;tornado WebSocket example&lt;&#x2F;title&gt;\n</code></pre>\n<p>And you can confirm that, out of the 10 requests, 2 requests are routed\nto the websocket (v2) application. If you have <a href=\"https://kiali.io/\">Kiali</a> deployed,\nyou can even visualize the above traffic flow:</p>\n<p><img src=\"/images/istio_kiali.png\" alt=\"Kiali visualization\" /></p>\n<p>And that summarizes our post on how to achieve canary deployment using\nIstio. While this post shows a basic example, traffic steering and\nrouting are among the core features of Istio, and it offers various\nways to configure the routing decisions made by it. You can find\nfurther details about it in the <a href=\"https://istio.io/latest/docs/concepts/traffic-management/#virtual-services\">official docs</a>. 
You can also use a\ncontroller like <a href=\"https://argoproj.github.io/argo-rollouts/features/traffic-management/istio/\">Argo Rollouts with Istio</a> to perform canary\ndeployments and use additional features like <a href=\"https://argoproj.github.io/argo-rollouts/features/analysis/\">analysis</a> and\n<a href=\"https://argoproj.github.io/argo-rollouts/features/experiment/\">experiment</a>.</p>\n<hr />\n<p>If you're looking for a solid Kubernetes platform, batteries included,\nwith first-class support for Istio, <a href=\"https://tech.fpcomplete.com/products/kube360/\">check out Kube360</a>.</p>\n<p>If you liked this article, you may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/\">An Istio/mutual TLS debugging story</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-kubernetes-windows/\">Deploying Rust with Windows Containers on Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/\">Cloud Vendor Neutrality</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">Secure defaults with Kubernetes Security with Kube360</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
        "slug": "canary-deployment-istio",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Canary Deployment with Kubernetes and Istio",
        "description": "Want to do canary deployments in your Kubernetes cluster? Read up on our recommended step-by-step process",
        "updated": null,
        "date": "2022-03-24",
        "year": 2022,
        "month": 3,
        "day": 24,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "DevOps",
            "istio",
            "Kubernetes"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Sibi Prabakaran",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/canary-deployment-istio/",
        "components": [
          "blog",
          "canary-deployment-istio"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "what-is-canary-deployment",
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#what-is-canary-deployment",
            "title": "What is Canary Deployment",
            "children": []
          },
          {
            "level": 2,
            "id": "pre-requisites",
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#pre-requisites",
            "title": "Pre requisites",
            "children": []
          },
          {
            "level": 2,
            "id": "istio-concepts",
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#istio-concepts",
            "title": "Istio Concepts",
            "children": []
          },
          {
            "level": 2,
            "id": "application-deployment",
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#application-deployment",
            "title": "Application deployment",
            "children": []
          },
          {
            "level": 2,
            "id": "testing-deployment",
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#testing-deployment",
            "title": "Testing deployment",
            "children": []
          }
        ],
        "word_count": 1364,
        "reading_time": 7,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cloud-native.md",
        "colocated_path": null,
"content": "<p>You hear &quot;go Cloud-Native,&quot; but if you're like many, you wonder, &quot;what does that mean, and how can applying a Cloud-Native strategy help my company's Dev Team be more productive?&quot;\nAt a high level, Cloud-Native architecture means adapting to the many new possibilities—but a very different set of architectural constraints—offered by the cloud compared to traditional on-premises infrastructure.</p>\n<p>Cloud-Native architecture optimizes systems and software for the cloud. This optimization creates an efficient way to utilize the platform by streamlining processes and workflows. This is accomplished by harnessing the cloud's inherent strengths: </p>\n<ul>\n<li>its flexibility; </li>\n<li>its on-demand infrastructure; and </li>\n<li>its robust managed services. </li>\n</ul>\n<p>Cloud-native computing couples these strengths with cloud-optimized technologies such as microservices, containers, and continuous delivery. Cloud-Native takes advantage of the cloud's distributed, scalable, and adaptable nature. By doing this, Cloud-Native will maximize your dev team's focus on writing code, reducing operational tasks, creating business value, and keeping your customers happy by building high-impact applications faster, without compromising on quality. You might even think you can’t do cloud-native without using one of the big cloud providers. This simply isn’t true: many of cloud-native's benefits come from its approaches and its emphasis on better tooling around automation.</p>\n<h2 id=\"why-move-to-cloud-native-now\">Why Move to Cloud-Native Now?</h2>\n<p><em>#1 - High-Frequency Software Release</em></p>\n<p>Faster and more frequent updates and new feature releases allow your organization to respond to user needs in near real-time, increasing user retention. For example, new software versions with novel features can be released incrementally and more often as they become available. 
In addition, Cloud-Native makes high-frequency software releases possible via continuous integration (CI) and continuous deployment (CD), where full-version releases are no longer needed. Instead, one can modify, test, and commit just a few lines of code continuously and automatically to meet changing customer trends, thereby giving your organization an edge. </p>\n<p><em>#2 - Automatic Software Updates</em></p>\n<p>One of the most valuable Cloud-Native features is automation. For example, updates are deployed automatically without interfering with core applications or the user base. Automated infrastructure redundancy can move applications between data centers as needed with little to no human intervention. Even scalability, testing, and resource allocation can be automated. There are many automation tools available in the marketplace, such as FP Complete Corporation's widely accepted tool, <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>.</p>\n<p><em>#3 - Greater Protection from Software Failures</em></p>\n<p>Isolation of containers is another important cloud-native feature. Software failures and bugs can be traced to a specific microservice version, rolled back, or fixed quickly. Software fixes can be tested in isolation without compromising the stability of the entire application. On the other hand, if there's a widespread failure, automation can restore the application to a previous stable state, minimizing downtime. Automated DevOps testing before code goes to production (for example, linting and software scrubbing) drives faster bug detection and resolution, reducing the risk of bugs in production.</p>\n<h2 id=\"wow-cloud-native-seems-perfect-what-s-the-catch\">WOW – Cloud-Native Seems Perfect – What's the Catch?</h2>\n<p>Switching over to Cloud-Native architecture requires a thorough assessment of your existing application setup. 
The biggest question you and your team need to ask before making any moves is, &quot;should our business modernize our current applications, or should we build new applications from scratch and utilize Cloud-Native development practices?&quot;</p>\n<p>If you choose to modernize your existing application, you will save time and money by capitalizing on the cloud's agility, flexibility, and scalability. Your dev team can retain existing application functionality and business logic, re-architect into a Cloud-Native app, and containerize to utilize the cloud platform's strengths.</p>\n<p>You can also build a net-new application using Cloud-Native development practices instead of upgrading your legacy applications. Building from scratch may make more sense from a corporate culture, risk management, and regulatory compliance standpoint. You keep running old application code unchanged while developing and phasing in the new platform. Building new applications also frees dev teams from prior architectural constraints, letting developers experiment and deliver innovation to users.</p>\n<h2 id=\"three-essential-tools-for-successful-cloud-native-architecture\">Three Essential Tools for Successful Cloud-Native Architecture</h2>\n<p>Whether you decide to create a new Cloud-Native application or modernize your existing ones, your dev team needs these three tools for a successful implementation of Cloud-Native architecture:</p>\n<ol>\n<li><em>Microservices Architecture</em>. </li>\n</ol>\n<p>A cloud-native microservice architecture is considered a &quot;best practice&quot; architectural approach for creating cloud applications because each application is composed of a set of services. Each service runs its own processes and communicates through clearly defined APIs, which provide good foundations for continuous delivery. 
With microservices, ideally each service is independently deployable. This architecture allows each service to be updated independently without interfering with another service. This results in:</p>\n<ul>\n<li>reduced downtime for users; </li>\n<li>simplified troubleshooting; and </li>\n<li>minimized disruptions even when a problem is identified, allowing for high-frequency updates and continuous delivery. </li>\n</ul>\n<ol start=\"2\">\n<li><em>Container-based Infrastructure Platform</em>.</li>\n</ol>\n<p>Now that your microservice architecture is broken down into individual container-based services, the next essential tool is a system to manage all those containers automatically, known as a &quot;container orchestrator.&quot; The most widely accepted platform is Kubernetes, an open-source system originally developed at Google and now maintained by the Cloud Native Computing Foundation. It runs containerized applications and automates deployment, storage, scaling, scheduling, load balancing, and updates, while monitoring containers across clusters of hosts. Kubernetes runs on all major public cloud service providers, including Azure, AWS, Google Cloud Platform, and Oracle Cloud.</p>\n<ol start=\"3\">\n<li><em>CI/CD Pipeline</em>.</li>\n</ol>\n<p>A CI/CD Pipeline is the third essential tool for a cloud-native environment to work seamlessly. Continuous integration and continuous delivery embody a set of operating principles and a collection of practices that allow dev teams to deliver code changes more frequently and reliably. This implementation is known as the CI/CD Pipeline. By automating deployment processes, the CI/CD pipeline will allow your dev team to focus on:</p>\n<ul>\n<li>meeting business requirements; </li>\n<li>code quality; and </li>\n<li>security. \nCI/CD tools preserve the environment-specific parameters that must be included with each delivery. 
CI/CD automation then performs any necessary service calls to web servers, databases, and other services that may require a restart or follow other procedures when applications are deployed.</li>\n</ul>\n<h2 id=\"cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use\">Cloud-Native Isn't Plug &amp; Play – Is there a Comprehensive Tool that my Dev Team Can Use?</h2>\n<p>As you can probably guess, countless tools make up the cloud-native architecture. Unfortunately, these tools are complex, require separate authentication, and frequently do not interact with each other. In essence, you are expected to integrate these cloud tools yourself as a user. We at FP Complete became frustrated with this approach. So, to save time and provide a turn-key solution, we created Kube360. Kube360 puts all the necessary tools into one easy-to-use toolbox, accessed via a single sign-on, and operating as a fully integrated environment. Kube360 combines best practices, technologies, and processes into one complete package, and it has proven effective across multiple customer deployments. In addition, Kube360 supports multiple cloud providers and on-premise infrastructure. Kube360 is vendor agnostic, fully customizable, and has no vendor lock-in.</p>\n<p><strong>Kube360 - Centralized Management</strong>. Kube360 employs centralized management, which increases your dev team's productivity through:</p>\n<ul>\n<li>single sign-on functionality; </li>\n<li>faster installation and setup; </li>\n<li>quick access to all tools; and</li>\n<li>automation of logs, backups, and alerts.</li>\n</ul>\n<p>This simplified administration hides the complexity of repeated logins and allows single sign-on through existing company identity management. Kube360 also streamlines tool authentication and access, eliminating many standard security holes. 
In the background, Kube360 automatically runs everyday tasks such as backups, log aggregation, and alerts.</p>\n<p><strong>Kube360 - Automated Features</strong>. Kube360's automated features include:</p>\n<ul>\n<li>automatic backups of the etcd config;</li>\n<li>log aggregation and indexing of all services; and</li>\n<li>integrated monitoring and alert framework.</li>\n</ul>\n<p><strong>Kube360 - Kubernetes Tooling Features</strong>. Kube360 simplifies Kubernetes management and allows you to take advantage of many cloud-native features such as:</p>\n<ul>\n<li>autoscaling, to stay cost-efficient as demands on your systems grow and shrink;</li>\n<li>high availability;</li>\n<li>health checks; and</li>\n<li>integrated secrets management.</li>\n</ul>\n<p><strong>Kube360 - Service Mesh</strong>.</p>\n<ul>\n<li>Mutual TLS based encryption within the cluster</li>\n<li>Tracing tools</li>\n<li>Rerouting traffic</li>\n<li>Canary deployments</li>\n</ul>\n<p><strong>Kube360 - Integration</strong>.</p>\n<ul>\n<li>Integrates into existing AWS &amp; Azure infrastructures</li>\n<li>Deploys into existing VPCs</li>\n<li>Leverages existing subnets</li>\n<li>Communicates with components outside of Kube360</li>\n<li>Supports multiple clusters per organization</li>\n<li>Installed by the FP Complete team or the customer</li>\n</ul>\n<p>As you can see, Kube360 is one of the most comprehensive tools you can rely on for Cloud-Native architecture. Kube360 is your one-stop, fully integrated enterprise Kubernetes ecosystem. Kube360 standardizes containerization, software deployment, fault tolerance, auto-scaling, auto-healing, and security, by design. Kube360's modular, standardized architecture mitigates proprietary lock-in, high support costs, and obsolescence. In addition, Kube360 delivers a seamless deployment experience for you and your team.\nFind out how Kube360 can make your business more efficient, more reliable, and more secure, all in a fraction of the time. 
Speed up your dev team's productivity: <a href=\"https://tech.fpcomplete.com/contact-us/\">Contact us today!</a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloud-native/",
        "slug": "cloud-native",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Confused about Cloud-Native? Want to speed up your dev team's productivity?",
        "description": "Learn about Cloud-Native architecture.",
        "updated": null,
        "date": "2022-01-17",
        "year": 2022,
        "month": 1,
        "day": 17,
        "taxonomies": {
          "tags": [
            "kubernetes",
            "cloud native"
          ],
          "categories": [
            "devsecops",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete",
          "keywords": "devsecops, devops",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/cloud-native/",
        "components": [
          "blog",
          "cloud-native"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "why-move-to-cloud-native-now",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-native/#why-move-to-cloud-native-now",
            "title": "Why Move to Cloud-Native Now?",
            "children": []
          },
          {
            "level": 2,
            "id": "wow-cloud-native-seems-perfect-what-s-the-catch",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-native/#wow-cloud-native-seems-perfect-what-s-the-catch",
            "title": "WOW – Cloud-Native Seems Perfect – What's the Catch?",
            "children": []
          },
          {
            "level": 2,
            "id": "three-essential-tools-for-successful-cloud-native-architecture",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-native/#three-essential-tools-for-successful-cloud-native-architecture",
            "title": "Three Essential Tools for Successful Cloud-Native Architecture",
            "children": []
          },
          {
            "level": 2,
            "id": "cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-native/#cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
            "title": "Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?",
            "children": []
          }
        ],
        "word_count": 1482,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/levana-nft-launch.md",
        "colocated_path": null,
        "content": "<p><em>FP Complete Corporation, headquartered in Charlotte, North Carolina, is a global technology company building next-generation software to solve complex problems.  We specialize in Server-Side Software Engineering, DevSecOps, Cloud-Native Computing, Distributed Ledger, and Advanced Programming Languages. We have been a full-stack technology partner in business for 10+ years, delivering reliable, repeatable, and highly secure software.  Our team of engineers, strategically located in over 13 countries, offers our clients one-stop advanced software engineering no matter their size.</em></p>\n<p>For the past few months, the FP Complete engineering team has been working with <a href=\"https://levana.finance/\">Levana Protocol</a> on a DeFi platform for leveraged assets on the Terra blockchain. But more recently, we've additionally been helping launch the <a href=\"https://meteors.levana.finance/\">Levana Dragons meteor shower</a>. This NFT launch completed in the middle of last week, and to date is the largest single NFT event in the Terra ecosystem. We were very excited to be a part of this. You can read more about the NFT launch itself on <a href=\"https://blog.levana.finance/recap-of-the-levana-meteor-shower-128919193f9b\">the Levana Protocol blog post</a>.</p>\n<p>We received a lot of positive feedback about the smoothness of this launch, which was pretty wonderful feedback to hear. People expressed interest in learning about the technical decisions we made that led to such a smooth event. We also had a few hiccups occur during the launch and post-launch that are worth addressing as well.</p>\n<p>So strap in for a journey involving cloud technologies, DevOps practices, Rust, React, and—of course—Dragons.</p>\n<h2 id=\"overview-of-the-event\">Overview of the event</h2>\n<p>The Levana Dragons meteor shower was an event consisting of 44 separate &quot;showers&quot;, or drops during which NFT meteors would be issued. 
Participants in a shower competed by contributing UST (a Terra-specific stablecoin tied to US Dollars) to a specific Terra wallet. Contributions from a single wallet across the shower were aggregated into a single contribution, and contributions of a higher amount resulted in a better meteor. At the least granular level, this meant stratification into legendary, ancient, rare, and common meteors. But higher contributions also led to a greater likelihood of receiving an egg inside your meteor.</p>\n<p>Each shower was separated from the next by 1 hour, and we opened up the site about 24 hours before the first shower occurred. That means the site was active for contributions for about 67 hours straight. Then, following the showers, we needed to mint the actual NFTs, ship them to users' wallets, and open up the &quot;cave&quot; page where users could view their NFTs.</p>\n<p>So all told, this was an event that spanned many days, had bouts of high activity, and involved a game that incorporated many financial transactions; any downtime, slowness, or poor behavior could result in user frustration or worse. On top of that, given the short timeframe this event was intended to be active, attacks such as DDoS taking down the site could have been catastrophic for the success of the showers. And the absolute worst case would be a compromise allowing an attacker to redirect funds to a different wallet.</p>\n<p>All that said, let's dive in.</p>\n<h2 id=\"backend-server\">Backend server</h2>\n<p>A major component of the meteor drop was to track contributions to the destination wallet, and provide high-level data back to users about these activities. This high-level data included the floor prices per shower, the timestamps of the upcoming drops, total meteors a user had acquired so far, and more. All this information is publicly available on the blockchain, and in principle could have been written as frontend logic. 
However, the overhead of having every visitor to the site download essentially the entire history of transactions with the destination wallet would have made the site unusable.</p>\n<p>Instead, we implemented a backend web server. We used Rust (with Axum) for this for multiple reasons:</p>\n<ul>\n<li>We're <a href=\"https://tech.fpcomplete.com/rust/\">very familiar with Rust</a></li>\n<li>Rust is a high-performance language, and there were serious concerns about needing to withstand surges in traffic and DDoS attacks</li>\n<li>CosmWasm already leverages Rust heavily, so Rust was already in use on the project</li>\n</ul>\n<p>The server was responsible for keeping track of configuration data (like the shower timestamps and destination wallet address), downloading transaction information from the blockchain (using the <a href=\"https://fcd.terra.dev/apidoc\">Full Client Daemon</a>), and answering queries from the frontend (described next) with this information.</p>\n<p>We could have kept data in a mutable database like PostgreSQL, but instead we decided to keep all data in memory and download it from scratch from the blockchain on each application load. Given the size of the data, these two decisions initially seemed very wise. We'll see some outcomes of this when we analyze performance and look at some of our mistakes below.</p>\n<h2 id=\"react-frontend\">React frontend</h2>\n<p>The primary interface users interacted with was a standard React frontend application. We used TypeScript, but otherwise stuck with generic tools and libraries wherever possible. We didn't end up using any state management libraries or custom CSS systems. Another thing to note is that this frontend is going to expand and evolve over time to include additional functionality around the evolving NFT concept, some of which has already happened, as we'll discuss below.</p>\n<p>One specific item that popped up was mobile optimization. 
Initially, the plan was for the meteor shower site to be desktop-only. After a few beta runs, it became apparent that the majority of users were on mobile devices. Because Levana is a DAO, a primary goal is to allow for distributed governance of all products and services, and we therefore felt it vital to be responsive to this community request. Redesigning the interface for mobile and then rewriting the relevant HTML and CSS took up a decent chunk of time.</p>\n<h2 id=\"hosting-infrastructure\">Hosting infrastructure</h2>\n<p>Many DApp sites are exclusively client side, with frontend logic interacting directly with the blockchain and smart contracts. For these kinds of sites, hosting options like Vercel work out very nicely. However, as described above, this application was a combined frontend/backend. Instead of splitting the hosting between two different options, we decided to host both the static frontend app and the dynamic backend app in a single place.</p>\n<p>At FP Complete, we typically use Kubernetes for this kind of deployment. In this case, however, we went with Amazon ECS. This isn't a terribly large delta from our standard Kubernetes deployments, following many of the same patterns: container-based application, rolling deployments with health checks, autoscaling and load balancers, externalized TLS cert management, and centralized monitoring and logging. No major issues there.</p>\n<p>Additionally, to help reduce the burden on the backend application and provide a better global experience for the site, we put Amazon CloudFront in front of the application, which allowed caching the static files in data centers around the world.</p>\n<p>Finally, we codified all of this infrastructure using Terraform, our standard tool for Infrastructure as Code.</p>\n<h2 id=\"gitlab\">GitLab</h2>\n<p>GitLab is a standard part of our FP Complete toolchain. We leverage it for internal projects for its code hosting, issue tracking, Docker registry, and CI integration. 
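</p>\n<p>To make that CI integration concrete, here is a sketch of what a GitLab CI configuration for a combined frontend/backend project can look like. Note that the stage names, images, and paths below are illustrative assumptions, not the actual pipeline configuration from this project:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">stages:\n  - frontend\n  - backend\n  - image\n\n# Lint and build the static frontend assets\nfrontend:\n  stage: frontend\n  image: node:16\n  script:\n    - npm ci\n    - npm run build\n  artifacts:\n    paths:\n      - dist&#x2F;\n\n# Lint and build the backend binary\nbackend:\n  stage: backend\n  image: rust:latest\n  script:\n    - cargo fmt -- --check\n    - cargo clippy -- -D warnings\n    - cargo build --release\n  artifacts:\n    paths:\n      - target&#x2F;release&#x2F;app\n\n# Build and push a Docker image, only on protected branches\ndocker-image:\n  stage: image\n  script:\n    - docker build -t &quot;$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA&quot; .\n    - docker push &quot;$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA&quot;\n  only:\n    - master\n    - prod\n</code></pre>\n<p>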
While we will often adapt our tools to match our client needs, in this case we ended up using our standard tool, and things went very well.</p>\n<p>We ended up with a four-stage CI build process:</p>\n<ol>\n<li>Lint and build the frontend code, producing an artifact with the built static assets</li>\n<li>Build a static Rust application from the backend, embedding the static files from (1), and run standard Rust lints (<code>clippy</code> and <code>fmt</code>), producing an artifact with the single-file compiled binary</li>\n<li>Generate a Docker image from the static binary in (2)</li>\n<li>Deploy the new Docker image to either the dev or prod ECS cluster</li>\n</ol>\n<p>Steps (3) and (4) are set up to run only on the <code>master</code> and <code>prod</code> branches. This kind of automated deployment setup made it easy for our distributed team to get changes into a real environment for review quickly. However, it also opened a security hole we needed to address.</p>\n<h2 id=\"aws-lockdown\">AWS lockdown</h2>\n<p>Due to the nature of this application, any kind of downtime during the active showers could have resulted in a lot of egg on our faces and a missed opportunity for the NFT raise. However, there was a far scarier potential outcome. Changing a single config value in production—the destination wallet—would have enabled a nefarious actor to siphon away funds intended for NFTs. This was the primary concern we had during the launch.</p>\n<p>We considered multiple social engineering approaches to the problem, such as advertising to potential users the correct wallet address they should be using. However, we decided that most likely users would not be checking addresses before sending their funds. 
We <em>did</em> set up an emergency &quot;shower halted&quot; page and put in place an on-call team to detect and deploy such measures if necessary, but fortunately nothing along those lines occurred.</p>\n<p>However, during the meteor shower, we did institute an AWS account lockdown. This included:</p>\n<ul>\n<li>Switching <a href=\"https://tech.fpcomplete.com/products/zehut/\">Zehut</a>, a tool we use for granting temporary AWS credentials, into read-only credentials mode</li>\n<li>Disabling GitLab CI's production credentials, so that GitLab users could not cause a change in prod</li>\n</ul>\n<p>We additionally vetted all other components in the pipeline of DNS resolution, such as the domain name registrar, Route 53, and other AWS services used for hosting.</p>\n<p>These are generally good practices, and over time we intend to refine the AWS permissions setup for Levana's AWS account in general. However, this launch was the first time we needed to use AWS for app deployment, and time did not permit a thorough AWS permissions analysis and configuration.</p>\n<h2 id=\"during-the-shower\">During the shower</h2>\n<p>As I just mentioned, during the shower we had an on-call team ready to jump into action and a playbook to address potential issues. Issues essentially fell into three categories:</p>\n<ol>\n<li>Site is slow/down/bad in some way</li>\n<li>Site is actively malicious, serving the wrong content and potentially scamming people</li>\n<li>Some kind of social engineering attack is underway</li>\n</ol>\n<p>The FP Complete team was responsible for observing (1) and (2). I'll be honest that this is not our strong suit. We are a team that typically builds backends and designs DevOps solutions, not an on-call operations team. However, we were the experts in both the DevOps hosting and the app itself. 
Fortunately, no major issues popped up, and the on-call team got to sit on their hands the whole time.</p>\n<p>Out of an abundance of caution, we did take a few extra steps before the showers started to try to ensure we were ready for any attack:</p>\n<ol>\n<li>We bumped the replica count in ECS from 2 desired instances to 5. We had autoscaling in place already, but we wanted extra buffer just to be safe.</li>\n<li>We increased the instance size from 512 CPU units to 2048 CPU units.</li>\n</ol>\n<p>In all of our load testing pre-launch, we had seen that 512 CPU units was sufficient to handle 100,000 requests per second per instance with 99th percentile latency of 3.78ms. With these bumped limits in production, and in the middle of the highest activity on the site, we were very pleased to see the following CPU and memory usage graphs:</p>\n<p><img src=\"/images/blog/levana-nft/cpu.png\" alt=\"CPU usage\" /></p>\n<p><img src=\"/images/blog/levana-nft/memory.png\" alt=\"Memory usage\" /></p>\n<p>This was a nice testament to the power of a Rust-written web service, combined with proper autoscaling and CloudFront caching.</p>\n<h2 id=\"image-creation\">Image creation</h2>\n<p>Alright, let's put the app itself to the side for a second. We knew that, at the end of the shower, we would need to quickly mint NFTs for every wallet that donated more than $8 during a single shower. There are a few problems with this:</p>\n<ul>\n<li>We had no idea how many users would contribute.</li>\n<li>Generating the images is a relatively slow process.</li>\n<li>Making the images available on IPFS—necessary for how NFTs work—was potentially going to be a bottleneck.</li>\n</ul>\n<p>What we ended up doing was writing a Python script that pregenerated 100,000 or so meteor images. We did this generation directly on an Amazon EC2 instance. Then, instead of uploading the images to an IPFS hosting/pinning service, we ran the IPFS daemon directly on this EC2 instance. 
We additionally backed up all the images on S3 for redundant storage. Then we launched a <em>second</em> EC2 instance for redundant IPFS hosting.</p>\n<p>This Python script not only generated the images, but also generated a CSV file mapping each image's Content ID (IPFS address) to various pieces of metadata about the meteor image, such as the meteor body. We'll use this CID/metadata mapping for correct minting next.</p>\n<p>All in all, this worked just fine. However, there were some hurdles getting there, and we have plans to change this going forward in future stages of the NFT evolution. We'll mention those below.</p>\n<h2 id=\"minting\">Minting</h2>\n<p>Once the shower finished, we needed to get NFTs into user wallets as quickly as possible. That meant we needed two different things:</p>\n<ol>\n<li>All the NFT images on IPFS, which we had.</li>\n<li>A set of CSV files providing the NFTs to be generated, together with all of their metadata and owners.</li>\n</ol>\n<p>The former was handled by the previous step. The latter came from additional Rust tooling that leveraged the same internal libraries we wrote for the backend application. The purpose of this tooling was to:</p>\n<ul>\n<li>Aggregate the total set of contributions from the blockchain.</li>\n<li>Stratify contributions into individual meteors of different rarity.</li>\n<li>Apply the appropriate algorithms to randomly decide which meteors receive an egg and which don't.</li>\n<li>Assign eggs among the meteors.</li>\n<li>Assign additional metadata to the meteors.</li>\n<li>Choose an appropriate and unique meteor image for each meteor based on its needed metadata. (This relies on the Python-generated CSV file above.)</li>\n</ul>\n<p>This process produced a few different pieces of data:</p>\n<ul>\n<li>CSV files for meteor NFT generation. 
There's nothing secret about these; you could reconstruct them yourself by analyzing the NFT minting on the blockchain.</li>\n<li>The distribution of attributes (such as essence, crystals, distance, etc.) among the meteors for calculating rarity of individual traits. Again, this can be derived easily from public information.</li>\n<li>A file that tracks the meteor/egg mapping. This is the one outcome from this process that is a closely guarded secret.</li>\n</ul>\n<p>This final point also influenced the design of the next few stages of this project. Specifically, while a smart contract would be the more natural way to interact with NFTs in general, we cannot expose the meteor/egg mapping on the blockchain. Therefore, the &quot;cracking&quot; phase (which will allow users to exchange meteors for their potential eggs) will need to work with another backend application.</p>\n<p>In any event, this metadata-generation process was something we tested multiple times on data from our beta runs, and we were ready to produce the files and send them over to Knowhere.art for minting soon after the shower. I believe users got NFTs in their wallets within 8 hours of the end of the shower, which was a pretty good timeframe overall.</p>\n<h2 id=\"opening-the-cave\">Opening the cave</h2>\n<p>The final step was opening the cave, a new page on the meteor site that allows users to view their meteors. This phase was achieved by updating the configuration values of the backend to include:</p>\n<ul>\n<li>The smart contract address of the NFT collection</li>\n<li>The total number of meteors</li>\n<li>The trait distribution</li>\n</ul>\n<p>Once we switched the config values, the cave opened up, and users were able to access it. Besides pulling the static information mentioned above from the server, all cave page interactions occur fully client side, with the client querying the blockchain using the Terra.js library.</p>\n<p>And that's where we're at today. 
The showers completed, users got their meteors, the cave is open, and we're back to work on implementing the cracking phase of this project. W00t!</p>\n<h2 id=\"problems\">Problems</h2>\n<p>Overall, this project went pretty smoothly in production. However, there were a few gotcha moments worth mentioning.</p>\n<h3 id=\"fcd-rate-limiting\">FCD rate limiting</h3>\n<p>The biggest issue we hit during the showers, and the one that had the biggest potential to break everything, was FCD rate limiting. We'd done extensive testing prior to the real showers on testnet, with many volunteer testers in addition to bots. We never ran into a single example that I'm aware of where rate limiting kicked in.</p>\n<p>However, the real production event ran into such rate limiting issues about 10 showers in. (We'll look at how they manifested in a moment.) There were multiple potential contributing factors for this:</p>\n<ul>\n<li>There was simply far greater activity in the real event than we had tested for.</li>\n<li>Most of our testing was limited to just 10 showers, and the real event went for 44.</li>\n<li>There may be different rate limiting rules for FCD on mainnet versus testnet.</li>\n</ul>\n<p>Whatever the case, we began to notice the rate limiting when we tried to roll out a new feature. We implemented the Telescope functionality, which allowed users to see the historical floor prices in previous showers.</p>\n<p><img src=\"/images/blog/levana-nft/telescope.png\" alt=\"Telescope\" /></p>\n<p>After pushing the change to ECS, however, we noticed that the new deployment didn't go live. The reason was that, during the initial data load process, the new processes were receiving rate limiting responses and dying. We tried fixing this by adding delays and other kinds of retry logic. However, none of these combinations allowed the application to begin processing requests within ECS's readiness check period. 
(We could have simply turned off health checks, but that would have opened a new can of worms.)</p>\n<p>This problem was fairly critical. Not being able to roll out new features or bug fixes was worrying. But more troubling was the lack of autohealing. The existing instances continued to run fine, because they only needed to download small amounts of data from FCD to stay up-to-date, and therefore never triggered the rate limiting. But if any of those instances went down, ECS wouldn't be able to replace them with healthy instances.</p>\n<p>Fortunately, we had already written the majority of a caching solution in prior weeks, but had not finished the work because it hadn't seemed like a priority. After a few hair-raising hours of effort, we got a solution in place which:</p>\n<ul>\n<li>Saved all transactions to a YAML file (a binary format would have been a better choice, but YAML was the easiest to roll out)</li>\n<li>Uploaded this YAML file to S3</li>\n<li>Ran this save/upload process on a loop, updating every 10 minutes</li>\n<li>Modified the application logic to start off by first downloading the YAML file from S3, and then doing a delta load from there using FCD</li>\n</ul>\n<p>This reduced startup time significantly, bypassed the rate limiting completely, and allowed us to roll out new features and not worry about the entire site going down.</p>\n<h3 id=\"ipfs-hosting\">IPFS hosting</h3>\n<p>FP Complete's DevOps approach is decidedly cloud-focused. For large blob storage, our go-to solution is almost always the cloud provider's blob store, which is S3 in the case of Amazon. We had zero experience with large scale IPFS data hosting prior to this project, which presented a unique challenge.</p>\n<p>As mentioned, we didn't want to go with one of the IPFS pinning services, since the rate limiting may have prevented us from uploading all the pregenerated images. (Rate limiting is beginning to sound like a pattern here...) 
Being comfortable with S3, we initially tried hosting the images using <a href=\"https://github.com/ipfs/go-ds-s3\">go-ds-s3</a>, a plugin for the <code>ipfs</code> CLI that uses S3 for storage. We still don't know why, but this never worked correctly for us. Instead, we reverted to storing the raw image data on Amazon EBS, which is more expensive and less durable, but actually worked. To fix the durability issue, we backed up all the raw image files to S3.</p>\n<p>Overall, however, we're not happy with this outcome. The cost for this hosting is relatively high, and we haven't set up a truly fault-tolerant, highly available hosting solution. At this point, we would like to switch over to an IPFS pinning service, such as Pinata. Now that the images are available on IPFS, issuing API calls to pin those files should be easier than uploading the complete images. We plan to use the following workflow for other images going forward:</p>\n<ul>\n<li>Generate the raw images on EC2</li>\n<li>Upload for durability to S3</li>\n<li>Run <code>ipfs</code> locally to make the images available on IPFS</li>\n<li>Pin the images to a service like Pinata</li>\n<li>Take down the EC2 instance</li>\n</ul>\n<p>The next issue we ran into was... RATE LIMITING, again. This time, we discovered that Cloudflare's IPFS gateway was rate limiting users downloading their meteor images, resulting in a situation where users would see only some of their meteors appear on their cave page. 
We solved this one by sticking CloudFront in front of the S3 bucket holding the meteor images and serving from there instead.</p>\n<p>Going forward, when it's available, <a href=\"https://blog.cloudflare.com/introducing-r2-object-storage/\">Cloudflare R2</a> is a promising alternative to the S3+CloudFront offering, due to its reduced storage costs and the complete elimination of bandwidth costs.</p>\n<h2 id=\"lessons-learned\">Lessons learned</h2>\n<p>This project was a great mix, pairing existing expertise with some new challenges. Some of the top lessons we learned here were:</p>\n<ol>\n<li>We got a lot of experience working directly with the LCD and FCD APIs for Terra from Rust code. Previously, with our DeFi work, this almost exclusively sat behind Terra.js usage.</li>\n<li>IPFS was a brand-new topic for us, and we got to play with some pretty extreme cases right off the bat. Understanding the concepts in pinning and gateways will help us immensely with future NFT work.</li>\n<li>Since ECS is a relatively unusual technology for us, we got to learn quite a few of the idiosyncrasies it has versus Kubernetes, our more standard toolchain.</li>\n<li>While rate limiting is a concept we're familiar with and have worked with many times in the past, these particular obstacles were all new, and each of them surprising in different ways. Typically, we would have some simpler workarounds for these rate limiting issues, such as using authenticated requests. Having to solve each problem in such an extreme way was surprising.</li>\n<li>And while we've been involved in blockchain and smart contract work for years, this was our first time working directly with NFTs. This was probably the simplest lesson learned. 
The API for querying the NFT contracts is <a href=\"https://github.com/CosmWasm/cw-nfts/blob/main/packages/cw721/README.md\">fairly straightforward</a>, and represented a small portion of the time spent on this project.</li>\n</ol>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>We're very excited to have been part of an event as successful as the Levana Dragons NFT meteor shower. This was a fun site to work on, with a huge and active user base, and some interesting challenges. It was great to pair our standard cloud DevOps practices with common blockchain and smart contract practices. And using Rust brought some great advantages we're quite happy with.</p>\n<p>Going forward, we look forward to continuing to evolve the backend, frontend, and DevOps of this project, just like the NFTs themselves will be evolving. Happy dragon luck to all!</p>\n<p><em>Interested in learning more? Check out these relevant articles:</em></p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps, part 1</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n<li><a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360®</a></li>\n<li><a href=\"https://tech.fpcomplete.com/products/zehut/\">Zehut</a></li>\n</ul>\n<p><em>Does this kind of work sound interesting? Consider <a href=\"https://tech.fpcomplete.com/jobs/\">applying to work at FP Complete</a>.</em></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
        "slug": "levana-nft-launch",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Levana NFT Launch",
        "description": "We were excited to recently help Levana Protocol with their NFT launch. This blog post explains some technical details behind the scenes that allowed this to happen.",
        "updated": null,
        "date": "2021-11-17",
        "year": 2021,
        "month": 11,
        "day": 17,
        "taxonomies": {
          "tags": [
            "blockchain",
            "rust",
            "devops"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wesley Crook",
          "keywords": "blockchain, NFT, cryptocurrency, Terra",
          "blogimage": "/images/blog-listing/blockchain.png",
          "image": "images/blog/thumbs/levana-nft-launch.png"
        },
        "path": "/blog/levana-nft-launch/",
        "components": [
          "blog",
          "levana-nft-launch"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "overview-of-the-event",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#overview-of-the-event",
            "title": "Overview of the event",
            "children": []
          },
          {
            "level": 2,
            "id": "backend-server",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#backend-server",
            "title": "Backend server",
            "children": []
          },
          {
            "level": 2,
            "id": "react-frontend",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#react-frontend",
            "title": "React frontend",
            "children": []
          },
          {
            "level": 2,
            "id": "hosting-infrastructure",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#hosting-infrastructure",
            "title": "Hosting infrastructure",
            "children": []
          },
          {
            "level": 2,
            "id": "gitlab",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#gitlab",
            "title": "GitLab",
            "children": []
          },
          {
            "level": 2,
            "id": "aws-lockdown",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#aws-lockdown",
            "title": "AWS lockdown",
            "children": []
          },
          {
            "level": 2,
            "id": "during-the-shower",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#during-the-shower",
            "title": "During the shower",
            "children": []
          },
          {
            "level": 2,
            "id": "image-creation",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#image-creation",
            "title": "Image creation",
            "children": []
          },
          {
            "level": 2,
            "id": "minting",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#minting",
            "title": "Minting",
            "children": []
          },
          {
            "level": 2,
            "id": "opening-the-cave",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#opening-the-cave",
            "title": "Opening the cave",
            "children": []
          },
          {
            "level": 2,
            "id": "problems",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#problems",
            "title": "Problems",
            "children": [
              {
                "level": 3,
                "id": "fcd-rate-limiting",
                "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#fcd-rate-limiting",
                "title": "FCD rate limiting",
                "children": []
              },
              {
                "level": 3,
                "id": "ipfs-hosting",
                "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#ipfs-hosting",
                "title": "IPFS hosting",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "lessons-learned",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#lessons-learned",
            "title": "Lessons learned",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 3960,
        "reading_time": 20,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/blockchain-technology-smart-contracts-save-money/",
            "title": "Blockchain Technology, Smart Contracts, and Your Company"
          }
        ]
      },
      {
        "relative_path": "blog/announcing-amber-ci-secret-tool.md",
        "colocated_path": null,
        "content": "<p>Years ago, <a href=\"https://travis-ci.org/\">Travis CI</a> introduced a method for passing secret values from your repository into the Travis CI system. This method relies on encryption to ensure that anyone can provide a new secret, but only the CI system itself can read those secrets. I've always thought that the Travis approach to secrets was one of the best around, and was disappointed that other CI tools continued to use the more standard &quot;set and update secrets in a web interface&quot; approach. (We'll get into the advantages of the encrypted-secrets approach a bit later.)</p>\n<p>Fast-forward to earlier this year, and for running <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a> deployment jobs, we found that the secrets-in-CI-web-interface approach simply wasn't scaling. So I hacked together a quick script that used GPG and symmetric key encryption to encrypt a <code>secrets.sh</code> file containing the relevant secrets for CI (or, really, CD in this case). This worked, but had some downsides.</p>\n<p>A few weeks ago, I finally bit the bullet and rewrote this ugly script. Instead of using GPG and symmetric key encryption, I used <a href=\"https://lib.rs/crates/sodiumoxide\"><code>sodiumoxide</code></a> and public key encryption. This addressed essentially all the pain points I had with our CD setup. However, this tool was very much custom-built for Kube360.</p>\n<p>Over the weekend, I extracted the general-purpose components of this tool into a <a href=\"https://github.com/fpco/amber\">new open source repository</a>. This blog post is announcing the first public release of Amber, a tool geared at CI/CD systems for better management of secret data over time. There's basic information in that repo to describe how to use the tool. 
This blog post is intended to go into more detail on why I believe encrypted-secrets is a better approach than web-interface-of-secrets.</p>\n<h2 id=\"the-pain-points\">The pain points</h2>\n<p>There are two primary issues with the standard CI secrets management approach:</p>\n<ol>\n<li>It can be tedious to manage a large number of values inside a web interface. I've personally made mistakes copy-pasting values. And if you ever need to run a script locally for testing purposes, copying all the values out each time is an even bigger pain. (More on that below.)</li>\n<li>It's completely reasonable for secret values to change over time. However, there's no evidence of this in the source repository feeding into the CI system. Instead, the changes happen opaquely, and can never be observed as having changed, nor an old build faithfully reproduced with the original values. (This is pretty similar to why we believe <a href=\"https://tech.fpcomplete.com/blog/2017/04/ci-build-process-in-code-repository/\">your CI build process should be in your code repository</a>.)</li>\n</ol>\n<p>With encrypted values within a repository, both of these things change. Adding new encrypted values is now a command line call, which for many of us is less tedious and more foolproof than web interfaces. The encrypted secrets are stored in the Git repository itself, so as values change over time, the files provide evidence of that fact. And checking out an old commit from the repository will allow you to rerun a build with exactly the same secrets as when the commit was made.</p>\n<h2 id=\"why-public-key\">Why public key</h2>\n<p>One of the important changes I made from the GPG script mentioned above was public key, instead of symmetric key, encryption. With symmetric key encryption, you use the same key to encrypt and decrypt data. That means that all people who want to encrypt a value into the repository need access to a piece of secret data. 
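To illustrate the property we're after, here's a sketch using PyNaCl, a Python binding to the same libsodium library that <code>sodiumoxide</code> wraps (the secret value here is made up, and this is an illustration of the sealed-box pattern rather than Amber's exact implementation):</p>\n<pre data-lang=\"python\" class=\"language-python \"><code class=\"language-python\" data-lang=\"python\">from nacl.public import PrivateKey, SealedBox\n\n# The maintainer generates a keypair once; only the public key is\n# committed to the repository.\nsecret_key = PrivateKey.generate()\npublic_key = secret_key.public_key\n\n# Anyone with repository access can encrypt using the public key alone...\nciphertext = SealedBox(public_key).encrypt(b&quot;database-password&quot;)\n\n# ...but only the holder of the secret key (the CI system) can decrypt.\nassert SealedBox(secret_key).decrypt(ciphertext) == b&quot;database-password&quot;\n</code></pre>\n<p>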
While encrypting new secret values isn't <em>that</em> common an activity, requiring access to that secret data is best avoided.</p>\n<p>Instead, with public key encryption, we generate a secret key and public key. The public key lives inside the repository, in the same file as the secrets themselves. With that in place, anyone with access to the repo can encrypt new values, without any ability to read existing values.</p>\n<p>Further, since the public key is available in the repository, Amber is able to perform sanity checks to ensure that its secret key matches up with the public key in the repository. While the encryption algorithms we use provide the ability to ensure message integrity, this self-check provides for nicer diagnostics, clearly distinguishing &quot;message corrupted&quot; from &quot;looks like you're using the wrong secret key for this repository.&quot;</p>\n<h2 id=\"minimizing-deltas\">Minimizing deltas</h2>\n<p>Amber is optimized for the Git repository case. This includes wanting to minimize the deltas when updating secrets. This resulted in three design decisions:</p>\n<ul>\n<li>\n<p>The config file format is YAML. Its whitespace-sensitive formatting makes it a great choice to minimize the number of lines affected when updating a secret. While other formats (like TOML) would have been great choices too, I stuck with YAML as, anecdotally, it seems to have stronger overall language support for people wishing to write companion tools.</p>\n</li>\n<li>\n<p>In addition to storing the secret name and encrypted value (the ciphertext), Amber additionally includes a SHA256 digest of the secret. This means that, if you encrypt the same value twice, Amber can detect this and avoid generating a new ciphertext. 
This has the additional benefit of letting users check if they know the secret value without being able to decrypt the file.</p>\n</li>\n<li>\n<p>The most natural representation of this data would be a YAML mapping, something like:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">secrets:\n  NAME1:\n    sha256: deadbeef\n    cipher: abc123\n</code></pre>\n<p>However, in most languages, the ordering of keys in a mapping is arbitrary. This makes it harder to read these files, and means that arbitrary minor changes may result in large deltas. Instead, Amber stores secrets in an array:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">secrets:\n- name: NAME1\n  sha256: deadbeef\n  cipher: abc123\n</code></pre>\n</li>\n</ul>\n<p>This all works together to achieve what for me is the goal of secrets in a repository: you can trivially see in a <code>git diff</code> which secrets values were added, removed, or updated.</p>\n<h2 id=\"local-running\">Local running</h2>\n<p>Ideally production deployments are only ever run from the official CI/CD system designated for that. However:</p>\n<ol>\n<li>Sometimes during development it's much easier to iterate by doing non-production deployments from your local system.</li>\n<li>As a realist, I have to admit that even the best run DevOps teams may occasionally need to bend the rules for expediency or better debugging of a production issue.</li>\n</ol>\n<p>For Kube360, it wasn't unreasonable to have about a dozen secret values for a standard deployment. Copy/pasting all of those to your local machine each time you want to debug an issue wasn't feasible. This encouraged some worst practices, such as keeping the secret values in a plain-text shell script file locally. For a development cluster, that's not the worst thing in the world. 
But lax security practices in dev tend to bleed into prod too easily.</p>\n<p>Copying a single secret value from CI secrets or a team password manager is a completely different story. It takes 30 seconds at the beginning of a debug session. I have no objection to doing so.</p>\n<p>Even this may be something we can bypass with cloud secrets managers, which I'll mention below.</p>\n<h2 id=\"what-s-with-the-name\">What's with the name?</h2>\n<p>As we all know, there are two hard problems in computer science:</p>\n<ol>\n<li>Cache invalidation</li>\n<li>Naming things</li>\n<li>Off-by-one errors</li>\n</ol>\n<p>I named this tool Amber based on Jurassic Park, and the idea of some highly important data (dinosaur DNA) being trapped in amber under layers of sediment. This fit in nicely with my image of storing encrypted secrets inside the commits of a Git repository. But since I just finished playing &quot;Legend of Zelda: Skyward Sword,&quot; a more appropriate image seems to be:</p>\n<p><img src=\"/images/blog/amber-zelda.png\" alt=\"Zelda trapped in amber\" /></p>\n<h2 id=\"implementation\">Implementation</h2>\n<p>I wrote this tool in Rust. It's a pretty small codebase currently, clocking in at only 445 SLOC of Rust code. It's also a pretty simple overall implementation, if anyone is interested in a first project to contribute to.</p>\n<h2 id=\"future-enhancements\">Future enhancements</h2>\n<p>Future enhancements will be driven by internal and customer needs at FP Complete, as well as feedback we receive on the issue tracker and pull requests. I have a few enhancement ideas ranging from concrete to nebulous:</p>\n<ul>\n<li>Masking values. Currently, <code>amber exec</code> will simply run the child process without modifying its output at all. A standard CI system feature is to mask secret values from output. Implementing such a change in Amber should be straightforward. 
(<a href=\"https://github.com/fpco/amber/issues/1\">Issue #1</a>)</li>\n<li>Tie-ins with cloud secrets management systems. Currently, Amber's only source of the secret key is via environment variables. There are many use cases where grabbing the data from a secrets manager, such as AWS Secrets Manager or Azure Key Vault, would be a better choice. In particular, during deployments, this could allow delegating access to secrets to existing cloud-native permissions mechanisms. See <a href=\"https://github.com/fpco/amber/issues/2\">issue #2</a> and <a href=\"https://github.com/fpco/amber/pull/4\">pull request #4</a> for some more information. One possible approach here is to follow a pattern of naming the secret based on the public key, leading to a zero-config approach to discovering the secret key (since the public key is already in the repository).</li>\n<li>Additional platform support. Currently, we're building executables for x86-64 on Linux (static via musl), Windows, and Mac. Cross compilation support from Rust is great, and one of the reasons I prefer writing CI tools like this in Rust. However, the <code>sodiumoxide</code> library depends on <code>libsodium</code>, so additional GitHub Actions setup will be necessary to get these builds working.</li>\n<li>Auto-generation of passwords. In our Kube360 work, a common need is to generate a temporary password to be used by different components in the system (e.g., an OpenID Connect client secret used by both the Identity Provider and Service Provider). A simple <code>amber gen-password CLIENT_SECRET</code> subcommand may be nice.</li>\n<li>I haven't released this code to <a href=\"https://crates.io/\">crates</a>, but if there's interest I'd be happy to do so.</li>\n<li>Support for encrypted files in addition to encrypted environment variables. 
I haven't really thought through what the interface for this may look like.</li>\n</ul>\n<h2 id=\"get-started\">Get started</h2>\n<p>There are <a href=\"https://github.com/fpco/amber#readme\">instructions in the repo</a> for getting started with Amber. The basic steps are:</p>\n<ul>\n<li>Download the executable from <a href=\"https://github.com/fpco/amber/releases\">the release page</a> or build it yourself</li>\n<li>Use <code>amber init</code> to create an <code>amber.yaml</code> file and a secret key</li>\n<li>Store the secret key somewhere safe, like your password manager, and additionally within your CI system's secrets\n<ul>\n<li>In theory, this is the last value you'll ever store there!</li>\n</ul>\n</li>\n<li>Add your secrets with <code>amber encrypt</code></li>\n<li>Commit <code>amber.yaml</code> to your repository</li>\n<li>Modify your CI scripts to download the Amber executable and use <code>amber exec</code> to run commands that need secrets</li>\n</ul>\n<h2 id=\"more-from-fp-complete\">More from FP Complete</h2>\n<p>FP Complete is an IT consulting firm specializing in server-side development, DevOps, Rust, and Haskell. A large part of our consulting involves improving and automating build and deployment pipelines. If you're interested in additional help from FP Complete in one of these domains, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us</a>.</p>\n<p>Interested in working with a team of DevOps, Rust, and Haskell engineers to solve real world problems? We're actively <a href=\"https://tech.fpcomplete.com/jobs/\">hiring senior and lead DevOps engineers</a>.</p>\n<p>Want to read more? Check out:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/\">Our blog</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Our Rust homepage</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/",
        "slug": "announcing-amber-ci-secret-tool",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing Amber, encrypted secrets management",
        "description": "We've released a new tool, Amber, to help better manage secrets in Git repositories for CI purposes. Read more about the motivation and how to get started.",
        "updated": null,
        "date": "2021-08-17",
        "year": 2021,
        "month": 8,
        "day": 17,
        "taxonomies": {
          "tags": [
            "kubernetes",
            "rust"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/devops.png",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/announcing-amber.png"
        },
        "path": "/blog/announcing-amber-ci-secret-tool/",
        "components": [
          "blog",
          "announcing-amber-ci-secret-tool"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "the-pain-points",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#the-pain-points",
            "title": "The pain points",
            "children": []
          },
          {
            "level": 2,
            "id": "why-public-key",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#why-public-key",
            "title": "Why public key",
            "children": []
          },
          {
            "level": 2,
            "id": "minimizing-deltas",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#minimizing-deltas",
            "title": "Minimizing deltas",
            "children": []
          },
          {
            "level": 2,
            "id": "local-running",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#local-running",
            "title": "Local running",
            "children": []
          },
          {
            "level": 2,
            "id": "what-s-with-the-name",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#what-s-with-the-name",
            "title": "What's with the name?",
            "children": []
          },
          {
            "level": 2,
            "id": "implementation",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#implementation",
            "title": "Implementation",
            "children": []
          },
          {
            "level": 2,
            "id": "future-enhancements",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#future-enhancements",
            "title": "Future enhancements",
            "children": []
          },
          {
            "level": 2,
            "id": "get-started",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#get-started",
            "title": "Get started",
            "children": []
          },
          {
            "level": 2,
            "id": "more-from-fp-complete",
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#more-from-fp-complete",
            "title": "More from FP Complete",
            "children": []
          }
        ],
        "word_count": 1874,
        "reading_time": 10,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/istio-mtls-debugging-story.md",
        "colocated_path": null,
        "content": "<p>Last week, our team was working on a feature enhancement to <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>. We work with clients in regulated industries, and one of the requirements was fully encrypted traffic throughout the cluster. While we've supported Istio's mutual TLS (mTLS) as an optional feature for end-user applications, not all of our built-in services were using mTLS strict mode. We were working on rolling out that support.</p>\n<p>One of the cornerstones of Kube360 is our centralized authentication system, which is primarily supplied by a service (called <code>k3dash</code>) that receives incoming traffic, performs authentication against an external identity provider (such as Okta, Azure AD, or others), and then provides those credentials to the other services within the cluster, such as the Kubernetes Dashboard or Grafana. This service in particular was giving some trouble.</p>\n<p>Before diving into the bugs and the debugging journey, however, let's review both Istio's mTLS support and relevant details of how <code>k3dash</code> operates.</p>\n<p><em>Interested in solving these kinds of problems? We're looking for experienced DevOps engineers to join our global team. We're hiring globally, and particularly looking for another US lead engineer. If you're interested, <a href=\"mailto:[email protected]\">send your CV to [email protected]</a>.</em></p>\n<h2 id=\"what-is-mtls\">What is mTLS?</h2>\n<p>In a typical Kubernetes setup, encrypted traffic comes into the cluster and hits a load balancer. That load balancer terminates the TLS connection, resulting in the decrypted traffic. That decrypted traffic is then sent to the relevant service within the cluster. Since traffic within the cluster is typically considered safe, for many use cases this is an acceptable approach.</p>\n<p>But for some use cases, such as handling Personally Identifiable Information (PII), extra safeguards may be desired or required. 
In those cases, we would like to ensure that <em>all</em> network traffic, even traffic inside the same cluster, is encrypted. That gives extra guarantees against both snooping (reading data in transit) and spoofing (faking the source of data) attacks. This can help mitigate the impact of other flaws in the system.</p>\n<p>Implementing this complete data-in-transit encryption system manually requires a major overhaul to essentially every application in the cluster. You'll need to teach all of them to terminate their own TLS connections, issue certificates for all applications, and add a new Certificate Authority for all applications to respect.</p>\n<p>Istio's mTLS handles this outside of the application. It installs a sidecar that communicates with your application over a localhost connection, bypassing exposed network traffic. It uses sophisticated port forwarding rules (via IP tables) to redirect incoming and outgoing traffic to and from the pod to go via the sidecar. And the Envoy proxy in the sidecar handles all the logic of obtaining TLS certificates, refreshing keys, termination, etc.</p>\n<p>The way Istio handles all of this is pretty incredible. When it works, it works great. And when it fails, it can be disastrously difficult to debug. Which is what happened here (though thankfully it took less than a day to get to a conclusion). In the realm of <em>epic foreshadowment</em>, let me point out three specific aspects of Istio's mTLS worth mentioning.</p>\n<ul>\n<li>In strict mode, which is what we're going for, the Envoy sidecar will reject any incoming plaintext communication.</li>\n<li>Something I hadn't recognized at first, but now have fully internalized: normally, if you make an HTTP connection to a host that doesn't exist, you'll get a failed connection error. You definitely <em>won't</em> get an HTTP response. With Istio, however, you'll <em>always</em> make a successful outgoing HTTP connection, since your connection is going to Envoy itself. 
If the Envoy proxy cannot make the connection, it will return an HTTP response body with a 503 error message, like most proxies.</li>\n<li>The Envoy proxy has special handling for some protocols. Most importantly, if you make a plaintext HTTP outgoing connection, the Envoy proxy has sophisticated abilities to parse the outgoing request, understand details about various headers, and do intelligent routing.</li>\n</ul>\n<p>OK, that's mTLS. Let's talk about the other player here: <code>k3dash</code>.</p>\n<h2 id=\"k3dash-and-reverse-proxying\"><code>k3dash</code> and reverse proxying</h2>\n<p>The primary method <code>k3dash</code> uses to provide authentication credentials to other services inside the cluster is HTTP reverse proxying. This is a common technique, and common libraries exist for doing it. In fact, <a href=\"https://www.stackage.org/package/http-reverse-proxy\">I wrote one such library</a> years ago. We've already mentioned a common use case of reverse proxying: load balancing. In a reverse proxy situation, incoming traffic is received by one server, which analyzes the incoming request, performs some transformations, and then chooses a destination service to forward the request to.</p>\n<p>One of the most important aspects of reverse proxying is header management. There are a few different things you can do at the header level, such as:</p>\n<ul>\n<li>Remove hop-by-hop headers, such as <code>transfer-encoding</code>, which apply to a single hop and not the end-to-end communication between client and server.</li>\n<li>Inject new headers. For example, in <code>k3dash</code>, we regularly inject headers recognized by the final services for authentication purposes.</li>\n<li>Leave headers completely untouched. 
This is often the case with headers like <code>content-type</code>, where we typically want the client and final server to exchange data without any interference.</li>\n</ul>\n<p>As one <em>epic foreshadowment</em> example, consider the <code>Host</code> header in a typical reverse proxy situation. I may have a single load balancer handling traffic for a dozen different domain names, including domain names <code>A</code> and <code>B</code>. And perhaps I have a single service behind the reverse proxy serving the traffic for both of those domain names. I need to make sure that my load balancer forwards on the <code>Host</code> header to the final service, so it can decide how to respond to the request.</p>\n<p><code>k3dash</code> in fact uses the library linked above for its implementation, and is following fairly standard header forwarding rules, plus making some specific modifications within the application.</p>\n<p>I think that's enough backstory, and perhaps you're already beginning to piece together what went wrong based on my clues above. Anyway, let's dive in!</p>\n<h2 id=\"the-problem\">The problem</h2>\n<p>One of my coworkers, Sibi, got started on the Istio mTLS strict mode migration. He got strict mode turned on in a test cluster, and then began to figure out what was broken. I don't know all the preliminary changes he made. But when he reached out to me, he'd gotten us to a point where the Kubernetes load balancer was successfully receiving the incoming requests for <code>k3dash</code> and forwarding them along to <code>k3dash</code>. <code>k3dash</code> was able to log the user in and provide its own UI display. All good so far.</p>\n<p>However, following through from the main UI to the Kubernetes Dashboard would fail, and we'd end up with this error message in the browser:</p>\n<blockquote>\n<p>upstream connect error or disconnect/reset before headers. 
reset reason: connection failure</p>\n</blockquote>\n<p>Sibi believed this to be a problem with the <code>k3dash</code> codebase itself and asked me to step in to help debug.</p>\n<h2 id=\"the-wrong-rabbit-hole-and-incredible-laziness\">The wrong rabbit hole, and incredible laziness</h2>\n<p>This whole section is just a cathartic gripe session on how I foot-gunned myself. I'm entirely to blame for my own pain, as we're about to see.</p>\n<p>It seemed pretty clear that the outgoing connection from the <code>k3dash</code> pod to the <code>kubernetes-dashboard</code> pod was failing. (And this turned out to be a safe guess.) The first thing I wanted to do was make a simpler repro, which in this case involved <code>kubectl exec</code>ing into the <code>k3dash</code> container and <code>curl</code>ing to the in-cluster service endpoint. Essentially:</p>\n<pre><code>$ curl -ivvv http:&#x2F;&#x2F;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local&#x2F;\n*   Trying 172.20.165.228...\n* TCP_NODELAY set\n* Connected to kube360-kubernetes-dashboard.kube360-system.svc.cluster.local (172.20.165.228) port 80 (#0)\n&gt; GET &#x2F; HTTP&#x2F;1.1\n&gt; Host: kube360-kubernetes-dashboard.kube360-system.svc.cluster.local\n&gt; User-Agent: curl&#x2F;7.58.0\n&gt; Accept: *&#x2F;*\n&gt;\n&lt; HTTP&#x2F;1.1 503 Service Unavailable\nHTTP&#x2F;1.1 503 Service Unavailable\n&lt; content-length: 84\ncontent-length: 84\n&lt; content-type: text&#x2F;plain\ncontent-type: text&#x2F;plain\n&lt; date: Wed, 14 Jul 2021 15:29:04 GMT\ndate: Wed, 14 Jul 2021 15:29:04 GMT\n&lt; server: envoy\nserver: envoy\n&lt;\n* Connection #0 to host kube360-kubernetes-dashboard.kube360-system.svc.cluster.local left intact\nupstream connect error or disconnect&#x2F;reset before headers. reset reason: local reset\n</code></pre>\n<p>This reproed the problem right away. Great! 
I was now completely convinced that the problem was not <code>k3dash</code> specific, since neither <code>curl</code> nor <code>k3dash</code> could make the connection, and they both gave the same <code>upstream connect error</code> message. I could think of a few different reasons for this to happen, none of which were correct:</p>\n<ul>\n<li>The outgoing packets from the container were not being sent to the Envoy proxy. I strongly believed this one for a while. But if I'd thought a bit harder, I would have realized that this was completely impossible. That <code>upstream connect error</code> message was of course coming from the Envoy proxy itself! If we were having a normal connection failure, we would have received the error message at the TCP level, not as an HTTP 503 response code. Next!</li>\n<li>The Envoy sidecar was receiving the packets, but the mesh was confused enough that it couldn't figure out how to connect to the destination Envoy sidecar. This turned out to be partially right, but not in the way I thought.</li>\n</ul>\n<p>I futzed around with lots of different attempts here but was essentially stalled. Until Sibi noticed something fascinating. It turns out that the following, seemingly nonsensical command <em>did</em> work:</p>\n<pre><code>curl http:&#x2F;&#x2F;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local:443&#x2F;\n</code></pre>\n<p>For some reason, making an <em>insecure</em> HTTP request over 443, the <em>secure</em> HTTPS port, worked. This made no sense, of course. Why would using the wrong port fix everything? And this is where incredible laziness comes into play. You see, Kubernetes Dashboard's default configuration uses TLS, and requires all of that setup I mentioned above about passing around certificates and updating accepted Certificate Authorities. But you can turn off that requirement, and make it listen on plain text. 
Since (1) this was intracluster communication, and (2) we've always had strict mTLS on our roadmap, we decided to simply turn off TLS in the Kubernetes Dashboard. However, when doing so, I forgot to switch the port number from 443 to 80.</p>\n<p>Not to worry though! I <em>did</em> remember to correctly configure <code>k3dash</code> to communicate with Kubernetes Dashboard, using insecure HTTP, over port 443. Since both parties agreed on the port, it didn't matter that it was the wrong port.</p>\n<p>But this was all very frustrating. It meant that the &quot;repro&quot; wasn't a repro at all. <code>curl</code>ing on the wrong port was giving the same error message, but for a different reason. In the meantime, we went ahead and changed Kubernetes Dashboard to listen on port 80 and <code>k3dash</code> to connect on port 80. We thought there <em>might</em> be a possibility that the Envoy proxy was giving some special treatment to the port number, which in retrospect doesn't really make much sense. In any event, we ended up in the same place: our &quot;repro&quot; wasn't a repro at all.</p>\n<h2 id=\"the-bug-is-in-k3dash\">The bug is in <code>k3dash</code></h2>\n<p>Now it was clear that Sibi was right. <code>curl</code> could connect, <code>k3dash</code> couldn't. The bug <em>must</em> be inside <code>k3dash</code>. But I couldn't figure out how. Being the author of essentially all the HTTP libraries involved in this toolchain, I began to worry that my HTTP client library itself might somehow be the source of the bug. I went down a rabbit hole there too, putting together some minimal sample programs outside <code>k3dash</code>. I <code>kubectl cp</code>ed them over and then ran them... and everything worked fine. Phew, my libraries were working, but not <code>k3dash</code>.</p>\n<p>Then I did the thing I should have done at the very beginning. I looked at the logs very, very carefully. Remember, <code>k3dash</code> is doing a reverse proxy. 
So, it receives an incoming request, modifies it, makes the new request, and then sends a modified response back. The logs included the modified outgoing HTTP request (some fields modified to remove private information):</p>\n<pre><code>2021-07-15 05:20:39.820662778 UTC ServiceRequest Request {\n  host                 = &quot;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local&quot;\n  port                 = 80\n  secure               = False\n  requestHeaders       = [(&quot;X-Real-IP&quot;,&quot;127.0.0.1&quot;),(&quot;host&quot;,&quot;test-kube360-hostname.hidden&quot;),(&quot;upgrade-insecure-requests&quot;,&quot;1&quot;),(&quot;user-agent&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;accept&quot;,&quot;text&#x2F;html,application&#x2F;xhtml+xml,application&#x2F;xml;q=0.9,image&#x2F;avif,image&#x2F;webp,image&#x2F;apng,*&#x2F;*;q=0.8,application&#x2F;signed-exchange;v=b3;q=0.9&quot;),(&quot;sec-gpc&quot;,&quot;1&quot;),(&quot;referer&quot;,&quot;http:&#x2F;&#x2F;test-kube360-hostname.hidden&#x2F;dash&quot;),(&quot;accept-language&quot;,&quot;en-US,en;q=0.9&quot;),(&quot;cookie&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;x-forwarded-for&quot;,&quot;192.168.0.1&quot;),(&quot;x-forwarded-proto&quot;,&quot;http&quot;),(&quot;x-request-id&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;x-envoy-attempt-count&quot;,&quot;3&quot;),(&quot;x-envoy-internal&quot;,&quot;true&quot;),(&quot;x-forwarded-client-cert&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;Authorization&quot;,&quot;&lt;REDACTED&gt;&quot;)]\n  path                 = &quot;&#x2F;&quot;\n  queryString          = &quot;&quot;\n  method               = &quot;GET&quot;\n  proxy                = Nothing\n  rawBody              = False\n  redirectCount        = 0\n  responseTimeout      = ResponseTimeoutNone\n  requestVersion       = HTTP&#x2F;1.1\n}\n</code></pre>\n<p>I tried to leave in enough content here to give you the same overwhelmed sense that I had looking at it. 
Keep in mind the <code>requestHeaders</code> field is in practice about three times as long. Anyway, with the slimmed down headers, and all my hints throughout, see if you can guess what the problem is.</p>\n<p>Ready? It's the <code>Host</code> header! Let's take a quote from the <a href=\"https://istio.io/latest/docs/ops/configuration/traffic-management/traffic-routing/\">Istio traffic routing documentation</a>. Regarding HTTP traffic, it says:</p>\n<blockquote>\n<p>Requests are routed based on the port and <em><code>Host</code></em> header, rather than port and IP. This means the destination IP address is effectively ignored. For example, <code>curl 8.8.8.8 -H &quot;Host: productpage.default.svc.cluster.local&quot;</code>, would be routed to the <code>productpage</code> Service.</p>\n</blockquote>\n<p>See the problem? <code>k3dash</code> is behaving like a standard reverse proxy, and including the <code>Host</code> header, which is almost always the right thing to do. But not here! In this case, that <code>Host</code> header we're forwarding is confusing Envoy. Envoy is trying to connect to something (<code>test-kube360-hostname.hidden</code>) that doesn't respond to its mTLS connections. That's why we get the <code>upstream connect error</code>. And that's why we got the same response as when we used the wrong port number, since Envoy is configured to only receive incoming traffic on a port that the service is actually listening to.</p>\n<h2 id=\"the-fix\">The fix</h2>\n<p>After all of that, the fix is rather anticlimactic:</p>\n<pre data-lang=\"diff\" class=\"language-diff \"><code class=\"language-diff\" data-lang=\"diff\">-(\\(h, _) -&gt; not (Set.member h _serviceStripHeaders))\n+-- Strip out host headers, since they confuse the Envoy proxy\n+(\\(h, _) -&gt; not (Set.member h _serviceStripHeaders) &amp;&amp; h &#x2F;= &quot;Host&quot;)\n</code></pre>\n<p>We already had logic in <code>k3dash</code> to strip away specific headers for each service. 
And it turns out this logic was primarily used to strip out the <code>Host</code> header for services that got confused when they saw it! Now we just need to strip away the <code>Host</code> header for all the services instead. Fortunately, none of our services perform any logic based on the <code>Host</code> header, so with that in place, we should be good. We deployed the new version of <code>k3dash</code>, and voilà! Everything worked.</p>\n<h2 id=\"the-moral-of-the-story\">The moral of the story</h2>\n<p>I walked away from this adventure with a much better understanding of how Istio interacts with applications, which is great. I got a great reminder to look more carefully at log messages before hardening my assumptions about the source of a bug. And I got a great kick in the pants for being lazy about port number fixes.</p>\n<p>All in all, it was about six hours of debugging fun. And to quote a great Hebrew phrase on it, &quot;היה טוב, וטוב שהיה&quot; (it was good, and good that it <em>was</em> (in the past)).</p>\n<hr />\n<p>As I mentioned above, we're actively looking for new DevOps candidates, especially US-based candidates. 
If you're interested in working with a global team of experienced DevOps, Rust, and Haskell engineers, consider <a href=\"mailto:[email protected]\">sending us your CV</a>.</p>\n<p>And if you're looking for a solid Kubernetes platform, batteries included, so you can offload this kind of tedious debugging to some other unfortunate souls (read: us), <a href=\"https://tech.fpcomplete.com/products/kube360/\">check out Kube360</a>.</p>\n<p>If you liked this article, you may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-kubernetes-windows/\">Deploying Rust with Windows Containers on Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/\">Cloud Vendor Neutrality</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">Secure defaults with Kubernetes Security with Kube360</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
        "slug": "istio-mtls-debugging-story",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "An Istio/mutual TLS debugging story",
        "description": "While rolling out Istio's strict mTLS mode in our Kube360 product, we ran into an interesting corner case problem.",
        "updated": null,
        "date": "2021-07-20",
        "year": 2021,
        "month": 7,
        "day": 20,
        "taxonomies": {
          "tags": [
            "kubernetes",
            "regulated"
          ],
          "categories": [
            "devops",
            "kube360",
            "it-compliance"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/devops.png",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/istio-mtls-debugging-story.png"
        },
        "path": "/blog/istio-mtls-debugging-story/",
        "components": [
          "blog",
          "istio-mtls-debugging-story"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "what-is-mtls",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#what-is-mtls",
            "title": "What is mTLS?",
            "children": []
          },
          {
            "level": 2,
            "id": "k3dash-and-reverse-proxying",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#k3dash-and-reverse-proxying",
            "title": "k3dash and reverse proxying",
            "children": []
          },
          {
            "level": 2,
            "id": "the-problem",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-problem",
            "title": "The problem",
            "children": []
          },
          {
            "level": 2,
            "id": "the-wrong-rabbit-hole-and-incredible-laziness",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-wrong-rabbit-hole-and-incredible-laziness",
            "title": "The wrong rabbit hole, and incredible laziness",
            "children": []
          },
          {
            "level": 2,
            "id": "the-bug-is-in-k3dash",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-bug-is-in-k3dash",
            "title": "The bug is in k3dash",
            "children": []
          },
          {
            "level": 2,
            "id": "the-fix",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-fix",
            "title": "The fix",
            "children": []
          },
          {
            "level": 2,
            "id": "the-moral-of-the-story",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-moral-of-the-story",
            "title": "The moral of the story",
            "children": []
          }
        ],
        "word_count": 2642,
        "reading_time": 14,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          }
        ]
      },
      {
        "relative_path": "blog/cloud-vendor-neutrality.md",
        "colocated_path": null,
        "content": "<p>Earlier this week, Amazon removed Parler from its platform. As a company hosting a network service on a cloud provider today, should you worry about such actions from cloud vendors? And what steps should you be taking now?</p>\n<p>In this post, we'll explore some of the risks associated with being tied to a single vendor, and the costs involved in breaking the dependency. I'll also give some recommendations on low hanging fruit.</p>\n<p>Ultimately, how far down the vendor neutrality path you want to go is a company specific risk mitigation strategy. In this post, we'll explore the raw information, but deeper analysis would be based on your company's specific situation. As usual, if you would like more direct help from the team at FP Complete in understanding these topics, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us for a consultation</a>.</p>\n<h2 id=\"what-is-vendor-neutrality\">What is vendor neutrality?</h2>\n<p>Vendor neutrality is not a binary. There are various levels on a spectrum from an application that leverages many vendor-specific services to an application which runs on any Linux machine in the world. Achieving complete vendor neutrality is almost never the goal. Instead, most companies interested in this topic are looking to reduce their dependencies where reasonable.</p>\n<p>To be more concrete, let's say you're on Amazon, and you're looking into what database options to use in your application. Your team comes up with three options:</p>\n<ol>\n<li>Build it using DynamoDB, an Amazon-specific proprietary offering</li>\n<li>Build it using PostgreSQL hosted on Amazon's RDS service</li>\n<li>Build it using PostgreSQL which your team manages themselves</li>\n</ol>\n<p>Option (1) provides no vendor neutrality. If you, for any reason, decide to leave Amazon, you'll need to rewrite large parts of your application to move from DynamoDB. 
This may be a significant undertaking, introducing a major barrier to exit from Amazon.</p>\n<p>Option (2), while still leveraging an Amazon service, does not fall into that same trap. Your application will speak to PostgreSQL, an open source database that can be hosted anywhere in the world. If you're dissatisfied with RDS, you can migrate to another offering fairly easily. PostgreSQL hosted offerings are available on other cloud providers. And by using RDS, you'll get some features more easily, such as backups and replication.</p>\n<p>Option (3) is the most vendor neutral. You'll be forced to implement all features of PostgreSQL you want yourself. Maybe this will entail creating a Docker image with a fully configured PostgreSQL instance. Moving this to Azure or on-prem is even easier than option (2). But we may be at the point of diminishing returns, as we'll discuss below.</p>\n<p>To summarize: vendor neutrality is a spectrum measuring how tied you are to a specific vendor, and how difficult it would be to move to a different one.</p>\n<h2 id=\"advantages-of-vendor-neutrality\">Advantages of vendor neutrality</h2>\n<p>The current situation with Parler is an extreme example of the advantages of vendor neutrality. I would imagine most companies doing business with Amazon don't have a reasonable expectation that Amazon would decide to remove them from their platform. Again, this is a risk assessment scenario, and you need to analyze the risk for your own business. A company hosting uncensored political discourse is in a different risk category from someone running a personal blog.</p>\n<p>But this is far from the only advantage of vendor neutrality. Let's analyze some of the most common reasons I've seen for companies to remain vendor neutral.</p>\n<ul>\n<li><strong>Price sensitivity</strong> Cloud costs can be a major part of a company's budget, and costs can vary radically between providers. 
Various providers are also willing to give large incentives for companies to switch platforms. But if you've designed your application deeply around one provider, the long term cost savings may not exceed the cost of switching, leaving you at your current provider's mercy.</li>\n<li><strong>Regulatory obligations</strong> Some governments may have requirements that your software run on specific vendor hardware, or specific on-prem environments. Building up your software around one provider may prevent you from offering your services in those cases.</li>\n<li><strong>Client preference</strong> Similarly, if you provide managed software to companies, they may have a built-in cloud provider preference. If you've built your software on Google Cloud, but they have a corporate policy that all new projects live on Azure, you may lose the sale.</li>\n<li><strong>Geographic distribution</strong> For lowest latency, you'll want to put your services as close to the clients as possible. And it may turn out that the provider you've chosen simply doesn't have a presence there. Or a competitor may be closer. Or a service you want to peer with is on a different provider, and the data costs will be much lower if you switch providers.</li>\n</ul>\n<p>There are many more examples; this isn't an exhaustive list. What I want to motivate here is that vendor neutrality isn't just a fringe ideal for companies afraid of platform eviction. There are many reasons a normal company in its normal course of business may wish to be vendor neutral. You should analyze these cases, as well as others that may apply to your company, and assess the value of neutrality.</p>\n<h2 id=\"costs-of-vendor-neutrality\">Costs of vendor neutrality</h2>\n<p>Vendor neutrality does not come for free. A primary value proposition of most cloud providers is quick time to market. By leveraging existing services, your team can offload creation and maintenance of complex systems. 
Eschewing such services and building from scratch will impact your time to market, and potentially have other impacts (like increased bug rate, reduced reliability, etc.).</p>\n<p>I often see engineers decrying the evils of vendor lock-in without taking these costs into account. As a business, you'll need to find a way to adequately and accurately measure these costs as you make decisions, instead of turning it into a quasi-religious crusade against all forms of lock-in.</p>\n<p>With these trade-offs in mind, I'll finish off this post by explaining some of the most bang-for-the-buck moves you can make, which:</p>\n<ul>\n<li>Move you much farther along the vendor neutral spectrum</li>\n<li>Do not cost significant engineering work, if undertaken early on and designed correctly</li>\n<li>Provide additional benefits whenever possible</li>\n</ul>\n<h2 id=\"leverage-open-source-tools\">Leverage open source tools</h2>\n<p>The hardest lock-in to overcome is dedication to a proprietary tool. Without naming names, some large 6-letter database companies have built a reputation for leveraging lock-in with major increases in licensing fees. Once you're tied into that model, it's difficult to disengage.</p>\n<p>Open source tools provide a major protection against this. Assuming the licenses are correct—and you should be sure to check that—no one can ever take your open source tools away from you. Sure, a provider may decide to stop maintaining the software. Or perhaps future releases may be closed source instead. Or perhaps they won't address your bug reports unless you pay for a support contract. But ultimately, you retain lots of freedom to take the software, modify it as necessary, and deploy it everywhere.</p>\n<p>There has long been a debate over the features and maturity of proprietary versus open source tooling. As always, we cannot make our decisions in a vacuum, and the flexibility of open source is not the be-all and end-all for a business. 
However, in the past decade in particular, open source has come to dominate large parts of the deployment space.</p>\n<p>To pick on the example above: while DynamoDB is a powerful and flexible database option on AWS, it's far from unique. Cassandra, Redis, PostgreSQL, and dozens of other open source databases are readily available, with companies offering support, commercial hosting, and paid consulting services.</p>\n<p>We've seen a major shift occur as well in the software development language space. Many of the biggest tech companies in the world not only <em>use</em> open source languages, but provide their own complete language ecosystems, free of charge. Google's Go, Microsoft's .NET Core, Mozilla's <a href=\"https://tech.fpcomplete.com/rust/\">Rust</a>, and Apple's Swift are some prime examples.</p>\n<p>Far from being the scrappy underdog, open source is now the de facto standard, and proprietary options are viewed as niche. You're no longer trading quality for flexibility. You can often have your cake and eat it too.</p>\n<h3 id=\"kubernetes\">Kubernetes</h3>\n<p>I decided to give one open source player its own subsection. Kubernetes is a container orchestration tool, managing various cloud resources for hosting containerized applications on both Linux and Windows. The first notable thing in this context is that Kubernetes has effectively supplanted other proprietary and cloud-specific offerings. Those offerings still exist, but from a market share standpoint, Kubernetes is clearly in a dominant position.</p>\n<p>The second thing to note is that Kubernetes is a tool supported by many of the largest cloud providers. Google created Kubernetes, Microsoft provides significant support, and all three top cloud providers (Google, Azure, and AWS) offer native Kubernetes services.</p>\n<p>The final thing to note is that Kubernetes really goes beyond a single service. In many ways, it functions as a cloud abstraction layer. 
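To make that concrete: rather than calling a provider-specific load balancer API, you describe what you need as a standard Kubernetes resource, and the same manifest runs unchanged on any conformant cluster. Here's a minimal sketch (the app name and port are hypothetical):</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\"># A vendor-neutral way to expose an app: no cloud-specific API calls,\n# just a standard Kubernetes Service.\napiVersion: v1\nkind: Service\nmetadata:\n  name: my-app\nspec:\n  type: ClusterIP\n  ports:\n  - port: 80\n    targetPort: 8080\n  selector:\n    app: my-app\n</code></pre>\n<p>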
When you use Kubernetes, you often write your applications to target Kubernetes <em>instead of</em> targeting the underlying vendor. Instead of using a cloud load balancer, you'll use an ingress and service in Kubernetes. This drastically reduces the cost of remaining vendor neutral.</p>\n<p>As a plug, in <a href=\"https://tech.fpcomplete.com/products/kube360/\">our own Kubernetes offering</a>, we've focused on combining commonly used open source components to provide a batteries-included experience with minimized vendor lock-in. We've already used it internally and for customers to easily migrate services between different cloud providers, and from the cloud to on-prem.</p>\n<div class=\"text-center\"><a href=\"/products/kube360\" class=\"button-coral\">Learn more about Kube360</a></div>\n<h2 id=\"high-value-cloud-services\">High value cloud services</h2>\n<p>Some cloud services provide an interesting combination of delivering high value with minimal lock-in costs. The greatest example of that is blob storage services, such as S3. The durability and availability guarantees cloud providers offer around your data are far greater than what most teams would be able to provide on their own. The cost of usage is significantly lower than rolling your own solution using block storage in the cloud. And finally: the lock-in risks tend to be small. There are tools available to abstract the different vendor APIs for blob storage (and we include such a tool in Kube360). And even without such tools, the impact on a codebase from blob storage selection is generally minimal.</p>\n<p>Another example is services which host open source offerings. The RDS example above fits in nicely here. 
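Part of what makes these migrations cheap: a hosted open source database speaks the same wire protocol as a self-hosted one, so from the application's perspective the difference is usually just a connection string. For example (both values hypothetical):</p>\n<pre><code># Hosted on RDS\nDATABASE_URL=postgres:&#x2F;&#x2F;app:secret@mydb.abc123.us-east-1.rds.amazonaws.com:5432&#x2F;app\n# Self-hosted on a VM or on-prem\nDATABASE_URL=postgres:&#x2F;&#x2F;app:secret@10.0.3.7:5432&#x2F;app\n</code></pre>\n<p>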
We generally recommend using hosted database offerings from cloud providers, since the cost is close to what you would pay to set it up yourself, you get lots of features quickly, and migration to a different option is trivial.</p>\n<p>And one final example is services like load balancers and auto-scaling groups. These are services that are essentially impossible to implement fully yourself, would be far more expensive to approximate using cloud virtual machines, and introduce virtually no lock-in. If you're moving from AWS to Azure, you'll need to change your infrastructure code to use the Azure equivalents of those services. But generally, these can be seen as the same level of commodity as the virtual machines themselves. You're paying for a fairly standard service, and rarely locking yourself into a vendor-specific feature.</p>\n<h2 id=\"multicloud-vs-hybrid-cloud\">Multicloud vs hybrid cloud</h2>\n<p>In previous discussions, the topic of vendor neutrality typically introduces the two confusing terms &quot;multicloud&quot; and &quot;hybrid cloud.&quot; There is some disagreement in the tech space around what the former term means, but I'm going to define these two terms as:</p>\n<ul>\n<li><strong>Multicloud</strong> means that your service is capable of running on multiple different cloud providers and/or on-prem environments, but each environment will be autonomous from the others</li>\n<li><strong>Hybrid cloud</strong> means that you can simultaneously run your service on multiple cloud providers, and they will replicate data, load balance, and perform other intelligent operations between the different providers</li>\n</ul>\n<p>Multicloud is a much easier thing to attain than hybrid cloud. Hybrid cloud introduces many new kinds of distributed systems failure modes, as well as risks around major data transfer costs and latencies. 
There are certainly some potential advantages to hybrid cloud setups, but in our experience the much lower-hanging fruit is in targeting multicloud.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>Summing up, there are many reasons a company may decide to keep their applications vendor neutral. Each of these reasons can be seen as a risk mitigation strategy, and a proper risk assessment and cost analysis should be performed. While current events have people's attention on vendor eviction, plenty of other reasons exist.</p>\n<p>On the other hand, vendor neutrality is not free, and should not be pursued to the detriment of the business. Finding high value, low cost moves to increase your neutrality is your best bet. Such moves may include:</p>\n<ul>\n<li>Opting for open source where possible</li>\n<li>Using a platform like <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kubernetes</a> that encourages more neutrality</li>\n<li>Opting for cloud services that are more easily swappable, such as load balancers</li>\n</ul>\n<p>If you would like more information or help with a vendor neutrality risk assessment, we would love to chat.</p>\n<div class=\"text-center\"><a href=\"/contact-us/\" class=\"button-coral\">Contact us for more information</a></div>\n<p>If you liked this post, you may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/why-we-built-kube360/\">Why we built Kube360</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/understanding-cloud-deployments/\">Understanding Cloud Software Deployments</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/",
        "slug": "cloud-vendor-neutrality",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cloud Vendor Neutrality",
        "description": "Amazon recently removed Parler from its platform, causing some people to ask if and how they should protect themselves from cloud providers. In this post, we'll explore costs and benefits of keeping yourself cloud vendor neutral, and how to approach it expediently.",
        "updated": null,
        "date": "2021-01-13",
        "year": 2021,
        "month": 1,
        "day": 13,
        "taxonomies": {
          "tags": [
            "devops",
            "insights"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/devops.png",
          "image": "images/blog/cloud-vendor-neutrality.png"
        },
        "path": "/blog/cloud-vendor-neutrality/",
        "components": [
          "blog",
          "cloud-vendor-neutrality"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "what-is-vendor-neutrality",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#what-is-vendor-neutrality",
            "title": "What is vendor neutrality?",
            "children": []
          },
          {
            "level": 2,
            "id": "advantages-of-vendor-neutrality",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#advantages-of-vendor-neutrality",
            "title": "Advantages of vendor neutrality",
            "children": []
          },
          {
            "level": 2,
            "id": "costs-of-vendor-neutrality",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#costs-of-vendor-neutrality",
            "title": "Costs of vendor neutrality",
            "children": []
          },
          {
            "level": 2,
            "id": "leverage-open-source-tools",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#leverage-open-source-tools",
            "title": "Leverage open source tools",
            "children": [
              {
                "level": 3,
                "id": "kubernetes",
                "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#kubernetes",
                "title": "Kubernetes",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "high-value-cloud-services",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#high-value-cloud-services",
            "title": "High value cloud services",
            "children": []
          },
          {
            "level": 2,
            "id": "multicloud-vs-hybrid-cloud",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#multicloud-vs-hybrid-cloud",
            "title": "Multicloud vs hybrid cloud",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2235,
        "reading_time": 12,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
            "title": "An Istio/mutual TLS debugging story"
          }
        ]
      },
      {
        "relative_path": "blog/rust-kubernetes-windows.md",
        "colocated_path": null,
        "content": "<p>A few years back, we <a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">published a blog post</a> about deploying a Rust application using Docker and Kubernetes. That application was a Telegram bot. We're going to do something similar today, but with a few meaningful differences:</p>\n<ol>\n<li>We're going to be deploying a web app. Don't get too excited: this will be an incredibly simple piece of code, basically copy-pasted from the <a href=\"https://actix.rs/docs/application/\">actix-web documentation</a>.</li>\n<li>We're going to build the deployment image on Github Actions</li>\n<li>And we're going to be building this using Windows Containers instead of Linux. (Sorry for burying the lede.)</li>\n</ol>\n<p>We put this together for testing purposes when rolling out Windows support in our <a href=\"https://tech.fpcomplete.com/products/kube360/\">managed Kubernetes product, Kube360®</a> here at FP Complete. I wanted to put this post together to demonstrate a few things:</p>\n<ul>\n<li>How pleasant Windows Containers workflows are compared to the more familiar Linux approaches</li>\n<li>How seamlessly Github Actions builds Windows Containers</li>\n<li>How, with the correct configuration, Kubernetes is a great platform for deploying Windows Containers</li>\n<li>And, of course, how wonderful the Rust toolchain is on Windows</li>\n</ul>\n<p>Alright, let's dive in! And if any of those topics sound interesting, and you'd like to learn more about FP Complete offerings, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us for more information on our offerings</a>.</p>\n<h2 id=\"prereqs\">Prereqs</h2>\n<p>A quick sidenote before we dive in: Windows Containers only run on Windows machines, and not even all Windows machines support them. You'll need Windows 10 Pro or a similar license, and to have Docker installed on that machine. 
You'll also need to ensure that Docker is set to use Windows instead of Linux containers.</p>\n<p>If you have all of that set up, you'll be able to follow along with most of the steps below. If not, you won't be able to build or run the Docker images on your local machine.</p>\n<p>Also, for running the application on Kubernetes, you'll need a Kubernetes cluster with Windows nodes. I'll be using the FP Complete Kube360 test cluster on Azure in this blog post, though we've previously tested it on both AWS and on-prem clusters too.</p>\n<h2 id=\"the-rust-application\">The Rust application</h2>\n<p>The source code for this application will be, by far, the most uninteresting part of this post. As mentioned, it's basically a copy-paste of an example straight from the actix-web documentation featuring mutable state. It turns out this was a great way to test out basic Kubernetes functionality like health checks, replicas, and autohealing.</p>\n<p>We're going to build this using the latest stable Rust version as of writing this post, so create a <code>rust-toolchain</code> file with the contents:</p>\n<pre><code>1.47.0\n</code></pre>\n<p>Our <code>Cargo.toml</code> file will be pretty vanilla, just adding in the dependency on <code>actix-web</code>:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[package]\nname = &quot;windows-docker-web&quot;\nversion = &quot;0.1.0&quot;\nauthors = [&quot;Michael Snoyman &lt;[email protected]&gt;&quot;]\nedition = &quot;2018&quot;\n\n[dependencies]\nactix-web = &quot;3.1&quot;\n</code></pre>\n<p>If you want to see the <code>Cargo.lock</code> file I compiled with, it's <a href=\"https://github.com/fpco/windows-docker-web/blob/f8a3192e63f2e699cc67716488a633f5e0893446/Cargo.lock\">available in the source repo</a>.</p>\n<p>And finally, the actual code in <code>src/main.rs</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use 
actix_web::{get, web, App, HttpServer};\nuse std::sync::Mutex;\n\nstruct AppState {\n    counter: Mutex&lt;i32&gt;,\n}\n\n#[get(&quot;&#x2F;&quot;)]\nasync fn index(data: web::Data&lt;AppState&gt;) -&gt; String {\n    let mut counter = data.counter.lock().unwrap();\n    *counter += 1;\n    format!(&quot;Counter is at {}&quot;, counter)\n}\n\n#[actix_web::main]\nasync fn main() -&gt; std::io::Result&lt;()&gt; {\n    let host = &quot;0.0.0.0:8080&quot;;\n    println!(&quot;Trying to listen on {}&quot;, host);\n    let app_state = web::Data::new(AppState {\n        counter: Mutex::new(0),\n    });\n    HttpServer::new(move || App::new().app_data(app_state.clone()).service(index))\n        .bind(host)?\n        .run()\n        .await\n}\n</code></pre>\n<p>This code creates an application state (a mutex of an <code>i32</code>), defines a single <code>GET</code> handler that increments that variable and responds with the current value, and then hosts this on <code>0.0.0.0:8080</code>. Not too shabby.</p>\n<p>If you're following along with the code, now would be a good time to <code>cargo run</code> and make sure you're able to load up the site on your <code>localhost:8080</code>.</p>\n<h2 id=\"dockerfile\">Dockerfile</h2>\n<p>If this is your first foray into Windows Containers, you may be surprised to hear me say &quot;Dockerfile.&quot; Windows Container images can be built with the same kind of Dockerfiles you're used to from the Linux world. This even supports more advanced features, such as multistage Dockerfiles, which we're going to take advantage of here.</p>\n<p>There are a number of different base images provided by Microsoft for Windows Containers. We're going to be using Windows Server Core. It provides enough capabilities for installing Rust dependencies (which we'll see shortly), without including too many unneeded extras. 
Nanoserver is a much more lightweight image, but it doesn't play nicely with the Microsoft Visual C++ runtime we're using for the <code>-msvc</code> Rust target.</p>\n<p><strong>NOTE</strong> I've elected to use the <code>-msvc</code> target here instead of <code>-gnu</code> for two reasons. Firstly, it's closer to the actual use cases we need to support in Kube360, and therefore made a better test case. Also, as the default target for Rust on Windows, it seemed appropriate. It should be possible to set up a more minimal nanoserver-based image using the <code>-gnu</code> target, if someone's interested in a &quot;fun&quot; side project.</p>\n<p>The <a href=\"https://github.com/fpco/windows-docker-web/blob/f8a3192e63f2e699cc67716488a633f5e0893446/Dockerfile\">complete Dockerfile is available on Github</a>, but let's step through it more carefully. As mentioned, we'll be performing a multistage build. We'll start with the build image, which will install the Rust build toolchain and compile our application. 
We start off by using the Windows Server Core base image and switching the shell back to the standard <code>cmd.exe</code>:</p>\n<pre><code>FROM mcr.microsoft.com&#x2F;windows&#x2F;servercore:1809 as build\n\n# Restore the default Windows shell for correct batch processing.\nSHELL [&quot;cmd&quot;, &quot;&#x2F;S&quot;, &quot;&#x2F;C&quot;]\n</code></pre>\n<p>Next we're going to install the Visual Studio buildtools necessary for building Rust code:</p>\n<pre><code># Download the Build Tools bootstrapper.\nADD https:&#x2F;&#x2F;aka.ms&#x2F;vs&#x2F;16&#x2F;release&#x2F;vs_buildtools.exe &#x2F;vs_buildtools.exe\n\n# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload,\n# excluding workloads and components with known issues.\nRUN vs_buildtools.exe --quiet --wait --norestart --nocache \\\n    --installPath C:\\BuildTools \\\n    --add Microsoft.Component.MSBuild \\\n    --add Microsoft.VisualStudio.Component.Windows10SDK.18362 \\\n    --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64\t\\\n || IF &quot;%ERRORLEVEL%&quot;==&quot;3010&quot; EXIT 0\n</code></pre>\n<p>And then we'll modify the entrypoint to include the environment modifications necessary to use those buildtools:</p>\n<pre><code># Define the entry point for the docker container.\n# This entry point starts the developer command prompt and launches the PowerShell shell.\nENTRYPOINT [&quot;C:\\\\BuildTools\\\\Common7\\\\Tools\\\\VsDevCmd.bat&quot;, &quot;&amp;&amp;&quot;, &quot;powershell.exe&quot;, &quot;-NoLogo&quot;, &quot;-ExecutionPolicy&quot;, &quot;Bypass&quot;]\n</code></pre>\n<p>Next up is installing <code>rustup</code>, which is fortunately pretty easy:</p>\n<pre><code>RUN curl -fSLo rustup-init.exe https:&#x2F;&#x2F;win.rustup.rs&#x2F;x86_64\nRUN start &#x2F;w rustup-init.exe -y -v &amp;&amp; echo &quot;Error level is %ERRORLEVEL%&quot;\nRUN del rustup-init.exe\n\nRUN setx &#x2F;M PATH 
&quot;C:\\Users\\ContainerAdministrator\\.cargo\\bin;%PATH%&quot;\n</code></pre>\n<p>Then we copy over the relevant source files and kick off a build, storing the generated executable in <code>c:\\output</code>:</p>\n<pre><code>COPY Cargo.toml &#x2F;project&#x2F;Cargo.toml\nCOPY Cargo.lock &#x2F;project&#x2F;Cargo.lock\nCOPY rust-toolchain &#x2F;project&#x2F;rust-toolchain\nCOPY src&#x2F; &#x2F;project&#x2F;src\nRUN cargo install --path &#x2F;project --root &#x2F;output\n</code></pre>\n<p>And with that, we're done with our build! Time to jump over to our runtime image. We don't need the Visual Studio buildtools in this image, but we do need the Visual C++ runtime:</p>\n<pre><code>FROM mcr.microsoft.com&#x2F;windows&#x2F;servercore:1809\n\nADD https:&#x2F;&#x2F;download.microsoft.com&#x2F;download&#x2F;6&#x2F;A&#x2F;A&#x2F;6AA4EDFF-645B-48C5-81CC-ED5963AEAD48&#x2F;vc_redist.x64.exe &#x2F;vc_redist.x64.exe\nRUN c:\\vc_redist.x64.exe &#x2F;install &#x2F;quiet &#x2F;norestart\n</code></pre>\n<p>With that in place, we can copy over our executable from the build image and set it as the default <code>CMD</code> in the image:</p>\n<pre><code>COPY --from=build c:&#x2F;output&#x2F;bin&#x2F;windows-docker-web.exe &#x2F;\n\nCMD [&quot;&#x2F;windows-docker-web.exe&quot;]\n</code></pre>\n<p>And just like that, we've got a real life Windows Container. If you'd like to, you can test it out yourself by running:</p>\n<pre><code>&gt; docker run --rm -p 8080:8080 fpco&#x2F;windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446\n</code></pre>\n<p>If you connect to port 8080, you should see our painfully simple app. Hurrah!</p>\n<h2 id=\"building-with-github-actions\">Building with Github Actions</h2>\n<p>One of the nice things about using a multistage Dockerfile for performing the build is that our CI scripts become very simple. 
Instead of needing to set up an environment with correct build tools or any other configuration, our script:</p>\n<ul>\n<li>Logs into the Docker Hub registry</li>\n<li>Performs a <code>docker build</code></li>\n<li>Pushes to the Docker Hub registry</li>\n</ul>\n<p>The downside is that there is no build caching at play with this setup. There are multiple methods to mitigate this problem, such as creating helper build images that pre-bake the dependencies. Or you can perform the builds on the CI host and only use the Dockerfile for generating the runtime image. Those are interesting tweaks to try out another time.</p>\n<p>Taking the simple multistage approach, though, we have the following in our <code>.github/workflows/container.yml</code> file:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">name: Build a Windows container\n\non:\n    push:\n        branches: [master]\n\njobs:\n    build:\n        runs-on: windows-latest\n\n        steps:\n        - uses: actions&#x2F;checkout@v1\n\n        - name: Build and push\n          shell: bash\n          run: |\n            echo &quot;${{ secrets.DOCKER_HUB_TOKEN }}&quot; | docker login --username fpcojenkins --password-stdin\n            IMAGE_ID=fpco&#x2F;windows-docker-web:$GITHUB_SHA\n            docker build -t $IMAGE_ID .\n            docker push $IMAGE_ID\n</code></pre>\n<p>I like following the convention of tagging my images with the Git SHA of the commit. Other people prefer different tagging schemes; it's all up to you.</p>\n<h2 id=\"manifest-files\">Manifest files</h2>\n<p>Now that we have a working Windows Container image, the next step is to deploy it to our Kube360 cluster. Generally, we use ArgoCD and Kustomize for managing app deployments within Kube360, which lets us keep a very nice Gitops workflow. Instead, for this blog post, I'll show you the raw manifest files. 
It will also let us play with the <code>k3</code> command line tool, which happens to be written in Rust.</p>\n<p>First we'll have a Deployment manifest to manage the pods running the application itself. Since this is a simple Rust application, we can put very low resource limits on it. We're going to disable the Istio sidecar, since it's not compatible with Windows. We're going to ask Kubernetes to use the Windows machines to host these pods. And we're going to set up some basic health checks. All told, this is what our manifest file looks like:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: apps&#x2F;v1\nkind: Deployment\nmetadata:\n  name: windows-docker-web\n  labels:\n    app.kubernetes.io&#x2F;component: webserver\nspec:\n  replicas: 1\n  minReadySeconds: 5\n  selector:\n    matchLabels:\n      app.kubernetes.io&#x2F;component: webserver\n  template:\n    metadata:\n      labels:\n        app.kubernetes.io&#x2F;component: webserver\n      annotations:\n        sidecar.istio.io&#x2F;inject: &quot;false&quot;\n    spec:\n      runtimeClassName: windows-2019\n      containers:\n        - name: windows-docker-web\n          image: fpco&#x2F;windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446\n          ports:\n            - name: http\n              containerPort: 8080\n          readinessProbe:\n            httpGet:\n              path: &#x2F;\n              port: 8080\n            initialDelaySeconds: 10\n            periodSeconds: 10\n          livenessProbe:\n            httpGet:\n              path: &#x2F;\n              port: 8080\n            initialDelaySeconds: 10\n            periodSeconds: 10\n          resources:\n            requests:\n              memory: 128Mi\n              cpu: 100m\n            limits:\n              memory: 128Mi\n              cpu: 100m\n</code></pre>\n<p>Awesome, that's the most complicated by far of the three manifests. 
Next we'll put a fairly stock-standard Service in front of that deployment:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: v1\nkind: Service\nmetadata:\n  name: windows-docker-web\n  labels:\n    app.kubernetes.io&#x2F;component: webserver\nspec:\n  ports:\n  - name: http\n    port: 80\n    targetPort: http\n  type: ClusterIP\n  selector:\n    app.kubernetes.io&#x2F;component: webserver\n</code></pre>\n<p>This exposes a service on port 80, and targets the <code>http</code> port (port 8080) inside the deployment. Finally, we have our Ingress. Kube360 uses external DNS to automatically set DNS records, and cert-manager to automatically grab TLS certificates. Our manifest looks like this:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.k8s.io&#x2F;v1beta1\nkind: Ingress\nmetadata:\n  annotations:\n    cert-manager.io&#x2F;cluster-issuer: letsencrypt-ingress-prod\n    kubernetes.io&#x2F;ingress.class: nginx\n    nginx.ingress.kubernetes.io&#x2F;force-ssl-redirect: &quot;true&quot;\n  name: windows-docker-web\nspec:\n  rules:\n  - host: windows-docker-web.az.fpcomplete.com\n    http:\n      paths:\n      - backend:\n          serviceName: windows-docker-web\n          servicePort: 80\n  tls:\n  - hosts:\n    - windows-docker-web.az.fpcomplete.com\n    secretName: windows-docker-web-tls\n</code></pre>\n<p>Now that we have our application inside a Docker image, and we have our manifest files to instruct Kubernetes on how to run it, we just need to deploy these manifests and we'll be done.</p>\n<h2 id=\"launch\">Launch</h2>\n<p>With our manifests in place, we can finally deploy them. You can use <code>kubectl</code> directly to do this. 
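For instance, assuming your kubeconfig already points at a suitable cluster with Windows nodes, applying the three manifests is just:</p>\n<pre><code>kubectl apply -f deployment.yaml\nkubectl apply -f service.yaml\nkubectl apply -f ingress.yaml\n</code></pre>\n<p>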
Since I'm deploying to Kube360, I'm going to use the <code>k3</code> command line tool, which automates the process of logging in, getting temporary Kubernetes credentials, and providing those to the <code>kubectl</code> command via an environment variable. These steps could be run on Windows, Mac, or Linux. But since we've done the rest of this post on Windows, I'll use my Windows machine for this too.</p>\n<pre><code>&gt; k3 init test.az.fpcomplete.com\n&gt; k3 kubectl apply -f deployment.yaml\nWeb browser opened to https:&#x2F;&#x2F;test.az.fpcomplete.com&#x2F;k3-confirm?nonce=c1f764d8852f4ff2a2738fb0a2078e68\nPlease follow the login steps there (if needed).\nThen return to this terminal.\nPolling the server. Please standby.\nChecking ...\nThanks, got the token response. Verifying token is valid\nRetrieving a kubeconfig for use with k3 kubectl\nKubeconfig retrieved. You are now ready to run kubectl commands with `k3 kubectl ...`\ndeployment.apps&#x2F;windows-docker-web created\n&gt; k3 kubectl apply -f ingress.yaml\ningress.networking.k8s.io&#x2F;windows-docker-web created\n&gt; k3 kubectl apply -f service.yaml\nservice&#x2F;windows-docker-web created\n</code></pre>\n<p>I told <code>k3</code> to use the <code>test.az.fpcomplete.com</code> cluster. On the first <code>k3 kubectl</code> call, it detected that I did not have valid credentials for the cluster, and opened up my browser to a page that allowed me to log in. One of the design goals in Kube360 is to strongly leverage existing identity providers, such as Azure AD, Google Directory, Okta, Microsoft 365, and others. This is not only more secure than copy-pasting <code>kubeconfig</code> files with permanent credentials around, but more user friendly. 
As you can see, the process above was pretty automated.</p>\n<p>It's easy enough to check that the pods are actually running and healthy:</p>\n<pre><code>&gt; k3 kubectl get pods\nNAME                                  READY   STATUS    RESTARTS   AGE\nwindows-docker-web-5687668cdf-8tmn2   1&#x2F;1     Running   0          3m2s\n</code></pre>\n<p>Initially, while cert-manager was still obtaining a TLS certificate, the ingress resources looked like this:</p>\n<pre><code>&gt; k3 kubectl get ingress\nNAME                        CLASS    HOSTS                                  ADDRESS   PORTS     AGE\ncm-acme-http-solver-zlq6j   &lt;none&gt;   windows-docker-web.az.fpcomplete.com             80        0s\nwindows-docker-web          &lt;none&gt;   windows-docker-web.az.fpcomplete.com             80, 443   3s\n</code></pre>\n<p>And after cert-manager obtained the TLS certificate, it switched over to:</p>\n<pre><code>&gt; k3 kubectl get ingress\nNAME                 CLASS    HOSTS                                  ADDRESS          PORTS     AGE\nwindows-docker-web   &lt;none&gt;   windows-docker-web.az.fpcomplete.com   52.151.225.139   80, 443   90s\n</code></pre>\n<p>And finally, our site is live! Hurrah, a Rust web application compiled for Windows and running on Kubernetes inside Azure.</p>\n<p><strong>NOTE</strong> Depending on when you read this post, the web app may or may not still be live, so don't be surprised if you don't get a response when you try to connect to that host.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>This post was a bit light on actual Rust code, but heavy on Windows scripting. As I think many Rustaceans already know, the dev experience for Rust on Windows is top notch. What may not have been obvious is how pleasant the Docker experience is on Windows. There are definitely some pain points, like the large images involved and needing to install the VC runtime. But overall, with a bit of cargo-culting, it's not too bad. 
And finally, having a cluster with Windows support ready via Kube360 makes deployment a breeze.</p>\n<p>If anyone has follow up questions about anything here, please <a href=\"https://twitter.com/snoyberg\">reach out to me on Twitter</a> or <a href=\"https://tech.fpcomplete.com/contact-us/\">contact our team at FP Complete</a>. In addition to our <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360 product offering</a>, FP Complete provides many related services, including:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/platformengineering/\">DevOps consulting</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust consulting and training</a></li>\n<li><a href=\"https://tech.fpcomplete.com/services/\">General training and consulting services</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/\">Haskell consulting and training</a></li>\n</ul>\n<p>If you liked this post, please check out some related posts:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">The Rust Crash Course eBook</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/understanding-cloud-auth/\">Understanding cloud auth</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
        "slug": "rust-kubernetes-windows",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Deploying Rust with Windows Containers on Kubernetes",
        "description": "An example of deploying Rust inside a Windows Containers as a web service hosted on Kubernetes",
        "updated": null,
        "date": "2020-10-26",
        "year": 2020,
        "month": 10,
        "day": 26,
        "taxonomies": {
          "tags": [
            "rust",
            "devops",
            "kubernetes"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "image": "images/blog/rust-windows-kube360.png"
        },
        "path": "/blog/rust-kubernetes-windows/",
        "components": [
          "blog",
          "rust-kubernetes-windows"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "prereqs",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#prereqs",
            "title": "Prereqs",
            "children": []
          },
          {
            "level": 2,
            "id": "the-rust-application",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#the-rust-application",
            "title": "The Rust application",
            "children": []
          },
          {
            "level": 2,
            "id": "dockerfile",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#dockerfile",
            "title": "Dockerfile",
            "children": []
          },
          {
            "level": 2,
            "id": "building-with-github-actions",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#building-with-github-actions",
            "title": "Building with Github Actions",
            "children": []
          },
          {
            "level": 2,
            "id": "manifest-files",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#manifest-files",
            "title": "Manifest files",
            "children": []
          },
          {
            "level": 2,
            "id": "launch",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#launch",
            "title": "Launch",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2573,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
            "title": "An Istio/mutual TLS debugging story"
          }
        ]
      },
      {
        "relative_path": "blog/paradigm-shift-key-to-competing.md",
        "colocated_path": null,
        "content": "<p>It used to be that being technically mature was thought to be a good thing; now, that view is not so cut and dried.  As you look at topics like containerization, cloud migration, and DevOps, it is easy to see why young companies get to claim the term “Cloud Native.”  At the same time, those who have been in business for decades are frequently relegated to the legions of those needing ‘transforming.’  While this is, of course, an overgeneralization, it feels right more often than not.  So, what are the ‘mature’ to do? </p>\n<p>Talking to several older small and medium sized businesses, a few strategic changes help propel those who are thinking about tech ‘transformation’ into becoming better, faster, more cost-effective, and more secure.  These strategies include focusing on containerizing business logic, cloud-enabling their enterprise, and taking a fresh look at open source offerings for their infrastructure.  If we look at these topics from an executive seat rather than an engineering one, a path and a plan emerges. </p>\n<a href=\"/devops/why-what-how/\">\n<p style=\"text-align:center;font-size:2em;border-width: 3px 0;border-color:#ff8d6e;border-style: dashed;margin:1em 0;padding:0.25em 0;font-weight: bold\">\nCheck Out The Why, What, and How of DevSecOps\n</p>\n</a>\n<p>Containerization is not a new topic; it has just evolved.   We have all gone from monolithic solutions to distributed computing.  From there, we bought small Linux servers, and they felt like containers; then, virtualization came to market, and the VM became the new container.  Now, we have Docker and Kubernetes.  Docker containers represent a considerable paradigm shift in that they do not require a lot of hardware or yet another OS license…., and when managed by Kubernetes, they create an entire ecosystem with little overhead.  Kubernetes take Docker containers and handle horizontal scaling, fault tolerance, automated monitoring, etc. within a DevOps toolset and frame.   
What makes this setup even more impressive is that it is all open source, yet supported by the most prominent tech infrastructure firms. </p>\n<p>Once we start embracing modern container architectures, the conversation gets fascinating. All cloud and virtualization providers are now battling each other to get customers to deploy these standardized workloads onto their proprietary platforms.  While there are always a few complications, Docker and Kubernetes run on AWS, Azure, VMware, GCP, etc., with little (or no) alteration if you follow the open source path. </p>\n<p>So imagine: once we were trying to figure out how to build in fault tolerance, scalability, continuous development/deployment, and automated testing; now all we need to do is follow a DevOps approach using open source frameworks like Docker and Kubernetes, and voila, you are there (well, it isn’t that easy, but a darn sight easier than it used to be).  Oh, and by the way, all of this is far easier to deploy in the cloud than on-premise, but that is a topic for another day. </p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/paradigm-shift-key-to-competing/",
        "slug": "paradigm-shift-key-to-competing",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "A Paradigm Shift is Key to Competing",
        "description": "",
        "updated": null,
        "date": "2020-10-16",
        "year": 2020,
        "month": 10,
        "day": 16,
        "taxonomies": {
          "categories": [
            "devops",
            "insights"
          ],
          "tags": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wes Crook",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/paradigm-shift-key-to-competing/",
        "components": [
          "blog",
          "paradigm-shift-key-to-competing"
        ],
        "summary": null,
        "toc": [],
        "word_count": 485,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/devops-in-the-enterprise.md",
        "colocated_path": null,
        "content": "<p>Is it Enterprise DevOps or DevOps in the enterprise?  I guess it all depends on where you sit.  DevOps has been a significant change to how many modern technology organizations approach systems development and support.  While many have found it to be a major productivity boost, it represents a threat in &quot;BTTWWHADI&quot; evangelists in some organizations.  Let's start with two definitions: </p>\n<ul>\n<li>\n<p>DevOps: DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from Agile methodology. Credit: https://en.wikipedia.org/wiki/DevOps </p>\n</li>\n<li>\n<p>BTTWWHADI : This is shorthand for &quot;But That's The Way We Have Always Done It.&quot;  Credit: Unknown </p>\n</li>\n</ul>\n<h2 id=\"where-we-come-from\">Where we come from...</h2>\n<p>If we look at some successful Enterprise technology areas, they have had long term success by sticking with what works.  Cleanly partitioned technical responsibilities (analysts, developers, DBAs, network admins, sysadmins, etc.), a waterfall approach to development, a &quot;stay in your lane&quot; accountability matrix (e.g., you write the app, I'll get it platformed), rack 'em and stack 'em approach to hardware, etc.</p>\n<p>While no one can deny this type of discipline has served many well, Enterprise technology's current generation offers us a much more flexible approach.  Today, virtually all hardware is virtualized (on and off-premise), and cloud vendors offer things like platforms as a service, databases as a service, security as a service...etc.   
These innovations have allowed many companies to completely re-think how they want to spend their technology resources (budget, people, mindshare), with the most enlightened organizations quickly concluding that they should spend their human capital in spaces where they can create competitive advantages while purchasing those parts of their technology ecosystem that are more commoditized.</p>\n<p>An example of this would be a retail company thinking more about creating business intelligence than about setting up new hardware for a database server.  A database can be scaled in the cloud, leaving the retail enterprise more human capital to figure out how to drive revenue.  Those who are not embracing the change DevOps affords are most often using a BTTWWHADI argument. </p>\n<h2 id=\"not-everyone-is-ready-for-a-revolution\">Not everyone is ready for a revolution...</h2>\n<p>So, if DevOps is such a revolution, why do so many corporations have such an issue trying to get DevOps strategies to work for them? The answer lies in culture. For DevOps to be effective, an organization needs to be willing to take out a blank sheet of paper and draw a picture of what could be if they tore down yesterday's constraints and looked toward today's innovations. They need to match that picture up against their current staff and recognize that many jobs (and many skills) need to be re-learned or acquired.  No longer is so much specialization required in specific fixed assets (like data centers, computers, network devices, security devices, etc.).  In a modern DevOps world, much of the infrastructure is virtualized (giving rise to infrastructure as code). </p>\n<p>To some extent, this means that your infrastructure staff will start to look more and more like developers.  Instead of a team plugging servers, routers, and load balancers into a network backbone, they will be using scripting to configure equivalent services on virtualized hardware. 
On the development and operational side, CI/CD pipelines and process automation drive out many manual processes involved in yesterday's software development lifecycle. For development, the beginnings of this revolution date back to test-driven development. Today's modern pipelines go from development through testing, integration, and deployment. While everything is automatable, many have stopping points in their pipeline where human interaction is required to review test results or to confirm final deployments to production.  Whether you are in infrastructure or development, BTTWWHADI just won't do anymore.  To compete, everyone will need to skill up and focus on architecture, automation, XaaS, and scripting/coding to decrease time to market while improving quality and resilience. </p>\n<h2 id=\"so-what-s-the-big-deal\">So, what's the big deal…</h2>\n<p>DevOps can be a threat to those who aren't ready for it (the BTTWWHADI crowd).  If your job is configuring hardware or running manual software tests, you might see these functions being automated into 'coding' jobs.  This function change could pose a severe career problem for those team members who don't see this evolution coming and fail to get prepared through education and training.  Unprepared staff become resistant to change (understandably), yet those who are prepared end up in a better position (read: more career security, mobility, and better pay), as automation experts are now far more sought after than traditional hardware configuration engineers (as a gross generalization).  Please do not misunderstand; traditional system engineers are still valuable members of most enterprise teams, but as DevOps and virtualization take hold, those jobs will change.  Get prepared, train your staff, and address the culture change head-on. </p>\n<p>If you need help with your journey, <a href=\"https://tech.fpcomplete.com/contact-us/\">contact FP Complete</a>.  This is who we are and what we do. 
</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/",
        "slug": "devops-in-the-enterprise",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps in the Enterprise: What could be better? What could go wrong?",
        "description": "",
        "updated": null,
        "date": "2020-10-09",
        "year": 2020,
        "month": 10,
        "day": 9,
        "taxonomies": {
          "categories": [
            "devops",
            "insights"
          ],
          "tags": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wes Crook",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/devops-in-the-enterprise/",
        "components": [
          "blog",
          "devops-in-the-enterprise"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "where-we-come-from",
            "permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#where-we-come-from",
            "title": "Where we come from...",
            "children": []
          },
          {
            "level": 2,
            "id": "not-everyone-is-ready-for-a-revolution",
            "permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#not-everyone-is-ready-for-a-revolution",
            "title": "Not everyone is ready for a revolution...",
            "children": []
          },
          {
            "level": 2,
            "id": "so-what-s-the-big-deal",
            "permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#so-what-s-the-big-deal",
            "title": "So, what's the big deal…",
            "children": []
          }
        ],
        "word_count": 827,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cloud-for-non-natives.md",
        "colocated_path": null,
        "content": "<p>Does this mean if you weren't born in the cloud, you'll never be as good as those who are?    </p>\n<p>When thinking about building from scratch or modernizing an existing technology environment, we tend to see one of a few different things happening: </p>\n<ul>\n<li>Staff will read up, and you will try it on your own. </li>\n<li>Managers will hire someone who says they have done it before. </li>\n<li>Leaders will engage a large software vendor or consulting firm to help get them to the promised land. </li>\n</ul>\n<p>While all of these strategies can work, we often find one of the following happens: </p>\n<ul>\n<li>Trial and error result in very expensive under delivery. </li>\n<li>Existing teams become disaffected and resistive because they perceive being left behind. </li>\n<li>Something gets delivered, but costs go up, and reliability goes down. </li>\n<li>New hires come in, make the magic happen, and then move on without leaving enough knowhow to continue without them. </li>\n<li>Vendors use proprietary software, and a new age of vendor lock-in ensues. </li>\n</ul>\n<p>There is a better way of approaching modernizing a business-focused, legacy world.  Our core approach at FP complete is: </p>\n<ul>\n<li>Be vendor agnostic </li>\n<li>Build a road map based on business outcomes </li>\n<li>Deeply understand and implement DevOps concepts </li>\n<li>Be ruthlessly focused on architecture from the start </li>\n<li>Containerize everything* </li>\n<li>Virtualize everything*</li>\n</ul>\n<p>While this approach is straightforward, staying focused on outcomes is the key: </p>\n<ul>\n<li>The business logic is the key to build your ecosystem once and properly so you can focus on what matters. </li>\n<li>Integrate security by design as security is a non-non-negotiable. </li>\n<li>All alerts and logs centrally as managing and operating via complete transparency is key. 
</li>\n<li>Ensure containers are made to scale horizontally and be fault-tolerant from the start. </li>\n<li>Ensure you are on-prem and cloud-agnostic. </li>\n<li>Be open source, but get enterprise support. </li>\n</ul>\n<p>How do you get help without breaking the bank, compromising your values, or getting locked in? </p>\n<p>At FP Complete, we believe the way to get started is to: </p>\n<ul>\n<li>Build DevOps expertise, acquire DevOps tooling. </li>\n<li>Get help constructing your roadmap to ensure technical focus aligns with business results. </li>\n<li>Get help designing how your applications will get containerized to be cloud-ready. </li>\n<li>Acquire enterprise support for your newly open-sourced world. </li>\n</ul>\n<p>FP Complete has a unique track record in these activities.  We are not built on recurring revenue from long-term consulting.  We are built on helping our customers build better software, run better technology operations, and achieve better business outcomes.  We come from diverse backgrounds and have served a myriad of industries.  We often find that others have already solved many of our clients' problems, and our expertise lies in matching existing solutions to the places where they are needed most. </p>\n<p>So, what is the best way to get started? </p>\n<ol>\n<li>Send us an email or call us up. </li>\n<li>We will walk through your aspirations and provide a high-level road map for achieving your goals at no cost. </li>\n<li>If you like what you see, invite us in for a POC based on a 100% ROI. </li>\n<li>Scale from there. </li>\n</ol>\n<p>If you are unsure about the claims in this post, shoot me an email... you won't get a bot response… you'll get me. </p>\n<p>*Note: the exceptions to these rules are usually around ultra-low latency requirements. </p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloud-for-non-natives/",
        "slug": "cloud-for-non-natives",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cloud for Non-Natives",
        "description": "Faster time to market and lower failure rate are the beginning of the many benefits DevOps offers companies. Discover the measurable metrics and KPIs, as well as the true business value DevOps offers.",
        "updated": null,
        "date": "2020-10-02",
        "year": 2020,
        "month": 10,
        "day": 2,
        "taxonomies": {
          "tags": [
            "devops",
            "insights"
          ],
          "categories": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wes Crook",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/cloud-for-non-natives/",
        "components": [
          "blog",
          "cloud-for-non-natives"
        ],
        "summary": null,
        "toc": [],
        "word_count": 545,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/rust-for-devops-tooling.md",
        "colocated_path": null,
        "content": "<p>A beginner's guide to writing your DevOps tools in Rust.</p>\n<h2 id=\"introduction\">Introduction</h2>\n<p>In this blog post we'll cover some basic DevOps use cases for Rust and why \nyou would want to use it.\nAs part of this, we'll also cover a few common libraries you will likely use\nin a Rust-based DevOps tool for AWS.</p>\n<p>If you're already familiar with writing DevOps tools in other languages,\nthis post will explain why you should try Rust.</p>\n<p>We'll cover why Rust is a particularly good choice of language to write your DevOps\ntooling and critical cloud infrastructure software in.\nAnd we'll also walk through a small demo DevOps tool written in Rust. \nThis project will be geared towards helping someone new to the language ecosystem \nget familiar with the Rust project structure.</p>\n<p>If you're brand new to Rust, and are interested in learning the language, you may want to start off with our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a>.</p>\n<h2 id=\"what-makes-the-rust-language-unique\">What Makes the Rust Language Unique</h2>\n<blockquote>\n<p>Rust is a systems programming language focused on three goals: safety, speed, \nand concurrency. It maintains these goals without having a garbage collector, \nmaking it a useful language for a number of use cases other languages aren’t \ngood at: embedding in other languages, programs with specific space and time \nrequirements, and writing low-level code, like device drivers and operating systems. </p>\n</blockquote>\n<p><em>The Rust Book (first edition)</em></p>\n<p>Rust was initially created by Mozilla and has since gained widespread adoption and\nsupport. 
As the quote from the Rust book alludes to, it was designed to fill the \nsame space that C++ or C would (in that it doesn’t have a garbage collector or a runtime).\nBut Rust also incorporates zero-cost abstractions and many concepts that you would\nexpect in a higher-level language (like Go or Haskell).\nFor that, and many other reasons, Rust's uses have expanded well beyond that\noriginal space as a low-level safe systems language.</p>\n<p>Rust's ownership system is extremely useful in efforts to write correct and \nresource-efficient code. Ownership is one of the killer features of the Rust \nlanguage and helps programmers catch classes of resource errors at compile time \nthat other languages miss or ignore.</p>\n<p>Rust is an extremely performant and efficient language, comparable to the speeds \nyou see with idiomatic everyday C or C++.\nAnd since there isn’t a garbage collector in Rust, it’s a lot easier to get \npredictable, deterministic performance.</p>\n<h2 id=\"rust-and-devops\">Rust and DevOps</h2>\n<p>What makes Rust unique also makes it very useful for areas ranging from robotics \nto rocketry, but are those qualities relevant for DevOps?\nDo we care if we have efficient executables or fine-grained control over \nresources, or is Rust a bit overkill for what we typically need in DevOps?</p>\n<p><em>Yes and no.</em></p>\n<p>Rust is clearly useful for situations where performance is crucial and actions \nneed to occur in a deterministic and consistent way. That obviously translates to \nlow-level places where previously C and C++ were the only game in town. \nIn those situations, before Rust, people simply had to accept the inherent risk and \nadditional development costs of working on a large code base in those languages.\nRust now allows us to operate in those areas but without the risk that C and C++\ncan add.</p>\n<p>But with DevOps and infrastructure programming we aren't constrained by those \nrequirements. 
For DevOps we've been able to choose from languages like Go, Python, \nor Haskell because we're not strictly limited by the use case to languages without \ngarbage collectors. Since we can reach for other languages, you might argue \nthat using Rust is a bit overkill, but let's go over a few points to counter this.</p>\n<h3 id=\"why-you-would-want-to-write-your-devops-tools-in-rust\">Why you would want to write your DevOps tools in Rust</h3>\n<ul>\n<li>Small executables relative to other options like Go or Java</li>\n<li>Easy to port across different OS targets</li>\n<li>Efficient with resources (which helps cut down on your AWS bill)</li>\n<li>One of the fastest languages (even when compared to C)</li>\n<li>Zero-cost abstractions: Rust is a low-level, performant language which also\ngives us the benefits of a high-level language with its generics and abstractions.</li>\n</ul>\n<p>To elaborate on some of these points a bit further:</p>\n<h4 id=\"os-targets-and-cross-compiling-rust-for-different-architectures\">OS targets and Cross Compiling Rust for different architectures</h4>\n<p>For DevOps it's also worth mentioning the (relative) ease with which you can \nport your Rust code across different architectures and different OS's. 
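Porting across OS's often also means branching on the target at compile time, which Rust makes ergonomic with `cfg!`. Here is a small sketch of ours (not from the original post; the tool name and paths are hypothetical, chosen purely for illustration):

```rust
// Sketch: branching on the compile-time target, useful when a DevOps tool
// must behave slightly differently per platform (e.g. config file locations).
fn config_dir() -> &'static str {
    if cfg!(target_os = "windows") {
        // Hypothetical Windows location for illustration only.
        r"C:\ProgramData\mytool"
    } else {
        // Hypothetical Unix location for illustration only.
        "/etc/mytool"
    }
}

fn main() {
    println!("config dir: {}", config_dir());
}
```

Because `cfg!` is evaluated at compile time, the unused branch is compiled out entirely, so each target's binary carries only its own platform logic.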
</p>\n<p>Using the official Rust toolchain installer <code>rustup</code>, it's easy to get the \nstandard library for your target platform.\nRust <a href=\"https://doc.rust-lang.org/nightly/rustc/platform-support.html\">supports a great number of platforms</a>\nwith different tiers of support.\nThe docs for the <code>rustup</code> tool have <a href=\"https://rust-lang.github.io/rustup/cross-compilation.html\">a section</a>\ncovering how you can access pre-compiled artifacts for various architectures.\nTo install the target platform for an architecture (other than the host platform, which is installed by default),\nyou simply need to run <code>rustup target add</code>:</p>\n<pre><code>$ rustup target add x86_64-pc-windows-msvc \ninfo: downloading component &#x27;rust-std&#x27; for &#x27;x86_64-pc-windows-msvc&#x27;\ninfo: installing component &#x27;rust-std&#x27; for &#x27;x86_64-pc-windows-msvc&#x27;\n</code></pre>\n<p>Cross compilation is built into the Rust compiler by default. \nOnce the <code>x86_64-pc-windows-msvc</code> target is installed, you can build for Windows \nwith the <code>cargo</code> build tool using the <code>--target</code> flag:</p>\n<pre><code>cargo build --target=x86_64-pc-windows-msvc\n</code></pre>\n<p>(The default target is always the host architecture.)</p>\n<p>If one of your dependencies links to a native (i.e. non-Rust) library, you will\nneed to make sure that those cross compile as well. Doing <code>rustup target add</code>\nonly installs the Rust standard library for that target. 
However, for the other \ntools that are often needed when cross-compiling, there is the handy\n<a href=\"https://github.com/rust-embedded/cross\">github.com/rust-embedded/cross</a> tool.\nThis is essentially a wrapper around cargo which does all cross compilation in \nDocker images that have all the necessary bits (linkers) and pieces installed.</p>\n<h4 id=\"small-executables\">Small Executables</h4>\n<p>A key unique feature of Rust is that it doesn't need a runtime or a garbage collector.\nCompare this to languages like Python or Haskell: with Rust, the lack of any runtime\ndependencies (as with Python) or system libraries (as with Haskell) is a huge advantage \nfor portability.</p>\n<p>For practical purposes, as far as DevOps is concerned, this portability means \nthat Rust executables are much easier to deploy than scripts.\nWith Rust, compared to Python or Bash, we don't need to set up the environment for \nour code ahead of time. This frees us up from having to worry about whether the runtime \ndependencies for the language are set up.</p>\n<p>In addition to that, with Rust you're able to produce 100% static executables for \nLinux using the MUSL libc (and by default Rust will statically link all Rust code). \nThis means that you can deploy your Rust DevOps tool's binaries across your Linux \nservers without having to worry whether the correct <code>libc</code> or other libraries were \ninstalled beforehand.</p>\n<p>Creating static executables for Rust is simple. 
As we discussed above when covering\ndifferent OS targets, it's easy with Rust to switch the target you're building against.\nTo compile static executables for the Linux MUSL target, all you need to do is add \nthe <code>musl</code> target with:</p>\n<pre><code>$ rustup target add x86_64-unknown-linux-musl\n</code></pre>\n<p>Then you can use this new target to build your Rust project as a fully static \nexecutable with:</p>\n<pre><code>$ cargo build --target x86_64-unknown-linux-musl\n</code></pre>\n<p>As a result of not having a runtime or a garbage collector, Rust executables \ncan be extremely small. For example, there is a common DevOps tool called \nCredStash that was originally written in Python but has since been \nported to Go (GCredStash) and now Rust (RuCredStash).</p>\n<p>Comparing the executable sizes of the Rust versus Go implementations of CredStash,\nthe Rust executable is nearly a quarter of the size of the Go variant. </p>\n<table><thead><tr><th>Implementation</th><th>Executable Size</th></tr></thead><tbody>\n<tr><td>Rust CredStash (RuCredStash, Linux amd64)</td><td>3.3 MB</td></tr>\n<tr><td>Go CredStash (GCredStash, Linux amd64, v0.3.5)</td><td>11.7 MB</td></tr>\n</tbody></table>\n<p>Project links:</p>\n<ul>\n<li><a href=\"https://github.com/psibi/rucredstash\">github.com/psibi/rucredstash</a></li>\n<li><a href=\"https://github.com/winebarrel/gcredstash\">github.com/winebarrel/gcredstash</a></li>\n</ul>\n<p>This is by no means a perfect comparison, and 8 MB may not seem like a lot, but\nconsider the advantage of automatically having executables that are a quarter of the \nsize you would typically expect. </p>\n<p>This cuts down on the size that your Docker images, AWS AMIs, or Azure VM images need\nto be - and that helps speed up the time it takes to spin up new deployments.</p>\n<p>With a tool of this size, the benefit of an executable that is 75% smaller than it would otherwise be is not immediately apparent. \nOn this scale the difference, 8 MB,\nis still quite cheap.\nBut with larger tools (or collections of tools and Rust-based software) the benefits\nadd up and the difference begins to be a practical and worthwhile consideration.</p>\n<p>The Rust implementation was also not strictly written with the resulting size of \nthe executable in mind. So if executable size were an even more important factor, other \nchanges could be made - but that's beyond the scope of this post.</p>\n<h4 id=\"rust-is-fast\">Rust is fast</h4>\n<p>Rust is very fast even for common, idiomatic, everyday Rust code. Not only that,\nit's arguably easier to work with than C and C++, and easier to catch errors in your \ncode.</p>\n<p>For the Fortunes benchmark (which exercises the ORM, \ndatabase connectivity, dynamic-size collections, sorting, server-side templates, \nXSS countermeasures, and character encoding), Rust is second and third, only lagging \nbehind the first-place C++-based framework by 4 percent. </p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-fortunes.png\" style=\"max-width:95%\">\n<p>In the benchmark for database access for a single query, Rust is first and second:</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-single-query.png\" style=\"max-width:95%\">\n<p>And in a composite of all the benchmarks, Rust-based frameworks are in second and third place.</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-composite.png\" style=\"max-width:95%\">\n<p>Of course, language and framework benchmarks are not real life; however, this is \nstill a fair comparison of the languages as they relate to others (within the context \nand the focus of the benchmark).</p>\n<p>Source: <a href=\"https://www.techempower.com/benchmarks/\">https://www.techempower.com/benchmarks</a></p>\n<h3 id=\"why-would-you-not-want-to-write-your-devops-tools-in-rust\">Why would you not want to write your DevOps tools in Rust?</h3>\n<p>For medium to large projects, it’s important to have a type system 
and compile-time\nchecks like those in Rust versus what you would find in something like Python\nor Bash.\nThe latter languages let you get away with things far more readily. This makes \ndevelopment much &quot;faster&quot; in one sense.</p>\n<p>Certain situations, especially those involving small codebases, would \nbenefit more from using an interpreted language. In these cases, being able to quickly \nchange pieces of the code without needing to re-compile and re-deploy the project\noutweighs the benefits (in terms of safety, execution speed, and portability)\nthat languages like Rust bring. </p>\n<p>Working with and iterating on a Rust codebase in those circumstances, with frequent\nbut small codebase changes, would be needlessly time-consuming.\nIf you have a small codebase with few or no runtime dependencies, then it wouldn't\nbe worth it to use Rust.</p>\n<h2 id=\"demo-devops-project-for-aws\">Demo DevOps Project for AWS</h2>\n<p>We'll briefly cover some of the libraries typically used for an AWS-focused \nDevOps tool in a walk-through of a small demo Rust project. \nThis aims to provide a small example that uses some of the libraries you'll likely\nwant if you’re writing a CLI-based DevOps tool in Rust. 
Specifically for this \nexample we'll show a tool that does some basic operations against AWS S3 \n(creating new buckets, adding files to buckets, listing the contents of buckets).</p>\n<h3 id=\"project-structure\">Project structure</h3>\n<p>For AWS integration we're going to utilize the <a href=\"https://www.rusoto.org/\">Rusoto</a> library.\nSpecifically for our modest demo Rust DevOps tool we're going to pull in the \n<a href=\"https://docs.rs/rusoto_core/0.45.0/rusoto_core/\">rusoto_core</a> and the \n<a href=\"https://docs.rs/rusoto_s3/0.45.0/rusoto_s3/\">rusoto_s3</a> crates (in Rust a <em>crate</em>\nis akin to a library or package).</p>\n<p>We're also going to use the <a href=\"https://docs.rs/structopt/0.3.16/structopt/\">structopt</a> crate\nfor our CLI options. This is a handy, batteries-included CLI library that makes \nit easy to create a CLI interface around a Rust struct. </p>\n<p>The tool operates by matching the CLI options and arguments the user passes in \nwith a <a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs#L211\"><code>match</code> expression</a>.</p>\n<p>We can then match on the relevant part of the CLI option struct we've defined \nand call the appropriate functions for that option.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">match opt {\n    Opt::Create { bucket: bucket_name } =&gt; {\n        println!(&quot;Attempting to create a bucket called: {}&quot;, bucket_name);\n        let demo = S3Demo::new(bucket_name);\n        create_demo_bucket(&amp;demo);\n    },\n    // ...the remaining subcommands are matched the same way\n}\n</code></pre>\n<p>This matches on the <a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs#L182\"><code>Create</code></a>\nvariant of the <code>Opt</code> enum. 
</p>\n<p>We then use <code>S3Demo::new(bucket_name)</code> to create a new <code>S3Client</code>, which we can\nuse in the standalone <code>create_demo_bucket</code> function that we've defined \nto create a new S3 bucket.</p>\n<p>The tool is fairly simple, with most of the code located in \n<a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs\">src/main.rs</a>.</p>\n<h3 id=\"building-the-rust-project\">Building the Rust project</h3>\n<p>Before you build the code in this project, you will need to install Rust. \nPlease follow <a href=\"https://www.rust-lang.org/tools/install\">the official install instructions here</a>.</p>\n<p>The default build tool for Rust is called Cargo. It's worth getting familiar \nwith <a href=\"https://doc.rust-lang.org/cargo/guide/\">the docs for Cargo</a>,\nbut here's a quick overview for building the project.</p>\n<p>To build the project run the following from the root of the \n<a href=\"https://github.com/fpco/rust-aws-devops\">git repo</a>:</p>\n<pre><code>cargo build\n</code></pre>\n<p>You can then use <code>cargo run</code> to run the code or execute the code directly\nwith <code>./target/debug/rust-aws-devops</code>:</p>\n<pre><code>$ .&#x2F;target&#x2F;debug&#x2F;rust-aws-devops \n\nRunning tool\nRustAWSDevops 0.1.0\nMike McGirr &lt;[email protected]&gt;\n\nUSAGE:\n    rust-aws-devops &lt;SUBCOMMAND&gt;\n\nFLAGS:\n    -h, --help       Prints help information\n    -V, --version    Prints version information\n\nSUBCOMMANDS:\n    add-object       Add the specified file to the bucket\n    create           Create a new bucket with the given name\n    delete           Try to delete the bucket with the given name\n    delete-object    Remove the specified object from the bucket\n    help             Prints this message or the help of the given subcommand(s)\n    list             Try to find the bucket with the given name and list its objects\n</code></pre>\n<p>This will print 
the nice CLI help output automatically created for us \nby <code>structopt</code>.</p>\n<p>If you're ready to build a release version (with optimizations turned on, which \nwill make compilation take slightly longer) run the following:</p>\n<pre><code>cargo build --release\n</code></pre>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>As this small demo showed, it's not difficult to get started using Rust to write\nDevOps tools. And even so, we didn't need to make a trade-off between ease of\ndevelopment and performant, fast code. </p>\n<p>Hopefully the next time you're writing a new piece of DevOps software, \nanything from a simple CLI tool for a specific DevOps operation to \nthe next Kubernetes, you'll consider reaching for Rust.\nAnd if you have further questions about Rust, or need help implementing your Rust \nproject, please feel free to reach out to FP Complete for Rust engineering \nand training!</p>\n<p>Want to learn more Rust? Check out our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a>. And for more information, check out our <a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/",
        "slug": "rust-for-devops-tooling",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Using Rust for DevOps tooling",
        "description": "A beginner's guide to writing your DevOps tools in Rust.",
        "updated": null,
        "date": "2020-09-09",
        "year": 2020,
        "month": 9,
        "day": 9,
        "taxonomies": {
          "tags": [
            "devops",
            "rust",
            "insights"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mike McGirr",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/rust-for-devops-tooling/",
        "components": [
          "blog",
          "rust-for-devops-tooling"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "introduction",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#introduction",
            "title": "Introduction",
            "children": []
          },
          {
            "level": 2,
            "id": "what-makes-the-rust-language-unique",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#what-makes-the-rust-language-unique",
            "title": "What Makes the Rust Language Unique",
            "children": []
          },
          {
            "level": 2,
            "id": "rust-and-devops",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#rust-and-devops",
            "title": "Rust and DevOps",
            "children": [
              {
                "level": 3,
                "id": "why-you-would-want-to-write-your-devops-tools-in-rust",
                "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#why-you-would-want-to-write-your-devops-tools-in-rust",
                "title": "Why you would want to write your DevOps tools in Rust",
                "children": [
                  {
                    "level": 4,
                    "id": "os-targets-and-cross-compiling-rust-for-different-architectures",
                    "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#os-targets-and-cross-compiling-rust-for-different-architectures",
                    "title": "OS targets and Cross Compiling Rust for different architectures",
                    "children": []
                  },
                  {
                    "level": 4,
                    "id": "small-executables",
                    "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#small-executables",
                    "title": "Small Executables",
                    "children": []
                  },
                  {
                    "level": 4,
                    "id": "rust-is-fast",
                    "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#rust-is-fast",
                    "title": "Rust is fast",
                    "children": []
                  }
                ]
              },
              {
                "level": 3,
                "id": "why-would-you-not-want-to-write-your-devops-tools-in-rust",
                "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#why-would-you-not-want-to-write-your-devops-tools-in-rust",
                "title": "Why would you not want to write your DevOps tools in Rust?",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "demo-devops-project-for-aws",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#demo-devops-project-for-aws",
            "title": "Demo DevOps Project for AWS",
            "children": [
              {
                "level": 3,
                "id": "project-structure",
                "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#project-structure",
                "title": "Project structure",
                "children": []
              },
              {
                "level": 3,
                "id": "building-the-rust-project",
                "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#building-the-rust-project",
                "title": "Building the Rust project",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2540,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/",
            "title": "Cloud Vendor Neutrality"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
            "title": "Levana NFT Launch"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/",
            "title": "Rust: Of course it compiles, right?"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
            "title": "Deploying Rust with Windows Containers on Kubernetes"
          },
          {
            "permalink": "https://tech.fpcomplete.com/rust/",
            "title": "FP Complete Rust"
          }
        ]
      },
      {
        "relative_path": "blog/devops-unifying-dev-ops-qa.md",
        "colocated_path": null,
"content": "<p>The term DevOps has been around for many years. Small and big companies adopt DevOps concepts for different purposes, e.g. to increase the quality of software. In this blog post, we define DevOps, present its pros and cons, highlight a few concepts and see how these can impact the entire organization.</p>\n<h2 id=\"what-is-devops\">What is DevOps?</h2>\n<p>At a high level, DevOps is understood as a technical, organizational and cultural shift in a company to run software more efficiently, reliably, and securely. From this first definition, we can see that DevOps is much more than &quot;use tool X&quot; or &quot;move to the cloud&quot;. DevOps starts with the understanding that development (Dev), operations (Ops) and quality assurance (QA) are not treated as siloed disciplines anymore. Instead, they all come together in shared processes and responsibilities across collaborating teams. DevOps achieves this through various techniques. In the section &quot;How to implement DevOps&quot;, we present a few of these concepts.</p>\n<h2 id=\"benefits\">Benefits</h2>\n<p>Benefits of applying DevOps include:</p>\n<ul>\n<li>Cost savings through higher efficiency.</li>\n<li>Faster software iteration cycles, where updates take less time from development to running in production.</li>\n<li>More security, reliability, and fault tolerance when running software.</li>\n<li>Stronger bonds between different stakeholders in the organization, including non-technical staff.</li>\n<li>More data-driven decision making.</li>\n</ul>\n<p>Let's have a look at <em>how</em> these benefits can be achieved by applying DevOps ideas:</p>\n<h2 id=\"how-to-implement-devops\">How to implement DevOps</h2>\n<h3 id=\"automation-and-continuous-integration-ci-continuous-delivery-cd\">Automation and Continuous Integration (CI) / Continuous Delivery (CD)</h3>\n<p>Automation refers to a key aspect of the engineering-driven part of DevOps. 
With automation, we aim to reduce the need for human action, and thus the possibility of human error, as far as possible by sending your software through an automated and well-understood pipeline of actions. These automated actions can build your software, run unit tests, integrate it with existing systems, run system tests, deploy it, and provide feedback on each step. What we are\ndescribing here is usually referred to as <strong>Continuous Integration (CI)</strong> and <strong>Continuous Delivery (CD)</strong>. Adopting CI/CD is a low-risk and low-cost investment in crossing the chasm between &quot;software that is working on an engineer's laptop&quot; and &quot;software that is running securely and reliably on production servers&quot;.</p>\n<p>CI/CD is usually tied to a platform on top of which the automated actions are run, e.g., Gitlab. The platform accepts software that should be passed through the pipeline, executes the automated actions on servers which are usually abstracted away, and provides feedback to the engineering team. These actions can be highly customized and tied together in different ways. For example, one action only compiles the source code and provides the build artifacts to subsequent actions. Another action can be responsible for running a test-suite, another one can deploy software. Such actions can be defined for different types of software: A website can be automatically deployed to a server, or a desktop application can be made available to your customers without human interaction.</p>\n<p>Besides the fact that CI/CD can be used for all kinds of software, there are other advantages to consider:</p>\n<ol>\n<li><strong>The CI/CD pipeline is well-understood and maintained by the teams</strong>: the actions that are run in a pipeline can be flexibly updated, extended, etc. 
<a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#infra-as-code\">Infrastructure as Code</a> can be a powerful concept here.</li>\n<li><strong>Run in standardized environments</strong>: Version conflicts between tools and configuration or dependency mismatches only have to be fixed once, when the pipeline is built. Once a pipeline is working, it will continue to work as the underlying servers and their software versions don't change. No more conflicts between operating systems, tools, and versions of tools across different engineers. Pipelines are highly reproducible. Containerization can be a game-changer here.</li>\n<li><strong>Feedback</strong>: Actions sometimes fail, e.g. because a unit test does not pass. The CI/CD platform usually allows different reporting mechanisms: E-mail someone, update the project status on your repository overview page, block subsequent actions or cancel other pipelines.</li>\n</ol>\n<p>The next sections cover more DevOps concepts that benefit from automation.</p>\n<h3 id=\"multiple-environments\">Multiple Environments</h3>\n<p>The CI/CD pipeline can be extended by deploying software to different environments. These deployments can happen in individual actions defined in your pipeline. Besides the production environment, which runs user-facing software, staging and testing environments can be defined where software is deployed. For example, a testing environment can be used by the engineering team for peer-reviewing and validating software changes. Once the team has agreed on new software, it can be deployed to a staging environment. A usual purpose of the staging environment is to mimic the production environment as closely as possible. Further tests can be run in a staging environment to make sure the software is ready to be used by real users. Finally, the software reaches production-readiness and is deployed to a production environment. Such a production deployment can be designed using a gradual rollout, i.e. 
canary deployments.</p>\n<p>Different environments not only realize different semantics and confidence levels of running software, e.g. as described in the previous paragraph, but also serve as an agreed-upon view of software across the entire organization. Multi-environment deployments make your software, and its quality, easier to understand. This is because of the insights gained when running software, in particular on infrastructure that is close to a production setting. Generally, running software gives much more insight into the performance, reliability, security, production-readiness and overall quality. Different teams, e.g. security experts or a dedicated QA team (if your organization follows this practice), can be consulted at different software quality stages, i.e. different environments in which software runs. Additionally, non-technical staff can use environments, e.g. specialized ones for demo purposes.</p>\n<p>Ultimately, integrating multiple environments structures QA and smooths the interactions between different teams.</p>\n<h3 id=\"fail-early\">Fail early</h3>\n<p>No matter how well things are working in an organization that builds software, bugs happen and bugs are expensive. The cost of bugs can be measured in the manpower invested in fixing the bug, the loss of reputation due to angry customers, and the generally negative business impact. Since we can't fully avoid bugs, there exist concepts to reduce both the frequency and impact of bugs. &quot;Fail early&quot; is one of these concepts.</p>\n<p>The basic idea is to catch bugs and other flaws in your software as early in the development process as possible. When software is developed, unit tests, compiler errors and peer reviews count towards the early and cheap mechanisms to detect and fix flaws. Ideally, a unit test tells the developer that the software is not correct, or a second pair of eyes reveals a potential performance issue during a code review. 
In both cases, not much time and effort is lost and the flaw can be easily fixed. However, other bugs might make it through these initial checks and land in testing or staging environments. Other types of tests and QA should be in place to check the software quality. In the worst case, the bug outlives all checks and makes it into production. There, bugs have a much higher impact and require more effort from many stakeholders, e.g. the bug fix by the engineering team and the apology to the customers.</p>\n<p>Cheap checks, such as running a test suite in an automated pipeline, should therefore be executed early: flaws discovered later in the process result in higher costs. Thus, failing early increases cost efficiency.</p>\n<h3 id=\"rollbacks\">Rollbacks</h3>\n<p>DevOps can also help to react quickly to changes. One example of a sudden change is a bug, as described in the last section, which is discovered in the production environment. Rollbacks, for example as manually triggered pipelines, can restore a production service to working order in a timely manner. This can be useful when the bug is a hard one and needs hours to be identified and fixed. Hours of degraded customer experience or even downtime make paying customers unhappy. A faster mechanism is desired, which minimizes the gap between a faulty system and a recovered system. A rollback can be a fast and effective way to recover system state without exposing customers to the failure for long.</p>\n<h3 id=\"policies\">Policies</h3>\n<p>DevOps concepts pose a challenge for security and permission management, as these span the entire organization. Policies can help to formulate authorizations and rules during operations. 
For example, the following security requirements may need to be implemented:</p>\n<ul>\n<li>A deployment or rollback in production should not be triggered by anyone but a well-defined set of people in authority.</li>\n<li>Some actions in a CI/CD pipeline should always be run while other actions are intended to be triggered manually or only run under certain conditions.</li>\n<li>The developers might require slightly different permissions than a dedicated QA team to perform their day-to-day work.</li>\n<li>Humans and machine users can have different capabilities but should always have the least privileges assigned to them.</li>\n</ul>\n<p>The authentication and authorization tools provided by CI/CD providers or cloud vendors can help to design such policies according to your organizational needs.</p>\n<h3 id=\"observability\">Observability</h3>\n<p>As software is running and users are interacting with your applications, insights such as error rates, performance statistics, resource usage, etc. can help to identify bottlenecks, mitigate future issues, and drive business decisions through data. There exist two major ways to establish different forms of observability:</p>\n<ul>\n<li><strong>Logging</strong>: Events in text form that software outputs to inform about the application's status and health. Different types of logging messages, e.g. indicating the severity of an error event, can help to aggregate and display log messages in a central place, where they can be used by engineering teams for debugging purposes.</li>\n<li><strong>Metrics</strong>: Information about the running software that is not generated by the application itself. For example, the CPU or memory usage of the underlying machine that runs the software, network statistics, HTTP error rates, etc. As with logging, metrics can help to spot bottlenecks and mitigate them before they have a business impact. 
Visualizing aggregated metrics data facilitates communication across technical and non-technical teams and enables data-driven decisions. Metrics dashboards can strengthen the shared ownership of software across teams.</li>\n</ul>\n<p>Logging and metrics can help to define goals and to align a development team with a QA team, for example.</p>\n<h2 id=\"disadvantages\">Disadvantages</h2>\n<p>So far, we have only looked at the benefits and characteristics of DevOps. Let's have a brief look at the other side of the coin by commenting on the possible negative side effects and disadvantages of adopting DevOps concepts.</p>\n<ul>\n<li>\n<p>The investment into DevOps can be huge, as it is a company-wide, multi-discipline, and multi-team transformation that not only requires technical implementation effort but also training for people, and re-structuring and aligning teams.</p>\n</li>\n<li>\n<p>This goes along with the first point, but it's worth emphasizing: The cultural impact on your organization can be challenging due to human factors. While a new automation mechanism can be estimated and implemented reasonably well, tracking the progress of changing people's ways of communicating, their feeling of ownership, and their alignment to new processes can be hard, and the efficiency gains DevOps promises might not materialize short-term. Due to the high impact of DevOps, it is a long-term investment.</p>\n</li>\n<li>\n<p>The technical backbone of DevOps, e.g. CI/CD pipelines, cloud vendors, integration of authorization and authentication, likely results in increased expenses through new contracts and licenses with new players. However, through the dominance of open source in modern DevOps tooling, e.g. through using Kubernetes, vendor lock-in can be avoided.</p>\n</li>\n</ul>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>In this blog post, we explored the definition of DevOps and presented several DevOps concepts and use-cases. Furthermore, we evaluated benefits and disadvantages. 
Adopting DevOps is an investment into a low-friction and automated way of developing, testing, and running software. Technical improvements, e.g. automation, as well as increased collaboration between teams of different disciplines, ultimately improve the efficiency of your organization long-term.</p>\n<p>However, DevOps not only represents technical effort but also impacts the entire company, e.g. how teams communicate with each other, how issues are resolved, and what teams feel responsible for. Finding the right balance and choosing the best concepts and tools for your teams is a challenge. We can help you identify and carry out the DevOps transformation in your organization.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/",
        "slug": "devops-unifying-dev-ops-qa",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps: Unifying Dev, Ops, and QA",
        "description": "The term DevOps has been around for many years. Small and big companies adopt DevOps concepts for different purposes, e.g. to increase the quality of software. In this blog post, we define DevOps, present its pros and cons, highlight a few concepts and see how these can impact the entire organization.",
        "updated": null,
        "date": "2020-08-24",
        "year": 2020,
        "month": 8,
        "day": 24,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Moritz Hoffmann",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/devops-unifying-dev-ops-qa/",
        "components": [
          "blog",
          "devops-unifying-dev-ops-qa"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "what-is-devops",
            "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#what-is-devops",
            "title": "What is DevOps?",
            "children": []
          },
          {
            "level": 2,
            "id": "benefits",
            "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#benefits",
            "title": "Benefits",
            "children": []
          },
          {
            "level": 2,
            "id": "how-to-implement-devops",
            "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#how-to-implement-devops",
            "title": "How to implement DevOps",
            "children": [
              {
                "level": 3,
                "id": "automation-and-continuous-integration-ci-continuous-delivery-cd",
                "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#automation-and-continuous-integration-ci-continuous-delivery-cd",
                "title": "Automation and Continuous Integration (CI) / Continuous Delivery (CD)",
                "children": []
              },
              {
                "level": 3,
                "id": "multiple-environments",
                "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#multiple-environments",
                "title": "Multiple Environments",
                "children": []
              },
              {
                "level": 3,
                "id": "fail-early",
                "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#fail-early",
                "title": "Fail early",
                "children": []
              },
              {
                "level": 3,
                "id": "rollbacks",
                "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#rollbacks",
                "title": "Rollbacks",
                "children": []
              },
              {
                "level": 3,
                "id": "policies",
                "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#policies",
                "title": "Policies",
                "children": []
              },
              {
                "level": 3,
                "id": "observability",
                "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#observability",
                "title": "Observability",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "disadvantages",
            "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#disadvantages",
            "title": "Disadvantages",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2023,
        "reading_time": 11,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-deployments/",
            "title": "Understanding Cloud Software Deployments"
          }
        ]
      },
      {
        "relative_path": "blog/devops-for-developers.md",
        "colocated_path": null,
        "content": "<p>In this post, I describe my personal journey as a developer skeptical\nof the seemingly ever-growing, ever more complex, array of &quot;ops&quot;\ntools. I move towards adopting some of these practices, ideas and\ntools. I write about how this journey helps me to write software\nbetter and understand discussions with the ops team at work.</p>\n<div style=\"border:1px solid black;background-color:#f8f8f8;margin-bottom:1em;padding: 0.5em 0.5em 0 0.5em;\">\n<p><strong>Table of Contents</strong></p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#on-being-skeptical\">On being skeptical</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#the-humble-app\">The humble app</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#disk-failures-are-not-that-common\">Disk failures are not that common</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#auto-deployment-is-better-than-manual\">Auto-deployment is better than manual</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#backups-become-worth-it\">Backups become worth it</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#deployment-staging\">Deployment staging</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#packaging-with-docker-is-good\">Packaging with Docker is good</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#custodians-multiple-processes-are-useful\">Custodians multiple processes are useful</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#kubernetes-provides-exactly-that\">Kubernetes provides exactly that</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#declarative-is-good-vendor-lock-in-is-bad\">Declarative is good, vendor lock-in is bad</a></li>\n<li><a 
href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#more-advanced-rollout\">More advanced rollout</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#relationship-between-code-and-deployed-state\">Relationship between code and deployed state</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#argocd\">ArgoCD</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#infra-as-code\">Infra-as-code</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#where-the-dev-meets-the-ops\">Where the dev meets the ops</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#what-we-do\">What we do</a></li>\n</ul>\n</div>\n<h2 id=\"on-being-skeptical\">On being skeptical</h2>\n<p>I would characterise my attitudes to adopting technology in two\nstages:</p>\n<ul>\n<li>Firstly, I am conservative and dismissive, in that I will usually\ndisregard any popular new technology as a bandwagon or trend. I'm a\nslow adopter.</li>\n<li>Secondly, when I actually encounter a situation where I've suffered,\nI'll then circle back to that technology and give it a try, and if I\ncan really find the nugget of technical truth in there, then I'll\nadopt it.</li>\n</ul>\n<p>Here are some things that I disregarded for a year or more before\ntrying: Emacs, Haskell, Git, Docker, Kubernetes, Kafka. 
The whole\nNoSQL trend came, wreaked havoc, and went, while I had my back turned,\nbut I am considering using Redis for a cache at the moment.</p>\n<h2 id=\"the-humble-app\">The humble app</h2>\n<p>If you’re a developer like me, you’re probably used to writing your\nsoftware, spending most of your time developing, and then finally\ndeploying your software by simply creating a machine, either a\ndedicated machine or a virtual machine, and then uploading a binary of\nyour software (or source code if it’s interpreted), and then running\nit with a copy-pasted systemd config or simply running the\nsoftware inside GNU screen. It's a secret shame that I've done this,\nbut it's the reality.</p>\n<p>You might use nginx to reverse-proxy to the service. Maybe you set up\na PostgreSQL database or MySQL database on that machine. And then you\nwalk away and test out the system, and later you realise you need some\nslight changes to the system configuration. So you SSH into the system\nand make the small tweaks necessary, such as port settings, encoding\nsettings, or an additional package you forgot to add. Sound familiar?</p>\n<p>But on the whole, your work here is done, and for most services this is\npretty much fine. Plenty of the services you have seen over the past\n30 years have been running like this.</p>\n<h2 id=\"disk-failures-are-not-that-common\">Disk failures are not that common</h2>\n<p>Rhetoric about processes going down due to a hardware failure is\nprobably overblown. Hard drives don’t crash very often. They don’t\nreally wear out as quickly as they used to, and you can be running a\nsystem for years before anything even remotely concerning happens.</p>\n<h2 id=\"auto-deployment-is-better-than-manual\">Auto-deployment is better than manual</h2>\n<p>When you start to iterate a little bit quicker, you get bored of\nmanually building and copying and restarting the binary on the\nsystem. 
This is especially noticeable if you forget the steps later\non.</p>\n<!-- Implementing Auto-Deployment -->\n<p>If you’re a little bit more advanced you might have some special\nscripts or post-merge git hooks, so that when you push to your repo it\nwould deploy to the same machine, and you have some associated token on\nyour CI machine that is capable of uploading a binary and running a\ncommand like copy and restart (e.g. an SSH key or API\nkey). Alternatively, you might implement a polling system on the\nactual production system which will check if any updates have occurred\nin git and, if so, pull down a new binary. This is how we were doing\nthings in, e.g., 2013.</p>\n<h2 id=\"backups-become-worth-it\">Backups become worth it</h2>\n<p>Eventually, if you're lucky, your service starts to become slightly\nmore important; maybe it’s used in business and people actually are\nusing it and storing valuable things in the database. You start to\nthink that backups are a good idea and worth the investment.</p>\n<!-- Redundancy of DB -->\n<p>You probably also have a script to back up the database, or replicate\nit on a separate machine, for redundancy.</p>\n<h2 id=\"deployment-staging\">Deployment staging</h2>\n<p>Eventually, you might have a staged deployment strategy. So you might\nhave a developer testing machine, you might have a QA machine, a\nstaging machine, and finally a production machine. All of these are\nconfigured in pretty much the same way, but they are deployed at\ndifferent times, and probably the system administrator is the only one\nwith access to deploy to production.</p>\n<!-- Continuum -->\n<p>It’s clear by this point that I’m describing a continuum from &quot;hobby\nproject&quot; to &quot;enterprise serious business synergy solutions&quot;.</p>\n<h2 id=\"packaging-with-docker-is-good\">Packaging with Docker is good</h2>\n<p>Docker effectively collapses all of the system dependencies\nyour binary needs to run into one contained package. 
This is good,\nbecause dependency management is hell. It's also highly wasteful,\nbecause its level of granularity is very wide. But this is a trade-off\nwe accept for the benefits.</p>\n<h2 id=\"custodians-multiple-processes-are-useful\">Custodians multiple processes are useful</h2>\n<p>Docker doesn’t have much to say about starting and restarting\nservices. I’ve explored using CoreOS with the hosting provider Digital\nOcean, and simply running a fresh virtual machine, with the given\nDocker image.</p>\n<p>However, you quickly run into the problem of starting up and tearing\ndown:</p>\n<ul>\n<li>When you start the service, you need certain liveness checks\nand health checks, so that if the new service fails to start, you\nkeep the existing ones running.</li>\n<li>If the process fails at any time while running, then you should also\nrestart the process. I thought about this point a lot, and came to the\nconclusion that it’s better to have your process be restarted than to\nassume that the reason it failed was so dangerous that the process\nshouldn’t start again. Probably it’s more likely that there is an\nexception or memory issue that happened in a pathological case which\nyou can investigate in your logging system. But it doesn’t mean that\nyour users should suffer by having downtime.</li>\n<li>The natural progression of this functionality is to support\ndifferent rollout strategies. Do you want to switch everything to the\nnew system in one go, or do you want it to be deployed piece by piece?</li>\n</ul>\n<!-- Summary: You Realise Worth Of Ops Tools -->\n<p>It’s hard to fully appreciate the added value of ops systems like\nKubernetes, Istio/Linkerd, Argo CD, Prometheus, Terraform, etc. 
until\nyou decide to design a complete architecture yourself, from scratch,\nthe way you want it to work in the long term.</p>\n<h2 id=\"kubernetes-provides-exactly-that\">Kubernetes provides exactly that</h2>\n<p>What system accepts Docker images, provides custodianship and\nrollout strategies, and makes redeploys trivial? Kubernetes.</p>\n<p>It provides the classical monitoring and custodian responsibilities\nthat plenty of other systems have provided in the past. However, unlike\nsimply running a process and testing if it’s fine and then turning off\nanother process, Kubernetes buys into Docker all the way.  Processes\nare isolated from each other, in both the network and the file\nsystem. Therefore, you can very reliably start and stop the services\non the same machine. Nothing about a process's machine state is\npersistent, therefore you are forced to design your programs in a way\nthat state is explicitly stored either ephemerally, or elsewhere.</p>\n<!-- Cloud Managed Databases Make This Practical -->\n<p>In the past it might have been a little bit scarier to have your database\nrunning in such a system: what if it automatically wipes out the\ndatabase process? With today’s cloud-based deployments, it's more\ncommon to use a managed database such as those provided by Amazon,\nDigital Ocean, Google or Azure. The whole problem of updating and\nbacking up your database can pretty much be put to one\nside. Therefore, you are free to mess with the configuration or\ntopology of your cluster as much as you like without affecting your\ndatabase.</p>\n<h2 id=\"declarative-is-good-vendor-lock-in-is-bad\">Declarative is good, vendor lock-in is bad</h2>\n<p>A very appealing feature of a deployment system like Kubernetes is\nthat everything is automatic and declarative. 
You stick all of your\nconfiguration in simple YAML files (which is also a curse, because YAML\nhas its own warts and it's not common to find formal schemas for it).\nThis is also known as &quot;infrastructure as code&quot;.</p>\n<p>Ideally, you should have as much as possible about your infrastructure\nin code checked in to a repo so that you can reproduce it and track\nit.</p>\n<p>There is also a much more straightforward path to migrate from one\nservice provider to another. Kubernetes is supported\non all the major service providers (Google, Amazon, Azure), therefore\nyou are less vulnerable to vendor lock-in. They also all provide\nmanaged databases that are standard (PostgreSQL, for example) with\ntheir normal wire protocols. If you were using vendor-specific\nAPIs to achieve some of this, you'd be stuck on one vendor. I, for\nexample, am not sure whether to go with Amazon or Azure on a big\npersonal project right now. If I use Kubernetes, I am mitigating risk.</p>\n<p>With something like Terraform you can go one step further, in which\nyou write code that can create your cluster completely from\nscratch. This further mitigates vendor lock-in.</p>\n<h2 id=\"more-advanced-rollout\">More advanced rollout</h2>\n<p>Your load balancer and your DNS can also be in code. Typically, a load\nbalancer that does the job is nginx. However, for more advanced\ndeployments such as A/B or blue/green deployments, you may need\nsomething more advanced like Istio or Linkerd.</p>\n<p>Do I really want to deploy a new feature to all of my users? Maybe;\nthat might be easier. Do I want to deploy a different way of marketing\nmy product on the website to all users at once? If I do that, then I\ndon’t exactly know how effective it is. So, I could perhaps do a\ndeployment in which half of my users see one page and half of the\nusers see another page. 
These kinds of deployments are\nstraightforwardly achieved with Istio/Linkerd-type service meshes,\nwithout having to change any code in your app.</p>\n<h2 id=\"relationship-between-code-and-deployed-state\">Relationship between code and deployed state</h2>\n<p>Let's think further than this.</p>\n<p>You've set up your cluster with your provider, or Terraform. You've\nset up your Kubernetes deployments and services. You've set up your CI\nto build your project, produce a Docker image, and upload the images\nto your registry. So far so good.</p>\n<p>Suddenly, you’re wondering, how do I actually deploy this? How do I\ncall Kubernetes, with the correct credentials, to apply this new\nDocker image to the appropriate deployment?</p>\n<p>Actually, this is still an ongoing area of innovation. An obvious way\nto do it is: you put some credentials on your CI system that allow it to\nrun kubectl, then set the deployment's image to the new image name, which will\ntrigger a deployment. If the deployment fails, you can look at the\nresult in your CI dashboard.</p>\n<p>However, the question comes up: what is actually deployed\nin production right now? Do we really have infrastructure as code here?</p>\n<p>It’s not that I edited a file and that update got\nreflected. There’s no file anywhere in Git that contains what the\ncurrent image is. Head scratcher.</p>\n<p>Ideally, you would have a repository somewhere which states exactly\nwhich image should be deployed right now. And if you change it in a\ncommit, and then later revert that commit, you should expect\nproduction to be reverted to reflect the code too, right?</p>\n<h2 id=\"argocd\">ArgoCD</h2>\n<p>One system which attempts to address this is ArgoCD. It implements\nwhat it calls &quot;GitOps&quot;. All state of the system is reflected in a Git\nrepo somewhere. 
In Argo CD, after your GitHub/GitLab/Jenkins/Travis CI\nsystem has pushed your Docker image to the Docker repository, it makes\na gRPC call to Argo, which becomes aware of the new image. As an\nadmin, you can now trivially look in the UI and click &quot;Refresh&quot; to\nredeploy the new version.</p>\n<h2 id=\"infra-as-code\">Infra-as-code</h2>\n<p>The common running theme in all of this is\ninfrastructure-as-code. It’s immutability. It’s declarative. It’s\nreducing the number of steps that a human has to do or care\nabout. It’s about being able to rewind. It’s about redundancy. And\nit’s about scaling easily.</p>\n<!-- Circling Back -->\n<p>When you really try to architect your own system, and your business\nwill lose money in the case of ops mistakes, then all of these\nadvantages of infrastructure as code start looking really attractive.</p>\n<p>But before you really sit down and think about this stuff, it\nis pretty hard to empathise or sympathise with the kind of concerns\nthat people using these systems have.</p>\n<!-- Downsides/Tax -->\n<p>There are some downsides to these tools, as with any:</p>\n<ul>\n<li>Docker is quite wasteful of time and space</li>\n<li>Kubernetes is undoubtedly complex, and leans heavily on YAML</li>\n<li><a href=\"https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/\">All abstractions are leaky</a>,\ntherefore tools like this all leak</li>\n</ul>\n<h2 id=\"where-the-dev-meets-the-ops\">Where the dev meets the ops</h2>\n<p>Now that I’ve started looking into these things and appreciating their\nuse, I interact a lot more with the ops side of our DevOps team at work,\nand I can be way more helpful in assisting them with the\ninformation that they need, and in writing apps which anticipate the\nkind of deployment that is going to happen. 
The most difficult\nchallenge is typically metrics and logging. I mean run-of-the-mill\napps here, not high-performance apps.</p>\n<!-- An Exercise -->\n<p>One way to bridge the gap between your ops team and dev team,\ntherefore, might be an exercise meeting in which a dev\nperson literally sits down and designs an app architecture and\ninfrastructure, from the ground up, using the existing tools that\nthey are aware of, and then your ops team points out the\nadvantages and disadvantages of the proposed solution. Certainly,\nI think I would have benefited from such mentorship, even for an\nhour or two.</p>\n<!-- Head-In-The-Sand Also Works -->\n<p>It may be that your dev team and your ops team are completely separate\nand everybody’s happy. The devs write code, they push it, and then it\nmagically works in production and nobody has any issues. That’s\ncompletely fine. If anything, it would show that you have a very good\nprocess. In fact, that’s pretty much how I’ve worked for the past\neight years at this company.</p>\n<p>However, you could derive some benefit if your teams are having\ndifficulty communicating.</p>\n<p>Finally, the tools in the ops world aren't perfect, and they're made\nby us devs. If you have a hunch that you can do better than these\ntools, you should learn more about them, and you might be right.</p>\n<h2 id=\"what-we-do\">What we do</h2>\n<p>FP Complete is using a great number of these tools, and we're writing\nour own, too. If you'd like to know more, email us at\n<a href=\"mailto:[email protected]\">[email protected]</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/",
        "slug": "devops-for-developers",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps for (Skeptical) Developers",
        "description": null,
        "updated": null,
        "date": "2020-08-16",
        "year": 2020,
        "month": 8,
        "day": 16,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/devops-for-developers/",
        "components": [
          "blog",
          "devops-for-developers"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "on-being-skeptical",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#on-being-skeptical",
            "title": "On being skeptical",
            "children": []
          },
          {
            "level": 2,
            "id": "the-humble-app",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#the-humble-app",
            "title": "The humble app",
            "children": []
          },
          {
            "level": 2,
            "id": "disk-failures-are-not-that-common",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#disk-failures-are-not-that-common",
            "title": "Disk failures are not that common",
            "children": []
          },
          {
            "level": 2,
            "id": "auto-deployment-is-better-than-manual",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#auto-deployment-is-better-than-manual",
            "title": "Auto-deployment is better than manual",
            "children": []
          },
          {
            "level": 2,
            "id": "backups-become-worth-it",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#backups-become-worth-it",
            "title": "Backups become worth it",
            "children": []
          },
          {
            "level": 2,
            "id": "deployment-staging",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#deployment-staging",
            "title": "Deployment staging",
            "children": []
          },
          {
            "level": 2,
            "id": "packaging-with-docker-is-good",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#packaging-with-docker-is-good",
            "title": "Packaging with Docker is good",
            "children": []
          },
          {
            "level": 2,
            "id": "custodians-multiple-processes-are-useful",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#custodians-multiple-processes-are-useful",
            "title": "Custodians multiple processes are useful",
            "children": []
          },
          {
            "level": 2,
            "id": "kubernetes-provides-exactly-that",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#kubernetes-provides-exactly-that",
            "title": "Kubernetes provides exactly that",
            "children": []
          },
          {
            "level": 2,
            "id": "declarative-is-good-vendor-lock-in-is-bad",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#declarative-is-good-vendor-lock-in-is-bad",
            "title": "Declarative is good, vendor lock-in is bad",
            "children": []
          },
          {
            "level": 2,
            "id": "more-advanced-rollout",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#more-advanced-rollout",
            "title": "More advanced rollout",
            "children": []
          },
          {
            "level": 2,
            "id": "relationship-between-code-and-deployed-state",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#relationship-between-code-and-deployed-state",
            "title": "Relationship between code and deployed state",
            "children": []
          },
          {
            "level": 2,
            "id": "argocd",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#argocd",
            "title": "ArgoCD",
            "children": []
          },
          {
            "level": 2,
            "id": "infra-as-code",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#infra-as-code",
            "title": "Infra-as-code",
            "children": []
          },
          {
            "level": 2,
            "id": "where-the-dev-meets-the-ops",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#where-the-dev-meets-the-ops",
            "title": "Where the dev meets the ops",
            "children": []
          },
          {
            "level": 2,
            "id": "what-we-do",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#what-we-do",
            "title": "What we do",
            "children": []
          }
        ],
        "word_count": 2618,
        "reading_time": 14,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/",
            "title": "DevOps for (Skeptical) Developers"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/",
            "title": "DevOps: Unifying Dev, Ops, and QA"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
            "title": "An Istio/mutual TLS debugging story"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
            "title": "Deploying Rust with Windows Containers on Kubernetes"
          }
        ]
      },
      {
        "relative_path": "blog/our-history-containerization.md",
        "colocated_path": null,
"content": "<p>FP Complete has been working with containerization (or OS-level virtualization) since before it was popularized by Docker.  What follows is a brief history of how and why we got started using containers, and how our use of containerization has evolved as new technology has emerged.</p>\n<h2 id=\"brief-history\">Brief history</h2>\n<p>Our first foray into containerization started at the beginning of the company, when we were building a web-based integrated development environment for Haskell.  We needed a secure and cost-effective way to be able to compile and run Haskell code on the server side.  While giving each active user their own virtual machine with dedicated CPU and memory would have satisfied the first requirement (security), it would have been far from cost effective.  GHC, the de-facto standard Haskell compiler, is notoriously resource hungry, so the VM would have to be quite large (it's not uncommon to need 4 GB or more of RAM to compile a fairly straightforward piece of software).  We needed a way to share CPU and memory resources between multiple users securely and be able to shift load around a cluster of virtual machines to keep usage balanced and keep one heavy user from impacting the experience of other users on the same VM.  This sounds like a job for container orchestration!  Unfortunately, Docker didn't exist yet, let alone Kubernetes.  The state of the art for Linux containers at the time was LXC, which was mostly a collection of shell scripts that helped with using the Linux kernel features that underlie all Linux container solutions, but at a much lower level than Docker.  
On top of this we built everything we needed to distribute &quot;images&quot; of a base filesystem plus overlay for local changes, isolated container networks, and ability to shift load based on VM and container utilization -- that is, many of the things Docker and Kubernetes do now, but tailored specifically for our application's needs.</p>\n<p>When Docker came on the scene, we embraced it despite some early growing pains, since it was much easier to use and more general purpose than our &quot;bespoke&quot; system and we thought it likely that it would soon become a de-facto standard, which is exactly what happened.  For internal and customer solutions, Docker allowed us to create much more nimble and efficient deployment solutions that satisfied the requirement for immutable infrastructure.  Prior to Docker, we achieved immutability by building VM images and spinning up virtual machines; a much slower and heavier process than building a Docker image and running it on an already-provisioned VM.  This also allowed us to run multiple applications isolated from one another on a single VM without worry of interference with each other.</p>\n<p>Finally Kubernetes arrived.  While it was not the first orchestration platform, it was the first that wholeheartedly standardized on Docker containers.  Once again we embraced it, despite some early growing pains, due to its ease of use, multi-cloud support, fast pace of improvement, and backing of a major company (Google).  We once again bet that Kubernetes would become the de-facto standard, which is again exactly what happened.  With Kubernetes, instead of having to think about which VM a container would run on, we can have a cluster of general-purpose nodes and let the orchestrator worry about what runs on which node.  This lets us squeeze yet more efficiency out of our resources.  
Due to its ease of use and built-in support for common rollout strategies, we can give developers the ability to deploy their apps directly, and since it is so easy to tie into CI/CD pipelines we can drastically simplify automated deployment processes.</p>\n<p>Going forward, we continue to keep up with the latest developments in containerization and are constantly evaluating new and alternative technologies, to stay on the forefront of DevOps.</p>\n<h2 id=\"why-we-really-like-it\">Why we really like it</h2>\n<ul>\n<li>\n<p>Supports <a href=\"https://tech.fpcomplete.com/platformengineering/immutable-infrastructure/\">immutable infrastructure</a>.</p>\n</li>\n<li>\n<p>Fast build and deployment processes.</p>\n</li>\n<li>\n<p>Low overhead and efficient use of compute resources.</p>\n</li>\n<li>\n<p>Easy integration with CI/CD pipelines.</p>\n</li>\n<li>\n<p>Isolation of applications from others running on the same machine.</p>\n</li>\n<li>\n<p>Bundles dependencies with the application, so they can be tested together and there's no risk of deploying to an incorrect environment.</p>\n</li>\n<li>\n<p>Developers on various platforms can build and test the application in a consistent environment.</p>\n</li>\n</ul>\n<h2 id=\"limitations-of-the-technology\">Limitations of the technology</h2>\n<ul>\n<li>\n<p>Containers and container orchestration are most mature on Linux, although Docker and Kubernetes do now support running Windows containers on machines running Windows, and most modern server operating systems have support for some kind of containerization (but not necessarily Docker or Kubernetes).</p>\n</li>\n<li>\n<p>Containers and container orchestration add additional layers of abstraction and complexity.  This can, at times, make diagnosing problems more difficult.</p>\n</li>\n<li>\n<p>Legacy applications can be tricky to containerize since they assume they are running on a persistent machine rather than an ephemeral one.  
While this can be mitigated using persistent volumes, it makes the containerization strategy less straightforward.</p>\n</li>\n<li>\n<p>While properly configured containers are relatively secure, all containers running on a host share a single operating system kernel which means there is greater risk that a process can use a security vulnerability to &quot;break out&quot; of its container than when using VMs.</p>\n</li>\n</ul>\n<h2 id=\"resources\">Resources</h2>\n<p>From FP Complete:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/platformengineering/containerization/\">Introduction to Containerization concepts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/platformengineering/immutable-infrastructure/\">Introduction to Immutable Infrastructure concepts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/deploying_haskell_apps_with_kubernetes/\">Webinar: Deploying Haskell apps with Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Blog post: Deploying rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/02/immutability-docker-haskells-st-type/\">Blog post: Immutability, Docker, and Haskell's ST type</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/01/containerize-legacy-app/\">Blog post: Containerizing a legacy application: an overview</a></li>\n</ul>\n<p>From the web:</p>\n<ul>\n<li><a href=\"https://www.docker.com/resources/what-container\">What is a container?</a></li>\n<li><a href=\"https://www.docker.com/get-started\">Get started with Docker</a></li>\n<li><a href=\"https://kubernetes.io/docs/concepts/\">Kubernetes concepts</a></li>\n<li><a href=\"https://kubernetes.io/docs/setup/\">Getting started with Kubernetes</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
        "slug": "our-history-containerization",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Our history with containerization",
        "description": "FP Complete has a long history of working with containers, beginning before Docker existed and staying ahead of advances in the technology.",
        "updated": null,
        "date": "2020-08-13",
        "year": 2020,
        "month": 8,
        "day": 13,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops",
            "docker",
            "kubernetes"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/our-history-containerization/",
        "components": [
          "blog",
          "our-history-containerization"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "brief-history",
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/#brief-history",
            "title": "Brief history",
            "children": []
          },
          {
            "level": 2,
            "id": "why-we-really-like-it",
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/#why-we-really-like-it",
            "title": "Why we really like it",
            "children": []
          },
          {
            "level": 2,
            "id": "limitations-of-the-technology",
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/#limitations-of-the-technology",
            "title": "Limitations of the technology",
            "children": []
          },
          {
            "level": 2,
            "id": "resources",
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/#resources",
            "title": "Resources",
            "children": []
          }
        ],
        "word_count": 957,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cloud-deployment-models-advantages-and-disadvantages.md",
        "colocated_path": null,
        "content": "<p>In this post we show a couple of options when it comes to a cloud\ndeployment model. Depending on the needs of your organization, some\noptions may suit you better than others.</p>\n<h1 id=\"private-cloud\">Private Cloud</h1>\n<p>A private cloud is cloud infrastructure that only members of your organization\ncan utilize. It is typically owned and managed by the organization itself and\nis hosted on premises, but it could also be managed by a third party in a secure\ndata center. This deployment model is best suited for organizations that deal\nwith sensitive data and/or are required to uphold certain security standards by\nvarious regulations.</p>\n<p>Advantages:</p>\n<ul>\n<li>Organization specific</li>\n<li>High degree of security and level of control</li>\n<li>Ability to choose your resources (e.g., specialized hardware)</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Lack of elasticity and capacity to scale (bursts)</li>\n<li>Higher cost</li>\n<li>Requires a significant amount of engineering effort</li>\n</ul>\n<h1 id=\"public-cloud\">Public Cloud</h1>\n<p>Public cloud refers to cloud infrastructure that is hosted and\naccessed over the public internet. It provides a convenient way to\nburst and scale your project depending on usage and is typically\npay-per-use. Popular examples include <a href=\"https://aws.amazon.com\">Amazon AWS</a>,\n<a href=\"https://cloud.google.com/\">Google Cloud Platform</a> and <a href=\"https://azure.microsoft.com/\">Microsoft\nAzure</a>.</p>\n<p>Advantages:</p>\n<ul>\n<li>Scalability/Flexibility/Bursting</li>\n<li>Cost effective</li>\n<li>Ease of use</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Shared resources</li>\n<li>Operated by third party</li>\n<li>Unreliability</li>\n<li>Less secure</li>\n</ul>\n<h1 id=\"hybrid-cloud\">Hybrid Cloud</h1>\n<p>This type of cloud infrastructure assumes that you are hosting your system on\nboth private and public clouds. 
One use case might be a regulation requiring data\nto be stored in a locked-down private data center while the application's\nprocessing components run on the public cloud and talk to the private\ncomponents over a secure tunnel.</p>\n<p>Another example is hosting most of the system inside a private cloud and having\na clone of the system on the public cloud to allow for rapid scaling and\naccommodating bursts of new usage that would otherwise not be possible on the\nprivate cloud.</p>\n<p>Advantages:</p>\n<ul>\n<li>Cost effective</li>\n<li>Scalability/Flexibility</li>\n<li>Balance of convenience and security</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Same disadvantages as the public cloud</li>\n</ul>\n<h1 id=\"multi-cloud\">Multi-Cloud</h1>\n<p>This option is a variant of the hybrid cloud, but here we specifically mean\n&quot;using multiple public cloud providers&quot;. It is mostly used for mission-critical\nsystems that want to minimize downtime if a specific service on\na particular cloud goes down (e.g., the S3 outage of 2017 that took down a lot\nof web services with it). It is arguably the most advanced option and\nsacrifices convenience for security and reliability. It requires significant\nexpertise and engineering effort to get right since platforms differ, sometimes\nsubtly, in the types of resources and services they provide.</p>\n<p>When choosing a cloud deployment model, weigh the advantages and disadvantages of\neach option as it relates to your business objectives.</p>\n<p>If you liked this post, you may also like: <a href=\"https://tech.fpcomplete.com/blog/intro-to-devops-on-govcloud/\">Introduction to DevOps on AWS Gov Cloud</a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/",
        "slug": "cloud-deployment-models-advantages-and-disadvantages",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cloud Deployment Models: Advantages and Disadvantages",
        "description": "Choosing the correct Cloud Deployment Model is crucial. Discover the advantages and disadvantages of each and how to choose the best one for your organization.",
        "updated": null,
        "date": "2020-08-07T13:41:00Z",
        "year": 2020,
        "month": 8,
        "day": 7,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "blogimage": "/images/blog-listing/deployment.png"
        },
        "path": "/blog/cloud-deployment-models-advantages-and-disadvantages/",
        "components": [
          "blog",
          "cloud-deployment-models-advantages-and-disadvantages"
        ],
        "summary": null,
        "toc": [
          {
            "level": 1,
            "id": "private-cloud",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#private-cloud",
            "title": "Private Cloud",
            "children": []
          },
          {
            "level": 1,
            "id": "public-cloud",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#public-cloud",
            "title": "Public Cloud",
            "children": []
          },
          {
            "level": 1,
            "id": "hybrid-cloud",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#hybrid-cloud",
            "title": "Hybrid Cloud",
            "children": []
          },
          {
            "level": 1,
            "id": "multi-cloud",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#multi-cloud",
            "title": "Multi-Cloud",
            "children": []
          }
        ],
        "word_count": 486,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/understanding-cloud-auth.md",
        "colocated_path": null,
        "content": "<p>The topics of authentication and authorization usually appear simple but turn out to hide significant complexity. That's because, at its core, auth is all about answering two questions:</p>\n<ul>\n<li>Who are you?</li>\n<li>What are you allowed to do?</li>\n</ul>\n<p>However, the devil is in the details. Seasoned IT professionals, software developers, and even typical end users are fairly accustomed at this point to many of the most common requirements and pain points around auth.</p>\n<p>Cloud authentication and authorization is not drastically different from non-cloud systems, at least in principle. However, there are a few things about the cloud and its common use cases that introduce some curve balls:</p>\n<ul>\n<li>As with most auth systems, cloud providers each have their own idiosyncrasies</li>\n<li>Cloud auth systems have almost always been designed from the outset to be API-first and to interact with popular web technologies</li>\n<li>Security is usually taken very seriously in the cloud, leading to workflows arguably more complex than in other systems</li>\n<li>Cloud services themselves typically need some method to authenticate to the cloud, e.g. a virtual machine gaining access to private blob storage</li>\n<li>Many modern DevOps tools are commonly deployed to cloud systems, and introduce extra layers of complexity and indirection</li>\n</ul>\n<p>This blog post series is going to cover the full picture of authentication and authorization from a cloud mindset. There is significant overlap with non-cloud systems in this, but we'll be covering those details as well to give a complete picture. Once we have those concepts and terms in place, we'll be ready to tackle the quirks of individual cloud providers and commonly used tooling.</p>\n<h2 id=\"goals-of-authentication\">Goals of authentication</h2>\n<p>We're going to define authentication as proving your identity to a service provider. 
A service provider can be anything from a cloud provider offering virtual machines, to your webmail system, to a bouncer at a bar who has your name on a list. The identity is an equally flexible concept, and could be &quot;my email address&quot; or &quot;my user ID in a database&quot; or &quot;my full name.&quot;</p>\n<p>To help motivate the concepts we'll be introducing, let's understand what goals we're trying to achieve with typical authentication systems.</p>\n<ul>\n<li>Allow a user to prove who he/she is</li>\n<li>Minimize the number of passwords a user has to memorize</li>\n<li>Minimize the amount of work IT administrators have to do to create new user accounts, maintain them, and ultimately shut them down\n<ul>\n<li>That last point is especially important; no one wants the engineer who was just fired to still be able to authenticate to one of the systems</li>\n</ul>\n</li>\n<li>Provide security against common attack vectors, like compromised passwords or lost devices</li>\n<li>Provide a relatively easy-to-use method for user authentication</li>\n<li>Allow a computer program/application/service (let's call all of these apps) to prove what it is</li>\n<li>Provide a simple way to allocate, securely transmit, and store credentials necessary for those proofs</li>\n<li>Ensure that credentials can be revoked when someone leaves a company or an app is no longer desired (or is compromised)</li>\n</ul>\n<h2 id=\"goals-of-authorization\">Goals of authorization</h2>\n<p>Once we know the identity of something or someone, the next question is: what are they allowed to do? That's where authorization comes into play. 
A good authorization system provides these kinds of features:</p>\n<ul>\n<li>Fine-grained control, when necessary, of who can do what</li>\n<li>Ability to grant common sets of permissions as a bundle, avoiding tedium and mistakes</li>\n<li>A centralized collection of authorization rules</li>\n<li>Ability to revoke a permission, and see that change propagated quickly to multiple systems</li>\n<li>Ability to delegate permissions from one identity to another\n<ul>\n<li>For example: if I'm allowed to read a file on some cloud storage server, it would be nice if I could let my mail client do that too, without the mail program pretending it's me</li>\n</ul>\n</li>\n<li>To avoid mistakes, it would be nice to assume a smaller set of permissions when performing some operations\n<ul>\n<li>For example: as a super user/global admin/root user, I'd like to be able to say &quot;I don't want to accidentally delete system files right now&quot;</li>\n</ul>\n</li>\n</ul>\n<p>In simple systems, the two concepts of authentication and authorization are straightforward. For example, on a single-user computer system, my username would be my identity, I would authenticate using my password, and as that user I would be authorized to do anything on the computer system.</p>\n<p>However, most modern systems end up with many additional layers of complexity. Let's step through what some of these concepts are.</p>\n<h2 id=\"users-and-policies\">Users and policies</h2>\n<p>A basic concept of authentication would be a <em>user</em>. This typically would refer to a real human being accessing some service. Depending on the system, they may use identifiers like usernames or email addresses. User accounts are often given to non-human actors, like automated processes or Continuous Integration (CI) jobs. However, most modern systems would recommend using a service account (discussed below) or similar instead.</p>\n<p>Sometimes, the user is the end of the story. 
When I log into my personal Gmail account, I'm allowed to read and write emails in that account. However, when dealing with multiuser shared systems, some form of permissions management comes along as well. Most cloud providers have a robust and sophisticated set of policies, where you can specify fine-grained individual permissions within a policy.</p>\n<p>As an example, with AWS, the S3 file storage service provides an array of individual actions from the obvious (read, write, and delete an object) to the more obscure (like setting retention policies on an object). You can also specify which files can be affected by these permissions, allowing a user to, for example, have read and write access in one directory, but read-only access in another.</p>\n<p>Managing all of these individual permissions each time for each user is tedious and error-prone. It makes it difficult to understand what a user can actually do. Common practice is to create a few policies across your organization, and assign them appropriately to each user, trying to minimize the number of permissions granted.</p>\n<h2 id=\"groups\">Groups</h2>\n<p>Within the world of authorization, groups are a natural extension of users and policies. Odds are you'll have multiple users and multiple policies. And odds are that groups of users will need similar sets of policy documents. You <em>could</em> create a large master policy that encompasses the smaller policies, but that could be difficult to maintain. You could also apply each individual policy document to each user, but that's difficult to keep track of.</p>\n<p>Instead, with groups, you can assign multiple policies to a group, and multiple groups to a user. 
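The policies-and-groups model just described can be sketched in a few lines of Python. This is a minimal illustration; the policy names and permission strings are hypothetical, not any cloud provider's actual API:

```python
# Minimal sketch of policies, groups, and users.
# All names and permission strings are hypothetical.
from typing import Dict, Set

# A policy is a named set of fine-grained permissions.
policies: Dict[str, Set[str]] = {
    "ReadReports":  {"s3:GetObject:reports/*"},
    "WriteReports": {"s3:GetObject:reports/*", "s3:PutObject:reports/*"},
}

# A group bundles several policies; a user belongs to several groups.
groups: Dict[str, Set[str]] = {
    "Analysts": {"ReadReports"},
    "Admins":   {"ReadReports", "WriteReports"},
}
users: Dict[str, Set[str]] = {"alice": {"Analysts"}, "bob": {"Admins"}}

def effective_permissions(user: str) -> Set[str]:
    """Union of all permissions granted via the user's groups."""
    return {perm
            for group in users[user]
            for policy in groups[group]
            for perm in policies[policy]}
```

Because a user's effective permissions are just the union over their groups' policies, keeping the number of distinct policies small is what makes the system easy to audit.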
If you have a billing team that needs access to the billing dashboard, plus the list of all users in the system, you may have a <code>BillingDashboard</code> policy as well as a <code>ListUsers</code> policy, and assign both policies to a <code>BillingTeam</code> group. You may then also assign the <code>ListUsers</code> policy to the <code>Operators</code> group.</p>\n<h2 id=\"roles\">Roles</h2>\n<p>There's a downside to the policies-and-groups setup described above. Even if I'm a superadmin on my cloud account, I may not want to have the responsibility of all those powers at all times. It's far too easy to accidentally destroy vital resources like a database server. Often, we would like to artificially limit our permissions while working with a service.</p>\n<p>Roles allow us to do this. With roles, we create a named role for some set of operations, assign a set of policies to it, and provide some way for users to <em>assume</em> that role. When you assume that role, you can perform actions using that set of permissions, but audit trails will still be able to trace back to the original user who performed the actions.</p>\n<p>Arguably a cloud best practice is to grant users only enough permissions to assume various roles, and otherwise leave them unable to perform any meaningful actions. This forces a higher level of stated intent when interacting with cloud APIs.</p>\n<h2 id=\"service-accounts\">Service accounts</h2>\n<p>Some cloud providers and tools support the concept of a service account. While users <em>can</em> be used for both real human beings and services, there is often a mismatch. For example, we typically want to enable multi-factor authentication on real user accounts, but alternative authentication schemes on services.</p>\n<p>One approach to this is service accounts. 
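The role-assumption flow described above can be sketched as well. This is a hypothetical model, not a real provider API (AWS, for instance, exposes the idea through STS AssumeRole); the point is that the assumed session carries a smaller permission set while the audit log still records the original user:

```python
# Minimal sketch of assuming a role: the caller temporarily acts with the
# role's (usually smaller) permission set, while the audit log still
# records the original identity. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Session:
    user: str                  # original identity, kept for auditing
    permissions: Set[str]      # permissions currently in effect
    audit_log: List[Tuple[str, str]] = field(default_factory=list)

    def assume_role(self, role_permissions: Set[str]) -> "Session":
        """Return a new session restricted to the role's permissions,
        sharing the audit log and keeping the original user identity."""
        return Session(user=self.user,
                       permissions=set(role_permissions),
                       audit_log=self.audit_log)

    def perform(self, action: str) -> bool:
        """Record the attempt, then report whether it is permitted."""
        self.audit_log.append((self.user, action))
        return action in self.permissions

admin = Session(user="alice", permissions={"db:Read", "db:Delete"})
read_only = admin.assume_role({"db:Read"})  # deliberately drop db:Delete
```

Even though `read_only` can no longer delete anything, every entry in its audit log still names `alice`, which is exactly the traceability property roles are meant to provide.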
Service accounts vary among different providers, but typically allow defining some kind of service, receiving some secure token or password, and assigning either roles or policies to that service account.</p>\n<p>In some cases, such as Amazon's EC2, you can assign roles directly to cloud machines, allowing programs running on those machines to easily and securely assume those roles, without needing to store any kinds of tokens or secrets. This concept nicely ties in with roles for users, making role-based management of both users and services an emerging best practice in industry.</p>\n<h2 id=\"rbac-vs-acl\">RBAC vs ACL</h2>\n<p>The system described above is known as Role-Based Access Control, or RBAC. Many people are likely familiar with the related concept known as Access Control Lists, or ACLs. With ACLs, administrators typically have more work to do, specifically managing large numbers of resources and assigning users to each of those per-resource lists. Using groups or roles significantly simplifies the job of the operator, and reduces the likelihood of misapplied permissions.</p>\n<h2 id=\"single-sign-on\">Single sign-on</h2>\n<p>Most modern DevOps platforms have multiple systems, each requiring separate authentication. For example, in a modern Kubernetes-based deployment, you're likely to have:</p>\n<ul>\n<li>The underlying cloud vendor\n<ul>\n<li>Both command line and web-based access</li>\n</ul>\n</li>\n<li>Kubernetes itself\n<ul>\n<li>Both command line access and the Kubernetes Dashboard</li>\n</ul>\n</li>\n<li>A monitoring dashboard</li>\n<li>A log aggregation system</li>\n<li>Other company-specific services</li>\n</ul>\n<p>That's in addition to maintaining a company's standard directory, such as Active Directory or G Suite. Maintaining this level of duplication among user accounts is time-consuming, costly, and dangerous. 
Furthermore, while it's reasonable to securely lock down a single account via MFA and other mechanisms, expecting users to maintain such information for all of these systems securely is unreasonable. And some of these systems don't even provide such security mechanisms.</p>\n<p>Instead, single sign-on provides a standards-based, secure, and simple method for authenticating to these various systems. In some cases, user accounts still need to be created in each individual system. In those cases, automated user provisioning is ideal. We'll talk about some of that in later posts. In other cases, like AWS's identity provider mechanism, it's possible for temporary identifiers to be generated on-the-fly for each SSO-based login, with roles assigned.</p>\n<p>Deeper questions arise about where permissions management is handled. Should the central directory, like Active Directory, maintain permissions information for all systems? Should a single role in the directory represent permissions information in all of the associated systems? Should a separate set of role mappings be maintained for each service?</p>\n<p>Typically, organizations end up including some of each, depending on the functionality available in the underlying tooling, and organizational discretion on how much information to include in a directory.</p>\n<h2 id=\"going-deeper\">Going deeper</h2>\n<p>What we've covered here sets the stage for understanding many cloud-specific authentication and authorization schemes. Going forward, we're going to take a look at common auth protocols, followed by a review of specific cloud providers and tools, specifically AWS, Azure, and Kubernetes.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/",
        "slug": "understanding-cloud-auth",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Understanding cloud auth",
        "description": "Authentication and authorization are a core component to any secure system. In this overview post, we will begin analyzing common patterns in cloud auth",
        "updated": null,
        "date": "2020-07-29",
        "year": 2020,
        "month": 7,
        "day": 29,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/understanding-cloud-auth/",
        "components": [
          "blog",
          "understanding-cloud-auth"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "goals-of-authentication",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#goals-of-authentication",
            "title": "Goals of authentication",
            "children": []
          },
          {
            "level": 2,
            "id": "goals-of-authorization",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#goals-of-authorization",
            "title": "Goals of authorization",
            "children": []
          },
          {
            "level": 2,
            "id": "users-and-policies",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#users-and-policies",
            "title": "Users and policies",
            "children": []
          },
          {
            "level": 2,
            "id": "groups",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#groups",
            "title": "Groups",
            "children": []
          },
          {
            "level": 2,
            "id": "roles",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#roles",
            "title": "Roles",
            "children": []
          },
          {
            "level": 2,
            "id": "service-accounts",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#service-accounts",
            "title": "Service accounts",
            "children": []
          },
          {
            "level": 2,
            "id": "rbac-vs-acl",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#rbac-vs-acl",
            "title": "RBAC vs ACL",
            "children": []
          },
          {
            "level": 2,
            "id": "single-sign-on",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#single-sign-on",
            "title": "Single sign-on",
            "children": []
          },
          {
            "level": 2,
            "id": "going-deeper",
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#going-deeper",
            "title": "Going deeper",
            "children": []
          }
        ],
        "word_count": 1863,
        "reading_time": 10,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
            "title": "Deploying Rust with Windows Containers on Kubernetes"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-deployments/",
            "title": "Understanding Cloud Software Deployments"
          },
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/security/",
            "title": "Security in a DevOps World"
          }
        ]
      },
      {
        "relative_path": "blog/understanding-devops-roles-and-responsibilities.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/understanding-devops-roles-and-responsibilities/",
        "slug": "understanding-devops-roles-and-responsibilities",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Understanding DevOps Roles and Responsibilities",
        "description": "Companies are implementing DevOps at an increasingly rapid rate. Discover the roles and responsibilities and how to implement DevOps into your latest project.",
        "updated": null,
        "date": "2020-07-24T13:12:00Z",
        "year": 2020,
        "month": 7,
        "day": 24,
        "taxonomies": {
          "categories": [
            "insights",
            "devops"
          ],
          "tags": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "html": "hubspot-blogs/understanding-devops-roles-and-responsibilities.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/understanding-devops-roles-and-responsibilities/",
        "components": [
          "blog",
          "understanding-devops-roles-and-responsibilities"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/preparing-for-cloud-computing-trends.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/preparing-for-cloud-computing-trends/",
        "slug": "preparing-for-cloud-computing-trends",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Preparing for Upcoming Cloud Computing Trends",
        "description": "Cloud Computing is growing at a rate 7 times faster than the rest of IT with no signs of slowing in the coming years. Discover all the trends businesses should be preparing for in order to succeed in 2020 and beyond. ",
        "updated": null,
        "date": "2020-07-24T11:05:00Z",
        "year": 2020,
        "month": 7,
        "day": 24,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "insights",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "html": "hubspot-blogs/preparing-for-cloud-computing-trends.html",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/preparing-for-cloud-computing-trends/",
        "components": [
          "blog",
          "preparing-for-cloud-computing-trends"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cloud-preparation-checklist.md",
        "colocated_path": null,
        "content": "<p>While moving to the cloud brings many benefits, we\nalso need to be aware of the pain points associated with such a move.\nThis post will discuss those pain points, provide ways to mitigate them, and\ngive you a checklist which can be used if you plan to migrate your\napplications to the cloud. We will also discuss the advantages of\nmoving to the cloud.</p>\n<h2 id=\"common-pain-points\">Common pain points</h2>\n<p>One of the primary pain points in moving to the cloud is selecting the\nappropriate tools for a specific use case. We have an abundance of tools\navailable, with many solving the same problem in different ways. To give\nyou a basic idea, this is the CNCF's (Cloud Native Computing\nFoundation) recommended path through the cloud native technologies:</p>\n<img src=\"/images/insights/cloud-prep-checklist/landscape.png\" alt=\"Cloud Native Landscape\" title=\"Cloud Native Landscape\" width=\"100%\">\n<p></p>\n<p>Picking the right tool is hard, and this is where having experience\nwith them comes in handy.</p>\n<p>Also, the existing knowledge of on-premises data centers may not be\ndirectly transferable when you plan to move to the cloud. An individual might\nhave to undergo basic training to understand the terminology and the\nconcepts used by a particular cloud vendor. An on-premises system\nadministrator might be used to setting up firewalls via\n<a href=\"https://en.wikipedia.org/wiki/Iptables\">Iptables</a>, but they might also\nwant to consider using <a href=\"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html\">Security\ngroups</a>\nto accomplish the same goals in the AWS ecosystem (for EC2 instances).</p>\n<p>Another point to consider while moving to the cloud is how easily you\ncan get locked in to a single vendor. 
You might start using\nAmazon's <a href=\"https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html\">Auto Scaling\nGroups</a>\nto automatically handle the load of your application. But when you plan\nto switch to another cloud vendor, the migration might not be\nstraightforward. Switching between cloud services isn't easy, and if you want portability, you\nneed to make sure that your applications are built with a multi-cloud\nstrategy. This will allow you to easily switch between vendors if such a\nscenario arises. Taking advantage of containers and Kubernetes may give\nyou additional flexibility and ease portability between different cloud\nvendors.</p>\n<h2 id=\"advantages-of-moving\">Advantages of moving</h2>\n<p>Despite the pain points listed above, there are many advantages involved in\nmoving your applications to the cloud. Note that even a big media services\nprovider like\n<a href=\"https://netflixtechblog.com/four-reasons-we-choose-amazons-cloud-as-our-computing-platform-4aceb692afec\">Netflix</a>\nhas moved to the cloud instead of building and managing their own\ndata center solution.</p>\n<h3 id=\"cost\">Cost</h3>\n<p>One of the primary advantages of leveraging the cloud is avoiding\nthe cost of building your\nown data center. Building a secure data center is not trivial. By\noffloading this activity to an external cloud provider, you can instead build your\napplications on top of the infrastructure provided by them. This not\nonly saves the initial capital expenditure but also avoids headaches from\nreplacing hardware, such as failing network switches. But note that\nswitching to the cloud will not magically reduce costs. 
Depending on your\napplication's architecture and workload, you have to be aware of the\nchoices you make and ensure that they are cost efficient.</p>\n<h3 id=\"uptime\">Uptime</h3>\n<p>Cloud vendors provide SLAs (Service Level Agreements) where they state\ninformation about uptime and the guarantees they make. This is a\nsnapshot from the Amazon Compute SLA:</p>\n<p><img src=\"/images/insights/cloud-prep-checklist/sla.png\" alt=\"SLA\" title=\"SLA\" /></p>\n<p>All major cloud providers have historically provided excellent uptime,\nespecially for applications that properly leverage availability zones.\nBut depending on your specific\nuse case and application, you should define the acceptable uptime for your\napplication and make sure that your SLA matches it. Also, depending\non the requirements, you can architect your application such that it has\nmulti-region deployments to provide better uptime in case there is an\noutage in one region.</p>\n<h3 id=\"security-and-compliance\">Security and Compliance</h3>\n<p>Cloud deployments provide an extra benefit when working in regulated industries\nor with government projects. In many cases, cloud vendors provide regulation-compliant\nhardware.\nBy using cloud providers, we can take advantage of the various\ncompliance standards (e.g., HIPAA, PCI) they meet.\nValidating an on-premises data center against such standards can be a time consuming,\nexpensive process. Relying on already validated hardware can be faster, cheaper, easier,\nand more reliable.</p>\n<p>Broadening the security topic, cloud vendors typically also provide\na wide range of additional security tools.</p>\n<p>Despite these boons,\nproper care must still be taken, and best practices must still be followed,\nto deploy an application securely.\nAlso, be aware that running on compliant hardware does not automatically\nensure compliance of the software. 
Code and infrastructure must still meet\nvarious standards.</p>\n<h3 id=\"ease-of-scaling\">Ease of scaling</h3>\n<p>With cloud providers, you can easily add and remove machines or add more\npower (RAM, CPU, etc.) to them. The ease with which you can horizontally and\nvertically scale your application without worrying about your\ninfrastructure is powerful, and can revolutionize how you approach\nhardware allocation. As your application's load increases,\nyou can easily scale up in a few minutes.</p>\n<p>One of the perhaps surprising benefits of this is that you don't need to\npreemptively scale up your hardware. Many cloud deployments are able\nto reduce the total compute capacity available in a cluster, relying\non the speed of cloud providers to scale up in response to increases in demand.</p>\n<h3 id=\"focus-on-problem-solving\">Focus on problem solving</h3>\n<p>Freed from maintaining an on-premises data center, you can\ninstead put your effort into your application and the problem it solves.\nThis allows you to focus on your core business problems and your\ncustomers.</p>\n<p>While not a technical consideration, cloud providers also run\nenergy-efficient data centers. As a case study,\n<a href=\"https://cloud.google.com/blog/topics/google-cloud-next/our-heads-in-the-cloud-but-were-keeping-the-earth-in-mind\">Google even uses machine learning technology to make its data centers\nmore\nefficient</a>.\nHence, running your applications in the cloud may also be the\nbetter environmental decision.</p>\n<h2 id=\"getting-ready-for-cloud\">Getting ready for Cloud</h2>\n<p>Once you are ready to migrate to the cloud, you can plan the next\nsteps and initiate the process. 
We use the following general checklist,\nwhich we tailor to each client's requirements:</p>\n<h3 id=\"checklist\">Checklist</h3>\n<ul>\n<li>Make a list of your applications and dependencies which need to be\nmigrated.</li>\n<li>Benchmark your applications to establish cloud performance\nKPIs (Key Performance Indicators).</li>\n<li>List any compliance requirements for your\napplication and plan how to meet them.</li>\n<li>Onboard relevant team members to the cloud service's user management\nsystem, ideally integrating with existing user directories and\nleveraging features like single sign-on and automated user provisioning.</li>\n<li>Establish access controls to your cloud service, relying on role-based\nauthorization techniques.</li>\n<li>Evaluate your migration options. You might want to re-architect an\napplication to take advantage of cloud-native technologies. Or you might simply\ndecide to shift the existing application without any changes.</li>\n<li>Create your migration plan in a Runbook.</li>\n<li>Have a rollback plan in case migration fails.</li>\n<li>Test your migration and rollback plans in a separate environment.</li>\n<li>Communicate about the migration to internal stakeholders and customers.</li>\n<li>Execute your cloud migration.</li>\n<li>Prune your on-premises infrastructure.</li>\n<li>Optimize your cloud infrastructure for your workloads.</li>\n</ul>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hope we were able to present the challenges involved in\nmigrating to the cloud and how to prepare for them. We have helped various\ncompanies with migrations and other DevOps services. Feel free to <a href=\"https://www.fpcomplete.com/contact-us/\">reach out to\nus</a> with any questions on\ncloud migrations or any of our other services.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/",
        "slug": "cloud-preparation-checklist",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cloud preparation checklist",
        "description": "Considering a move to the cloud? Read up on cloud advantages, common pain points, and our recommended step by step process",
        "updated": null,
        "date": "2020-07-22",
        "year": 2020,
        "month": 7,
        "day": 22,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Sibi Prabakaran",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/cloud-preparation-checklist/",
        "components": [
          "blog",
          "cloud-preparation-checklist"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "common-pain-points",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#common-pain-points",
            "title": "Common pain points",
            "children": []
          },
          {
            "level": 2,
            "id": "advantages-of-moving",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#advantages-of-moving",
            "title": "Advantages of moving",
            "children": [
              {
                "level": 3,
                "id": "cost",
                "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#cost",
                "title": "Cost",
                "children": []
              },
              {
                "level": 3,
                "id": "uptime",
                "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#uptime",
                "title": "Uptime",
                "children": []
              },
              {
                "level": 3,
                "id": "security-and-compliance",
                "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#security-and-compliance",
                "title": "Security and Compliance",
                "children": []
              },
              {
                "level": 3,
                "id": "ease-of-scaling",
                "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#ease-of-scaling",
                "title": "Ease of scaling",
                "children": []
              },
              {
                "level": 3,
                "id": "focus-on-problem-solving",
                "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#focus-on-problem-solving",
                "title": "Focus on problem solving",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "getting-ready-for-cloud",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#getting-ready-for-cloud",
            "title": "Getting ready for Cloud",
            "children": [
              {
                "level": 3,
                "id": "checklist",
                "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#checklist",
                "title": "Checklist",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 1276,
        "reading_time": 7,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/devops-security-and-privacy-strategies.md",
        "colocated_path": null,
        "content": "<p>DevOps Security and Privacy—FP Complete’s\ncomprehensive, easy to understand guide designed\nto help you understand why they’re so critical to\nthe safety of your DevOps strategy.</p>\n<p>The following is a transcription of a live\nwebinar given by <a href=\"https://tech.fpcomplete.com/\">FP Complete</a>\nFounder and Chairman Aaron Contorer, on\n<a href=\"https://www.youtube.com/user/FPComplete\">FP Complete's YouTube Channel</a>.</p>\n<h2 id=\"introducing-aaron\">Introducing Aaron</h2>\n<p>I’m the Founder and Chairman of <a href=\"https://tech.fpcomplete.com/\">FP Complete</a>,\nwhere we help companies use state-of-the-art tools and\ntechniques to produce secure, lightning-fast,\nfeature-rich software, faster and more often.</p>\n<p>Before founding FP Complete, I was an\nexecutive at Microsoft, where I served as program\nmanager for distributed systems, and general\nmanager of Visual C++, the leading software\ndevelopment tool at that time. Also, I \narchitected MSN’s move to Internet-based server\nsoftware, served as the full-time technology\nadviser to Bill Gates, and I founded and ran the\ncompany’s Productivity Tools Team for complex\nsoftware engineering projects.</p>\n<p>Okay, so enough about me. Let’s begin this\ndiscussion by recognizing our industry’s\nunfortunate—but preventable—reality:</p>\n<h2 id=\"breaches-are-happening-far-too-often\">Breaches are happening far too often</h2>\n<p>We all know how bad the state of the\nworld is within security and privacy\nright now. Projects are getting very\ncomplicated. And I—just as a sample—want\nto point out that this is a very typical\nbreach. Monzo said that for six months\nunauthorized people had access to\npeople’s secret code numbers, their pin\nnumbers. I’m not singling them out at\nall, but rather saying… “This is very\ntypical.” They’re a bank, and they\ncompromised this type of data for months\nand months.</p>\n<p>How does it happen? 
It’s not only\nbecause of logging and monitoring not\nbeing in place, although that can be a\nbig factor. It’s because of complexity.\nHonestly, we’re all trying very hard to\ndo our jobs, but users keep asking and\nexecutives keep asking for new features.\nAnd that integration just creates point\nafter point where problems can happen,\nand things get overlooked.</p>\n<h2 id=\"opportunities-for-penetration-are-everywhere\">Opportunities for penetration are everywhere</h2>\n<p>I would argue that today’s\napplications are more about assembling\nbuilding blocks than they are about just\nwriting new code. But every time you\nincrease that complexity by adding more\nbuilding blocks, you increase the number\nof interface points between\ncomponents—the number of places where\nsomebody might have done something wrong.\nAnd so we’re really creating a system of\nentry points between component A and\ncomponent B. But entry points—that sounds\nlike something I would compromise if I\nwere a security violator, right?\nFurthermore, we’re manually configuring\nour systems. People aren’t using\ncontinuous deployment. And so there is\nsome wizard who’s supposed to go set up\nthe latest server or integrate it with\nthe database or integrate it with the web\nwith a firewall or whatever they’re\nsupposed to do. Every manual step creates\nfurther opportunities for penetration,\nfor defects, because people are\nimperfect. Even the best person on your\nteam doing a process a hundred times\nmight do it wrong one or two times. An\nautomated scanner is going to find that\none time, and it’s going to break into your\nsystem before you know it.</p>\n<h2 id=\"let-s-talk-devsecops\">Let’s talk DevSecOps</h2>\n<p>DevSecOps—DevOps with security stuck\nright in the middle. And I think that’s a\ngood way of looking at this problem. 
We\nwant to integrate all the different parts\nof our engineering into one pool of\nautomation, and include security and\nquality assurance as part of that\nautomated process. We talked earlier\nabout automated testing being part of our\nbuilds. But we want to go much farther\nthan that, as technical teams. We want to\nstart from the beginning of our projects,\ntalking about how secure they need to be.\nWhat are the risks that they’re supposed\nto defend against or not create? We\nwant every member of the team to\nunderstand that system downtime—because\nsomebody broke in and trashed it, or, even\nworse, privacy violations which you can\nnever undo, because when people’s\npersonal information has been published,\nyou can’t unpublish it—we need to let our\nteam members know that these are\npriorities and put them on the to-do list\nfor the project. And we can’t call\nsomething done if the security part isn’t\ndone. It’s not something we tack on at\nthe end. We don’t build unsecured,\ncrazy, poorly architected apps, and then\nat the end, ask someone to build a brick\nwall around them. Because as soon as one\nlittle person gets through the brick\nwall, it’s open season. So, we want the\nengineers to know everything they do\nshould be checked for security. That’s a\nculture change to say that it’s\neveryone’s job.</p>\n<p>We need to integrate quality assurance\nwith security, which means somebody is\nchecking the software we wrote for\nweaknesses; somebody is trying to break\nin or, at least, trying to run tools that\nwill show us common ways to break in and\nwhether they’re present.</p>\n<p>And we need to inspect our cloud\nsystems that are running to make sure\nthat our deployment, and our system\noperations and administration, is as\nsecure as we meant it to be. Did somebody\nomit a step? We want to discover that\nright away and fix it. 
Or, ideally,\nautomate the way we set up all of our\nsystems using, for example, an\norchestration software package to\nautomatically configure our servers, so\nit isn’t the case that, late in the day,\npeople are more likely to make a mistake.\nWell-written scripts do just as\ngood a job even when people are tired.</p>\n<p>And we want to make sure that all of\nour systems are updated and patched, and\nnot tell people that security is a waste\nof time and they should get back to work\non features.</p>\n<h2 id=\"process-tips\">Process tips</h2>\n<p>To do all this, we need to have a\nsimple design. And I would encourage\npeople to focus on the idea that\nsimplicity and modular design are great\nways to make a system easier to check for\nsecurity holes.</p>\n<p>We want to make sure that credentials\nthat are used in our modular\nsystems—where one piece of software is\nlogging into another service or another\npiece of software, like a database—are kept in\nproperly secured credential storage. A\ncommon form of security violation is you\nlook at somebody’s source code and… Oh\nlook! There’s the password for the\ndatabase server right there …because the\napp had to connect to the server. That’s\ninappropriate design. There are special\ncredential storage services—your team\nshould use them.</p>\n<p>And we want to make sure that quality\ncontrol remains central to our culture,\nas developers of software, and that\nincludes DevOps, that includes system\nadministration. Too often, we have a good\npiece of software, and then it’s deployed\nincorrectly. And that’s where the problem\noccurs. So if you’re going to test\nwhether your code is written properly,\nmaybe also test whether the servers are\nconfigured properly, from time to time.\nIt’s time well spent.</p>\n<h2 id=\"how-to-strengthen-your-security\">How to strengthen your security</h2>\n<p>So how can you move forward on\nsecurity? 
The good news is, while it may\nsound like a scary and intimidating area,\nthere are lots of practical steps you can\ntake right now, and you don’t even have\nto take them all at the same time; you\ncan take them incrementally. Here are\nsome great steps, though, that I highly\nrecommend.</p>\n<p>One is that—in your engineering team,\nand if you have multiple teams—in each\nengineering team somebody is explicitly\nthe security person. Somebody knows that\nit’s their job to keep an eye out for\nsecurity issues and prevention, and that\nif there’s a problem they’re the person\nwho’s going to hear about it. They should\nhave the power to look into anything they\nneed to make sure there isn’t a security\nhole in the system.</p>\n<p>Use best practices from other\ncompanies. This is a great idea\nthroughout all of DevOps, including\nDevSecOps. You don’t have to reinvent\nanything. You can learn best practices\nand get a checklist together of what\nother companies have found helpful to\nlook for to find opportunities to secure\nyour system incrementally. We just piece\nby piece chip away at the risks that are\npresent in our systems. We don’t have to\nwait until some magic day when all of\nsecurity happens at once.</p>\n<p>Teach your people about security. A\nlot of security problems happen because\none person didn’t realize… Who didn’t\nknow that you’re supposed to not put\npasswords in the source code where\neveryone can see them? Well, one person\ntyped a password into the source code,\nbut now it’s there for everyone. So be\nsure that security training—how\nimportant it is, and how to do it—is\navailable to everyone on your team. And\nmake sure that there’s a checklist. Who\ntook the security training? Who’s not\nbeen to security training yet?</p>\n<p>Scary but true fact: According to\nPricewaterhouseCoopers, if\nyou want to be a normal IT operation, you should be\nspending 11 to 15% of your IT budget on\nsecurity overall. That’s a significant\nnumber. 
And I think we can all agree that\nwith more internet-facing work and more\nimporting of modules, we, if\nanything, could be worried that that\nnumber is going to go up. So automation\nthrough DevOps is really a way to keep a\nlid on that number. But I wouldn’t think\nof it as a way to drive that number\ndown towards zero. Security is everyone’s\njob, and it’s going to remain that\nway.</p>\n<p>Beyond that, I’d say use the\nother techniques we talked about earlier\nin this presentation. You don’t have to\nbe the next Equifax, with no\nmonitoring. You don’t have to allow silly\nmistakes by having no automation. And you\ndon’t have to create more security holes\nby reinventing your own tools and\nprocesses instead of reusing components. Reuse is your\nfriend.</p>\n<h2 id=\"7-tech-ideas-you-can-start-now\">7 tech ideas you can start now</h2>\n<p>I won’t spend too long on this, but I\nwanted this for people who are more\nhands-on or the people who are\nsupervising hands-on engineers. These are\nsome practical steps that you can take to\nstart turning on pieces of security,\nright now. Every one of these—except\nperhaps service-oriented architecture—is\nsomething that you could literally task\nsomebody to do this week or next\nweek.</p>\n<p>These are straightforward tasks.</p>\n<ol>\n<li>Ensure all databases have firewalls on them. 
They’re a common data breach source!</li>\n<li>Use a password manager to generate secure passwords; enable two-factor authentication.</li>\n<li>Use roles and policies to assign specific permissions to users and services instead of running everything from root credentials or privileged users.</li>\n<li>Use bastion hosts or VPNs to limit access to internal machines.</li>\n<li>Use service-oriented architecture (SOA) to break off components that need high privilege.</li>\n<li>Include code analysis tools in the dev process and enforce fixes prior to deployment.</li>\n<li>Test your servers with automated scanners for break-in vulnerabilities.</li>\n</ol>\n<h2 id=\"fast-to-market-reliable-and-secure\">Fast to market, reliable, and secure</h2>\n<p>It’s a winning formula!</p>\n<p>So, in short, you have a choice: turn\non DevOps and use a lot of technology, best\npractices, and engineering techniques that\nhave already been solved and tested at\nnumerous other companies—clients of ours,\nfamous internet companies, everyone. When\nI say “everyone”, the truth is only a\nminority of companies are already using\nproper DevOps. But enough companies that\nyou don’t have to be the first, you don’t\nhave to be the pioneer. DevOps is a\nwinning formula that will get you to\nmarket faster, more reliably, and\nwith better security. Or you could be the\nnext Equifax or the next Capital One,\nwhich is the default situation.</p>\n<h2 id=\"need-help-with-devops-security-and-privacy\">Need help with DevOps Security and Privacy?</h2>\n<p>FP Complete offers corporations its\nDevOps Success Program, which provides\nadvanced privacy and security software\nengineering mentoring, among many other\nmoving parts in the DevOps world.</p>\n<p>For more information, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/",
        "slug": "devops-security-and-privacy-strategies",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps Security and Privacy Strategies",
        "description": "DevOps Security and Privacy—FP Complete’s comprehensive, easy to understand guide designed to help you understand why they’re so critical to the safety of your DevOps strategy. The following is a transcription of a live webinar given by FP Complete Founder and Chairman Aaron Contorer, on FP Complete’s YouTube Channel. I’m the Founder and Chairman of FP Complete, where we […]",
        "updated": null,
        "date": "2020-05-29",
        "year": 2020,
        "month": 5,
        "day": 29,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "blogimage": "/images/blog-listing/network-security.png"
        },
        "path": "/blog/devops-security-and-privacy-strategies/",
        "components": [
          "blog",
          "devops-security-and-privacy-strategies"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "introducing-aaron",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#introducing-aaron",
            "title": "Introducing Aaron",
            "children": []
          },
          {
            "level": 2,
            "id": "breaches-are-happening-far-too-often",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#breaches-are-happening-far-too-often",
            "title": "Breaches are happening far too often",
            "children": []
          },
          {
            "level": 2,
            "id": "opportunities-for-penetration-are-everywhere",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#opportunities-for-penetration-are-everywhere",
            "title": "Opportunities for penetration are everywhere",
            "children": []
          },
          {
            "level": 2,
            "id": "let-s-talk-devsecops",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#let-s-talk-devsecops",
            "title": "Let’s talk DevSecOps",
            "children": []
          },
          {
            "level": 2,
            "id": "process-tips",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#process-tips",
            "title": "Process tips",
            "children": []
          },
          {
            "level": 2,
            "id": "how-to-strengthen-your-security",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#how-to-strengthen-your-security",
            "title": "How to strengthen your security",
            "children": []
          },
          {
            "level": 2,
            "id": "7-tech-ideas-you-can-start-now",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#7-tech-ideas-you-can-start-now",
            "title": "7 tech ideas you can start now",
            "children": []
          },
          {
            "level": 2,
            "id": "fast-to-market-reliable-and-secure",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#fast-to-market-reliable-and-secure",
            "title": "Fast to market, reliable, and secure",
            "children": []
          },
          {
            "level": 2,
            "id": "need-help-with-devops-security-and-privacy",
            "permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#need-help-with-devops-security-and-privacy",
            "title": "Need help with DevOps Security and Privacy?",
            "children": []
          }
        ],
        "word_count": 2005,
        "reading_time": 11,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/rapid-devops-success.md",
        "colocated_path": null,
        "content": "<p>Continuous integration and deployment, monitoring and logging, and security and privacy—FP Complete’s comprehensive, easy to understand guide designed to help you learn why those three DevOps strategies collectively create an environment where high-quality software can be developed quicker and more efficiently than ever before.</p>\n<p>Aaron Contorer, founder and chairman of FP Complete, presented the following webinar. Read below for a transcript of the video.</p>\n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/5U11unR_py0\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n<h2 id=\"introducing-aaron\">Introducing Aaron</h2>\n<p>I’m the Founder and Chairman of <a href=\"https://tech.fpcomplete.com/\">FP Complete</a>,\nwhere we help companies use\nstate-of-the-art tools and techniques to\nproduce secure, lightning-fast,\nfeature-rich software, faster and more\noften.</p>\n<p>Before founding FP Complete, I was an\nexecutive at Microsoft, where I served as\nprogram manager for distributed systems,\nand general manager of Visual C++, the\nleading software development tool at that\ntime. Also, I architected MSN’s move\nto Internet-based server software, served\nas the full-time technology adviser to\nBill Gates, and I founded and ran the\ncompany’s Productivity Tools Team for\ncomplex software engineering\nprojects.</p>\n<p>Okay, so enough about me. Let’s begin\nthis presentation by stating the\nobvious:</p>\n<h2 id=\"software-development-is-complicated\">Software development is complicated</h2>\n<p>As information technology and software\npeople, it’s easy to recognize how things\nare changing at an astonishing speed. To\nkeep pace, we need tools and processes\nthat allow us to rapidly deploy better\ncode more frequently with fewer errors.\nIs that a high bar to reach? Yes, of\ncourse, it is. 
But it absolutely must be\nmet—that is <em>if</em> you\nwant your company to survive.</p>\n<h2 id=\"inefficiencies-are-everywhere\">Inefficiencies are everywhere</h2>\n<p>In most companies, I would argue that\nthe information technology team and the\nsoftware engineering team are not totally\ntrusted by the rest of the company.</p>\n<p>Of course, I don’t mean they’re not\ntrusted as in they’re not good, smart\npeople. What I mean is that they don’t\nmeet their deadlines, leading to sprints\nbecoming longer than initially expected,\nultimately causing everyone to feel\nrushed and end results to lack\nquality.</p>\n<h2 id=\"it-has-lost-management-s-trust\">IT has lost management’s trust</h2>\n<p>When management begins to not trust\nengineering and IT, a bad dynamic\ndevelops. No longer does the team get to\nfocus on building great things for their\nend-users. Instead, they’re forced to\nfocus on solving their struggles and\ndealing with interpersonal friction.</p>\n<p>Believe it or not, the problems we’re\nhaving aren’t people-problems. It’s not\nthat they lack good intentions or\nbrainpower.</p>\n<p>Instead, the problem is this:</p>\n<h2 id=\"modern-software-ancient-tech\">Modern software, ancient tech</h2>\n<p><strong>Modern software development can’t be performed using ancient technologies applied within simplistic workflows.</strong></p>\n<p>I often like to say…</p>\n<p><em>“The best craftsperson with a\nhandsaw cannot do woodworking as\nefficiently as a robotic cutting\ntool.”</em></p>\n<p>When we automate our work, it becomes\nfaster and easier to replicate. We don’t\nbuild in lots of mistakes. 
As a result,\nwe get to move on with our lives instead\nof going back and reworking things over\nand over again.</p>\n<p>When we automate with good tools and\nbetter processes programmed in, and we\nrepeat this same process every time,\neveryone can trust that our work will be\nperformed with quality, and our systems\nwill be safer and more secure.</p>\n<p>Sounds ideal, doesn’t it? Of course,\nit does.</p>\n<p>But how do you do it? How do you\nevolve from the environment you’re\noperating in today to the utopia DevOps\nstrategies will allow you to live and\nwork within well into the future?</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/",
        "slug": "rapid-devops-success",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Webinar Review: Learn Rapid DevOps Success",
        "description": "Continuous integration and deployment, monitoring and logging, and security and privacy—FP Complete’s comprehensive, easy to understand guide designed to help you learn why...",
        "updated": null,
        "date": "2020-05-29",
        "year": 2020,
        "month": 5,
        "day": 29,
        "taxonomies": {
          "tags": [
            "devops",
            "insights"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/rapid-devops-success/",
        "components": [
          "blog",
          "rapid-devops-success"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "introducing-aaron",
            "permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#introducing-aaron",
            "title": "Introducing Aaron",
            "children": []
          },
          {
            "level": 2,
            "id": "software-development-is-complicated",
            "permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#software-development-is-complicated",
            "title": "Software development is complicated",
            "children": []
          },
          {
            "level": 2,
            "id": "inefficiencies-are-everywhere",
            "permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#inefficiencies-are-everywhere",
            "title": "Inefficiencies are everywhere",
            "children": []
          },
          {
            "level": 2,
            "id": "it-has-lost-management-s-trust",
            "permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#it-has-lost-management-s-trust",
            "title": "IT has lost management’s trust",
            "children": []
          },
          {
            "level": 2,
            "id": "modern-software-ancient-tech",
            "permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#modern-software-ancient-tech",
            "title": "Modern software, ancient tech",
            "children": []
          }
        ],
        "word_count": 582,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/rust-devops.md",
        "colocated_path": null,
        "content": "<p>On February 2, 2020, one of FP Complete's Lead Software Engineers—Mike McGirr—presented a webinar on using Rust for creating DevOps tooling.</p>\n<h2 id=\"webinar-outline\">Webinar Outline</h2>\n<p>FP Complete is hosting a functional programming\nwebinar on, “Learn Rapid Rust with DevOps Success\nStrategies.” A beginner’s guide including sample Rust\ndemonstration on writing your DevOps tools with Rust\nover Haskell. An introduction to Rust, with basic DevOps\nuse cases, and the library ecosystem, airing on\nFebruary 5th, 2020.</p>\n<p>The webinar will be hosted by Mike McGirr, a DevOps\nSoftware Engineer at FP Complete which will provide an\nabundance of Rust information with respect to\nfunctional programming and DevOps, featuring (safety,\nspeed and accuracy) that make it unique and contributes\nto its popularity, and its possible preference as a\nlanguage of choice for operating systems over Haskell,\nweb browsers and device drivers among others. The\nwebinar offers an interesting opportunity to learn and\nuse Rust in developing real world projects aside from\nHaskell or other functional programming languages\navailable today.</p>\n<h2 id=\"topics-covered\">Topics covered</h2>\n<p>During the webinar we will cover the following\ntopics:</p>\n<ul>\n<li>A quick intro and background into the Rust programming language</li>\n<li>Some scenarios and reasons why you would want to use Rust for writing your DevOps tooling (and some reasons why you wouldn’t)</li>\n<li>A small example of using the existing AWS libraries to create   a basic DevOps tool</li>\n<li>How to Integrate FP into your Organization</li>\n</ul>\n<p>Mike Mcgirr, a Lead Software Engineer at FP\nComplete,will help us understand reasoning that\nsupports using Rust over other functional programming\nlanguages offered in the market today.</p>\n<h2 id=\"more-about-your-host\">More about your host</h2>\n<p>The webinar will be hosted by Mike McGirr, a veteran\nDevOps Software Engineer at 
FP Complete. With years of\nexperience in DevOps software development, Mike will\nwalk us through a first in a series of Rust webinars\ndiscussing why we would, and how we could utilize Rust\nas a functional programming language to build DevOps\nover other functional programming languages available\nin the market today. Mike will also share with us a\nsmall example script written in Rust showing how Rust\nmay be used.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-devops/",
        "slug": "rust-devops",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Rust with DevOps Success Strategies",
        "description": "Wednesday Feb 5th, 2020, at 10:00 AM PST. Webinar Outline: FP Complete is hosting a functional programming webinar on, “Learn Rapid Rust with DevOps Success Strategies.” A beginner’s guide including sample Rust demonstration on writing your DevOps tools with Rust over Hasell. An introduction to Rust, with basic DevOps use cases, and the library ecosystem, […]",
        "updated": null,
        "date": "2020-02-05",
        "year": 2020,
        "month": 2,
        "day": 5,
        "taxonomies": {
          "tags": [
            "devops",
            "rust",
            "insights"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mike McGirr",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/rust-devops/",
        "components": [
          "blog",
          "rust-devops"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "webinar-outline",
            "permalink": "https://tech.fpcomplete.com/blog/rust-devops/#webinar-outline",
            "title": "Webinar Outline",
            "children": []
          },
          {
            "level": 2,
            "id": "topics-covered",
            "permalink": "https://tech.fpcomplete.com/blog/rust-devops/#topics-covered",
            "title": "Topics covered",
            "children": []
          },
          {
            "level": 2,
            "id": "more-about-your-host",
            "permalink": "https://tech.fpcomplete.com/blog/rust-devops/#more-about-your-host",
            "title": "More about your host",
            "children": []
          }
        ],
        "word_count": 351,
        "reading_time": 2,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/",
            "title": "Collect in Rust, traverse in Haskell and Scala"
          }
        ]
      },
      {
        "relative_path": "blog/what_is_govcloud.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2019/05/what_is_govcloud/",
        "slug": "what-is-govcloud",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "What is GovCloud?",
        "description": "Devops, FedRAMP Compliance, and Making your Migration to GovCloud Successful - What is GovCloud?",
        "updated": null,
        "date": "2019-05-28T17:54:00Z",
        "year": 2019,
        "month": 5,
        "day": 28,
        "taxonomies": {
          "tags": [
            "devops",
            "aws",
            "govcloud"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "J Boyer",
          "html": "hubspot-blogs/what_is_govcloud.html",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/2019/05/what_is_govcloud/",
        "components": [
          "blog",
          "2019",
          "05",
          "what_is_govcloud"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/deploying_haskell_apps_with_kubernetes.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/deploying_haskell_apps_with_kubernetes/",
        "slug": "deploying-haskell-apps-with-kubernetes",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Deploying Haskell Apps with Kubernetes",
        "description": "This webinar describes how to Deploy Haskell applications using Kubernetes. Topics to be discussed include creation of a Kube cluster using Terraform and Kops, describe pods, deployments, services, load balancers, etc., deployment of a built image using kubectl and deploy, and more.",
        "updated": null,
        "date": "2018-09-11T16:24:00Z",
        "year": 2018,
        "month": 9,
        "day": 11,
        "taxonomies": {
          "tags": [
            "haskell",
            "devops"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/deploying_haskell_apps_with_kubernetes.html",
          "blogimage": "/images/blog-listing/kubernetes.png"
        },
        "path": "/blog/deploying_haskell_apps_with_kubernetes/",
        "components": [
          "blog",
          "deploying_haskell_apps_with_kubernetes"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
            "title": "Our history with containerization"
          },
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/containerization/",
            "title": "Containerization"
          }
        ]
      },
      {
        "relative_path": "blog/devsecops.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/devsecops/",
        "slug": "devsecops",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevSecOps - Putting the Sec in DevOps",
        "description": "With today's tremendous security pressures, DevOps teams are moving to continuous development and integration, but continuous security is harder to integrate. To better understand how to secure your DevOps and protect your network read on.",
        "updated": null,
        "date": "2018-07-18T13:11:00Z",
        "year": 2018,
        "month": 7,
        "day": 18,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/devsecops.html",
          "blogimage": "/images/blog-listing/network-security.png"
        },
        "path": "/blog/devsecops/",
        "components": [
          "blog",
          "devsecops"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/deploying-rust-with-docker-and-kubernetes.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/",
        "slug": "deploying-rust-with-docker-and-kubernetes",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Deploying Rust with Docker and Kubernetes",
        "description": "Using a tiny Rust app to demonstrate deploying Rust with Docker and Kubernetes.",
        "updated": null,
        "date": "2018-07-17T14:36:00Z",
        "year": 2018,
        "month": 7,
        "day": 17,
        "taxonomies": {
          "tags": [
            "rust",
            "devops",
            "kubernetes"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Allen",
          "html": "hubspot-blogs/deploying-rust-with-docker-and-kubernetes.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/2018/07/deploying-rust-with-docker-and-kubernetes/",
        "components": [
          "blog",
          "2018",
          "07",
          "deploying-rust-with-docker-and-kubernetes"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
            "title": "Levana NFT Launch"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
            "title": "Our history with containerization"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
            "title": "Deploying Rust with Windows Containers on Kubernetes"
          },
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/containerization/",
            "title": "Containerization"
          }
        ]
      },
      {
        "relative_path": "blog/devops-to-prepare-for-a-blockchain-world.md",
        "colocated_path": null,
        "content": "<h2 id=\"introduction\">Introduction</h2>\n<p>As the world adopts blockchain technologies, your IT infrastructure — and its\npredictability — become critical. Many companies lack the levels of automation\nand control needed to survive in this high-opportunity, high-threat environment.</p>\n<p>Are your software, cloud, and server systems automated and robust enough? Do you\nhave enough quality control for both your development and your online operations?\nOr will you join the list of companies bruised by huge data breaches and loss o\nf control over their own computer systems? If you are involved in blockchain, or\nany industry for that matter, these are the questions you need to ask yourself.</p>\n<p>Blockchain will require you to put more information online than ever before,\ncreating huge exposures for organizations that do not have a handle on their\nsecurity. Modern DevOps technologies, including many open-source systems, offer\npowerful solutions that can improve your systems to a level suitable for use with\nblockchain.</p>\n<h2 id=\"are-companies-really-ready-for-blockchain-technology\">Are companies REALLY ready for Blockchain technology?</h2>\n<p>The answer to it is most of the companies are NOT and those who are need to audit\nor reevaluate whether they are. The reason is BlockChain puts data to public making\nit prone to outside attacks if systems are not hardenend and updated on timely\nmanner.</p>\n<p>Big companies such as Equifax had millions of records stolen, Heartland credit\nprocessing was hacked and eventually had to pay 110 million and Airbus A400M due \nto wrong installation of manual software patch resulted in death of everyone on\non the plain. These are few of many such big companies that was hacked due to poorly\nimplemented IT technology.</p>\n<p>Once hailed as unhackable, blockchains are now getting hacked. 
According to a MIT\ntechnology review, hackers have stolen nearly $2 billion worth of cryptocurrency\nsince the beginning of 2017.</p>\n<h2 id=\"big-question-why-companies-are-getting-hacked\">Big Question: Why Companies are getting hacked ?</h2>\n<p>Blockchain itself isn't always the problem. Sometimes the blockchain is  secure \nbut the IT infrastructure is not capable to supporting it. There are cases where \nopen firewalls, unencrypted data, poor testing and manual errors were reasons \nbehind the hacking.</p>\n<p>So, the question to ask is: Is the majority of your IT infrastructure secure \nand reliable enough to support Blockchain Technology ?</p>\n<h2 id=\"what-is-an-it-factory\">What is an IT Factory ?</h2>\n<p>IT factory as per <a href=\"https://www.fpcomplete.com/our-team/\">Aaron Contorer</a>, founder \nand Chariman of FP Complete is divided into 3 parts</p>\n<ol>\n<li>Development</li>\n<li>Deployment</li>\n<li>System Operations</li>\n</ol>\n<p>If IT factory is implemented properly at each stage it could result in a new and\nbetter IT services leading to a more reliable, scalable and secure environment.</p>\n<p>Deployment is a bridge that allows software running on a developer laptop all the\nway to a scalable system and running Ops for monitoring. With DevOps practice,\nwe can ensure all the three stages of IT factory implemented.</p>\n<p>But, the key to build a working IT factory is Automation that ensure each step\nin the deployment process is reliable. With microservices architecture ,building\nand testing a reliable containerized based system is much easier now compared to\nthe earlier days.</p>\n<p>The only way to ensure a reliable, reproducible system is if companies start\nautomating each step of their software life cycle journey. 
Companies that are ensuring\ngood DevOps practices have a robust IT infrastructure compared to those that are\nNOT.</p>\n<h2 id=\"devops-for-blockchain\">DevOps for Blockchain</h2>\n<p>DevOps tools helps BlockChain better as it can ensure all code is tracked, tested,\ndeployed automatically, audited and Quality Assurance tested along each stage of\nthe delivery pipeline.</p>\n<p>The other benefits of having DevOps methods implemented in BlockChain is that it \nreduces the overall operational cost to companies, speeds up the overall pace of \nsoftware development and release cycle, improves the software quality and increases\nthe productivity.</p>\n<p>The following DevOps methods, if implemented in Blockchain, can be very helpful</p>\n<p><strong>1. Engineer for Safety</strong></p>\n<ul>\n<li>With proper version control tool like GITHUB , source code can be viewed,\ntracked with proper history of all changes to the base</li>\n<li>Development tools used by developers should be of the same version, should be\ntracked and should be  uniform across the project</li>\n<li>Continuous Integration (CI) pipeline must be implemented at the development\nstage to ensure nothing breaks on each commit. There are tools such as Jenkins,\nBamboo, Code Pipeline and many more that can help in setting up a proper CI .</li>\n<li>Each commit should be properly tested using test case management system with\nproper unit test cases for each commit</li>\n<li>Each Project should also have an Issue tracking system like JIRA, GITLAB etc\nto ensure all requests are properly tracked and closed.</li>\n</ul>\n<p><strong>2. 
Deploy for Safety</strong></p>\n<ul>\n<li>Continuous Deployment via DevOps tools to ensure code is automatically deployed\nto each environment</li>\n<li>Each environment (Development, Testing, DR, Production) should be a replica\nof each other</li>\n<li>Allow automation to setup all relevant infrastructure related to allow successful\ndeployment of code</li>\n<li>Setup infrastructure as code (IAC) to provision infrastructure that helps in\nreducing manual errors</li>\n<li>Sanity of each deployment by running test cases to ensure each component is\nfunctioning as expected</li>\n<li>Running Security testing after each Deployment on each environment</li>\n<li>Ensure system can be  RollBack/Rollforward without any manual intervention like\nCanary/Blue-Green Deployment</li>\n<li>Use container based deployments that provide more reliability for deployments</li>\n</ul>\n<p><strong>3. Operate for Safety</strong></p>\n<ul>\n<li>Set up Continuous Automated Monitoring and Logging</li>\n<li>Set up Anomaly detection and alerting mechanism</li>\n<li>Set up Automated Response and Recovery for any failures</li>\n<li>Ensure a Highly Available and scalable system for reliability</li>\n<li>Ensure data is encrypted for all outbound and inbound communication</li>\n<li>Ensure separation of admin powers, database powers, deployment powers , user \naccess etc. The more the powers are separated the lesser the risk</li>\n</ul>\n<p><strong>4. Separate for Safety</strong></p>\n<ul>\n<li>Separate each system internally from each other by using multiple small networks.\nFor Eg: database/backend on private subnets while UI on public subnets</li>\n<li>Set Internal and MutFirewalls ensure the database systems are protected with no access</li>\n<li>Separate Responsibility and credentials for reduce risk of exposure</li>\n</ul>\n<p><strong>5. 
Human systems</strong></p>\n<p>Despite keeping hardware and software checks, most the breaking of blockchain\nsystems today has happened because of &quot;People&quot; or &quot;Human Errors&quot;.</p>\n<p>Most people try hacks/workaround to get stuff working on production with no knowledge\non the impacts it could do on the system. Sometimes these stuff are not documented\nmaking it hard for the other person to fix it. Sometimes asking others to login\nto unauthorized systems by sharing credentials over calls paves a path for unsecure\nsystems</p>\n<p>To ensure companies must,</p>\n<ul>\n<li>Train people to STOP doing manual efforts to fix a broken system.</li>\n<li>Train people  NOT to do &quot;Social Engineering&quot; like asking colleagues \nto login to systems on their behalf, sharing passwords etc.</li>\n</ul>\n<p><strong>6. Quality Assurance</strong></p>\n<ul>\n<li>Need to review the Architectural as well as best practices are ensured in the\nproduct life cycle</li>\n<li>Need to ensure the code deploy pipeline has scope for penetration Testing</li>\n<li>Need to ensure there is weekly/monthly auditing of metrics, logs , systems to\ncheck for threats to the systems</li>\n<li>Each component and patch on system should be tested and approved by QA before\nrolling out to Production</li>\n<li>Companies could also hire third parties to audit their system on their behalf</li>\n</ul>\n<h2 id=\"how-to-get-there\">How to get there ?</h2>\n<p>The good news is &quot;IT IS POSSIBLE&quot;. There is no need for giant or all-in-one solutions.</p>\n<p>Companies that are starting fresh need  to start at the early phase of development\nto building a reliable system by focussing on above 6 points mentioned above. They\nneed to start thinking on all areas in the &quot;Plan and Design&quot; phase itself.</p>\n<p>For companies who are already on production or nearing production does not need\nto have to start fresh . 
They can start making incremental progress but it needs\nto start TODAY.</p>\n<p>Automation is the only SCIENCE in IT that can reduce errors and help towards building \na more and more reliable system. It will in the future save money and resources that \ncan be redirected to focus on other areas.</p>\n<p>To conclude, <a href=\"https://www.fpcomplete.com\">FP Complete</a> has been a leading consultant \non providing DevOps services. We excel at what we do and if you are looking to implement \nDevOps in your BlockChain. Please feel free to reach out to us for free consultations.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/",
        "slug": "devops-to-prepare-for-a-blockchain-world",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps to Prepare for a Blockchain World",
        "description": "This webinar describes how Devops can be used to prepare any company that is interested in adopting blockchain technology. Many companies lack the level of automation and control needed to survive in this high-opportunity, high-threat environment but DevOps technologies can offer powerful solutions.",
        "updated": null,
        "date": "2018-06-07T08:03:00Z",
        "year": 2018,
        "month": 6,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming",
            "devops"
          ],
          "tags": [
            "devops",
            "blockchain"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/devops-to-prepare-for-a-blockchain-world/",
        "components": [
          "blog",
          "devops-to-prepare-for-a-blockchain-world"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "introduction",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#introduction",
            "title": "Introduction",
            "children": []
          },
          {
            "level": 2,
            "id": "are-companies-really-ready-for-blockchain-technology",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#are-companies-really-ready-for-blockchain-technology",
            "title": "Are companies REALLY ready for Blockchain technology?",
            "children": []
          },
          {
            "level": 2,
            "id": "big-question-why-companies-are-getting-hacked",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#big-question-why-companies-are-getting-hacked",
            "title": "Big Question: Why Companies are getting hacked ?",
            "children": []
          },
          {
            "level": 2,
            "id": "what-is-an-it-factory",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#what-is-an-it-factory",
            "title": "What is an IT Factory ?",
            "children": []
          },
          {
            "level": 2,
            "id": "devops-for-blockchain",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#devops-for-blockchain",
            "title": "DevOps for Blockchain",
            "children": []
          },
          {
            "level": 2,
            "id": "how-to-get-there",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#how-to-get-there",
            "title": "How to get there ?",
            "children": []
          }
        ],
        "word_count": 1354,
        "reading_time": 7,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/controlling-access-to-nomad-clusters.md",
        "colocated_path": null,
        "content": "<p>In this blog post, we will learn how to control access to nomad.</p>\n<h2 id=\"introduction\">Introduction</h2>\n<p>Nomad is an application scheduler, that helps you schedule application-processes\nefficiently, across multiple servers, and keep your infrastructure costs low.\nNomad is capable of scheduling containers, virtual machines, as well as isolated\nforked processes.</p>\n<p>There are other schedulers available, such as Kubernetes, Mesos or Docker Swarm,\nbut each has different mechanisms for securing access. By following this post,\nyou will understand the main components in securing your Nomad cluster, but the\noverall idea is valid across any of the other schedulers available.</p>\n<p>One of Nomad's selling points, and why you could consider it over tools like\nKubernetes, is that you can schedule not only containers, but also QEMU\nimages, LXC, isolated <code>fork/exec</code> processes, and even Java applications in a\nchroot(!). All you need is a driver implemented for Nomad. 
On the other hand,\nits community is smaller than Kubernetes, so the tradeoffs have to be measured\non a project-by-project basis.</p>\n<p>We will start by deploying a test cluster and configuring access control lists\n(ACLs).</p>\n<h2 id=\"overview\">Overview</h2>\n<ul>\n<li>Nomad uses tokens to authenticate client requests.</li>\n<li>Each token is associated with policies.</li>\n<li>Policies are a collection of rules to allow or deny operations on resources.</li>\n</ul>\n<p>In this tutorial, we will:</p>\n<ol>\n<li>Setup our environment to run nomad inside a Vagrant virtual machine for running experiments</li>\n<li>We generate a root/admin token (usually known as the &quot;management&quot; token) and activate ACLs</li>\n<li>Using the management token, we add a new &quot;non-admin&quot; policy and create a token associated with this new policy</li>\n<li>Use the &quot;non-admin&quot; token to demonstrate access control.</li>\n</ol>\n<h2 id=\"setup-the-environment\">Setup the environment</h2>\n<p>Pre-requisites:</p>\n<ul>\n<li>POSIX shell, such as GNU Bash</li>\n<li>Vagrant &gt; <code>2.0.1</code></li>\n<li>Nomad demo <a href=\"https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/Vagrantfile\"><code>Vagrantfile</code></a></li>\n</ul>\n<p>We will run everything from within a virtual machine with all the necessary\nconfiguration and applications. 
Execute the following commands on your shell:</p>\n<pre><code>$ cd $(mktemp --directory)\n$ curl -LO https:&#x2F;&#x2F;raw.githubusercontent.com&#x2F;hashicorp&#x2F;nomad&#x2F;master&#x2F;demo&#x2F;vagrant&#x2F;Vagrantfile\n$ vagrant up\n    ...\n    lines and lines of Vagrant output\n    this might take a while\n    ...\n$ vagrant ssh\n    ...\n    Message of the day greeting from VM\n    Anything after this point is being executed inside the virtual machine\n    ...\nvagrant@nomad:~$ nomad version\nNomad vX.X.X\nvagrant@nomad:~$ uname -n\nnomad\n</code></pre>\n<p>Depending on your system and the version of <code>Vagrantfile</code> used, the prompt may\nbe different.</p>\n<h2 id=\"setup-nomad\">Setup Nomad</h2>\n<p>We configure nomad to execute both as server and client for convenience, as\nopposed to a production environment where the server is remote and client is\nlocal to each machine or node. Create a <code>nomad-agent.conf</code> with the following\ncontents:</p>\n<pre><code>bind_addr = &quot;0.0.0.0&quot;\ndata_dir = &quot;&#x2F;var&#x2F;lib&#x2F;nomad&quot;\nregion = &quot;global&quot;\nacl {\n  enabled = true\n}\nserver {\n  enabled              = true\n  bootstrap_expect     = 1\n  authoritative_region = &quot;global&quot;\n}\nclient {\n  enabled = true\n}\n</code></pre>\n<p>Then, execute:</p>\n<pre><code>vagrant@nomad:~$ sudo nomad agent -config=nomad-agent.conf # sudo is needed to run as a client\n</code></pre>\n<p>You should see output indicating that Nomad is running.</p>\n<blockquote>\n<p>Clients need root access to be able to execute processes, while servers only\ncommunicate to synchronize state.</p>\n</blockquote>\n<h2 id=\"acl-bootstrap\">ACL Bootstrap</h2>\n<p>On another terminal, after running <code>vagrant ssh</code> from our temporary working\ndirectory, run the following command:</p>\n<pre><code>vagrant@nomad:~$ nomad acl bootstrap\n\nAccessor ID  = 2f34299b-0403-074d-83e2-60511341a54c\nSecret ID    = 
9fff6a06-b991-22db-7fed-55f17918e846\nName         = Bootstrap Token\nType         = management\nGlobal       = true\nPolicies     = n&#x2F;a\nCreate Time  = 2018-02-14 19:09:23.424119008 +0000 UTC\nCreate Index = 13\nModify Index = 13\n</code></pre>\n<p>This <code>Secret ID</code> is our <code>management</code> (admin) token. This token is valid globally\nand all operations are permitted. No policies are necessary while authenticating\nwith the management token, and so, none are configured by default.</p>\n<p>It is important to copy the <code>Accessor ID</code> and <code>Secret ID</code> to some file, for\nsafekeeping, as we will need these values later. For a production environment,\nit is safest to store these in a separate vault permanently.</p>\n<p>Once ACLs are on, all operations are denied <em>unless</em> a valid token is provided\nwith each request, and the operation we want is allowed by a policy associated\nwith the provided token.</p>\n<pre><code>vagrant@nomad:~$ nomad node-status\nError querying node status: Unexpected response code: 403 (Permission denied)\n\nvagrant@nomad:~$ export NOMAD_TOKEN=&#x27;9fff6a06-b991-22db-7fed-55f17918e846&#x27; # Secret ID, above\nvagrant@nomad:~$ nomad node-status\n\nID        DC   Name   Class   Drain  Status\n1f638a17  dc1  nomad  &lt;none&gt;  false  ready\n</code></pre>\n<h2 id=\"designing-policies\">Designing policies</h2>\n<p>Policies are a collection of (ideally, non-overlapping) roles, that provide\naccess to different operations. 
The table below shows typical users of a Nomad\ncluster.</p>\n<table><thead><tr><th>Role</th><th>Namespace</th><th>Agent</th><th>Node</th><th>Remarks</th></tr></thead><tbody>\n<tr><td>Anonymous</td><td><code>deny</code></td><td><code>deny</code></td><td><code>deny</code></td><td>Unnecessary, as token-less requests are denied all operations.</td></tr>\n<tr><td>Developer</td><td><code>write</code></td><td><code>deny</code></td><td><code>read</code></td><td>Developers are permitted to debug their applications, but not to perform cluster management</td></tr>\n<tr><td>Logger</td><td><code>list-jobs</code>, <code>read-logs</code></td><td><code>deny</code></td><td><code>read</code></td><td>Automated log aggregators or analyzers that need read access to logs</td></tr>\n<tr><td>Job requester</td><td><code>submit-job</code></td><td><code>deny</code></td><td><code>deny</code></td><td>CI systems create new jobs, but don't interact with running jobs.</td></tr>\n<tr><td>Infrastructure</td><td><code>read</code></td><td><code>write</code></td><td><code>write</code></td><td>DevOps teams perform cluster management but seldom need to interact with running jobs.</td></tr>\n</tbody></table>\n<blockquote>\n<p>For namespace access, <code>read</code> is equivalent to\n<code>[read-job, list-jobs]</code>. <code>write</code> is equivalent to\n<code>[list-jobs, read-job, submit-job, read-logs, read-fs, dispatch-job]</code>.</p>\n</blockquote>\n<blockquote>\n<p>In the event that operators do need to have access to namespaces, one can\nalways create a token that has <em>both</em> Developer and Infrastructure policies\nattached. This is equivalent to having a <code>management</code> token.</p>\n</blockquote>\n<p>We have left out multi-region and multi-namespace setups here. We have assumed\neverything to be running under the <code>default</code> namespace. 
On production\ndeployments with larger needs, policies can be\ndesigned per namespace and tracked across regions.</p>\n<h2 id=\"policy-specification\">Policy specification</h2>\n<p>Policies are expressed as a combination of rules. Note that the <code>deny</code> rule\ntakes precedence over any conflicting capability.</p>\n<p>Nomad accepts a JSON payload with the name and description of a policy, along\nwith a <em>quoted</em> JSON or HCL document with rules, like the following.</p>\n<pre><code>{\n  &quot;Description&quot;: &quot;Agent and node management&quot;,\n  &quot;Name&quot;: &quot;infrastructure&quot;,\n  &quot;Rules&quot;: &quot;{\\&quot;agent\\&quot;:{\\&quot;policy\\&quot;:\\&quot;write\\&quot;},\\&quot;node\\&quot;:{\\&quot;policy\\&quot;:\\&quot;write\\&quot;}}&quot;\n}\n</code></pre>\n<p>This policy matches the Infrastructure row in the table above.\nCreate an <code>infrastructure.json</code> with the content above for use in the next step.</p>\n<blockquote>\n<p>TIP:</p>\n<p>To avoid error-prone quoting, one could write the policies in YAML:</p>\n<pre><code>Name: infrastructure\nDescription: Agent and node management\nRules:\n  agent:\n    policy: write\n  node:\n    policy: write\n</code></pre>\n<p>And then convert them to JSON, with the necessary quoting, by running:</p>\n<pre><code>$ yaml2json &lt; infrastructure.yaml | jq &#x27;.Rules = (.Rules | @text)&#x27; &gt; infrastructure.json\n</code></pre>\n</blockquote>\n<h2 id=\"adding-a-policy\">Adding a policy</h2>\n<p>To add the policy, simply make an HTTP POST request to the server. 
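</p>\n<p>Alternatively, newer Nomad releases can read a rules file directly via the CLI,\nwhich sidesteps the manual quoting entirely (a sketch, assuming the rules are\nsaved unquoted in a hypothetical <code>infrastructure.hcl</code>):</p>\n<pre><code>vagrant@nomad:~$ nomad acl policy apply \\\n    -description=&#x27;Agent and node management&#x27; \\\n    infrastructure infrastructure.hcl\n</code></pre>\n<p>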
The\n<code>NOMAD_TOKEN</code> below is the &quot;management&quot; token that we first created.</p>\n<pre><code>vagrant@nomad:~$ curl \\\n    --request POST \\\n    --data @infrastructure.json \\\n    --header &quot;X-Nomad-Token: ${NOMAD_TOKEN}&quot; \\\n    https:&#x2F;&#x2F;127.0.0.1:4646&#x2F;v1&#x2F;acl&#x2F;policy&#x2F;infrastructure\n\nvagrant@nomad:~$ nomad acl policy list\nName            Description\ninfrastructure  Agent and node management\n\nvagrant@nomad:~$ nomad acl policy info infrastructure\nName        = infrastructure\nDescription = Agent and node management\nRules       = {&quot;agent&quot;:{&quot;policy&quot;:&quot;write&quot;},&quot;node&quot;:{&quot;policy&quot;:&quot;write&quot;}}\nCreateIndex = 425\nModifyIndex = 425\n</code></pre>\n<h2 id=\"creating-a-token-for-a-policy\">Creating a token for a policy</h2>\n<p>We now create a token for the <code>infrastructure</code> policy, and attempt a few operations\nwith it:</p>\n<pre><code>vagrant@nomad:~$ nomad acl token create \\\n    -name=&#x27;devops-team&#x27; \\\n    -type=&#x27;client&#x27; \\\n    -global=&#x27;true&#x27; \\\n    -policy=&#x27;infrastructure&#x27;\n\nAccessor ID  = 927ea7a4-e689-037f-be89-54a2cdbd338c\nSecret ID    = 26832c8d-9315-c1ef-aabf-2058c8632da8\nName         = devops-team\nType         = client\nGlobal       = true\nPolicies     = [infrastructure]\nCreate Time  = 2018-02-15 19:53:59.97900843 +0000 UTC\nCreate Index = 432\nModify Index = 432\n\nvagrant@nomad:~$ export NOMAD_TOKEN=&#x27;26832c8d-9315-c1ef-aabf-2058c8632da8&#x27; # change the token to the new one with the &quot;infrastructure&quot; policy attached\nvagrant@nomad:~$ nomad status\nError querying jobs: Unexpected response code: 403 (Permission denied)\n\nvagrant@nomad:~$ nomad node-status\nID        DC   Name   Class   Drain  Status\n1f638a17  dc1  nomad  &lt;none&gt;  false  ready\n</code></pre>\n<p>As you can see, anyone with the <code>devops-team</code> token will be allowed to\nrun operations on nodes, 
but not on jobs -- i.e. on namespace resources.</p>\n<h2 id=\"where-to-go-next\">Where to go next</h2>\n<p>The example above demonstrates adding one of the policies from our list at the\nbeginning. Adding the rest of them and trying different commands could be a\ngood exercise.</p>\n<p>As a reference, the FP Complete team maintains a\n<a href=\"https://github.com/fpco/nomad-acl-policies\">repository</a> with\npolicies ready for use.</p>\n<h4 id=\"related-articles\">Related articles</h4>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/\">DevOps best practices: immutability</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/\">How to implement containers to streamline your DevOps workflow</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2016/05/stack-security-gnupg-keys/\">Stack security: GnuPG keys</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/",
        "slug": "controlling-access-to-nomad-clusters",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Controlling access to Nomad clusters",
        "description": "Learn how to control access to your Nomad clusters on a per-role basis. This will get you the benefits of application schedulers such as Nomad and Kubernetes with all the security guarantees your services need but without the complex and lengthy setup that some other popular tools demand.",
        "updated": null,
        "date": "2018-05-17T13:21:00Z",
        "year": 2018,
        "month": 5,
        "day": 17,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/2018/05/controlling-access-to-nomad-clusters/",
        "components": [
          "blog",
          "2018",
          "05",
          "controlling-access-to-nomad-clusters"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "introduction",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#introduction",
            "title": "Introduction",
            "children": []
          },
          {
            "level": 2,
            "id": "overview",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#overview",
            "title": "Overview",
            "children": []
          },
          {
            "level": 2,
            "id": "setup-the-environment",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#setup-the-environment",
            "title": "Setup the environment",
            "children": []
          },
          {
            "level": 2,
            "id": "setup-nomad",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#setup-nomad",
            "title": "Setup Nomad",
            "children": []
          },
          {
            "level": 2,
            "id": "acl-bootstrap",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#acl-bootstrap",
            "title": "ACL Bootstrap",
            "children": []
          },
          {
            "level": 2,
            "id": "designing-policies",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#designing-policies",
            "title": "Designing policies",
            "children": []
          },
          {
            "level": 2,
            "id": "policy-specification",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#policy-specification",
            "title": "Policy specification",
            "children": []
          },
          {
            "level": 2,
            "id": "adding-a-policy",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#adding-a-policy",
            "title": "Adding a policy",
            "children": []
          },
          {
            "level": 2,
            "id": "creating-a-token-for-a-policy",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#creating-a-token-for-a-policy",
            "title": "Creating a token for a policy",
            "children": []
          },
          {
            "level": 2,
            "id": "where-to-go-next",
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#where-to-go-next",
            "title": "Where to go next",
            "children": [
              {
                "level": 4,
                "id": "related-articles",
                "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#related-articles",
                "title": "Related articles",
                "children": []
              }
            ]
          }
        ],
        "word_count": 1393,
        "reading_time": 7,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/continuous-integration-delivery-best-practices.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/continuous-integration-delivery-best-practices/",
        "slug": "continuous-integration-delivery-best-practices",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Best practices when implementing continuous integration and delivery",
        "description": "Although there are countless reasons to ditch the old ways of development and adopt DevOps practices, the change from one to the other can be an intimidating task. Use these best practices to ensure your company succeeds during these transitions. ",
        "updated": null,
        "date": "2018-04-11T12:49:00Z",
        "year": 2018,
        "month": 4,
        "day": 11,
        "taxonomies": {
          "categories": [
            "devops",
            "kub360"
          ],
          "tags": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Deni Bertovic",
          "html": "hubspot-blogs/continuous-integration-delivery-best-practices.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/continuous-integration-delivery-best-practices/",
        "components": [
          "blog",
          "continuous-integration-delivery-best-practices"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fintech-best-practices-devops-priorities-for-financial-technology-applications.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/fintech-best-practices-devops-priorities-for-financial-technology-applications/",
        "slug": "fintech-best-practices-devops-priorities-for-financial-technology-applications",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FinTech best practices: DevOps Priorities for Financial Technology Applications",
        "description": "Modern software development is complicated, but developing software for the FinTech industry adds a whole new dimension of complexity. Adopting modern DevOps principles will ensure your software adheres to FinTech best practices. This blog explains how you can get started and be successful.",
        "updated": null,
        "date": "2018-04-05T12:21:00Z",
        "year": 2018,
        "month": 4,
        "day": 5,
        "taxonomies": {
          "categories": [
            "devops",
            "kube360"
          ],
          "tags": [
            "devops",
            "fintech"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/fintech-best-practices-devops-priorities-for-financial-technology-applications.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/fintech-best-practices-devops-priorities-for-financial-technology-applications/",
        "components": [
          "blog",
          "fintech-best-practices-devops-priorities-for-financial-technology-applications"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/recover-your-elasticsearch.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/04/recover-your-elasticsearch/",
        "slug": "recover-your-elasticsearch",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Recover your Elasticsearch",
        "description": "When using Elasticsearch you may run into cluster problems that could lose data because of a corrupt index. All is not lost, because there are ways to recover your Elasticsearch. Find out how to bring the cluster to a healthy state with minimal or no data loss in such a situation. ",
        "updated": null,
        "date": "2018-04-03T13:42:00Z",
        "year": 2018,
        "month": 4,
        "day": 3,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Alexey Kuleshevich",
          "html": "hubspot-blogs/recover-your-elasticsearch.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/2018/04/recover-your-elasticsearch/",
        "components": [
          "blog",
          "2018",
          "04",
          "recover-your-elasticsearch"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/without-performance-tests-we-will-have-a-bad-time-forever.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/without-performance-tests-we-will-have-a-bad-time-forever/",
        "slug": "without-performance-tests-we-will-have-a-bad-time-forever",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Without performance tests, we will have a bad time, forever",
        "description": "When writing Haskell software, you cannot assume performance is optimized. You must rely on automated testing rather than human inspection. Letting performance regressions slip through is not an option, or you will have a bad time.",
        "updated": null,
        "date": "2018-03-15T11:36:00Z",
        "year": 2018,
        "month": 3,
        "day": 15,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Niklas Hambüchen",
          "html": "hubspot-blogs/without-performance-tests-we-will-have-a-bad-time-forever.html",
          "blogimage": "/images/blog-listing/qa.png"
        },
        "path": "/blog/without-performance-tests-we-will-have-a-bad-time-forever/",
        "components": [
          "blog",
          "without-performance-tests-we-will-have-a-bad-time-forever"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/how-to-implement-containers-to-streamline-your-devops-workflow.md",
        "colocated_path": null,
        "content": "<h1 id=\"what-are-docker-containers\">What are Docker Containers?</h1>\n<p>Docker containers are a form of &quot;lightweight&quot; virtualization. They allow a\nprocess or process group to run in an environment with its own file system,\nsomewhat like <code>chroot</code> jails, and also with its own process table, users and\ngroups, and, optionally, virtual network and resource limits. For most purposes,\nthe processes in a container think they have an entire OS to themselves and do\nnot have access to anything outside the container (unless explicitly granted).\nThis lets you precisely control the environment in which your processes run,\nallows multiple processes on the same (virtual) machine to have completely\ndifferent (even conflicting) requirements, and significantly increases isolation\nand container security.</p>\n<p>In addition to containers, Docker makes it easy to build and distribute images\nthat wrap up an application with its complete runtime environment.</p>\n<p>For more information, see\n<a href=\"https://www.cio.com/article/2924995/software/what-are-containers-and-why-do-you-need-them.html\">What are containers and why do you need them?</a>\nand\n<a href=\"https://containerjournal.com/2017/01/11/containers-devops-anyway/\">What Do Containers Have to Do with DevOps, Anyway?</a>.</p>\n<h1 id=\"containers-vs-virtual-machines-vms\">Containers vs Virtual Machines (VMs)</h1>\n<p>The difference between the &quot;lightweight&quot; virtualization of containers and the\n&quot;heavyweight&quot; virtualization of VMs boils down to this: for the former, the\nvirtualization happens at the kernel level, while for the latter it happens at\nthe hypervisor level. 
In other words, all the containers on a machine share the\nsame kernel, and code in the kernel isolates the containers from each other\nwhereas each VM acts like separate hardware and has its own kernel.</p>\n<img alt=\"Docker Carrying Haskell.jpg\" sizes=\"(max-width: 320px) 100vw, 320px\" src=\"/images/hubspot/4536576cadee37e3ea1e0a35a83a97a55015af6773242ecda5f919a7f1628cc5.jpeg\" srcset=\"/images/hubspot/04a7b5b957c890331f8535859d7c8528eadf4d83c82ae65e86ea28fea6f82898.jpeg 160w, /images/hubspot/4536576cadee37e3ea1e0a35a83a97a55015af6773242ecda5f919a7f1628cc5.jpeg 320w, /images/hubspot/1652b04e09bee96b23e47adb5830543a1feac5a48d5488b22602cec12a1b131d.jpeg 480w, /images/hubspot/4a5e5498d817ee00db5fdc27b5827a41a41d07253d95f73e093809cd27d6ea45.jpeg 640w, /images/hubspot/d77567fd61f4146be574d81e707b90ca7f80f3005770d6ef527ff656eb9b913d.jpeg 800w, /images/hubspot/f9971781a2d67ed9b0b30a5798652fdc1975985603d9fde0b60bf89de73faa7a.jpeg 960w\" style=\"width: 320px; margin: 0px 0px 10px 10px; letter-spacing: -0.08px; float: right;\" width=\"320\">\n<p>Containers are much less resource intensive than VMs because they do not need\nto be allocated exclusive memory and file system space or have the overhead of\nrunning an entire operating system. This makes it possible to run many more\ncontainers on a machine than you would VMs. Containers start nearly as fast as\nregular processes (you don't have to wait for the OS to boot), and parts of the\nhost's file system can be easily &quot;mounted&quot; into the container's file system\nwithout any additional overhead of network file system protocols.</p>\n<p>On the other hand, isolation is less guaranteed. If not careful, you can\noversubscribe a machine by running containers that need more resources than the\nmachine has available (this can be mitigated by setting appropriate resource\nlimits on containers). 
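</p>\n<p>For example, such limits can be set per container at run time with standard\n<code>docker run</code> flags (illustrative values; <code>myapp:latest</code> is a\nhypothetical image):</p>\n<pre><code>$ docker run --memory=512m --cpus=1.5 myapp:latest\n</code></pre>\n<p>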
While container security is an improvement over normal\nprocesses, the shared kernel means the attack surface is greater and there is\nmore risk of leakage between containers than there is between VMs.</p>\n<p>For more information, see <a href=\"https://blog.netapp.com/blogs/containers-vs-vms/\">Docker containers vs. virtual machines: What's the\ndifference?</a> and <a href=\"https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/\">DevOps Best\nPractices: Immutability</a>.</p>\n<h1 id=\"how-docker-containers-enhance-continuous-delivery-pipelines\">How Docker Containers Enhance Continuous Delivery Pipelines</h1>\n<p>There are, broadly, two areas where containers fit into your DevOps\nworkflow: for builds, and for deployment. They are often used together,\nbut do not have to be.</p>\n<h3 id=\"builds\">Builds</h3>\n<ul>\n<li>\n<p><strong>Synchronizing build environments:</strong> It can be difficult to keep\nbuild environments synchronized between developers and CI/CD\nservers, which can lead to unexpected build failures or changes in\nbehaviour. Docker images let you specify <em>exactly</em> the build tools,\nlibraries, and other dependencies (including their versions)\nrequired, without needing to install them on individual machines, and\ndistribute those images easily. This way you can be sure that\neveryone is using exactly the same build environment.</p>\n</li>\n<li>\n<p><strong>Managing changes to build environments:</strong> Managing changes to\nbuild environments can also be difficult, since you need to roll\nthose out to all developers and build servers at the right time.\nThis can be especially tricky when there are multiple branches of\ndevelopment, some of which may need older or newer environments than\neach other. 
With Docker, you can specify a particular version of the\nbuild image along with the source code, which means a particular\nrevision of the source code will always build in the right\nenvironment.</p>\n</li>\n<li>\n<p><strong>Isolating build environments:</strong> One CI/CD server may have to build\nmultiple projects, which may have conflicting requirements for build\ntools, libraries, and other dependencies. By running each build in\nits own ephemeral container created from potentially different\nDocker images, you can be certain that these build environments\nwill not interfere with each other.</p>\n</li>\n</ul>\n<h3 id=\"deployment\">Deployment</h3>\n<ul>\n<li>\n<p><strong>Runtime environment bundled with application:</strong> The CD system\nbuilds a complete Docker image that bundles the application's\nenvironment with the application itself, and then deploys the whole\nimage as one &quot;atomic&quot; step. There is no chance for configuration\nmanagement scripts to fail at deployment time, and no risk of the\nsystem configuration being out of sync.</p>\n</li>\n<li>\n<p><strong>Preventing malicious changes:</strong> Container security is improved by\nusing immutable SHA digests to identify Docker images, which means\nthere is no way for a malicious actor to inject malware into your\napplication or its environment.</p>\n</li>\n<li>\n<p><strong>Easily roll back to a previous version:</strong> All it takes to roll\nback is to deploy a previous version of the Docker image. 
There is\nno worrying about system configuration changes needing to be\nmanually rolled back.</p>\n</li>\n<li>\n<p><strong>Zero downtime rollouts:</strong> In conjunction with container\norchestration tools like Kubernetes, it is easy to roll out new\nimage versions with zero downtime.</p>\n</li>\n<li>\n<p><strong>High availability and horizontal scaling:</strong> Container\norchestration tools like Kubernetes make it easy to distribute the\nsame image to containers on multiple servers, and to add or remove\nreplicas at will or automatically.</p>\n</li>\n<li>\n<p><strong>Sharing a server between multiple applications:</strong> Multiple\napplications, or multiple versions of the same application (e.g. a\ndev and qa deployment), can run on the same server even if they have\nconflicting dependencies, since their runtime environments are\ncompletely separate.</p>\n</li>\n<li>\n<p><strong>Isolating applications:</strong> When multiple applications are deployed\nto a server in containers, they are isolated from one another.\nContainer security means each has its own file system, processes,\nand users, so there is less risk that they interfere with each other,\nintentionally or otherwise. 
When data <em>does</em> need to be shared between\napplications, parts of the host file system can be mounted into\nmultiple containers, but this is something you have full control\nover.</p>\n</li>\n</ul>\n<p>For more information, see:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/03/continuous-integration/\">Continuous Integration: An Overview</a></li>\n<li><a href=\"https://docs.microsoft.com/en-us/dotnet/standard/containerized-lifecycle-architecture/docker-application-lifecycle/containers-foundation-for-devops-collaboration\">Containers as the foundation for DevOps collaboration</a></li>\n<li><a href=\"https://www.sumologic.com/blog/devops/how-containerization-enables-devops/\">Docker and DevOps -- Enabling DevOps Teams Through Containerization</a>.</li>\n</ul>\n<h1 id=\"implementing-containers-into-your-devops-workflow\">Implementing Containers into Your DevOps Workflow</h1>\n<p>Containers can be integrated into your DevOps toolchain incrementally.\nOften it makes sense to start with the build environment, and then move\non to the deployment environment. This is a very broad overview of the\nsteps for a simple approach, without delving into the technical details\nvery much or covering all the possible variations.</p>\n<h3 id=\"requirements\">Requirements</h3>\n<ul>\n<li>Docker Engine installed on build servers and/or application servers</li>\n<li>Access to a Docker Registry. This is where Docker images are stored\nand pulled. 
There are numerous services that provide registries, and\nit's also easy to run your own.</li>\n</ul>\n<h3 id=\"containerizing-the-build-environment\">Containerizing the build environment</h3>\n<p>Many CI/CD systems now include built-in Docker support or easily enable\nit through plugins, but <code>docker</code> is a command-line application which\ncan be called from any build script even if your CI/CD system does not\nhave explicit support.</p>\n<ol>\n<li>\n<p>Determine your build environment requirements and write\na <code>Dockerfile</code>, the specification used to build an image for build\ncontainers, basing it on an existing Docker image. If you\nalready use a configuration management tool, you can use it within\nthe Dockerfile. Always specify precise versions of base images and\ninstalled packages so that image builds are consistent and upgrades\nare deliberate.</p>\n</li>\n<li>\n<p>Build the image using <code>docker build</code> and push it to the Docker\nregistry using <code>docker push</code>.</p>\n</li>\n<li>\n<p>Create a <code>Dockerfile</code> for the application that is based on the build\nimage (specify the exact version of the base build image). This file\nbuilds the application, adds any required runtime dependencies that\naren't in the build image, and tests the application. A multi-stage\n<code>Dockerfile</code> can be used if you don't want the application\ndeployment image to include all the build dependencies.</p>\n</li>\n<li>\n<p>Modify CI build scripts to build the application image and push it\nto the Docker registry. 
The image should be tagged with the build\nnumber, and possibly additional information such as the name of the\nbranch.</p>\n</li>\n<li>\n<p>If you are not yet ready to deploy with Docker, you can extract the\nbuild artifacts from the resulting Docker image.</p>\n</li>\n</ol>\n<p>It is best to <em>also</em> integrate building the build image itself into your\nDevOps automation tools.</p>\n<h3 id=\"containerizing-deployment\">Containerizing deployment</h3>\n<p>This can be easier if your CD tool has support for Docker, but that is\nby no means necessary. We also recommend deploying to a container\norchestration system such as Kubernetes in most cases.</p>\n<p>Half the work has already been done, since the build process creates and\npushes an image containing the application and its environment.</p>\n<ul>\n<li>\n<p>If using Docker directly, it's now a matter of updating deployment\nscripts to use <code>docker run</code> on the application server with the\nimage and tag that was pushed in the previous section (after\nstopping any existing container). Ideally your application accepts\nits configuration via environment variables, in which case you use\nthe <code>-e</code> argument to specify those values depending on which\nstage is being deployed. If a configuration file is used, write it\nto the host file system and then use the <code>-v</code> argument to mount\nit at the correct path in the container.</p>\n</li>\n<li>\n<p>If using a container orchestration system such as Kubernetes, you\nwill typically have the deployment script connect to the\norchestration API endpoint to trigger an image update (e.g. 
using\n<code>kubectl set image</code>, a Helm chart, or better yet, a\n<code>kustomization</code>).</p>\n</li>\n</ul>\n<p>Once deployed, tools such as Prometheus are well suited to Docker\ncontainer monitoring and alerting, but this can be plugged into existing\nmonitoring systems as well.</p>\n<p>FP Complete has implemented this kind of DevOps workflow, and\nsignificantly more complex ones, for many clients and would love to\ncount you among them! <a href=\"https://tech.fpcomplete.com/contact-us/\">Contact us about our DevOps services</a>.</p>\n<p>For more information, see <a href=\"https://techbeacon.com/how-secure-container-lifecycle\">How to secure the container\nlifecycle</a> and <a href=\"https://tech.fpcomplete.com/blog/2017/01/containerize-legacy-app/\">Containerizing\na legacy application: an\noverview</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
        "slug": "how-to-implement-containers-to-streamline-your-devops-workflow",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "How to Implement Containers to Streamline Your DevOps Workflow",
        "description": "Many technology companies have been rapidly implementing Docker Containers to enhance their continuous delivery pipeline. However, implementing containers into your DevOps workflow can be difficult. Learn how to execute this process efficiently and securely here. ",
        "updated": null,
        "date": "2018-01-31T08:00:00Z",
        "year": 2018,
        "month": 1,
        "day": 31,
        "taxonomies": {
          "tags": [
            "devops",
            "docker"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "blogimage": "/images/blog-listing/container.png"
        },
        "path": "/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
        "components": [
          "blog",
          "how-to-implement-containers-to-streamline-your-devops-workflow"
        ],
        "summary": null,
        "toc": [
          {
            "level": 1,
            "id": "what-are-docker-containers",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#what-are-docker-containers",
            "title": "What are Docker Containers?",
            "children": []
          },
          {
            "level": 1,
            "id": "containers-vs-virtual-machines-vms",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containers-vs-virtual-machines-vms",
            "title": "Containers vs Virtual Machines (VMs)",
            "children": []
          },
          {
            "level": 1,
            "id": "how-docker-containers-enhance-continuous-delivery-pipelines",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#how-docker-containers-enhance-continuous-delivery-pipelines",
            "title": "How Docker Containers Enhance Continuous Delivery Pipelines",
            "children": [
              {
                "level": 3,
                "id": "builds",
                "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#builds",
                "title": "Builds",
                "children": []
              },
              {
                "level": 3,
                "id": "deployment",
                "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#deployment",
                "title": "Deployment",
                "children": []
              }
            ]
          },
          {
            "level": 1,
            "id": "implementing-containers-into-your-devops-workflow",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#implementing-containers-into-your-devops-workflow",
            "title": "Implementing Containers into Your DevOps Workflow",
            "children": [
              {
                "level": 3,
                "id": "requirements",
                "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#requirements",
                "title": "Requirements",
                "children": []
              },
              {
                "level": 3,
                "id": "containerizing-the-build-environment",
                "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containerizing-the-build-environment",
                "title": "Containerizing the build environment",
                "children": []
              },
              {
                "level": 3,
                "id": "containerizing-deployment",
                "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containerizing-deployment",
                "title": "Containerizing deployment",
                "children": []
              }
            ]
          }
        ],
        "word_count": 1754,
        "reading_time": 9,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/",
            "title": "Controlling access to Nomad clusters"
          }
        ]
      },
      {
        "relative_path": "blog/signs-your-business-needs-a-devops-consultant.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/signs-your-business-needs-a-devops-consultant/",
        "slug": "signs-your-business-needs-a-devops-consultant",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Signs Your Business Needs a DevOps Consultant",
        "description": "Today’s business challenges cause issues with traditional deployment models. Find out why a DevOps consultant may be right for you. ",
        "updated": null,
        "date": "2018-01-18T15:06:00Z",
        "year": 2018,
        "month": 1,
        "day": 18,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "insights",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/signs-your-business-needs-a-devops-consultant.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/signs-your-business-needs-a-devops-consultant/",
        "components": [
          "blog",
          "signs-your-business-needs-a-devops-consultant"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/devops-value-how-to-measure-the-success-of-devops.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/devops-value-how-to-measure-the-success-of-devops/",
        "slug": "devops-value-how-to-measure-the-success-of-devops",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps Value: How to Measure the Success of DevOps",
        "description": "Faster time to market and lower failure rate are the beginning of the many benefits DevOps offers companies. Discover the measurable metrics and KPIs, as well as the true business value DevOps offers.",
        "updated": null,
        "date": "2018-01-04T13:51:00Z",
        "year": 2018,
        "month": 1,
        "day": 4,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/devops-value-how-to-measure-the-success-of-devops.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/devops-value-how-to-measure-the-success-of-devops/",
        "components": [
          "blog",
          "devops-value-how-to-measure-the-success-of-devops"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/nat-gateways-in-amazon-govcloud.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/nat-gateways-in-amazon-govcloud/",
        "slug": "nat-gateways-in-amazon-govcloud",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "NAT Gateways in Amazon GovCloud",
        "description": "Since AWS GovCloud has no managed NAT gateways, you must set them up yourself. This post, the third in a series, explains how to make it work.",
        "updated": null,
        "date": "2017-11-30T14:25:00Z",
        "year": 2017,
        "month": 11,
        "day": 30,
        "taxonomies": {
          "categories": [
            "devops",
            "kube360"
          ],
          "tags": [
            "devops",
            "aws",
            "govcloud"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Yghor Kerscher",
          "html": "hubspot-blogs/nat-gateways-in-amazon-govcloud.html",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/nat-gateways-in-amazon-govcloud/",
        "components": [
          "blog",
          "nat-gateways-in-amazon-govcloud"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager/",
        "slug": "my-devops-journey-and-how-i-became-a-recovering-it-operations-manager",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "My DevOps Journey and How I Became a Recovering IT Operations Manager",
        "description": "Learn how containerization and automated deployments laid the groundwork for what would become known as DevOps at a Fortune 500 IT company.",
        "updated": null,
        "date": "2017-11-15T13:30:00Z",
        "year": 2017,
        "month": 11,
        "day": 15,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Steve Bogdan",
          "html": "hubspot-blogs/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager/",
        "components": [
          "blog",
          "my-devops-journey-and-how-i-became-a-recovering-it-operations-manager"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/amazon-govcloud-has-no-route53-how-to-solve-this.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/amazon-govcloud-has-no-route53-how-to-solve-this/",
        "slug": "amazon-govcloud-has-no-route53-how-to-solve-this",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Amazon GovCloud has no Route53! How to solve this?",
        "description": "Since Route53 is not yet available on Amazon GovCloud, you need to find a different way to create custom DNS records for your services. We tell you how.",
        "updated": null,
        "date": "2017-11-08T14:12:00Z",
        "year": 2017,
        "month": 11,
        "day": 8,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops",
            "aws",
            "govcloud"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Yghor Kerscher",
          "html": "hubspot-blogs/amazon-govcloud-has-no-route53-how-to-solve-this.html",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/amazon-govcloud-has-no-route53-how-to-solve-this/",
        "components": [
          "blog",
          "amazon-govcloud-has-no-route53-how-to-solve-this"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/intro-to-devops-on-govcloud.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/intro-to-devops-on-govcloud/",
        "slug": "intro-to-devops-on-govcloud",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Intro to Devops on GovCloud",
        "description": "If you have strict compliance criteria that require you to use AWS GovCloud, there are some obstacles you will encounter that we will help you address.",
        "updated": null,
        "date": "2017-10-26T11:02:00Z",
        "year": 2017,
        "month": 10,
        "day": 26,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops",
            "govcloud"
          ]
        },
        "authors": [],
        "extra": {
          "author": "J Boyer",
          "html": "hubspot-blogs/intro-to-devops-on-govcloud.html",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/intro-to-devops-on-govcloud/",
        "components": [
          "blog",
          "intro-to-devops-on-govcloud"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/",
            "title": "Cloud Deployment Models: Advantages and Disadvantages"
          }
        ]
      },
      {
        "relative_path": "blog/credstash.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/08/credstash/",
        "slug": "credstash",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Manage Secrets on AWS with credstash and terraform",
        "description": "Managing secrets is hard. Moving them around securely is even harder. Learn how to get secrets to the Cloud with terraform and credstash.",
        "updated": null,
        "date": "2017-08-28T15:00:00Z",
        "year": 2017,
        "month": 8,
        "day": 28,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops",
            "aws"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Alexey Kuleshevich",
          "html": "hubspot-blogs/credstash.html",
          "blogimage": "/images/blog-listing/aws.png"
        },
        "path": "/blog/2017/08/credstash/",
        "components": [
          "blog",
          "2017",
          "08",
          "credstash"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/functional-programming-and-modern-devops.md",
        "colocated_path": null,
        "content": "<p>In this presentation, Aaron Contorer presents on how modern tools can\nbe used to reach the Engineering sweet spot.</p>\n<iframe width=\"100%\" height=\"315\"\nsrc=\"https://www.youtube.com/embed/ybSBCVhVWs8\" frameborder=\"0\"\nallow=\"accelerometer; autoplay; encrypted-media; gyroscope;\npicture-in-picture\" allowfullscreen></iframe>\n<br>\n<br>\n<h2 id=\"do-you-know-fp-complete\">Do you know FP Complete</h2>\n<p>At FP Complete, we do so many things to help companies that it’s hard to\nencapsulate our impact in a few words. They say a picture is worth a\nthousand words, so a video has to be worth 10,000 words (at\nleast). Therefore, to tell all we can in as little time as possible,\ncheck out our explainer video. It’s only 108 seconds to get the full\nstory of FP Complete.</p>\n<iframe allowfullscreen=\n            \"allowfullscreen\" height=\"315\" src=\n            \"https://www.youtube.com/embed/JCcuSn_lFKs\"\n            target=\"_blank\" width=\n            \"100%\"></iframe>\n<br>\n<br>\n<p>Reach out to us at <a href=\"mailto:[email protected]\">[email protected]</a> if you have suggestions or if\nyou would like to learn more about FP Complete and the services we\noffer.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/functional-programming-and-modern-devops/",
        "slug": "functional-programming-and-modern-devops",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Functional Programming and Modern DevOps",
        "description": "In this presentation, Aaron Contorer presents on how modern tools can be used to reach the Engineering sweet spot.",
        "updated": null,
        "date": "2017-08-11",
        "year": 2017,
        "month": 8,
        "day": 11,
        "taxonomies": {
          "tags": [
            "devops",
            "haskell",
            "insights"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/functional-programming-and-modern-devops/",
        "components": [
          "blog",
          "functional-programming-and-modern-devops"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "do-you-know-fp-complete",
            "permalink": "https://tech.fpcomplete.com/blog/functional-programming-and-modern-devops/#do-you-know-fp-complete",
            "title": "Do you know FP Complete",
            "children": []
          }
        ],
        "word_count": 162,
        "reading_time": 1,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/continuous-integration.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/03/continuous-integration/",
        "slug": "continuous-integration",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Continuous Integration: an overview",
        "description": "Continuous integration makes development teams more productive and releases less stressful. Catch regressions quickly and deploy applications automatically.",
        "updated": null,
        "date": "2017-03-03T17:11:00Z",
        "year": 2017,
        "month": 3,
        "day": 3,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/continuous-integration.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/2017/03/continuous-integration/",
        "components": [
          "blog",
          "2017",
          "03",
          "continuous-integration"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
            "title": "How to Implement Containers to Streamline Your DevOps Workflow"
          }
        ]
      },
      {
        "relative_path": "blog/immutability-docker-haskells-st-type.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/02/immutability-docker-haskells-st-type/",
        "slug": "immutability-docker-haskells-st-type",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Immutability, Docker, and Haskell's ST type",
        "description": "Immutability is a well-known constant in functional programming but relatively new in modern DevOps, and the parallels are worth examining.",
        "updated": null,
        "date": "2017-02-13T15:24:00Z",
        "year": 2017,
        "month": 2,
        "day": 13,
        "taxonomies": {
          "tags": [
            "haskell",
            "docker",
            "devops"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/immutability-docker-haskells-st-type.html",
          "blogimage": "/images/blog-listing/docker.png"
        },
        "path": "/blog/2017/02/immutability-docker-haskells-st-type/",
        "components": [
          "blog",
          "2017",
          "02",
          "immutability-docker-haskells-st-type"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
            "title": "Our history with containerization"
          }
        ]
      },
      {
        "relative_path": "blog/quickcheck.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/01/quickcheck/",
        "slug": "quickcheck",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "QuickCheck and Magic of Testing",
        "description": "Discover the power of random testing in Haskell with QuickCheck. Learn how to use function properties and software specification to write bug-free software.",
        "updated": null,
        "date": "2017-01-24T14:24:00Z",
        "year": 2017,
        "month": 1,
        "day": 24,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Alexey Kuleshevich",
          "html": "hubspot-blogs/quickcheck.html",
          "blogimage": "/images/blog-listing/qa.png"
        },
        "path": "/blog/2017/01/quickcheck/",
        "components": [
          "blog",
          "2017",
          "01",
          "quickcheck"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/syllabus/",
            "title": "Applied Haskell Syllabus"
          }
        ]
      },
      {
        "relative_path": "blog/containerize-legacy-app.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/01/containerize-legacy-app/",
        "slug": "containerize-legacy-app",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Containerizing a legacy application: an overview",
        "description": "Running your legacy apps in Docker containers takes the pain out of deployment and puts you on a path to modern practices. Learn what is involved in containerizing your app.",
        "updated": null,
        "date": "2017-01-12T15:45:00Z",
        "year": 2017,
        "month": 1,
        "day": 12,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/containerize-legacy-app.html",
          "blogimage": "/images/blog-listing/container.png"
        },
        "path": "/blog/2017/01/containerize-legacy-app/",
        "components": [
          "blog",
          "2017",
          "01",
          "containerize-legacy-app"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
            "title": "How to Implement Containers to Streamline Your DevOps Workflow"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
            "title": "Our history with containerization"
          }
        ]
      },
      {
        "relative_path": "blog/devops-best-practices-multifaceted-testing.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-multifaceted-testing/",
        "slug": "devops-best-practices-multifaceted-testing",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Devops best practices: Multifaceted Testing",
        "description": ".",
        "updated": null,
        "date": "2016-11-28T18:00:00Z",
        "year": 2016,
        "month": 11,
        "day": 28,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/devops-best-practices-multifaceted-testing.html",
          "blogimage": "/images/blog-listing/qa.png"
        },
        "path": "/blog/2016/11/devops-best-practices-multifaceted-testing/",
        "components": [
          "blog",
          "2016",
          "11",
          "devops-best-practices-multifaceted-testing"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-at-fpco-2020/",
            "title": "Rust at FP Complete, 2020 update"
          },
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/security/",
            "title": "Security in a DevOps World"
          }
        ]
      },
      {
        "relative_path": "blog/devops-best-practices-immutability.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/",
        "slug": "devops-best-practices-immutability",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Devops best practices: Immutability",
        "description": ".",
        "updated": null,
        "date": "2016-11-13T18:00:00Z",
        "year": 2016,
        "month": 11,
        "day": 13,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/devops-best-practices-immutability.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/2016/11/devops-best-practices-immutability/",
        "components": [
          "blog",
          "2016",
          "11",
          "devops-best-practices-immutability"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/",
            "title": "Controlling access to Nomad clusters"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
            "title": "How to Implement Containers to Streamline Your DevOps Workflow"
          }
        ]
      },
      {
        "relative_path": "blog/docker-demons-pid1-orphans-zombies-signals.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/10/docker-demons-pid1-orphans-zombies-signals/",
        "slug": "docker-demons-pid1-orphans-zombies-signals",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Docker demons: PID-1, orphans, zombies, and signals",
        "description": ".",
        "updated": null,
        "date": "2016-10-05T02:00:00Z",
        "year": 2016,
        "month": 10,
        "day": 5,
        "taxonomies": {
          "tags": [
            "devops",
            "docker"
          ],
          "categories": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/docker-demons-pid1-orphans-zombies-signals.html",
          "blogimage": "/images/blog-listing/docker.png"
        },
        "path": "/blog/2016/10/docker-demons-pid1-orphans-zombies-signals/",
        "components": [
          "blog",
          "2016",
          "10",
          "docker-demons-pid1-orphans-zombies-signals"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/rust/pid1/",
            "title": "Implementing pid1 with Rust and async/await"
          }
        ]
      },
      {
        "relative_path": "blog/docker-split-images.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/12/docker-split-images/",
        "slug": "docker-split-images",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The split-image approach to building minimal runtime Docker images",
        "description": ".",
        "updated": null,
        "date": "2015-12-15T00:00:00Z",
        "year": 2015,
        "month": 12,
        "day": 15,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "devops",
            "docker"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/docker-split-images.html",
          "blogimage": "/images/blog-listing/docker.png"
        },
        "path": "/blog/2015/12/docker-split-images/",
        "components": [
          "blog",
          "2015",
          "12",
          "docker-split-images"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/kubernetes.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/11/kubernetes/",
        "slug": "kubernetes",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Kubernetes for Haskell Services",
        "description": ".",
        "updated": null,
        "date": "2015-11-19T19:00:00Z",
        "year": 2015,
        "month": 11,
        "day": 19,
        "taxonomies": {
          "categories": [
            "devops"
          ],
          "tags": [
            "haskell",
            "kubernetes"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Tim Dysinger",
          "html": "hubspot-blogs/kubernetes.html",
          "blogimage": "/images/blog-listing/kubernetes.png"
        },
        "path": "/blog/2015/11/kubernetes/",
        "components": [
          "blog",
          "2015",
          "11",
          "kubernetes"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/distributing-packages-without-sysadmin.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/distributing-packages-without-sysadmin/",
        "slug": "distributing-packages-without-sysadmin",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Distributing our packages without a sysadmin",
        "description": ".",
        "updated": null,
        "date": "2015-05-13T00:00:00Z",
        "year": 2015,
        "month": 5,
        "day": 13,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "insights",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/distributing-packages-without-sysadmin.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/2015/05/distributing-packages-without-sysadmin/",
        "components": [
          "blog",
          "2015",
          "05",
          "distributing-packages-without-sysadmin"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 52
  },
  {
    "name": "devsecops",
    "slug": "devsecops",
    "path": "/categories/devsecops/",
    "permalink": "https://tech.fpcomplete.com/categories/devsecops/",
    "pages": [
      {
        "relative_path": "blog/cloud-native.md",
        "colocated_path": null,
        "content": "<p>You hear &quot;go Cloud-Native,&quot; but if you're like many, you wonder, &quot;what does that mean, and how can applying a Cloud-Native strategy help my company's Dev Team be more productive?&quot;\nAt a high level, Cloud-Native architecture means adapting to the many new possibilities—but a very different set of architectural constraints—offered by the cloud compared to traditional on-premises infrastructure.</p>\n<p>Cloud-Native architecture optimizes systems and software for the cloud. This optimization creates an efficient way to utilize the platform by streamlining processes and workflows. This is accomplished by harnessing the cloud's inherent strengths: </p>\n<ul>\n<li>its flexibility, </li>\n<li>on-demand infrastructure; and </li>\n<li>robust managed services. </li>\n</ul>\n<p>Cloud-native computing couples these strengths with cloud-optimized technologies such as microservices, containers, and continuous delivery. Cloud-Native takes advantage of the cloud's distributed, scalable, and adaptable nature. By doing this, Cloud-Native maximizes your dev team's focus on writing code, reducing operational tasks, creating business value, and keeping your customers happy by building high-impact applications faster, without compromising on quality. You might even think you can’t do cloud-native without using one of the big cloud providers, but this simply isn’t true: many of cloud-native's benefits come from its approaches and its emphasis on better tooling around automation.</p>\n<h2 id=\"why-move-to-cloud-native-now\">Why Move to Cloud-Native Now?</h2>\n<p><em>#1 - High-Frequency Software Releases</em></p>\n<p>Faster and more frequent updates and new feature releases allow your organization to respond to user needs in near real-time, increasing user retention. For example, new software versions with novel features can be released incrementally and more often as they become available. 
In addition, Cloud-native makes high-frequency software releases possible via continuous integration (CI) and continuous deployment (CD), where full version commits are no longer needed. Instead, one can modify, test, and commit just a few lines of code continuously and automatically to meet changing customer trends, thereby giving your organization an edge. </p>\n<p><em>#2 - Automatic Software Updates</em></p>\n<p>One of the most valuable Cloud-native features is automation. For example, updates are deployed automatically without interfering with core applications or the user base. Automated redundancies for infrastructure can move applications between data centers as needed with little to zero human intervention. Even scalability, testing, and resource allocation can be automated. There are many automation tools available in the marketplace, such as FP Complete Corporation's widely accepted tool, <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>.</p>\n<p><em>#3 - Greater Protection from Software Failures</em></p>\n<p>Isolation of containers is another important cloud-native feature. Software failures and bugs can be traced to a specific microservice version, then rolled back or fixed quickly. Software fixes can be tested in isolation without compromising the stability of the entire application. On the other hand, if there's a widespread failure, automation can restore the application to a previous stable state, minimizing downtime. Automated DevOps testing before code goes to production (for example, linting and software scrubbing) drives faster bug detection and resolution, reducing the risk of bugs in production.</p>\n<h2 id=\"wow-cloud-native-seems-perfect-what-s-the-catch\">WOW – Cloud-Native Seems Perfect – What's the Catch?</h2>\n<p>Switching over to Cloud-Native architecture requires a thorough assessment of your existing application setup. 
The biggest question you and your team need to ask before making any moves is, &quot;should our business modernize our current applications, or should we build new applications from scratch and utilize Cloud-Native development practices?&quot;</p>\n<p>If you choose to modernize your existing application, you will save time and money by capitalizing on the cloud's agility, flexibility, and scalability. Your dev team can retain existing application functionality and business logic, re-architect into a Cloud-Native app, and containerize to utilize the cloud platform's strengths.</p>\n<p>You can also build a net-new application using Cloud-Native development practices instead of upgrading your legacy applications. Building from scratch may make more sense from a corporate culture, risk management, and regulatory compliance standpoint. You keep running old application code unchanged while developing and phasing in a platform. Building new applications also allows dev teams to develop applications free from prior architectural constraints, allowing developers to experiment and deliver innovation to users.</p>\n<h2 id=\"three-essential-tools-for-successful-cloud-native-architecture\">Three Essential Tools for Successful Cloud-Native Architecture</h2>\n<p>Whether you decide to create a new Cloud-Native application or modernize your existing ones, your dev team needs to use these three tools for successful implementation of Cloud-Native Architecture:</p>\n<ol>\n<li><em>Microservices Architecture</em>. </li>\n</ol>\n<p>A cloud-native microservice architecture is considered a &quot;best practice&quot; architectural approach for creating cloud applications because each application makes up a set of services. Each service runs its processes and communicates through clearly defined APIs, which provide good foundations for continuous delivery. 
With microservices, ideally each service is independently deployable. This architecture allows each service to be updated independently without interfering with another service. This results in:</p>\n<ul>\n<li>reduced downtime for users; </li>\n<li>simplified troubleshooting; and </li>\n<li>minimized disruptions even when a problem is identified, \nwhich allows for high-frequency updates and continuous delivery. </li>\n</ul>\n<ol start=\"2\">\n<li><em>Container-based Infrastructure Platform</em>.</li>\n</ol>\n<p>Now that your microservice architecture is broken down into individual container-based services, the next essential tool is a system to manage all those containers automatically, known as a ‘container orchestrator’. The most widely adopted platform is Kubernetes, an open-source system originally developed at Google. It runs containerized applications and automates their deployment, storage, scaling, scheduling, load balancing, and updates, and it monitors containers across clusters of hosts. Kubernetes is supported by all major public cloud providers, including Azure, AWS, Google Cloud Platform, and Oracle Cloud.</p>\n<ol start=\"3\">\n<li><em>CI/CD Pipeline</em>.</li>\n</ol>\n<p>A CI/CD Pipeline is the third essential tool for a cloud-native environment to work seamlessly. Continuous integration and continuous delivery embody a set of operating principles and a collection of practices that allow dev teams to deliver code changes more frequently and reliably; this implementation is known as the CI/CD Pipeline. By automating deployment processes, the CI/CD pipeline allows your dev team to focus on:</p>\n<ul>\n<li>meeting business requirements; </li>\n<li>code quality; and </li>\n<li>security. \nCI/CD tools preserve the environment-specific parameters that must be included with each delivery. 
CI/CD automation then performs any necessary service calls to web servers, databases, and other services that may require a restart or follow other procedures when applications are deployed.</li>\n</ul>\n<h2 id=\"cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use\">Cloud-Native Isn't Plug &amp; Play – Is there a Comprehensive Tool that my Dev Team Can Use?</h2>\n<p>As you can probably guess, countless tools make up the cloud-native architecture. Unfortunately, these tools are complex, require separate authentication, and frequently do not interact with each other. In essence, you are expected to integrate these cloud tools yourself as a user. We at FP Complete became frustrated with this approach. So, to save time and provide a turn-key solution, we created Kube360. Kube360 puts all necessary tools into one easy-to-use toolbox, accessed via a single sign-on and operating as a fully integrated environment. Kube360 combines best practices, technologies, and processes into one complete package, and it has proven an effective tool at multiple customer deployments. In addition, Kube360 supports multiple cloud providers and on-premise infrastructure. Kube360 is vendor agnostic, fully customizable, and has no vendor lock-in.</p>\n<p><strong>Kube360 - Centralized Management</strong>. Kube360 employs centralized management, which increases your dev team's productivity through:</p>\n<ul>\n<li>single-sign-on functionality</li>\n<li>faster installation and setup</li>\n<li>quick access to all tools</li>\n<li>automation of logs, backups, and alerts</li>\n</ul>\n<p>This simplified administration hides frequent login complexities and allows single sign-on through existing company identity management. Kube360 also streamlines tool authentication and access, eliminating many standard security holes. 
In the background, Kube360 automatically runs everyday tasks such as backups, log aggregation, and alerts.</p>\n<p><strong>Kube360 - Automated Features</strong>. Kube360's automated features include:</p>\n<ul>\n<li>automatic backups of the etcd config;</li>\n<li>log aggregation and indexing of all services; and</li>\n<li>an integrated monitoring and alert framework.</li>\n</ul>\n<p><strong>Kube360 - Kubernetes Tooling Features</strong>. Kube360 simplifies Kubernetes management and allows you to take advantage of many cloud-native features such as:</p>\n<ul>\n<li>autoscaling, to stay cost-efficient as demands on your systems grow and shrink;</li>\n<li>high availability;</li>\n<li>health checks; and</li>\n<li>integrated secrets management.</li>\n</ul>\n<p><strong>Kube360 - Service Mesh</strong>.</p>\n<ul>\n<li>Mutual TLS-based encryption within the cluster</li>\n<li>Tracing tools</li>\n<li>Traffic rerouting</li>\n<li>Canary deployments</li>\n</ul>\n<p><strong>Kube360 - Integration</strong>.</p>\n<ul>\n<li>Integrates into existing AWS &amp; Azure infrastructures</li>\n<li>Deploys into existing VPCs</li>\n<li>Leverages existing subnets</li>\n<li>Communicates with components outside of Kube360</li>\n<li>Supports multiple clusters per organization</li>\n<li>Installed by the FP Complete team or by the customer</li>\n</ul>\n<p>As you can see, Kube360 is one of the most comprehensive tools you can rely on for Cloud-Native architecture. Kube360 is your one-stop, fully integrated enterprise Kubernetes ecosystem. It standardizes containerization, software deployment, fault tolerance, auto-scaling, auto-healing, and security by design. Kube360's modular, standardized architecture mitigates proprietary lock-in, high support costs, and obsolescence. In addition, Kube360 delivers a seamless deployment experience for you and your team.\nFind out how Kube360 can make your business more efficient, more reliable, and more secure, all in a fraction of the time. 
Speed up your dev team's productivity - <a href=\"https://tech.fpcomplete.com/contact-us/\">Contact us today!</a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloud-native/",
        "slug": "cloud-native",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Confused about Cloud-Native? Want to speed up your dev team's productivity?",
        "description": "Learn about Cloud-Native architecture.",
        "updated": null,
        "date": "2022-01-17",
        "year": 2022,
        "month": 1,
        "day": 17,
        "taxonomies": {
          "tags": [
            "kubernetes",
            "cloud native"
          ],
          "categories": [
            "devsecops",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete",
          "keywords": "devsecops, devops",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/cloud-native/",
        "components": [
          "blog",
          "cloud-native"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "why-move-to-cloud-native-now",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-native/#why-move-to-cloud-native-now",
            "title": "Why Move to Cloud-Native Now?",
            "children": []
          },
          {
            "level": 2,
            "id": "wow-cloud-native-seems-perfect-what-s-the-catch",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-native/#wow-cloud-native-seems-perfect-what-s-the-catch",
            "title": "WOW – Cloud-Native Seems Perfect – What's the Catch?",
            "children": []
          },
          {
            "level": 2,
            "id": "three-essential-tools-for-successful-cloud-native-architecture",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-native/#three-essential-tools-for-successful-cloud-native-architecture",
            "title": "Three Essential Tools for Successful Cloud-Native Architecture",
            "children": []
          },
          {
            "level": 2,
            "id": "cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
            "permalink": "https://tech.fpcomplete.com/blog/cloud-native/#cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
            "title": "Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?",
            "children": []
          }
        ],
        "word_count": 1482,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 1
  },
  {
    "name": "functional programming",
    "slug": "functional-programming",
    "path": "/categories/functional-programming/",
    "permalink": "https://tech.fpcomplete.com/categories/functional-programming/",
    "pages": [
      {
        "relative_path": "blog/axum-hyper-tonic-tower-part4.md",
        "colocated_path": null,
        "content": "<p>This is the fourth and final post in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li>Today's post: How to combine Axum and Tonic services into a single service</li>\n</ol>\n<h2 id=\"single-port-two-protocols\">Single port, two protocols</h2>\n<p>That heading is a lie. Both an Axum web application and a gRPC server speak the same protocol: HTTP/2. It may be more fair to say they speak different dialects of it. But importantly, it's trivially easy to look at a request and determine whether it wants to talk to the gRPC server or not. gRPC requests will all include the header <code>Content-Type: application/grpc</code>. So our final step today is to write something that can accept both a gRPC <code>Service</code> and a normal <code>Service</code>, and return one unified service. Let's do it! 
For reference, complete code is in <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/server-hybrid.rs\"><code>src/bin/server-hybrid.rs</code></a>.</p>\n<p>Let's start off with our <code>main</code> function, and demonstrate what we want this thing to look like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n    let axum_make_service = axum::Router::new()\n        .route(&quot;&#x2F;&quot;, axum::handler::get(|| async { &quot;Hello world!&quot; }))\n        .into_make_service();\n\n    let grpc_service = tonic::transport::Server::builder()\n        .add_service(EchoServer::new(MyEcho))\n        .into_service();\n\n    let hybrid_make_service = hybrid(axum_make_service, grpc_service);\n\n    let server = hyper::Server::bind(&amp;addr).serve(hybrid_make_service);\n\n    if let Err(e) = server.await {\n        eprintln!(&quot;server error: {}&quot;, e);\n    }\n}\n</code></pre>\n<p>We set up simplistic <code>axum_make_service</code> and <code>grpc_service</code> values, and then use the <code>hybrid</code> function to combine them into a single service. Notice the difference in those names, and the fact that we called <code>into_make_service</code> for the former and <code>into_service</code> for the latter. Believe it or not, that's going to cause us a lot of pain very shortly.</p>\n<p>Anyway, with that yet-to-be-explained <code>hybrid</code> function, spinning up a hybrid server is a piece of cake. But the devil's in the details!</p>\n<p>Also: there are simpler ways of going about the code below using trait objects. I avoided any type erasure techniques, since (1) I thought the code was a bit clearer this way, and (2) it turns into a nicer tutorial in my opinion. 
The one exception is that I <em>am</em> using a trait object for errors, since Hyper itself does so, and it simplifies the code significantly to use the same error representation across services.</p>\n<h1 id=\"defining-hybrid\">Defining <code>hybrid</code></h1>\n<p>Our <code>hybrid</code> function is going to return a <code>HybridMakeService</code> value:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn hybrid&lt;MakeWeb, Grpc&gt;(make_web: MakeWeb, grpc: Grpc) -&gt; HybridMakeService&lt;MakeWeb, Grpc&gt; {\n    HybridMakeService { make_web, grpc }\n}\n\nstruct HybridMakeService&lt;MakeWeb, Grpc&gt; {\n    make_web: MakeWeb,\n    grpc: Grpc,\n}\n</code></pre>\n<p>I'm going to be consistent and verbose with the type variable names throughout. Here, we have the type variables <code>MakeWeb</code> and <code>Grpc</code>. This reflects the difference between what Axum and Tonic provide from an API perspective. We'll need to provide Axum's <code>MakeWeb</code> with connection information in order to get the request-handling <code>Service</code>. With <code>Grpc</code>, we won't have to do that.</p>\n<p>In any event, we're ready to implement our <code>Service</code> for <code>HybridMakeService</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;ConnInfo, MakeWeb, Grpc&gt; Service&lt;ConnInfo&gt; for HybridMakeService&lt;MakeWeb, Grpc&gt;\nwhere\n    MakeWeb: Service&lt;ConnInfo&gt;,\n    Grpc: Clone,\n{\n    &#x2F;&#x2F; ...\n}\n</code></pre>\n<p>We have the two expected type variables <code>MakeWeb</code> and <code>Grpc</code>, as well as <code>ConnInfo</code>, to represent whatever connection information we're given. <code>Grpc</code> won't care about that at all, but the <code>ConnInfo</code> must match up with what <code>MakeWeb</code> is receiving. Therefore, we have the bound <code>MakeWeb: Service&lt;ConnInfo&gt;</code>. 
The <code>Grpc: Clone</code> bound will make sense shortly.</p>\n<p>When we receive an incoming connection, we'll need to do two things:</p>\n<ul>\n<li>Get a new <code>Service</code> from <code>MakeWeb</code>. Doing this may happen asynchronously, and may result in an error.\n<ul>\n<li><strong>SIDE NOTE</strong> If you remember the actual implementation of Axum, we know for a fact that neither of these is true. Getting a <code>Service</code> from an Axum <code>IntoMakeService</code> will always succeed, and never does any async work. But there are no APIs in Axum exposing this fact, so we're stuck behind the <code>Service</code> API.</li>\n</ul>\n</li>\n<li>Clone the <code>Grpc</code> we already have.</li>\n</ul>\n<p>Once we have the new <code>Web</code> <code>Service</code> and the cloned <code>Grpc</code>, we'll wrap these up into a new <code>struct</code>, <code>HybridService</code>. We're also going to need some help to perform the necessary async actions, so we'll create a new helper <code>Future</code> type. This all looks like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">type Response = HybridService&lt;MakeWeb::Response, Grpc&gt;;\ntype Error = MakeWeb::Error;\ntype Future = HybridMakeServiceFuture&lt;MakeWeb::Future, Grpc&gt;;\n\nfn poll_ready(\n    &amp;mut self,\n    cx: &amp;mut std::task::Context,\n) -&gt; std::task::Poll&lt;Result&lt;(), Self::Error&gt;&gt; {\n    self.make_web.poll_ready(cx)\n}\n\nfn call(&amp;mut self, conn_info: ConnInfo) -&gt; Self::Future {\n    HybridMakeServiceFuture {\n        web_future: self.make_web.call(conn_info),\n        grpc: Some(self.grpc.clone()),\n    }\n}\n</code></pre>\n<p>Note that we're deferring to <code>self.make_web</code> to say it's ready and passing along its errors. 
Let's tie this piece off by looking at <code>HybridMakeServiceFuture</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[pin_project]\nstruct HybridMakeServiceFuture&lt;WebFuture, Grpc&gt; {\n    #[pin]\n    web_future: WebFuture,\n    grpc: Option&lt;Grpc&gt;,\n}\n\nimpl&lt;WebFuture, Web, WebError, Grpc&gt; Future for HybridMakeServiceFuture&lt;WebFuture, Grpc&gt;\nwhere\n    WebFuture: Future&lt;Output = Result&lt;Web, WebError&gt;&gt;,\n{\n    type Output = Result&lt;HybridService&lt;Web, Grpc&gt;, WebError&gt;;\n\n    fn poll(self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut std::task::Context) -&gt; Poll&lt;Self::Output&gt; {\n        let this = self.project();\n        match this.web_future.poll(cx) {\n            Poll::Pending =&gt; Poll::Pending,\n            Poll::Ready(Err(e)) =&gt; Poll::Ready(Err(e)),\n            Poll::Ready(Ok(web)) =&gt; Poll::Ready(Ok(HybridService {\n                web,\n                grpc: this.grpc.take().expect(&quot;Cannot poll twice!&quot;),\n            })),\n        }\n    }\n}\n</code></pre>\n<p>We need to pull in <a href=\"https://lib.rs/crates/pin-project\"><code>pin_project</code></a> to allow us to project the pinned web future inside our <code>poll</code> implementation. (If you're not familiar with <code>pin_project</code>, don't worry, we'll describe things later on with <code>HybridFuture</code>.) 
When we poll <code>web_future</code>, we could end up in one of three states:</p>\n<ul>\n<li><code>Pending</code>: the <code>MakeWeb</code> isn't ready, so we aren't ready either</li>\n<li><code>Ready(Err(e))</code>: the <code>MakeWeb</code> failed, so we pass along the error</li>\n<li><code>Ready(Ok(web))</code>: the <code>MakeWeb</code> is successful, so package up the new <code>web</code> value with the <code>grpc</code> value</li>\n</ul>\n<p>There's some funny business with that <code>this.grpc.take()</code> to get the cloned <code>Grpc</code> value out of the <code>Option</code>. <code>Future</code>s have an invariant that, once they return <code>Ready</code>, they cannot be polled again. Therefore, it's safe to assume that <code>take</code> will only ever be called once. But all of this pain could be avoided if Axum exposed an <code>into_service</code> method instead.</p>\n<h2 id=\"hybridservice\"><code>HybridService</code></h2>\n<p>The previous types will ultimately produce a <code>HybridService</code>. Let's look at what that is:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct HybridService&lt;Web, Grpc&gt; {\n    web: Web,\n    grpc: Grpc,\n}\n\nimpl&lt;Web, Grpc, WebBody, GrpcBody&gt; Service&lt;Request&lt;Body&gt;&gt; for HybridService&lt;Web, Grpc&gt;\nwhere\n    Web: Service&lt;Request&lt;Body&gt;, Response = Response&lt;WebBody&gt;&gt;,\n    Grpc: Service&lt;Request&lt;Body&gt;, Response = Response&lt;GrpcBody&gt;&gt;,\n    Web::Error: Into&lt;Box&lt;dyn std::error::Error + Send + Sync + &#x27;static&gt;&gt;,\n    Grpc::Error: Into&lt;Box&lt;dyn std::error::Error + Send + Sync + &#x27;static&gt;&gt;,\n{\n    &#x2F;&#x2F; ...\n}\n</code></pre>\n<p>This <code>HybridService</code> will take <code>Request&lt;Body&gt;</code> as input. 
The underlying <code>Web</code> and <code>Grpc</code> will also take <code>Request&lt;Body&gt;</code> as input, but they'll produce slightly different output: either <code>Response&lt;WebBody&gt;</code> or <code>Response&lt;GrpcBody&gt;</code>. We're going to need to somehow unify those body representations. As mentioned above, we're going to use trait objects for error handling, so no unification there is necessary.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">type Response = Response&lt;HybridBody&lt;WebBody, GrpcBody&gt;&gt;;\ntype Error = Box&lt;dyn std::error::Error + Send + Sync + &#x27;static&gt;;\ntype Future = HybridFuture&lt;Web::Future, Grpc::Future&gt;;\n</code></pre>\n<p>The associated <code>Response</code> type is going to be a <code>Response&lt;...&gt;</code> as well, but its body is going to be the <code>HybridBody&lt;WebBody, GrpcBody&gt;</code> type. We'll get to that later. Similarly, we have two different <code>Future</code>s that may get called, depending on the kind of request. We need to unify over that with a <code>HybridFuture</code> type.</p>\n<p>Next, let's look at <code>poll_ready</code>. We need to check for both <code>Web</code> and <code>Grpc</code> being ready for a new request. And each check can result in one of three cases: <code>Pending</code>, <code>Ready(Err)</code>, or <code>Ready(Ok)</code>. 
This function is all about pattern matching and unifying the error representation using <code>.into()</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn poll_ready(\n    &amp;mut self,\n    cx: &amp;mut std::task::Context&lt;&#x27;_&gt;,\n) -&gt; std::task::Poll&lt;Result&lt;(), Self::Error&gt;&gt; {\n    match self.web.poll_ready(cx) {\n        Poll::Ready(Ok(())) =&gt; match self.grpc.poll_ready(cx) {\n            Poll::Ready(Ok(())) =&gt; Poll::Ready(Ok(())),\n            Poll::Ready(Err(e)) =&gt; Poll::Ready(Err(e.into())),\n            Poll::Pending =&gt; Poll::Pending,\n        },\n        Poll::Ready(Err(e)) =&gt; Poll::Ready(Err(e.into())),\n        Poll::Pending =&gt; Poll::Pending,\n    }\n}\n</code></pre>\n<p>And finally, we can see <code>call</code>, where the real logic we're trying to accomplish lives. This is where we get to look at the request and determine where to route it:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn call(&amp;mut self, req: Request&lt;Body&gt;) -&gt; Self::Future {\n    if req.headers().get(&quot;content-type&quot;).map(|x| x.as_bytes()) == Some(b&quot;application&#x2F;grpc&quot;) {\n        HybridFuture::Grpc(self.grpc.call(req))\n    } else {\n        HybridFuture::Web(self.web.call(req))\n    }\n}\n</code></pre>\n<p>Amazing. All of this work for essentially 5 lines of meaningful code!</p>\n<h2 id=\"hybridfuture\"><code>HybridFuture</code></h2>\n<p>That's it, we're at the end! The final type we're going to analyze in this series is <code>HybridFuture</code>. (There's also a <code>HybridBody</code> type, but it's similar enough to <code>HybridFuture</code> that it doesn't warrant its own explanation.) 
The <code>struct</code>'s definition is:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[pin_project(project = HybridFutureProj)]\nenum HybridFuture&lt;WebFuture, GrpcFuture&gt; {\n    Web(#[pin] WebFuture),\n    Grpc(#[pin] GrpcFuture),\n}\n</code></pre>\n<p>Like before, we're using <code>pin_project</code>. This time, let's explore why. The interface for the <code>Future</code> trait requires pinned pointers in memory. Specifically, the first argument to <code>poll</code> is <code>self: Pin&lt;&amp;mut Self&gt;</code>. Rust itself never gives any guarantees about object permanence, and that's absolutely critical to writing an async runtime system.</p>\n<p>The <code>poll</code> method on <code>HybridFuture</code> is therefore going to receive an argument of type <code>Pin&lt;&amp;mut HybridFuture&gt;</code>. The problem is that we need to call the <code>poll</code> method on the underlying <code>WebBody</code> or <code>GrpcBody</code>. Assuming we have the <code>Web</code> variant, the problem we face is that pattern matching on <code>HybridFuture</code> will give us a <code>&amp;WebFuture</code> or <code>&amp;mut WebFuture</code>. It won't give us a <code>Pin&lt;&amp;mut WebFuture&gt;</code>, which is what we need!</p>\n<p><code>pin_project</code> makes a projected data type, and provides a method <code>.project()</code> on the original that gives us those pinned mutable references instead. 
This allows us to implement the <code>Future</code> trait for <code>HybridFuture</code> correctly, like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;WebFuture, GrpcFuture, WebBody, GrpcBody, WebError, GrpcError&gt; Future\n    for HybridFuture&lt;WebFuture, GrpcFuture&gt;\nwhere\n    WebFuture: Future&lt;Output = Result&lt;Response&lt;WebBody&gt;, WebError&gt;&gt;,\n    GrpcFuture: Future&lt;Output = Result&lt;Response&lt;GrpcBody&gt;, GrpcError&gt;&gt;,\n    WebError: Into&lt;Box&lt;dyn std::error::Error + Send + Sync + &#x27;static&gt;&gt;,\n    GrpcError: Into&lt;Box&lt;dyn std::error::Error + Send + Sync + &#x27;static&gt;&gt;,\n{\n    type Output = Result&lt;\n        Response&lt;HybridBody&lt;WebBody, GrpcBody&gt;&gt;,\n        Box&lt;dyn std::error::Error + Send + Sync + &#x27;static&gt;,\n    &gt;;\n\n    fn poll(self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut std::task::Context) -&gt; Poll&lt;Self::Output&gt; {\n        match self.project() {\n            HybridFutureProj::Web(a) =&gt; match a.poll(cx) {\n                Poll::Ready(Ok(res)) =&gt; Poll::Ready(Ok(res.map(HybridBody::Web))),\n                Poll::Ready(Err(e)) =&gt; Poll::Ready(Err(e.into())),\n                Poll::Pending =&gt; Poll::Pending,\n            },\n            HybridFutureProj::Grpc(b) =&gt; match b.poll(cx) {\n                Poll::Ready(Ok(res)) =&gt; Poll::Ready(Ok(res.map(HybridBody::Grpc))),\n                Poll::Ready(Err(e)) =&gt; Poll::Ready(Err(e.into())),\n                Poll::Pending =&gt; Poll::Pending,\n            },\n        }\n    }\n}\n</code></pre>\n<p>We unify together the successful response bodies with the <code>HybridBody</code> <code>enum</code> and use a trait object for error handling. And now we're presenting a single unified type for both types of requests. Huzzah!</p>\n<h2 id=\"conclusions\">Conclusions</h2>\n<p>Thank you dear reader for getting through these posts. 
I hope it was helpful. I definitely felt more comfortable with the Tower/Hyper ecosystem after diving into these details like this. Let's sum up some highlights from this series:</p>\n<ul>\n<li>Tower provides a Rusty interface called <code>Service</code> for async functions from inputs to outputs, or requests to responses, which may fail\n<ul>\n<li>Don't forget, there are two levels of async behavior in this interface: checking whether the <code>Service</code> is ready and then waiting for it to complete processing</li>\n</ul>\n</li>\n<li>HTTP itself necessitates two levels of async functions: a <code>type InnerService = Request -&gt; IO Response</code> for individual requests, and <code>type OuterService = ConnectionInfo -&gt; IO InnerService</code> for the overall connection</li>\n<li>Hyper provides a concrete server implementation that can accept things that look like <code>OuterService</code> and run them\n<ul>\n<li>It uses a lot of traits, some of which are not publicly exposed, to generalize</li>\n<li>It provides significant flexibility in the request and response body representation</li>\n<li>The helper functions <code>service_fn</code> and <code>make_service_fn</code> are a common way to create the two levels of <code>Service</code> necessary</li>\n</ul>\n</li>\n<li>Axum is a lightweight framework sitting on top of Hyper, and exposing a lot of its interface</li>\n<li>gRPC is an HTTP/2 based protocol which can be hosted via Hyper using the Tonic library</li>\n<li>Dispatching between an Axum service and gRPC is conceptually easy: just check the <code>content-type</code> header to see if something is a gRPC request</li>\n<li>But to make that happen, we need a bunch of helper &quot;hybrid&quot; types to unify the different types between Axum and Tonic</li>\n<li>A lot of the time, you can get away with trait objects to enable type erasure, but hybridizing <code>Either</code>-style <code>enum</code>s work as well\n<ul>\n<li>While they're more verbose, they may 
also be clearer</li>\n<li>There's also a potential performance gain by avoiding dynamic dispatch</li>\n</ul>\n</li>\n</ul>\n<p>If you want to review it, remember that a complete project is available on GitHub at <a href=\"https://github.com/snoyberg/tonic-example\">https://github.com/snoyberg/tonic-example</a>.</p>\n<p>Finally, some more subjective takeaways from me:</p>\n<ul>\n<li>I'm overall liking Axum, and I'm already using it for a new client project.</li>\n<li>I do wish it was a little higher level, and that the type errors weren't quite as intimidating. I think there may be some room in this space for more aggressive type erasure-focused frameworks, exchanging a bit of runtime performance for significantly simpler ergonomics.</li>\n<li>I'm also looking at rewriting our Zehut product to leverage Axum. So far, it's gone pretty well, but other responsibilities have taken me off of that work for the foreseeable future. And there are some <a href=\"https://github.com/tokio-rs/axum/issues/200\">painful compilation issues</a> to be aware of.\n<ul>\n<li><strong>UPDATE January 23, 2022</strong> As <a href=\"https://twitter.com/rbtcollins/status/1484559351490744330?s=21\">pointed out on Twitter</a>, Axum has fixed this issue in newer versions. I've actually already used this improvement in other projects since then, but forgot to update the blog post. Thanks for the reminder Robert!</li>\n</ul>\n</li>\n<li>I do miss strongly typed routes, but overall I'd rather use something like Axum than push farther with <code>routetype</code>. In the future, though, I may look into providing some <code>routetype</code>/<code>axum</code> bridge.</li>\n</ul>\n<p>If this kind of content was helpful, and you're interested in more in the future, please consider <a href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\">subscribing to our blog</a>. 
Let me know (<a href=\"https://twitter.com/snoyberg\">on Twitter</a> or elsewhere) if you have any requests for additional content like this.</p>\n<p>If you're looking for more Rust content, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
        "slug": "axum-hyper-tonic-tower-part4",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4",
        "description": "Part 4 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
        "updated": null,
        "date": "2021-09-20",
        "year": 2021,
        "month": 9,
        "day": 20,
        "taxonomies": {
          "tags": [
            "rust"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/axum-hyper-tonic-tower-part4.png",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/axum-hyper-tonic-tower-part4/",
        "components": [
          "blog",
          "axum-hyper-tonic-tower-part4"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "single-port-two-protocols",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#single-port-two-protocols",
            "title": "Single port, two protocols",
            "children": []
          },
          {
            "level": 1,
            "id": "defining-hybrid",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#defining-hybrid",
            "title": "Defining hybrid",
            "children": [
              {
                "level": 2,
                "id": "hybridservice",
                "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#hybridservice",
                "title": "HybridService",
                "children": []
              },
              {
                "level": 2,
                "id": "hybridfuture",
                "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#hybridfuture",
                "title": "HybridFuture",
                "children": []
              },
              {
                "level": 2,
                "id": "conclusions",
                "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#conclusions",
                "title": "Conclusions",
                "children": []
              }
            ]
          }
        ],
        "word_count": 2427,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3"
          }
        ]
      },
      {
        "relative_path": "blog/axum-hyper-tonic-tower-part3.md",
        "colocated_path": null,
        "content": "<p>This is the third of four posts in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li>Today's post: Demonstration of Tonic for a gRPC client/server</li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<h2 id=\"tonic-and-grpc\">Tonic and gRPC</h2>\n<p>Tonic is a gRPC client and server library. gRPC is a protocol that sits on top of HTTP/2, and therefore Tonic is built on top of Hyper (and Tower). I already mentioned at the beginning of this series that my ultimate goal is to be able to serve hybrid web/gRPC services over a single port. But for now, let's get comfortable with a standard Tonic client/server application. We're going to create an echo server, which provides an endpoint that will repeat back whatever message you send it.</p>\n<p>The full code for this is <a href=\"https://github.com/snoyberg/tonic-example\">available on GitHub</a>. 
The repository is structured as a single package with three different crates:</p>\n<ul>\n<li>A library crate providing the protobuf definitions and Tonic-generated server and client items</li>\n<li>A binary crate providing a simple client tool</li>\n<li>A binary crate providing the server executable</li>\n</ul>\n<p>The first file we'll look at is the protobuf definition of our service, located in <code>proto/echo.proto</code>:</p>\n<pre><code>syntax = &quot;proto3&quot;;\n\npackage echo;\n\nservice Echo {\n  rpc Echo (EchoRequest) returns (EchoReply) {}\n}\n\nmessage EchoRequest {\n  string message = 1;\n}\n\nmessage EchoReply {\n  string message = 1;\n}\n</code></pre>\n<p>Even if you're not familiar with protobuf, hopefully the example above is fairly self-explanatory. We need a <code>build.rs</code> file to use <code>tonic_build</code> to compile this file:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    tonic_build::configure()\n        .compile(&amp;[&quot;proto&#x2F;echo.proto&quot;], &amp;[&quot;proto&quot;])\n        .unwrap();\n}\n</code></pre>\n<p>And finally, we have our mammoth <code>src/lib.rs</code> providing all the items we'll need for implementing our client and server:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">tonic::include_proto!(&quot;echo&quot;);\n</code></pre>\n<p>There's nothing terribly interesting about the client. It's a typical <code>clap</code>-based CLI tool that uses Tokio and Tonic. You can <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/client.rs\">read the source on GitHub</a>.</p>\n<p>Let's move onto the important part: the server.</p>\n<h2 id=\"the-server\">The server</h2>\n<p>The Tonic code we put into our library crate generates an <code>Echo</code> trait. We need to implement that trait on some type to make our gRPC service. This isn't directly related to our topic today. 
It's also fairly straightforward Rust code. I've so far found the experience of writing client/server apps with Tonic to be a real pleasure, specifically because of how easy these kinds of implementations are:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use tonic_example::echo_server::{Echo, EchoServer};\nuse tonic_example::{EchoReply, EchoRequest};\n\npub struct MyEcho;\n\n#[async_trait]\nimpl Echo for MyEcho {\n    async fn echo(\n        &amp;self,\n        request: tonic::Request&lt;EchoRequest&gt;,\n    ) -&gt; Result&lt;tonic::Response&lt;EchoReply&gt;, tonic::Status&gt; {\n        Ok(tonic::Response::new(EchoReply {\n            message: format!(&quot;Echoing back: {}&quot;, request.get_ref().message),\n        }))\n    }\n}\n</code></pre>\n<p>If you look in the <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/server.rs\">source on GitHub</a>, there are two different implementations of <code>main</code>, one of them commented out. That one's the more straightforward approach, so let's start with that:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() -&gt; anyhow::Result&lt;()&gt; {\n    let addr = ([0, 0, 0, 0], 3000).into();\n\n    tonic::transport::Server::builder()\n        .add_service(EchoServer::new(MyEcho))\n        .serve(addr)\n        .await?;\n\n    Ok(())\n}\n</code></pre>\n<p>This uses Tonic's <code>Server::builder</code> to create a new <code>Server</code> value. 
It then calls <code>add_service</code>, which looks like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;L&gt; Server&lt;L&gt; {\n    pub fn add_service&lt;S&gt;(&amp;mut self, svc: S) -&gt; Router&lt;S, Unimplemented, L&gt;\n    where\n        S: Service&lt;Request&lt;Body&gt;, Response = Response&lt;BoxBody&gt;&gt;\n            + NamedService\n            + Clone\n            + Send\n            + &#x27;static,\n        S::Future: Send + &#x27;static,\n        S::Error: Into&lt;crate::Error&gt; + Send,\n        L: Clone\n}\n</code></pre>\n<p>We've got another <code>Router</code>. This works like in Axum, but it's for routing gRPC calls to the appropriate named service. Let's talk through the type parameters and traits here:</p>\n<ul>\n<li><code>L</code> represents the <em>layer</em>, or the middlewares added to this server. It will default to <a href=\"https://docs.rs/tower/0.4.8/tower/layer/util/struct.Identity.html\"><code>Identity</code></a>, to represent the no middleware case.</li>\n<li><code>S</code> is the new service we're trying to add, which in our case is an <code>EchoServer</code>.</li>\n<li>Our service needs to accept the ever-familiar <code>Request&lt;Body&gt;</code> type, and respond with a <code>Response&lt;BoxBody&gt;</code>. (We'll discuss <code>BoxBody</code> on its own below.) It also needs to be <a href=\"https://docs.rs/tonic/0.5.2/tonic/transport/trait.NamedService.html\"><code>NamedService</code></a> (for routing).</li>\n<li>As usual, there are a bunch of <code>Clone</code>, <code>Send</code>, and <code>'static</code> bounds too, and requirements on the error representation.</li>\n</ul>\n<p>As complicated as all of that appears, the nice thing is that we don't really need to deal with those details in a simple Tonic application. 
Instead, we simply call the <code>serve</code> method and everything works like magic.</p>\n<p>But we're trying to go off the beaten path and get a better understanding of how this interacts with Hyper. So let's go deeper!</p>\n<h2 id=\"into-service\"><code>into_service</code></h2>\n<p>In addition to the <code>serve</code> method, Tonic's <code>Router</code> type also provides an <a href=\"https://docs.rs/tonic/0.5.2/tonic/transport/server/struct.Router.html#method.into_service\"><code>into_service</code> method</a>. I'm not going to go into all of its glory here, since it doesn't add much to the discussion but adds a lot to the reading you'll have to do.  Instead, suffice it to say that</p>\n<ul>\n<li><code>into_service</code> returns a <code>RouterService&lt;S&gt;</code> value</li>\n<li><code>S</code> must implement <code>Service&lt;Request&lt;Body&gt;, Response = Response&lt;ResBody&gt;&gt;</code></li>\n<li><code>ResBody</code> is a type that Hyper can use for response bodies</li>\n</ul>\n<p>OK, cool? Now we can write our slightly more long-winded <code>main</code> function. First we create our <code>RouterService</code> value:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let grpc_service = tonic::transport::Server::builder()\n    .add_service(EchoServer::new(MyEcho))\n    .into_service();\n</code></pre>\n<p>But now we have a bit of a problem. Hyper expects a &quot;make service&quot; or an &quot;app factory&quot;, and instead we just have a request handling service. 
So we need to go back to Hyper and use <code>make_service_fn</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_grpc_service = make_service_fn(move |_conn| {\n    let grpc_service = grpc_service.clone();\n    async { Ok::&lt;_, Infallible&gt;(grpc_service) }\n});\n</code></pre>\n<p>Notice that we need to clone a new copy of the <code>grpc_service</code>, and we need to play all the games with splitting up the closure and the async block, plus <code>Infallible</code>, that we saw before. But now, with <em>that</em> in place, we can launch our gRPC service:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let server = hyper::Server::bind(&amp;addr).serve(make_grpc_service);\n\nif let Err(e) = server.await {\n    eprintln!(&quot;server error: {}&quot;, e);\n}\n</code></pre>\n<p>If you want to play with this, you can clone <a href=\"https://github.com/snoyberg/tonic-example\">the tonic-example repo</a> and then:</p>\n<ul>\n<li>Run <code>cargo run --bin server</code> in one terminal</li>\n<li>Run <code>cargo run --bin client &quot;Hello world!&quot;</code> in another</li>\n</ul>\n<p>However, trying to open up http://localhost:3000 in your browser isn't going to work out too well. This server will only handle gRPC connections, not standard web browser requests, RESTful APIs, etc. We've got one final step now: writing something that can handle both Axum and Tonic services and route to them appropriately.</p>\n<h2 id=\"boxbody\"><code>BoxBody</code></h2>\n<p>Let's look into that <code>BoxBody</code> type in a little more detail. 
We're using the <code>tonic::body::BoxBody</code> <code>struct</code>, which is defined as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub type BoxBody = http_body::combinators::BoxBody&lt;bytes::Bytes, crate::Status&gt;;\n</code></pre>\n<p><code>http_body</code> itself provides its own <code>BoxBody</code>, which is parameterized over the <em>data</em> and <em>error</em>. Tonic uses the <code>Status</code> type for errors, and represents the different status codes a gRPC service can return. For those not familiar with <code>Bytes</code>, here's a quick excerpt from <a href=\"https://docs.rs/bytes/1.1.0/bytes/\">the docs</a></p>\n<blockquote>\n<p><code>Bytes</code> is an efficient container for storing and operating on contiguous slices of memory. It is intended for use primarily in networking code, but could have applications elsewhere as well.</p>\n<p><code>Bytes</code> values facilitate zero-copy network programming by allowing multiple <code>Bytes</code> objects to point to the same underlying memory. This is managed by using a reference count to track when the memory is no longer needed and can be freed.</p>\n</blockquote>\n<p>When you see <code>Bytes</code>, you can semantically think of it as a byte slice or byte vector. The underlying <code>BoxBody</code> from the <code>http_body</code> crate represents some kind of implementation of the <a href=\"https://docs.rs/http-body/0.4.3/http_body/trait.Body.html\"><code>http_body::Body</code></a> trait. 
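Before moving on, the zero-copy sharing described in the <code>Bytes</code> excerpt above can be sketched with a hypothetical snippet (not from the repository):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use bytes::Bytes;\n\nfn main() {\n    let whole = Bytes::from_static(b&quot;hello world&quot;);\n    &#x2F;&#x2F; slice() creates a new Bytes sharing the same allocation; no data is copied\n    let prefix = whole.slice(0..5);\n    assert_eq!(&amp;prefix[..], &amp;b&quot;hello world&quot;[0..5]);\n}\n</code></pre>\n<p>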
The <code>Body</code> trait represents a streaming HTTP body, and contains:</p>\n<ul>\n<li>Associated types for <code>Data</code> and <code>Error</code>, corresponding to the type parameters to <code>BoxBody</code></li>\n<li><code>poll_data</code> for asynchronously reading more data from the body</li>\n<li>Helper <code>map_data</code> and <code>map_err</code> methods for manipulating the <code>Data</code> and <code>Error</code> associated types</li>\n<li>A <code>boxed</code> method for some type erasure, allowing us to get back a <code>BoxBody</code></li>\n<li>A few other helper methods around size hints and HTTP/2 trailing data</li>\n</ul>\n<p>The important thing to note for our purposes is that &quot;type erasure&quot; here isn't really complete type erasure. When we use <code>boxed</code> to get a trait object representing the body, we still have type parameters to represent the <code>Data</code> and <code>Error</code>. Therefore, if we end up with two different representations of <code>Data</code> or <code>Error</code>, they won't be compatible with each other. And let me ask you: do you think Axum will use the same <code>Status</code> error type to represent errors that Tonic does? (Hint: it doesn't.) So when we get to it next time, we'll have some footwork to do around unifying error types.</p>\n<h2 id=\"almost-there\">Almost there!</h2>\n<p>We'll tie up next week with the final post in this series, tying together all the different things we've seen so far.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part4\">Read part 4 now</a></p>\n<p>If you're looking for more Rust content, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
        "slug": "axum-hyper-tonic-tower-part3",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3",
        "description": "Part 3 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
        "updated": null,
        "date": "2021-09-13",
        "year": 2021,
        "month": 9,
        "day": 13,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/axum-hyper-tonic-tower-part3.png",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/axum-hyper-tonic-tower-part3/",
        "components": [
          "blog",
          "axum-hyper-tonic-tower-part3"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "tonic-and-grpc",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#tonic-and-grpc",
            "title": "Tonic and gRPC",
            "children": []
          },
          {
            "level": 2,
            "id": "the-server",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#the-server",
            "title": "The server",
            "children": []
          },
          {
            "level": 2,
            "id": "into-service",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#into-service",
            "title": "into_service",
            "children": []
          },
          {
            "level": 2,
            "id": "boxbody",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#boxbody",
            "title": "BoxBody",
            "children": []
          },
          {
            "level": 2,
            "id": "almost-there",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#almost-there",
            "title": "Almost there!",
            "children": []
          }
        ],
        "word_count": 1583,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4"
          }
        ]
      },
      {
        "relative_path": "blog/axum-hyper-tonic-tower-part2.md",
        "colocated_path": null,
        "content": "<p>This is the second of four posts in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li>Today's post: Understanding Hyper, and first experiences with Axum</li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<p>I recommend checking out the first post in the series if you haven't already.</p>\n<p class=\"text-center\" style=\"border: 1px solid #000;border-radius:1rem;padding:1rem;background-color:#f1f1f1\">\n  <a class=\"btn btn-primary\" href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\" target=\"_blank\">\n    Subscribe to our blog via email\n  </a>\n  <br>\n  <small>Email subscriptions come from our <a target=\"_blank\" href=\"/feed/\">Atom feed</a> and are handled by <a target=\"_blank\" href=\"https://blogtrottr.com\">Blogtrottr</a>. 
You will only receive notifications of blog posts, and can unsubscribe any time.</small>\n</p>\n<h2 id=\"quick-recap\">Quick recap</h2>\n<ul>\n<li>Tower provides a <code>Service</code> trait, which is basically an asynchronous function from requests to responses</li>\n<li><code>Service</code> is parameterized on the request type, and has an associated type for <code>Response</code></li>\n<li>It also has an associated <code>Error</code> type, and an associated <code>Future</code> type</li>\n<li><code>Service</code> allows async behavior in both checking whether the service is ready to accept a request, and for handling the request</li>\n<li>A web application ends up having two sets of async request/response behavior\n<ul>\n<li>Inner: a service that accepts HTTP requests and returns HTTP responses</li>\n<li>Outer: a service that accepts the incoming network connections and returns an inner service</li>\n</ul>\n</li>\n</ul>\n<p>With that in mind, let's look at Hyper.</p>\n<h2 id=\"services-in-hyper\">Services in Hyper</h2>\n<p>Now that we've got Tower under our belts a bit, it's time to dive into the specific world of Hyper. Much of what we saw above will apply directly to Hyper. But Hyper has a few additional curveballs to deal with:</p>\n<ul>\n<li>Both the <code>Request</code> and <code>Response</code> types are parameterized over the representation of the request/response bodies</li>\n<li>There are a bunch of additional traits and type parameters in the public API, some not appearing in the docs at all, and many that are unclear</li>\n</ul>\n<p>In place of the <code>run</code> function we had in our previous fake server example, Hyper follows a builder pattern for initializing HTTP servers. After providing configuration values, you create an active <code>Server</code> value from your <code>Builder</code> with the <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/struct.Builder.html#method.serve\"><code>serve</code></a> method. 
Just to get it out of the way now, this is the type signature of <code>serve</code> from the public docs:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub fn serve&lt;S, B&gt;(self, new_service: S) -&gt; Server&lt;I, S, E&gt;\nwhere\n    I: Accept,\n    I::Error: Into&lt;Box&lt;dyn StdError + Send + Sync&gt;&gt;,\n    I::Conn: AsyncRead + AsyncWrite + Unpin + Send + &#x27;static,\n    S: MakeServiceRef&lt;I::Conn, Body, ResBody = B&gt;,\n    S::Error: Into&lt;Box&lt;dyn StdError + Send + Sync&gt;&gt;,\n    B: HttpBody + &#x27;static,\n    B::Error: Into&lt;Box&lt;dyn StdError + Send + Sync&gt;&gt;,\n    E: NewSvcExec&lt;I::Conn, S::Future, S::Service, E, NoopWatcher&gt;,\n    E: ConnStreamExec&lt;&lt;S::Service as HttpService&lt;Body&gt;&gt;::Future, B&gt;,\n</code></pre>\n<p>That's a lot of requirements, and not all of them are clear from the docs. Hopefully we can bring some clarity to this. But for now, let's start off with something simpler: the &quot;Hello world&quot; example from <a href=\"https://hyper.rs\">the Hyper homepage</a>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::{convert::Infallible, net::SocketAddr};\nuse hyper::{Body, Request, Response, Server};\nuse hyper::service::{make_service_fn, service_fn};\n\nasync fn handle(_: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, Infallible&gt; {\n    Ok(Response::new(&quot;Hello, World!&quot;.into()))\n}\n\n#[tokio::main]\nasync fn main() {\n    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));\n\n    let make_svc = make_service_fn(|_conn| async {\n        Ok::&lt;_, Infallible&gt;(service_fn(handle))\n    });\n\n    let server = Server::bind(&amp;addr).serve(make_svc);\n\n    if let Err(e) = server.await {\n        eprintln!(&quot;server error: {}&quot;, e);\n    }\n}\n</code></pre>\n<p>This follows the same pattern we established 
above:</p>\n<ul>\n<li><code>handle</code> is an async function from a <code>Request</code> to a <code>Response</code>, which may fail with an <code>Infallible</code> value.\n<ul>\n<li>Both <code>Request</code> and <code>Response</code> are parameterized with <code>Body</code>, a default HTTP body representation.</li>\n</ul>\n</li>\n<li><code>handle</code> gets wrapped up in <code>service_fn</code> to produce a <code>Service&lt;Request&lt;Body&gt;&gt;</code>. This is like <code>app_fn</code> above.</li>\n<li>We use <code>make_service_fn</code>, like <code>app_factory_fn</code> above, to produce the <code>Service&lt;&amp;AddrStream&gt;</code> (we'll get to that <code>&amp;AddrStream</code> shortly).\n<ul>\n<li>We don't care about the <code>&amp;AddrStream</code> value, so we ignore it</li>\n<li>The return value from the function inside <code>make_service_fn</code> must be a <code>Future</code>, so we wrap with <code>async</code></li>\n<li>The output of that <code>Future</code> must be a <code>Result</code>, so we wrap with an <code>Ok</code></li>\n<li>We need to help the compiler out a bit and provide a type annotation of <code>Infallible</code>, otherwise it won't know the type of the <code>Ok(service_fn(handle))</code> expression</li>\n</ul>\n</li>\n</ul>\n<p>Using this level of abstraction for writing a normal web app is painful for (at least) three different reasons:</p>\n<ul>\n<li>Managing all of these <code>Service</code> pieces manually is a pain</li>\n<li>There's very little in the way of high-level helpers, like &quot;parse the request body as a JSON value&quot;</li>\n<li>Any kind of mistake in your types may lead to very large, non-local error messages that are difficult to diagnose</li>\n</ul>\n<p>So we'll be more than happy to move on from Hyper to Axum a bit later. 
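</p>\n<p>One of those bullets deserves a quick standalone illustration: the <code>Infallible</code> type annotation. Here's a minimal, Hyper-free sketch (the closure and its return value are invented for demonstration). A bare <code>Ok(...)</code> inside a closure leaves the error type ambiguous, since nothing in the body ever produces an <code>Err</code>; the turbofish pins it down:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::convert::Infallible;\n\nfn main() {\n    &#x2F;&#x2F; The closure&#x27;s return type is inferred from its body alone, so we\n    &#x2F;&#x2F; must spell out the error type ourselves with the turbofish.\n    let make_svc = || Ok::&lt;_, Infallible&gt;(&quot;hello-service&quot;);\n    match make_svc() {\n        Ok(svc) =&gt; println!(&quot;made: {}&quot;, svc),\n        &#x2F;&#x2F; Infallible has no values, so this arm can never run, and an\n        &#x2F;&#x2F; empty match is enough to satisfy exhaustiveness.\n        Err(never) =&gt; match never {},\n    }\n}\n</code></pre>\n<p>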
But for now, let's continue exploring things at the Hyper layer.</p>\n<h2 id=\"bypassing-service-fn-and-make-service-fn\">Bypassing <code>service_fn</code> and <code>make_service_fn</code></h2>\n<p>What I found most helpful when trying to grok Hyper was implementing a simple app without <code>service_fn</code> and <code>make_service_fn</code>. So let's go through that ourselves here. We're going to create a simple counter app (I'm nothing if not predictable). We'll need two different data types: one for the &quot;app factory&quot;, and one for the app itself. Let's start with the app itself:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct DemoApp {\n    counter: Arc&lt;AtomicUsize&gt;,\n}\n\nimpl Service&lt;Request&lt;Body&gt;&gt; for DemoApp {\n    type Response = Response&lt;Body&gt;;\n    type Error = hyper::http::Error;\n    type Future = Ready&lt;Result&lt;Self::Response, Self::Error&gt;&gt;;\n\n    fn poll_ready(&amp;mut self, _cx: &amp;mut std::task::Context) -&gt; Poll&lt;Result&lt;(), Self::Error&gt;&gt; {\n        Poll::Ready(Ok(()))\n    }\n\n    fn call(&amp;mut self, _req: Request&lt;Body&gt;) -&gt; Self::Future {\n        let counter = self.counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n        let res = Response::builder()\n            .status(200)\n            .header(&quot;Content-Type&quot;, &quot;text&#x2F;plain; charset=utf-8&quot;)\n            .body(format!(&quot;Counter is at: {}&quot;, counter).into());\n        std::future::ready(res)\n    }\n}\n</code></pre>\n<p>This implementation uses the <code>std::future::Ready</code> struct to create a <code>Future</code> which is immediately ready. In other words, our application doesn't perform any async actions. I've set the <code>Error</code> associated type to <code>hyper::http::Error</code>. 
This error would be generated if, for example, you provided invalid strings to the <code>header</code> method call, such as non-ASCII characters. As we've seen multiple times, <code>poll_ready</code> just advertises that it's always ready to handle another request.</p>\n<p>The implementation of <code>DemoAppFactory</code> isn't terribly different:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct DemoAppFactory {\n    counter: Arc&lt;AtomicUsize&gt;,\n}\n\nimpl Service&lt;&amp;AddrStream&gt; for DemoAppFactory {\n    type Response = DemoApp;\n    type Error = Infallible;\n    type Future = Ready&lt;Result&lt;Self::Response, Self::Error&gt;&gt;;\n\n    fn poll_ready(&amp;mut self, _cx: &amp;mut std::task::Context) -&gt; Poll&lt;Result&lt;(), Self::Error&gt;&gt; {\n        Poll::Ready(Ok(()))\n    }\n\n    fn call(&amp;mut self, conn: &amp;AddrStream) -&gt; Self::Future {\n        println!(&quot;Accepting a new connection from {:?}&quot;, conn);\n        std::future::ready(Ok(DemoApp {\n            counter: self.counter.clone()\n        }))\n    }\n}\n</code></pre>\n<p>We have a different parameter to <code>Service</code>, this time <code>&amp;AddrStream</code>. I did initially find the naming here confusing. In Tower, a <code>Service</code> takes some <code>Request</code>. And with our <code>DemoApp</code>, the <code>Request</code> it takes is a Hyper <code>Request&lt;Body&gt;</code>. But in the case of <code>DemoAppFactory</code>, the <code>Request</code> it's taking is a <code>&amp;AddrStream</code>. Keep in mind that a <code>Service</code> is really just a generalization of failable, async functions from input to output. The input may be a <code>Request&lt;Body&gt;</code>, or may be a <code>&amp;AddrStream</code>, or something else entirely.</p>\n<p>Similarly, the &quot;response&quot; here isn't an HTTP response, but a <code>DemoApp</code>. 
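</p>\n<p>As a small aside, the state sharing between the factory and the apps it produces can be seen in miniature without any of the Hyper machinery. Here's a simplified, synchronous sketch; the <code>Factory</code> and <code>App</code> names are invented stand-ins for <code>DemoAppFactory</code> and <code>DemoApp</code>, with all the async plumbing stripped away:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::sync::atomic::{AtomicUsize, Ordering};\nuse std::sync::Arc;\n\nstruct Factory { counter: Arc&lt;AtomicUsize&gt; }\nstruct App { counter: Arc&lt;AtomicUsize&gt; }\n\nimpl Factory {\n    &#x2F;&#x2F; Mirrors DemoAppFactory::call: every app it makes shares one counter.\n    fn make_app(&amp;self) -&gt; App {\n        App { counter: self.counter.clone() }\n    }\n}\n\nimpl App {\n    &#x2F;&#x2F; Mirrors DemoApp::call: fetch_add returns the *previous* value.\n    fn handle(&amp;self) -&gt; String {\n        let counter = self.counter.fetch_add(1, Ordering::SeqCst);\n        format!(&quot;Counter is at: {}&quot;, counter)\n    }\n}\n\nfn main() {\n    let factory = Factory { counter: Arc::new(AtomicUsize::new(0)) };\n    let app1 = factory.make_app();\n    let app2 = factory.make_app();\n    assert_eq!(app1.handle(), &quot;Counter is at: 0&quot;);\n    assert_eq!(app2.handle(), &quot;Counter is at: 1&quot;);\n}\n</code></pre>\n<p>Two apps made by the same factory observe a single shared count, which is exactly the behavior the real factory produces across connections.</p>\n<p>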
I again find it easier to use the terms &quot;input&quot; and &quot;output&quot; to avoid the name overloading of request and response.</p>\n<p>Finally, our <code>main</code> function looks much the same as the original from the &quot;Hello world&quot; example:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n    let factory = DemoAppFactory {\n        counter: Arc::new(AtomicUsize::new(0)),\n    };\n\n    let server = Server::bind(&amp;addr).serve(factory);\n\n    if let Err(e) = server.await {\n        eprintln!(&quot;server error: {}&quot;, e);\n    }\n}\n</code></pre>\n<p>If you're looking to extend your understanding here, I'd recommend extending this example to perform some async actions within the app. How would you modify <code>Future</code>? If you use a trait object, how exactly do you pin?</p>\n<p>But now it's time to take a dive into a topic I've avoided for a while.</p>\n<h2 id=\"understanding-the-traits\">Understanding the traits</h2>\n<p>Let's refresh our memory from above on the signature of <code>serve</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub fn serve&lt;S, B&gt;(self, new_service: S) -&gt; Server&lt;I, S, E&gt;\nwhere\n    I: Accept,\n    I::Error: Into&lt;Box&lt;dyn StdError + Send + Sync&gt;&gt;,\n    I::Conn: AsyncRead + AsyncWrite + Unpin + Send + &#x27;static,\n    S: MakeServiceRef&lt;I::Conn, Body, ResBody = B&gt;,\n    S::Error: Into&lt;Box&lt;dyn StdError + Send + Sync&gt;&gt;,\n    B: HttpBody + &#x27;static,\n    B::Error: Into&lt;Box&lt;dyn StdError + Send + Sync&gt;&gt;,\n    E: NewSvcExec&lt;I::Conn, S::Future, S::Service, E, NoopWatcher&gt;,\n    E: ConnStreamExec&lt;&lt;S::Service as HttpService&lt;Body&gt;&gt;::Future, B&gt;,\n</code></pre>\n<p>Up until preparing this blog post, I have never tried to take a deep 
dive into understanding all of these bounds. So this will be an adventure for us all! (And perhaps it should end up with some documentation PRs by me...) Let's start off with the type variables. Altogether, we have four: two on the <code>impl</code> block itself, and two on this method:</p>\n<ul>\n<li><code>I</code> represents the incoming stream of connections.</li>\n<li><code>E</code> represents the executor.</li>\n<li><code>S</code> is the service we're going to run. Using our terminology from above, this would be the &quot;app factory.&quot; Using Tower/Hyper terminology, this is the &quot;make service.&quot;</li>\n<li><code>B</code> is the choice of response body the service returns (the &quot;app&quot;, not the &quot;app factory&quot;, using nomenclature above).</li>\n</ul>\n<h3 id=\"i-accept\"><code>I: Accept</code></h3>\n<p><code>I</code> needs to implement the <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/accept/trait.Accept.html\"><code>Accept</code></a> trait, which represents the ability to accept a new connection from some source. The only implementation out of the box is for <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/conn/struct.AddrIncoming.html\"><code>AddrIncoming</code></a>, which can be created from a <code>SocketAddr</code>. And in fact, that's exactly what <a href=\"https://docs.rs/hyper/0.14.12/src/hyper/server/server.rs.html#66-71\"><code>Server::bind</code> does</a>.</p>\n<p><code>Accept</code> has two associated types. <code>Error</code> must be something that can be converted into an error object, or <code>Into&lt;Box&lt;dyn StdError + Send + Sync&gt;&gt;</code>. This is the requirement of (almost?) every associated error type we look at, so from now on I'll just skip over them. We need to be able to convert whatever error happened into a uniform representation.</p>\n<p>The <code>Conn</code> associated type represents an individual connection. 
In the case of <code>AddrIncoming</code>, the associated type is <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/conn/struct.AddrStream.html\"><code>AddrStream</code></a>. This type must implement <code>AsyncRead</code> and <code>AsyncWrite</code> for communication, <code>Send</code> and <code>'static</code> so it can be sent to different threads, and <code>Unpin</code>. The requirement for <code>Unpin</code> bubbles up from deeper in the stack, and I honestly don't know what drives it.</p>\n<h3 id=\"s-makeserviceref\"><code>S: MakeServiceRef</code></h3>\n<p><code>MakeServiceRef</code> is one of those traits that doesn't appear in the public documentation. This seems to be intentional. Reading the source:</p>\n<blockquote>\n<p>Just a sort-of &quot;trait alias&quot; of <code>MakeService</code>, not to be implemented by anyone, only used as bounds.</p>\n</blockquote>\n<p>Were you confused as to why we were receiving a reference with <code>&amp;AddrStream</code>? This is the trait that powers that transformation. Overall, the trait bound <code>S: MakeServiceRef&lt;I::Conn, Body, ResBody = B&gt;</code> means:</p>\n<ul>\n<li><code>S</code> must be a <code>Service</code></li>\n<li><code>S</code> will accept input of type <code>&amp;I::Conn</code></li>\n<li>It will in turn produce a <em>new</em> <code>Service</code> as output</li>\n<li>That new service will accept <code>Request&lt;Body&gt;</code> as input, and produce <code>Response&lt;ResBody&gt;</code> as output</li>\n</ul>\n<p>And while we're talking about it: that <code>ResBody</code> has the restriction that it must implement <a href=\"https://docs.rs/hyper/0.14.12/hyper/body/trait.HttpBody.html\"><code>HttpBody</code></a>. As you might guess, the <code>Body</code> struct mentioned above implements <code>HttpBody</code>. There are a number of implementations too. 
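</p>\n<p>Before moving on, that &quot;sort-of trait alias&quot; trick is worth seeing in miniature: a trait that nobody implements directly, paired with a blanket impl covering every type that satisfies the underlying bounds. The <code>DisplayClone</code> trait below is invented for illustration; Hyper's <code>MakeServiceRef</code> is the same idea with much hairier bounds:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::fmt::Display;\n\n&#x2F;&#x2F; Nobody implements DisplayClone by hand...\ntrait DisplayClone: Display + Clone {}\n\n&#x2F;&#x2F; ...because this blanket impl already covers every eligible type.\nimpl&lt;T: Display + Clone&gt; DisplayClone for T {}\n\n&#x2F;&#x2F; Callers can now write one bound instead of repeating Display + Clone.\nfn describe&lt;T: DisplayClone&gt;(value: &amp;T) -&gt; String {\n    format!(&quot;described: {}&quot;, value.clone())\n}\n\nfn main() {\n    assert_eq!(describe(&amp;42), &quot;described: 42&quot;);\n}\n</code></pre>\n<p>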
When we get to Tonic and gRPC, we'll see that there are, in fact, other response bodies we have to deal with.</p>\n<h3 id=\"newsvcexec-and-connstreamexec\"><code>NewSvcExec</code> and <code>ConnStreamExec</code></h3>\n<p>The default value for the <code>E</code> parameter is <code>Exec</code>, which does not appear in the generated docs. But of course you can find it <a href=\"https://docs.rs/crate/hyper/0.14.12/source/src/common/exec.rs\">in the source</a>. The concept of <code>Exec</code> is to specify how tasks are spawned off. By default, it leverages <code>tokio::spawn</code>.</p>\n<p>I'm not entirely certain of how all of this plays out, but I believe the two traits in the heading allow for different handling of spawning for the connection service (app factory) versus the request service (app).</p>\n<h2 id=\"using-axum\">Using Axum</h2>\n<p>Axum is the new web framework that kicked off this whole blog post. Instead of dealing directly with Hyper like we did above, let's reimplement our counter web service using Axum. We'll be using <code>axum = &quot;0.2&quot;</code>. The <a href=\"https://docs.rs/axum/0.2.3/axum/index.html\">crate docs</a> provide a great overview of Axum, and I'm not going to try to replicate that information here. Instead, here's my rewritten code. 
We'll analyze a few key pieces below:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use axum::extract::Extension;\nuse axum::handler::get;\nuse axum::{AddExtensionLayer, Router};\nuse hyper::{HeaderMap, Server, StatusCode};\nuse std::net::SocketAddr;\nuse std::sync::atomic::AtomicUsize;\nuse std::sync::Arc;\n\n#[derive(Clone, Default)]\nstruct AppState {\n    counter: Arc&lt;AtomicUsize&gt;,\n}\n\n#[tokio::main]\nasync fn main() {\n    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n    let app = Router::new()\n        .route(&quot;&#x2F;&quot;, get(home))\n        .layer(AddExtensionLayer::new(AppState::default()));\n\n    let server = Server::bind(&amp;addr).serve(app.into_make_service());\n\n    if let Err(e) = server.await {\n        eprintln!(&quot;server error: {}&quot;, e);\n    }\n}\n\nasync fn home(state: Extension&lt;AppState&gt;) -&gt; (StatusCode, HeaderMap, String) {\n    let counter = state\n        .counter\n        .fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n    let mut headers = HeaderMap::new();\n    headers.insert(&quot;Content-Type&quot;, &quot;text&#x2F;plain; charset=utf-8&quot;.parse().unwrap());\n    let body = format!(&quot;Counter is at: {}&quot;, counter);\n    (StatusCode::OK, headers, body)\n}\n</code></pre>\n<p>The first thing I'd like to get out of the way is this whole <code>AddExtensionLayer</code>/<code>Extension</code> bit. This is how we're managing shared state within our application. It's not directly relevant to our overall analysis of Tower and Hyper, so I'll suffice with a <a href=\"https://docs.rs/axum/0.2.3/axum/index.html#sharing-state-with-handlers\">link to the docs demonstrating how this works</a>. Interestingly, you may notice that this implementation relies on middlewares, which does in fact leverage Tower, so it's not completely separate.</p>\n<p>Anyway, back to our point at hand. 
Within our <code>main</code> function, we're now using this <code>Router</code> concept to build up our application:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let app = Router::new()\n    .route(&quot;&#x2F;&quot;, get(home))\n    .layer(AddExtensionLayer::new(AppState::default()));\n</code></pre>\n<p>This says, essentially, &quot;please call the <code>home</code> function when you receive a request for <code>/</code>, and add a middleware that does that whole extension thing.&quot; The <code>home</code> function uses an extractor to get the <code>AppState</code>, and returns a value of type <code>(StatusCode, HeaderMap, String)</code> to represent the response. In Axum, any implementation of the appropriately named <a href=\"https://docs.rs/axum/0.2.3/axum/response/trait.IntoResponse.html\"><code>IntoResponse</code> trait</a> can be returned from handler functions.</p>\n<p>Anyway, our <code>app</code> value is now a <code>Router</code>. But a <code>Router</code> cannot be directly run by Hyper. Instead, we need to convert it into a <code>MakeService</code> (a.k.a. an app factory). Fortunately, that's easy: we call <code>app.into_make_service()</code>. 
Let's look at that method's signature:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;S&gt; Router&lt;S&gt; {\n    pub fn into_make_service(self) -&gt; IntoMakeService&lt;S&gt;\n    where\n        S: Clone;\n}\n</code></pre>\n<p>And going down the rabbit hole a bit further:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub struct IntoMakeService&lt;S&gt; { &#x2F;* fields omitted *&#x2F; }\n\nimpl&lt;S: Clone, T&gt; Service&lt;T&gt; for IntoMakeService&lt;S&gt; {\n    type Response = S;\n    type Error = Infallible;\n    &#x2F;&#x2F; other stuff omitted\n}\n</code></pre>\n<p>The type <code>Router&lt;S&gt;</code> is a value that can produce a service of type <code>S</code>. <code>IntoMakeService&lt;S&gt;</code> will take some kind of connection info, <code>T</code>, and produce that service <code>S</code> asynchronously. And since <code>Error</code> is <code>Infallible</code>, we know it can't fail. But as much as we say &quot;asynchronously&quot;, looking at the implementation of <code>Service</code> for <code>IntoMakeService</code>, we see a familiar pattern:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn poll_ready(&amp;mut self, _cx: &amp;mut Context&lt;&#x27;_&gt;) -&gt; Poll&lt;Result&lt;(), Self::Error&gt;&gt; {\n    Poll::Ready(Ok(()))\n}\n\nfn call(&amp;mut self, _target: T) -&gt; Self::Future {\n    future::MakeRouteServiceFuture {\n        future: ready(Ok(self.service.clone())),\n    }\n}\n</code></pre>\n<p>Also, notice how that <code>T</code> value for connection info doesn't actually have any bounds or other information. <code>IntoMakeService</code> just throws away the connection information. 
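</p>\n<p>The essence of that implementation can be captured in a Hyper-free, synchronous sketch. The <code>MakeCloned</code> name is invented, and the real <code>IntoMakeService</code> additionally wraps its result in an immediately-ready future, which is omitted here:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::convert::Infallible;\n\nstruct MakeCloned&lt;S: Clone&gt; {\n    service: S,\n}\n\nimpl&lt;S: Clone&gt; MakeCloned&lt;S&gt; {\n    &#x2F;&#x2F; T has no bounds at all: whatever connection info we&#x27;re handed,\n    &#x2F;&#x2F; we ignore it and just clone the wrapped service.\n    fn call&lt;T&gt;(&amp;mut self, _target: T) -&gt; Result&lt;S, Infallible&gt; {\n        Ok(self.service.clone())\n    }\n}\n\nfn main() {\n    let mut make = MakeCloned { service: &quot;my-app&quot; };\n    &#x2F;&#x2F; Because the error is Infallible, unwrap here can never panic.\n    let app = make.call(&quot;some connection info&quot;).unwrap();\n    assert_eq!(app, &quot;my-app&quot;);\n    &#x2F;&#x2F; Any T works, since the target is discarded.\n    assert_eq!(make.call(42).unwrap(), &quot;my-app&quot;);\n}\n</code></pre>\n<p>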
(If you need it for some reason, see <a href=\"https://docs.rs/axum/0.2.3/axum/routing/struct.Router.html#method.into_make_service_with_connect_info\"><code>into_make_service_with_connect_info</code></a>.) In other words:</p>\n<ul>\n<li><code>Router&lt;S&gt;</code> is a type that lets us add routes and middleware layers</li>\n<li>You can convert a <code>Router&lt;S&gt;</code> into an <code>IntoMakeService&lt;S&gt;</code></li>\n<li>But <code>IntoMakeService&lt;S&gt;</code> is really just a fancy wrapper around an <code>S</code> to appease the Hyper requirements around app factories</li>\n<li>So the real workhorse here is just <code>S</code></li>\n</ul>\n<p>So where does that <code>S</code> type come from? It's built up by all the <code>route</code> and <code>layer</code> calls you make. For example, check out the <code>get</code> function's signature:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub fn get&lt;H, B, T&gt;(handler: H) -&gt; OnMethod&lt;H, B, T, EmptyRouter&gt;\nwhere\n    H: Handler&lt;B, T&gt;,\n\npub struct OnMethod&lt;H, B, T, F&gt; { &#x2F;* fields omitted *&#x2F; }\n\nimpl&lt;H, B, T, F&gt; Service&lt;Request&lt;B&gt;&gt; for OnMethod&lt;H, B, T, F&gt;\nwhere\n    H: Handler&lt;B, T&gt;,\n    F: Service&lt;Request&lt;B&gt;, Response = Response&lt;BoxBody&gt;, Error = Infallible&gt; + Clone,\n    B: Send + &#x27;static,\n{\n    type Response = Response&lt;BoxBody&gt;;\n    type Error = Infallible;\n    &#x2F;&#x2F; and more stuff\n}\n</code></pre>\n<p><code>get</code> returns an <code>OnMethod</code> value. And <code>OnMethod</code> is a <code>Service</code> that takes a <code>Request&lt;B&gt;</code> and returns a <code>Response&lt;BoxBody&gt;</code>. There's some funny business at play regarding the representations of bodies, which we'll eventually dive into a bit more. But with our newfound understanding of Tower and Hyper, the types at play here are no longer inscrutable. 
In fact, they may even be scrutable!</p>\n<p>And one final note on the example above. Axum works directly with a lot of the Hyper machinery. And that includes the <code>Server</code> type. While the <code>axum</code> crate reexports many things from Hyper, you can use those types directly from Hyper instead if so desired. In other words, Axum is pretty close to the underlying libraries, simply providing some convenience on top. It's one of the reasons I'm pretty excited to get a bit deeper into my experiments with Axum.</p>\n<p>So to sum up at this point:</p>\n<ul>\n<li>Tower provides an abstraction for asynchronous functions from input to output, which may fail. This is called a service.</li>\n<li>HTTP servers have two levels of services. The lower level is a service from HTTP requests to HTTP responses. The upper level is a service from connection information to the lower level service.</li>\n<li>Hyper has a lot of additional traits floating around, some visible, some invisible, which allow for more generality, and also make things a bit more complicated to understand.</li>\n<li>Axum sits on top of Hyper and provides an easier to use interface for many common cases. It does this by providing the same kind of services that Hyper is expecting to see. And it seems to be doing a bunch of fancy footwork around HTTP body representations.</li>\n</ul>\n<p>Next step on our journey: let's look at another library for building Hyper services. We'll follow up on this in our next post.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part3\">Read part 3 now</a></p>\n<p>If you're looking for more Rust content from FP Complete, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
        "slug": "axum-hyper-tonic-tower-part2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2",
        "description": "Part 2 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
        "updated": null,
        "date": "2021-09-06",
        "year": 2021,
        "month": 9,
        "day": 6,
        "taxonomies": {
          "tags": [
            "rust"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/axum-hyper-tonic-tower-part2.png",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/axum-hyper-tonic-tower-part2/",
        "components": [
          "blog",
          "axum-hyper-tonic-tower-part2"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "quick-recap",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#quick-recap",
            "title": "Quick recap",
            "children": []
          },
          {
            "level": 2,
            "id": "services-in-hyper",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#services-in-hyper",
            "title": "Services in Hyper",
            "children": []
          },
          {
            "level": 2,
            "id": "bypassing-service-fn-and-make-service-fn",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#bypassing-service-fn-and-make-service-fn",
            "title": "Bypassing service_fn and make_service_fn",
            "children": []
          },
          {
            "level": 2,
            "id": "understanding-the-traits",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#understanding-the-traits",
            "title": "Understanding the traits",
            "children": [
              {
                "level": 3,
                "id": "i-accept",
                "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#i-accept",
                "title": "I: Accept",
                "children": []
              },
              {
                "level": 3,
                "id": "s-makeserviceref",
                "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#s-makeserviceref",
                "title": "S: MakeServiceRef",
                "children": []
              },
              {
                "level": 3,
                "id": "newsvcexec-and-connstreamexec",
                "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#newsvcexec-and-connstreamexec",
                "title": "NewSvcExec and ConnStreamExec",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "using-axum",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#using-axum",
            "title": "Using Axum",
            "children": []
          }
        ],
        "word_count": 3119,
        "reading_time": 16,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4"
          }
        ]
      },
      {
        "relative_path": "blog/axum-hyper-tonic-tower-part1.md",
        "colocated_path": null,
        "content": "<p>I've played around with various web server libraries and frameworks in Rust, and found various strengths and weaknesses with them. Most recently, I put together an FP Complete solution called Zehut (which I'll blog about another time) that needed to combine a web frontend and gRPC server. I used Hyper, Tonic, and a minimal library I put together called <a href=\"https://github.com/snoyberg/routetype-rs\">routetype</a>. It worked, but I was left underwhelmed. Working directly with Hyper, even with the minimal <code>routetype</code> layer, felt too ad-hoc.</p>\n<p>When I recently saw the release of <a href=\"https://lib.rs/crates/axum\">Axum</a>, it seemed to be speaking to many of the needs I had, especially calling out Tonic support. I decided to make an experiment of replacing the direct Hyper+<code>routetype</code> usage I'd used with Axum. Overall the approach works, but (like the <code>routetype</code> work I'd already done) involved some hairy business around the Hyper and Tower APIs.</p>\n<p>I've been meaning to write some blog post/tutorial/experience report for Hyper+Tower for a while now. So I decided to take this opportunity to step through these four libraries (Tower, Hyper, Axum, and Tonic), with the specific goal in mind of creating hybrid web/gRPC apps. It turned out that there was more information here than I'd anticipated. 
To make for easier reading, I've split this up into a four part blog post series:</p>\n<ol>\n<li>Today's post: overview of Tower</li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<p>Let's dive in!</p>\n<p class=\"text-center\" style=\"border: 1px solid #000;border-radius:1rem;padding:1rem;background-color:#f1f1f1\">\n  <a class=\"btn btn-primary\" href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\" target=\"_blank\">\n    Subscribe to our blog via email\n  </a>\n  <br>\n  <small>Email subscriptions come from our <a target=\"_blank\" href=\"/feed/\">Atom feed</a> and are handled by <a target=\"_blank\" href=\"https://blogtrottr.com\">Blogtrottr</a>. You will only receive notifications of blog posts, and can unsubscribe any time.</small>\n</p>\n<h2 id=\"what-is-tower\">What is Tower?</h2>\n<p>The first stop on our journey is the <a href=\"https://lib.rs/crates/tower\">tower crate</a>. To quote the docs, which state this succinctly:</p>\n<blockquote>\n<p>Tower provides a simple core abstraction, the <code>Service</code> trait, which represents an asynchronous function taking a request and returning either a response or an error. This abstraction can be used to model both clients and servers.</p>\n</blockquote>\n<p>This sounds fairly straightforward. To express it in Haskell syntax, I'd probably say <code>Request -&gt; IO Response</code>, leveraging the fact that <code>IO</code> handles both error handling and asynchronous I/O. 
But the <code>Service</code> trait is necessarily more complex than that simplified signature:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub trait Service&lt;Request&gt; {\n    type Response;\n    type Error;\n\n    &#x2F;&#x2F; This is what it says in the generated docs\n    type Future: Future;\n\n    &#x2F;&#x2F; But this more informative piece is in the actual source code\n    type Future: Future&lt;Output = Result&lt;Self::Response, Self::Error&gt;&gt;;\n\n    fn poll_ready(\n        &amp;mut self,\n        cx: &amp;mut Context&lt;&#x27;_&gt;\n    ) -&gt; Poll&lt;Result&lt;(), Self::Error&gt;&gt;;\n    fn call(&amp;mut self, req: Request) -&gt; Self::Future;\n}\n</code></pre>\n<p><code>Service</code> is a trait, parameterized on the types of <code>Request</code>s it can handle. There's nothing specific about HTTP in Tower, so <code>Request</code>s may be lots of different things. And even within Hyper, an HTTP library leveraging Tower, we'll see that there are at least two different types of <code>Request</code> we care about.</p>\n<p>Anyway, two of the associated types here are straightforward: <code>Response</code> and <code>Error</code>. Combining the parameterized <code>Request</code> with <code>Response</code> and <code>Error</code>, we basically have all the information we care about for a <code>Service</code>.</p>\n<p>But it's <em>not</em> all the information Rust cares about. To provide for asynchronous calls, we need to provide a <code>Future</code>. And the compiler needs to know the type of the <code>Future</code> we'll be returning. This isn't really useful information to us as programmers, but there are <a href=\"https://lib.rs/crates/async-trait\">plenty of pain points already</a> around <code>async</code> code in traits.</p>\n<p>And finally, what about those last two methods? They are there to allow the <code>Service</code> itself to be asynchronous. 
It took me quite a while to fully wrap my head around this. We have two different components of async behavior going on here:</p>\n<ul>\n<li>The <code>Service</code> may not be immediately ready to handle a new incoming request. For example (coming from <a href=\"https://docs.rs/tower-service/0.3.1/src/tower_service/lib.rs.html#244-257\">the docs on <code>poll_ready</code></a>), the server may currently be at capacity. You need to check <code>poll_ready</code> to find out whether the <code>Service</code> is ready to accept a new request. Then, when it's ready, you use <code>call</code> to initiate handling of a new <code>Request</code>.</li>\n<li>The handling of the request itself is <em>also</em> async, returning a <code>Future</code>, which can be polled/awaited.</li>\n</ul>\n<p>Some of this complexity can be hidden away. For example, instead of giving a concrete type for <code>Future</code>, you can use a trait object (a.k.a. type erasure). Stealing again from the docs, the following is a perfectly valid associated type for <code>Future</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">type Future = Pin&lt;Box&lt;dyn Future&lt;Output = Result&lt;Self::Response, Self::Error&gt;&gt;&gt;&gt;;\n</code></pre>\n<p>However, this incurs some overhead for dynamic dispatch.</p>\n<p>Finally, these two layers of async behavior are often unnecessary. Many times, our server is <em>always</em> ready to handle a new incoming <code>Request</code>. In the wild, you'll often see code that hard-codes the idea that a service is always ready. 
To quote from those docs for the final time in this section:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn poll_ready(&amp;mut self, cx: &amp;mut Context&lt;&#x27;_&gt;) -&gt; Poll&lt;Result&lt;(), Self::Error&gt;&gt; {\n    Poll::Ready(Ok(()))\n}\n</code></pre>\n<p>This isn't saying that request handling is synchronous in our <code>Service</code>. It's saying that request <em>acceptance</em> always succeeds immediately.</p>\n<p>Going along with the two layers of async handling, there are similarly two layers of error handling. Both accepting the new request may fail, and processing the new request may fail. But as you can see in the code above, it's possible to hard-code something which always succeeds with <code>Ok(())</code>, which is fairly common for <code>poll_ready</code>. When processing the request itself also cannot fail, using <a href=\"https://doc.rust-lang.org/stable/std/convert/enum.Infallible.html\"><code>Infallible</code></a> (and eventually <a href=\"https://doc.rust-lang.org/stable/std/primitive.never.html\">the <code>never</code> type</a>) as the <code>Error</code> associated type is a good call.</p>\n<h2 id=\"fake-web-server\">Fake web server</h2>\n<p>That was all relatively abstract, which is part of the problem with understanding Tower (at least for me). Let's make it more concrete by implementing a fake web server and fake web application. 
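Before reaching for real dependencies, the two-phase protocol described above can be sketched with nothing but the standard library. This is a hypothetical stand-in, not Tower's actual trait: `MiniService` and `Upper` are invented names, the always-ready service uses `Infallible` as its error, and a hand-rolled no-op waker lets us drive everything by hand without a runtime.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A simplified, hypothetical stand-in for Tower's Service trait.
trait MiniService<Request> {
    type Response;
    type Error;
    type Future: Future<Output = Result<Self::Response, Self::Error>>;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>>;
    fn call(&mut self, req: Request) -> Self::Future;
}

// An always-ready service that upper-cases its request.
struct Upper;

impl MiniService<String> for Upper {
    type Response = String;
    type Error = std::convert::Infallible;
    type Future = std::future::Ready<Result<String, Self::Error>>;

    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        Poll::Ready(Ok(())) // accepting a request always succeeds immediately
    }

    fn call(&mut self, req: String) -> Self::Future {
        std::future::ready(Ok(req.to_uppercase()))
    }
}

// A no-op waker so we can poll by hand, without an async runtime.
fn noop_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn main() {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut svc = Upper;

    // Layer 1 of async behavior: is the service ready to accept a request?
    assert!(matches!(svc.poll_ready(&mut cx), Poll::Ready(Ok(()))));

    // Layer 2: handling the accepted request is itself a Future.
    let mut fut = svc.call("hello".to_owned());
    match Pin::new(&mut fut).poll(&mut cx) {
        Poll::Ready(Ok(res)) => assert_eq!(res, "HELLO"),
        _ => unreachable!("a Ready future resolves on the first poll"),
    }
    println!("ok");
}
```

The two `assert`s mirror the two layers discussed above: `poll_ready` answers "can I accept a request?", while polling the returned future answers "is the response done?".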
My <code>Cargo.toml</code> file looks like:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[package]\nname = &quot;learntower&quot;\nversion = &quot;0.1.0&quot;\nedition = &quot;2018&quot;\n\n[dependencies]\ntower = { version = &quot;0.4&quot;, features = [&quot;full&quot;] }\ntokio = { version = &quot;1&quot;, features = [&quot;full&quot;] }\nanyhow = &quot;1&quot;\n</code></pre>\n<p>I've uploaded <a href=\"https://gist.github.com/snoyberg/c6c54ed38ec8fac966e362eb212ab421\">the full source code as a Gist</a>, but let's walk through this example. First we define some helper types to represent HTTP request and response values:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub struct Request {\n    pub path_and_query: String,\n    pub headers: HashMap&lt;String, String&gt;,\n    pub body: Vec&lt;u8&gt;,\n}\n\n#[derive(Debug)]\npub struct Response {\n    pub status: u32,\n    pub headers: HashMap&lt;String, String&gt;,\n    pub body: Vec&lt;u8&gt;,\n}\n</code></pre>\n<p>Next we want to define a function, <code>run</code>, which:</p>\n<ul>\n<li>Accepts a web application as an argument</li>\n<li>Loops infinitely</li>\n<li>Generates fake <code>Request</code> values</li>\n<li>Prints out the <code>Response</code> values it gets from the application</li>\n</ul>\n<p>The first question is: how do you represent that web application? It's going to be an implementation of <code>Service</code>, with the <code>Request</code> and <code>Response</code> types being those we defined above. We don't need to know much about the errors, since we'll simply print them. 
These parts are pretty easy:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub async fn run&lt;App&gt;(mut app: App)\nwhere\n    App: Service&lt;crate::http::Request, Response = crate::http::Response&gt;,\n    App::Error: std::fmt::Debug,\n</code></pre>\n<p>But there's one final bound we need to take into account. We want our fake web server to be able to handle requests concurrently. To do that, we'll use <code>tokio::spawn</code> to create new tasks for handling requests. Therefore, we need to be able to send the request handling to a separate task, which will require bounds of both <code>Send</code> and <code>'static</code>. There are at least two different ways of handling this:</p>\n<ul>\n<li>Cloning the <code>App</code> value in the main task and sending it to the spawned task</li>\n<li>Creating the <code>Future</code> in the main task and sending it to the spawned task</li>\n</ul>\n<p>There are different runtime impacts of making this decision, such as whether the main request accept loop will be blocked or not by the application reporting that it's not available for requests. I decided to go with the latter approach. So we've got one more bound on <code>run</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">App::Future: Send + &#x27;static,\n</code></pre>\n<p>The body of <code>run</code> is wrapped inside a <code>loop</code> to allow simulating an infinitely running server. 
First we sleep for a bit and then generate our new fake request:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;\n\nlet req = crate::http::Request {\n    path_and_query: &quot;&#x2F;fake&#x2F;path?page=1&quot;.to_owned(),\n    headers: HashMap::new(),\n    body: Vec::new(),\n};\n</code></pre>\n<p>Next, we use the <code>ready</code> method (from the <code>ServiceExt</code> extension trait) to check whether the service is ready to accept a new request:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let app = match app.ready().await {\n    Err(e) =&gt; {\n        eprintln!(&quot;Service not able to accept requests: {:?}&quot;, e);\n        continue;\n    }\n    Ok(app) =&gt; app,\n};\n</code></pre>\n<p>Once we know we can make another request, we get our <code>Future</code>, spawn the task, and then wait for the <code>Future</code> to complete:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let future = app.call(req);\ntokio::spawn(async move {\n    match future.await {\n        Ok(res) =&gt; println!(&quot;Successful response: {:?}&quot;, res),\n        Err(e) =&gt; eprintln!(&quot;Error occurred: {:?}&quot;, e),\n    }\n});\n</code></pre>\n<p>And just like that, we have a fake web server! Now it's time to implement our fake web application. I'll call it <code>DemoApp</code>, and give it an atomic counter to make things slightly interesting:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Default)]\npub struct DemoApp {\n    counter: Arc&lt;AtomicUsize&gt;,\n}\n</code></pre>\n<p>Next comes the implementation of <code>Service</code>. 
The first few bits are relatively easy:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl tower::Service&lt;crate::http::Request&gt; for DemoApp {\n    type Response = crate::http::Response;\n    type Error = anyhow::Error;\n    #[allow(clippy::type_complexity)]\n    type Future = Pin&lt;Box&lt;dyn Future&lt;Output = Result&lt;Self::Response, Self::Error&gt;&gt; + Send&gt;&gt;;\n\n    &#x2F;&#x2F; Still need poll_ready and call\n}\n</code></pre>\n<p><code>Request</code> and <code>Response</code> get set to the types we defined, we'll use the wonderful <code>anyhow</code> crate's <code>Error</code> type, and we'll use a trait object for the <code>Future</code>. We're going to implement a <code>poll_ready</code> which is always ready for a <code>Request</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn poll_ready(\n    &amp;mut self,\n    _cx: &amp;mut std::task::Context&lt;&#x27;_&gt;,\n) -&gt; Poll&lt;Result&lt;(), Self::Error&gt;&gt; {\n    Poll::Ready(Ok(())) &#x2F;&#x2F; always ready to accept a connection\n}\n</code></pre>\n<p>And finally we get to our <code>call</code> method. We're going to implement some logic to increment the counter, fail 25% of the time, and otherwise echo back the request from the user, with an added <code>X-Counter</code> response header. 
Let's see it in action:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn call(&amp;mut self, mut req: crate::http::Request) -&gt; Self::Future {\n    let counter = self.counter.clone();\n    Box::pin(async move {\n        println!(&quot;Handling a request for {}&quot;, req.path_and_query);\n        let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n        anyhow::ensure!(counter % 4 != 2, &quot;Failing 25% of the time, just for fun&quot;);\n        req.headers\n            .insert(&quot;X-Counter&quot;.to_owned(), counter.to_string());\n        let res = crate::http::Response {\n            status: 200,\n            headers: req.headers,\n            body: req.body,\n        };\n        Ok::&lt;_, anyhow::Error&gt;(res)\n    })\n}\n</code></pre>\n<p>With all that in place, running our fake web app on our fake web server is nice and easy:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n    fakeserver::run(app::DemoApp::default()).await;\n}\n</code></pre>\n<h2 id=\"app-fn\"><code>app_fn</code></h2>\n<p>One thing that's particularly unsatisfying about the code above is how much ceremony it takes to write a web application. I need to create a new data type, provide a <code>Service</code> implementation for it, and futz around with all that <code>Pin&lt;Box&lt;Future&gt;&gt;</code> business to make things line up. The core logic of our <code>DemoApp</code> is buried inside the <code>call</code> method. It would be nice to provide a helper of some kind that lets us define things more easily.</p>\n<p>You can check out <a href=\"https://gist.github.com/snoyberg/cb72a9cbefc608ec15e05ed70ced1a6b\">the full code as a Gist</a>. But let's talk through it here.  We're going to implement a new helper <code>app_fn</code> function which takes a closure as its argument. 
That closure will take in a <code>Request</code> value, and then return a <code>Response</code>. But we want to make sure it asynchronously returns the <code>Response</code>. So we'll need our calls to look something like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">app_fn(|req| async { some_code(req).await })\n</code></pre>\n<p>This <code>app_fn</code> function needs to return a type which provides our <code>Service</code> implementation. Let's call it <code>AppFn</code>. Putting these two things together, we get:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub struct AppFn&lt;F&gt; {\n    f: F,\n}\n\npub fn app_fn&lt;F, Ret&gt;(f: F) -&gt; AppFn&lt;F&gt;\nwhere\n    F: FnMut(crate::http::Request) -&gt; Ret,\n    Ret: Future&lt;Output = Result&lt;crate::http::Response, anyhow::Error&gt;&gt;,\n{\n    AppFn { f }\n}\n</code></pre>\n<p>So far, so good. We can see with the bounds on <code>app_fn</code> that we'll accept a <code>Request</code> and return some <code>Ret</code> type, and <code>Ret</code> must be a <code>Future</code> that produces a <code>Result&lt;Response, Error&gt;</code>. 
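Stripped of async, HTTP, and error handling, the wrapper we are about to build is the classic function-to-trait adapter. Here is a std-only miniature of the same shape; `Handler`, `HandlerFn`, and `handler_fn` are invented names for illustration:

```rust
// The trait we want to implement for arbitrary closures.
trait Handler {
    fn handle(&mut self, path: &str) -> String;
}

// A newtype wrapper so we can hang the trait implementation off a closure.
struct HandlerFn<F> {
    f: F,
}

fn handler_fn<F>(f: F) -> HandlerFn<F>
where
    F: FnMut(&str) -> String,
{
    HandlerFn { f }
}

impl<F> Handler for HandlerFn<F>
where
    F: FnMut(&str) -> String,
{
    fn handle(&mut self, path: &str) -> String {
        // Just delegate to the wrapped closure.
        (self.f)(path)
    }
}

fn main() {
    let mut h = handler_fn(|path: &str| format!("Hello from {}", path));
    assert_eq!(h.handle("/"), "Hello from /");
    println!("ok");
}
```

`app_fn`/`AppFn` follow exactly this pattern, with the closure's return type additionally exposed as the `Future` associated type.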
Implementing <code>Service</code> for this isn't too bad:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;F, Ret&gt; tower::Service&lt;crate::http::Request&gt; for AppFn&lt;F&gt;\nwhere\n    F: FnMut(crate::http::Request) -&gt; Ret,\n    Ret: Future&lt;Output = Result&lt;crate::http::Response, anyhow::Error&gt;&gt;,\n{\n    type Response = crate::http::Response;\n    type Error = anyhow::Error;\n    type Future = Ret;\n\n    fn poll_ready(\n        &amp;mut self,\n        _cx: &amp;mut std::task::Context&lt;&#x27;_&gt;,\n    ) -&gt; Poll&lt;Result&lt;(), Self::Error&gt;&gt; {\n        Poll::Ready(Ok(())) &#x2F;&#x2F; always ready to accept a connection\n    }\n\n    fn call(&amp;mut self, req: crate::http::Request) -&gt; Self::Future {\n        (self.f)(req)\n    }\n}\n</code></pre>\n<p>We have the same bounds as on <code>app_fn</code>, the associated types <code>Response</code> and <code>Error</code> are straightforward, and <code>poll_ready</code> is the same as it was before. The first interesting bit is <code>type Future = Ret;</code>. We previously went the route of a trait object, which was more verbose and less performant. This time, we already have a type, <code>Ret</code>, that represents the <code>Future</code> the caller of our function will be providing. It's really nice that we get to simply use it here!</p>\n<p>The <code>call</code> method leverages the function provided by the caller to produce a new <code>Ret</code>/<code>Future</code> value per incoming request and hand it back to the web server for processing.</p>\n<p>And finally, our <code>main</code> function can now embed our application logic inside it as a closure. 
This looks like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n    let counter = Arc::new(AtomicUsize::new(0));\n    fakeserver::run(util::app_fn(move |mut req| {\n        &#x2F;&#x2F; need to clone this from the closure before moving it into the async block\n        let counter = counter.clone();\n        async move {\n            println!(&quot;Handling a request for {}&quot;, req.path_and_query);\n            let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n            anyhow::ensure!(counter % 4 != 2, &quot;Failing 25% of the time, just for fun&quot;);\n            req.headers\n                .insert(&quot;X-Counter&quot;.to_owned(), counter.to_string());\n            let res = crate::http::Response {\n                status: 200,\n                headers: req.headers,\n                body: req.body,\n            };\n            Ok::&lt;_, anyhow::Error&gt;(res)\n        }\n    }))\n    .await;\n}\n</code></pre>\n<h3 id=\"side-note-the-extra-clone\">Side note: the extra clone</h3>\n<p>From bitter experience, both my own and that of others I've spoken with, the <code>let counter = counter.clone();</code> line is likely the trickiest piece of the code above. It's all too easy to write code that looks something like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let counter = Arc::new(AtomicUsize::new(0));\nfakeserver::run(util::app_fn(move |_req| async move {\n    let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n    Err(anyhow::anyhow!(\n        &quot;Just demonstrating the problem, counter is {}&quot;,\n        counter\n    ))\n}))\n.await;\n</code></pre>\n<p>This looks perfectly reasonable. We move the <code>counter</code> into the closure and then use it. 
However, the compiler isn't too happy with us:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">error[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n   --&gt; src\\main.rs:96:57\n    |\n95  |       let counter = Arc::new(AtomicUsize::new(0));\n    |           ------- captured outer variable\n96  |       fakeserver::run(util::app_fn(move |_req| async move {\n    |  _________________________________________________________^\n97  | |         let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n    | |                       -------\n    | |                       |\n    | |                       move occurs because `counter` has type `Arc&lt;AtomicUsize&gt;`, which does not implement the `Copy` trait\n    | |                       move occurs due to use in generator\n98  | |         Err(anyhow::anyhow!(\n99  | |             &quot;Just demonstrating the problem, counter is {}&quot;,\n100 | |             counter\n101 | |         ))\n102 | |     }))\n    | |_____^ move out of `counter` occurs here\n</code></pre>\n<p>It's a slightly confusing error message. In my opinion, it's confusing because of the formatting I've used. And I've used that formatting because (1) <code>rustfmt</code> encourages it, and (2) the Hyper docs encourage it. 
Let me reformat a bit, and then explain the issue:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let counter = Arc::new(AtomicUsize::new(0));\nfakeserver::run(util::app_fn(move |_req| {\n    async move {\n        let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n        Err(anyhow::anyhow!(\n            &quot;Just demonstrating the problem, counter is {}&quot;,\n            counter\n        ))\n    }\n}))\n</code></pre>\n<p>The issue is that, in the argument to <code>app_fn</code>, we have two different control structures:</p>\n<ul>\n<li>A move closure, which takes ownership of <code>counter</code> and produces a <code>Future</code></li>\n<li>An <code>async move</code> block, which takes ownership of <code>counter</code></li>\n</ul>\n<p>The issue is that there's only one <code>counter</code> value. It gets moved first into the closure. That means we can't use <code>counter</code> again outside the closure, which we don't try to do. All good. The second thing is that, when that closure is called, the <code>counter</code> value will be moved from the closure into the <code>async move</code> block. That's also fine, but it's only fine once. If you try to call the closure a second time, it would fail, because the <code>counter</code> has already been moved. Therefore, this closure is a <code>FnOnce</code>, not a <code>Fn</code> or <code>FnMut</code>.</p>\n<p>And that's the problem here. As we saw above, we need at least a <code>FnMut</code> as our argument to the fake web server. This makes intuitive sense: we will call our application request handling function multiple times, not just once.</p>\n<p>The fix for this is to clone the <code>counter</code> inside the closure body, but before moving it into the <code>async move</code> block. 
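The same ownership problem, and the same fix, can be reproduced without any async code at all. In this std-only sketch, the outer closure stands in for the `FnMut` handler and the inner `move` closure stands in for the `async move` block:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));

    // Cloning the Arc inside the outer body, before the inner `move`,
    // is what keeps the outer closure FnMut. Delete the clone line and
    // the outer closure becomes FnOnce, and this program stops compiling.
    let mut handler = move || {
        let counter = counter.clone();
        move || counter.fetch_add(1, Ordering::SeqCst)
    };

    // Because the outer closure is FnMut, we can invoke it repeatedly.
    let first = handler()();
    let second = handler()();
    assert_eq!((first, second), (0, 1)); // fetch_add returns the previous value
    println!("ok");
}
```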
That's easy enough:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fakeserver::run(util::app_fn(move |_req| {\n    let counter = counter.clone();\n    async move {\n        let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n        Err(anyhow::anyhow!(\n            &quot;Just demonstrating the problem, counter is {}&quot;,\n            counter\n        ))\n    }\n}))\n</code></pre>\n<p>This is a really subtle point, hopefully this demonstration will help make it clearer.</p>\n<h2 id=\"connections-and-requests\">Connections and requests</h2>\n<p>There's a simplification in our fake web server above. A real HTTP workflow starts off with a new connection, and then handles a stream of requests off of that connection. In other words, instead of having just one service, we really need two services:</p>\n<ol>\n<li>A service like we have above, which accepts <code>Request</code>s and returns <code>Response</code>s</li>\n<li>A service that accepts connection information and returns one of the above services</li>\n</ol>\n<p>Again, leaning on some terse Haskell syntax, we'd want:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">type InnerService = Request -&gt; IO Response\ntype OuterService = ConnectionInfo -&gt; IO InnerService\n</code></pre>\n<p>Or, to borrow some beautiful Java terminology, we want to create a <em>service factory</em> which will take some connection information and return a request handling service. Or, to use Tower/Hyper terminology, we have a <em>service</em>, and a <em>make service</em>. 
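The factory idea can be sketched synchronously with plain closures, no Tower or Hyper involved. In this hypothetical std-only sketch, a make-service is just a function that takes connection info and returns a fresh handler carrying per-connection state:

```rust
// Hypothetical std-only sketch of a "make service": given connection info,
// build a fresh request handler for that connection.
fn make_service(remote_addr: String) -> impl FnMut(&str) -> String {
    let mut requests_served = 0u32; // per-connection state
    move |path: &str| {
        requests_served += 1;
        format!("{} -> {} (request #{})", remote_addr, path, requests_served)
    }
}

fn main() {
    let mut conn_a = make_service("1.2.3.4:5678".to_owned());
    assert_eq!(conn_a("/"), "1.2.3.4:5678 -> / (request #1)");
    assert_eq!(conn_a("/about"), "1.2.3.4:5678 -> /about (request #2)");

    // A second connection gets its own handler with independent state.
    let mut conn_b = make_service("9.9.9.9:1111".to_owned());
    assert_eq!(conn_b("/"), "9.9.9.9:1111 -> / (request #1)");
    println!("ok");
}
```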
Which, if you've ever been confused by the Hyper tutorials like I was, may finally explain why &quot;Hello World&quot; requires both a <code>service_fn</code> and <code>make_service_fn</code> call.</p>\n<p>Anyway, it's too detailed to dive into all the changes necessary to the code above to replicate this concept, but I've <a href=\"https://gist.github.com/snoyberg/b574ef4ece5f23913c6c70b1f4f22ed5\">provided a Gist showing an <code>AppFactoryFn</code></a>.</p>\n<p>And with that... we've finally played around with fake stuff long enough that we can dive into real life Hyper code. Hurrah!</p>\n<h2 id=\"next-time\">Next time</h2>\n<p>Up until this point, we've only played with Tower. The next post in this series is available, where we try to <a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">understand Hyper and experiment with Axum</a>.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part2\">Read part 2 now</a></p>\n<p>If you're looking for more Rust content from FP Complete, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
        "slug": "axum-hyper-tonic-tower-part1",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1",
        "description": "Part 1 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
        "updated": null,
        "date": "2021-08-30",
        "year": 2021,
        "month": 8,
        "day": 30,
        "taxonomies": {
          "tags": [
            "rust"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/axum-hyper-tonic-tower-part1.png",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/axum-hyper-tonic-tower-part1/",
        "components": [
          "blog",
          "axum-hyper-tonic-tower-part1"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "what-is-tower",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#what-is-tower",
            "title": "What is Tower?",
            "children": []
          },
          {
            "level": 2,
            "id": "fake-web-server",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#fake-web-server",
            "title": "Fake web server",
            "children": []
          },
          {
            "level": 2,
            "id": "app-fn",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#app-fn",
            "title": "app_fn",
            "children": [
              {
                "level": 3,
                "id": "side-note-the-extra-clone",
                "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#side-note-the-extra-clone",
                "title": "Side note: the extra clone",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "connections-and-requests",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#connections-and-requests",
            "title": "Connections and requests",
            "children": []
          },
          {
            "level": 2,
            "id": "next-time",
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#next-time",
            "title": "Next time",
            "children": []
          }
        ],
        "word_count": 3168,
        "reading_time": 16,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
            "title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
            "title": "Levana NFT Launch"
          }
        ]
      },
      {
        "relative_path": "blog/rust-asref-asderef.md",
        "colocated_path": null,
        "content": "<p>What's wrong with this program?</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    match option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>The compiler gives us a wonderful error message, including a hint on how to fix it:</p>\n<pre><code>error[E0382]: borrow of partially moved value: `option_name`\n --&gt; src\\main.rs:7:22\n  |\n4 |         Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n  |              ---- value partially moved here\n...\n7 |     println!(&quot;{:?}&quot;, option_name);\n  |                      ^^^^^^^^^^^ value borrowed here after partial move\n  |\n  = note: partial move occurs because value has type `String`, which does not implement the `Copy` trait\nhelp: borrow this field in the pattern to avoid moving `option_name.0`\n  |\n4 |         Some(ref name) =&gt; println!(&quot;Name is {}&quot;, name),\n  |              ^^^\n</code></pre>\n<p>The issue here is that our pattern match on <code>option_name</code> moves the <code>Option&lt;String&gt;</code> value into the match. We can then no longer use <code>option_name</code> after the <code>match</code>. But this is disappointing, because our usage of <code>option_name</code> and <code>name</code> inside the pattern match doesn't actually require moving the value at all! Instead, borrowing would be just fine.</p>\n<p>And that's exactly what the <code>note</code> from the compiler says. We can use the <code>ref</code> keyword in the <a href=\"https://doc.rust-lang.org/stable/reference/patterns.html#identifier-patterns\">identifier pattern</a> to change this behavior and, instead of <em>moving</em> the value, we'll borrow a reference to the value. 
Now we're free to reuse <code>option_name</code> after the <code>match</code>. That version of the code looks like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    match option_name {\n        Some(ref name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>For the curious, you can <a href=\"https://doc.rust-lang.org/std/keyword.ref.html\">read more about the <code>ref</code> keyword</a>.</p>\n<h2 id=\"more-idiomatic\">More idiomatic</h2>\n<p>While this is <em>working</em> code, in my opinion and experience, it's not idiomatic. It's far more common to put the borrow on <code>option_name</code>, like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    match &amp;option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>I like this version more, since it's blatantly obvious that we have no intention of moving <code>option_name</code> in the pattern match. Now <code>name</code> still remains as a reference, <code>println!</code> can use it as a reference, and everything is fine.</p>\n<p>The fact that this code works, however, is a specifically added feature of the language. Before <a href=\"https://rust-lang.github.io/rfcs/2005-match-ergonomics.html\">RFC 2005 &quot;match ergonomics&quot; landed in 2016</a>, the code above would have failed. 
That's because we tried to match the <code>Some</code> constructor against a <em>reference</em> to an <code>Option</code>, and those types don't match up. To borrow the RFC's terminology, getting that code to work would require &quot;a bit of a dance&quot;:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    match &amp;option_name {\n        &amp;Some(ref name) =&gt; println!(&quot;Name is {}&quot;, name),\n        &amp;None =&gt; println!(&quot;No name provided&quot;),\n    }\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>Now all of the types really line up explicitly:</p>\n<ul>\n<li>We have an <code>&amp;Option&lt;String&gt;</code></li>\n<li>We can therefore match on a <code>&amp;Some</code> variant or a <code>&amp;None</code> variant</li>\n<li>In the <code>&amp;Some</code> variant, we need to make sure we borrow the inner value, so we add a <code>ref</code> keyword</li>\n</ul>\n<p>Fortunately, with RFC 2005 in place, this extra noise isn't needed, and we can simplify our pattern match as above. The Rust language is better for this change, and the masses can rejoice.</p>\n<h2 id=\"introducing-as-ref\">Introducing as_ref</h2>\n<p>But what if we didn't have RFC 2005? Would we be required to use the awkward syntax above forever? Thanks to a helper method, no. The problem in our code is that <code>&amp;option_name</code> is a reference to an <code>Option&lt;String&gt;</code>. And we want to pattern match on the <code>Some</code> and <code>None</code> constructors, and capture a <code>&amp;String</code> instead of a <code>String</code> (avoiding the move). RFC 2005 implements that as a direct language feature. 
But there's also a method on <code>Option</code> that does just this: <code>as_ref</code>.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;T&gt; Option&lt;T&gt; {\n    pub const fn as_ref(&amp;self) -&gt; Option&lt;&amp;T&gt; {\n        match *self {\n            Some(ref x) =&gt; Some(x),\n            None =&gt; None,\n        }\n    }\n}\n</code></pre>\n<p>This is another way of avoiding the &quot;dance,&quot; by capturing it in the method definition itself. But thankfully, there's a great language ergonomics feature that captures this pattern, and automatically applies this rule for us. Meaning that <code>as_ref</code> isn't really necessary any more... right?</p>\n<h2 id=\"side-rant-ergonomics-in-rust\">Side rant: ergonomics in Rust</h2>\n<p>I absolutely love the ergonomics features of Rust. There is no &quot;but&quot; in my love for RFC 2005. There is, however, a concern around learning and teaching a language with these kinds of ergonomics. These kinds of features work 99% of the time. But when they fail, as we're about to see, it can come as a large shock.</p>\n<p>I'm guessing most Rustaceans, at least those that learned the language after 2016, never considered the fact that there was something weird about being able to pattern match a <code>Some</code> from an <code>&amp;Option&lt;String&gt;</code> value. It feels natural. It <em>is</em> natural. But because you were never forced to confront this while learning the language, at some point in the distant future you'll crash into a wall when this ergonomic feature doesn't kick in.</p>\n<p>I kind of wish there was a <code>--no-ergonomics</code> flag that we could turn on when learning the language to force us to confront all of these details. But there isn't. I'm hoping blog posts like this help out. 
Anyway, &lt;/rant&gt;.</p>\n<h2 id=\"when-rfc-2005-fails\">When RFC 2005 fails</h2>\n<p>We can fairly easily create a contrived example of match ergonomics failing to solve our problem. Let's &quot;improve&quot; our program above by factoring out the greet logic to its own helper function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn try_greet(option_name: Option&lt;&amp;String&gt;) {\n    match option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n}\n\nfn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    try_greet(&amp;option_name);\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>This code won't compile:</p>\n<pre><code>error[E0308]: mismatched types\n  --&gt; src\\main.rs:10:15\n   |\n10 |     try_greet(&amp;option_name);\n   |               ^^^^^^^^^^^^\n   |               |\n   |               expected enum `Option`, found `&amp;Option&lt;String&gt;`\n   |               help: you can convert from `&amp;Option&lt;T&gt;` to `Option&lt;&amp;T&gt;` using `.as_ref()`: `&amp;option_name.as_ref()`\n   |\n   = note:   expected enum `Option&lt;&amp;String&gt;`\n           found reference `&amp;Option&lt;String&gt;`\n</code></pre>\n<p>Now we've bypassed any ability to use match ergonomics at the call site. With what we know about <code>as_ref</code>, it's easy enough to fix this. 
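For completeness, here's a sketch of that fix: keep `try_greet`'s contrived `Option<&String>` signature, and convert at the call site with `as_ref`:

```rust
fn try_greet(option_name: Option<&String>) {
    match option_name {
        Some(name) => println!("Name is {}", name),
        None => println!("No name provided"),
    }
}

fn main() {
    let option_name: Option<String> = Some("Alice".to_owned());
    // `as_ref` goes from `&Option<String>` to `Option<&String>`,
    // which is exactly what `try_greet` expects. No move occurs.
    try_greet(option_name.as_ref());
    println!("{:?}", option_name);
}
```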
But, at least in my experience, the first time someone runs into this kind of error, it's a bit surprising, since most of us have never previously thought about the distinction between <code>Option&lt;&amp;T&gt;</code> and <code>&amp;Option&lt;T&gt;</code>.</p>\n<p>These kinds of errors tend to pop up when combining other helper functions, such as <code>map</code>, which circumvent the need for explicit pattern matching.</p>\n<p>As an aside, you could solve this compile error pretty easily, without resorting to <code>as_ref</code>. Instead, you could change the type signature of <code>try_greet</code> to take a <code>&amp;Option&lt;String&gt;</code> instead of an <code>Option&lt;&amp;String&gt;</code>, and then allow the match ergonomics to kick in within the body of <code>try_greet</code>. One reason not to do this is that, as mentioned, this was all a contrived example to demonstrate a failure. But the other reason is more important: neither <code>&amp;Option&lt;String&gt;</code> nor <code>Option&lt;&amp;String&gt;</code> is a good argument type. Let's explore that next.</p>\n<h2 id=\"when-as-ref-fails\">When as_ref fails</h2>\n<p>We're taught pretty early in our Rust careers that, when receiving an argument to a function, we should prefer taking references to slices instead of references to owned objects. In other words:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn greet_good(name: &amp;str) {\n    println!(&quot;Name is {}&quot;, name);\n}\n\nfn greet_bad(name: &amp;String) {\n    println!(&quot;Name is {}&quot;, name);\n}\n</code></pre>\n<p>And in fact, if you run this code through <code>clippy</code>, it will tell you to change the signature of <code>greet_bad</code>.
The <a href=\"https://rust-lang.github.io/rust-clippy/master/index.html#ptr_arg\">clippy lint description</a> provides a great explanation of this, but suffice it to say that <code>greet_good</code> is more general in what it accepts than <code>greet_bad</code>.</p>\n<p>The same logic applies to <code>try_greet</code>. Why should we accept <code>Option&lt;&amp;String&gt;</code> instead of <code>Option&lt;&amp;str&gt;</code>? And interestingly, clippy doesn't complain in this case like it did in <code>greet_bad</code>. To see why, let's change our signature like so and see what happens:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn try_greet(option_name: Option&lt;&amp;str&gt;) {\n    match option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n}\n\nfn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    try_greet(option_name.as_ref());\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>This code no longer compiles:</p>\n<pre><code>error[E0308]: mismatched types\n  --&gt; src\\main.rs:10:15\n   |\n10 |     try_greet(option_name.as_ref());\n   |               ^^^^^^^^^^^^^^^^^^^^ expected `str`, found struct `String`\n   |\n   = note: expected enum `Option&lt;&amp;str&gt;`\n              found enum `Option&lt;&amp;String&gt;`\n</code></pre>\n<p>This is another example of ergonomics failing. You see, when you call a function with an argument of type <code>&amp;String</code>, but the function expects a <code>&amp;str</code>, <a href=\"https://doc.rust-lang.org/book/ch15-02-deref.html#implicit-deref-coercions-with-functions-and-methods\">deref coercion</a> kicks in and will perform a conversion for you. This is a piece of Rust ergonomics that we all rely on regularly, and every once in a while it completely fails to help us. This is one of those times. 
The compiler will not automatically convert an <code>Option&lt;&amp;String&gt;</code> into an <code>Option&lt;&amp;str&gt;</code>.</p>\n<p>(You can also read more about <a href=\"https://doc.rust-lang.org/nomicon/coercions.html\">coercions in the nomicon</a>.)</p>\n<p>Fortunately, there's another helper method on <code>Option</code> that does this for us. <code>as_deref</code> works just like <code>as_ref</code>, but additionally performs a <code>deref</code> method call on the value. Its implementation in <code>std</code> is interesting:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;T: Deref&gt; Option&lt;T&gt; {\n    pub fn as_deref(&amp;self) -&gt; Option&lt;&amp;T::Target&gt; {\n        self.as_ref().map(|t| t.deref())\n    }\n}\n</code></pre>\n<p>But we can also implement it more explicitly to see the behavior spelled out:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::ops::Deref;\n\nfn try_greet(option_name: Option&lt;&amp;str&gt;) {\n    match option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n}\n\nfn my_as_deref&lt;T: Deref&gt;(x: &amp;Option&lt;T&gt;) -&gt; Option&lt;&amp;T::Target&gt; {\n    match *x {\n        None =&gt; None,\n        Some(ref t) =&gt; Some(t.deref())\n    }\n}\n\nfn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    try_greet(my_as_deref(&amp;option_name));\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>And to bring this back to something closer to real-world code, here's a case where combining <code>as_deref</code> and <code>map</code> leads to much cleaner code than you'd otherwise have:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn greet(name: &amp;str) {\n    println!(&quot;Name is {}&quot;, name);\n}\n\nfn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    option_name.as_deref().map(greet);\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<h2 id=\"real-ish-life-example\">Real-ish life example</h2>\n<p>Like most of my blog posts, this one was inspired by some real-world code. To simplify the concept down a bit, I was parsing a config file, and ended up with an <code>Option&lt;String&gt;</code>. I needed some code that would either provide the value from the config, or default to a static string in the source code. Without <code>as_deref</code>, I could have used <code>STATIC_STRING_VALUE.to_string()</code> to get types to line up, but that would have been ugly and inefficient. Here's a somewhat intact representation of that code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use serde::Deserialize;\n\n#[derive(Deserialize)]\nstruct Config {\n    some_value: Option&lt;String&gt;\n}\n\nconst DEFAULT_VALUE: &amp;str = &quot;my-default-value&quot;;\n\nfn main() {\n    let mut file = std::fs::File::open(&quot;config.yaml&quot;).unwrap();\n    let config: Config = serde_yaml::from_reader(&amp;mut file).unwrap();\n    let value = config.some_value.as_deref().unwrap_or(DEFAULT_VALUE);\n    println!(&quot;value is {}&quot;, value);\n}\n</code></pre>\n<p>Want to learn more Rust with FP Complete? Check out these links:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training courses</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"/tags/rust/\">Rust tagged articles</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust homepage</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/",
        "slug": "rust-asref-asderef",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Rust's as_ref vs as_deref",
        "description": "A short analysis of when to use the Option methods as_ref and as_deref",
        "updated": null,
        "date": "2021-07-05",
        "year": 2021,
        "month": 7,
        "day": 5,
        "taxonomies": {
          "tags": [
            "rust"
          ],
          "categories": [
            "functional programming",
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/rust-asref-asderef.png"
        },
        "path": "/blog/rust-asref-asderef/",
        "components": [
          "blog",
          "rust-asref-asderef"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "more-idiomatic",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#more-idiomatic",
            "title": "More idiomatic",
            "children": []
          },
          {
            "level": 2,
            "id": "introducing-as-ref",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#introducing-as-ref",
            "title": "Introducing as_ref",
            "children": []
          },
          {
            "level": 2,
            "id": "side-rant-ergonomics-in-rust",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#side-rant-ergonomics-in-rust",
            "title": "Side rant: ergonomics in Rust",
            "children": []
          },
          {
            "level": 2,
            "id": "when-rfc-2005-fails",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#when-rfc-2005-fails",
            "title": "When RFC 2005 fails",
            "children": []
          },
          {
            "level": 2,
            "id": "when-as-ref-fails",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#when-as-ref-fails",
            "title": "When as_ref fails",
            "children": []
          },
          {
            "level": 2,
            "id": "real-ish-life-example",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#real-ish-life-example",
            "title": "Real-ish life example",
            "children": []
          }
        ],
        "word_count": 1822,
        "reading_time": 10,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/intermediate-training-courses.md",
        "colocated_path": null,
        "content": "<p>I'm happy to announce that over the next few months, FP Complete will be offering intermediate training courses on both Haskell and Rust. This is a follow up to our previous beginner courses on both languages as well. I'm excited to get to teach both of these courses.</p>\n<p>More details below, but cutting to the chase: if you'd like to sign up, or just get more information on these courses, please <a href=\"mailto:[email protected]\">email [email protected]</a>.</p>\n<h2 id=\"overall-structure\">Overall structure</h2>\n<p>Each course consists of:</p>\n<ul>\n<li>Four sessions, held on Sunday, 1500 UTC, 8am Pacific time, 5pm Central European</li>\n<li>Each session is three hours, with a ten minute break</li>\n<li>Slides, exercises, and recordings will be provided to all participants</li>\n<li>Private Discord chat room is available to those interested to interact with other students and the teacher, kept open after the course finishes</li>\n</ul>\n<h2 id=\"dates\">Dates</h2>\n<p>We'll be holding these courses on the following dates</p>\n<ul>\n<li>Haskell\n<ul>\n<li>June 13</li>\n<li>June 20</li>\n<li>July 11</li>\n<li>July 25</li>\n</ul>\n</li>\n<li>Rust\n<ul>\n<li>August 8</li>\n<li>August 15</li>\n<li>August 22</li>\n<li>August 29</li>\n</ul>\n</li>\n</ul>\n<h2 id=\"cost-and-signup\">Cost and signup</h2>\n<p>Each course costs $150 per participant. Please register and arrange payment (via PayPal or Venmo) by contacting <a href=\"mailto:[email protected]\">[email protected]</a>.</p>\n<h2 id=\"topics-covered\">Topics covered</h2>\n<p>Before the course begins, and throughout the course, I'll ask participants for feedback on additional topics to cover, and tune the course appropriately. 
Below is the basis of the course which we'll focus on:</p>\n<ul>\n<li>Haskell (based largely on our <a href=\"https://tech.fpcomplete.com/haskell/syllabus/\">Applied Haskell syllabus</a>)\n<ul>\n<li>Data structures (<code>bytestring</code>, <code>text</code>, <code>containers</code> and <code>vector</code>)</li>\n<li>Evaluation order</li>\n<li>Mutable variables</li>\n<li>Concurrent programming (<code>async</code> and <code>stm</code>)</li>\n<li>Exception safety</li>\n<li>Testing</li>\n<li>Data serialization</li>\n<li>Web clients and servers</li>\n<li>Streaming data</li>\n</ul>\n</li>\n<li>Rust\n<ul>\n<li>Error handling</li>\n<li>Closures</li>\n<li>Multithreaded programming</li>\n<li><code>async</code>/<code>.await</code> and Tokio</li>\n<li>Basics of <code>unsafe</code></li>\n<li>Macros</li>\n<li>Testing and benchmarks</li>\n</ul>\n</li>\n</ul>\n<h2 id=\"want-to-learn-more\">Want to learn more?</h2>\n<p>Not sure if this is right for you? Feel free to <a href=\"https://twitter.com/snoyberg\">hit me up on Twitter</a> for more information, or <a href=\"mailto:[email protected]\">contact [email protected]</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/",
        "slug": "intermediate-training-courses",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Intermediate Training Courses - Haskell and Rust",
        "description": "Announcing two more training courses, covering intermediate Haskell and Rust topics. Sign up today!",
        "updated": null,
        "date": "2021-06-03",
        "year": 2021,
        "month": 6,
        "day": 3,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "blogimage": "/images/blog-listing/functional.png",
          "image": "images/blog/thumbs/intermediate-training-courses.png"
        },
        "path": "/blog/intermediate-training-courses/",
        "components": [
          "blog",
          "intermediate-training-courses"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "overall-structure",
            "permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#overall-structure",
            "title": "Overall structure",
            "children": []
          },
          {
            "level": 2,
            "id": "dates",
            "permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#dates",
            "title": "Dates",
            "children": []
          },
          {
            "level": 2,
            "id": "cost-and-signup",
            "permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#cost-and-signup",
            "title": "Cost and signup",
            "children": []
          },
          {
            "level": 2,
            "id": "topics-covered",
            "permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#topics-covered",
            "title": "Topics covered",
            "children": []
          },
          {
            "level": 2,
            "id": "want-to-learn-more",
            "permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#want-to-learn-more",
            "title": "Want to learn more?",
            "children": []
          }
        ],
        "word_count": 312,
        "reading_time": 2,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/tying-the-knot-haskell.md",
        "colocated_path": null,
        "content": "<p>This post has nothing to do with marriage. Tying the knot is, in my opinion at least, a relatively obscure technique you can use in Haskell to address certain corner cases. I've used it myself only a handful of times, one of which I'll reference below. I preface it like this to hopefully make clear: tying the knot is a fine technique to use in certain cases, but don't consider it a general technique that you should need regularly. It's not nearly as generally useful as something like <a href=\"https://tech.fpcomplete.com/haskell/library/stm/\">Software Transactional Memory</a>.</p>\n<p>That said, you're still interested in this technique, and are still reading this post. Great! Let's get started where all bad Haskell code starts: C++.</p>\n<h2 id=\"doubly-linked-lists\">Doubly linked lists</h2>\n<p>Typically I'd demonstrate imperative code in Rust, but <a href=\"https://rust-unofficial.github.io/too-many-lists/\">it's not a good idea for this case</a>. So we'll start off with a very simple doubly linked list implementation in C++. And by &quot;very simple&quot; I should probably say &quot;very poorly written,&quot; since I'm out of practice.</p>\n<p><img src=\"/images/haskell/cpp-is-rusty.png\" alt=\"Rusty C++\" /></p>\n<p>Anyway, reading the entire code isn't necessary to get the point across. Let's look at some relevant bits. We define a node of the list like this, including a nullable pointer to the previous and next node in the list:</p>\n<pre data-lang=\"cpp\" class=\"language-cpp \"><code class=\"language-cpp\" data-lang=\"cpp\">template &lt;typename T&gt; class Node {\npublic:\n  Node(T value) : value(value), prev(NULL), next(NULL) {}\n  Node *prev;\n  T value;\n  Node *next;\n};\n</code></pre>\n<p>When you add the first node to the list, you set the new node's previous and next values to <code>NULL</code>, and the list's first and last values to the new node. The more interesting case is when you already have something in the list. 
To add a new node to the back of the list, you need some code that looks like the following:</p>\n<pre data-lang=\"cpp\" class=\"language-cpp \"><code class=\"language-cpp\" data-lang=\"cpp\">node-&gt;prev = this-&gt;last;\nthis-&gt;last-&gt;next = node;\nthis-&gt;last = node;\n</code></pre>\n<p>For those (like me) not fluent in C++, I'm making three mutations:</p>\n<ol>\n<li>Mutating the new node's <code>prev</code> member to point to the currently last node of the list.</li>\n<li>Mutating the currently last node's <code>next</code> member to point at the new node.</li>\n<li>Mutating the list itself so that its <code>last</code> member points to the new node.</li>\n</ol>\n<p>Point being in all of this: there's a lot of mutation going on in order to create a double linked list. Contrast that with singly linked lists in Haskell, which are immutable data structures and require no mutation at all.</p>\n<p>Anyway, I've written my annual quota of C++ at this point, it's time to go back to Haskell.</p>\n<h2 id=\"riih-rewrite-it-in-haskell\">RIIH (Rewrite it in Haskell)</h2>\n<p>Using <code>IORef</code>s and lots of <code>IO</code> calls everywhere, it's possible to reproduce the C++ concept of a mutable doubly linked list in Haskell. Full code is <a href=\"https://gist.github.com/snoyberg/5de410aba87a4208b7c701e954c61d9d\">available in a Gist</a>, but let's step through the important bits. 
Our core data types look quite like the C++ version, but with <code>IORef</code> and <code>Maybe</code> sprinkled in for good measure:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Node a = Node\n    { prev  :: IORef (Maybe (Node a))\n    , value :: a\n    , next  :: IORef (Maybe (Node a))\n    }\n\ndata List a = List\n    { first :: IORef (Maybe (Node a))\n    , last :: IORef (Maybe (Node a))\n    }\n</code></pre>\n<p>And adding a new value to a non-empty list looks like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">node &lt;- Node &lt;$&gt; newIORef (Just last&#x27;) &lt;*&gt; pure value &lt;*&gt; newIORef Nothing\nwriteIORef (next last&#x27;) (Just node)\nwriteIORef (last list) (Just node)\n</code></pre>\n<p>Notice that, like in the C++ code, we need to perform mutations on the existing node and the <code>last</code> member of the list.</p>\n<p>This certainly works, but it probably feels less than satisfying to a Haskeller:</p>\n<ul>\n<li>I don't love the idea of mutations all over the place.</li>\n<li>The code looks and feels ugly.</li>\n<li>I can't access the values of the list from pure code.</li>\n</ul>\n<p>So the challenge is: can we write a doubly linked list in Haskell in pure code?</p>\n<h2 id=\"defining-our-data\">Defining our data</h2>\n<p>I'll warn you in advance. Every single time I've written code that &quot;ties the knot&quot; in Haskell, I've gone through at least two stages:</p>\n<ol>\n<li>This doesn't make any sense, there's no way this is going to work, what exactly am I doing?</li>\n<li>Oh, it's done, how exactly did that work?</li>\n</ol>\n<p>It happened while writing the code below. You're likely to have the same feeling while reading this of &quot;wait, what? I don't get it, huh?&quot;</p>\n<p>Anyway, let's start off by defining our data types. 
We didn't like the fact that we had <code>IORef</code> all over the place. So let's just get rid of it!</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Node a = Node\n    { prev  :: Maybe (Node a)\n    , value :: a\n    , next  :: Maybe (Node a)\n    }\n\ndata List a = List\n    { first :: Maybe (Node a)\n    , last :: Maybe (Node a)\n    }\n</code></pre>\n<p>We still have <code>Maybe</code> to indicate the presence or absence of nodes before or after our own. That translation is pretty easy. The problem is going to arise when we try to build such a structure, since we've seen that we need mutation to make it happen. We'll need to rethink our API to get going.</p>\n<h2 id=\"non-mutable-api\">Non-mutable API</h2>\n<p>The first change we need to consider is getting rid of the <em>concept</em> of mutation in the API. Previously, we had functions like <code>pushBack</code> and <code>popBack</code>, which were inherently mutating. Instead, we should be thinking in terms of immutable data structures and APIs.</p>\n<p>We already know all about singly linked lists, the venerable <code>[]</code> data type. Let's see if we can build a function that will let us construct a doubly linked list from a singly linked list. In other words:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList :: [a] -&gt; List a\n</code></pre>\n<p>Let's knock out two easy cases first. An empty list should end up with no nodes at all. That clause would be:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList [] = List Nothing Nothing\n</code></pre>\n<p>The next easy case is a single value in the list. This ends up with a single node with no pointers to other nodes, and a <code>first</code> and <code>last</code> field that both point to that one node. 
Again, fairly easy, no knot tying required:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList [x] =\n    let node = Node Nothing x Nothing\n     in List (Just node) (Just node)\n</code></pre>\n<p>OK, that's too easy. Let's kick it up a notch.</p>\n<h2 id=\"two-element-list\">Two-element list</h2>\n<p>To get into things a bit more gradually, let's handle the two-element case next, instead of the general case of &quot;2 or more&quot;, which is a bit more complicated. We need to:</p>\n<ol>\n<li>Construct a first node that points at the last node</li>\n<li>Construct a last node that points at the first node</li>\n<li>Construct a list that points at both the first and last nodes</li>\n</ol>\n<p>Step (3) isn't too hard. Step (2) doesn't sound too bad either, since presumably the first node already exists at that point. The problem appears to be step (1). How can we construct a first node that points at the second node, when we haven't constructed the second node yet? Let me show you how:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList [x, y] =\n    let firstNode = Node Nothing x (Just lastNode)\n        lastNode = Node (Just firstNode) y Nothing\n     in List (Just firstNode) (Just lastNode)\n</code></pre>\n<p>If that code doesn't confuse or bother you, you've probably already learned about tying the knot. This seems to make no sense. I'm referring to <code>lastNode</code> while constructing <code>firstNode</code>, and referring to <code>firstNode</code> while constructing <code>lastNode</code>. This kind of makes me think of an <a href=\"https://en.wikipedia.org/wiki/Ouroboros\">Ouroboros</a>, or a snake eating its own tail:</p>\n<p><img src=\"/images/haskell/ouroboros.jpeg\" alt=\"Ouroboros\" /></p>\n<p>In a normal programming language, this concept wouldn't make sense.
We'd need to define <code>firstNode</code> first with a null pointer for <code>next</code>. Then we could define <code>lastNode</code>. And then we could mutate <code>firstNode</code>'s <code>next</code> to point to the last node. But not in Haskell! Why? Because of <em>laziness</em>. Thanks to laziness, both <code>firstNode</code> and <code>lastNode</code> are initially created as thunks. Their contents need not exist yet. But thankfully, we can still create pointers to these not-fully-evaluated values.</p>\n<p>With those pointers available, we can then define an expression for each of these that leverages the pointer of the other. And we have now, successfully, tied the knot.</p>\n<h2 id=\"expanding-beyond-two\">Expanding beyond two</h2>\n<p>Expanding beyond two elements follows the exact same pattern, but (at least in my opinion) is significantly more complicated. I implemented it by writing a helper function, <code>buildNodes</code>, which (somewhat spookily) takes the previous node in the list as a parameter, and returns back the next node and the final node in the list. Let's see all of this in action:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList (x:y:ys) =\n    let firstNode = Node Nothing x (Just secondNode)\n        (secondNode, lastNode) = buildNodes firstNode y ys\n     in List (Just firstNode) (Just lastNode)\n\n-- | Takes the previous node in the list, the current value, and all following\n-- values. 
-- Returns the current node as well as the final node constructed in\n-- this list.\nbuildNodes :: Node a -&gt; a -&gt; [a] -&gt; (Node a, Node a)\nbuildNodes prevNode value [] =\n    let node = Node (Just prevNode) value Nothing\n     in (node, node)\nbuildNodes prevNode value (x:xs) =\n    let node = Node (Just prevNode) value (Just nextNode)\n        (nextNode, lastNode) = buildNodes node x xs\n     in (node, lastNode)\n</code></pre>\n<p>Notice that in <code>buildList</code>, we're using the same kind of trick to use <code>secondNode</code> to construct <code>firstNode</code>, and <code>firstNode</code> is a parameter passed to <code>buildNodes</code> that is used to construct <code>secondNode</code>.</p>\n<p>Within <code>buildNodes</code>, we have two clauses. The first clause is one of those simpler cases: we've only got one value left, so we create a terminal node that points back at previous. No knot tying required. The second clause, however, once again uses the knot tying technique, together with a recursive call to <code>buildNodes</code> to build up the rest of the nodes in the list.</p>\n<p>The full code is <a href=\"https://gist.github.com/snoyberg/876ad1ad0f106c80239bf098a6965a53\">available as a Gist</a>. I recommend reading through the code a few times until you feel comfortable with it. When you have a good grasp on what's going on, try implementing it from scratch yourself.</p>\n<h2 id=\"limitation\">Limitation</h2>\n<p>It's important to understand a limitation of this approach versus both mutable doubly linked lists and singly linked lists. With singly linked lists, I can easily construct a new singly linked list by <code>cons</code>ing a new value to the front. Or I can drop a few values from the front and cons some new values in front of that new tail.
In other words, I can construct new values based on old values as much as I want.</p>\n<p>Similarly, with mutable doubly linked lists, I'm free to mutate at will, changing my existing data structure. This behaves slightly differently from constructing new singly linked lists, and falls into the same category of mutable-vs-immutable data structures that Haskellers know and love so well. If you want a refresher, check out:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/haskell/tutorial/data-structures/\">Data structures</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/library/vector/\">vector</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/tutorial/mutable-variables/\">Mutable variables</a></li>\n</ul>\n<p>None of these apply with a tie-the-knot approach to data structures. Once you construct this doubly linked list, it is locked in place. If you try to prepend a new node to the front of this list, you'll find that you cannot update the <code>prev</code> pointer in the old first node.</p>\n<p>There is a workaround. You can construct a brand new doubly linked list using the values in the original. A common way to do this would be to provide a conversion function back from your <code>List a</code> to a <code>[a]</code>. Then you could prepend a value to a doubly linked list with some code like:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">let oldList = buildList [2..10]\n    newList = buildList $ 1 : toSinglyLinkedList oldList\n</code></pre>\n<p>However, unlike singly linked lists, we lose all possibilities of data sharing, at least at the structure level (the values themselves can still be shared).</p>\n<h2 id=\"why-tie-the-knot\">Why tie the knot?</h2>\n<p>That's a cool trick, but is it actually useful? In some situations, absolutely! One example I've worked on is in the <a href=\"https://www.stackage.org/package/xml-conduit\">xml-conduit</a> package.
Some people may be familiar with XPath, a pretty nice standard for XML traversals. It allows you to say things like &quot;find the first <code>ul</code> tag in the document, then find the <code>p</code> tag before that, and tell me its <code>id</code> attribute.&quot;</p>\n<p>A simple implementation of an XML data type in Haskell may look like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Element = Element Name (Map Name AttributeValue) [Node]\ndata Node\n    = NodeElement Element\n    | NodeContent Text\n</code></pre>\n<p>Using this kind of data structure, it would be pretty difficult to implement the traversal that I just described. You would need to write logic to keep track of where you are in the document, and then implement logic to say &quot;OK, given that I was in the third child of the second child of the sixth child, what are all of the nodes that came before me?&quot;</p>\n<p>Instead, in <code>xml-conduit</code>, we use knot tying to create a data structure called a <a href=\"https://www.stackage.org/haddock/nightly-2021-05-23/xml-conduit-1.9.1.1/Text-XML-Cursor.html#t:Cursor\"><code>Cursor</code></a>. A <code>Cursor</code> not only keeps track of its own contents, but also contains a pointer to its parent cursor, its predecessor cursors, its following cursors, and its child cursors. You can then traverse the tree with ease. 
The traversal above would be implemented as:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">#!&#x2F;usr&#x2F;bin&#x2F;env stack\n-- stack --resolver lts-17.12 script\n{-# LANGUAGE OverloadedStrings #-}\nimport qualified Text.XML as X\nimport Text.XML.Cursor\n\nmain :: IO ()\nmain = do\n    doc &lt;- X.readFile X.def &quot;input.xml&quot;\n    let cursor = fromDocument doc\n    print $ cursor $&#x2F;&#x2F; element &quot;ul&quot; &gt;=&gt; precedingSibling &gt;=&gt; element &quot;p&quot; &gt;=&gt; attribute &quot;id&quot;\n</code></pre>\n<p>You can test this out yourself with this sample input document:</p>\n<pre data-lang=\"xml\" class=\"language-xml \"><code class=\"language-xml\" data-lang=\"xml\">&lt;foo&gt;\n    &lt;bar&gt;\n        &lt;baz&gt;\n            &lt;p id=&quot;hello&quot;&gt;Something&lt;&#x2F;p&gt;\n            &lt;ul&gt;\n                &lt;li&gt;Bye!&lt;&#x2F;li&gt;\n            &lt;&#x2F;ul&gt;\n        &lt;&#x2F;baz&gt;\n    &lt;&#x2F;bar&gt;\n&lt;&#x2F;foo&gt;\n</code></pre>\n<h2 id=\"should-i-tie-the-knot\">Should I tie the knot?</h2>\n<p><em>Insert bad marriage joke here</em></p>\n<p>Like most techniques in programming in general, and Haskell in particular, it can be tempting to go off and look for a use case to throw this technique at. The use cases definitely exist. I think <code>xml-conduit</code> is one of them. But let me point out that it's the <em>only</em> example I can think of in my career as a Haskeller where tying the knot was a great solution to the problem. There are similar cases out there that I'd include too (such as JSON document traversal).</p>\n<p>Is it worth learning the technique? Yeah, definitely. It's a mind-expanding move. It helps you internalize concepts of laziness just a bit better. It's really fun and mind-bending. 
But don't rush off to rewrite your code to use a relatively niche technique.</p>\n<p>If anyone's wondering, this blog post came out of a question that popped up during a Haskell training course. If you'd like to come learn some Haskell and dive into weird topics like this, come find out more about <a href=\"https://tech.fpcomplete.com/training/\">FP Complete's training programs</a>. We're gearing up for some intermediate Haskell and Rust courses soon, so add your name to the list if you want to get more information.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/",
        "slug": "tying-the-knot-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Tying the Knot in Haskell",
        "description": "An overview of a somewhat obscure technique in Haskell code, when you can use it, and its limitations.",
        "updated": null,
        "date": "2021-05-25",
        "year": 2021,
        "month": 5,
        "day": 25,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "blogimage": "/images/blog-listing/functional.png",
          "image": "images/blog/tying-the-knot-haskell.png"
        },
        "path": "/blog/tying-the-knot-haskell/",
        "components": [
          "blog",
          "tying-the-knot-haskell"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "doubly-linked-lists",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#doubly-linked-lists",
            "title": "Doubly linked lists",
            "children": []
          },
          {
            "level": 2,
            "id": "riih-rewrite-it-in-haskell",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#riih-rewrite-it-in-haskell",
            "title": "RIIH (Rewrite it in Haskell)",
            "children": []
          },
          {
            "level": 2,
            "id": "defining-our-data",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#defining-our-data",
            "title": "Defining our data",
            "children": []
          },
          {
            "level": 2,
            "id": "non-mutable-api",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#non-mutable-api",
            "title": "Non-mutable API",
            "children": []
          },
          {
            "level": 2,
            "id": "two-element-list",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#two-element-list",
            "title": "Two-element list",
            "children": []
          },
          {
            "level": 2,
            "id": "expanding-beyond-two",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#expanding-beyond-two",
            "title": "Expanding beyond two",
            "children": []
          },
          {
            "level": 2,
            "id": "limitation",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#limitation",
            "title": "Limitation",
            "children": []
          },
          {
            "level": 2,
            "id": "why-tie-the-knot",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#why-tie-the-knot",
            "title": "Why tie the knot?",
            "children": []
          },
          {
            "level": 2,
            "id": "should-i-tie-the-knot",
            "permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#should-i-tie-the-knot",
            "title": "Should I tie the knot?",
            "children": []
          }
        ],
        "word_count": 2453,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/syllabus/",
            "title": "Applied Haskell Syllabus"
          }
        ]
      },
      {
        "relative_path": "blog/pains-path-parsing.md",
        "colocated_path": null,
        "content": "<p>I've spent a considerable amount of coding time getting into the weeds of path parsing and generation in web applications. First with <a href=\"https://www.yesodweb.com/\">Yesod in Haskell</a>, and more recently with a side project for <a href=\"https://github.com/snoyberg/routetype-rs\">routetypes in Rust</a>. (Side note: I'll likely do some blogging and/or videos about that project in the future, stay tuned.) My recent work reminded me of a bunch of the pain points involved here. And as so often happens, I was complaining to my wife about these pain points, and decided to write a blog post about it.</p>\n<p>First off, there are plenty of pain points I'm not going to address. For example, the insane world of percent encoding, and the different rules for what part of the URL you're in, is a constant source of misery and mistakes. Little things like required leading forward slashes, or whether query string parameters should differentiate between &quot;no value provided&quot; (e.g. <code>?foo</code>) versus &quot;empty value provided&quot; (e.g. <code>?foo=</code>). But I'll restrict myself to just one aspect: <strong>roundtripping path segments and rendered paths</strong>.</p>\n<h2 id=\"what-s-a-path\">What's a path?</h2>\n<p>Let's take this blog post's URL: <code>https://www.fpcomplete.com/blog/pains-path-parsing/</code>. We can break it up into four logical pieces:</p>\n<ul>\n<li><code>https</code> is the <em>scheme</em></li>\n<li><code>://</code> is a required part of the URL syntax</li>\n<li><code>www.fpcomplete.com</code> is the <em>authority</em>. You may be wondering: isn't it just the domain name? Well, yes. 
But the authority may contain additional information too, like the port number, username, and password</li>\n<li><code>/blog/pains-path-parsing/</code> is the path, including the leading and trailing forward slashes</li>\n</ul>\n<p>This URL doesn't include them, but URLs may also include query strings, like <code>?source=rss</code>, and fragments, like <code>#what-s-a-path</code>. But we just care about that <code>path</code> component.</p>\n<p>The first way to think of a path is as a string. And by string, I mean a sequence of characters. And by sequence of characters, I really mean Unicode code points. (See how ridiculously pedantic I'm getting? Yeah, that's important.) But that's not true at all. To demonstrate, here's some Rust code that uses Hebrew letters in the path:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let uri = http::Uri::builder().path_and_query(&quot;&#x2F;hello&#x2F;מיכאל&#x2F;&quot;).build();\n    println!(&quot;{:?}&quot;, uri);\n}\n</code></pre>\n<p>And while that looks nice and simple, it fails spectacularly with the error message:</p>\n<pre><code>Err(http::Error(InvalidUri(InvalidUriChar)))\n</code></pre>\n<p>In reality, according to <a href=\"https://tools.ietf.org/html/rfc3986#section-2\">the RFC</a>, paths are made up of a limited set of ASCII characters, represented as octets (raw bytes). And we somehow have to use percent encoding to represent other characters.</p>\n<p>But before we can really talk about encoding and representing, we have to ask another orthogonal question.</p>\n<h2 id=\"what-do-paths-represent\">What do paths represent?</h2>\n<p>While a path is technically a sequence of a limited set of ASCII octets, that's not how our applications treat them. Instead, we <em>want</em> to be able to talk about the full range of Unicode code points. But it's more than just that. We want to be able to talk about <em>groupings</em> of <em>sequences</em>. 
We call these <em>segments</em> typically. The raw path <code>/hello/world</code> can be thought of as the segments <code>[&quot;hello&quot;, &quot;world&quot;]</code>. I would call this <em>parsing</em> the path. And, in reverse, we can <em>render</em> those segments back into the original raw path.</p>\n<p>With these kinds of parse/render pairs, it's always nice to have complete roundtripping abilities. In other words, <code>parse(render(x)) == x</code> and <code>render(parse(x)) == x</code>. Generally these rules fail for a variety of reasons, such as:</p>\n<ol>\n<li>Multiple valid representations. For example, with the percent encoding we'll mention below, <code>%2a</code> and <code>%2A</code> mean the same thing.</li>\n<li>Often unimportant whitespace details get lost during parsing. This applies to formats like JSON, where <code>[true, false]</code> and <code>[   true,   false  ]</code> have the same meaning.</li>\n<li>Parsing can fail, so that it's invalid to call <code>render</code> on <code>parse(x)</code>.</li>\n</ol>\n<p>Because of this, we often end up reducing our goals to something like: for all <code>x</code>, <code>parse(render(x))</code> is successful, and produces output identical to <code>x</code>.</p>\n<p>In path parsing, we definitely have problem (1) above (multiple valid representations). But by using this simplified goal, we no longer worry about that problem. Paths in URLs also don't have unimportant whitespace details (every octet has meaning), so (2) isn't a problem to be concerned with. Even if it was, our <code>parse(render(x))</code> step would end up &quot;fixing&quot; it.</p>\n<p>The final point is interesting, and is going to be crucial to our complete solution. What exactly does it mean for path parsing to fail? I can think of two ideas in basic path parsing:</p>\n<ul>\n<li>It includes an octet outside of the allowed range</li>\n<li>It includes a percent encoding which is invalid, e.g. 
<code>%@@</code></li>\n</ul>\n<p>Let's assume for the rest of this post, however, that those have been dealt with at a previous step, and we know for a fact that those error conditions will not occur. Are there any other ways for parsing to fail? In a basic sense: no. In a more sophisticated parsing: absolutely.</p>\n<h2 id=\"basic-rendering\">Basic rendering</h2>\n<p>The basic rendering steps are fairly straightforward:</p>\n<ul>\n<li>Perform percent encoding on each segment</li>\n<li>Interpolate the segments with a slash separator</li>\n<li>Prepend a slash to the entire string</li>\n</ul>\n<p>To allow roundtripping, we need to ensure that each <em>input</em> to the <code>render</code> function generates a unique output. Unfortunately, with these basic rendering steps, we immediately run into an error:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">render segs = &quot;&#x2F;&quot; ++ interpolate &#x27;&#x2F;&#x27; (map percentEncode segs)\n\nrender []\n    = &quot;&#x2F;&quot; ++ interpolate &#x27;&#x2F;&#x27; (map percentEncode [])\n    = &quot;&#x2F;&quot; ++ interpolate &#x27;&#x2F;&#x27; []\n    = &quot;&#x2F;&quot; ++ &quot;&quot;\n    = &quot;&#x2F;&quot;\n\nrender [&quot;&quot;]\n    = &quot;&#x2F;&quot; ++ interpolate &#x27;&#x2F;&#x27; (map percentEncode [&quot;&quot;])\n    = &quot;&#x2F;&quot; ++ interpolate &#x27;&#x2F;&#x27; [&quot;&quot;]\n    = &quot;&#x2F;&quot; ++ &quot;&quot;\n    = &quot;&#x2F;&quot;\n</code></pre>\n<p>In other words, both <code>[]</code> and <code>[&quot;&quot;]</code> encode to the same raw path, <code>/</code>. This may seem like a trivial corner case not worth addressing. In fact, even more generally, empty path segments seem like a corner case. One possibility would be to say &quot;segments must be non-zero length&quot;. 
Then there's no potential <code>[&quot;&quot;]</code> input to worry about.</p>\n<p>When this topic came up in Yesod, we decided to approach this differently. We actually <em>did</em> have some people who had use cases for empty path segments. We'll get back to this in normalized rendering.</p>\n<h2 id=\"percent-encoding\">Percent encoding</h2>\n<p>I mentioned originally the annoyances of percent encoding character sets. I'm still not going to go deeply into details of it. But we do need to discuss it at a surface level. In the steps above, let's ask two related questions:</p>\n<ul>\n<li>Why did we percent encode <em>before</em> interpolating?</li>\n<li>Do we percent encode forward slashes?</li>\n</ul>\n<p>Let's try percent encoding <em>after</em> interpolating. And let's say we decide not to percent encode forward slashes. Then <code>render([&quot;foo/bar&quot;])</code> would turn into <code>/foo/bar</code>, which is identical to <code>render([&quot;foo&quot;, &quot;bar&quot;])</code>. That's not what we want. And if we decide we're going to percent encode <em>after</em> interpolating and that we <em>will percent encode forward slashes</em>, both inputs result in <code>/foo%2Fbar</code> as output. Neither of those is any good.</p>\n<p>OK, going back to percent encoding before interpolating, let's say that we don't percent encode forward slashes. Then both <code>[&quot;foo/bar&quot;]</code> and <code>[&quot;foo&quot;, &quot;bar&quot;]</code> will turn into <code>/foo/bar</code>, again bad. So by process of elimination, we're left with percent encoding before interpolating, and escaping the forward slashes in segments. With this configuration, we're left with <code>render([&quot;foo/bar&quot;]) == &quot;/foo%2Fbar&quot;</code> and <code>render([&quot;foo&quot;, &quot;bar&quot;]) == &quot;/foo/bar&quot;</code>. 
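That process-of-elimination conclusion is easy to check mechanically. Here is a minimal Rust sketch; `percent_encode` is an illustrative toy that escapes everything outside the unreserved set (stricter than a real path encoder needs to be), not code from any of the libraries discussed:

```rust
// Toy encoder: keep unreserved characters, percent encode everything else,
// including '/'. Encoding happens per segment, *before* interpolation.
fn percent_encode(segment: &str) -> String {
    let mut out = String::new();
    for byte in segment.bytes() {
        match byte {
            b'a'..=b'z' | b'A'..=b'Z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                out.push(byte as char)
            }
            _ => out.push_str(&format!("%{:02X}", byte)),
        }
    }
    out
}

fn render(segments: &[&str]) -> String {
    let encoded: Vec<String> = segments.iter().map(|s| percent_encode(s)).collect();
    format!("/{}", encoded.join("/"))
}

fn main() {
    // A slash inside a segment is escaped, so the two inputs stay distinct:
    assert_eq!(render(&["foo/bar"]), "/foo%2Fbar");
    assert_eq!(render(&["foo", "bar"]), "/foo/bar");
    // The empty-segment collision from earlier is still present, though:
    assert_eq!(render(&[]), "/");
    assert_eq!(render(&[""]), "/");
    println!("ok");
}
```

Note that the last two assertions show this basic rendering still conflates `[]` and `[""]`; that is exactly the gap the normalization discussion below has to close.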
Not only is this unique output (our goal here), but it also intuitively feels right, at least to me.</p>\n<h2 id=\"unicode-codepoint-handling\">Unicode codepoint handling</h2>\n<p>One detail we've glossed over here is Unicode, and the difference between codepoints and octets. It's time to rectify that. Percent encoding is a process that works on <em>bytes</em>, not characters. I can percent encode <code>/</code> into <code>%2F</code>, but only because I'm assuming an ASCII representation of that character. By contrast, let's go back to my favorite non-Latin alphabet example, Hebrew. How do you represent the Hebrew letter Alef <code>א</code> with percent encoding? The answer is that you can't, at least not directly. Instead, we need to represent that <a href=\"https://unicode-table.com/en/05D0/\">Unicode codepoint</a> (U+05D0) as bytes. And the most universally accepted way to do that is to use UTF-8. So our process is something like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let segment: &amp;str = &quot;א&quot;;\nlet segment_bytes: &amp;[u8] = encode_utf8(segment); &#x2F;&#x2F; b&quot;\\xD7\\x90&quot;\nlet encoded: &amp;[u8] = percent_encode(segment_bytes); &#x2F;&#x2F; b&quot;%D7%90&quot;\n</code></pre>\n<p>OK, awesome, we now have a way to take a sequence of non-empty Unicode strings and generate a unique path representation of that. What's next?</p>\n<h2 id=\"basic-parsing\">Basic parsing</h2>\n<p>How do we go <em>backwards</em>? Easy: we reverse each of the steps above. Let's see the render steps again:</p>\n<ul>\n<li>Percent encode each segment, consisting of:\n<ul>\n<li>UTF-8 encode the codepoints into bytes</li>\n<li>Percent encode all relevant octets, including the forward slash</li>\n</ul>\n</li>\n<li>Interpolate all of the segments together, separated by a forward slash\n<ul>\n<li>Technically, the &quot;forward slash&quot; here is the forward slash <em>octet</em> <code>\\x2F</code>. 
But because everyone basically assumes ASCII/UTF-8 encoding, we can typically be a little loose in our terminology.</li>\n</ul>\n</li>\n<li>Prepend a forward slash (octet).</li>\n</ul>\n<p>Basic parsing is exactly the same steps in reverse:</p>\n<ul>\n<li>Strip off the forward slash.\n<ul>\n<li>Arguably, if a forward slash is missing, you could consider this a parse error. Most parsers simply ignore it instead.</li>\n</ul>\n</li>\n<li>Split the raw path on each occurrence of a forward slash. We'll discuss some subtleties about this next.</li>\n<li>Percent decode each segment, consisting of:\n<ul>\n<li>Look for any <code>%</code> signs, and grab the next two hexadecimal digits. In theory, you could treat an incorrect or missing digit as a parse error. In practice, many people end up using some kind of fallback.</li>\n<li>Take the percent decoded octets and UTF-8 decode them. Again, in theory, you could treat invalid UTF-8 data as a parse error, but many people simply use the <a href=\"https://en.wikipedia.org/wiki/Replacement_character\">Unicode replacement character</a>.</li>\n</ul>\n</li>\n</ul>\n<p>If implemented correctly, this should result in the goal we mentioned above: encoding and decoding a specific input will always give back the original value (ignoring the empty segment case, which we still haven't addressed). The one really tricky thing is making sure that our <em>split</em> and <em>interpolate</em> operations mirror each other correctly. There are actually <a href=\"https://www.stackage.org/package/split\">many different ways of splitting lists and strings</a>. Fortunately for my Rust interpolation, the <a href=\"https://doc.rust-lang.org/stable/std/primitive.str.html#method.split\">standard <code>split</code> method on <code>str</code></a> happens to implement exactly the behavior we want. You can check out the method's documentation for details (helpful even for non-Rustaceans!). 
Pay particular attention to the comments about contiguous separators, and think about how <code>[&quot;foo&quot;, &quot;&quot;, &quot;&quot;, &quot;bar&quot;]</code> would end up being interpolated and then parsed.</p>\n<p>OK, we're all done, right? Wrong!</p>\n<h2 id=\"normalization\">Normalization</h2>\n<p>I bet you thought I forgot about the empty segments. (Actually, given how many times I called them out, I bet you <em>didn't</em> think that.) Before, we saw exactly one problem with empty segments: the weird case of <code>[&quot;&quot;]</code>. I want to first establish that empty segments are a much bigger problem than that.</p>\n<p>I gave a link above to a GitHub repository: <code>https://github.com/snoyberg/routetype-rs</code>. Let's change that URL ever so slightly, and add an extra forward slash in between <code>snoyberg</code> and <code>routetype-rs</code>: <code>https://github.com/snoyberg//routetype-rs</code>. Amazingly, you get the same page for both URLs. Isn't that weird?</p>\n<p>No, not really. Extra forward slashes are oftentimes ignored by web servers. &quot;I know what you meant, and you didn't mean an empty path segment.&quot; This isn't just a &quot;feature&quot; of web servers. The same concept applies on my Linux command line:</p>\n<pre><code>$ cat &#x2F;etc&#x2F;debian_version\nbullseye&#x2F;sid\n$ cat &#x2F;etc&#x2F;&#x2F;&#x2F;debian_version\nbullseye&#x2F;sid\n</code></pre>\n<p>I've got two problems with the behavior GitHub is demonstrating above:</p>\n<ul>\n<li>What if I'm writing some web application and I really, truly want to be able to embed a <em>meaningful</em> empty segment in the path?</li>\n<li>Doesn't it feel wrong, and maybe even hurt SEO, to have two different URLs that resolve to the same content?</li>\n</ul>\n<p>In Yesod, we addressed the second issue with a class method called <code>cleanPath</code>, that analyzes the segments of an incoming path and sees if there's a more canonical representation of them. 
For the case above, <code>https://github.com/snoyberg//routetype-rs</code> would produce the segments <code>[&quot;snoyberg&quot;, &quot;&quot;, &quot;routetype-rs&quot;]</code>, and <code>cleanPath</code> would decide that a more canonical representation would be <code>[&quot;snoyberg&quot;, &quot;routetype-rs&quot;]</code>. Then, Yesod would take the canonical representation and generate a redirect. In other words, if GitHub was written in Yesod, my request to <code>https://github.com/snoyberg//routetype-rs</code> would result in a redirect to <code>https://github.com/snoyberg/routetype-rs</code>.</p>\n<p><a href=\"https://github.com/yesodweb/yesod/issues/421\">Way back in 2012</a>, this led to a problem, however. Someone actually had empty path segments, and Yesod was automatically redirecting away from the generated URLs. We came up with a solution back then that I'm still very fond of: dash prefixing. See the linked issue for the details, but the way it works is:</p>\n<ul>\n<li>When encoding, if a segment consists entirely of dashes, add one more dash to it.\n<ul>\n<li>By our definition of &quot;consists entirely of dashes,&quot; the empty string counts too. So <code>dashPrefix &quot;&quot; == &quot;-&quot;</code>, and <code>dashPrefix &quot;---&quot; == &quot;----&quot;</code>.</li>\n</ul>\n</li>\n<li>When decoding:\n<ul>\n<li>Perform the split operation above.</li>\n<li>Next, perform the clean path check, and generate a redirect if there are any empty path segments.</li>\n<li>Once we know that there are no empty path segments, <em>then</em> undo dash prefixing. If a segment consists of only dashes, remove one of the dashes.</li>\n</ul>\n</li>\n</ul>\n<p>If you work this through enough, you can see that with this addition, every possible sequence of segments—even empty segments—results in a unique raw path after rendering. And every incoming raw path can either be parsed to a necessary redirect (if there are empty segments) or to a sequence of segments. 
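Those encoding and decoding rules are small enough to sketch directly. Here is an illustrative Rust version of dash prefixing (not Yesod's actual implementation), with the roundtrip property checked for a few segments:

```rust
// Dash prefixing, as described above: when encoding, a segment made up
// entirely of dashes (which includes the empty string) gets one more dash;
// when decoding, after empty segments have been redirected away, a segment
// made up only of dashes loses one dash.
fn dash_prefix(segment: &str) -> String {
    if segment.chars().all(|c| c == '-') {
        format!("-{}", segment)
    } else {
        segment.to_string()
    }
}

fn undo_dash_prefix(segment: &str) -> String {
    if !segment.is_empty() && segment.chars().all(|c| c == '-') {
        segment[1..].to_string()
    } else {
        segment.to_string()
    }
}

fn main() {
    assert_eq!(dash_prefix(""), "-");
    assert_eq!(dash_prefix("---"), "----");
    assert_eq!(dash_prefix("foo"), "foo");
    // Roundtrip: every segment survives encode-then-decode, even "":
    for seg in ["", "-", "---", "foo", "foo-bar"] {
        assert_eq!(undo_dash_prefix(&dash_prefix(seg)), seg);
    }
    println!("ok");
}
```

With this in place, an encoded path never contains an empty segment of its own, so any empty segment seen while decoding must have come from doubled slashes and can safely trigger the redirect.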
And finally, each sequence of segments will successfully roundtrip back to the original sequence when parsing and rendering.</p>\n<p>I call this <em>normalized</em> parsing and rendering, since it is normalizing each incoming path to a single, canonical representation, at least as far as empty path segments are concerned. I suppose if someone wanted to be truly pedantic, they could also try to address variations in percent encoding behavior or invalid UTF-8 sequences. But I'd consider the former a non-semantic difference, and the latter garbage-in-garbage-out.</p>\n<h2 id=\"trailing-slashes\">Trailing slashes</h2>\n<p>There's one final point to bring up. What exactly causes an empty path segment to occur when parsing? One example is contiguous slashes, like our <code>snoyberg//routetype-rs</code> example above. But there's a far more interesting and prevalent case: the trailing slash. Many web servers use trailing slashes, likely originating from the common pattern of having <code>index.html</code> files and accessing a page based on the containing directory name. In fact, this blog post is hosted on a statically generated site that uses that technique, which is why the URL has a trailing slash. And if you perform basic parsing on our path here, you'd get:</p>\n<pre><code>basic_parse(&quot;&#x2F;blog&#x2F;pains-path-parsing&#x2F;&quot;) == [&quot;blog&quot;, &quot;pains-path-parsing&quot;, &quot;&quot;]\n</code></pre>\n<p>Whether to include trailing slashes in URLs is an old argument on the internet. Personally, because I consider the parsing-into-segments concept to be central to path parsing, I prefer excluding the trailing slash. And in fact, Yesod's default (and, at least for now, <code>routetype-rs</code>'s default) is to treat such a URL as non-canonical and redirect away from it. 
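That basic parse, along with the contiguous-separator behavior mentioned earlier, can be confirmed with a tiny sketch. Here `basic_parse` is a hypothetical name for the strip-then-split step, and percent decoding is omitted since these inputs contain nothing encoded:

```rust
// Hypothetical basic_parse: strip one leading slash, then split on '/'.
// Percent decoding is skipped because these inputs have nothing encoded.
fn basic_parse(path: &str) -> Vec<&str> {
    path.strip_prefix('/').unwrap_or(path).split('/').collect()
}

fn main() {
    // A trailing slash produces a trailing empty segment:
    assert_eq!(
        basic_parse("/blog/pains-path-parsing/"),
        ["blog", "pains-path-parsing", ""]
    );
    // Contiguous separators produce empty segments in the middle:
    assert_eq!(basic_parse("/foo///bar"), ["foo", "", "", "bar"]);
    // And the root path parses to a single empty segment, not to []:
    assert_eq!(basic_parse("/"), [""]);
    println!("ok");
}
```

In other words, trailing slashes are just a special case of the empty-segment problem, which is why the same normalization machinery handles both.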
I felt even more strongly about that when I realized lots of frameworks have special handling for &quot;final segments with filename extensions.&quot; For example, <code>/blog/bananas/</code> is good with a trailing slash, but <code>/images/bananas.png</code> should <em>not</em> have a trailing slash.</p>\n<p>However, since so many people like having trailing slashes, Yesod is configurable on this point, which is why <code>cleanPath</code> is a typeclass method that can be overridden. To each their own I suppose.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hope this blog post gave a little more insight into the wild world of the web and how something as seemingly innocuous as paths actually hides some depth. If you're interested in learning more about the <code>routetype-rs</code> project, please let me know, and I'll try to prioritize some follow ups on it.</p>\n<p>You may be interested in more <a href=\"https://tech.fpcomplete.com/rust/\">Rust</a> or <a href=\"https://tech.fpcomplete.com/haskell/\">Haskell</a> from FP Complete. Also, check out <a href=\"https://tech.fpcomplete.com/blog/\">our blog</a> for a wide range of technical content.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/",
        "slug": "pains-path-parsing",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The Pains of Path Parsing",
        "description": "A semi-deep dive into the finer points of path parsing and rendering in web applications",
        "updated": null,
        "date": "2021-04-26",
        "year": 2021,
        "month": 4,
        "day": 26,
        "taxonomies": {
          "tags": [
            "haskell",
            "rust",
            "web",
            "devops"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "blogimage": "/images/blog-listing/functional.png",
          "image": "images/blog/pains-path-parsing.png"
        },
        "path": "/blog/pains-path-parsing/",
        "components": [
          "blog",
          "pains-path-parsing"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "what-s-a-path",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#what-s-a-path",
            "title": "What's a path?",
            "children": []
          },
          {
            "level": 2,
            "id": "what-do-paths-represent",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#what-do-paths-represent",
            "title": "What do paths represent?",
            "children": []
          },
          {
            "level": 2,
            "id": "basic-rendering",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#basic-rendering",
            "title": "Basic rendering",
            "children": []
          },
          {
            "level": 2,
            "id": "percent-encoding",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#percent-encoding",
            "title": "Percent encoding",
            "children": []
          },
          {
            "level": 2,
            "id": "unicode-codepoint-handling",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#unicode-codepoint-handling",
            "title": "Unicode codepoint handling",
            "children": []
          },
          {
            "level": 2,
            "id": "basic-parsing",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#basic-parsing",
            "title": "Basic parsing",
            "children": []
          },
          {
            "level": 2,
            "id": "normalization",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#normalization",
            "title": "Normalization",
            "children": []
          },
          {
            "level": 2,
            "id": "trailing-slashes",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#trailing-slashes",
            "title": "Trailing slashes",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2670,
        "reading_time": 14,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/captures-closures-async.md",
        "colocated_path": null,
        "content": "<p>This blog post is the second in the <a href=\"/tags/rust-quickies/\">Rust quickies</a> series. In my <a href=\"https://tech.fpcomplete.com/training/\">training sessions</a>, we often come up with quick examples to demonstrate some point. Instead of forgetting about them, I want to put short blog posts together focusing on these examples. Hopefully these will be helpful, enjoy!</p>\n<div class=\"alert alert-secondary text-center\">FP Complete is looking for Rust and DevOps engineers. Interested in working with us? <a href=\"/jobs/\">Check out our jobs page</a>.</div>\n<h2 id=\"hello-hyper\">Hello Hyper!</h2>\n<p>For those not familiar, <a href=\"https://hyper.rs/\">Hyper</a> is an HTTP implementation for Rust, built on top of Tokio. It's a low level library powering frameworks like <a href=\"https://crates.io/crates/warp\">Warp</a> and <a href=\"https://rocket.rs/\">Rocket</a>, as well as the <a href=\"https://lib.rs/crates/reqwest\">reqwest</a> client library. For most people, most of the time, using a higher level wrapper like these is the right thing to do.</p>\n<p>But sometimes we like to get our hands dirty, and sometimes working directly with Hyper is the right choice. And definitely from a learning perspective, it's worth doing so at least once. And what could be easier than following the example from Hyper's homepage? 
To do so, <code>cargo new</code> a new project, add the following dependencies:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">hyper = { version = &quot;0.14&quot;, features = [&quot;full&quot;] }\ntokio = { version = &quot;1&quot;, features = [&quot;full&quot;] }\n</code></pre>\n<p>And add the following to <code>main.rs</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::convert::Infallible;\nuse std::net::SocketAddr;\nuse hyper::{Body, Request, Response, Server};\nuse hyper::service::{make_service_fn, service_fn};\n\nasync fn hello_world(_req: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, Infallible&gt; {\n    Ok(Response::new(&quot;Hello, World&quot;.into()))\n}\n\n#[tokio::main]\nasync fn main() {\n    &#x2F;&#x2F; We&#x27;ll bind to 127.0.0.1:3000\n    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));\n\n    &#x2F;&#x2F; A `Service` is needed for every connection, so this\n    &#x2F;&#x2F; creates one from our `hello_world` function.\n    let make_svc = make_service_fn(|_conn| async {\n        &#x2F;&#x2F; service_fn converts our function into a `Service`\n        Ok::&lt;_, Infallible&gt;(service_fn(hello_world))\n    });\n\n    let server = Server::bind(&amp;addr).serve(make_svc);\n\n    &#x2F;&#x2F; Run this server for... forever!\n    if let Err(e) = server.await {\n        eprintln!(&quot;server error: {}&quot;, e);\n    }\n}\n</code></pre>\n<p>If you're interested, there's a <a href=\"https://hyper.rs/guides/server/hello-world/\">quick explanation</a> of this code available on Hyper's website. But our focus will be on making an ever-so-minor modification to this code. Let's go!</p>\n<h2 id=\"counter\">Counter</h2>\n<p>Remember the good old days of Geocities websites, where every page had to have a visitor counter? I want that. 
Let's modify our <code>hello_world</code> function to do just that:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::sync::{Arc, Mutex};\n\ntype Counter = Arc&lt;Mutex&lt;usize&gt;&gt;; &#x2F;&#x2F; Bonus points: use an AtomicUsize instead\n\nasync fn hello_world(counter: Counter, _req: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, Infallible&gt; {\n    let mut guard = counter.lock().unwrap(); &#x2F;&#x2F; unwrap poisoned Mutexes\n    *guard += 1;\n    let message = format!(&quot;You are visitor number {}&quot;, guard);\n    Ok(Response::new(message.into()))\n}\n</code></pre>\n<p>That's easy enough, and now we're done with <code>hello_world</code>. The only problem is rewriting <code>main</code> to pass in a <code>Counter</code> value to it. Let's take a first, naive stab at the problem:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let addr = SocketAddr::from(([127, 0, 0, 1], 3000));\nlet counter: Counter = Arc::new(Mutex::new(0));\n\nlet make_svc = make_service_fn(|_conn| async {\n    Ok::&lt;_, Infallible&gt;(service_fn(|req| hello_world(counter, req)))\n});\n\nlet server = Server::bind(&amp;addr).serve(make_svc);\n\nif let Err(e) = server.await {\n    eprintln!(&quot;server error: {}&quot;, e);\n}\n</code></pre>\n<p>Unfortunately, this fails due to moving out of captured variables. 
(That's a topic we cover in detail in our closure training module.)</p>\n<pre><code>error[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n  --&gt; src\\main.rs:21:58\n   |\n18 |     let counter: Counter = Arc::new(Mutex::new(0));\n   |         ------- captured outer variable\n...\n21 |         Ok::&lt;_, Infallible&gt;(service_fn(|req| hello_world(counter, req)))\n   |                                                          ^^^^^^^ move occurs because `counter` has type `Arc&lt;std::sync::Mutex&lt;usize&gt;&gt;`, which does not implement the `Copy` trait\n\nerror[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n  --&gt; src\\main.rs:20:50\n   |\n18 |       let counter: Counter = Arc::new(Mutex::new(0));\n   |           ------- captured outer variable\n19 |\n20 |       let make_svc = make_service_fn(|_conn| async {\n   |  __________________________________________________^\n21 | |         Ok::&lt;_, Infallible&gt;(service_fn(|req| hello_world(counter, req)))\n   | |                                        -------------------------------\n   | |                                        |\n   | |                                        move occurs because `counter` has type `Arc&lt;std::sync::Mutex&lt;usize&gt;&gt;`, which does not implement the `Copy` trait\n   | |                                        move occurs due to use in generator\n22 | |     });\n   | |_____^ move out of `counter` occurs here\n</code></pre>\n<h2 id=\"clone\">Clone</h2>\n<p>That error isn't terribly surprising. We put our <code>Mutex</code> inside an <code>Arc</code> for a reason: we'll need to make multiple clones of it and pass those around to each new request handler. But we haven't called <code>clone</code> once yet! 
Again, let's do the most naive thing possible, and change:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">Ok::&lt;_, Infallible&gt;(service_fn(|req| hello_world(counter, req)))\n</code></pre>\n<p>into</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">Ok::&lt;_, Infallible&gt;(service_fn(|req| hello_world(counter.clone(), req)))\n</code></pre>\n<p>This is where the error messages begin to get more interesting:</p>\n<pre><code>error[E0597]: `counter` does not live long enough\n  --&gt; src\\main.rs:21:58\n   |\n20 |       let make_svc = make_service_fn(|_conn| async {\n   |  ____________________________________-------_-\n   | |                                    |\n   | |                                    value captured here\n21 | |         Ok::&lt;_, Infallible&gt;(service_fn(|req| hello_world(counter.clone(), req)))\n   | |                                                          ^^^^^^^ borrowed value does not live long enough\n22 | |     });\n   | |_____- returning this value requires that `counter` is borrowed for `&#x27;static`\n...\n29 |   }\n   |   - `counter` dropped here while still borrowed\n</code></pre>\n<p>Both <code>async</code> blocks and closures will, by default, capture variables from their environment by reference, instead of taking ownership. Our closure needs to have a <code>'static</code> lifetime, and therefore can't hold onto a reference to data in our <code>main</code> function.</p>\n<h2 id=\"move-all-the-things\"><code>move</code> all the things!</h2>\n<p>The standard solution to this is to simply sprinkle <code>move</code>s on each <code>async</code> block and closure. This will force each closure to own the <code>Arc</code> itself, not a reference to it. 
Doing so looks simple:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| async move {\n    Ok::&lt;_, Infallible&gt;(service_fn(move |req| hello_world(counter.clone(), req)))\n});\n</code></pre>\n<p>And this does in fact fix the error above. But it gives us a new error instead:</p>\n<pre><code>error[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n  --&gt; src\\main.rs:20:60\n   |\n18 |       let counter: Counter = Arc::new(Mutex::new(0));\n   |           ------- captured outer variable\n19 |\n20 |       let make_svc = make_service_fn(move |_conn| async move {\n   |  ____________________________________________________________^\n21 | |         Ok::&lt;_, Infallible&gt;(service_fn(move |req| hello_world(counter.clone(), req)))\n   | |                                        --------------------------------------------\n   | |                                        |\n   | |                                        move occurs because `counter` has type `Arc&lt;std::sync::Mutex&lt;usize&gt;&gt;`, which does not implement the `Copy` trait\n   | |                                        move occurs due to use in generator\n22 | |     });\n   | |_____^ move out of `counter` occurs here\n</code></pre>\n<h2 id=\"double-the-closure-double-the-clone\">Double the closure, double the clone!</h2>\n<p>Well, even <em>this</em> error makes a lot of sense. Let's understand better what our code is doing:</p>\n<ul>\n<li>Creates a closure to pass to <code>make_service_fn</code>, which will be called for each new incoming connection</li>\n<li>Within <em>that</em> closure, creates a new closure to pass to <code>service_fn</code>, which will be called for each new incoming request on an existing connection</li>\n</ul>\n<p>This is where the trickiness of working directly with Hyper comes into play. 
Each of those layers of closure needs to own its own clone of the <code>Arc</code>. And in our code above, we're trying to move the <code>Arc</code> from the outer closure's captured variable into the inner closure's captured variable. If you squint hard enough, that's what the error message above is saying. Our outer closure is an <code>FnMut</code>, which must be callable multiple times. Therefore, we cannot move out of its captured variable.</p>\n<p>It seems like this should be an easy fix: just <code>clone</code> again!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| async move {\n    let counter_clone = counter.clone();\n    Ok::&lt;_, Infallible&gt;(service_fn(move |req| hello_world(counter_clone.clone(), req)))\n});\n</code></pre>\n<p>And this is the point at which we hit a real head scratcher: we get almost exactly the same error message:</p>\n<pre><code>error[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n  --&gt; src\\main.rs:20:60\n   |\n18 |       let counter: Counter = Arc::new(Mutex::new(0));\n   |           ------- captured outer variable\n19 |\n20 |       let make_svc = make_service_fn(move |_conn| async move {\n   |  ____________________________________________________________^\n21 | |         let counter_clone = counter.clone();\n   | |                             -------\n   | |                             |\n   | |                             move occurs because `counter` has type `Arc&lt;std::sync::Mutex&lt;usize&gt;&gt;`, which does not implement the `Copy` trait\n   | |                             move occurs due to use in generator\n22 | |         Ok::&lt;_, Infallible&gt;(service_fn(move |req| hello_world(counter_clone.clone(), req)))\n23 | |     });\n   | |_____^ move out of `counter` occurs here\n</code></pre>\n<h2 id=\"the-paradigm-shift\">The paradigm shift</h2>\n<p>What we need to do is to rewrite our 
code ever so slightly to reveal what the problem is. Let's add a bunch of unnecessary braces. We'll convert the code above:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| async move {\n    let counter_clone = counter.clone();\n    Ok::&lt;_, Infallible&gt;(service_fn(move |req| hello_world(counter_clone.clone(), req)))\n});\n</code></pre>\n<p>into this semantically identical code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| { &#x2F;&#x2F; outer closure\n    async move { &#x2F;&#x2F; async block\n        let counter_clone = counter.clone();\n        Ok::&lt;_, Infallible&gt;(service_fn(move |req| { &#x2F;&#x2F; inner closure\n            hello_world(counter_clone.clone(), req)\n        }))\n    }\n});\n</code></pre>\n<p>The error message is basically identical, just slightly different source locations. But now I can walk through the ownership of <code>counter</code> more accurately. I've added comments to highlight three different entities in the code above that can take ownership of values via some kind of environment:</p>\n<ul>\n<li>The outer closure, which handles each connection</li>\n<li>An <code>async</code> block, which forms the body of the outer closure</li>\n<li>The inner closure, which handles each request</li>\n</ul>\n<p>In the original structuring of the code, we put <code>move |_conn| async move</code> next to each other on one line, which—at least for me—obfuscated the fact that the closure and <code>async</code> block were two completely separate entities. 
With that change in place, let's track the ownership of <code>counter</code>:</p>\n<ol>\n<li>We create the <code>Arc</code> in the <code>main</code> function; it's owned by the <code>counter</code> variable.</li>\n<li>We move the <code>Arc</code> from the <code>main</code> function's <code>counter</code> variable into the outer closure's captured variables.</li>\n<li>We move the <code>counter</code> variable out of the outer closure and into the <code>async</code> block's captured variables.</li>\n<li>Within the body of the <code>async</code> block, we create a clone of <code>counter</code>, called <code>counter_clone</code>. This does not move out of the <code>async</code> block, since the <code>clone</code> method only requires a reference to the <code>Arc</code>.</li>\n<li>We move the <code>Arc</code> out of the <code>counter_clone</code> variable and into the inner closure.</li>\n<li>Within the body of the inner closure, we clone the <code>Arc</code> (which, as explained in (4), doesn't move) and pass it into the <code>hello_world</code> function.</li>\n</ol>\n<p>Based on this breakdown, can you see where the problem is? It's at step (3). We don't want to move out of the outer closure's captured variables. We try to avoid that move by cloning <code>counter</code>. But we clone too late! By using <code>counter</code> from inside an <code>async move</code> block, we're forcing the compiler to move. Hurray, we've identified the problem!</p>\n<h2 id=\"non-solution-non-move-async\">Non-solution: non-move <code>async</code></h2>\n<p>It seems like we were simply over-ambitious with our &quot;sprinkling <code>move</code>&quot; attempt above. The problem is that the <code>async</code> block is taking ownership of <code>counter</code>. 
Let's try simply removing the <code>move</code> keyword there:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n    async {\n        let counter_clone = counter.clone();\n        Ok::&lt;_, Infallible&gt;(service_fn(move |req| {\n            hello_world(counter_clone.clone(), req)\n        }))\n    }\n});\n</code></pre>\n<p>Unfortunately, this isn't a solution:</p>\n<pre><code>error: captured variable cannot escape `FnMut` closure body\n  --&gt; src\\main.rs:21:9\n   |\n18 |       let counter: Counter = Arc::new(Mutex::new(0));\n   |           ------- variable defined here\n19 |\n20 |       let make_svc = make_service_fn(move |_conn| {\n   |                                                 - inferred to be a `FnMut` closure\n21 | &#x2F;         async {\n22 | |             let counter_clone = counter.clone();\n   | |                                 ------- variable captured here\n23 | |             Ok::&lt;_, Infallible&gt;(service_fn(move |req| {\n24 | |                 hello_world(counter_clone.clone(), req)\n25 | |             }))\n26 | |         }\n   | |_________^ returns an `async` block that contains a reference to a captured variable, which then escapes the closure body\n   |\n   = note: `FnMut` closures only have access to their captured variables while they are executing...\n   = note: ...therefore, they cannot allow references to captured variables to escape\n</code></pre>\n<p>The problem here is that the outer closure will return the <code>Future</code> generated by the <code>async</code> block. And if the <code>async</code> block doesn't <code>move</code> the <code>counter</code>, it will be holding a reference to the outer closure's captured variables. 
And that's not allowed.</p>\n<h2 id=\"real-solution-clone-early-clone-often\">Real solution: clone early, clone often</h2>\n<p>OK, undo the <code>async move</code> to <code>async</code> transformation; it's a dead end. It turns out that all we've got to do is clone the <code>counter</code> before we start the <code>async move</code> block, like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n    let counter_clone = counter.clone(); &#x2F;&#x2F; this moved one line earlier\n    async move {\n        Ok::&lt;_, Infallible&gt;(service_fn(move |req| {\n            hello_world(counter_clone.clone(), req)\n        }))\n    }\n});\n</code></pre>\n<p>Now, we create a temporary <code>counter_clone</code> within the outer closure. The <code>clone</code> call works by reference, and therefore doesn't move anything. We then move the new, temporary <code>counter_clone</code> into the <code>async move</code> block via a capture, and from there move it into the inner closure. With this, all of our closures' captured variables remain unmoved, and therefore the requirements of <code>FnMut</code> are satisfied.</p>\n<p>And with that, we can finally enjoy the glory days of Geocities visitor counters!</p>\n<h2 id=\"async-closures\">Async closures</h2>\n<p>The formatting recommended by <code>rustfmt</code> hides away the fact that there are two different environments at play between the outer closure and the <code>async</code> block, by moving the two onto a single line with <code>move |_conn| async move</code>. That makes it feel like the two entities are somehow one and the same. But as we've demonstrated, they aren't.</p>\n<p>Theoretically this could be solved by having an async closure. I tested with <code>#![feature(async_closure)]</code> on <code>nightly-2021-03-02</code>, but couldn't figure out a way to use an async closure to solve this problem differently than I solved it above. 
But that may be my own lack of familiarity with <code>async_closure</code>.</p>\n<p>For now, the main takeaway is that closures and <code>async</code> blocks are two different entities, each with their own environment.</p>\n<p>If you liked this post you may also be interested in:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust home page</a></li>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/jobs/\">Jobs at FP Complete</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/",
        "slug": "captures-closures-async",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Captures in closures and async blocks",
        "description": "In this Rust Quickie, we'll cover a common mistake when writing async/await code, and how to more easily spot and fix it.",
        "updated": null,
        "date": "2021-03-03",
        "year": 2021,
        "month": 3,
        "day": 3,
        "taxonomies": {
          "tags": [
            "rust",
            "rust-quickies"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "blogimage": "/images/blog-listing/rust.png",
          "image": "images/blog/rust-quickies/captures-closures-async.png"
        },
        "path": "/blog/captures-closures-async/",
        "components": [
          "blog",
          "captures-closures-async"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "hello-hyper",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#hello-hyper",
            "title": "Hello Hyper!",
            "children": []
          },
          {
            "level": 2,
            "id": "counter",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#counter",
            "title": "Counter",
            "children": []
          },
          {
            "level": 2,
            "id": "clone",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#clone",
            "title": "Clone",
            "children": []
          },
          {
            "level": 2,
            "id": "move-all-the-things",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#move-all-the-things",
            "title": "move all the things!",
            "children": []
          },
          {
            "level": 2,
            "id": "double-the-closure-double-the-clone",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#double-the-closure-double-the-clone",
            "title": "Double the closure, double the clone!",
            "children": []
          },
          {
            "level": 2,
            "id": "the-paradigm-shift",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#the-paradigm-shift",
            "title": "The paradigm shift",
            "children": []
          },
          {
            "level": 2,
            "id": "non-solution-non-move-async",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#non-solution-non-move-async",
            "title": "Non-solution: non-move async",
            "children": []
          },
          {
            "level": 2,
            "id": "real-solution-clone-early-clone-often",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#real-solution-clone-early-clone-often",
            "title": "Real solution: clone early, clone often",
            "children": []
          },
          {
            "level": 2,
            "id": "async-closures",
            "permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#async-closures",
            "title": "Async closures",
            "children": []
          }
        ],
        "word_count": 2199,
        "reading_time": 11,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/short-circuit-sum-rust.md",
        "colocated_path": null,
        "content": "<p>This blog post is the first in a planned series I'm calling &quot;Rust quickies.&quot; In my <a href=\"https://tech.fpcomplete.com/training/\">training sessions</a>, we often come up with quick examples to demonstrate some point. Instead of forgetting about them, I want to put short blog posts together focusing on these examples. Hopefully these will be helpful, enjoy!</p>\n<div class=\"alert alert-secondary text-center\">FP Complete is looking for Rust and DevOps engineers. Interested in working with us? <a href=\"/jobs/\">Check out our jobs page</a>.</div>\n<h2 id=\"short-circuiting-a-for-loop\">Short circuiting a <code>for</code> loop</h2>\n<p>Let's say I've got an <code>Iterator</code> of <code>u32</code>s. I want to double each value and print it. Easy enough:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) {\n    for x in iter.into_iter().map(|x| x * 2) {\n        println!(&quot;{}&quot;, x);\n    }\n}\n\nfn main() {\n    weird_function(1..10);\n}\n</code></pre>\n<p>And now let's say we hate the number 8, and want to stop when we hit it. That's a simple one-line change:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) {\n    for x in iter.into_iter().map(|x| x * 2) {\n        if x == 8 { return } &#x2F;&#x2F; added this line\n        println!(&quot;{}&quot;, x);\n    }\n}\n</code></pre>\n<p>Easy, done, end of story. And for this reason, I <em>recommend</em> using <code>for</code> loops when possible. Even though, from a functional programming background, it feels overly imperative. However, some people out there want to be more functional, so let's explore that.</p>\n<h2 id=\"for-each-vs-map\">for_each vs map</h2>\n<p>Let's forget about the short-circuiting for a moment. 
And now we want to go back to the original version of the program, but <em>without</em> using a <code>for</code> loop. Easy enough with the method <code>for_each</code>. It takes a closure, which it runs for each value in the <code>Iterator</code>. Let's check it out:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) {\n    iter.into_iter().map(|x| x * 2).for_each(|x| {\n        println!(&quot;{}&quot;, x);\n    })\n}\n</code></pre>\n<p>But why, exactly, do we need <code>for_each</code>? That seems awfully similar to <code>map</code>, which <em>also</em> applies a function over every value in an <code>Iterator</code>. Trying to make that change, however, demonstrates the problem. With this code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) {\n    iter.into_iter().map(|x| x * 2).map(|x| {\n        println!(&quot;{}&quot;, x);\n    })\n}\n</code></pre>\n<p>we get an error message:</p>\n<pre><code>error[E0308]: mismatched types\n --&gt; src\\main.rs:2:5\n  |\n2 | &#x2F;     iter.into_iter().map(|x| x * 2).map(|x| {\n3 | |         println!(&quot;{}&quot;, x);\n4 | |     })\n  | |______^ expected `()`, found struct `Map`\n</code></pre>\n<p>Undaunted, I fix this error by sticking a semicolon at the end of that expression. That generates a warning of <code>unused `Map` that must be used</code>. And sure enough, running this program produces no output.</p>\n<p>The problem is that <code>map</code> doesn't drain the <code>Iterator</code>. Said another way, <code>map</code> is <em>lazy</em>. It adapts one <code>Iterator</code> into a new <code>Iterator</code>. But unless something comes along and <em>drains</em> or <em>forces</em> the <code>Iterator</code>, no actions will occur. 
By contrast, <code>for_each</code> will always drain an <code>Iterator</code>.</p>\n<p>One easy trick to force draining of an <code>Iterator</code> is with the <code>count()</code> method. This will perform some unnecessary work of counting how many values are in the <code>Iterator</code>, but it's not that expensive. Another approach would be to use <code>collect</code>. This one is a little trickier, since <code>collect</code> typically needs some type annotations. But thanks to a fun trick of how <code>FromIterator</code> is implemented for the unit type, we can collect a stream of <code>()</code>s into a single <code>()</code> value. Meaning, this code works:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) {\n    iter.into_iter().map(|x| x * 2).map(|x| {\n        println!(&quot;{}&quot;, x);\n    }).collect()\n}\n</code></pre>\n<p>Note the lack of a semicolon at the end there. What do you think will happen if we add in the semicolon?</p>\n<h2 id=\"short-circuiting\">Short circuiting</h2>\n<p><strong>EDIT</strong> Enough people have asked &quot;why not use <code>take_while</code>?&quot; that I thought I'd address it. Yes, below, <code>take_while</code> will work for &quot;short circuiting.&quot; It's probably even a good idea. But the main goal in this post is to explore some funny implementation approaches, not recommend a best practice. And overall, despite some good arguments for <code>take_while</code> being a good choice here, I still stand by the overall recommendation to prefer <code>for</code> loops for simplicity.</p>\n<p>With the <code>for</code> loop approach, stopping at the first 8 was a trivial, 1 line addition. 
Let's do the same thing here:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) {\n    iter.into_iter().map(|x| x * 2).map(|x| {\n        if x == 8 { return }\n        println!(&quot;{}&quot;, x);\n    }).collect()\n}\n</code></pre>\n<p>Take a guess at what the output will be. Ready? OK, here's the real thing:</p>\n<pre><code>2\n4\n6\n10\n12\n14\n16\n18\n</code></pre>\n<p>We <em>skipped</em> 8, but we didn't stop. It's the difference between a <code>continue</code> and a <code>break</code> inside the <code>for</code> loop. Why did this happen?</p>\n<p>It's important to think about the scope of a <code>return</code>. It will exit the current function. And in this case, the current function isn't <code>weird_function</code>, but the <em>closure inside the <code>map</code> call</em>. This is what makes short-circuiting inside <code>map</code> so difficult.</p>\n<p>The same exact comment will apply to <code>for_each</code>. The only way to stop a <code>for_each</code> from continuing is to panic (or abort the program, if you want to get really aggressive).</p>\n<p>But with <code>map</code>, we have some ingenious ways of working around this and short-circuiting. Let's see it in action.</p>\n<h2 id=\"collect-an-option\">collect an <code>Option</code></h2>\n<p><code>map</code> needs some draining method to drive it. We've been using <code>collect</code>. I've <a href=\"https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/\">previously discussed the intricacies of this method</a>. One cool feature of <code>collect</code> is that, for <code>Option</code> and <code>Result</code>, it provides short-circuit capabilities. 
We can modify our program to take advantage of that:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) -&gt; Option&lt;()&gt; {\n    iter.into_iter().map(|x| x * 2).map(|x| {\n        if x == 8 { return None } &#x2F;&#x2F; short circuit!\n        println!(&quot;{}&quot;, x);\n        Some(()) &#x2F;&#x2F; keep going!\n    }).collect()\n}\n</code></pre>\n<p>I put a return type on <code>weird_function</code>, though we could also use turbofish on <code>collect</code> and throw away the result. We just need some type annotation to say what we're trying to collect. Since collecting the underlying <code>()</code> values doesn't take up extra memory, this is even pretty efficient! The only cost is the extra <code>Option</code>. But that extra <code>Option</code> is (arguably) useful; it lets us know if we short-circuited or not.</p>\n<p>But the story isn't so rosy with other types. Let's say our closure within <code>map</code> returns the <code>x</code> value. In other words, replace the last line with <code>Some(x)</code> instead of <code>Some(())</code>. Now we need to somehow collect up those <code>u32</code>s. Something like this would work:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) -&gt; Option&lt;Vec&lt;u32&gt;&gt; {\n    iter.into_iter().map(|x| x * 2).map(|x| {\n        if x == 8 { return None } &#x2F;&#x2F; short circuit!\n        println!(&quot;{}&quot;, x);\n        Some(x) &#x2F;&#x2F; keep going!\n    }).collect()\n}\n</code></pre>\n<p>But that incurs a heap allocation that we don't want! 
And using <code>count()</code> from before is useless too, since it won't even short circuit.</p>\n<p>But we do have one other trick.</p>\n<h2 id=\"sum\">sum</h2>\n<p>It turns out there's another draining method on <code>Iterator</code> that performs short circuiting: <code>sum</code>. This program works perfectly well:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) -&gt; Option&lt;u32&gt; {\n    iter.into_iter().map(|x| x * 2).map(|x| {\n        if x == 8 { return None } &#x2F;&#x2F; short circuit!\n        println!(&quot;{}&quot;, x);\n        Some(x) &#x2F;&#x2F; keep going!\n    }).sum()\n}\n</code></pre>\n<p>The downside is that it's unnecessarily summing up the values. And maybe that could be a real problem if some kind of overflow occurs. But this mostly works. Is there some way we can stay functional, short circuit, and get no performance overhead? Sure!</p>\n<h2 id=\"short\">Short</h2>\n<p>The final trick here is to create a new helper type for summing up an <code>Iterator</code>. But this thing won't really sum. Instead, it will throw away all of the values, and stop as soon as it sees a <code>None</code>. 
Let's see it in practice:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Debug)]\nenum Short {\n    Stopped,\n    Completed,\n}\n\nimpl&lt;T&gt; std::iter::Sum&lt;Option&lt;T&gt;&gt; for Short {\n    fn sum&lt;I: Iterator&lt;Item = Option&lt;T&gt;&gt;&gt;(iter: I) -&gt; Self {\n        for x in iter {\n            if let None = x { return Short::Stopped }\n        }\n        Short::Completed\n    }\n}\nfn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) -&gt; Short {\n    iter.into_iter().map(|x| x * 2).map(|x| {\n        if x == 8 { return None } &#x2F;&#x2F; short circuit!\n        println!(&quot;{}&quot;, x);\n        Some(x) &#x2F;&#x2F; keep going!\n    }).sum()\n}\n\nfn main() {\n    println!(&quot;{:?}&quot;, weird_function(1..10));\n}\n</code></pre>\n<p>And voila! We're done!</p>\n<p><strong>Exercise</strong> It's pretty cheeky to use <code>sum</code> here. <code>collect</code> makes more sense. Replace <code>sum</code> with <code>collect</code>, and then change the <code>Sum</code> implementation into something else. Solution at the end.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>That's a lot of work to be functional. Rust has a great story around short circuiting. And it's not just with <code>return</code>, <code>break</code>, and <code>continue</code>. It's with the <code>?</code> try operator, which forms the basis of error handling in Rust. There are times when you'll want to use <code>Iterator</code> adapters, async streaming adapters, and functional-style code. But unless you have a pressing need, my recommendation is to stick to <code>for</code> loops.</p>\n<p>If you liked this post, and would like to see more Rust quickies, <a href=\"https://twitter.com/snoyberg\">let me know</a>. 
You may also like these other pages:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust home page</a></li>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/jobs/\">Jobs at FP Complete</a></li>\n</ul>\n<h2 id=\"solution\">Solution</h2>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::iter::FromIterator;\n\n#[derive(Debug)]\nenum Short {\n    Stopped,\n    Completed,\n}\n\nimpl&lt;T&gt; FromIterator&lt;Option&lt;T&gt;&gt; for Short {\n    fn from_iter&lt;I: IntoIterator&lt;Item = Option&lt;T&gt;&gt;&gt;(iter: I) -&gt; Self {\n        for x in iter {\n            if let None = x { return Short::Stopped }\n        }\n        Short::Completed\n    }\n}\nfn weird_function(iter: impl IntoIterator&lt;Item=u32&gt;) -&gt; Short {\n    iter.into_iter().map(|x| x * 2).map(|x| {\n        if x == 8 { return None } &#x2F;&#x2F; short circuit!\n        println!(&quot;{}&quot;, x);\n        Some(x) &#x2F;&#x2F; keep going!\n    }).collect()\n}\n\nfn main() {\n    println!(&quot;{:?}&quot;, weird_function(1..10));\n}\n</code></pre>\n",
        "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/",
        "slug": "short-circuit-sum-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Short Circuit Sum in Rust",
        "description": "Rust iterator adapters like map do not short circuit on their own. Learn how collect, sum, and a small helper type let you write functional-style Rust that still stops early.",
        "updated": null,
        "date": "2021-02-15",
        "year": 2021,
        "month": 2,
        "day": 15,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust",
            "rust-quickies"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "image": "images/blog/rust-quickies/short-circuit-sum.png"
        },
        "path": "/blog/short-circuit-sum-rust/",
        "components": [
          "blog",
          "short-circuit-sum-rust"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "short-circuiting-a-for-loop",
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#short-circuiting-a-for-loop",
            "title": "Short circuiting a for loop",
            "children": []
          },
          {
            "level": 2,
            "id": "for-each-vs-map",
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#for-each-vs-map",
            "title": "for_each vs map",
            "children": []
          },
          {
            "level": 2,
            "id": "short-circuiting",
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#short-circuiting",
            "title": "Short circuiting",
            "children": []
          },
          {
            "level": 2,
            "id": "collect-an-option",
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#collect-an-option",
            "title": "collect an Option",
            "children": []
          },
          {
            "level": 2,
            "id": "sum",
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#sum",
            "title": "sum",
            "children": []
          },
          {
            "level": 2,
            "id": "short",
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#short",
            "title": "Short",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#conclusion",
            "title": "Conclusion",
            "children": []
          },
          {
            "level": 2,
            "id": "solution",
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#solution",
            "title": "Solution",
            "children": []
          }
        ],
        "word_count": 1556,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/philosophies-rust-haskell.md",
        "colocated_path": null,
        "content": "<p>Rust is a systems programming language following fairly standard imperative approaches and a C-style syntax. Haskell is a purely functional programming language, innovating in areas such as type theory and effect management. Viewed that way, these languages are polar opposites.</p>\n<p>And yet, these two languages attract many of the same people, including the engineering team at FP Complete. Putting on a different set of lenses, both languages provide powerful abstractions, enforce different kinds of correctness via static analysis in the compiler, and favor powerful features over quick adoption.</p>\n<p>In this post, I want to look at some of the philosophical underpinnings that explain some of the similarities and differences in the languages. Some of these are inherent. Rust's status as a systems programming language essentially requires some different approaches to Haskell's purely functional nature. But some of these are not. It wasn't strictly necessary for both languages to converge on similar systems for Algebraic Data Types (ADTs) and ad hoc polymorphism (via traits/type classes).</p>\n<p>Keep in mind that in writing this post, I'm viewing it as a <em>consumer</em> of the languages, not a designer. The designers themselves may have different motivations than those I describe. It would certainly be interesting to see if others have different takes on this topic.</p>\n<h2 id=\"rust-ownership\">Rust: ownership</h2>\n<p>This is so obvious that I almost forgot to include it. If there's one thing that defines Rust versus any other language, it's ownership and the borrow checker. This speaks to two core pieces of Rust:</p>\n<ul>\n<li>The goal of serving as a systems programming language, where garbage collection is not an option</li>\n<li>The goal of providing a safe subset of the language, where undefined behavior cannot occur</li>\n</ul>\n<p>The concept of ownership achieves both of these. 
Many additions have been made to the language to make it easier to work with ownership overall. This hints at the concept of ergonomics, which is fundamental to Rust philosophy. But ownership and borrow checking are also known as the harder parts of the language. Putting it together, we see a philosophy of striving to meet our goals safely, while making the usage of the features as easy as possible. However, if there's a conflict between the goals and ease of use, the goals win out.</p>\n<p>All of this stands in stark contrast to Haskell, which is explicitly <em>not</em> a systems language, and does not attempt in any way to address those cases. Instead, it leverages garbage collection quite happily, with the trade-offs between performance and ease-of-use inherent in that choice.</p>\n<h2 id=\"haskell-purely-functional\">Haskell: purely functional</h2>\n<p>The underlying goal of Haskell is ultimately to create a purely functional programming language. Many of the most notable and unusual features of Haskell directly derive from this goal, such as using monads to explicitly track effects.</p>\n<p>Other parts of the language follow from this less directly. For example, Haskell strongly embraces Higher Order Functions, currying, and partial function application. This combination turns many common structures in other languages (like loops) into normal functions. But in order to make this feel natural, Haskell uses slightly odd (compared to other languages) syntax for function application.</p>\n<p>And this gets into a more fundamental piece of philosophy. Haskell is willing to be quite dramatically different from other programming languages in its pursuit of its goals. In my opinion, Rust has been less willing to diverge from mainstream approaches, veering away only out of absolute necessity.</p>\n<p>This results in a world where Haskell feels quite a bit more foreign to others, but has more freedom to innovate. 
Rust, on the other hand, has stuck to existing solutions when possible, such as eschewing monadic futures in favor of <code>async</code>/<code>.await</code> syntax.</p>\n<h2 id=\"expression-oriented\">Expression oriented</h2>\n<p>I undervalued how important this feature was for a while, but recently I've realized that it's one of the most important features in both languages for me.</p>\n<blockquote class=\"twitter-tweet\"><p lang=\"en\" dir=\"ltr\">I used to think that the reason I loved both Haskell and Rust so much was their shared strong typing, ADTs, and pattern matching combination.<br><br>After a recent discussion, I think it may be more about being expression-oriented languages.</p>&mdash; Michael Snoyman (@snoyberg) <a href=\"https://twitter.com/snoyberg/status/1348486654017855489?ref_src=twsrc%5Etfw\">January 11, 2021</a></blockquote> <script async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>\n<p>Instead of relying on declare-then-assign patterns, both languages allow conditionals and other constructs to evaluate to values. This reduces the frequency of seeing mutable assignment and avoids cases of uninitialized variables. By restricting mutable assignment to cases where it's actual mutation, we get to free up a lot of head space to focus on the trickier parts of programming.</p>\n<h2 id=\"type-system\">Type system</h2>\n<p>Rust and Haskell have very similar type systems. Both make it easy to create new types, provide for features like newtypes, provide type aliases, and offer a combination of product (<code>struct</code>) and sum (<code>enum</code>) types. Both allow labeling fields or accessing values positionally. Both offer <a href=\"https://tech.fpcomplete.com/blog/pattern-matching/\">pattern matching</a> constructs. Overall, the similarities between the two languages far outweigh the differences.</p>\n<p>I place a large part of the shared interest between these languages at the feet of the type system. 
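</p>\n<p>As a small illustrative sketch (my own, not from the original post), that shared flavor in Rust is a sum type with labeled fields plus an exhaustive pattern match:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F; A sum (enum) type where each variant carries labeled fields.\nenum Shape {\n    Circle { radius: f64 },\n    Rect { width: f64, height: f64 },\n}\n\n&#x2F;&#x2F; Pattern matching must cover every variant, so adding a new\n&#x2F;&#x2F; variant later becomes a compile time error here.\nfn area(shape: &amp;Shape) -&gt; f64 {\n    match shape {\n        Shape::Circle { radius } =&gt; std::f64::consts::PI * radius * radius,\n        Shape::Rect { width, height } =&gt; width * height,\n    }\n}\n</code></pre>\n<p>The equivalent Haskell <code>data</code> declaration and pattern match read almost the same, modulo keywords.</p>\n<p>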
Since I started using Haskell, I feel strongly hampered using any language without a rich, flexible, and powerful type system. Rust's embrace of Algebraic Data Types (ADTs) feels natural.</p>\n<p>There are some differences between the languages in these topics, but they are <em>mostly</em> superficial. For example, Haskell uses the single keyword <code>data</code> for introducing both product and sum types, while Rust uses <code>struct</code> and <code>enum</code>, respectively. Haskell will allow creation of partial field accessors in sum types, while Rust does not. Haskell allows for partial pattern matches (with an optional warning), and Rust does not.</p>\n<p>These are meaningful and affect the way you use the languages, but I don't see them as deeply philosophical. Instead, I see both languages embracing the idea that encouraging programmers to define and use strong typing mechanisms leads to better code. And it's a message I wholeheartedly endorse.</p>\n<h2 id=\"traits-and-type-classes\">Traits and type classes</h2>\n<p>In the wide world of inheritance and polymorphism, there are a lot of different approaches. Within that, Rust's traits and Haskell's type classes are far more similar than different. Both of them allow you to separate out functionality (methods) from data (<code>struct</code>/<code>data</code>). Both allow you to create new types or traits/classes yourself and add them on to existing types/traits/classes. Both of them support a concept of associated types, and multiple parameters (either via parameterized traits or multi-param type classes).</p>\n<p>There are some differences between the two. For one, Rust doesn't allow orphans. An implementation must appear in the same crate as either the type definition or the trait definition. (The fact that Rust treats an entire crate as a compilation unit instead of a single module makes this restriction less of an imposition.) 
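</p>\n<p>As a hypothetical sketch (mine, not from the original post), a non-orphan implementation in Rust mirrors a Haskell type class instance: behavior defined separately from data, then attached to an existing type:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F; Behavior separated from data, like a Haskell type class.\ntrait Describe {\n    fn describe(&amp;self) -&gt; String;\n}\n\n&#x2F;&#x2F; Implement it for an existing standard type. This is not an\n&#x2F;&#x2F; orphan: the trait lives in the same crate as the impl.\nimpl Describe for u32 {\n    fn describe(&amp;self) -&gt; String {\n        format!(&quot;the number {}&quot;, self)\n    }\n}\n</code></pre>\n<p>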
Also, Haskell supports functional dependencies, but that's not terribly interesting, since that can be closely approximated with associated types. And there are other, more subtle differences, around issues like overlapping instances. Rust's lack of orphans allows it to make some closed world assumptions that Haskell cannot.</p>\n<p>Ultimately, the distinctions above don't lend themselves to a deep philosophical difference, but rather minor variations on a theme. There is, however, one major distinction in this area between the two languages: Higher Kinded Types (HKTs). In Haskell, HKTs provide the basis for such typeclasses as <code>Functor</code>, <code>Applicative</code>, <code>Monad</code>, <code>Foldable</code>, and <code>Traversable</code>. In Rust, implementing some kind of traits around these concepts is <a href=\"https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/\">a bit more complicated</a>.</p>\n<p>And this is one of the deeper philosophical differences between the two languages. Haskellers readily embrace concepts like HKTs. The Rust community has adamantly avoided embracing them, due to their perceived complexity. Instead, in Rust, alternative and arguably simpler approaches have been used to solve the same problems these typeclasses solve in Haskell. Which leads us to probably the biggest philosophical difference between the languages.</p>\n<h2 id=\"general-vs-specific\">General vs specific</h2>\n<p>Let's say I want to have early termination in the case of an error. Or asynchronous coding capabilities. Or the ability to pass information to the rest of a computation. How would I achieve this?</p>\n<p>In Haskell, the answer is <em>obviously</em> <code>Monad</code>s. <code>do</code>-notation is a general purpose &quot;programmable semicolon.&quot; It generally solves all of these cases. And many, many more. Writing a parser? <code>Monad</code>. Concurrency? 
Maybe <code>Monad</code>, or maybe <code>Applicative</code> with <code>ApplicativeDo</code> turned on. But the common factor: we can express large classes of problems as <code>do</code>-notation.</p>\n<p>How about Rust? Well, if you want early termination for errors, you'll use a <code>Result</code> return type and the <code>?</code> try operator. Async? <code>async</code>/<code>.await</code> syntax. Pass in information? Maybe use method syntax, maybe use thread-local state, maybe something else.</p>\n<p>The point is that the Haskell community overall reaches for generalizing a solution as far as possible, usually along the lines of some abstract mathematical underpinning. There are huge advantages to this. We build out solutions to problems we didn't even know we had. We are able to rely on mathematical laws to guide our designs and ensure concepts compose nicely.</p>\n<p>The Rust community, instead, favors specific, ergonomic solutions. Error handling is <em>really</em> common, so give it a single character operator. Make sure that it handles common cases, like unifying error types via the <code>From</code> trait. Make sure error messages are as clear as possible. Optimize for the 95%, and don't worry about the 5% yet. (And see the next section for the 5%.)</p>\n<p>To me, this is the deepest non-inherent divide between the languages. Sure, ownership versus purity is huge, but it's right there on the label of the languages. <strong>This distinction ends up impacting how new language features are added, how people generally think about solutions, and how libraries are designed.</strong></p>\n<p>One final point. As much as I've implied that the Rust and Haskell communities are in two camps here, that's not quite fair. There are people in the Haskell community looking to make more specific solutions to some problems. (I'm probably one of them with things like <code>RIO</code>.) 
And while I can't think of a concrete Rust example to the contrary, I have no doubt that there are cases where people design general solutions when a more specific one would suffice.</p>\n<h2 id=\"code-generation-metaprogramming-macros\">Code generation/metaprogramming/macros</h2>\n<p>Haskell has metaprogramming via Template Haskell (TH). It's almost universally viewed as a necessary evil, but evil nonetheless. It screws up compilation in some cases via stage restrictions, it requires a language pragma to enable, and introduces awkward syntax. Features like deriving serialization instances are generally moving towards in-language features via the <code>Generic</code> typeclass.</p>\n<p>Rust's &quot;Hello World&quot; sticks a macro call on the second line via <code>println!</code>. The syntax for calling macros looks almost identical to function calls. Common libraries encourage macro usage all over the place. <code>serde</code> serialization deriving, <code>structopt</code> command line parsing, and <code>snafu</code>/<code>thiserror</code> error type creation all leverage macro attributes and deriving.</p>\n<p>This is a fascinating distinction to me. I've been on both sides of the TH divide. Yesod famously uses TH for a lot of code generation, which has earned the ire of many Haskellers. I've since generally avoided using TH when possible in the past few years. And when I picked up Rust, I studiously avoided learning how to create macros until relatively recently, lest I be tempted to slip back into my old, evil ways.</p>\n<p>Metaprogramming definitely complicates some things. It makes it harder to debug some problems. Rust does a pretty good job at making sure error messages can be comprehensible. But documentation on macro arguments and return types is still not as nice as functions and methods.</p>\n<p>I think I'm still mostly in the Haskell camp of avoiding unnecessary metaprogramming in my API design, but I'm beginning to be more free with it. 
And I have no reservations in Rust about <em>using</em> macros; they're wonderful. I do wonder if the main issue in Haskell isn't the overall concept of metaprogramming, but the specific implementation with Template Haskell.</p>\n<h2 id=\"backwards-compatibility\">Backwards compatibility</h2>\n<p>Rust overall has a more coherent and consistent story around backwards compatibility. It's almost always painless to upgrade to new versions of the Rust compiler. This puts an extra burden on the compiler team, and constrains changes that can be made to the language. And in one case (the module system update), it required a new <code>edition</code> system to allow for full backwards compatibility.</p>\n<p>The Haskell community overall cares less about backwards compatibility. New versions of the compiler regularly break code. New versions of libraries will get released to smooth out rough edges in the APIs. (I used to do this regularly, and now regret that. I've tried hard to keep backwards compatibility in my libraries.)</p>\n<p>Overall, I think the Rust community's approach here is better for producing production software. Arguably the Haskell approach allows for much more exploration and attainment of some higher level of beauty. Or as they say, &quot;avoid (success at all costs).&quot;</p>\n<h2 id=\"optimistic-optimizations\">Optimistic optimizations</h2>\n<p>GHC has a powerful rewrite rules system, which can rewrite less efficient combinations of functions to more optimized ones. This plays in a big way in the <code>vector</code> package, where rewrite rules implement stream fusion, allowing many classes of vector pipelines to completely avoid allocation. This is a massive optimization. At least when it works. As I've personally experienced, and many others have too, rewrite rules can be finicky. 
The Haskell approach is to be happy that our code sometimes gets much faster, and that we get to keep elegant, easy-to-understand code.</p>\n<p>The Rust approach is the polar opposite. Either code will <em>definitely</em> be fast or <em>definitely</em> be slow. I learned this a while ago when looking into recursive functions and tail call optimization (TCO). The Rust compiler will <em>not</em> perform a TCO, because it's so easy to accidentally change a TCO-able implementation into something that eats up stack space. There are plans to make explicit tail calls possible with the <code>become</code> keyword someday.</p>\n<p>More generally, Rust embraces the concept of zero cost abstractions. The idea is that you should be able to abstract and simplify code, when we can guarantee that there is no cost. In the Haskell world, we tend to focus on the elegant abstraction, even if a cost will be involved.</p>\n<h2 id=\"learning-curve\">Learning curve</h2>\n<p>A short one here. Both languages have a higher-than-average learning curve compared with other languages. Both languages embrace their learning curves. As much as possible, we try to make learning and using the languages easy. But neither language shies away from powerful features, even if it will make the language a bit harder to learn.</p>\n<p>To quote a Perlism: you'll only learn the language once, you'll use it for the rest of your life.</p>\n<h2 id=\"explicitly-mark-things\">Explicitly mark things</h2>\n<p>Both languages embrace the idea of explicitly marking things. For example, both languages encourage (in Haskell's case) or enforce (in Rust's case) marking the type signature of all functions. But that's pretty common. Haskell goes further, and requires that you mark all effectful computations with the <code>IO</code> type (or something similar, like <code>MonadIO</code>). 
Rust requires that anything which may fail be marked with a <code>Result</code> return value.</p>\n<p>You may argue that these are actually a <em>difference</em> in the languages, and to some extent that's true. But I think the difference is about what the language considers important. Haskell, for reasons of purity, values deeply the idea that an effect may be performed. It then lumps errors and exceptions into the contract of <code>IO</code> and the concept of laziness (for better or worse). Rust, on the other hand, doesn't care if you may perform an effect, but deeply cares about whether an error may occur.</p>\n<h2 id=\"type-enforce-everything\">Type enforce <em>everything</em>?</h2>\n<p>When I initially implemented Haskell's <code>monad-logger</code>, I provided an instance for <code>IO</code> which performed no output. I received many complaints that people would rather get a compile time error if they forgot to initialize the logging system, and I removed the <code>IO</code> instance. (Without getting into details: this was <em>definitely</em> the right decision for the API, regardless of the distinction with Rust.)</p>\n<p>That's why I was so amused when I first used the <code>log</code> crate in Rust, and realized that if you don't initialize the logging system, it produces no output. There's no runtime error, just silence.</p>\n<p>Similarly, many functions in the Tokio crate will fail at runtime if run from outside of the context of a Tokio runtime. But nothing in the type system enforces this idea.</p>\n<p>And finally, I've been bitten a few times by <code>actix-web</code>'s state management. 
If you mismatch the type of the state between your handlers and your service declaration, you'll end up with a runtime error instead of a compile time bug.</p>\n<p>In the Haskell world, the overall philosophy is generally to approach &quot;if it compiles, it works.&quot; Haskellers love enforcing almost every invariant at the type level.</p>\n<p>I haven't discussed this much with Rustaceans, but it seems to me that the overall Rust philosophy here is slightly different. Instead, we like to express <em>tricky</em> invariants at the type level. But if something is so obviously going to fail or behave incorrectly in the most basic smoke testing, such as a Tokio function crashing, there's no need to develop type-level protections against it.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hope this laundry list comparison was interesting. I've been meaning to write it down for a while, so I kind of feel like I checked off a New Year's Resolution in doing so. I'd be curious to hear any other points of comparison people have, or disagreements about my assessments.</p>\n<p>You may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-at-fpco-2020/\">Rust at FP Complete, 2020 update</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/\">Collect in Rust, traverse in Haskell and Scala</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/error-handling-is-hard/\">Error handling is hard</a></li>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training at FP Complete</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/syllabus/\">Applied Haskell syllabus</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
        "slug": "philosophies-rust-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Philosophies of Rust and Haskell",
        "description": "As regular users of both Rust and Haskell, the FP Complete engineering team often discusses the similarities and differences in these languages. In this post, we'll review some of the philosophical underpinnings of these languages.",
        "updated": null,
        "date": "2021-01-11",
        "year": 2021,
        "month": 1,
        "day": 11,
        "taxonomies": {
          "tags": [
            "rust",
            "haskell",
            "insights"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "image": "images/blog/philosophies-rust-haskell.png"
        },
        "path": "/blog/philosophies-rust-haskell/",
        "components": [
          "blog",
          "philosophies-rust-haskell"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "rust-ownership",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#rust-ownership",
            "title": "Rust: ownership",
            "children": []
          },
          {
            "level": 2,
            "id": "haskell-purely-functional",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#haskell-purely-functional",
            "title": "Haskell: purely functional",
            "children": []
          },
          {
            "level": 2,
            "id": "expression-oriented",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#expression-oriented",
            "title": "Expression oriented",
            "children": []
          },
          {
            "level": 2,
            "id": "type-system",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#type-system",
            "title": "Type system",
            "children": []
          },
          {
            "level": 2,
            "id": "traits-and-type-classes",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#traits-and-type-classes",
            "title": "Traits and type classes",
            "children": []
          },
          {
            "level": 2,
            "id": "general-vs-specific",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#general-vs-specific",
            "title": "General vs specific",
            "children": []
          },
          {
            "level": 2,
            "id": "code-generation-metaprogramming-macros",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#code-generation-metaprogramming-macros",
            "title": "Code generation/metaprogramming/macros",
            "children": []
          },
          {
            "level": 2,
            "id": "backwards-compatibility",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#backwards-compatibility",
            "title": "Backwards compatibility",
            "children": []
          },
          {
            "level": 2,
            "id": "optimistic-optimizations",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#optimistic-optimizations",
            "title": "Optimistic optimizations",
            "children": []
          },
          {
            "level": 2,
            "id": "learning-curve",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#learning-curve",
            "title": "Learning curve",
            "children": []
          },
          {
            "level": 2,
            "id": "explicitly-mark-things",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#explicitly-mark-things",
            "title": "Explicitly mark things",
            "children": []
          },
          {
            "level": 2,
            "id": "type-enforce-everything",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#type-enforce-everything",
            "title": "Type enforce everything?",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2999,
        "reading_time": 15,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cloning-reference-method-calls.md",
        "colocated_path": null,
        "content": "<p>This semi-surprising corner case came up in some recent <a href=\"https://tech.fpcomplete.com/training/\">Rust training</a> I was giving. I figured a short write-up may help some others in the future.</p>\n<p>Rust's language design focuses on ergonomics. The goal is to make common patterns easy to write on a regular basis. This overall works out very well. But occasionally, you end up with a surprising outcome. And I think this situation is a good example.</p>\n<p>Let's start off by pretending that method syntax doesn't exist at all. Let's say I've got a <code>String</code>, and I want to clone it. I know that there's a <code>Clone::clone</code> method, which takes a <code>&amp;String</code> and returns a <code>String</code>. We can leverage that like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn uses_string(x: String) {\n    println!(&quot;I consumed the String! {}&quot;, x);\n}\n\nfn main() {\n    let name = &quot;Alice&quot;.to_owned();\n    let name_clone = Clone::clone(&amp;name);\n    uses_string(name);\n    uses_string(name_clone);\n}\n</code></pre>\n<p>Notice that I needed to pass <code>&amp;name</code> to <code>clone</code>, not simply <code>name</code>. If I did the latter, I would end up with a type error:</p>\n<pre><code>error[E0308]: mismatched types\n --&gt; src\\main.rs:7:35\n  |\n7 |     let name_clone = Clone::clone(name);\n  |                                   ^^^^\n  |                                   |\n  |                                   expected reference, found struct `String`\n  |                                   help: consider borrowing here: `&amp;name`\n</code></pre>\n<p>And that's because Rust won't automatically borrow a reference from function arguments. You need to explicitly say that you want to borrow the value. Cool.</p>\n<p>But now I've remembered that method syntax <em>is</em>, in fact, a thing. 
So let's go ahead and use it!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let name_clone = (&amp;name).clone();\n</code></pre>\n<p>Remembering that <code>clone</code> takes a <code>&amp;String</code> and not a <code>String</code>, I've gone ahead and helpfully borrowed from <code>name</code> before calling the <code>clone</code> method. And I needed to wrap up that whole expression in parentheses, otherwise it will be parsed incorrectly by the compiler.</p>\n<p>That all works, but it's clearly not the way we want to write code in general. Instead, we'd like to forgo the parentheses and the <code>&amp;</code> symbol. And fortunately, we can! Most Rustaceans early on learn that you can simply do this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let name_clone = name.clone();\n</code></pre>\n<p>In other words, when we use method syntax, we can call <code>.clone()</code> on either a <code>String</code> <em>or</em> a <code>&amp;String</code>. That's because with a <a href=\"https://doc.rust-lang.org/stable/reference/expressions/method-call-expr.html\">method call expression</a>, &quot;the receiver may be automatically dereferenced or borrowed in order to call a method.&quot; Essentially, the compiler follows these steps:</p>\n<ul>\n<li>What's the type of <code>name</code>? OK, it's a <code>String</code></li>\n<li>Is there a method available that takes a <code>String</code> as the receiver? Nope.</li>\n<li>OK, try borrowing it. Is there a method available that takes a <code>&amp;String</code> as the receiver? Yes. Use that!</li>\n</ul>\n<p>And, for the most part, this works exactly as you'd expect. Until it doesn't. Let's start off with a confusing error message. 
Let's say I've got a helper function to loudly clone a <code>String</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn clone_loudly(x: &amp;String) -&gt; String {\n    println!(&quot;Cloning {}&quot;, x);\n    x.clone()\n}\n\nfn uses_string(x: String) {\n    println!(&quot;I consumed the String! {}&quot;, x);\n}\n\nfn main() {\n    let name = &quot;Alice&quot;.to_owned();\n    let name_clone = clone_loudly(&amp;name);\n    uses_string(name);\n    uses_string(name_clone);\n}\n</code></pre>\n<p>Looking at <code>clone_loudly</code>, I realize that I can easily generalize this to more than just a <code>String</code>. The only two requirements are that the type must implement <code>Display</code> (for the <code>println!</code> call) and <code>Clone</code>. Let's go ahead and implement that, accidentally forgetting about the <code>Clone</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::fmt::Display;\nfn clone_loudly&lt;T: Display&gt;(x: &amp;T) -&gt; T {\n    println!(&quot;Cloning {}&quot;, x);\n    x.clone()\n}\n</code></pre>\n<p>As you'd expect, this doesn't compile. However, the error message given may be surprising. If you're like me, you were probably expecting an error message about missing a <code>Clone</code> bound on <code>T</code>. 
In fact, we get something else entirely:</p>\n<pre><code>error[E0308]: mismatched types\n --&gt; src\\main.rs:4:5\n  |\n2 | fn clone_loudly&lt;T: Display&gt;(x: &amp;T) -&gt; T {\n  |                 - this type parameter - expected `T` because of return type\n3 |     println!(&quot;Cloning {}&quot;, x);\n4 |     x.clone()\n  |     ^^^^^^^^^ expected type parameter `T`, found `&amp;T`\n  |\n  = note: expected type parameter `T`\n                  found reference `&amp;T`\n</code></pre>\n<p>Strangely enough, the <code>.clone()</code> seems to have succeeded, but returned a <code>&amp;T</code> instead of a <code>T</code>. That's because the method call expression is following the same steps as above with <code>String</code>, namely:</p>\n<ul>\n<li>What's the type of <code>x</code>? OK, it's a <code>&amp;T</code></li>\n<li>Is there a <code>clone</code> method available that takes a <code>&amp;T</code> as the receiver? Nope, since we don't know that <code>T</code> implements the <code>Clone</code> trait.</li>\n<li>OK, try borrowing it. Is there a method available that takes a <code>&amp;&amp;T</code> as the receiver? <a href=\"https://doc.rust-lang.org/1.48.0/src/core/clone.rs.html#222-227\">Interestingly yes</a>.</li>\n</ul>\n<p>Let's dig in on that <code>Clone</code> implementation a bit. Removing a bit of noise so we can focus on the important bits:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;T&gt; Clone for &amp;T {\n    fn clone(self: &amp;&amp;T) -&gt; &amp;T {\n        *self\n    }\n}\n</code></pre>\n<p>Since references are <code>Copy</code>able, derefing a reference to a reference results in copying the inner reference value. 
What I find fascinating, and slightly concerning, is that we have two orthogonal features in the language:</p>\n<ul>\n<li>Method call syntax automatically causing borrows</li>\n<li>The ability to implement traits for both a type and a reference to that type</li>\n</ul>\n<p>When combined, there's some level of ambiguity about <em>which</em> trait implementation will end up being used.</p>\n<p>In this example, we're fortunate that the code didn't compile. We ended up with nothing more than a confusing error message. I haven't yet run into a real life issue where this behavior can result in code which compiles but does the wrong thing. It's certainly theoretically possible, but seems unlikely to occur unintentionally. That said, if anyone has been bitten by this, I'd be very interested to hear the details.</p>\n<p>So the takeaway: autoborrowing and derefing as part of method call syntax is a great feature of the language. It would be a major pain to use Rust without it. I'm glad it's present. Having traits implemented for references is a great feature, and I wouldn't want to use the language without it.</p>\n<p>But every once in a while, these two things bite us. Caveat emptor.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloning-reference-method-calls/",
        "slug": "cloning-reference-method-calls",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cloning a reference and method call syntax in Rust",
        "description": "A short example of a possibly surprising impact of how method resolution works in Rust",
        "updated": null,
        "date": "2020-12-28",
        "year": 2020,
        "month": 12,
        "day": 28,
        "taxonomies": {
          "categories": [
            "functional programming",
            "rust"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "image": "images/blog/method-syntax-autoborrow-surprise.png",
          "author_avatar": "/images/leaders/michael-snoyman.png"
        },
        "path": "/blog/cloning-reference-method-calls/",
        "components": [
          "blog",
          "cloning-reference-method-calls"
        ],
        "summary": null,
        "toc": [],
        "word_count": 975,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/pattern-matching.md",
        "colocated_path": null,
        "content": "<p>I first started writing Haskell about 15 years ago. My learning curve for the language was haphazard at best. In many cases, I learnt concepts by osmosis, and only later learned the proper terminology and details around them. One of the prime examples of this is pattern matching. Using a <code>case</code> expression in Haskell, or a <code>match</code> expression in Rust, always felt natural. But it took years to realize that patterns appeared in other parts of the languages than just these expressions, and what terms like <em>irrefutable</em> meant.</p>\n<p>It's quite possible most Haskellers and Rustaceans will consider this content obvious. But maybe there are a few others like me out there who never had a chance to realize how ubiquitous patterns are in these languages. This post may also be a fun glimpse into either Haskell or Rust if you're only familiar with one of the languages.</p>\n<h2 id=\"language-references\">Language references</h2>\n<p>Both Haskell and Rust have language references available online. The caveats are that the Rust reference is marked as incomplete, and the Haskell language reference is for Haskell2010, which GHC does not strictly adhere to. That said, both are readily understandable and complete enough to get a very good intuition. If you've never looked at either of these documents, I highly recommend having a peek.</p>\n<ul>\n<li><a href=\"https://www.haskell.org/onlinereport/haskell2010/haskellch3.html#x8-580003.17\">Haskell 2010 Language Report, section 3.17 Pattern Matching</a></li>\n<li><a href=\"https://doc.rust-lang.org/stable/reference/patterns.html#range-patterns\">Rust language reference, Patterns</a></li>\n</ul>\n<h2 id=\"case-and-match\">case and match</h2>\n<p>The first place most of us hear the term &quot;pattern matching&quot; is in Haskell's <code>case</code> expression, or Rust's <code>match</code> expression. And it makes perfect sense here. 
We can provide multiple <em>patterns</em>, typically based on a data constructor/variant, and the language will match the most appropriate one. Slightly tying in with <a href=\"https://tech.fpcomplete.com/blog/error-handling-is-hard/\">my previous post on errors</a>, let's look at a common example: pattern matching on an <code>Either</code> value in Haskell.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">mightFail :: Either String Int\n\nmain =\n    case mightFail of\n        Left err -&gt; putStrLn $ &quot;Error occurred: &quot; ++ err\n        Right x -&gt; putStrLn $ &quot;Successful result: &quot; ++ show x\n</code></pre>\n<p>Or a <code>Result</code> value in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn might_fail() -&gt; Result&lt;i32, String&gt; { ... }\n\nfn main() {\n    match might_fail() {\n        Err(err) =&gt; println!(&quot;Error occurred: {}&quot;, err),\n        Ok(x) =&gt; println!(&quot;Successful result: {}&quot;, x),\n    }\n}\n</code></pre>\n<p>I think most programmers, even those unfamiliar with these languages, could intuit to some extent what these expressions do. <code>mightFail</code> and <code>might_fail()</code> return some kind of value. The value may be in multiple different &quot;states.&quot; The patterns match, and we branch our behavior depending on which state. Easy enough.</p>\n<p>Already here, though, there's an important detail many of us gloss over. Or at least I did. Our patterns not only <em>match a constructor</em>, they also <em>bind a variable</em>. In the examples above, we bind the variables <code>err</code> and <code>x</code> to values contained by the data constructors. And that's pretty interesting, because both Haskell and Rust <em>also</em> use <code>let</code> bindings for defining variables. 
I wonder if there's some kind of connection there.</p>\n<p><em>Narrator: there was a connection</em></p>\n<h2 id=\"functions-in-haskell\">Functions in Haskell</h2>\n<p>Haskell immediately adds a curve ball (in a good way) to this story. Let's take a classic recursive definition of a factorial function (note: this isn't a <em>good</em> definition since it has a space leak).</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">fact :: Int -&gt; Int\nfact i =\n    case i of\n        0 -&gt; 1\n        _ -&gt; i * fact (i - 1)\n</code></pre>\n<p>This feels a bit verbose. We capture the variable <code>i</code>, only to immediately pattern match on it. We also have a <em>new</em> kind of pattern, <code>_</code>. When I first learned Haskell, I thought of <code>_</code> as &quot;a variable I don't care about.&quot; But it's actually more specialized than this: a wildcard pattern, something which matches anything. (We'll get into what variables match later.)</p>\n<p>Anyway, to make this kind of code a bit terser, Haskell offers a different way of writing this function:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">fact :: Int -&gt; Int\nfact 0 = 1\nfact i = i * fact (i - 1)\n</code></pre>\n<p>These two versions of the code are identical. It's just a syntactic trick. 
Let's see another more interesting syntactic trick.</p>\n<h2 id=\"what-about-let\">What about let?</h2>\n<p>We use <code>let</code> expressions (and <code>let</code> bindings in <code>do</code>-notation) in Haskell to create new variables, e.g.:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main =\n    let name = &quot;Alice&quot;\n     in putStrLn $ &quot;Hello, &quot; ++ name\n</code></pre>\n<p>And we do the same with <code>let</code> statements in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let name = &quot;Alice&quot;;\n    println!(&quot;Hello, {}&quot;, name);\n}\n</code></pre>\n<p>But here's where we begin to get a bit fancy. We already saw that we can bind variables in <code>case</code> and <code>match</code> expressions. Does that mean we can do away with the <code>let</code>s? Yes we can!</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main =\n    case &quot;Alice&quot; of\n        name -&gt; putStrLn $ &quot;Hello, &quot; ++ name\n</code></pre>\n<p>And</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    match &quot;Alice&quot; {\n        name =&gt; println!(&quot;Hello, {}&quot;, name)\n    }\n}\n</code></pre>\n<p>This isn't <em>good</em> code per se. In fact, <code>cargo clippy</code> will complain about it. But it does hint at the fact that there's a deeper connection between two constructs. And the connection is this: the left hand side of the equals sign in a <code>let</code> statement/expression/binding is a <em>pattern</em>.</p>\n<h2 id=\"ditch-the-case-ditch-the-match\">Ditch the case! Ditch the match!</h2>\n<p>Alright, so we can technically get rid of <code>let</code>s if we wanted to (which we don't). Can we get rid of the <code>case</code> expressions in Haskell? 
The real answer is &quot;definitely not.&quot; But interestingly, this code compiles!</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">mightFail :: Either String Int\nmightFail = Left &quot;It failed&quot;\n\nmain :: IO ()\nmain =\n    let Right x = mightFail\n     in putStrLn $ &quot;Successful result: &quot; ++ show x\n</code></pre>\n<p>As mentioned, we can put a pattern on the left hand side of the equals sign. And we've done just that here. But what on Earth does this code <em>do</em>? As you can see, the <code>mightFail</code> expression will evaluate to a <code>Left</code> value. But our pattern only matches on <code>Right</code> values! Running this code gives us:</p>\n<pre><code>Main.hs:10:9-27: Non-exhaustive patterns in Right x\n</code></pre>\n<p>Haskell is a non-strict language. Performing this binding is allowed. But evaluating the result of this binding blows up.</p>\n<p>Rust, however, <strong>is</strong> a strict language. 
We can do something very similar in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let Ok(x) = might_fail();\n    println!(&quot;Successful result: {}&quot;, x);\n}\n</code></pre>\n<p>But this code won't even compile:</p>\n<pre><code>error[E0005]: refutable pattern in local binding: `Err(_)` not covered\n...\n    = note: `let` bindings require an &quot;irrefutable pattern&quot;, like a `struct` or an `enum` with only one variant\n    = note: for more information, visit https:&#x2F;&#x2F;doc.rust-lang.org&#x2F;book&#x2F;ch18-02-refutability.html\n    = note: the matched value is of type `std::result::Result&lt;i32, std::string::String&gt;`\nhelp: you might want to use `if let` to ignore the variant that isn&#x27;t matched\n</code></pre>\n<p>Let's dive into those &quot;exhaustive&quot; and &quot;refutable&quot; concepts, and then round out this post with a glance at where else patterns appear in these languages.</p>\n<p>Side note: it's true that the Haskell code above compiles. However, if you turn on the <a href=\"https://ghc.gitlab.haskell.org/ghc/doc/users_guide/using-warnings.html#ghc-flag--Wincomplete-uni-patterns\"><code>-Wincomplete-uni-patterns</code> warning</a>, you'll get a warning about this. I personally think this warning should be included in <code>-Wall</code>.</p>\n<h2 id=\"refutable-and-irrefutable-exhaustive-and-non-exhaustive\">Refutable and irrefutable, exhaustive and non-exhaustive</h2>\n<p>This topic is quite a bit more complicated in Haskell due to non-strictness. How matching works in the presence of &quot;bottom&quot; or undefined values is an entire extra wrench of complication. I'm going to ignore those cases entirely here. 
If you're interested in more information on this, my article <a href=\"https://tech.fpcomplete.com/haskell/tutorial/all-about-strictness/\">All about strictness</a> discusses some of these points.</p>\n<p>Some patterns will <em>always</em> match a value. The simplest example of this is a wildcard. In fact, that's basically its definition. Quoting the Rust reference:</p>\n<blockquote>\n<p>The <em>wildcard pattern</em> (an underscore symbol) matches any value.</p>\n</blockquote>\n<p>And fortunately for us, things behave exactly the same way in Haskell.</p>\n<p>Another pattern that matches any value is a variable. <code>let x = blah</code> is a valid binding, regardless of what <code>blah</code> is. Both of these are known as <em>irrefutable</em> patterns.</p>\n<p>By contrast, some patterns are refutable. They are patterns that only match some possible cases of the value, not all. The simplest example is the one we saw before: matching on one of many data constructors/variants in a data type (Haskell) or enum (Rust).</p>\n<p>Contrasting yet again: if you have a <code>struct</code> in Rust, or a Rust <code>enum</code> with only one variant, or a Haskell <code>data</code> with only one data constructor, or a Haskell <code>newtype</code>, the pattern will always match. That is, of course, assuming any patterns nested within will <em>also</em> always match. 
To demonstrate, this pattern match is irrefutable:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Foo = Foo Bar\ndata Bar = Baz Int\n\nmain :: IO ()\nmain =\n    let Foo (Baz x) = Foo (Baz 5)\n     in putStrLn $ &quot;x == &quot; ++ show x\n</code></pre>\n<p>However, if I add another data constructor to <code>Bar</code>, it becomes refutable:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Foo = Foo Bar\ndata Bar = Baz Int | Bin Char\n\nmain :: IO ()\nmain =\n    let Foo (Baz x) = Foo (Bin &#x27;c&#x27;)\n     in putStrLn $ &quot;x == &quot; ++ show x\n</code></pre>\n<p>In both Haskell and Rust, tuples behave like data types with one constructor, and therefore as long as the patterns inside of them are irrefutable, they are irrefutable too.</p>\n<p>The final case I want to point out is <em>literal patterns</em>. Literal patterns are very much refutable. This code thankfully does not compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let &#x27;x&#x27; = &#x27;a&#x27;;\n}\n</code></pre>\n<p>But the really interesting thing for someone not used to pattern matching is that you can do this at all! We've already done pattern matching on literal values above, in our definition of <code>fact</code>. It's very convenient to be able to build up complex case/match expressions using literal syntax (like list/slice syntax).</p>\n<p>Alright, let's see a few more examples of where patterns are used in these languages, then tie it up.</p>\n<h2 id=\"function-arguments\">Function arguments</h2>\n<p>Function arguments are patterns in both languages. In Haskell we saw that you can use <em>refutable</em> patterns, and provide multiple function clauses. The same doesn't apply to Rust functions. 
You'll need to use an irrefutable pattern in the function, and then do some pattern matching or other kind of branching in the body of the functions. For example, the poorly written <code>fact</code> function can be rewritten in Rust as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn fact(i: u32) -&gt; u32 {\n    if i == 0 {\n        1\n    } else {\n        i * fact(i - 1)\n    }\n}\n\nfn main() {\n    println!(&quot;5! == {}&quot;, fact(5));\n}\n</code></pre>\n<p>Perhaps more interestingly in both languages, you can use a pattern matching a data structure in the function argument. For example, in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct Person {\n    name: String,\n    age: u32,\n}\n\nfn greet(Person { name, age }: &amp;Person) {\n    println!(&quot;{} is {} years old&quot;, name, age);\n}\n\nfn main() {\n    let alice = Person {\n        name: &quot;Alice&quot;.to_owned(),\n        age: 30,\n    };\n    greet(&amp;alice);\n}\n</code></pre>\n<p>Or in Haskell, using positional instead of named fields:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Person = Person String Int\n\ngreet :: Person -&gt; IO ()\ngreet (Person name age) = putStrLn $ name ++ &quot; is &quot; ++ show age ++ &quot; years old&quot;\n\nmain :: IO ()\nmain = greet $ Person &quot;Alice&quot; 30\n</code></pre>\n<h2 id=\"closures-functions-and-lambdas\">Closures, functions, and lambdas</h2>\n<p>The arguments to closures (Rust) and lambdas (Haskell) are patterns. 
That means we can match on irrefutable things like tuples fairly easily:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let greet = |(name, age)| println!(&quot;{} is {} years old&quot;, name, age);\n    greet((&quot;Alice&quot;, 30));\n}\n</code></pre>\n<p>The big difference is that, in Rust, the pattern must be irrefutable. This is again due to strictness. The following code will compile in Haskell, but fail at runtime:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main :: IO ()\nmain =\n    let mylambda = \\(Right x) -&gt; putStrLn x\n     in mylambda (Left &quot;Error!&quot;)\n</code></pre>\n<p>Again, <code>-Wincomplete-uni-patterns</code> will warn about this. But again, it's not on by default.</p>\n<p>By contrast, in Rust, the equivalent code will fail to compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myclosure = |Ok(x): Result&lt;i32, &amp;str&gt;| println!(&quot;{}&quot;, x);\n    myclosure(Err(&quot;Hello&quot;));\n}\n</code></pre>\n<p>This produces:</p>\n<pre><code>error[E0005]: refutable pattern in function argument: `Err(_)` not covered\n</code></pre>\n<p>And if you're wondering: I needed to add the explicit <code>: Result&lt;i32, &amp;str&gt;</code> type annotation to help type inference get to that error message. Without it, it just complained that it couldn't infer the type of <code>x</code>.</p>\n<h2 id=\"if-let-while-let-and-for-rust\">if let, while let, and for (Rust)</h2>\n<p>The <code>if let</code> and <code>while let</code> expressions are all about refutable pattern matches. 
&quot;Only do this if the pattern matches&quot; and &quot;keep doing this while the pattern matches.&quot; <code>if let</code> looks something like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let result: Result&lt;(), String&gt; = Err(&quot;Something happened&quot;.to_owned());\n    if let Err(e) = result {\n        eprintln!(&quot;Something went wrong: {}&quot;, e);\n    }\n}\n</code></pre>\n<p>And with <code>while let</code>, you can make something close to a <code>for</code> loop:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let mut iter = 1..=10;\n    while let Some(i) = iter.next() {\n        println!(&quot;i == {}&quot;, i);\n    }\n}\n</code></pre>\n<p>And speaking of <code>for</code> loops, the left hand side of the <code>in</code> keyword is a pattern. This can be really nice for cases like destructuring the tuple generated by the <code>enumerate()</code> method:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    for (idx, c) in &quot;Hello, world!&quot;.chars().enumerate() {\n        println!(&quot;{}: {}&quot;, idx, c);\n    }\n}\n</code></pre>\n<p>The patterns in a <code>for</code> loop must be irrefutable. 
This code won't compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let array = [Ok(1), Ok(2), Err(&quot;something&quot;), Ok(3)];\n    for Ok(x) in &amp;array {\n        println!(&quot;x == {}&quot;, x);\n    }\n}\n</code></pre>\n<p>Instead, if you want to exit the <code>for</code> loop at the first <code>Err</code> value, you would need to do something like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let array = [Ok(1), Ok(2), Err(&quot;something&quot;), Ok(3)];\n    for x in &amp;array {\n        match x {\n            Ok(x) =&gt; println!(&quot;x == {}&quot;, x),\n            Err(_) =&gt; break,\n        }\n    }\n}\n</code></pre>\n<h2 id=\"where-they-re-used\">Where they're used</h2>\n<p>This was not intended to be a complete explanation of all examples of patterns in these languages. However, for a bit of completeness, let me quote the Haskell language specification for where patterns are part of the language:</p>\n<blockquote>\n<p>Patterns appear in lambda abstractions, function definitions, pattern bindings, list comprehensions, do expressions, and case expressions. However, the first five of these ultimately translate into case expressions, so defining the semantics of pattern matching for case expressions is sufficient.</p>\n</blockquote>\n<p>And similarly for Rust:</p>\n<blockquote>\n<ul>\n<li>let declarations</li>\n<li>Function and closure parameters</li>\n<li>match expressions</li>\n<li>if let expressions</li>\n<li>while let expressions</li>\n<li>for expressions</li>\n</ul>\n</blockquote>\n<p>There are also more advanced examples of patterns that I haven't touched on at all. Reference patterns in Rust would be relevant here, as would lazy patterns in Haskell.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hoped this gave a little bit of insight into the value of patterns. 
For me, the important takeaways are:</p>\n<ul>\n<li>Patterns appear in lots of places</li>\n<li>The difference between refutable and irrefutable patterns</li>\n<li>There are some places where you must use irrefutable patterns</li>\n<li>There are some places where Haskell lets you use refutable patterns, but you shouldn't</li>\n<li>Variable binding is just one special case of patterns</li>\n</ul>\n<p>If you're interested in learning more about either Haskell or Rust, check out our <a href=\"https://tech.fpcomplete.com/haskell/syllabus/\">Haskell syllabus</a> or our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a>. FP Complete also offers both corporate and public training classes on both Haskell and Rust. If you're interested in learning more, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us for details</a>.</p>\n<div class=\"text-center\">\n<a class=\"button-coral\" href=\"/training/\">Learn about Rust training</a>\n</div>\n",
        "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/",
        "slug": "pattern-matching",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Pattern matching",
        "description": "Pattern matching is a central feature of some programming languages, notably both Rust and Haskell. But patterns may be even more central than you realize. We'll look at some details in this post.",
        "updated": null,
        "date": "2020-12-14",
        "year": 2020,
        "month": 12,
        "day": 14,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust",
            "haskell",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/pattern-matching/",
        "components": [
          "blog",
          "pattern-matching"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "language-references",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#language-references",
            "title": "Language references",
            "children": []
          },
          {
            "level": 2,
            "id": "case-and-match",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#case-and-match",
            "title": "case and match",
            "children": []
          },
          {
            "level": 2,
            "id": "functions-in-haskell",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#functions-in-haskell",
            "title": "Functions in Haskell",
            "children": []
          },
          {
            "level": 2,
            "id": "what-about-let",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#what-about-let",
            "title": "What about let?",
            "children": []
          },
          {
            "level": 2,
            "id": "ditch-the-case-ditch-the-match",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#ditch-the-case-ditch-the-match",
            "title": "Ditch the case! Ditch the match!",
            "children": []
          },
          {
            "level": 2,
            "id": "refutable-and-irrefutable-exhaustive-and-non-exhaustive",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#refutable-and-irrefutable-exhaustive-and-non-exhaustive",
            "title": "Refutable and irrefutable, exhaustive and non-exhaustive",
            "children": []
          },
          {
            "level": 2,
            "id": "function-arguments",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#function-arguments",
            "title": "Function arguments",
            "children": []
          },
          {
            "level": 2,
            "id": "closures-functions-and-lambdas",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#closures-functions-and-lambdas",
            "title": "Closures, functions, and lambdas",
            "children": []
          },
          {
            "level": 2,
            "id": "if-let-while-let-and-for-rust",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#if-let-while-let-and-for-rust",
            "title": "if let, while let, and for (Rust)",
            "children": []
          },
          {
            "level": 2,
            "id": "where-they-re-used",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#where-they-re-used",
            "title": "Where they're used",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2421,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
            "title": "Philosophies of Rust and Haskell"
          }
        ]
      },
      {
        "relative_path": "blog/monads-gats-nightly-rust.md",
        "colocated_path": null,
"content": "<p>This blog post was entirely inspired by reading the <a href=\"https://www.reddit.com/r/rust/comments/k4vzvp/gats_on_nightly/\">GATs on Nightly!</a> Reddit post by /u/C5H5N5O. I just decided to take things a little bit too far, and thought a blog post on it would be fun. I want to be clear from the start: I'm introducing some advanced concepts in this post that rely on unstable features in Rust. I'm not advocating their usage <em>at all</em>. I'm just exploring what may and may not be possible with GATs.</p>\n<p>Rust shares many similarities with Haskell at the type system level. Both have types, generic types, associated types, and traits/type classes (which are basically equivalent). However, Haskell has one important additional feature that is lacking in Rust: Higher Kinded Types (HKTs). This isn't an accidental limitation in Rust, or some gap that should be filled in. It's an intentional design decision, at least as far as I know. But as a result, some things couldn't, until now, really be implemented in Rust.</p>\n<p>Take, for instance, a <code>Functor</code> in Haskell. For all its scary-sounding name, almost all developers today are familiar with the concept of a <code>Functor</code>. A <code>Functor</code> provides a general-purpose interface for &quot;map a function over this structure.&quot; Many different structures in Rust can provide such mapping functionality, including <code>Option</code>, <code>Result</code>, <code>Iterator</code>, and <code>Future</code>.</p>\n<p>However, it hasn't been possible to write a general-purpose <code>Functor</code> trait that can be implemented by multiple types. Instead, individual types implement their own <code>map</code> methods. 
For example, we can write our own custom <code>MyOption</code> and <code>MyResult</code> enums and provide <code>map</code> methods:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Debug, PartialEq)]\nenum MyOption&lt;A&gt; {\n    Some(A),\n    None,\n}\n\nimpl&lt;A&gt; MyOption&lt;A&gt; {\n    fn map&lt;F: FnOnce(A) -&gt; B, B&gt;(self, f: F) -&gt; MyOption&lt;B&gt; {\n        match self {\n            MyOption::Some(a) =&gt; MyOption::Some(f(a)),\n            MyOption::None =&gt; MyOption::None,\n        }\n    }\n}\n\n#[test]\nfn test_option_map() {\n    assert_eq!(MyOption::Some(5).map(|x| x + 1), MyOption::Some(6));\n    assert_eq!(MyOption::None.map(|x: i32| x + 1), MyOption::None);\n}\n\n#[derive(Debug, PartialEq)]\nenum MyResult&lt;A, E&gt; {\n    Ok(A),\n    Err(E),\n}\n\nimpl&lt;A, E&gt; MyResult&lt;A, E&gt; {\n    fn map&lt;F: FnOnce(A) -&gt; B, B&gt;(self, f: F) -&gt; MyResult&lt;B, E&gt; {\n        match self {\n            MyResult::Ok(a) =&gt; MyResult::Ok(f(a)),\n            MyResult::Err(e) =&gt; MyResult::Err(e),\n        }\n    }\n}\n\n#[test]\nfn test_result_map() {\n    assert_eq!(MyResult::Ok(5).map(|x| x + 1), MyResult::Ok::&lt;i32, ()&gt;(6));\n    assert_eq!(MyResult::Err(&quot;hello&quot;).map(|x: i32| x + 1), MyResult::Err(&quot;hello&quot;));\n}\n</code></pre>\n<p>However, it hasn't been possible without GATs to define <code>map</code> as a trait method. Let's see why. 
Here's a naive approach to a &quot;monomorphic functor&quot; trait, and an implementation for <code>Option</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F;&#x2F; Monomorphic functor trait\ntrait MonoFunctor {\n    type Unwrapped; &#x2F;&#x2F; value &quot;contained inside&quot;\n    fn map&lt;F&gt;(self, f: F) -&gt; Self\n    where\n        F: FnMut(Self::Unwrapped) -&gt; Self::Unwrapped;\n}\n\nimpl&lt;A&gt; MonoFunctor for Option&lt;A&gt; {\n    type Unwrapped = A;\n    fn map&lt;F: FnMut(A) -&gt; A&gt;(self, mut f: F) -&gt; Option&lt;A&gt; {\n        match self {\n            Some(a) =&gt; Some(f(a)),\n            None =&gt; None,\n        }\n    }\n}\n</code></pre>\n<p>In our trait definition, we define an associated type <code>Unwrapped</code>, for the value that lives &quot;inside&quot; the <code>MonoFunctor</code>. In the case of <code>Option&lt;A&gt;</code>, that would be <code>A</code>. And herein lies the problem. We're hard-coding the <code>Unwrapped</code> to just one type, <code>A</code>. Usually with a <code>map</code> function, we want to change the type to <code>B</code>. 
But we have no way in current, stable Rust to say &quot;I want a type that's associated with this <code>MonoFunctor</code>, but also a little bit different in what lives inside of it.&quot;</p>\n<p>That's where Generic Associated Types come in.</p>\n<h2 id=\"polymorphic-functor\">Polymorphic Functor</h2>\n<p>In order to get a polymorphic functor, we need to be able to say &quot;here's how my type would look if I wrapped up a <em>different</em> type inside of it.&quot; For example, with <code>Option</code>, we'd like to say &quot;hey, I've got <code>Option&lt;A&gt;</code>, and it contains an <code>A</code> type, but if it contained a <code>B</code> type instead, it would be <code>Option&lt;B&gt;</code>.&quot; To do this, we're going to use the generic associated type <code>Wrapped&lt;B&gt;</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Functor {\n    type Unwrapped;\n    type Wrapped&lt;B&gt;: Functor;\n\n    fn map&lt;F, B&gt;(self, f: F) -&gt; Self::Wrapped&lt;B&gt;\n    where\n        F: FnMut(Self::Unwrapped) -&gt; B;\n}\n</code></pre>\n<p>So what we're saying is:</p>\n<ul>\n<li>Each functor has an associated type <code>Unwrapped</code>, which is the thing it contains</li>\n<li>When we know a functor, we can also figure out another, associated type <code>Wrapped&lt;B&gt;</code> which is &quot;like <code>Self</code>, but has a different wrapped up value underneath&quot;</li>\n<li>Like before, <code>map</code> is a method that takes two parameters: <code>self</code> and a function</li>\n<li>The function parameter will map from the current underlying <code>Unwrapped</code> value to some new type <code>B</code></li>\n<li>And the output of <code>map</code> will be a <code>Wrapped&lt;B&gt;</code></li>\n</ul>\n<p>That's a bit abstract. 
Let's see what this looks like for the <code>Option</code> type:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;A&gt; Functor for Option&lt;A&gt; {\n    type Unwrapped = A;\n    type Wrapped&lt;B&gt; = Option&lt;B&gt;;\n\n    fn map&lt;F: FnMut(A) -&gt; B, B&gt;(self, mut f: F) -&gt; Option&lt;B&gt; {\n        match self {\n            Some(x) =&gt; Some(f(x)),\n            None =&gt; None,\n        }\n    }\n}\n\n#[test]\nfn test_option_map() {\n    assert_eq!(Some(5).map(|x| x + 1), Some(6));\n    assert_eq!(None.map(|x: i32| x + 1), None);\n}\n</code></pre>\n<p>And if you play with all of the type gymnastics, you'll see that this ends up being identical to the <code>map</code> method we special-cased for <code>MyOption</code> above (sans a difference between <code>FnOnce</code> and <code>FnMut</code>). Cool!</p>\n<h3 id=\"side-note-hkts\">Side note: HKTs</h3>\n<p>In Haskell, none of this generic associated type business is needed. In fact, Haskell <code>Functor</code>s don't use <em>any</em> associated types. The typeclass for <code>Functor</code> in Haskell far predates the presence of associated types in the language. 
For comparison, let's see what that looks like, renaming a bit to match up with Rust:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">class Functor f where\n    map :: (a -&gt; b) -&gt; f a -&gt; f b\ninstance Functor Option where\n    map f option =\n        case option of\n            Some x -&gt; Some (f x)\n            None -&gt; None\n</code></pre>\n<p>Or, to translate it into Rust-like syntax:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait HktFunctor {\n    fn map&lt;A, B, F: FnMut(A) -&gt; B&gt;(self: Self&lt;A&gt;, f: F) -&gt; Self&lt;B&gt;;\n}\n\nimpl HktFunctor for Option {\n    fn map&lt;A, B, F: FnMut(A) -&gt; B&gt;(self: Option&lt;A&gt;, f: F) -&gt; Option&lt;B&gt; {\n        match self {\n            Some(a) =&gt; Some(f(a)),\n            None =&gt; None,\n        }\n    }\n}\n</code></pre>\n<p>But this isn't valid Rust! That's because we're trying to provide type parameters to <code>Self</code>. But in Rust, <code>Option</code> isn't a type. <code>Option</code> must be applied to a single type parameter before it becomes a type. <code>Option&lt;i32&gt;</code> is a type. <code>Option</code> on its own is not.</p>\n<p>By contrast, in Haskell, <code>Maybe Int</code> is a type of <em>kind</em> <code>Type</code>. <code>Maybe</code> is a <em>type constructor</em>, of <em>kind</em> <code>Type -&gt; Type</code>. But you can treat <code>Maybe</code> as a type of its own for purposes of creating type classes and instances. <code>Functor</code> in Haskell works on the kind <code>Type -&gt; Type</code>. This is what we mean by &quot;higher kinded types&quot;: we can have types whose <em>kind</em> is higher than just <code>Type</code>.</p>\n<p>For the examples below, GATs in Rust are a workaround for this lack of HKTs. But as we'll ultimately see, they are more brittle and more verbose. 
That's not to say that GATs are a Bad Thing; far from it. It <em>is</em> to say that trying to write Haskell in Rust is probably not a good idea.</p>\n<p>OK, now that we've thoroughly established that what we're about to do isn't a great idea... let's do it!</p>\n<h2 id=\"pointed\">Pointed</h2>\n<p>There's a controversial typeclass in Haskell called <code>Pointed</code>. It's controversial because it introduces a typeclass without any laws associated with it, which many people frown upon. But since I already told you this is all a bad idea, let's implement <code>Pointed</code>.</p>\n<p>The idea of <code>Pointed</code> is simple: wrap up a value into a <code>Functor</code>-like thing. So in the case of <code>Option</code>, it would be like wrapping it with <code>Some</code>. In a <code>Result</code>, it's using <code>Ok</code>. And for a <code>Vec</code>, it would be a single-element vector. Unlike <code>Functor</code>, this will be a static method, since we don't have an existing <code>Pointed</code> value to change. Let's see it in action:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Pointed: Functor {\n    fn wrap&lt;T&gt;(t: T) -&gt; Self::Wrapped&lt;T&gt;;\n}\n\nimpl&lt;A&gt; Pointed for Option&lt;A&gt; {\n    fn wrap&lt;T&gt;(t: T) -&gt; Option&lt;T&gt; {\n        Some(t)\n    }\n}\n</code></pre>\n<p>What's particularly interesting about this is that we don't use the <code>A</code> type parameter in the <code>Option</code> implementation at all.</p>\n<p>There's one more thing worth noting. The result of calling <code>wrap</code> is a <code>Self::Wrapped&lt;T&gt;</code> value. What exactly do we know about <code>Self::Wrapped&lt;T&gt;</code>? Well, from the <code>Functor</code> trait definition, we know exactly one thing: that <code>Wrapped&lt;T&gt;</code> must be a <code>Functor</code>. 
Interestingly, we have <em>lost the knowledge</em> here that <code>Self::Wrapped&lt;T&gt;</code> is also a <code>Pointed</code>. That's going to be a recurring theme for the next few traits.</p>\n<p>But let me reiterate this a different way. When we're working with a general <code>Functor</code> trait implementation, we don't know <em>anything at all</em> about the <code>Wrapped</code> associated type except that it implements <code>Functor</code> itself. Logically, we know that for an <code>Option&lt;A&gt;</code> implementation, we'd like <code>Wrapped</code> to be an <code>Option&lt;B&gt;</code> kind of thing. But the GAT implementation does not enforce it. (By contrast, the HKT approach in Haskell <em>does</em> enforce this.) Nothing prevents us from writing a horrifically nonsensical implementation such as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;A&gt; Functor for MyOption&lt;A&gt; {\n    type Unwrapped = A;\n    type Wrapped&lt;B&gt; = Result&lt;B, String&gt;; &#x2F;&#x2F; wut?\n\n    fn map&lt;F: FnMut(A) -&gt; B, B&gt;(self, mut f: F) -&gt; Result&lt;B, String&gt; {\n        match self {\n            MyOption::Some(a) =&gt; Ok(f(a)),\n            MyOption::None =&gt; Err(&quot;Well this is weird, isn&#x27;t it?&quot;.to_owned()),\n        }\n    }\n}\n</code></pre>\n<p>You may be thinking, &quot;So what, no one would write something like that. And it's their own fault if they do.&quot; That's not the point here. The point is that the compiler can't know that there's a connection between <code>Self</code> and <code>Wrapped&lt;B&gt;</code>. And since it can't know that, there are some things we can't get to type check. I'll show you one of those at the end.</p>\n<h2 id=\"applicative\">Applicative</h2>\n<p>When I give Haskell training and get to the <code>Functor</code>/<code>Applicative</code>/<code>Monad</code> section, most people are nervous about <code>Monad</code>s. 
In my experience, the really confusing part is <code>Applicative</code>. Once you understand that, <code>Monad</code> is, relatively speaking, easy.</p>\n<p>The <code>Applicative</code> typeclass in Haskell has two methods. <code>pure</code> is equivalent to the <code>wrap</code> that I put into <code>Pointed</code>, so we can ignore it. The other method is <code>&lt;*&gt;</code>, known as &quot;apply,&quot; &quot;splat,&quot; or &quot;the tie fighter.&quot; I originally implemented <code>Applicative</code> with a method called <code>apply</code> that matches that operator, but found that it was better to go a different route.</p>\n<p>Instead, there's an alternate way to define an <code>Applicative</code> typeclass, based on a different function called <code>liftA2</code> (or, in Rust, <code>lift_a2</code>). Here's the idea. Suppose I have two functions:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn birth_year() -&gt; Option&lt;i32&gt; { ... }\nfn current_year() -&gt; Option&lt;i32&gt; { ... }\n</code></pre>\n<p>I may not know the current year or the birth year, in which case I'll return <code>None</code>. But if I get a <code>Some</code> return for both of these function calls, then I can calculate the age. In normal Rust code, this may look like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn age() -&gt; Option&lt;i32&gt; {\n    let birth_year = birth_year()?;\n    let current_year = current_year()?;\n    Some(current_year - birth_year)\n}\n</code></pre>\n<p>But that's leveraging <code>?</code> and early return. A primary purpose of <code>Applicative</code> is to address the same problem. 
So let's rewrite this without any early return, and instead use some pattern matching:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn age() -&gt; Option&lt;i32&gt; {\n    match (birth_year(), current_year()) {\n        (Some(birth_year), Some(current_year)) =&gt; Some(current_year - birth_year),\n        _ =&gt; None,\n    }\n}\n</code></pre>\n<p>This certainly works, but it's verbose. It also doesn't generalize to other cases, like a <code>Result</code>. And what about a really sophisticated case, like &quot;I have a <code>Future</code> that will return the birth year, a <code>Future</code> that will return the current year, and I want to produce a <code>Future</code> that finds the difference.&quot; With async/await syntax, it's easy enough to do. But we can also do it with <code>Applicative</code>, using our <code>lift_a2</code> method.</p>\n<p>The point of <code>lift_a2</code> is: I've got two values wrapped up, perhaps both in an <code>Option</code>. I'd like to use a function to combine them together. 
Let's see what that looks like in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Applicative: Pointed {\n    fn lift_a2&lt;F, B, C&gt;(self, b: Self::Wrapped&lt;B&gt;, f: F) -&gt; Self::Wrapped&lt;C&gt;\n    where\n        F: FnMut(Self::Unwrapped, B) -&gt; C;\n}\n\nimpl&lt;A&gt; Applicative for Option&lt;A&gt; {\n    fn lift_a2&lt;F, B, C&gt;(self, b: Self::Wrapped&lt;B&gt;, mut f: F) -&gt; Self::Wrapped&lt;C&gt;\n    where\n        F: FnMut(Self::Unwrapped, B) -&gt; C\n    {\n        let a = self?;\n        let b = b?;\n        Some(f(a, b))\n    }\n}\n</code></pre>\n<p>With this definition in place, we can now rewrite <code>age</code> as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn age() -&gt; Option&lt;i32&gt; {\n    current_year().lift_a2(birth_year(), |cy, by| cy - by)\n}\n</code></pre>\n<p>Whether this is an improvement or not probably depends heavily on how much Haskell you've written in your life. Again, I'm not advocating changing Rust here, but it's certainly interesting.</p>\n<p>We could also do the same kind of thing with <code>Result</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn birth_year() -&gt; Result&lt;i32, String&gt; {\n    Err(&quot;No birth year&quot;.to_string())\n}\n\nfn current_year() -&gt; Result&lt;i32, String&gt; {\n    Err(&quot;No current year&quot;.to_string())\n}\n\nfn age() -&gt; Result&lt;i32, String&gt; {\n    current_year().lift_a2(birth_year(), |cy, by| cy - by)\n}\n</code></pre>\n<p>Which may beg the question: which of the two <code>Err</code> values do we take? 
Well, that depends on our implementation of <code>Applicative</code>, but typically we would prefer choosing the first:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;A, E&gt; Applicative for Result&lt;A, E&gt; {\n    fn lift_a2&lt;F, B, C&gt;(self, b: Self::Wrapped&lt;B&gt;, mut f: F) -&gt; Self::Wrapped&lt;C&gt;\n    where\n        F: FnMut(Self::Unwrapped, B) -&gt; C\n    {\n        match (self, b) {\n            (Ok(a), Ok(b)) =&gt; Ok(f(a, b)),\n            (Err(e), _) =&gt; Err(e),\n            (_, Err(e)) =&gt; Err(e),\n        }\n    }\n}\n</code></pre>\n<p>But what if we wanted both? Here's a case where <code>Applicative</code> gives us power that <code>?</code> doesn't.</p>\n<h2 id=\"validation\">Validation</h2>\n<p>The <code>Validation</code> type from Haskell represents the idea &quot;I'm going to try lots of things, some of them may fail, and I want to collect together all of the error results.&quot; A typical example of this would be web form parsing. If a user enters an invalid email address, an invalid phone number, <em>and</em> forgets to click the &quot;I agree&quot; box, you'd want to generate all three error messages. You don't want to generate just one.</p>\n<p>To start off our <code>Validation</code> implementation, we need to introduce one more Haskell-y typeclass, this time for representing the concept of &quot;combining together multiple values.&quot; We <em>could</em> just hard-code <code>Vec</code> in here, but where's the fun in that? Instead, let's introduce the strangely-named <code>Semigroup</code> trait. 
This doesn't even require any special GAT code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Semigroup {\n    fn append(self, rhs: Self) -&gt; Self;\n}\n\nimpl Semigroup for String {\n    fn append(mut self, rhs: Self) -&gt; Self {\n        self += &amp;rhs;\n        self\n    }\n}\n\nimpl&lt;T&gt; Semigroup for Vec&lt;T&gt; {\n    fn append(mut self, mut rhs: Self) -&gt; Self {\n        Vec::append(&amp;mut self, &amp;mut rhs);\n        self\n    }\n}\n\nimpl Semigroup for () {\n    fn append(self, (): ()) -&gt; () {}\n}\n</code></pre>\n<p>With that in place, we can now define a new <code>enum</code> called <code>Validation</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(PartialEq, Debug)]\nenum Validation&lt;A, E&gt; {\n    Ok(A),\n    Err(E),\n}\n</code></pre>\n<p>The <code>Functor</code> and <code>Pointed</code> implementations are boring, let's skip straight to the meat with the <code>Applicative</code> implementation:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;A, E: Semigroup&gt; Applicative for Validation&lt;A, E&gt; {\n    fn lift_a2&lt;F, B, C&gt;(self, b: Self::Wrapped&lt;B&gt;, mut f: F) -&gt; Self::Wrapped&lt;C&gt;\n    where\n        F: FnMut(Self::Unwrapped, B) -&gt; C\n    {\n        match (self, b) {\n            (Validation::Ok(a), Validation::Ok(b)) =&gt; Validation::Ok(f(a, b)),\n            (Validation::Err(e), Validation::Ok(_)) =&gt; Validation::Err(e),\n            (Validation::Ok(_), Validation::Err(e)) =&gt; Validation::Err(e),\n            (Validation::Err(e1), Validation::Err(e2)) =&gt; Validation::Err(e1.append(e2)),\n        }\n    }\n}\n</code></pre>\n<p>Here, we're saying that the error type parameter must implement <code>Semigroup</code>. 
If both values are <code>Ok</code>, we apply the <code>f</code> function to them and wrap up the result in <code>Ok</code>. If only one of the values is <code>Err</code>, we return that error. But if <em>both</em> of them are errors, we leverage the <code>append</code> method of <code>Semigroup</code> to combine them together. This is something you can't get with <code>?</code>-style error handling.</p>\n<h2 id=\"monad\">Monad</h2>\n<p>At last, the dreaded monad rears its head! But in reality, at least for Rustaceans, monad isn't much of a surprise. You're already used to it: it's the <code>and_then</code> method. Almost any chain of statements that end with <code>?</code> in Rust can be reimagined as monadic binds. In my opinion, the main reason monad has the allure of the unknowable is a series of particularly bad tutorials that cemented this idea in people's minds.</p>\n<p>Anyway, since we're just trying to match the existing method signature of <code>and_then</code> on <code>Option</code>, I'm not going to spend much time motivating &quot;why monad.&quot; Instead, let's just look at the definition of the trait:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Monad: Applicative {\n    fn bind&lt;B, F&gt;(self, f: F) -&gt; Self::Wrapped&lt;B&gt;\n    where\n        F: FnMut(Self::Unwrapped) -&gt; Self::Wrapped&lt;B&gt;;\n}\n\nimpl&lt;A&gt; Monad for Option&lt;A&gt; {\n    fn bind&lt;B, F&gt;(self, f: F) -&gt; Option&lt;B&gt;\n    where\n        F: FnMut(A) -&gt; Option&lt;B&gt;,\n    {\n        self.and_then(f)\n    }\n}\n</code></pre>\n<p>And just like that, we've got monadic Rust. Time to ride off into the sunset.</p>\n<p>But wait, there's more!</p>\n<h2 id=\"monad-transformers\">Monad transformers</h2>\n<img src=\"/images/blog/transformers-rust.jpg\">\n<p>I'm overall not a huge fan of monad transformers. I think they are drastically overused in Haskell, and lead to huge amounts of complication. 
I instead advocate the <a href=\"https://www.fpcomplete.com/blog/2017/06/readert-design-pattern/\">ReaderT design pattern</a>. But again, this post is definitely not about best practices.</p>\n<p>Typically, each monad instance provides some kind of additional functionality. <code>Option</code> means &quot;it might not produce a value.&quot; <code>Result</code> means &quot;it might fail with an error.&quot; If we provided it, <code>Future</code> means &quot;it won't produce a value immediately, but it will eventually.&quot; And as a final example, the <code>Reader</code> monad means &quot;I have read-only access to some environmental data.&quot;</p>\n<p>But what if we want to have two pieces of functionality? There's no obvious way to combine a <code>Reader</code> and a <code>Result</code>. In Rust, we <em>do</em> combine together <code>Result</code> and <code>Future</code> via <code>async</code> functions and <code>?</code>, but that had to have carefully designed language support. Instead, the Haskell approach to this problem would be: just provide <code>do</code> notation (syntactic sugar for monads), and then layer up your monad transformers to add together all of the functionality.</p>\n<p>I've considered writing a blog post on this philosophical difference for a while. (If people are interested in such a post, please let me know.) But for now, let's simply explore what it looks like to provide a monad transformer in Rust. We'll implement it for the most boring of all monad transformers, <code>IdentityT</code>. This is the transformer that doesn't do anything at all. (And if you're wondering &quot;why have it,&quot; consider why Rust has 1-tuples. 
Sometimes, you need something that fits a certain shape to make some generic code work nicely.)</p>\n<p>Since <code>IdentityT</code> doesn't do anything, it's comforting to see that its type reflects that perfectly:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct IdentityT&lt;M&gt;(M);\n</code></pre>\n<p>I'm calling the type parameter <code>M</code>, because it's going to itself be an implementation of <code>Monad</code>. That's the idea here: every monad transformer sits on top of a &quot;base monad.&quot;</p>\n<p>Next, let's look at a <code>Functor</code> implementation. The idea is to unwrap the <code>IdentityT</code> layer, leverage the underlying <code>map</code> method, and then rewrap <code>IdentityT</code>.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;M: Functor&gt; Functor for IdentityT&lt;M&gt; {\n    type Unwrapped = M::Unwrapped;\n    type Wrapped&lt;A&gt; = IdentityT&lt;M::Wrapped&lt;A&gt;&gt;;\n\n    fn map&lt;F, B&gt;(self, f: F) -&gt; Self::Wrapped&lt;B&gt;\n    where\n        F: FnMut(M::Unwrapped) -&gt; B\n    {\n        IdentityT(self.0.map(f))\n    }\n}\n</code></pre>\n<p>For our associated types, we leverage the associated types of <code>M</code>. Inside <code>map</code>, we use <code>self.0</code> to get the underlying <code>M</code>, and wrap the result of the <code>map</code> method call with <code>IdentityT</code>. 
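</p>\n<p>To convince ourselves the plumbing works, here's a quick test of the idea (a sketch; like everything else in this post it needs the unstable GAT feature, and it unwraps via <code>.0</code> so we don't need any extra derives on <code>IdentityT</code>):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[test]\nfn test_identity_t_map() {\n    &#x2F;&#x2F; Map through the IdentityT layer, then compare the inner Option.\n    let mapped = IdentityT(Some(5)).map(|x| x + 1);\n    assert_eq!(mapped.0, Some(6));\n}\n</code></pre>\n<p>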
Cool!</p>\n<p>The <code>Pointed</code>, <code>Applicative</code>, and <code>Monad</code> implementations follow similar patterns, so I'll drop all of those in too:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;M: Pointed&gt; Pointed for IdentityT&lt;M&gt; {\n    fn wrap&lt;T&gt;(t: T) -&gt; IdentityT&lt;M::Wrapped&lt;T&gt;&gt; {\n        IdentityT(M::wrap(t))\n    }\n}\n\nimpl&lt;M: Applicative&gt; Applicative for IdentityT&lt;M&gt; {\n    fn lift_a2&lt;F, B, C&gt;(self, b: Self::Wrapped&lt;B&gt;, f: F) -&gt; Self::Wrapped&lt;C&gt;\n    where\n        F: FnMut(Self::Unwrapped, B) -&gt; C\n    {\n        IdentityT(self.0.lift_a2(b.0, f))\n    }\n}\n\nimpl&lt;M: Monad&gt; Monad for IdentityT&lt;M&gt; {\n    fn bind&lt;B, F&gt;(self, mut f: F) -&gt; Self::Wrapped&lt;B&gt;\n    where\n        F: FnMut(Self::Unwrapped) -&gt; Self::Wrapped&lt;B&gt;\n    {\n        IdentityT(self.0.bind(|x| f(x).0))\n    }\n}\n</code></pre>\n<p>And finally, we'll define one new trait: <code>MonadTrans</code>. <code>MonadTrans</code> captures the idea of &quot;layering up&quot; a base monad into the transformed monad. In Haskell, you'll often see code like <code>lift (readFile &quot;foo.txt&quot;)</code>, where <code>readFile</code> works in the base monad, and we're sitting in a layer on top of that.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait MonadTrans {\n    type Base: Monad;\n\n    fn lift(base: Self::Base) -&gt; Self;\n}\n\nimpl&lt;M: Monad&gt; MonadTrans for IdentityT&lt;M&gt; {\n    type Base = M;\n\n    fn lift(base: M) -&gt; Self {\n        IdentityT(base)\n    }\n}\n</code></pre>\n<p>So is this useful? Not terribly on its own. We could arguably create an ecosystem of <code>ReaderT</code>, <code>WriterT</code>, <code>ContT</code>, <code>ConduitT</code>, and more, and start building up sophisticated systems. 
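</p>\n<p>For what it's worth, here's <code>lift</code> in action (a quick sketch reusing the <code>Option</code> trait implementations from above):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[test]\nfn test_lift() {\n    &#x2F;&#x2F; Lift a base-monad value (here, Option) into the IdentityT transformer.\n    let lifted: IdentityT&lt;Option&lt;i32&gt;&gt; = IdentityT::lift(Some(5));\n    assert_eq!(lifted.0, Some(5));\n}\n</code></pre>\n<p>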
But I'm strongly of the opinion that we don't need that stuff in Rust, at least not yet. I'm happy to go this far in my implementation to explore the wonders of GATs, but let's not go crazy and try to make something useful just because we can.</p>\n<h2 id=\"join\">join</h2>\n<p>Alright, now the fun begins. We've seen GATs in practice. And it seems like Rust is keeping pace with Haskell pretty well. That's about to end.</p>\n<p>There's another method that goes along with <code>Monad</code>s in Haskell, called <code>join</code>. It's equivalent in power to the <code>bind</code> method we've already seen, but works differently. <code>join</code> &quot;flattens&quot; two layers of monads in Haskell. And a side note: there's already <a href=\"https://doc.rust-lang.org/stable/std/option/enum.Option.html#impl-10\">a method called <code>flatten</code></a> in Rust that does just this for <code>Option</code> and <code>Result</code>.</p>\n<p>The catch with <code>join</code>: the monads have to be the same. In other words, <code>join (Just (Just 5)) == Just 5</code>, but <code>join (Just (Right 6))</code> is a type error, since <code>Just</code> is a <code>Maybe</code> data constructor, and <code>Right</code> is an <code>Either</code> data constructor.</p>\n<p>Now we're in a bit of a quandary. In Haskell, where we have higher kinded types, it's easy to say &quot;<code>Maybe</code> must be the same as <code>Maybe</code>&quot;:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">join :: Monad m =&gt; m (m a) -&gt; m a\njoin m = bind m (\\x -&gt; x)\n</code></pre>\n<p>But I couldn't figure out a way to express the same idea with GATs in Rust and get the syntax accepted by the compiler. 
This is the closest I came:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn join&lt;MOuter, MInner, A&gt;(outer: MOuter) -&gt; MOuter::Wrapped&lt;A&gt;\nwhere\n    MOuter: Monad&lt;Unwrapped = MInner&gt;,\n    MInner: Monad&lt;Unwrapped = A, Wrapped = MOuter::Wrapped&lt;A&gt;&gt;,\n{\n    outer.bind(|inner| inner)\n}\n\n#[test]\nfn test_join() {\n    assert_eq!(join(Some(Some(true))), Some(true));\n}\n</code></pre>\n<p>Unfortunately, this broke the compiler:</p>\n<pre><code>error: internal compiler error: compiler\\rustc_middle\\src\\ty\\subst.rs:529:17: type parameter `B&#x2F;#1` (B&#x2F;1) out of range when substituting, substs=[MInner]\n\nthread &#x27;rustc&#x27; panicked at &#x27;Box&lt;Any&gt;&#x27;, &#x2F;rustc&#x2F;b7ebc6b0c1ba3c27ebb17c0b496ece778ef11e18\\compiler\\rustc_errors\\src\\lib.rs:904:9\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\n\nnote: the compiler unexpectedly panicked. this is a bug.\n\nnote: we would appreciate a bug report: https:&#x2F;&#x2F;github.com&#x2F;rust-lang&#x2F;rust&#x2F;issues&#x2F;new?labels=C-bug%2C+I-ICE%2C+T-compiler&amp;template=ice.md\n\nnote: rustc 1.50.0-nightly (b7ebc6b0c 2020-11-30) running on x86_64-pc-windows-msvc\n\nnote: compiler flags: -C embed-bitcode=no -C debuginfo=2 -C incremental --crate-type bin\n\nnote: some of the compiler flags provided by cargo are hidden\n</code></pre>\n<p>I think it's fair to say I was pushing the compiler to the limit here. In any event, I opened up <a href=\"https://github.com/rust-lang/rust/issues/79636\">a GitHub issue</a> for this.</p>\n<h2 id=\"mapm-traverse\">mapM/traverse</h2>\n<p>Already, we were stymied by <code>join</code>. How about another popular functional idiom: <code>traverse</code>. As I <a href=\"https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/\">previously mentioned</a>, <code>traverse</code> is incredibly popular in Scala, and pretty common in Haskell. 
It functions very much like a <code>map</code>, except the result of each step through the <code>map</code> is wrapped in some <code>Applicative</code>, and the <code>Applicative</code> values are combined into an overall data structure.</p>\n<p>Sound confusing? Fair enough. As a simpler example: if you have a <code>Vec&lt;A&gt;</code> value, and a function from <code>A</code> to <code>Option&lt;B&gt;</code>, <code>traverse</code> can put these together into an <code>Option&lt;Vec&lt;B&gt;&gt;</code>. Or using the <code>Validation</code> type we had above, you could combine <code>Vec&lt;A&gt;</code> and <code>Fn(A) -&gt; Validation&lt;B, Vec&lt;MyErr&gt;&gt;</code> into a <code>Validation&lt;Vec&lt;B&gt;, Vec&lt;MyErr&gt;&gt;</code>, returning either all of the successfully generated <code>B</code> values, or all of the errors that occurred along the way.</p>\n<p>Anyway, I ended up with this as a starting type signature for our function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn traverse&lt;F, M, A, B, I&gt;(iter: I, f: F) -&gt; M::Wrapped&lt;Vec&lt;B&gt;&gt;\n</code></pre>\n<p>Then we have the following trait bounds:</p>\n<ul>\n<li><code>I: IntoIterator&lt;Item = A&gt;</code>: <code>I</code> is an iterator of <code>A</code> values. To simplify, you can think of it as <code>Vec&lt;A&gt;</code>.</li>\n<li><code>M: Applicative&lt;Unwrapped = B&gt;</code>: <code>M</code> is some implementation of <code>Applicative</code> which unwraps to a <code>B</code>. 
In our example: this would be <code>Validation&lt;B, Vec&lt;MyErr&gt;&gt;</code>.</li>\n<li><code>F: FnMut(A) -&gt; M</code>: <code>F</code> is a function that takes the <code>A</code> values from the iterator and produces <code>M</code> values.</li>\n<li><code>M::Wrapped&lt;Vec&lt;B&gt;&gt;: Applicative&lt;Unwrapped = Vec&lt;B&gt;&gt;</code>: wrapping up the result <code>Vec&lt;B&gt;</code> in <code>M</code>'s wrapping produces a value which is also an <code>Applicative</code>.</li>\n</ul>\n<p>This last bullet shows one of the pain points I mentioned above. Since the <code>Wrapped</code> associated type by itself tells us very little, we only get the <code>Functor</code> bound &quot;for free&quot;. We need to explicitly say that it's also <code>Applicative</code>, and that unwrapping it again will get you back a <code>Vec&lt;B&gt;</code>.</p>\n<p>In any event, I wasn't clever enough to figure out a way to make all of this compile. This was the final version of the code I came up with:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn traverse&lt;F, M, A, B, I&gt;(iter: I, f: F) -&gt; M::Wrapped&lt;Vec&lt;B&gt;&gt;\nwhere\n    F: FnMut(A) -&gt; M,\n    M: Applicative&lt;Unwrapped = B&gt;,\n    I: IntoIterator&lt;Item = A&gt;,\n    M::Wrapped&lt;Vec&lt;B&gt;&gt;: Applicative&lt;Unwrapped = Vec&lt;B&gt;&gt;,\n{\n    let mut iter = iter.into_iter().map(f);\n\n    let mut result: M::Wrapped&lt;Vec&lt;B&gt;&gt; = match iter.next() {\n        Some(b) =&gt; b.map(|x| vec![x]),\n        None =&gt; return M::wrap(Vec::new()),\n    };\n\n    for m in iter {\n        result = result.lift_a2(m, |vec, b| {\n            vec.push(b);\n            vec\n        });\n    }\n\n    result\n}\n</code></pre>\n<p>But this fails with the error messages:</p>\n<pre><code>error[E0308]: mismatched types\n   --&gt; src\\main.rs:448:33\n    |\n433 | fn traverse&lt;F, M, A, B, I&gt;(iter: I, f: F) -&gt; 
M::Wrapped&lt;Vec&lt;B&gt;&gt;\n    |                - this type parameter\n...\n448 |         result = result.lift_a2(m, |vec, b| {\n    |                                 ^ expected associated type, found type parameter `M`\n    |\n    = note: expected associated type `&lt;&lt;M as Functor&gt;::Wrapped&lt;Vec&lt;B&gt;&gt; as Functor&gt;::Wrapped&lt;_&gt;`\n                found type parameter `M`\n    = note: you might be missing a type parameter or trait bound\n\nerror[E0308]: mismatched types\n   --&gt; src\\main.rs:448:18\n    |\n433 |   fn traverse&lt;F, M, A, B, I&gt;(iter: I, f: F) -&gt; M::Wrapped&lt;Vec&lt;B&gt;&gt;\n    |                  - this type parameter\n...\n448 |           result = result.lift_a2(m, |vec, b| {\n    |  __________________^\n449 | |             vec.push(b);\n450 | |             vec\n451 | |         });\n    | |__________^ expected type parameter `M`, found associated type\n    |\n    = note: expected associated type `&lt;M as Functor&gt;::Wrapped&lt;Vec&lt;B&gt;&gt;`\n               found associated type `&lt;&lt;M as Functor&gt;::Wrapped&lt;Vec&lt;B&gt;&gt; as Functor&gt;::Wrapped&lt;Vec&lt;B&gt;&gt;`\nhelp: consider further restricting this bound\n    |\n436 |     M: Applicative&lt;Unwrapped = B&gt; + Functor&lt;Wrapped = M&gt;,\n    |                                   ^^^^^^^^^^^^^^^^^^^^^^\n</code></pre>\n<p>Maybe this is a limitation in GATs. Maybe I'm just not clever enough to figure it out. But I thought this was a good point to call it quits. If anyone knows a trick to make this work, let me know!</p>\n<h2 id=\"should-we-have-hkts-in-rust\">Should we have HKTs in Rust?</h2>\n<p>This was a fun adventure. GATs look like a nice extension to the trait system in Rust. I look forward to the feature stabilizing and landing. And it's certainly fun to play with all of this.</p>\n<p>But Rust is not Haskell. The ergonomics of GATs, in my opinion, will never compete with higher kinded types on Haskell's home turf. 
And I'm not at all convinced that it should. Rust is a wonderful language as is. I'm happy to write Rust style in a Rust codebase, and save my Haskell coding for my Haskell codebases.</p>\n<p>I hope others enjoyed this adventure as much as I have. A really ugly version of my code is available <a href=\"https://gist.github.com/snoyberg/91ae892199bc8a6687d3798343a9ee54\">as a Gist</a>. You'll need to use a recent nightly Rust build, but otherwise it has no dependencies.</p>\n<p>If you liked this post, you may be interested in some other Haskell/Rust hybrid posts:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/07/iterators-streams-rust-haskell/\">Iterators and Streams in Rust and Haskell</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/streaming-utf8-haskell-rust/\">Streaming UTF-8 in Haskell and Rust</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/10/is-rust-functional/\">Is Rust functional?</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/async-exceptions-haskell-rust/\">Async Exceptions in Haskell, and Rust</a></li>\n</ul>\n<p>FP Complete offers training, consulting, and review services in both Haskell and Rust. Want to hear more? <a href=\"https://tech.fpcomplete.com/contact-us/\">Contact us to speak with one of our engineers about how we can help.</a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/",
        "slug": "monads-gats-nightly-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Monads and GATs in nightly Rust",
        "description": "I saw a recent Reddit post on the advances in Generic Associated Types (GATs) in Rust, which allows for the definition of a Monad trait. In this post, I'm going to take it one step further: a monad transformer trait in Rust!",
        "updated": null,
        "date": "2020-12-07",
        "year": 2020,
        "month": 12,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "image": "images/blog/transformers-rust.jpg"
        },
        "path": "/blog/monads-gats-nightly-rust/",
        "components": [
          "blog",
          "monads-gats-nightly-rust"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "polymorphic-functor",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#polymorphic-functor",
            "title": "Polymorphic Functor",
            "children": [
              {
                "level": 3,
                "id": "side-note-hkts",
                "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#side-note-hkts",
                "title": "Side note: HKTs",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "pointed",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#pointed",
            "title": "Pointed",
            "children": []
          },
          {
            "level": 2,
            "id": "applicative",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#applicative",
            "title": "Applicative",
            "children": []
          },
          {
            "level": 2,
            "id": "validation",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#validation",
            "title": "Validation",
            "children": []
          },
          {
            "level": 2,
            "id": "monad",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#monad",
            "title": "Monad",
            "children": []
          },
          {
            "level": 2,
            "id": "monad-transformers",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#monad-transformers",
            "title": "Monad transformers",
            "children": []
          },
          {
            "level": 2,
            "id": "join",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#join",
            "title": "join",
            "children": []
          },
          {
            "level": 2,
            "id": "mapm-traverse",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#mapm-traverse",
            "title": "mapM/traverse",
            "children": []
          },
          {
            "level": 2,
            "id": "should-we-have-hkts-in-rust",
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#should-we-have-hkts-in-rust",
            "title": "Should we have HKTs in Rust?",
            "children": []
          }
        ],
        "word_count": 4766,
        "reading_time": 24,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
            "title": "Philosophies of Rust and Haskell"
          }
        ]
      },
      {
        "relative_path": "blog/error-handling-is-hard.md",
        "colocated_path": null,
        "content": "<p>This blog post will use mostly Rust and Haskell code snippets to demonstrate its points. But I don't believe the core point is language-specific at all.</p>\n<p>Here's a bit of Rust code to read the contents of <code>input.txt</code> and print it to <code>stdout</code>. What's wrong with it?</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let s = std::fs::read_to_string(&quot;input.txt&quot;).unwrap();\n    println!(&quot;{}&quot;, s);\n}\n</code></pre>\n<p>If you're Rust-fluent, that <code>.unwrap()</code> may stick out to you like a sore thumb. You know it means &quot;convert any error that occurred into a panic.&quot; And panics are a Bad Thing. It's not correct error handling. Instead, something like this is &quot;better&quot;:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    match std::fs::read_to_string(&quot;input.txt&quot;) {\n        Ok(s) =&gt; println!(&quot;{}&quot;, s),\n        Err(e) =&gt; eprintln!(&quot;Unable to read from input.txt: {:?}&quot;, e),\n    }\n}\n</code></pre>\n<p>The presence of <code>enum</code>s in Rust makes it really easy to ensure you properly handle all failure cases fully. The code above will not panic. If an I/O error occurs, such as file not found, permissions denied, or a hardware failure, it will print an error message to <code>stderr</code>. But this <em>still</em> isn't good error handling, for two reasons:</p>\n<ol>\n<li>The exit code of the program doesn't indicate an error occurred. We'd need to use something like <a href=\"https://doc.rust-lang.org/stable/std/process/fn.abort.html\"><code>abort</code></a> to fix that, which isn't too hard. But it's something else to remember.</li>\n<li>This is <em>very</em> verbose! 
We've got a trivial little program here, and we're obscuring the actual behavior of the program with all of this line noise around matching different <code>enum</code> variants.</li>\n</ol>\n<p>Fortunately, the Rust language is benevolent, and it makes it possible to do things <em>even better</em> than before. The <code>?</code> operator will try to do something, and automatically short-circuit if an error occurs. We now get to avoid those pesky panics without cluttering our code. And we get the proper exit code to boot!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() -&gt; Result&lt;(), std::io::Error&gt; {\n    let s = std::fs::read_to_string(&quot;input.txt&quot;)?;\n    println!(&quot;{}&quot;, s);\n    Ok(())\n}\n</code></pre>\n<p>All is good in the world, we can stop this post here and go home. The greatest marvel of error handling has arrived!</p>\n<h2 id=\"look-again\">Look again</h2>\n<p>So it turns out I forgot to create my <code>input.txt</code> file. Let's see the beautiful error message generated by my program:</p>\n<pre><code>Error: Os { code: 2, kind: NotFound, message: &quot;The system cannot find the file specified.&quot; }\n</code></pre>\n<p>Huh... that's thoroughly unhelpful. In my 5-line program, it's trivial enough to figure out which file doesn't exist. But imagine a 5,000 line program. Or if the code in question is in a dependency. Or if you're a member of the ops team, have never written a line of Rust in your life, don't have access to the codebase, the production server is down at 2am, and you see this error message in your logs.</p>\n<h2 id=\"runtime-exceptions-to-the-rescue\">Runtime exceptions to the rescue?</h2>\n<p>Well, <em>obviously</em> this is just because Rust uses error returns instead of Good Ol' Runtime Exceptions. Obviously something like Haskell solves this problem better, right? Well, sort of. 
With this program, and no <code>input.txt</code>:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main = do\n  s &lt;- readFile &quot;input.txt&quot;\n  putStrLn s\n</code></pre>\n<p>I do in fact get a much nicer error message:</p>\n<pre><code>input.txt: openFile: does not exist (No such file or directory)\n</code></pre>\n<p>I didn't even need to include any error handling logic in the code; it's all implicit! But in reality, the clarity of this error message has little to do with exception handling semantics. It has to do with the construction of this specific error message. It contains enough information to help debug this.</p>\n<p>But there are plenty of counterexamples in Haskell. Calling <code>head</code> on an empty list provides a line number these days, but you used to just get an error that &quot;oops, tried to <code>head</code> an empty list, somewhere, in one of your libraries. Good luck!&quot; Some low-level network functionality still gives vague error messages.</p>\n<p>And even the glorious <code>does not exist</code> message above is only marginally useful. And that's because of...</p>\n<h2 id=\"context\">Context!</h2>\n<p>In a trivial 2-line program, the reality is that &quot;file not found&quot; without any additional information is perfectly reasonable. That's because I know <em>exactly</em> the context in which the error occurred. It either occurred on line 1, or line 2. By contrast, in a 500k SLOC codebase, knowing that <code>input.txt</code> doesn't exist is probably not nearly enough to debug things.</p>\n<ul>\n<li>What content is <code>input.txt</code> supposed to have?</li>\n<li>What part of the code was trying to read it?</li>\n<li>What was I going to do with the contents of the file?</li>\n</ul>\n<p>Similarly, knowing that I can't connect to IP address 255.813.20.1 may be sufficient in a small network test. 
But in a reasonably complicated program, I'd <em>much</em> rather get the context that I'm trying to make an HTTPS request to example.com proxied through a server with IP address 255.813.20.1, which was specified via the <code>HTTP_PROXY</code> environment variable. That last bit of information may short-circuit days of debugging to point out &quot;doh, I had a typo in my Kubernetes manifest file!&quot;</p>\n<p>Stack traces are often a huge help here. They tell you a lot of useful context. And both Rust and Haskell are particularly weak at providing this context in their error representations. But it's still not a panacea. The ugly reality is that...</p>\n<h2 id=\"there-s-an-inherent-trade-off\">There's an inherent trade-off</h2>\n<p>Like so many other things, error handling ultimately is a trade-off. When we're writing our initial code, <strong>we don't want to think about errors</strong>. We code to the happy path. How productive would you be if you had to derail every line of code with thought processes around the myriad ways your code could fail?</p>\n<p>But then we're debugging a production issue, and <strong>we definitely want to think about errors</strong>. We curse our lazy selves for not handling an error case that <em>obviously</em> could have arisen. &quot;Why did I decide to abort the process when the TCP connection failed? I should have retried! I should have logged the address I tried to connect to!&quot;</p>\n<p>Then we flood our code with log messages, and are frustrated when we can't see the important bits.</p>\n<p>Finding the right balance is an art. And typically it's an art that we don't spend enough time thinking about. There are some well-established tools for this, like runtime-configurable log levels. That's a huge step in the right direction.</p>\n<p>Rust is such a great example of this. 
Explicit <code>match</code>ing on <code>Result</code> values really forces you to think through all of the different error cases and how to report them correctly. Complex custom <code>enum</code> error types allow you to define all of the different values you'd want reported. But all of this adds huge line noise compared to <code>?</code>. So <code>?</code> wins the day.</p>\n<h2 id=\"the-method-is-secondary\">The method is secondary</h2>\n<p>The Rust community accepts that panics are bad. The Haskell community constantly argues about whether runtime exceptions are a good or bad thing. Java is either loved or hated for checked exceptions. Golang is either lauded or mocked for <code>if err != nil</code>.</p>\n<p>I'm not at all arguing that those discussions are irrelevant. There are significant trade-offs to these various approaches. They affect performance, trackability of errors, and more.</p>\n<p>What I'm arguing here is that we spend a disproportionate time on how we report and recover from errors, and far less on discussing what a good error actually contains.</p>\n<h2 id=\"my-ideal\">My ideal</h2>\n<p>These are evolving thoughts for me. So take them with a grain of salt. And I'm very interested to hear differing opinions.</p>\n<p>I've long held that in Haskell, we should use runtime exceptions. This has been interpreted by many as my <em>advocacy</em> of runtime exceptions. Instead, I would advocate: use the language's native mechanism. I don't pine for exceptions when writing Rust. Quite the opposite in fact. I overall prefer explicit error handling. But it's not worth fighting the battle against runtime exceptions when they are already ubiquitous.</p>\n<p>I think Rust and Haskell are both close to the sweet spot in error handling. There's relatively little verbosity around adding this handling. 
If you leverage libraries like <a href=\"https://crates.io/crates/anyhow\"><code>anyhow</code></a> in Rust, there's even less.</p>\n<p>My biggest concern with a library like <code>anyhow</code> is how easy it becomes to do the wrong thing. Take our broken example from above. It's trivial to &quot;upgrade&quot; it to use <code>anyhow</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() -&gt; anyhow::Result&lt;()&gt; {\n    let s = std::fs::read_to_string(&quot;input.txt&quot;)?;\n    println!(&quot;{}&quot;, s);\n    Ok(())\n}\n</code></pre>\n<p>However, this still produces the same useless error message we started with. Instead, we need to be a bit more explicit with a <code>context</code> method call to get a nicer message:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use anyhow::Context;\n\nfn main() -&gt; anyhow::Result&lt;()&gt; {\n    let s = std::fs::read_to_string(&quot;input.txt&quot;)\n        .context(&quot;Failed to read input.txt&quot;)?;\n    println!(&quot;{}&quot;, s);\n    Ok(())\n}\n</code></pre>\n<p>Now we get the much more helpful error message:</p>\n<pre><code>Error: Failed to read input.txt\n\nCaused by:\n    The system cannot find the file specified. (os error 2)\n</code></pre>\n<p>This is a good balance of concision and helpfulness. The downside is the lack of <em>enforcement</em>. Nothing forced me to add the <code>.context</code> call. I worry that in a large codebase, or under time pressure, people like me will end up forgetting to add the helpful context.</p>\n<p>Could we design a modified <code>anyhow</code> that <em>forces</em> a <code>context</code> call? Certainly. But:</p>\n<ol>\n<li>It will lose out on the current simple ergonomics.</li>\n<li>No tool can force the &quot;right&quot; level of context; that requires human insight and thought. 
And those are in short supply, and rarely spent on error messages.</li>\n</ol>\n<h2 id=\"advice\">Advice</h2>\n<p>I don't have an answer here. I would advise people to start by recognizing that good error handling is <em>difficult</em>. We like to think of it as a trivial but tedious task. It isn't. Doing this correctly requires real thought and design. We're too quick to sweep it under the rug as the unimportant parts of our code.</p>\n<p>I'll continue with my general advice of using your language's preferred mechanisms for error handling. In Rust, that means using <code>Result</code> and avoiding panics. In Haskell, it means some mixture of explicit <code>Either</code> return values and runtime exceptions (the exact mixture very much up for debate). In Java, it's mostly checked exceptions, though there are plenty of added unchecked exceptions to gum up the works too.</p>\n<p>But consider spending a bit more time on thinking through not just <em>how</em> to report/raise/throw an error/exception, but what exactly you're reporting/raising/throwing. Think of the poor ops guy drinking his 7th cup of coffee at 4am trying to figure out what part of the codebase needs <code>input.txt</code>, or why in the world the program is trying to connect to an invalid IP address.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/",
        "slug": "error-handling-is-hard",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Error handling is hard",
        "description": "Arguments rage over topics like explicit errors vs runtime exceptions, checked vs unchecked, and more. In this post, I want to reframe the discussion a bit. Good error handling is simply hard, and consists of conflicting goals.",
        "updated": null,
        "date": "2020-11-30",
        "year": 2020,
        "month": 11,
        "day": 30,
        "taxonomies": {
          "tags": [
            "rust",
            "haskell",
            "insights"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/error-handling-is-hard/",
        "components": [
          "blog",
          "error-handling-is-hard"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "look-again",
            "permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#look-again",
            "title": "Look again",
            "children": []
          },
          {
            "level": 2,
            "id": "runtime-exceptions-to-the-rescue",
            "permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#runtime-exceptions-to-the-rescue",
            "title": "Runtime exceptions to the rescue?",
            "children": []
          },
          {
            "level": 2,
            "id": "context",
            "permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#context",
            "title": "Context!",
            "children": []
          },
          {
            "level": 2,
            "id": "there-s-an-inherent-trade-off",
            "permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#there-s-an-inherent-trade-off",
            "title": "There's an inherent trade-off",
            "children": []
          },
          {
            "level": 2,
            "id": "the-method-is-secondary",
            "permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#the-method-is-secondary",
            "title": "The method is secondary",
            "children": []
          },
          {
            "level": 2,
            "id": "my-ideal",
            "permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#my-ideal",
            "title": "My ideal",
            "children": []
          },
          {
            "level": 2,
            "id": "advice",
            "permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#advice",
            "title": "Advice",
            "children": []
          }
        ],
        "word_count": 1757,
        "reading_time": 9,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/pattern-matching/",
            "title": "Pattern matching"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
            "title": "Philosophies of Rust and Haskell"
          }
        ]
      },
      {
        "relative_path": "blog/ownership-puzzle-rust-async-hyper.md",
        "colocated_path": null,
        "content": "<p>Most of the web services I've written in Rust have used <code>actix-web</code>. Recently, I needed to write something that will provide some reverse proxy functionality. I'm more familiar with the hyper-powered HTTP client libraries (<code>reqwest</code> in particular). I decided this would be a good time to experiment again with hyper on the server side as well. The theory was that having matching <code>Request</code> and <code>Response</code> types between the client and server would work nicely. And it certainly did.</p>\n<p>In the process, I ended up with an interesting example of battling ownership through closures and async blocks. This is a topic I typically mention in my Rust training sessions as the hardest thing I had to learn when learning Rust. So I figure a blog post demonstrating one of these crazy cases would be worthwhile.</p>\n<p>Side note: If you're interested in learning more about Rust, we'll be offering a <a href=\"https://tech.fpcomplete.com/training/\">free Rust training course</a> in December. Sign up for more information.</p>\n<h2 id=\"cargo-toml\">Cargo.toml</h2>\n<p>If you want to play along, you should start off with a <code>cargo new</code>. I'm using the following <code>[dependencies]</code> in my <code>Cargo.toml</code></p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[dependencies]\nhyper = &quot;0.13&quot;\ntokio = { version = &quot;0.2&quot;, features = [&quot;full&quot;] }\nlog = &quot;0.4.11&quot;\nenv_logger = &quot;0.8.1&quot;\nhyper-tls = &quot;0.4.3&quot;\n</code></pre>\n<p>I'm also compiling with Rust version 1.47.0. If you'd like, you can add <code>1.47.0</code> to your <code>rust-toolchain</code>. 
And finally, my full <code>Cargo.lock</code> is <a href=\"https://gist.github.com/snoyberg/550a96c3888a2563f20afcec2c652801\">available as a Gist</a>.</p>\n<h2 id=\"basic-web-service\">Basic web service</h2>\n<p>To get started with a hyper-powered web service, we can use the example straight from the <a href=\"https://hyper.rs/\">hyper homepage</a>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::{convert::Infallible, net::SocketAddr};\nuse hyper::{Body, Request, Response, Server};\nuse hyper::service::{make_service_fn, service_fn};\n\nasync fn handle(_: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, Infallible&gt; {\n    Ok(Response::new(&quot;Hello, World!&quot;.into()))\n}\n\n#[tokio::main]\nasync fn main() {\n    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));\n\n    let make_svc = make_service_fn(|_conn| async {\n        Ok::&lt;_, Infallible&gt;(service_fn(handle))\n    });\n\n    let server = Server::bind(&amp;addr).serve(make_svc);\n\n    if let Err(e) = server.await {\n        eprintln!(&quot;server error: {}&quot;, e);\n    }\n}\n</code></pre>\n<p>It's worth explaining this a little bit, since at least in my opinion the distinction between <code>make_service_fn</code> and <code>service_fn</code> wasn't clear. 
There are two different things we're trying to create here:</p>\n<ul>\n<li>A <code>MakeService</code>, which takes a <code>&amp;AddrStream</code> and gives back a <code>Service</code></li>\n<li>A <code>Service</code>, which takes a <code>Request</code> and gives back a <code>Response</code></li>\n</ul>\n<p>This glosses over a number of details, such as:</p>\n<ul>\n<li>Error handling</li>\n<li>Everything is async (<code>Future</code>s are everywhere)</li>\n<li>Everything is expressed in terms of general purpose <code>trait</code>s</li>\n</ul>\n<p>To help us with that &quot;glossing&quot;, hyper provides two convenience functions for creating <code>MakeService</code> and <code>Service</code> values, <code>make_service_fn</code> and <code>service_fn</code>. Each of these will convert a closure into their respective types. Then the <code>MakeService</code> closure can return a <code>Service</code> value, and the <code>MakeService</code> value can be provided to <code>hyper::server::Builder::serve</code>. Let's get even more concrete from the code above:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">async fn handle(_: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, Infallible&gt; {...}\nlet make_svc = make_service_fn(|_conn| async {\n    Ok::&lt;_, Infallible&gt;(service_fn(handle))\n});\n</code></pre>\n<p>The <code>handle</code> function takes a <code>Request&lt;Body&gt;</code> and returns a <code>Future&lt;Output=Result&lt;Response&lt;Body&gt;, Infallible&gt;&gt;</code>. The <code>Infallible</code> is a nice way of saying &quot;no errors can possibly occur here.&quot; The type signatures at play require that we use a <code>Result</code>, but morally <code>Result&lt;T, Infallible&gt;</code> is equivalent to <code>T</code>.</p>\n<p><code>service_fn</code> converts this <code>handle</code> value into a <code>Service</code> value. 
This new value implements all of the appropriate traits to satisfy the requirements of <code>make_service_fn</code> and <code>serve</code>. We wrap up that new <code>Service</code> in its own <code>Result&lt;_, Infallible&gt;</code>, ignore the input <code>&amp;AddrStream</code> value, and pass all of this to <code>make_service_fn</code>. <code>make_svc</code> is now a value that can be passed to <code>serve</code>, and we have &quot;Hello, world!&quot;</p>\n<p>And if all of this seems a bit complicated for a &quot;Hello world,&quot; you may understand why there are lots of frameworks built on top of hyper to make it easier to work with. Anyway, onwards!</p>\n<h2 id=\"initial-reverse-proxy\">Initial reverse proxy</h2>\n<p>Next up, we want to modify our <code>handle</code> function to perform a reverse proxy instead of returning the &quot;Hello, World!&quot; text. For this example, we're going to hard-code <code>https://www.fpcomplete.com</code> as the destination site for this reverse proxy. To make this happen, we'll need to:</p>\n<ul>\n<li>Construct a <code>Request</code> value, based on the incoming <code>Request</code>'s request headers and path, but targeting the <code>www.fpcomplete.com</code> server</li>\n<li>Construct a <code>Client</code> value from hyper with TLS support</li>\n<li>Perform the request</li>\n<li>Return the <code>Response</code> as the response from <code>handle</code></li>\n<li>Introduce error handling</li>\n</ul>\n<p>I'm also going to move over to the <code>env-logger</code> and <code>log</code> crates for producing output. I did this when working on the code myself, and switching to <code>RUST_LOG=debug</code> was a great way to debug things. (When I was working on this, I forgot I needed to create a special <code>Client</code> with TLS support.)</p>\n<p>So from the top! 
We now have the following <code>use</code> statements:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use hyper::service::{make_service_fn, service_fn};\nuse hyper::{Body, Client, Request, Response, Server};\nuse hyper_tls::HttpsConnector;\nuse std::net::SocketAddr;\n</code></pre>\n<p>We next have three constants. The <code>SCHEME</code> and <code>HOST</code> are pretty self-explanatory: the hardcoded destination.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">const SCHEME: &amp;str = &quot;https&quot;;\nconst HOST: &amp;str = &quot;www.fpcomplete.com&quot;;\n</code></pre>\n<p>Next we have some HTTP request headers that should <em>not</em> be forwarded onto the destination server. This blacklist approach to HTTP headers in reverse proxies works well enough. It's probably a better idea in general to follow a whitelist approach. In any event, these six headers have the potential to change behavior at the transport layer, and therefore cannot be passed on from the client:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F;&#x2F; HTTP headers to strip, a whitelist is probably a better idea\nconst STRIPPED: [&amp;str; 6] = [\n    &quot;content-length&quot;,\n    &quot;transfer-encoding&quot;,\n    &quot;accept-encoding&quot;,\n    &quot;content-encoding&quot;,\n    &quot;host&quot;,\n    &quot;connection&quot;,\n];\n</code></pre>\n<p>And next we have a fairly boilerplate error type definition. We can generate a <code>hyper::Error</code> when performing the HTTP request to the destination server, and a <code>hyper::http::Error</code> when constructing the new <code>Request</code>. Arguably we should simply panic if the latter error occurs, since it indicates programmer error. But I've decided to treat it as its own error variant. 
So here's some boilerplate!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Debug)]\nenum ReverseProxyError {\n    Hyper(hyper::Error),\n    HyperHttp(hyper::http::Error),\n}\n\nimpl From&lt;hyper::Error&gt; for ReverseProxyError {\n    fn from(e: hyper::Error) -&gt; Self {\n        ReverseProxyError::Hyper(e)\n    }\n}\n\nimpl From&lt;hyper::http::Error&gt; for ReverseProxyError {\n    fn from(e: hyper::http::Error) -&gt; Self {\n        ReverseProxyError::HyperHttp(e)\n    }\n}\n\nimpl std::fmt::Display for ReverseProxyError {\n    fn fmt(&amp;self, fmt: &amp;mut std::fmt::Formatter) -&gt; std::fmt::Result {\n        write!(fmt, &quot;{:?}&quot;, self)\n    }\n}\n\nimpl std::error::Error for ReverseProxyError {}\n</code></pre>\n<p>With all of this in place, we can finally start writing our <code>handle</code> function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">async fn handle(mut req: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, ReverseProxyError&gt; {\n}\n</code></pre>\n<p>We're going to mutate the incoming <code>Request</code> to have our new destination, and then pass it along to the destination server. This is where the beauty of using hyper for client <em>and</em> server comes into play: no need to futz around with changing body or header representations. 
The first thing we do is strip out any of the <code>STRIPPED</code> request headers:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let h = req.headers_mut();\nfor key in &amp;STRIPPED {\n    h.remove(*key);\n}\n</code></pre>\n<p>Next, we're going to construct the new request URI by combining:</p>\n<ul>\n<li>The hard-coded scheme (<code>https</code>)</li>\n<li>The hard-coded authority (<code>www.fpcomplete.com</code>)</li>\n<li>The path and query from the incoming request</li>\n</ul>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let mut builder = hyper::Uri::builder()\n    .scheme(SCHEME)\n    .authority(HOST);\nif let Some(pq) = req.uri().path_and_query() {\n    builder = builder.path_and_query(pq.clone());\n}\n*req.uri_mut() = builder.build()?;\n</code></pre>\n<p>Panicking if <code>req.uri().path_and_query()</code> is <code>None</code> would be appropriate here, but as is my wont, I'm avoiding panics if possible. Next, for good measure, let's add in a little bit of debug output:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">log::debug!(&quot;request == {:?}&quot;, req);\n</code></pre>\n<p>Now we can construct our <code>Client</code> value to perform the HTTPS request:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let https = HttpsConnector::new();\nlet client = Client::builder().build(https);\n</code></pre>\n<p>And finally, let's perform the request, log the response, and return the response:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let response = client.request(req).await?;\nlog::debug!(&quot;response == {:?}&quot;, response);\nOk(response)\n</code></pre>\n<p>Our <code>main</code> function looks pretty similar to what we had before. 
I've added in initialization of <code>env-logger</code> with a default to <code>info</code> level output, and modified the program to <code>abort</code> if the server produces any errors:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n    env_logger::Builder::from_env(env_logger::Env::default().default_filter_or(&quot;info&quot;)).init();\n    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n    let make_svc = make_service_fn(|_conn| async {\n        Ok::&lt;_, ReverseProxyError&gt;(service_fn(handle))\n    });\n\n    let server = Server::bind(&amp;addr).serve(make_svc);\n    log::info!(&quot;Server started, bound on {}&quot;, addr);\n\n    if let Err(e) = server.await {\n        log::error!(&quot;server error: {}&quot;, e);\n        std::process::abort();\n    }\n}\n</code></pre>\n<p>The full code is <a href=\"https://gist.github.com/snoyberg/ab29c50671858e82ed5f6a88f8170449\">available as a Gist</a>. This program works as expected, and if I <code>cargo run</code> it and connect to <code>http://localhost:3000</code>, I see the FP Complete homepage. Yay!</p>\n<h2 id=\"wasteful-client\">Wasteful Client</h2>\n<p>The problem with this program is that it constructs a brand new <code>Client</code> value on every incoming request. That's expensive. Instead, we would like to produce the <code>Client</code> once, in <code>main</code>, and reuse it for each request. And herein lies the ownership puzzle. 
While we're at this, let's move away from using <code>const</code>s for the scheme and host, and instead bundle together the client, scheme, and host into a new <code>struct</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct ReverseProxy {\n    scheme: String,\n    host: String,\n    client: Client&lt;HttpsConnector&lt;hyper::client::HttpConnector&gt;&gt;,\n}\n</code></pre>\n<p>Next, we'll want to change <code>handle</code> from a standalone function to a method on <code>ReverseProxy</code>. (We could equivalently pass in a reference to <code>ReverseProxy</code> for <code>handle</code>, but this feels more idiomatic):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl ReverseProxy {\n    async fn handle(&amp;self, mut req: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, ReverseProxyError&gt; {\n        ...\n    }\n}\n</code></pre>\n<p>Then, within <code>handle</code>, we can replace <code>SCHEME</code> and <code>HOST</code> with <code>&amp;*self.scheme</code> and <code>&amp;*self.host</code>. You may be wondering &quot;why <code>&amp;*</code> and not <code>&amp;</code>?&quot; Without <code>&amp;*</code>, you'll get an error message:</p>\n<pre><code>error[E0277]: the trait bound `hyper::http::uri::Scheme: std::convert::From&lt;&amp;std::string::String&gt;` is not satisfied\n  --&gt; src\\main.rs:59:14\n   |\n59 |             .scheme(&amp;self.scheme)\n   |              ^^^^^^ the trait `std::convert::From&lt;&amp;std::string::String&gt;` is not implemented for `hyper::http::uri::Scheme`\n</code></pre>\n<p>This is one of those examples where the magic of deref coercion seems to fall apart. 
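</p>\n<p>To see the problem in miniature, here is a small std-only sketch (the <code>Scheme</code> type and <code>set_scheme</code> function below are made-up stand-ins, not hyper&#x27;s actual API). Deref coercion applies when a parameter is literally <code>&amp;str</code>, but not when the parameter is generic under a <code>From</code> bound:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct Scheme(String);\n\nimpl From&lt;&amp;str&gt; for Scheme {\n    fn from(s: &amp;str) -&gt; Self {\n        Scheme(s.to_owned())\n    }\n}\n\n&#x2F;&#x2F; Generic over T like the real builder method: there is no coercion\n&#x2F;&#x2F; site here, so a &amp;String argument will not auto-deref to &amp;str.\nfn set_scheme&lt;T&gt;(t: T) -&gt; Scheme\nwhere\n    Scheme: From&lt;T&gt;,\n{\n    Scheme::from(t)\n}\n\nfn main() {\n    let owned = String::from(&quot;https&quot;);\n    &#x2F;&#x2F; set_scheme(&amp;owned); &#x2F;&#x2F; error[E0277]: From&lt;&amp;String&gt; is not satisfied\n    let s = set_scheme(&amp;*owned); &#x2F;&#x2F; &amp;*owned reborrows as &amp;str, which matches\n    assert_eq!(s.0, &quot;https&quot;);\n}\n</code></pre>\n<p>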
Personally, I prefer using <code>self.scheme.as_str()</code> instead of <code>&amp;*self.scheme</code> to be more explicit, but <code>&amp;*self.scheme</code> is likely more idiomatic.</p>\n<p>Anyway, the final change within <code>handle</code> is to remove the <code>let https = ...;</code> and <code>let client = ...;</code> statements, and instead construct our response with:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let response = self.client.request(req).await?;\n</code></pre>\n<p>With that, our <code>handle</code> method is done, and we can focus our efforts on the true puzzle: the <code>main</code> function itself.</p>\n<h2 id=\"the-easy-part\">The easy part</h2>\n<p>The easy part of this is great: construct a <code>ReverseProxy</code> value, and provide the <code>make_svc</code> to <code>serve</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n    env_logger::Builder::from_env(env_logger::Env::default().default_filter_or(&quot;info&quot;)).init();\n    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n    let https = HttpsConnector::new();\n    let client = Client::builder().build(https);\n\n    let rp = ReverseProxy {\n        client,\n        scheme: &quot;https&quot;.to_owned(),\n        host: &quot;www.fpcomplete.com&quot;.to_owned(),\n    };\n\n    &#x2F;&#x2F; here be dragons\n\n    let server = Server::bind(&amp;addr).serve(make_svc);\n    log::info!(&quot;Server started, bound on {}&quot;, addr);\n\n    if let Err(e) = server.await {\n        log::error!(&quot;server error: {}&quot;, e);\n        std::process::abort();\n    }\n}\n</code></pre>\n<p>That middle part is where the difficulty lies. 
Previously, this code looked like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(|_conn| async {\n    Ok::&lt;_, ReverseProxyError&gt;(service_fn(handle))\n});\n</code></pre>\n<p>We no longer have a <code>handle</code> function. Working around that little enigma doesn't seem so bad initially. We'll create a closure as the argument to <code>service_fn</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(|_conn| async {\n    Ok::&lt;_, ReverseProxyError&gt;(service_fn(|req| {\n        rp.handle(req)\n    }))\n});\n</code></pre>\n<p>While that looks appealing, it fails lifetimes completely:</p>\n<pre><code>error[E0597]: `rp` does not live long enough\n   --&gt; src\\main.rs:90:13\n    |\n88  |       let make_svc = make_service_fn(|_conn| async {\n    |  ____________________________________-------_-\n    | |                                    |\n    | |                                    value captured here\n89  | |         Ok::&lt;_, ReverseProxyError&gt;(service_fn(|req| {\n90  | |             rp.handle(req)\n    | |             ^^ borrowed value does not live long enough\n91  | |         }))\n92  | |     });\n    | |_____- returning this value requires that `rp` is borrowed for `&#x27;static`\n...\n101 |   }\n    |   - `rp` dropped here while still borrowed\n</code></pre>\n<p>Nothing in the lifetimes of these values tells us that the <code>ReverseProxy</code> value will outlive the service. We cannot simply borrow a reference to <code>ReverseProxy</code> inside our closure. 
Instead, we're going to need to move ownership of the <code>ReverseProxy</code> to the closure.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(|_conn| async {\n    Ok::&lt;_, ReverseProxyError&gt;(service_fn(move |req| {\n        rp.handle(req)\n    }))\n});\n</code></pre>\n<p>Note the addition of <code>move</code> in front of the closure. Unfortunately, this doesn't work, and gives us a confusing error message:</p>\n<pre><code>error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements\n  --&gt; src\\main.rs:90:16\n   |\n90 |             rp.handle(req)\n   |                ^^^^^^\n   |\nnote: first, the lifetime cannot outlive the lifetime `&#x27;_` as defined on the body at 89:47...\n  --&gt; src\\main.rs:89:47\n   |\n89 |         Ok::&lt;_, ReverseProxyError&gt;(service_fn(move |req| {\n   |                                               ^^^^^^^^^^\nnote: ...so that closure can access `rp`\n  --&gt; src\\main.rs:90:13\n   |\n90 |             rp.handle(req)\n   |             ^^\n   = note: but, the lifetime must be valid for the static lifetime...\nnote: ...so that the type `hyper::proto::h2::server::H2Stream&lt;impl std::future::Future, hyper::Body&gt;` will meet its required lifetime bounds\n  --&gt; src\\main.rs:94:38\n   |\n94 |     let server = Server::bind(&amp;addr).serve(make_svc);\n   |                                      ^^^^^\n\nerror: aborting due to previous error\n</code></pre>\n<p>Instead of trying to parse that, let's take a step back, reassess, and then try again.</p>\n<h2 id=\"so-many-layers\">So many layers!</h2>\n<p>Remember way back to the beginning of this post. I went into some details around the process of having a <code>MakeService</code>, which would be run for each new incoming connection, and a <code>Service</code>, which would be run for each new request on an existing connection. 
The way we've written things so far, the first time we handle a request, that request handler will consume the <code>ReverseProxy</code>. That means that we would have a use-after-move for each subsequent request on that connection. We'd <em>also</em> have a use-after-move for each subsequent connection we receive.</p>\n<p>We want to share our <code>ReverseProxy</code> across multiple different <code>MakeService</code> and <code>Service</code> instantiations. Since this will occur across multiple system threads, the most straightforward way to handle this is to wrap our <code>ReverseProxy</code> in an <code>Arc</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let rp = std::sync::Arc::new(ReverseProxy {\n    client,\n    scheme: &quot;https&quot;.to_owned(),\n    host: &quot;www.fpcomplete.com&quot;.to_owned(),\n});\n</code></pre>\n<p>Now we're going to need to play around with <code>clone</code>ing this <code>Arc</code> at appropriate times. In particular, we'll need to clone twice: once inside the <code>make_service_fn</code> closure, and once inside the <code>service_fn</code> closure. This will ensure that we never move the <code>ReverseProxy</code> value out of the closure's environment, and that our closure can remain a <code>FnMut</code> instead of an <code>FnOnce</code>.</p>\n<p>And, in order to make <em>that</em> happen, we'll need to convince the compiler through appropriate usages of <code>move</code> to move ownership of the <code>ReverseProxy</code>, instead of borrowing a reference to a value with a different lifetime. This is where the fun begins! 
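</p>\n<p>The shape we are aiming for can be sketched with nothing but std, using nested closures as stand-ins for the <code>make_service_fn</code>&#x2F;<code>service_fn</code> layers (this illustrates the cloning pattern only, not hyper code):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::sync::Arc;\n\nfn main() {\n    let rp = Arc::new(String::from(&quot;shared state&quot;));\n\n    &#x2F;&#x2F; The outer closure owns the Arc. Cloning it on each call, instead\n    &#x2F;&#x2F; of moving it out, keeps the closure FnMut&#x2F;Fn rather than FnOnce.\n    let make = move || {\n        let rp = rp.clone();\n        move || format!(&quot;handling with {}&quot;, rp)\n    };\n\n    let svc1 = make();\n    let svc2 = make(); &#x2F;&#x2F; still fine: make was not consumed\n    assert_eq!(svc1(), &quot;handling with shared state&quot;);\n    assert_eq!(svc2(), &quot;handling with shared state&quot;);\n}\n</code></pre>\n<p>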
Let's go through a series of modifications until we get to our final mind-bender.</p>\n<h2 id=\"adding-move\">Adding move</h2>\n<p>To recap, we'll start with this code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let rp = std::sync::Arc::new(ReverseProxy {\n    client,\n    scheme: &quot;https&quot;.to_owned(),\n    host: &quot;www.fpcomplete.com&quot;.to_owned(),\n});\n\nlet make_svc = make_service_fn(|_conn| async {\n    Ok::&lt;_, ReverseProxyError&gt;(service_fn(|req| {\n        rp.handle(req)\n    }))\n});\n</code></pre>\n<p>The first thing I tried was adding an <code>rp.clone()</code> inside the first <code>async</code> block:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(|_conn| async {\n    let rp = rp.clone();\n    Ok::&lt;_, ReverseProxyError&gt;(service_fn(|req| {\n        rp.handle(req)\n    }))\n});\n</code></pre>\n<p>This doesn't work, presumably because I need to stick some <code>move</code>s on the initial closure and <code>async</code> block like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| async move {\n    let rp = rp.clone();\n    Ok::&lt;_, ReverseProxyError&gt;(service_fn(|req| {\n        rp.handle(req)\n    }))\n});\n</code></pre>\n<p>This unfortunately still doesn't work, and gives me the error message:</p>\n<pre><code>error[E0507]: cannot move out of `rp`, a captured variable in an `FnMut` closure\n  --&gt; src\\main.rs:88:60\n   |\n82 |       let rp = std::sync::Arc::new(ReverseProxy {\n   |           -- captured outer variable\n...\n88 |       let make_svc = make_service_fn(move |_conn| async move {\n   |  ____________________________________________________________^\n89 | |         let rp = rp.clone();\n   | |                  --\n   | |                  |\n   | |                  move 
occurs because `rp` has type `std::sync::Arc&lt;ReverseProxy&gt;`, which does not implement the `Copy` trait\n   | |                  move occurs due to use in generator\n90 | |         Ok::&lt;_, ReverseProxyError&gt;(service_fn(|req| {\n91 | |             rp.handle(req)\n92 | |         }))\n93 | |     });\n   | |_____^ move out of `rp` occurs here\n</code></pre>\n<p>It took me a while to grok what was happening. And in fact, I'm not 100% certain I grok it yet. But I believe what is happening is:</p>\n<ul>\n<li>The closure grabs ownership of <code>rp</code> (good)</li>\n<li>The <code>async</code> block grabs ownership of <code>rp</code>, which seemed good, but isn't</li>\n<li>Inside the <code>async</code> block, we make a clone of <code>rp</code></li>\n<li>When the <code>async</code> block is dropped, its ownership of the original <code>rp</code> is dropped</li>\n<li>Since the <code>rp</code> was moved out of the closure, the closure is now an <code>FnOnce</code> and cannot be called a second time</li>\n</ul>\n<p>That's no good! It turns out the trick to fixing this isn't so difficult. Don't grab ownership in the <code>async</code> block. Instead, clone the <code>rp</code> in the closure, before the <code>async</code> block:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n    let rp = rp.clone();\n    async move {\n        Ok::&lt;_, ReverseProxyError&gt;(service_fn(|req| {\n            rp.handle(req)\n        }))\n    }\n});\n</code></pre>\n<p>Woohoo! One <code>clone</code> down. This code still doesn't compile, but we're closer. 
The next change to make is simple: stick a <code>move</code> on the inner closure:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n    let rp = rp.clone();\n    async move {\n        Ok::&lt;_, ReverseProxyError&gt;(service_fn(move |req| {\n            rp.handle(req)\n        }))\n    }\n});\n</code></pre>\n<p>This also fails, but going back to our description before, it's easy to see why. We still need a second <code>clone</code>, to make sure we aren't moving the <code>ReverseProxy</code> value out of the closure. Making that change is easy, but unfortunately doesn't fully solve our problem. This code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n    let rp = rp.clone();\n    async move {\n        Ok::&lt;_, ReverseProxyError&gt;(service_fn(move |req| {\n            let rp = rp.clone();\n            rp.handle(req)\n        }))\n    }\n});\n</code></pre>\n<p>Still gives us the error message:</p>\n<pre><code>error[E0515]: cannot return value referencing local variable `rp`\n  --&gt; src\\main.rs:93:17\n   |\n93 |                 rp.handle(req)\n   |                 --^^^^^^^^^^^^\n   |                 |\n   |                 returns a value referencing data owned by the current function\n   |                 `rp` is borrowed here\n</code></pre>\n<p>What's going on here?</p>\n<h2 id=\"did-your-future-borrow-my-reference\">Did your Future borrow my reference?</h2>\n<p>Again, referring to the introduction, I mentioned that the <code>service_fn</code> parameter had to return a <code>Future&lt;Output...&gt;</code>. This is an example of the <code>impl Trait</code> approach. I've previously <a href=\"https://tech.fpcomplete.com/rust/ownership-and-impl-trait/\">blogged about ownership and impl trait</a>. There are some pain points around this combination. 
And we've hit one of them.</p>\n<p>The return type of our <code>handle</code> method doesn't indicate what underlying type is implementing <code>Future</code>. That underlying implementation <em>may</em> choose to hold onto references passed into the <code>handle</code> method. That would include references to <code>&amp;self</code>. And that means if we return that <code>Future</code> outside of our closure, a reference may outlive the value.</p>\n<p>I can think of two ways to solve this problem, though there are probably more. The first one I'll show you isn't the one I prefer, but is the one that likely gets the idea across more clearly. Our <code>handle</code> method is taking a reference to <code>ReverseProxy</code>. But if it didn't take a reference, and instead received the <code>ReverseProxy</code> by move, there would be no references to accidentally end up in the <code>Future</code>.</p>\n<p>Cloning the <code>ReverseProxy</code> itself is expensive. Fortunately, we have another option: pass in the <code>Arc&lt;ReverseProxy&gt;</code>!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl ReverseProxy {\n    async fn handle(self: std::sync::Arc&lt;Self&gt;, mut req: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, ReverseProxyError&gt; {\n        ...\n    }\n}\n</code></pre>\n<p>Without changing any code inside the <code>handle</code> method or the <code>main</code> function, this compiles and behaves correctly. But like I said: I don't like it very much. This is limiting the generality of our <code>handle</code> method. It feels like putting the complexity in the wrong place. (Maybe you'll disagree and say that this is the better solution. That's fine, I'd be really interested to hear people's thoughts.)</p>\n<p>Instead, another possibility is to introduce an <code>async move</code> inside <code>main</code>. 
This will take ownership of the <code>Arc&lt;ReverseProxy&gt;</code>, and ensure that it lives as long as the <code>Future</code> generated by that <code>async move</code> block itself. This solution looks like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n    let rp = rp.clone();\n    async move {\n        Ok::&lt;_, ReverseProxyError&gt;(service_fn(move |req| {\n            let rp = rp.clone();\n            async move { rp.handle(req).await }\n        }))\n    }\n});\n</code></pre>\n<p>We need to call <code>.await</code> inside the <code>async</code> block to ensure we don't return a future-of-a-future. But with that change, everything works. I'm not terribly thrilled with this. It feels like an ugly hack. I don't have any recommendations, but I hope there are improvements to the <code>impl Trait</code> ownership story in the future.</p>\n<h2 id=\"one-final-improvement\">One final improvement</h2>\n<p>One final tweak. We put <code>async move</code> after the first <code>rp.clone()</code> originally. This helped make the error messages more tractable. But it turns out that that <code>move</code> isn't doing anything useful. The <code>move</code> on the inner closure already forces a move of the cloned <code>rp</code>. 
So we can simplify our code by removing just one <code>move</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n    let rp = rp.clone();\n    async {\n        Ok::&lt;_, ReverseProxyError&gt;(service_fn(move |req| {\n            let rp = rp.clone();\n            async move { rp.handle(req).await }\n        }))\n    }\n});\n</code></pre>\n<p>This final version of the code is <a href=\"https://gist.github.com/snoyberg/54df3cc7fa1ee1fa77cbde6c75f3df0c\">available as a Gist too</a>.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hope this was a fun trip down ownership lane. If this seemed overly complicated, keep in mind a few things:</p>\n<ul>\n<li>It's strongly recommended to use a higher level web framework when writing server side code in Rust</li>\n<li>We ended up implementing something pretty sophisticated in about 100 lines of code</li>\n<li>Hopefully the story around ownership and <code>impl Trait</code> will improve over time</li>\n</ul>\n<p>If you enjoyed this, you may want to check out our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a>, or sign up for our <a href=\"https://tech.fpcomplete.com/training/\">free December Rust training course</a>.</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/rust/ownership-and-impl-trait/\">Ownership and impl Trait</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/\">Collect in Rust, traverse in Haskell and Scala</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/\">Where Rust fits in your organization</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/\">Avoiding duplicating strings in Rust</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/",
        "slug": "ownership-puzzle-rust-async-hyper",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "An ownership puzzle with Rust, async, and hyper",
        "description": "I personally find ownership in the presence of closures and async blocks to be hard to master. In this post, I'll work through a puzzle I encountered with a simple reverse proxy in Rust and hyper.",
        "updated": null,
        "date": "2020-11-17",
        "year": 2020,
        "month": 11,
        "day": 17,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/ownership-puzzle-rust-async-hyper/",
        "components": [
          "blog",
          "ownership-puzzle-rust-async-hyper"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "cargo-toml",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#cargo-toml",
            "title": "Cargo.toml",
            "children": []
          },
          {
            "level": 2,
            "id": "basic-web-service",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#basic-web-service",
            "title": "Basic web service",
            "children": []
          },
          {
            "level": 2,
            "id": "initial-reverse-proxy",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#initial-reverse-proxy",
            "title": "Initial reverse proxy",
            "children": []
          },
          {
            "level": 2,
            "id": "wasteful-client",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#wasteful-client",
            "title": "Wasteful Client",
            "children": []
          },
          {
            "level": 2,
            "id": "the-easy-part",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#the-easy-part",
            "title": "The easy part",
            "children": []
          },
          {
            "level": 2,
            "id": "so-many-layers",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#so-many-layers",
            "title": "So many layers!",
            "children": []
          },
          {
            "level": 2,
            "id": "adding-move",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#adding-move",
            "title": "Adding move",
            "children": []
          },
          {
            "level": 2,
            "id": "did-your-future-borrow-my-reference",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#did-your-future-borrow-my-reference",
            "title": "Did your Future borrow my reference?",
            "children": []
          },
          {
            "level": 2,
            "id": "one-final-improvement",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#one-final-improvement",
            "title": "One final improvement",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 3542,
        "reading_time": 18,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/rust-kubernetes-windows.md",
        "colocated_path": null,
        "content": "<p>A few years back, we <a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">published a blog post</a> about deploying a Rust application using Docker and Kubernetes. That application was a Telegram bot. We're going to do something similar today, but with a few meaningful differences:</p>\n<ol>\n<li>We're going to be deploying a web app. Don't get too excited: this will be an incredibly simply piece of code, basically copy-pasted from the <a href=\"https://actix.rs/docs/application/\">actix-web documentation</a>.</li>\n<li>We're going to build the deployment image on Github Actions</li>\n<li>And we're going to be building this using Windows Containers instead of Linux. (Sorry for burying the lead.)</li>\n</ol>\n<p>We put this together for testing purposes when rolling out Windows support in our <a href=\"https://tech.fpcomplete.com/products/kube360/\">managed Kubernetes product, Kube360®</a> here at FP Complete. I wanted to put this post together to demonstrate a few things:</p>\n<ul>\n<li>How pleasant and familiar Windows Containers workflows were versus the more familiar Linux approaches</li>\n<li>Github Actions work seamlessly for building Windows Containers</li>\n<li>With the correct configuration, Kubernetes is a great platform for deploying Windows Containers</li>\n<li>And, of course, how wonderful the Rust toolchain is on Windows</li>\n</ul>\n<p>Alright, let's dive in! And if any of those topics sound interesting, and you'd like to learn more about FP Complete offerings, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us for more information on our offerings</a>.</p>\n<h2 id=\"prereqs\">Prereqs</h2>\n<p>Quick sidenote before we dive in. Windows Containers only run on Windows machines. Not even all Windows machines will support Windows Containers. You'll need Windows 10 Pro or a similar license, and have Docker installed on that machine. 
You'll also need to ensure that Docker is set to use Windows instead of Linux containers.</p>\n<p>If you have all of that set up, you'll be able to follow along with most of the steps below. If not, you won't be able to build or run the Docker images on your local machine.</p>\n<p>Also, for running the application on Kubernetes, you'll need a Kubernetes cluster with Windows nodes. I'll be using the FP Complete Kube360 test cluster on Azure in this blog post, though we've previously tested it on both AWS and on-prem clusters too.</p>\n<h2 id=\"the-rust-application\">The Rust application</h2>\n<p>The source code for this application will be, by far, the most uninteresting part of this post. As mentioned, it's basically a copy-paste of an example featuring mutable state, straight from the actix-web documentation. It turns out this was a great way to test out basic Kubernetes functionality like health checks, replicas, and autohealing.</p>\n<p>We're going to build this using the latest stable Rust version as of writing this post, so create a <code>rust-toolchain</code> file with the contents:</p>\n<pre><code>1.47.0\n</code></pre>\n<p>Our <code>Cargo.toml</code> file will be pretty vanilla, just adding in the dependency on <code>actix-web</code>:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[package]\nname = &quot;windows-docker-web&quot;\nversion = &quot;0.1.0&quot;\nauthors = [&quot;Michael Snoyman &lt;[email protected]&gt;&quot;]\nedition = &quot;2018&quot;\n\n[dependencies]\nactix-web = &quot;3.1&quot;\n</code></pre>\n<p>If you want to see the <code>Cargo.lock</code> file I compiled with, it's <a href=\"https://github.com/fpco/windows-docker-web/blob/f8a3192e63f2e699cc67716488a633f5e0893446/Cargo.lock\">available in the source repo</a>.</p>\n<p>And finally, the actual code in <code>src/main.rs</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use 
actix_web::{get, web, App, HttpServer};\nuse std::sync::Mutex;\n\nstruct AppState {\n    counter: Mutex&lt;i32&gt;,\n}\n\n#[get(&quot;&#x2F;&quot;)]\nasync fn index(data: web::Data&lt;AppState&gt;) -&gt; String {\n    let mut counter = data.counter.lock().unwrap();\n    *counter += 1;\n    format!(&quot;Counter is at {}&quot;, counter)\n}\n\n#[actix_web::main]\nasync fn main() -&gt; std::io::Result&lt;()&gt; {\n    let host = &quot;0.0.0.0:8080&quot;;\n    println!(&quot;Trying to listen on {}&quot;, host);\n    let app_state = web::Data::new(AppState {\n        counter: Mutex::new(0),\n    });\n    HttpServer::new(move || App::new().app_data(app_state.clone()).service(index))\n        .bind(host)?\n        .run()\n        .await\n}\n</code></pre>\n<p>This code creates an application state (a mutex of an <code>i32</code>), defines a single <code>GET</code> handler that increments that variable and prints the current value, and then hosts this on <code>0.0.0.0:8080</code>. Not too shabby.</p>\n<p>If you're following along with the code, now would be a good time to <code>cargo run</code> and make sure you're able to load up the site on your <code>localhost:8080</code>.</p>\n<h2 id=\"dockerfile\">Dockerfile</h2>\n<p>If this is your first foray into Windows Containers, you may be surprised to hear me say &quot;Dockerfile.&quot; Windows Container images can be built with the same kind of Dockerfiles you're used to from the Linux world. This even supports more advanced features, such as multistage Dockerfiles, which we're going to take advantage of here.</p>\n<p>There are a number of different base images provided by Microsoft for Windows Containers. We're going to be using Windows Server Core. It provides enough capabilities for installing Rust dependencies (which we'll see shortly), without including too much unneeded extras. 
Nanoserver is a much lighter-weight image, but it doesn't play nicely with the Microsoft Visual C++ runtime we're using for the <code>-msvc</code> Rust target.</p>\n<p><strong>NOTE</strong> I've elected to use the <code>-msvc</code> target here instead of <code>-gnu</code> for two reasons. Firstly, it's closer to the actual use cases we need to support in Kube360, and therefore made a better test case. Also, as the default target for Rust on Windows, it seemed appropriate. It should be possible to set up a more minimal nanoserver-based image based on the <code>-gnu</code> target, if someone's interested in a &quot;fun&quot; side project.</p>\n<p>The <a href=\"https://github.com/fpco/windows-docker-web/blob/f8a3192e63f2e699cc67716488a633f5e0893446/Dockerfile\">complete Dockerfile is available on Github</a>, but let's step through it more carefully. As mentioned, we'll be performing a multistage build. We'll start with the build image, which will install the Rust build toolchain and compile our application. 
We start off by using the Windows Server Core base image and switching the shell back to the standard <code>cmd.exe</code>:</p>\n<pre><code>FROM mcr.microsoft.com&#x2F;windows&#x2F;servercore:1809 as build\n\n# Restore the default Windows shell for correct batch processing.\nSHELL [&quot;cmd&quot;, &quot;&#x2F;S&quot;, &quot;&#x2F;C&quot;]\n</code></pre>\n<p>Next we're going to install the Visual Studio buildtools necessary for building Rust code:</p>\n<pre><code># Download the Build Tools bootstrapper.\nADD https:&#x2F;&#x2F;aka.ms&#x2F;vs&#x2F;16&#x2F;release&#x2F;vs_buildtools.exe &#x2F;vs_buildtools.exe\n\n# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload,\n# excluding workloads and components with known issues.\nRUN vs_buildtools.exe --quiet --wait --norestart --nocache \\\n    --installPath C:\\BuildTools \\\n    --add Microsoft.Component.MSBuild \\\n    --add Microsoft.VisualStudio.Component.Windows10SDK.18362 \\\n    --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64\t\\\n || IF &quot;%ERRORLEVEL%&quot;==&quot;3010&quot; EXIT 0\n</code></pre>\n<p>And then we'll modify the entrypoint to include the environment modifications necessary to use those buildtools:</p>\n<pre><code># Define the entry point for the docker container.\n# This entry point starts the developer command prompt and launches the PowerShell shell.\nENTRYPOINT [&quot;C:\\\\BuildTools\\\\Common7\\\\Tools\\\\VsDevCmd.bat&quot;, &quot;&amp;&amp;&quot;, &quot;powershell.exe&quot;, &quot;-NoLogo&quot;, &quot;-ExecutionPolicy&quot;, &quot;Bypass&quot;]\n</code></pre>\n<p>Next up is installing <code>rustup</code>, which is fortunately pretty easy:</p>\n<pre><code>RUN curl -fSLo rustup-init.exe https:&#x2F;&#x2F;win.rustup.rs&#x2F;x86_64\nRUN start &#x2F;w rustup-init.exe -y -v &amp;&amp; echo &quot;Error level is %ERRORLEVEL%&quot;\nRUN del rustup-init.exe\n\nRUN setx &#x2F;M PATH 
&quot;C:\\Users\\ContainerAdministrator\\.cargo\\bin;%PATH%&quot;\n</code></pre>\n<p>Then we copy over the relevant source files and kick off a build, storing the generated executable in <code>c:\\output</code>:</p>\n<pre><code>COPY Cargo.toml &#x2F;project&#x2F;Cargo.toml\nCOPY Cargo.lock &#x2F;project&#x2F;Cargo.lock\nCOPY rust-toolchain &#x2F;project&#x2F;rust-toolchain\nCOPY src&#x2F; &#x2F;project&#x2F;src\nRUN cargo install --path &#x2F;project --root &#x2F;output\n</code></pre>\n<p>And with that, we're done with our build! Time to jump over to our runtime image. We don't need the Visual Studio buildtools in this image, but we do need the Visual C++ runtime:</p>\n<pre><code>FROM mcr.microsoft.com&#x2F;windows&#x2F;servercore:1809\n\nADD https:&#x2F;&#x2F;download.microsoft.com&#x2F;download&#x2F;6&#x2F;A&#x2F;A&#x2F;6AA4EDFF-645B-48C5-81CC-ED5963AEAD48&#x2F;vc_redist.x64.exe &#x2F;vc_redist.x64.exe\nRUN c:\\vc_redist.x64.exe &#x2F;install &#x2F;quiet &#x2F;norestart\n</code></pre>\n<p>With that in place, we can copy over our executable from the build image and set it as the default <code>CMD</code> in the image:</p>\n<pre><code>COPY --from=build c:&#x2F;output&#x2F;bin&#x2F;windows-docker-web.exe &#x2F;\n\nCMD [&quot;&#x2F;windows-docker-web.exe&quot;]\n</code></pre>\n<p>And just like that, we've got a real life Windows Container. If you'd like to, you can test it out yourself by running:</p>\n<pre><code>&gt; docker run --rm -p 8080:8080 fpco&#x2F;windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446\n</code></pre>\n<p>If you connect to port 8080, you should see our painfully simple app. Hurrah!</p>\n<h2 id=\"building-with-github-actions\">Building with Github Actions</h2>\n<p>One of the nice things about using a multistage Dockerfile for performing the build is that our CI scripts become very simple. 
Instead of needing to set up an environment with correct build tools or any other configuration, our script:</p>\n<ul>\n<li>Logs into the Docker Hub registry</li>\n<li>Performs a <code>docker build</code></li>\n<li>Pushes to the Docker Hub registry</li>\n</ul>\n<p>The downside is that there is no build caching at play with this setup. There are multiple methods to mitigate this problem, such as creating helper build images that pre-bake the dependencies. Or you can perform the builds on the host on CI and only use the Dockerfile for generating the runtime image. Those are interesting tweaks to try out another time. </p>\n<p>Taking on the simple multistage approach though, we have the following in our <code>.github/workflows/container.yml</code> file:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">name: Build a Windows container\n\non:\n    push:\n        branches: [master]\n\njobs:\n    build:\n        runs-on: windows-latest\n\n        steps:\n        - uses: actions&#x2F;checkout@v1\n\n        - name: Build and push\n          shell: bash\n          run: |\n            echo &quot;${{ secrets.DOCKER_HUB_TOKEN }}&quot; | docker login --username fpcojenkins --password-stdin\n            IMAGE_ID=fpco&#x2F;windows-docker-web:$GITHUB_SHA\n            docker build -t $IMAGE_ID .\n            docker push $IMAGE_ID\n</code></pre>\n<p>I like following the convention of tagging my images with the Git SHA of the commit. Other people prefer different tagging schemes, it's all up to you.</p>\n<h2 id=\"manifest-files\">Manifest files</h2>\n<p>Now that we have a working Windows Container image, the next step is to deploy it to our Kube360 cluster. Generally, we use ArgoCD and Kustomize for managing app deployments within Kube360, which lets us keep a very nice Gitops workflow. Instead, for this blog post, I'll show you the raw manifest files. 
It will also let us play with the <code>k3</code> command line tool, which also happens to be written in Rust.</p>\n<p>First we'll have a Deployment manifest to manage the pods running the application itself. Since this is a simple Rust application, we can put very low resource limits on this. We're going to disable the Istio sidecar, since it's not compatible with Windows. We're going to ask Kubernetes to use the Windows machines to host these pods. And we're going to set up some basic health checks. All told, this is what our manifest file looks like:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: apps&#x2F;v1\nkind: Deployment\nmetadata:\n  name: windows-docker-web\n  labels:\n    app.kubernetes.io&#x2F;component: webserver\nspec:\n  replicas: 1\n  minReadySeconds: 5\n  selector:\n    matchLabels:\n      app.kubernetes.io&#x2F;component: webserver\n  template:\n    metadata:\n      labels:\n        app.kubernetes.io&#x2F;component: webserver\n      annotations:\n        sidecar.istio.io&#x2F;inject: &quot;false&quot;\n    spec:\n      runtimeClassName: windows-2019\n      containers:\n        - name: windows-docker-web\n          image: fpco&#x2F;windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446\n          ports:\n            - name: http\n              containerPort: 8080\n          readinessProbe:\n            httpGet:\n              path: &#x2F;\n              port: 8080\n            initialDelaySeconds: 10\n            periodSeconds: 10\n          livenessProbe:\n            httpGet:\n              path: &#x2F;\n              port: 8080\n            initialDelaySeconds: 10\n            periodSeconds: 10\n          resources:\n            requests:\n              memory: 128Mi\n              cpu: 100m\n            limits:\n              memory: 128Mi\n              cpu: 100m\n</code></pre>\n<p>Awesome, that's the most complicated by far of the three manifests. 
Next we'll put a fairly stock-standard Service in front of that deployment:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: v1\nkind: Service\nmetadata:\n  name: windows-docker-web\n  labels:\n    app.kubernetes.io&#x2F;component: webserver\nspec:\n  ports:\n  - name: http\n    port: 80\n    targetPort: http\n  type: ClusterIP\n  selector:\n    app.kubernetes.io&#x2F;component: webserver\n</code></pre>\n<p>This exposes a service on port 80, and targets the <code>http</code> port (port 8080) inside the deployment. Finally, we have our Ingress. Kube360 uses external DNS to automatically set DNS records, and cert-manager to automatically grab TLS certificates. Our manifest looks like this:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.k8s.io&#x2F;v1beta1\nkind: Ingress\nmetadata:\n  annotations:\n    cert-manager.io&#x2F;cluster-issuer: letsencrypt-ingress-prod\n    kubernetes.io&#x2F;ingress.class: nginx\n    nginx.ingress.kubernetes.io&#x2F;force-ssl-redirect: &quot;true&quot;\n  name: windows-docker-web\nspec:\n  rules:\n  - host: windows-docker-web.az.fpcomplete.com\n    http:\n      paths:\n      - backend:\n          serviceName: windows-docker-web\n          servicePort: 80\n  tls:\n  - hosts:\n    - windows-docker-web.az.fpcomplete.com\n    secretName: windows-docker-web-tls\n</code></pre>\n<p>Now that we have our application inside a Docker image, and we have our manifest files to instruct Kubernetes on how to run it, we just need to deploy these manifests and we'll be done.</p>\n<h2 id=\"launch\">Launch</h2>\n<p>With our manifests in place, we can finally deploy them. You can use <code>kubectl</code> directly to do this. 
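</p>\n<p>If your <code>kubeconfig</code> already points at a cluster with Windows nodes, the plain <code>kubectl</code> version is just three <code>apply</code> calls, sketched here using the manifest file names from above:</p>\n<pre><code>&gt; kubectl apply -f deployment.yaml\n&gt; kubectl apply -f service.yaml\n&gt; kubectl apply -f ingress.yaml\n</code></pre>\n<p>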
Since I'm deploying to Kube360, I'm going to use the <code>k3</code> command line tool, which automates the process of logging in, getting temporary Kubernetes credentials, and providing those to the <code>kubectl</code> command via an environment variable. These steps could be run on Windows, Mac, or Linux. But since we've done the rest of this post on Windows, I'll use my Windows machine for this too.</p>\n<pre><code>&gt; k3 init test.az.fpcomplete.com\n&gt; k3 kubectl apply -f deployment.yaml\nWeb browser opened to https:&#x2F;&#x2F;test.az.fpcomplete.com&#x2F;k3-confirm?nonce=c1f764d8852f4ff2a2738fb0a2078e68\nPlease follow the login steps there (if needed).\nThen return to this terminal.\nPolling the server. Please standby.\nChecking ...\nThanks, got the token response. Verifying token is valid\nRetrieving a kubeconfig for use with k3 kubectl\nKubeconfig retrieved. You are now ready to run kubectl commands with `k3 kubectl ...`\ndeployment.apps&#x2F;windows-docker-web created\n&gt; k3 kubectl apply -f ingress.yaml\ningress.networking.k8s.io&#x2F;windows-docker-web created\n&gt; k3 kubectl apply -f service.yaml\nservice&#x2F;windows-docker-web created\n</code></pre>\n<p>I told <code>k3</code> to use the <code>test.az.fpcomplete.com</code> cluster. On the first <code>k3 kubectl</code> call, it detected that I did not have valid credentials for the cluster, and opened up my browser to a page that allowed me to log in. One of the design goals in Kube360 is to strongly leverage existing identity providers, such as Azure AD, Google Directory, Okta, Microsoft 365, and others. This is not only more secure than copy-pasting <code>kubeconfig</code> files with permanent credentials around, but more user friendly. 
As you can see, the process above was pretty automated.</p>\n<p>It's easy enough to check that the pods are actually running and healthy:</p>\n<pre><code>&gt; k3 kubectl get pods\nNAME                                  READY   STATUS    RESTARTS   AGE\nwindows-docker-web-5687668cdf-8tmn2   1&#x2F;1     Running   0          3m2s\n</code></pre>\n<p>Initially, the ingress controller looked like this while it was getting TLS certificates:</p>\n<pre><code>&gt; k3 kubectl get ingress\nNAME                        CLASS    HOSTS                                  ADDRESS   PORTS     AGE\ncm-acme-http-solver-zlq6j   &lt;none&gt;   windows-docker-web.az.fpcomplete.com             80        0s\nwindows-docker-web          &lt;none&gt;   windows-docker-web.az.fpcomplete.com             80, 443   3s\n</code></pre>\n<p>And after cert-manager gets the TLS certificate, it will switch over to:</p>\n<pre><code>&gt; k3 kubectl get ingress\nNAME                 CLASS    HOSTS                                  ADDRESS          PORTS     AGE\nwindows-docker-web   &lt;none&gt;   windows-docker-web.az.fpcomplete.com   52.151.225.139   80, 443   90s\n</code></pre>\n<p>And finally, our site is live! Hurrah, a Rust web application compiled for Windows and running on Kubernetes inside Azure.</p>\n<p><strong>NOTE</strong> Depending on when you read this post, the web app may or may not still be live, so don't be surprised if you don't get a response if you try to connect to that host.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>This post was a bit light on actual Rust code, but heavy on a lot of Windows scripting. As I think many Rustaceans already know, the dev experience for Rust on Windows is top notch. What may not have been obvious is how pleasant the Docker experience is on Windows. There are definitely some pain points, like the large images involved and needing to install the VC runtime. But overall, with a bit of cargo-culting, it's not too bad. 
And finally, having a cluster with Windows support ready via Kube360 makes deployment a breeze.</p>\n<p>If anyone has follow up questions about anything here, please <a href=\"https://twitter.com/snoyberg\">reach out to me on Twitter</a> or <a href=\"https://tech.fpcomplete.com/contact-us/\">contact our team at FP Complete</a>. In addition to our <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360 product offering</a>, FP Complete provides many related services, including:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/platformengineering/\">DevOps consulting</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust consulting and training</a></li>\n<li><a href=\"https://tech.fpcomplete.com/services/\">General training and consulting services</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/\">Haskell consulting and training</a></li>\n</ul>\n<p>If you liked this post, please check out some related posts:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">The Rust Crash Course eBook</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/understanding-cloud-auth/\">Understanding cloud auth</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
        "slug": "rust-kubernetes-windows",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Deploying Rust with Windows Containers on Kubernetes",
        "description": "An example of deploying Rust inside a Windows Containers as a web service hosted on Kubernetes",
        "updated": null,
        "date": "2020-10-26",
        "year": 2020,
        "month": 10,
        "day": 26,
        "taxonomies": {
          "tags": [
            "rust",
            "devops",
            "kubernetes"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "image": "images/blog/rust-windows-kube360.png"
        },
        "path": "/blog/rust-kubernetes-windows/",
        "components": [
          "blog",
          "rust-kubernetes-windows"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "prereqs",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#prereqs",
            "title": "Prereqs",
            "children": []
          },
          {
            "level": 2,
            "id": "the-rust-application",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#the-rust-application",
            "title": "The Rust application",
            "children": []
          },
          {
            "level": 2,
            "id": "dockerfile",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#dockerfile",
            "title": "Dockerfile",
            "children": []
          },
          {
            "level": 2,
            "id": "building-with-github-actions",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#building-with-github-actions",
            "title": "Building with Github Actions",
            "children": []
          },
          {
            "level": 2,
            "id": "manifest-files",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#manifest-files",
            "title": "Manifest files",
            "children": []
          },
          {
            "level": 2,
            "id": "launch",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#launch",
            "title": "Launch",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2573,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
            "title": "An Istio/mutual TLS debugging story"
          }
        ]
      },
      {
        "relative_path": "blog/collect-rust-traverse-haskell-scala.md",
        "colocated_path": null,
        "content": "<p>There's a running joke in the functional programming community. Any Scala program can be written by combining the <code>traverse</code> function the correct number of times. This blog post is dedicated to that joke.</p>\n<p>In Rust, the <code>Iterator</code> trait defines a stream of values of a specific type. Many common types provide an <code>Iterator</code> interface. And the built in <code>for</code> loop construct works directly with the <code>Iterator</code> trait. Using that, we can easily do something like &quot;print all the numbers in a <code>Vec</code>&quot;:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec: Vec&lt;i32&gt; = vec![1, 2, 3, 4, 5];\n\n    for num in myvec {\n        println!(&quot;{}&quot;, num);\n    }\n}\n</code></pre>\n<p>Let's say we want to do something a bit different: double every value in the <code>Vec</code>. The most idiomatic and performant way to do that in Rust is with mutable references, e.g.:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let mut myvec: Vec&lt;i32&gt; = vec![1, 2, 3, 4, 5];\n\n    for num in &amp;mut myvec {\n        *num *= 2;\n    }\n\n    println!(&quot;{:?}&quot;, myvec);\n}\n</code></pre>\n<p>Since we're dedicating this post to functional programmers, it's worth noting: this looks decidedly not-functional. &quot;Take a collection and apply a function over each value&quot; is well understood in FP circles—and increasingly in non-FP circles—as a <code>map</code>. Or using more category-theoretic nomenclature, it's a <code>Functor</code>. Fortunately, Rust provides a <code>map</code> method for <code>Iterator</code>s. Unfortunately, unlike Scala or Haskell, <code>map</code> doesn't work on data types like <code>Vec</code>. 
Let's compare, using Haskell:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">list :: [Int]\nlist = [1, 2, 3, 4, 5]\n\nmain :: IO ()\nmain = do\n  let newList :: [Int]\n      newList = map (* 2) list\n  print newList\n</code></pre>\n<p>The <code>map</code> function from the <code>Functor</code> typeclass works directly on a list. It produces a new list with the function applied to each value. Let's try to do the most equivalent thing in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec: Vec&lt;i32&gt; = vec![1, 2, 3, 4, 5];\n\n    let new_vec: Vec&lt;i32&gt; = myvec.map(|x| x * 2);\n\n    println!(&quot;{:?}&quot;, new_vec);\n}\n</code></pre>\n<p>This fails with the error message:</p>\n<pre><code>no method named `map` found for struct `std::vec::Vec&lt;i32&gt;` in the current scope\n</code></pre>\n<p>That's because, in Rust, <code>map</code> applies to the <code>Iterator</code> itself, not the underlying data structures. In order to use <code>map</code> on a <code>Vec</code>, we have to:</p>\n<ol>\n<li>Convert the <code>Vec</code> into an <code>Iterator</code></li>\n<li>Perform the <code>map</code> on the <code>Iterator</code></li>\n<li>Convert the <code>Iterator</code> back into a <code>Vec</code></li>\n</ol>\n<p>(1) can be performed using the <code>IntoIterator</code> trait, which provides a method named <code>into_iter</code>. And for (3), we could write our own <code>for</code> loop that fills up a <code>Vec</code>. But the right way is to use the <code>FromIterator</code> trait. 
And the easiest way to do that is with the <code>collect</code> method on <code>Iterator</code>.</p>\n<h2 id=\"using-fromiterator-and-collect\">Using FromIterator and collect</h2>\n<p>Let's write a program that properly uses <code>map</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec: Vec&lt;i32&gt; = vec![1, 2, 3, 4, 5];\n\n    let new_vec: Vec&lt;i32&gt; = myvec\n        .into_iter()\n        .map(|x| x * 2)\n        .collect();\n\n    println!(&quot;{:?}&quot;, new_vec);\n}\n</code></pre>\n<p>Fairly straightforward, and our 3 steps turn into three chained method calls. Unfortunately, in practice, using <code>collect</code> is often not quite as straightforward as this. That's due to type inference. To see what I mean, let's take all of the type annotations out of the program above:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec = vec![1, 2, 3, 4, 5];\n\n    let new_vec = myvec\n        .into_iter()\n        .map(|x| x * 2)\n        .collect();\n\n    println!(&quot;{:?}&quot;, new_vec);\n}\n</code></pre>\n<p>This gives us the very friendly message:</p>\n<pre><code>error[E0282]: type annotations needed\n --&gt; src\\main.rs:4:9\n  |\n4 |     let new_vec = myvec\n  |         ^^^^^^^ consider giving `new_vec` a type\n</code></pre>\n<p>The issue here is that we don't know which implementation of <code>FromIterator</code> we should be using. This is a problem that didn't exist in the pure FP world with <code>map</code> and <code>Functor</code>. In that world, <code>Functor</code>'s <code>map</code> is always &quot;shape preserving.&quot; When you <code>map</code> over a list in Haskell, the result will always be a list.</p>\n<p>That's not the case with the <code>IntoIterator</code>/<code>FromIterator</code> combination. 
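For example, nothing forces the output container to match the input. Here's a quick illustration of my own (not from the original examples): the exact same pipeline can just as easily target a <code>HashSet</code>, deduplicating along the way:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::collections::HashSet;\n\nfn main() {\n    let myvec = vec![1, 2, 2, 3, 3];\n\n    // Same iterator, same closure, but a different FromIterator target.\n    let doubled: HashSet&lt;i32&gt; = myvec.into_iter().map(|x| x * 2).collect();\n\n    println!(&quot;{:?}&quot;, doubled);\n}\n</code></pre>\n<p>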
<code>IntoIterator</code> destroys the original data structure, fully consuming it and producing an <code>Iterator</code>. Similarly, <code>FromIterator</code> produces a brand new data structure out of thin air, without any reference to the original data structure. Therefore, an explicit type annotation saying what the output type should be is necessary. In our program above, we did this by annotating <code>new_vec</code>. Another way is to use &quot;turbofish&quot; to annotate which <code>collect</code> to use:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec = vec![1, 2, 3, 4, 5];\n\n    let new_vec = myvec\n        .into_iter()\n        .map(|x| x * 2)\n        .collect::&lt;Vec&lt;_&gt;&gt;();\n\n    println!(&quot;{:?}&quot;, new_vec);\n}\n</code></pre>\n<p>Note that we only needed to indicate that we were collecting into a <code>Vec</code>. Rust's normal type inference was able to figure out:</p>\n<ul>\n<li>Which numeric type to use for the values</li>\n<li>That the original <code>myvec</code> should be a <code>Vec</code>, since it was produced by the <code>vec!</code> macro</li>\n</ul>\n<h2 id=\"side-effects-and-traverse\">Side effects and traverse</h2>\n<p>Alright, I want to announce to the world that I'll be doubling these values. It's easy to modify our <code>map</code>-using code in Rust to do this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec = vec![1, 2, 3, 4, 5];\n\n    let new_vec = myvec\n        .into_iter()\n        .map(|x| {\n            println!(&quot;About to double {}&quot;, x);\n            x * 2\n        })\n        .collect::&lt;Vec&lt;_&gt;&gt;();\n\n    println!(&quot;{:?}&quot;, new_vec);\n}\n</code></pre>\n<p>But Haskellers will warn you that this isn't quite so simple. 
<code>map</code> in Haskell is a pure function, meaning it doesn't allow for any side-effects (like printing to the screen). You can see this in action fairly easily:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">list :: [Int]\nlist = [1, 2, 3, 4, 5]\n\nmain :: IO ()\nmain = do\n  let newList :: [Int]\n      newList =\n        map\n          (\\x -&gt; do\n            putStrLn (&quot;About to double &quot; ++ show x)\n            pure (x * 2))\n          list\n  print newList\n</code></pre>\n<p>This code won't compile, due to the mismatch between an <code>Int</code> (a pure number) and an <code>IO Int</code> (an action with side effects which produces an <code>Int</code>):</p>\n<pre><code>Couldn&#x27;t match type &#x27;IO Int&#x27; with &#x27;Int&#x27;\nExpected type: [Int]\nActual type: [IO Int]\n</code></pre>\n<p>Instead, we need to use <code>map</code>'s more powerful cousin, <code>traverse</code> (a.k.a. <code>mapM</code>, or &quot;monadic map&quot;). <code>traverse</code> allows us to perform a series of actions, and produce a new list with all of their results. This looks like:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">list :: [Int]\nlist = [1, 2, 3, 4, 5]\n\nmain :: IO ()\nmain = do\n  newList &lt;-\n    traverse\n      (\\x -&gt; do\n        putStrLn (&quot;About to double &quot; ++ show x)\n        pure (x * 2))\n      list\n  print newList\n</code></pre>\n<p>So why the difference between Haskell and Rust here? That's because Rust is not a pure language. Any function can perform side effects, like printing to the screen. Haskell, on the other hand, doesn't allow this, and therefore we need special helper functions like <code>traverse</code> to account for the potential side effects.</p>\n<p>I won't get into the philosophical differences between the two languages. 
Suffice it to say that both approaches have merit, and both have advantages and disadvantages. Let's see where the Rust approach &quot;breaks down&quot;, and how <code>FromIterator</code> steps up to the plate.</p>\n<h2 id=\"handling-failure\">Handling failure</h2>\n<p>In the example above with Haskell, we used side effects via the <code>IO</code> type. However, <code>traverse</code> isn't limited to working with <code>IO</code>. It can work with <em>many</em> different types, anything which is considered <code>Applicative</code>. And this covers many different common needs, including error handling. For example, we can change our program to not allow doubling &quot;big&quot; numbers greater than 5:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">list :: [Int]\nlist = [1, 2, 3, 4, 5, 6]\n\nmain :: IO ()\nmain = do\n  let newList =\n        traverse\n          (\\x -&gt;\n            if x &gt; 5\n              then Left &quot;Not allowed to double big numbers&quot;\n              else Right (x * 2))\n          list\n  case newList of\n    Left err -&gt; putStrLn err\n    Right newList&#x27; -&gt; print newList&#x27;\n</code></pre>\n<p><code>Either</code> is a sum type, like an <code>enum</code> in Rust. It's equivalent to <code>Result</code> in Rust, but with different names. Instead of <code>Ok</code> and <code>Err</code>, we have <code>Right</code> (used by convention for success) and <code>Left</code> (used by convention for failure). The <code>Applicative</code> instance for it will stop processing when it encounters the first <code>Left</code>. So our program above will ultimately produce the output <code>Not allowed to double big numbers</code>. You can put as many values after the <code>6</code> in <code>list</code> as you want, and it will produce the same output. 
In fact, it will never even inspect those numbers.</p>\n<p>Coming back to Rust, let's first simply collect all of our <code>Result</code>s together into a single <code>Vec</code> to make sure the basics work:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec = vec![1, 2, 3, 4, 5, 6];\n\n    let new_vec: Vec&lt;Result&lt;i32, &amp;str&gt;&gt; = myvec\n        .into_iter()\n        .map(|x| {\n            if x &gt; 5 {\n                Err(&quot;Not allowed to double big numbers&quot;)\n            } else {\n                Ok(x * 2)\n            }\n        })\n        .collect();\n\n    println!(&quot;{:?}&quot;, new_vec);\n}\n</code></pre>\n<p>That makes sense. We've already seen that <code>.collect()</code> can take all the values in an <code>Iterator</code>'s stream and stick them into a <code>Vec</code>. And the <code>map</code> method is now generating <code>Result&lt;i32, &amp;str&gt;</code> values, so everything lines up.</p>\n<p>But this isn't the behavior we want. We want two changes:</p>\n<ul>\n<li><code>new_vec</code> should result in a <code>Result&lt;Vec&lt;i32&gt;, &amp;str&gt;</code>. In other words, it should result in either a single <code>Err</code> value, or a vector of successful results. 
Right now, it has a vector of success-or-failure values.</li>\n<li>We should immediately stop processing the original <code>Vec</code> once we see a value that's too big.</li>\n</ul>\n<p>To make it a bit more clear, it's easy enough to implement this with a <code>for</code> loop:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec = vec![1, 2, 3, 4, 5];\n\n    let mut new_vec = Vec::new();\n\n    for x in myvec {\n        if x &gt; 5 {\n            println!(&quot;Not allowed to double big numbers&quot;);\n            return;\n        } else {\n            new_vec.push(x * 2);\n        }\n    }\n\n    println!(&quot;{:?}&quot;, new_vec);\n}\n</code></pre>\n<p>But now we've lost out on our <code>map</code> entirely, and we're dropping down to using explicit loops, mutation, and short-circuiting (via <code>return</code>). In other words, this code doesn't feel nearly as elegant to me.</p>\n<p>It turns out that our original code was almost perfect. Let's see a bit of magic, and then explain how it happened. Our previous version of the code used <code>map</code> and resulted in a <code>Vec&lt;Result&lt;i32, &amp;str&gt;&gt;</code>. And we wanted <code>Result&lt;Vec&lt;i32&gt;, &amp;str&gt;</code>. 
What happens if we simply change the type to what we want?</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec = vec![1, 2, 3, 4, 5, 6];\n\n    let new_vec: Result&lt;Vec&lt;i32&gt;, &amp;str&gt; = myvec\n        .into_iter()\n        .map(|x| {\n            if x &gt; 5 {\n                Err(&quot;Not allowed to double big numbers&quot;)\n            } else {\n                Ok(x * 2)\n            }\n        })\n        .collect();\n\n    match new_vec {\n        Ok(new_vec) =&gt; println!(&quot;{:?}&quot;, new_vec),\n        Err(e) =&gt; println!(&quot;{}&quot;, e),\n    }\n}\n</code></pre>\n<p>Thanks to the power of <code>FromIterator</code>, this simply works! To understand why, let's see some <a href=\"https://doc.rust-lang.org/stable/std/iter/trait.FromIterator.html#method.from_iter-15\">documentation on <code>FromIterator</code></a>:</p>\n<blockquote>\n<p>Takes each element in the <code>Iterator</code>: if it is an <code>Err</code>, no further elements are taken, and the <code>Err</code> is returned. Should no <code>Err</code> occur, a container with the values of each <code>Result</code> is returned.</p>\n</blockquote>\n<p>And suddenly it seems that Rust has implemented <code>traverse</code> all along! This extra flexibility in the <code>FromIterator</code> setup allows us to regain the short-circuiting error-handling behavior that FP people are familiar with in <code>traverse</code>.</p>\n<p>In contrast to <code>traverse</code>, we're still dealing with two different traits (<code>IntoIterator</code> and <code>FromIterator</code>), and there's nothing preventing these from being different types. Therefore, some kind of type annotation is still necessary. On the one hand, that could be seen as a downside of Rust's approach. 
On the other hand, it allows us to be more flexible in what types we generate, which we'll look at in the next section.</p>\n<p>And finally, it turns out we can use turbofish to rescue us yet again. For example:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let myvec = vec![1, 2, 3, 4, 5, 6];\n\n    let new_vec = myvec\n        .into_iter()\n        .map(|x| {\n            if x &gt; 5 {\n                Err(&quot;Not allowed to double big numbers&quot;)\n            } else {\n                Ok(x * 2)\n            }\n        })\n        .collect::&lt;Result&lt;Vec&lt;_&gt;, _&gt;&gt;();\n\n    match new_vec {\n        Ok(new_vec) =&gt; println!(&quot;{:?}&quot;, new_vec),\n        Err(e) =&gt; println!(&quot;{}&quot;, e),\n    }\n}\n</code></pre>\n<h2 id=\"different-fromiterator-impls\">Different FromIterator impls</h2>\n<p>So far, we've only seen two implementations of <code>FromIterator</code>: <code>Vec</code> and <code>Result</code>. There are many more available. 
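As a small aside (an example of my own, not from the original post), one handy impl is <code>String</code>, which collects a stream of <code>char</code> values into an owned string:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    // String implements FromIterator&lt;char&gt;, so collect can build one directly.\n    let shouted: String = &quot;hello&quot;.chars().map(|c| c.to_ascii_uppercase()).collect();\n\n    println!(&quot;{}&quot;, shouted); // prints HELLO\n}\n</code></pre>\n<p>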
One of my favorites is <code>HashMap</code>, which lets you collect a sequence of key/value pairs into a mapping.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::collections::HashMap;\n\nfn main() {\n    let people = vec![\n        (&quot;Alice&quot;, 30),\n        (&quot;Bob&quot;, 35),\n        (&quot;Charlies&quot;, 25),\n    ].into_iter().collect::&lt;HashMap&lt;_, _&gt;&gt;();\n\n    println!(&quot;Alice is: {:?}&quot;, people.get(&quot;Alice&quot;));\n}\n</code></pre>\n<p>And due to how the <code>FromIterator</code> impl for <code>Result</code> works, you can layer these two together to collect a stream of <code>Result</code>s of pairs into a <code>Result&lt;HashMap&lt;_, _&gt;, _&gt;</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::collections::HashMap;\n\nfn main() {\n    let people = vec![\n        Ok((&quot;Alice&quot;, 30)),\n        Ok((&quot;Bob&quot;, 35)),\n        Err(&quot;Uh-oh, this didn&#x27;t work!&quot;),\n        Ok((&quot;Charlies&quot;, 25)),\n    ].into_iter().collect::&lt;Result&lt;HashMap&lt;_, _&gt;, &amp;str&gt;&gt;();\n\n    match people {\n        Err(e) =&gt; println!(&quot;Error occurred: {}&quot;, e),\n        Ok(people) =&gt; {\n            println!(&quot;Alice is: {:?}&quot;, people.get(&quot;Alice&quot;));\n        }\n    }\n}\n</code></pre>\n<h2 id=\"validation\">Validation</h2>\n<p>In the Haskell world, we have two different concepts of error collection:</p>\n<ul>\n<li><code>Either</code>, which says &quot;stop on the first error&quot;</li>\n<li><code>Validation</code>, which says &quot;collect all of the errors together&quot;</li>\n</ul>\n<p><code>Validation</code> can be very useful for things like parsing web forms. You don't want to generate just the first failure, but collect all of the failures together for producing a more user-friendly experience. 
For fun, I decided to implement this in Rust as well:</p>\n<blockquote class=\"twitter-tweet\"><p lang=\"en\" dir=\"ltr\">I&#39;m tempted to write a Validation &quot;Applicative&quot; in Rust with a FromIterator impl to collect multiple Err values. I have no real need for this, but it still seems fun.</p>&mdash; Michael Snoyman (@snoyberg) <a href=\"https://twitter.com/snoyberg/status/1311639149171159041?ref_src=twsrc%5Etfw\">October 1, 2020</a></blockquote> <script async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>\n<p>And as you'll see from the rest of that thread, a lot of the motivation for this blog post came from the Twitter replies.</p>\n<p>The implementation in Rust is fairly straightforward, and pretty easy to understand. I've <a href=\"https://github.com/snoyberg/validation-rs\">made it available on Github</a>. If there's any interest in seeing this as a crate, let me know in the issue tracker.</p>\n<p>To see this in action, let's modify our program above. 
First, I'll add the dependency to my <code>Cargo.toml</code> file:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[dependencies.validation]\ngit = &quot;https:&#x2F;&#x2F;github.com&#x2F;snoyberg&#x2F;validation-rs&quot;\nrev = &quot;0a7521f7022262bb00aea61761f76c3dd5ccefb5&quot;\n</code></pre>\n<p>And then modify the code to use the <code>Validation</code> enum instead of <code>Result</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::collections::HashMap;\nuse validation::Validation;\n\nfn main() {\n    let people = vec![\n        Ok((&quot;Alice&quot;, 30)),\n        Ok((&quot;Bob&quot;, 35)),\n        Err(&quot;Uh-oh, this didn&#x27;t work!&quot;),\n        Ok((&quot;Charlies&quot;, 25)),\n        Err(&quot;And neither did this!&quot;),\n    ].into_iter().collect::&lt;Validation&lt;HashMap&lt;_, _&gt;, Vec&lt;&amp;str&gt;&gt;&gt;();\n\n    match people.into_result() {\n        Err(errs) =&gt; {\n            println!(&quot;Errors:&quot;);\n            errs.into_iter().map(|x| println!(&quot;{}&quot;, x)).collect()\n        }\n        Ok(people) =&gt; {\n            println!(&quot;Alice is: {:?}&quot;, people.get(&quot;Alice&quot;));\n        }\n    }\n}\n</code></pre>\n<p><strong>Bonus</strong> Note the somewhat cheeky usage of <code>map</code> and <code>collect</code> to print out the errors. This is leveraging the <code>()</code> impl of <code>FromIterator</code>, which collects together a stream of <code>()</code> values into a single one.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I realize that was a bit of a rambling journey, but hopefully a fun one for Rustaceans, Haskellers, and Scala folk. 
Here are some of my takeaways:</p>\n<ul>\n<li>The <code>collect</code> method is very flexible</li>\n<li>There's no magic involved in <code>collect</code>, just the <code>FromIterator</code> trait and the behavior of the types that implement it\n<ul>\n<li>This was actually a big takeaway for me. I had somehow forgotten about <code>FromIterator</code> a few months back, and was nervous about what &quot;secret&quot; behavior <code>collect</code> was doing.</li>\n</ul>\n</li>\n<li>The downside of <code>collect</code> is that, since it's not structure preserving like <code>map</code> or <code>traverse</code>, you'll sometimes need type annotations\n<ul>\n<li>Get used to turbofish!</li>\n</ul>\n</li>\n<li>There are lots of useful impls of <code>FromIterator</code> available</li>\n</ul>\n<p>If you enjoyed this post, you may also want to check out these related topics:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/07/iterators-streams-rust-haskell/\">Iterators and Streams in Rust and Haskell</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/streaming-utf8-haskell-rust/\">Streaming UTF-8 in Haskell and Rust</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-devops/\">Rust with DevOps Success Strategies</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a></li>\n</ul>\n<p>If you're interested in learning more about FP Complete's <a href=\"https://tech.fpcomplete.com/training/\">consulting and training services</a>, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us</a> to talk with our team about how we can help you succeed with Rust, DevOps, and functional programming.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/",
        "slug": "collect-rust-traverse-haskell-scala",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Collect in Rust, traverse in Haskell and Scala",
        "description": "In this post, we'll analyze the collect method in Rust, powered by the Iterator and FromIterator traits, together with a comparison against the traverse function from Haskell and Scala",
        "updated": null,
        "date": "2020-10-06",
        "year": 2020,
        "month": 10,
        "day": 6,
        "taxonomies": {
          "tags": [
            "rust",
            "haskell",
            "scala",
            "functional programming"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "image": "images/blog/traverse-turtles.png",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/collect-rust-traverse-haskell-scala/",
        "components": [
          "blog",
          "collect-rust-traverse-haskell-scala"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "using-fromiterator-and-collect",
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/#using-fromiterator-and-collect",
            "title": "Using FromIterator and collect",
            "children": []
          },
          {
            "level": 2,
            "id": "side-effects-and-traverse",
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/#side-effects-and-traverse",
            "title": "Side effects and traverse",
            "children": []
          },
          {
            "level": 2,
            "id": "handling-failure",
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/#handling-failure",
            "title": "Handling failure",
            "children": []
          },
          {
            "level": 2,
            "id": "different-fromiterator-impls",
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/#different-fromiterator-impls",
            "title": "Different FromIterator impls",
            "children": []
          },
          {
            "level": 2,
            "id": "validation",
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/#validation",
            "title": "Validation",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2584,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/",
            "title": "Monads and GATs in nightly Rust"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/",
            "title": "An ownership puzzle with Rust, async, and hyper"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
            "title": "Philosophies of Rust and Haskell"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/",
            "title": "Short Circuit Sum in Rust"
          }
        ]
      },
      {
        "relative_path": "blog/of-course-it-compiles-right.md",
        "colocated_path": null,
        "content": "<p>I recently joined <a href=\"https://lambda.show/episodes/michael-snoyman-from-haskell-to-rust\">Matt Moore on LambdaShow</a>. We spent some time discussing Rust, and one point I made was that, in my experience with Rust, ergonomics go something like this:</p>\n<ul>\n<li>Beginner: oh cool, that worked, no problem</li>\n<li>Advanced beginner: wait... why exactly did that work 99 other times? Why is it failing this time? I'm so confused!</li>\n<li>Intermediate/advanced: oh, now I understand things really well, that's convenient</li>\n</ul>\n<p>That may seem a bit abstract. Fortunately for me, an example of that popped up almost immediately after the post went live. This is my sheepish blog post explaining how I fairly solidly misunderstood something about the borrow checker. Hopefully it will help others.</p>\n<p>Two weeks back, I wrote an offhand tweet with a bit of a code puzzle:</p>\n<blockquote class=\"twitter-tweet\"><p lang=\"en\" dir=\"ltr\">This program _looks_ like it could segfault by using a pointer to a dropped String. Who wants to guess what it actually does? <a href=\"https://t.co/gurHjdh2A7\">pic.twitter.com/gurHjdh2A7</a></p>&mdash; Michael Snoyman (@snoyberg) <a href=\"https://twitter.com/snoyberg/status/1301579676423462914?ref_src=twsrc%5Etfw\">September 3, 2020</a></blockquote>\n<script async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>\n<p>I thought this was a slightly tricky case of ownership, and hoped it would help push people to a more solid understanding of the topic. Soon after, I got a reply that gave the solution I had expected:</p>\n<blockquote class=\"twitter-tweet\" data-conversation=\"none\"><p lang=\"en\" dir=\"ltr\">Without the RefCell it would fail to compile, because you have a live &amp; with hello, then you try to borrow all_tags as &amp;mut, despite an immutable reference. 
Since we use RefCell, this becomes a runtime panic instead.</p>&mdash; Lúcás Meier (@cronokirby) <a href=\"https://twitter.com/cronokirby/status/1301582898701758465?ref_src=twsrc%5Etfw\">September 3, 2020</a></blockquote>\n<p>But then the twist: a question that made me doubt my own sanity.</p>\n<blockquote class=\"twitter-tweet\" data-conversation=\"none\"><p lang=\"en\" dir=\"ltr\">But I don&#39;t quite understand how it does that at runtime. All we keep is a plain reference, not a `Ref` (which keeps count). The `Ref` should actually get destroyed, but somehow it won&#39;t because of the &amp;str we keep. I studied the code quite a bit, but it still looks like magic.</p>&mdash; Eskimo Coder (@tuxkimo) <a href=\"https://twitter.com/tuxkimo/status/1304363365314236416?ref_src=twsrc%5Etfw\">September 11, 2020</a></blockquote>\n<p>This led me to filing <a href=\"https://github.com/rust-lang/rust/issues/76601\">a bogus bug report with the Rust team</a>. Fortunately for me, Jonas Schievink had mercy and quickly pointed me to the <a href=\"https://doc.rust-lang.org/reference/destructors.html?highlight=temporary,life#temporary-lifetime-extension\">documentation on temporary lifetime extension</a>, which explains the whole situation.</p>\n<p>If you've read this much, and everything made perfect sense, congratulations! You probably don't need to bother reading the rest of the post. But if anything is unclear, keep reading. I'll try to make this as clear as possible.</p>\n<p>And if the explanation below still doesn't make sense, may I recommend FP Complete's <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a> to brush up on ownership?</p>\n<h2 id=\"borrow-rules\">Borrow rules</h2>\n<p>Arguably the key feature of Rust is its borrow checker. One of the core rules of the borrow checker is that you cannot access data that is mutably referenced elsewhere. 
Or said more directly: you can either immutably borrow data multiple times, or mutably borrow it once, but not both at the same time. Usually, we let the borrow checker enforce this rule. And it enforces that rule at compile time.</p>\n<p>However, there are some situations where a statically checked rule like that is too restrictive. In such cases, the Rust standard library provides <em>cells</em>, which let you move this borrow checking from compile time (via static analysis) to runtime (via dynamic counters). This is known as <em>interior mutability</em>. And a common type for this is a <a href=\"https://doc.rust-lang.org/stable/std/cell/struct.RefCell.html\"><code>RefCell</code></a>.</p>\n<p>With a <code>RefCell</code>, the checking occurs at runtime. Let's demonstrate how that works. First, consider this program that fails to compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let mut age: u32 = 30;\n\n    let age_ref: &amp;u32 = &amp;age;\n\n    let age_mut_ref: &amp;mut u32 = &amp;mut age;\n    *age_mut_ref += 1;\n\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_ref);\n}\n</code></pre>\n<p>We try to take both an immutable reference <em>and</em> a mutable reference to the value <code>age</code> simultaneously. This doesn't work out too well:</p>\n<pre><code>error[E0502]: cannot borrow `age` as mutable because it is also borrowed as immutable\n --&gt; src\\main.rs:6:33\n  |\n4 |     let age_ref: &amp;u32 = &amp;age;\n  |                         ---- immutable borrow occurs here\n5 |\n6 |     let age_mut_ref: &amp;mut u32 = &amp;mut age;\n  |                                 ^^^^^^^^ mutable borrow occurs here\n...\n9 |     println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_ref);\n  |                                                      ------- immutable borrow later used here\n</code></pre>\n<p>The right thing to do is to fix this code. 
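One straightforward fix, sketched here as an aside of my own, is to finish with the mutable borrow before taking the immutable one; since the mutable borrow ends at its last use, the two borrows never overlap:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let mut age: u32 = 30;\n\n    // Mutate first: this mutable borrow ends after its last use below.\n    let age_mut_ref: &amp;mut u32 = &amp;mut age;\n    *age_mut_ref += 1;\n\n    // Only now take the immutable borrow, so the borrows never overlap.\n    let age_ref: &amp;u32 = &amp;age;\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_ref);\n}\n</code></pre>\n<p>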
But let's do the wrong thing! Instead of trying to fix it correctly, we're going to use <code>RefCell</code> to replace our compile time checks (which prevent the code from building) with runtime checks (which allow the code to build, and then fail at runtime). Let's check that out:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::cell::{Ref, RefMut, RefCell};\nfn main() {\n    let age: RefCell&lt;u32&gt; = RefCell::new(30);\n\n    let age_ref: Ref&lt;u32&gt; = age.borrow();\n\n    let mut age_mut_ref: RefMut&lt;u32&gt; = age.borrow_mut();\n    *age_mut_ref += 1;\n\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_ref);\n}\n</code></pre>\n<p>It's instructive to compare this code with the previous code. It looks remarkably similar! We've replaced <code>&amp;u32</code> with <code>Ref&lt;u32&gt;</code>, <code>&amp;mut u32</code> with <code>RefMut&lt;u32&gt;</code>, and <code>&amp;age</code> and <code>&amp;mut age</code> with <code>age.borrow()</code> and <code>age.borrow_mut()</code>, respectively. You may be wondering: what are those <code>Ref</code> and <code>RefMut</code> things? Hold that thought.</p>\n<p>This code surprisingly compiles. And here's the runtime output (using Rust Nightly, which gives a slightly nicer error message):</p>\n<pre><code>thread &#x27;main&#x27; panicked at &#x27;already borrowed: BorrowMutError&#x27;, src\\main.rs:7:44\n</code></pre>\n<p>That looks a lot like the error message we saw above from the compiler. That's no accident: these are the same error showing up in two different ways.</p>\n<h2 id=\"ref-and-refmut\">Ref and RefMut</h2>\n<p>Our code panics when it calls <code>age.borrow_mut()</code>. Something seems to <em>know</em> that the <code>age_ref</code> variable exists. And in fact, that's basically true. When we called <code>age.borrow()</code>, a counter on the <code>RefCell</code> was incremented. 
As long as <code>age_ref</code> stays alive, that counter will remain active. When <code>age_ref</code> goes out of scope, the <code>Ref&lt;u32&gt;</code> will be dropped, and the drop will cause the counter to be decremented. The same logic applies to the <code>age_mut_ref</code>. Let's make two modifications to our code. First, there's no need to call <code>age.borrow()</code> before <code>age.borrow_mut()</code>. Let's slightly rearrange the code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::cell::{Ref, RefMut, RefCell};\nfn main() {\n    let age: RefCell&lt;u32&gt; = RefCell::new(30);\n\n    let mut age_mut_ref: RefMut&lt;u32&gt; = age.borrow_mut();\n    *age_mut_ref += 1;\n\n    let age_ref: Ref&lt;u32&gt; = age.borrow();\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_ref);\n}\n</code></pre>\n<p>This compiles, but still gives a runtime error. However, it's a slightly different one:</p>\n<pre><code>thread &#x27;main&#x27; panicked at &#x27;already mutably borrowed: BorrowError&#x27;, src\\main.rs:8:33\n</code></pre>\n<p>Now the problem is that, when we try to call <code>age.borrow()</code>, the <code>age_mut_ref</code> is still active. Fortunately, we can fix that by manually dropping it before the <code>age.borrow()</code> call:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::cell::{Ref, RefMut, RefCell};\nfn main() {\n    let age: RefCell&lt;u32&gt; = RefCell::new(30);\n\n    let mut age_mut_ref: RefMut&lt;u32&gt; = age.borrow_mut();\n    *age_mut_ref += 1;\n    std::mem::drop(age_mut_ref);\n\n    let age_ref: Ref&lt;u32&gt; = age.borrow();\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_ref);\n}\n</code></pre>\n<p>And finally, our program not only compiles, but runs successfully! Now I know that I'm 31 years old! 
(Or at least I wish I still was.)</p>\n<p>We have another mechanism for forcing the value to drop: an inner block. If we create a block within the <code>main</code> function, it will have its own scope, and the <code>age_mut_ref</code> will automatically be dropped, no need for <code>std::mem::drop</code>. That looks like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::cell::{Ref, RefMut, RefCell};\nfn main() {\n    let age: RefCell&lt;u32&gt; = RefCell::new(30);\n\n    {\n        let mut age_mut_ref: RefMut&lt;u32&gt; = age.borrow_mut();\n        *age_mut_ref += 1;\n    }\n\n    let age_ref: Ref&lt;u32&gt; = age.borrow();\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_ref);\n}\n</code></pre>\n<p>Once again, this compiles and runs. Looking back, we can hopefully now understand why <code>Ref</code> and <code>RefMut</code> are necessary. If <code>.borrow()</code> and <code>.borrow_mut()</code> simply returned actual references (immutable or mutable), there would be no <code>struct</code> with a <code>Drop</code> impl to ensure that the internal counters in <code>RefCell</code> were decremented when they go out of scope. So the world now makes sense.</p>\n<h2 id=\"no-reference-without-a-ref\">No reference without a Ref</h2>\n<p>Here's something cool: you can borrow a normal reference (e.g. <code>&amp;u32</code>) from a <code>Ref</code> (e.g. <code>Ref&lt;u32&gt;</code>). 
Check this out:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::cell::{Ref, RefMut, RefCell};\nfn main() {\n    let age: RefCell&lt;u32&gt; = RefCell::new(30);\n\n    {\n        let mut age_mut_ref: RefMut&lt;u32&gt; = age.borrow_mut();\n        *age_mut_ref += 1;\n    }\n\n    let age_ref: Ref&lt;u32&gt; = age.borrow();\n    let age_reference: &amp;u32 = &amp;age_ref;\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_reference);\n}\n</code></pre>\n<p><code>age_ref</code> is a <code>Ref&lt;u32&gt;</code>, but <code>age_reference</code> is a <code>&amp;u32</code>. This is a compile-time-checked reference. We're now saying that the lifetime of <code>age_reference</code> cannot outlive the lifetime of <code>age_ref</code>. As it stands, that's true, and everything compiles and runs correctly. But we can break that really easily using either <code>std::mem::drop</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::cell::{Ref, RefMut, RefCell};\nfn main() {\n    let age: RefCell&lt;u32&gt; = RefCell::new(30);\n\n    {\n        let mut age_mut_ref: RefMut&lt;u32&gt; = age.borrow_mut();\n        *age_mut_ref += 1;\n    }\n\n    let age_ref: Ref&lt;u32&gt; = age.borrow();\n    let age_reference: &amp;u32 = &amp;age_ref;\n    std::mem::drop(age_ref);\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_reference);\n}\n</code></pre>\n<p>Or by using inner blocks:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::cell::{Ref, RefMut, RefCell};\nfn main() {\n    let age: RefCell&lt;u32&gt; = RefCell::new(30);\n\n    {\n        let mut age_mut_ref: RefMut&lt;u32&gt; = age.borrow_mut();\n        *age_mut_ref += 1;\n    }\n\n    let age_reference: &amp;u32 = {\n        let age_ref: Ref&lt;u32&gt; = age.borrow();\n        &amp;age_ref\n    };\n    
println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_reference);\n}\n</code></pre>\n<p>The latter results in the error message:</p>\n<pre><code>error[E0597]: `age_ref` does not live long enough\n  --&gt; src\\main.rs:12:9\n   |\n10 |     let age_reference: &amp;u32 = {\n   |         ------------- borrow later stored here\n11 |         let age_ref: Ref&lt;u32&gt; = age.borrow();\n12 |         &amp;age_ref\n   |         ^^^^^^^^ borrowed value does not live long enough\n13 |     };\n   |     - `age_ref` dropped here while still borrowed\n</code></pre>\n<p>This makes sense hopefully: <code>age_reference</code> is borrowing from <code>age_ref</code>, and therefore cannot outlive it.</p>\n<h2 id=\"the-false-fail\">The false fail</h2>\n<p>Alright, our inner block currently looks like this, and refuses to compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let age_reference: &amp;u32 = {\n    let age_ref: Ref&lt;u32&gt; = age.borrow();\n    &amp;age_ref\n};\n</code></pre>\n<p><code>age_ref</code> is really a useless temporary variable inside that block. I assign a value to it, and then immediately borrow from that variable and never use it again. It should have no impact on our program to combine that into a single line within a block, right? Wrong. Check out this program:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::cell::{RefMut, RefCell};\nfn main() {\n    let age: RefCell&lt;u32&gt; = RefCell::new(30);\n\n    {\n        let mut age_mut_ref: RefMut&lt;u32&gt; = age.borrow_mut();\n        *age_mut_ref += 1;\n    }\n\n    let age_reference: &amp;u32 = {\n        &amp;age.borrow()\n    };\n    println!(&quot;Happy birthday, you&#x27;re {} years old!&quot;, age_reference);\n}\n</code></pre>\n<p>This looks almost identical to the code above. But this code compiles and runs successfully. What gives?!? 
It turns out, creating our temporary variable wasn't quite as meaningless as we thought. That's thanks to something called <a href=\"https://doc.rust-lang.org/reference/destructors.html?highlight=temporary,life#temporary-lifetime-extension\"><em>temporary lifetime extension</em></a>. Let me start with a caveat from the docs themselves:</p>\n<blockquote>\n<p><strong>Note</strong>: The exact rules for temporary lifetime extension are subject to change. This is describing the current behavior only.</p>\n</blockquote>\n<p>With that out of the way, let's quote once more from the docs:</p>\n<blockquote>\n<p>The temporary scopes for expressions in <code>let</code> statements are sometimes <em>extended</em> to the scope of the block containing the <code>let</code> statement. This is done when the usual temporary scope would be too small, based on certain syntactic rules.</p>\n</blockquote>\n<p>OK, I'm all done quoting. The documentation there is pretty good at explaining things. For our case above, let's look at the code in question:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let age_reference: &amp;u32 = {\n    &amp;age.borrow()\n};\n</code></pre>\n<p><code>age.borrow()</code> creates a value of type <code>Ref&lt;u32&gt;</code>. What variable holds that value? Trick question: there isn't one. This value is <em>temporary</em>. We use temporary values in programming all the time. In <code>(1 + 2) + 5</code>, the expression <code>1 + 2</code> generates a temporary <code>3</code>, which is then added to <code>5</code> and thrown away. Normally these temporaries aren't terribly interesting.</p>\n<p>But in the context of lifetimes and borrow checkers, they are. 
Taken at the most literal, <code>{ &amp;age.borrow() }</code> should behave as follows:</p>\n<ul>\n<li>Create a new block</li>\n<li>Call <code>age.borrow()</code> to get a <code>Ref&lt;u32&gt;</code></li>\n<li>That <code>Ref&lt;u32&gt;</code> is owned by the block around this expression</li>\n<li>Borrow a reference to that <code>Ref&lt;u32&gt;</code></li>\n<li>Try to return that reference as the result of the block</li>\n<li>Realize that reference refers to a value that was dropped with the block, and therefore lifetime rules are violated</li>\n</ul>\n<p>But this kind of thing would pop up all the time! Consider the incredibly simple examples from the docs that I promised not to quote from anymore (borrowing code snippets is different, OK?):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let x = &amp;mut 0;\n&#x2F;&#x2F; Usually a temporary would be dropped by now, but the temporary for `0` lives\n&#x2F;&#x2F; to the end of the block.\nprintln!(&quot;{}&quot;, x);\n</code></pre>\n<p>It turns out that strictly following lexical scoping rules for lifetimes wouldn't be ergonomic. So there's a special case to make it feel right.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>Firstly, I hope this was a good example of my comment about ergonomics. I never would have thought about <code>let x = &amp;mut 0</code> as a beginner: yeah, sure, I can borrow a reference to a number. Cool. Then, with a bit more experience, it suddenly seems shocking: what's the lifetime of <code>0</code>? And finally, with just a bit more experience (and the kind help of Rust issue tracker maintainers), it makes sense again.</p>\n<p>Secondly, I hope this semi-deep dive into how <code>RefCell</code> moves borrow rule checking to runtime helps elucidate some things. 
In my opinion, this was one of the harder concepts to grok in my Rust learning journey.</p>\n<p>Thirdly, I hope seeing the temporary lifetime extension rules helps clarify why some things work that you thought wouldn't. I know I've been in the middle of writing something before, been surprised the borrow checker didn't punch me in the face, and then happily went on my way instead of questioning why everything went better than expected.</p>\n<p>The tweets I started this off with discuss a more advanced version than I covered in the rest of the post. I'd recommend going back to the top and making sure the code and explanations all make sense.</p>\n<p>Want to learn more about Rust? Check out FP Complete's <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a>, or read about our <a href=\"https://tech.fpcomplete.com/training/\">training courses</a>. Also, you may be interested in these related posts:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/different-levels-async-rust/\">Different levels of async in Rust</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/10/is-rust-functional/\">Is Rust functional?</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/\">Serverless Rust using WASM and CloudFlare</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/",
        "slug": "of-course-it-compiles-right",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Rust: Of course it compiles, right?",
        "description": "An example of some code that looks like it should fail at runtime, then looks like it should fail at compile time, and why rustc lets it slide",
        "updated": null,
        "date": "2020-09-21",
        "year": 2020,
        "month": 9,
        "day": 21,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "image": "images/blog/of-course.png",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/of-course-it-compiles-right/",
        "components": [
          "blog",
          "of-course-it-compiles-right"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "borrow-rules",
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/#borrow-rules",
            "title": "Borrow rules",
            "children": []
          },
          {
            "level": 2,
            "id": "ref-and-refmut",
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/#ref-and-refmut",
            "title": "Ref and RefMut",
            "children": []
          },
          {
            "level": 2,
            "id": "no-reference-without-a-ref",
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/#no-reference-without-a-ref",
            "title": "No reference without a Ref",
            "children": []
          },
          {
            "level": 2,
            "id": "the-false-fail",
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/#the-false-fail",
            "title": "The false fail",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2435,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/where-rust-fits-in-your-organization.md",
        "colocated_path": null,
        "content": "<p>Rust is a relatively new and promising language that offers improvements in software\nin terms of safety and speed. We'll cover if adopting Rust into your organization\nmakes sense and where you would want to add it to an existing software stack.</p>\n<h2 id=\"advantages-of-rust\">Advantages of Rust</h2>\n<h3 id=\"background\">Background</h3>\n<p>Rust was originally created by Mozilla in order to replace C++ in the Firefox \nbrowser with a safer alternative.\nC++ is not a memory safe language, and for Mozilla memory safety issues were the \nmain culprit for numerous bugs and security vulnerabilities in the Firefox browser.</p>\n<p>To replace it Mozilla needed a language that would not require a runtime or a \ngarbage collector. No language existed at that time which reasonably met those\nrequirements, so instead Mozilla worked to implement their own language. \nOut of that endeavor sprung Rust.</p>\n<h3 id=\"adoption-and-use-beyond-mozilla\">Adoption and use beyond Mozilla</h3>\n<p>Since its creation the language has gained widespread adoption and use far\nbeyond Mozilla and the Firefox browser. \nThis is not surprising, as the language is generally considered to be superbly well\ndesigned, adopting many programming language advances that have been made in the\nlast 20 years. \nAdd to that it's incredibly fast - on the same level as idiomatic C and C++ code.</p>\n<h3 id=\"language-design\">Language Design</h3>\n<p>Another reason for its popularity and growing use is that Rust doesn't re-implement\nbug-causing language design choices.</p>\n<p>With Rust, errors induced by missing null checking and poor error handling, as \nwell as other classes of coding errors, are ruled out by the design of the language \nand the strong type checks by the Rust compiler.</p>\n<p>For example instead of allowing for things to be <code>null</code> or <code>nil</code>, Rust has enum \ntypes. 
Using these, a Rust programmer can handle failure cases in \na reasonable and safe way with useful enum types like \n<a href=\"https://doc.rust-lang.org/std/option/enum.Option.html\"><code>Option</code></a> and\n<a href=\"https://doc.rust-lang.org/std/result/enum.Result.html\"><code>Result</code></a>.</p>\n<p>Compare this to a language like Go, which doesn't provide this and instead implements\nthe <code>null</code> pointer. Doing so essentially creates a dangerous escape door out of \nthe type system that infects every type in the language. \nAs a result, a Go programmer could easily forget to check for <code>null</code> and overlook \ncases where a <code>null</code> value could be returned.</p>\n<p>So if you have a Python 2 code base and you're trying to decide whether to \nre-implement it in Go, use Rust instead!</p>\n<h2 id=\"rust-in-the-wild\">Rust in the wild</h2>\n<h3 id=\"rust-adoption-success-stories\">Rust Adoption Success Stories</h3>\n<p>In 2020 Rust was once again (for 5 years running!) 
the most loved programming \nlanguage according to the \n<a href=\"https://insights.stackoverflow.com/survey/2020#technology-most-loved-dreaded-and-wanted-languages-loved\">Stack Overflow developer survey</a>.</p>\n<p>Just because software developers love a language, though, does not mean that \nadopting it will be a success for your organization.</p>\n<p>Some of the best success stories for companies that have adopted Rust come from \nthose that isolated some small but critical piece of their software and \nre-implemented it in Rust.</p>\n<p>In a large organization, Rust is extremely useful in a scenario like this, where\na small but rate-limiting piece of the software stack can be re-written in Rust.\nThis gives the organization the benefits of adopting Rust in terms of performant,\nfast software, but without requiring it to adopt the language across the board.\nAnd because Rust doesn't bring its own competing runtime and garbage collector, \nit fits this role phenomenally well.</p>\n<h3 id=\"large-companies-that-count-themselves-as-rustaceans\">Large Companies that count themselves as Rustaceans</h3>\n<p>Large companies like Microsoft now expound on Rust being \n<a href=\"https://thenewstack.io/microsoft-rust-is-the-industrys-best-chance-at-safe-systems-programming/\">the future of safe software development</a>\nand have <a href=\"https://medium.com/the-innovation/how-microsoft-is-adopting-rust-e0f8816566ba\">adopted using it</a>.\nOther companies like Amazon have <a href=\"https://seanmonstar.com/post/617213413024759808/next-up-aws\">chosen Rust</a>\nmore and more for new critical pieces of cloud infrastructure software.</p>\n<p>Apple, Google, Facebook, Cloudflare, and Dropbox (to name a few) also all now count\nthemselves as Rust adopters.</p>\n<h2 id=\"cost-and-tradeoffs-of-rust\">Cost and Tradeoffs of Rust</h2>\n<h3 id=\"fighting-the-rust-compiler\">Fighting the Rust Compiler</h3>\n<p>One of the key reasons to use Rust is to limit (or completely 
eliminate) entire\nclasses of runtime bugs and errors. The drawback is that with Rust's strong type\nsystem and compile time checks, you will end up seeing a fair number more compile time \nerrors in your code. Some developers find this unnerving and become frustrated.\nThis is especially true if they're used to less safe languages (like JavaScript or C++)\nthat ignore certain categories of programming mistakes at compile time and leave \nthem as surprises when the software is run.</p>\n<p>Some organizations are okay with this trade-off and the associated cost \nof discovering errors in production.\nIn these scenarios, it may be the case that the code being written is not \nincredibly critical and shipping buggy code to production is tolerable \n(to a certain degree). </p>\n<h3 id=\"development-time\">Development Time</h3>\n<p>Rust also brings with it a certain cost in terms of the time it takes to iterate\non and develop. This is something associated with all compiled languages and it's\nnot exclusive to Rust, but it's worth considering.\nRust might not be a good fit if your organization's projects consist of \nrelatively simple codebases where the added compile time is not worth it.</p>\n<h2 id=\"is-rust-right-for-your-organization\">Is Rust Right for Your Organization?</h2>\n<p>Rust is well suited to situations where having performant, resource efficient code\nmakes a huge difference for the larger overall product.\nIf your organization could benefit from isolating critical pieces of its software\nstack that meet this description, then you should consider adopting and using Rust.\nThe unique qualities of Rust mean that you don't need to adopt Rust across your \nentire organization to see a meaningful difference.</p>\n<p>In addition to that, Rust is seeing major adoption outside its original target \nuse case as a systems language. 
More and more it's being used for web servers,\nweb dev via WebAssembly, game development, and general-purpose programming.\nRust has become a full-stack language with a huge range of supported use cases. </p>\n<p>If you'd like to know more about Rust and how adopting it could make a difference\nin your organization, then please reach out to FP Complete! \nIf you have a Rust project you want to get started on, or if you would like \nRust training for your team, FP Complete can help.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/",
        "slug": "where-rust-fits-in-your-organization",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Where Rust fits in your organization",
        "description": "We'll cover if adopting Rust into your organization makes sense and where you would want to add it to an existing software stack.",
        "updated": null,
        "date": "2020-09-16",
        "year": 2020,
        "month": 9,
        "day": 16,
        "taxonomies": {
          "tags": [
            "devops",
            "rust",
            "insights"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mike McGirr",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/where-rust-fits-in-your-organization/",
        "components": [
          "blog",
          "where-rust-fits-in-your-organization"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "advantages-of-rust",
            "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#advantages-of-rust",
            "title": "Advantages of Rust",
            "children": [
              {
                "level": 3,
                "id": "background",
                "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#background",
                "title": "Background",
                "children": []
              },
              {
                "level": 3,
                "id": "adoption-and-use-beyond-mozilla",
                "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#adoption-and-use-beyond-mozilla",
                "title": "Adoption and use beyond Mozilla",
                "children": []
              },
              {
                "level": 3,
                "id": "language-design",
                "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#language-design",
                "title": "Language Design",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "rust-in-the-wild",
            "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#rust-in-the-wild",
            "title": "Rust in the wild",
            "children": [
              {
                "level": 3,
                "id": "rust-adoption-success-stories",
                "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#rust-adoption-success-stories",
                "title": "Rust Adoption Success Stories",
                "children": []
              },
              {
                "level": 3,
                "id": "large-companies-that-count-themselves-as-rustaceans",
                "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#large-companies-that-count-themselves-as-rustaceans",
                "title": "Large Companies that count themselves as Rustaceans",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "cost-and-tradeoffs-of-rust",
            "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#cost-and-tradeoffs-of-rust",
            "title": "Cost and Tradeoffs of Rust",
            "children": [
              {
                "level": 3,
                "id": "fighting-the-rust-compiler",
                "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#fighting-the-rust-compiler",
                "title": "Fighting the Rust Compiler",
                "children": []
              },
              {
                "level": 3,
                "id": "development-time",
                "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#development-time",
                "title": "Development Time",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "is-rust-right-for-your-organization",
            "permalink": "https://tech.fpcomplete.com/blog/where-rust-fits-in-your-organization/#is-rust-right-for-your-organization",
            "title": "Is Rust Right for Your Organization?",
            "children": []
          }
        ],
        "word_count": 1050,
        "reading_time": 6,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/",
            "title": "An ownership puzzle with Rust, async, and hyper"
          }
        ]
      },
      {
        "relative_path": "blog/avoiding-duplicating-strings-rust.md",
        "colocated_path": null,
        "content": "<p><em>Based on actual events.</em></p>\n<p>Let's say you've got a blog. The blog has a bunch of posts. Each post has a title and a set of tags. The metadata for these posts is all contained in TOML files in a single directory. (If you use <a href=\"https://getzola.org/\">Zola</a>, you're pretty close to that.) And now you need to generate a CSV file showing a matrix of blog posts and their tags. Seems like a great job for Rust!</p>\n<p>In this post, we're going to:</p>\n<ul>\n<li>Explore how we'd solve this (fairly simple) problem</li>\n<li>Investigate how Rust's types tell us a lot about memory usage</li>\n<li>Play with some nice and not-so-nice ways to optimize our program</li>\n</ul>\n<p>Onwards!</p>\n<h2 id=\"program-behavior\">Program behavior</h2>\n<p>We've got a bunch of TOML files sitting in the <code>posts</code> directory. Here are some example files:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\"># devops-for-developers.toml\ntitle = &quot;DevOps for (Skeptical) Developers&quot;\ntags = [&quot;dev&quot;, &quot;devops&quot;]\n\n# rust-devops.toml\ntitle = &quot;Rust with DevOps&quot;\ntags = [&quot;devops&quot;, &quot;rust&quot;]\n</code></pre>\n<p>We want to create a CSV file that looks like this:</p>\n<pre data-lang=\"csv\" class=\"language-csv \"><code class=\"language-csv\" data-lang=\"csv\">Title,dev,devops,rust,streaming\nDevOps for (Skeptical) Developers,true,true,false,false\nRust with DevOps,false,true,true,false\nServerless Rust using WASM and Cloudflare,false,true,true,false\nStreaming UTF-8 in Haskell and Rust,false,false,true,true\n</code></pre>\n<p>To make this happen, we need to:</p>\n<ul>\n<li>Iterate through the files in the <code>posts</code> directory</li>\n<li>Load and parse each TOML file</li>\n<li>Collect a set of all tags present in all posts</li>\n<li>Collect the parsed post information</li>\n<li>Create a CSV file from that information</li>\n</ul>\n<p>Not 
too bad, right?</p>\n<h2 id=\"setup\">Setup</h2>\n<p>You should make sure you've <a href=\"https://www.rust-lang.org/tools/install\">installed the Rust tools</a>. Then you can create a new empty project with <code>cargo new tagcsv</code>.</p>\n<p>Later on, we're going to play with some unstable language features, so let's opt into a nightly version of the compiler. To do this, create a <code>rust-toolchain</code> file containing:</p>\n<pre><code>nightly-2020-08-29\n</code></pre>\n<p>Then add the following dependencies to your <code>Cargo.toml</code> file:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[dependencies]\ncsv = &quot;1.1.3&quot;\nserde = &quot;1.0.115&quot;\nserde_derive = &quot;1.0.115&quot;\ntoml = &quot;0.5.6&quot;\n</code></pre>\n<p>OK, now we can finally work on some code!</p>\n<h2 id=\"first-version\">First version</h2>\n<p>We're going to use the <code>toml</code> crate to parse our metadata files. <code>toml</code> is built on top of <code>serde</code>, and we can conveniently use <code>serde_derive</code> to automatically derive a <code>Deserialize</code> implementation for a <code>struct</code> that represents that metadata. 
So we'll start off our program with:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use serde_derive::Deserialize;\nuse std::collections::HashSet;\n\n#[derive(Deserialize)]\nstruct Post {\n    title: String,\n    tags: HashSet&lt;String&gt;,\n}\n</code></pre>\n<p>Next, we'll define our <code>main</code> function to load the data:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() -&gt; Result&lt;(), std::io::Error&gt; {\n    &#x2F;&#x2F; Collect all tags across all of the posts\n    let mut all_tags: HashSet&lt;String&gt; = HashSet::new();\n    &#x2F;&#x2F; And collect the individual posts\n    let mut posts: Vec&lt;Post&gt; = Vec::new();\n\n    &#x2F;&#x2F; Read in the files in the posts directory\n    let dir = std::fs::read_dir(&quot;posts&quot;)?;\n    for entry in dir {\n        &#x2F;&#x2F; Error handling\n        let entry = entry?;\n        &#x2F;&#x2F; Read the file contents as a String\n        let contents = std::fs::read_to_string(entry.path())?;\n        &#x2F;&#x2F; Parse the contents with the toml crate\n        let post: Post = toml::from_str(&amp;contents)?;\n        &#x2F;&#x2F; Add all of the tags to the all_tags set\n        for tag in &amp;post.tags {\n            all_tags.insert(tag.clone());\n        }\n        &#x2F;&#x2F; Update the Vec of posts\n        posts.push(post);\n    }\n    &#x2F;&#x2F; Generate the CSV output\n    gen_csv(&amp;all_tags, &amp;posts)?;\n    Ok(())\n}\n</code></pre>\n<p>And finally, let's define our <code>gen_csv</code> function to take the set of tags and the <code>Vec</code> of posts and generate the output file:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn gen_csv(all_tags: &amp;HashSet&lt;String&gt;, posts: &amp;[Post]) -&gt; Result&lt;(), std::io::Error&gt; {\n    &#x2F;&#x2F; Open the file for output\n    let mut writer = 
csv::Writer::from_path(&quot;tag-matrix.csv&quot;)?;\n\n    &#x2F;&#x2F; Generate the header, with the word &quot;Title&quot; and then all of the tags\n    let mut header = vec![&quot;Title&quot;];\n    for tag in all_tags.iter() {\n        header.push(tag);\n    }\n    writer.write_record(header)?;\n\n    &#x2F;&#x2F; Print out a separate row for each post\n    for post in posts {\n        &#x2F;&#x2F; Create a record with the post title...\n        let mut record = vec![post.title.as_str()];\n        for tag in all_tags {\n            &#x2F;&#x2F; and then a true or false for each tag name\n            let field = if post.tags.contains(tag) {\n                &quot;true&quot;\n            } else {\n                &quot;false&quot;\n            };\n            record.push(field);\n        }\n        writer.write_record(record)?;\n    }\n    writer.flush()?;\n    Ok(())\n}\n</code></pre>\n<p>Side note: it would be slightly nicer to alphabetize the set of tags, which you can do by collecting all of the tags into a <code>Vec</code> and then sorting it. I had that previously, but removed it in the code above to reduce incidental noise to the example. If you feel like having fun, try adding that back.</p>\n<p>Anyway, this program works exactly as we want, and produces a CSV file. Perfect, right?</p>\n<h2 id=\"let-the-types-guide-you\">Let the types guide you</h2>\n<p>I love type-driven programming. I love the idea that looking at the types tells you a lot about the behavior of your program. And in Rust, the types can often tell you about the <em>memory usage</em> of your program. I want to focus on two lines, and then prove a point with a third. 
Consider:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">tags: HashSet&lt;String&gt;,\n</code></pre>\n<p>and</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let mut all_tags: HashSet&lt;String&gt; = HashSet::new();\n</code></pre>\n<p>Firstly, I love the fact that the types tell us so much about expected behavior. The tags are a <em>set</em>: the order is unimportant, and there are no duplicates. That makes sense. We don't want to list &quot;devops&quot; twice in our set of all tags. And there's nothing inherently &quot;first&quot; or &quot;second&quot; about &quot;dev&quot; vs &quot;rust&quot;. And we know that tags are arbitrary pieces of textual data. Awesome.</p>\n<p>But what I <em>really</em> like here is that it tells us about memory usage. Each post has its own copy of each tag. So does the <code>all_tags</code> set. How do I know this? Easy: because that's exactly what <code>String</code> means. There's no possibility of data sharing, at all. If we have 200 posts tagged &quot;dev&quot;, we will have 201 copies of the string &quot;dev&quot; in memory (200 for the posts, and one more in the <code>all_tags</code> set).</p>\n<p>And now that we've seen it in the types, we can see evidence of it in the implementation too:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">all_tags.insert(tag.clone());\n</code></pre>\n<p>That <code>.clone()</code> bothered me when I first wrote it. And that's what got me to look at the types, which bothered me further.</p>\n<p>In reality, this is nothing to worry about. Even with 1,000 posts, averaging 5 tags, with each tag averaging 20 bytes, this will only take up an extra 100,000 bytes of memory. So optimizing this away is not a good use of our time. We're much better off doing something else.</p>\n<p>But I wanted to have fun. 
And if you're reading this post, I think you want to continue this journey too. Onwards!</p>\n<h2 id=\"rc\">Rc</h2>\n<p>This isn't the first solution I tried. But it's the first one that worked easily. So we'll start here.</p>\n<p>The first thing we have to change is our types. As long as we have <code>HashSet&lt;String&gt;</code>, we know for a fact that we'll have extra copies of the data. This seems like a nice use case for <code>Rc</code>. <code>Rc</code> uses reference counting to let multiple values share ownership of another value. Sounds like exactly what we want!</p>\n<p>My approach here is to use compiler-error-driven development, and I encourage you to play along with your own copy of the code. First, let's <code>use</code> <code>Rc</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::rc::Rc;\n</code></pre>\n<p>Next, let's change our definition of <code>Post</code> to use an <code>Rc&lt;String&gt;</code> instead of <code>String</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Deserialize)]\nstruct Post {\n    title: String,\n    tags: HashSet&lt;Rc&lt;String&gt;&gt;,\n}\n</code></pre>\n<p>The compiler doesn't like this very much. We can't derive <code>Deserialize</code> for an <code>Rc&lt;String&gt;</code>. So instead, let's make a <code>RawPost</code> struct for the deserializing, and then dedicate <code>Post</code> for holding the data with <code>Rc&lt;String&gt;</code>. 
In other words:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Deserialize)]\nstruct RawPost {\n    title: String,\n    tags: HashSet&lt;String&gt;,\n}\n\nstruct Post {\n    title: String,\n    tags: HashSet&lt;Rc&lt;String&gt;&gt;,\n}\n</code></pre>\n<p>And then, when parsing the <code>toml</code>, we'll parse into a <code>RawPost</code> type:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let post: RawPost = toml::from_str(&amp;contents)?;\n</code></pre>\n<p>If you're following along, you'll only have one error message at this point about <code>posts.push(post);</code> having a mismatch between <code>Post</code> and <code>RawPost</code>. But before we address that, let's make one more type change above. I want to make <code>all_tags</code> contain <code>Rc&lt;String&gt;</code>.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let mut all_tags: HashSet&lt;Rc&lt;String&gt;&gt; = HashSet::new();\n</code></pre>\n<p>OK, now we've got some nice error messages about mismatches between <code>Rc&lt;String&gt;</code> and <code>String</code>. This is where we have to be careful. The easiest thing to do would be to simply wrap our <code>String</code>s in an <code>Rc</code> and end up with lots of copies of <code>String</code>. 
Let's implement the next bit incorrectly first to see what I'm talking about.</p>\n<p>At this point in our code rewrite, we've got a <code>RawPost</code>, and we need to:</p>\n<ul>\n<li>Add its tags to <code>all_tags</code></li>\n<li>Create a new <code>Post</code> value based on the <code>RawPost</code></li>\n<li>Add the <code>Post</code> to the <code>posts</code> <code>Vec</code></li>\n</ul>\n<p>Here's the simple and wasteful implementation:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let raw_post: RawPost = toml::from_str(&amp;contents)?;\n\nlet mut post_tags: HashSet&lt;Rc&lt;String&gt;&gt; = HashSet::new();\n\nfor tag in raw_post.tags {\n    let tag = Rc::new(tag);\n    all_tags.insert(tag.clone());\n    post_tags.insert(tag);\n}\n\nlet post = Post {\n    title: raw_post.title,\n    tags: post_tags,\n};\nposts.push(post);\n</code></pre>\n<p>The problem here is that we always keep the original <code>String</code> from the <code>RawPost</code>. If that tag is already present in the <code>all_tags</code> set, we don't end up using the same copy.</p>\n<p>There's an unstable method on <code>HashSet</code>s that helps us out here. <code>get_or_insert</code> will try to insert a value into a <code>HashSet</code>. If the value is already present, it will drop the new value and return a reference to the original value. If the value isn't present, the value is added to the <code>HashSet</code> and we get a reference back to it. Changing our code to use that is pretty easy:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">for tag in raw_post.tags {\n    let tag = Rc::new(tag);\n    let tag = all_tags.get_or_insert(tag);\n    post_tags.insert(tag.clone());\n}\n</code></pre>\n<p>We still end up with a <code>.clone()</code> call, but now it's a clone of an <code>Rc</code>, which is a cheap integer increment. No additional memory allocation required! 
Since this method is unstable, we also have to enable the feature by adding this at the top of your source file:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#![feature(hash_set_entry)]\n</code></pre>\n<p>And only one more change is required. The signature for <code>gen_csv</code> is expecting a <code>&amp;HashSet&lt;String&gt;</code>. If you change that to <code>&amp;HashSet&lt;Rc&lt;String&gt;&gt;</code>, the code will compile and run correctly. Yay!</p>\n<p>In case you got lost with all of the edits above, here's the current version of <code>main.rs</code>:</p>\n<script src=\"https://gist.github.com/snoyberg/5d6e7a515f9065f80b74909920ed94db.js\"></script>\n<h2 id=\"quibbles\">Quibbles</h2>\n<p>I already told you that the original <code>HashSet&lt;String&gt;</code> version of the code is likely Good Enough™ for most cases. I'll tell you that, if you're really bothered by that overhead, the <code>HashSet&lt;Rc&lt;String&gt;&gt;</code> version is almost certainly the right call. So we should probably just stop here and end the blog post on a nice, safe note.</p>\n<p>But let's be bold and crazy. I don't actually like this version of the code that much, for two reasons:</p>\n<ol>\n<li>The <code>Rc</code> feels dirty here. <code>Rc</code> is great for weird lifetime situations with values. But in our case, we know that the <code>all_tags</code> set, which owns all of the tags, will always outlive the usage of the tags inside the <code>Post</code>s. So reference counting feels like unnecessary overhead that obscures the situation.</li>\n<li>As demonstrated before, it's all too easy to mess up with the <code>Rc&lt;String&gt;</code> version. 
You can accidentally bypass all of the memory saving benefits by using a new <code>String</code> instead of cloning a reference to an existing one.</li>\n</ol>\n<p>What I'd really like to do is to have <code>all_tags</code> be a <code>HashSet&lt;String&gt;</code> and own the tags themselves. And then, inside <code>Post</code>, I'd like to keep references to those tags. Unfortunately, this doesn't quite work. Can you foresee why? If not, don't worry, I didn't see it until the borrow checker told me how wrong I was a few times. Let's experience that joy together. And we'll do it with compiler-driven development again.</p>\n<p>The first thing I'm going to do is remove the <code>use std::rc::Rc;</code> statement. That leads to our first error: <code>Rc</code> isn't in scope for <code>Post</code>. We want to keep a <code>&amp;str</code> in this struct. But we have to be explicit about lifetimes when holding references in structs. So our code ends up as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct Post&lt;&#x27;a&gt; {\n    title: String,\n    tags: HashSet&lt;&amp;&#x27;a str&gt;,\n}\n</code></pre>\n<p>The next error is about the definition of <code>all_tags</code> in <code>main</code>. That's easy enough: just take out the <code>Rc</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let mut all_tags: HashSet&lt;String&gt; = HashSet::new();\n</code></pre>\n<p>This is easy! Similarly, <code>post_tags</code> is defined as a <code>HashSet&lt;Rc&lt;String&gt;&gt;</code>. In this case, we want to hold <code>&amp;str</code>s instead, so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let mut post_tags: HashSet&lt;&amp;str&gt; = HashSet::new();\n</code></pre>\n<p>We no longer need to use <code>Rc::new</code> in the <code>for</code> loop, or clone the <code>Rc</code>. 
So our loop simplifies down to:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">for tag in raw_post.tags {\n    let tag = all_tags.get_or_insert(tag);\n    post_tags.insert(tag);\n}\n</code></pre>\n<p>And (misleadingly), we just have one error message left: the signature for <code>gen_csv</code> still uses a <code>Rc</code>. We'll get rid of that with the new signature:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn gen_csv(all_tags: &amp;HashSet&lt;String&gt;, posts: &amp;[Post]) -&gt; Result&lt;(), std::io::Error&gt; {\n</code></pre>\n<p>And we get an (IMO confusing) error message about <code>&amp;str</code> and <code>&amp;String</code> not quite lining up:</p>\n<pre><code>error[E0277]: the trait bound `&amp;str: std::borrow::Borrow&lt;std::string::String&gt;` is not satisfied\n  --&gt; src\\main.rs:67:38\n   |\n67 |             let field = if post.tags.contains(tag) {\n   |                                      ^^^^^^^^ the trait `std::borrow::Borrow&lt;std::string::String&gt;` is not implemented for `&amp;str`\n</code></pre>\n<p>But this can be solved by explicitly asking for a <code>&amp;str</code> via the <code>as_str</code> method:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let field = if post.tags.contains(tag.as_str()) {\n</code></pre>\n<p>And you might think we're done. 
But this is where the &quot;misleading&quot; idea comes into play.</p>\n<h2 id=\"the-borrow-checker-wins\">The borrow checker wins</h2>\n<p>If you've been following along, you should now see an error message on your screen that looks something like:</p>\n<pre><code>error[E0499]: cannot borrow `all_tags` as mutable more than once at a time\n  --&gt; src\\main.rs:35:23\n   |\n35 |             let tag = all_tags.get_or_insert(tag);\n   |                       ^^^^^^^^ mutable borrow starts here in previous iteration of loop\n\nerror[E0502]: cannot borrow `all_tags` as immutable because it is also borrowed as mutable\n  --&gt; src\\main.rs:46:13\n   |\n35 |             let tag = all_tags.get_or_insert(tag);\n   |                       -------- mutable borrow occurs here\n...\n46 |     gen_csv(&amp;all_tags, &amp;posts)?;\n   |             ^^^^^^^^^  ------ mutable borrow later used here\n   |             |\n   |             immutable borrow occurs here\n</code></pre>\n<p>I was <a href=\"https://twitter.com/snoyberg/status/1299253999800123392?s=20\">convinced</a> that the borrow checker was being overly cautious here. Why would a mutable borrow of <code>all_tags</code> to insert a tag into the set conflict with an immutable borrow of the tags inside the set? (If you already see my error, feel free to laugh at my naivete.) I could follow why I'd violated borrow check rules. Specifically: you can't have a mutable reference and any other reference live at the same time. But I didn't see how this was actually stopping my code from segfaulting.</p>\n<p>After a bit more thinking, it clicked. I realized that I had an invariant in my head which did not appear anywhere in my types. And therefore, the borrow checker was fully justified in saying my code was unsafe. What I realized is that I had implicitly been assuming that my mutations of the <code>all_tags</code> set would never delete any existing values in the set. I can look at my code and see that that's the case. 
However, the borrow checker doesn't play those kinds of games. It deals with types and facts. And in fact, my code was not provably correct.</p>\n<p>So now is really time to quit, and accept the <code>Rc</code>s, or even just the <code>String</code>s and wasted memory. We're all done. Please don't keep reading.</p>\n<h2 id=\"time-to-get-unsafe\">Time to get unsafe</h2>\n<p>OK, I lied. We're going to take one last step here. I'm not going to tell you this is a good idea. I'm not going to tell you this code is generally safe. I am going to tell you that it works in my testing, and that I refuse to commit it to the master branch of the project I'm working on.</p>\n<p>We've got two issues:</p>\n<ul>\n<li>We have an unstated invariant that we never delete tags from our <code>all_tags</code> <code>HashSet</code></li>\n<li>We need a mutable reference to the <code>HashSet</code> to insert, and that prevents taking immutable references for our tags</li>\n</ul>\n<p>Let's fix this. We're going to define a new <code>struct</code>, called an <code>AppendSet</code>, which only provides the ability to insert new tags, not delete old ones.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct AppendSet&lt;T&gt; {\n    inner: HashSet&lt;T&gt;,\n}\n</code></pre>\n<p>We're going to provide three methods:</p>\n<ul>\n<li>A static method <code>new</code>, boring</li>\n<li>A <code>get_or_insert</code> that behaves just like <code>HashSet</code>'s, but only needs an immutable reference, not a mutable one</li>\n<li>An <code>inner</code> method that returns a reference to the internal <code>HashSet</code> so we can reuse its <code>Iterator</code> interface</li>\n</ul>\n<p>The first and last are really easy. 
<code>get_or_insert</code> is a bit more involved, let's just stub it out for now.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;T&gt; AppendSet&lt;T&gt; {\n    fn new() -&gt; Self {\n        AppendSet {\n            inner: HashSet::new(),\n        }\n    }\n\n    fn get_or_insert(&amp;self, t: T) -&gt; &amp;T\n    where\n        T: Eq + std::hash::Hash,\n    {\n        unimplemented!()\n    }\n\n    fn inner(&amp;self) -&gt; &amp;HashSet&lt;T&gt; {\n        &amp;self.inner\n    }\n}\n</code></pre>\n<p>Next, we'll redefine <code>all_tags</code> as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let all_tags: AppendSet&lt;String&gt; = AppendSet::new();\n</code></pre>\n<p>Note that we no longer have the <code>mut</code> keyword here. We never need to mutate this thing... sort of. We'll interact with it via <code>get_or_insert</code>, which at least claims it doesn't mutate. The only other change we have to make is in the call to <code>gen_csv</code>, where we want to use the <code>inner()</code> method:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">gen_csv(all_tags.inner(), &amp;posts)?;\n</code></pre>\n<p>And perhaps surprisingly, our code now compiles. There's only one thing left to do: implement that <code>get_or_insert</code> method. 
And this is where the dirtiness happens.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn get_or_insert(&amp;self, t: T) -&gt; &amp;T\nwhere\n    T: Eq + std::hash::Hash,\n{\n    let const_ptr = self as *const Self;\n    let mut_ptr = const_ptr as *mut Self;\n    let this = unsafe { &amp;mut *mut_ptr };\n    this.inner.get_or_insert(t)\n}\n</code></pre>\n<p>That's right, <code>unsafe</code> baby!</p>\n<img src=\"/images/blog/live-dangerously.jpg\" style=\"max-width:100%\">\n<p>This code absolutely works. I'm also fairly certain it won't <em>generally</em> work. We are very likely violating invariants of <code>HashSet</code>'s interface. As one simple example, we now have the ability to change the contents of a <code>HashSet</code> while there is an active iterator looping through it. I haven't investigated the internals of <code>HashSet</code>, but I wouldn't be surprised at all to find out this breaks some invariants.</p>\n<p><strong>NOTE</strong> To address one of these concerns: what if we modified the <code>inner</code> method on <code>AppendSet</code> to consume the <code>self</code> and return a <code>HashSet</code>? That would definitely help us avoid accidentally violating invariants. But it also won't compile. The <code>AppendSet</code> itself is immutably borrowed by the <code>Post</code> values, and therefore we cannot move it.</p>\n<p>So does this code work? It seems to. Will <code>AppendSet</code> generally work for similar problems? I have no idea. Will this code continue to work with future versions of the standard library with changes to <code>HashSet</code>'s implementation? I have no idea. In other words: <strong>don't use this code</strong>. But it sure was fun to write.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>That was certainly a fun excursion. It's a bit disappointing that the ideal solution ends up requiring unsafe to work. 
But the <code>Rc</code> version is a really nice middle ground. And even the &quot;bad&quot; version isn't so bad.</p>\n<p>A theoretically better answer would be to use a data structure specifically designed for this use case. I didn't do any investigation to see if such things existed already. If you have any advice, please let me know!</p>\n<p>Check out <a href=\"https://tech.fpcomplete.com/rust/\">other FP Complete Rust information</a>.</p>\n<h2 id=\"update-accidentally-safe\">Update: accidentally safe</h2>\n<p>On December 13, 2020, I received a really great email from <a href=\"https://github.com/p-avital\">Pierre Avital</a>, who detailed why the code above is &quot;accidentally safe.&quot; With permission, I'm including Pierre's comments here:</p>\n<p>I find the unsafe version in your article to have one of the most wonderful cases of &quot;working by accident&quot; I've ever seen. It's actually memory safe, but not for the reasons you might think (you might have thought it through, but it's kind of a funny one so I'll explain my reasoning anyway).</p>\n<p>See with other types, even <code>&amp;String</code>, this would have been technically unsafe (but most likely would never segfault anyway) despite you preventing deletion, because HashSet can't guarantee pointer stability when adding: if it runs out of capacity, it will ask the allocator for more memory, which might give a completely different chunk of memory (although it is extremely unlikely for small sets). 
So upon adding an element, you might invalidate the other references (and still, the memory where they pointed would still need to be overwritten for a bug to appear, depending on paging and how lenient the OS is).\nThis specific issue could be avoided by using <code>HashSet::with_capacity(upper_bound)</code>, provided you never add more elements than said <code>upper_bound</code>.</p>\n<p>But see, here's the magic of what you wrote: you used <code>&amp;str</code>, and <code>&amp;str</code> isn't a pointer to a <code>String</code>, it's actually a slice, aka a begin and an end pointer, wrapped into a coat to trick you into thinking they're one pointer. This means the references used in <code>Post</code> don't point to the HashSet's <code>String</code>s, but directly to their content, which won't move in memory unless they are modified.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/",
        "slug": "avoiding-duplicating-strings-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Avoiding duplicating strings in Rust",
        "description": "Let's go way over the top and try to avoid having duplicate strings in memory in Rust",
        "updated": null,
        "date": "2020-09-14",
        "year": 2020,
        "month": 9,
        "day": 14,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "image": "images/blog/live-dangerously.jpg",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/avoiding-duplicating-strings-rust/",
        "components": [
          "blog",
          "avoiding-duplicating-strings-rust"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "program-behavior",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#program-behavior",
            "title": "Program behavior",
            "children": []
          },
          {
            "level": 2,
            "id": "setup",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#setup",
            "title": "Setup",
            "children": []
          },
          {
            "level": 2,
            "id": "first-version",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#first-version",
            "title": "First version",
            "children": []
          },
          {
            "level": 2,
            "id": "let-the-types-guide-you",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#let-the-types-guide-you",
            "title": "Let the types guide you",
            "children": []
          },
          {
            "level": 2,
            "id": "rc",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#rc",
            "title": "Rc",
            "children": []
          },
          {
            "level": 2,
            "id": "quibbles",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#quibbles",
            "title": "Quibbles",
            "children": []
          },
          {
            "level": 2,
            "id": "the-borrow-checker-wins",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#the-borrow-checker-wins",
            "title": "The borrow checker wins",
            "children": []
          },
          {
            "level": 2,
            "id": "time-to-get-unsafe",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#time-to-get-unsafe",
            "title": "Time to get unsafe",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#conclusion",
            "title": "Conclusion",
            "children": []
          },
          {
            "level": 2,
            "id": "update-accidentally-safe",
            "permalink": "https://tech.fpcomplete.com/blog/avoiding-duplicating-strings-rust/#update-accidentally-safe",
            "title": "Update: accidentally safe",
            "children": []
          }
        ],
        "word_count": 3473,
        "reading_time": 18,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/ownership-puzzle-rust-async-hyper/",
            "title": "An ownership puzzle with Rust, async, and hyper"
          }
        ]
      },
      {
        "relative_path": "blog/rust-for-devops-tooling.md",
        "colocated_path": null,
        "content": "<p>A beginner's guide to writing your DevOps tools in Rust.</p>\n<h2 id=\"introduction\">Introduction</h2>\n<p>In this blog post we'll cover some basic DevOps use cases for Rust and why \nyou would want to use it.\nAs part of this, we'll also cover a few common libraries you will likely use\nin a Rust-based DevOps tool for AWS.</p>\n<p>If you're already familiar with writing DevOps tools in other languages,\nthis post will explain why you should try Rust.</p>\n<p>We'll cover why Rust is a particularly good choice of language to write your DevOps\ntooling and critical cloud infrastructure software in.\nAnd we'll also walk through a small demo DevOps tool written in Rust. \nThis project will be geared towards helping someone new to the language ecosystem \nget familiar with the Rust project structure.</p>\n<p>If you're brand new to Rust, and are interested in learning the language, you may want to start off with our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a>.</p>\n<h2 id=\"what-makes-the-rust-language-unique\">What Makes the Rust Language Unique</h2>\n<blockquote>\n<p>Rust is a systems programming language focused on three goals: safety, speed, \nand concurrency. It maintains these goals without having a garbage collector, \nmaking it a useful language for a number of use cases other languages aren’t \ngood at: embedding in other languages, programs with specific space and time \nrequirements, and writing low-level code, like device drivers and operating systems. </p>\n</blockquote>\n<p><em>The Rust Book (first edition)</em></p>\n<p>Rust was initially created by Mozilla and has since gained widespread adoption and\nsupport. 
As the quote from the Rust book alludes to, it was designed to fill the \nsame space that C++ or C would (in that it doesn’t have a garbage collector or a runtime).\nBut Rust also incorporates zero-cost abstractions and many concepts that you would\nexpect in a higher level language (like Go or Haskell).\nFor that, and many other reasons, Rust's uses have expanded well beyond that\noriginal space as a low-level safe systems language.</p>\n<p>Rust's ownership system is extremely useful in efforts to write correct and \nresource efficient code. Ownership is one of the killer features of the Rust \nlanguage and helps programmers catch classes of resource errors at compile time \nthat other languages miss or ignore.</p>\n<p>Rust is an extremely performant and efficient language, comparable to the speeds \nyou see with idiomatic everyday C or C++.\nAnd since there isn’t a garbage collector in Rust, it’s a lot easier to get \npredictable deterministic performance.</p>\n<h2 id=\"rust-and-devops\">Rust and DevOps</h2>\n<p>What makes Rust unique also makes it very useful for areas ranging from robotics \nto rocketry, but are those qualities relevant for DevOps?\nDo we care if we have efficient executables or fine grained control over \nresources, or is Rust a bit overkill for what we typically need in DevOps?</p>\n<p><em>Yes and no</em></p>\n<p>Rust is clearly useful for situations where performance is crucial and actions \nneed to occur in a deterministic and consistent way. That obviously translates to \nlow-level places where previously C and C++ were the only game in town. \nIn those situations, before Rust, people simply had to accept the inherent risk and \nadditional development costs of working on a large code base in those languages.\nRust now allows us to operate in those areas but without the risk that C and C++\ncan add.</p>\n<p>But with DevOps and infrastructure programming we aren't constrained by those \nrequirements. 
For DevOps we've been able to choose from languages like Go, Python, \nor Haskell because we're not strictly limited by the use case to languages without \ngarbage collectors. Since we can reach for other languages you might argue \nthat using Rust is a bit overkill, but let's go over a few points to counter this.</p>\n<h3 id=\"why-you-would-want-to-write-your-devops-tools-in-rust\">Why you would want to write your DevOps tools in Rust</h3>\n<ul>\n<li>Small executables relative to other options like Go or Java</li>\n<li>Easy to port across different OS targets</li>\n<li>Efficient with resources (which helps cut down on your AWS bill) </li>\n<li>One of the fastest languages (even when compared to C)</li>\n<li>Zero-cost abstractions - Rust is a low-level, performant language which also\ngives us the benefits of a high-level language with its generics and abstractions.</li>\n</ul>\n<p>To elaborate on some of these points a bit further:</p>\n<h4 id=\"os-targets-and-cross-compiling-rust-for-different-architectures\">OS targets and Cross Compiling Rust for different architectures</h4>\n<p>For DevOps it's also worth mentioning the (relative) ease with which you can \nport your Rust code across different architectures and different OS's. 
</p>\n<p>Using the official Rust toolchain installer <code>rustup</code>, it's easy to get the \nstandard library for your target platform.\nRust <a href=\"https://doc.rust-lang.org/nightly/rustc/platform-support.html\">supports a great number of platforms</a>\nwith different tiers of support.\nThe docs for the <code>rustup</code> tool have <a href=\"https://rust-lang.github.io/rustup/cross-compilation.html\">a section</a>\ncovering how you can access pre-compiled artifacts for various architectures.\nTo install the target platform for an architecture (other than the host platform which is installed by default)\nyou simply need to run <code>rustup target add</code>:</p>\n<pre><code>$ rustup target add x86_64-pc-windows-msvc \ninfo: downloading component &#x27;rust-std&#x27; for &#x27;x86_64-pc-windows-msvc&#x27;\ninfo: installing component &#x27;rust-std&#x27; for &#x27;x86_64-pc-windows-msvc&#x27;\n</code></pre>\n<p>Cross compilation is already built into the Rust compiler by default. \nOnce the <code>x86_64-pc-windows-msvc</code> target is installed you can build for Windows \nwith the <code>cargo</code> build tool using the <code>--target</code> flag:</p>\n<pre><code>cargo build --target=x86_64-pc-windows-msvc\n</code></pre>\n<p>(the default target is always the host architecture)</p>\n<p>If one of your dependencies links to a native (i.e. non-Rust) library, you will\nneed to make sure that those cross compile as well. Doing <code>rustup target add</code>\nonly installs the Rust standard library for that target. 
However, for the other \ntools that are often needed when cross-compiling, there is the handy\n<a href=\"https://github.com/rust-embedded/cross\">github.com/rust-embedded/cross</a> tool.\nThis is essentially a wrapper around cargo that does all cross compilation in \nDocker images that have all the necessary bits (linkers) and pieces installed.</p>\n<h4 id=\"small-executables\">Small Executables</h4>\n<p>A key feature of Rust is that it doesn't need a runtime or a garbage collector.\nCompare this to languages like Python or Haskell: with Rust, the lack of any runtime\ndependencies (as with Python) or required system libraries (as with Haskell) is a huge advantage \nfor portability.</p>\n<p>For practical purposes, as far as DevOps is concerned, this portability means \nthat Rust executables are much easier to deploy than scripts.\nWith Rust, compared to Python or Bash, we don't need to set up the environment for \nour code ahead of time. This frees us up from having to worry about whether the runtime \ndependencies for the language are set up.</p>\n<p>In addition to that, with Rust you're able to produce 100% static executables for \nLinux using the MUSL libc (and by default Rust will statically link all Rust code). \nThis means that you can deploy your Rust DevOps tool's binaries across your Linux \nservers without having to worry about whether the correct <code>libc</code> or other libraries were \ninstalled beforehand.</p>\n<p>Creating static executables for Rust is simple. 
As we saw before when discussing\ndifferent OS targets, it's easy with Rust to switch the target you're building against.\nTo compile static executables for the Linux MUSL target, all you need to do is add \nthe <code>musl</code> target with:</p>\n<pre><code>$ rustup target add x86_64-unknown-linux-musl\n</code></pre>\n<p>Then you can use this new target to build your Rust project as a fully static \nexecutable with:</p>\n<pre><code>$ cargo build --target x86_64-unknown-linux-musl\n</code></pre>\n<p>As a result of not having a runtime or a garbage collector, Rust executables \ncan be extremely small. For example, there is a common DevOps tool called \nCredStash that was originally written in Python but has since been \nported to Go (GCredStash) and now Rust (RuCredStash).</p>\n<p>Comparing the executable sizes of the Rust versus Go implementations of CredStash,\nthe Rust executable is nearly a quarter of the size of the Go variant. </p>\n<table><thead><tr><th>Implementation</th><th>Executable Size</th></tr></thead><tbody>\n<tr><td>Rust CredStash: (RuCredStash Linux amd64)</td><td>3.3 MB</td></tr>\n<tr><td>Go CredStash: (GCredStash Linux amd64 v0.3.5)</td><td>11.7 MB</td></tr>\n</tbody></table>\n<p>Project links:</p>\n<ul>\n<li><a href=\"https://github.com/psibi/rucredstash\">github.com/psibi/rucredstash</a></li>\n<li><a href=\"https://github.com/winebarrel/gcredstash\">github.com/winebarrel/gcredstash</a></li>\n</ul>\n<p>This is by no means a perfect comparison, and 8 MB may not seem like a lot, but\nconsider the advantage of automatically having executables that are a quarter of the \nsize you would typically expect. </p>\n<p>This cuts down on the size your Docker images, AWS AMIs, or Azure VM images need\nto be - and that helps speed up the time it takes to spin up new deployments.</p>\n<p>With a tool of this size, the benefit of an executable that is 75% smaller than it \nwould otherwise be is not immediately apparent. 
On this scale the difference, 8 MB,\nis still quite cheap.\nBut with larger tools (or collections of tools and Rust-based software) the benefits\nadd up and the difference begins to be a practical and worthwhile consideration.</p>\n<p>The Rust implementation was also not strictly written with the resulting size of \nthe executable in mind. So if executable size were an even more important \nfactor, other changes could be made - but that's beyond the scope of this post.</p>\n<h4 id=\"rust-is-fast\">Rust is fast</h4>\n<p>Rust is very fast even for common idiomatic everyday Rust code. Not only that,\nit's arguably easier to work with than C and C++, and easier to catch errors in your \ncode.</p>\n<p>For the Fortunes benchmark (which exercises the ORM, \ndatabase connectivity, dynamic-size collections, sorting, server-side templates, \nXSS countermeasures, and character encoding) Rust is second and third, only lagging \nbehind the first-place C++ based framework by 4 percent. </p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-fortunes.png\" style=\"max-width:95%\">\n<p>In the benchmark for database access for a single query Rust is first and second:</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-single-query.png\" style=\"max-width:95%\">\n<p>And in a composite of all the benchmarks, Rust-based frameworks take second and third place.</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-composite.png\" style=\"max-width:95%\">\n<p>Of course language and framework benchmarks are not real life; however, this is \nstill a fair comparison of the languages as they relate to others (within the context \nand the focus of the benchmark).</p>\n<p>Source: <a href=\"https://www.techempower.com/benchmarks/\">https://www.techempower.com/benchmarks</a></p>\n<h3 id=\"why-would-you-not-want-to-write-your-devops-tools-in-rust\">Why would you not want to write your DevOps tools in Rust?</h3>\n<p>For medium to large projects, it’s important to have a type system 
and compile-time \nchecks like those in Rust versus what you would find in something like Python\nor Bash.\nThe latter languages let you get away with things far more readily. This makes \ndevelopment much &quot;faster&quot; in one sense.</p>\n<p>Certain situations, especially those involving small codebases, would \nbenefit more from using an interpreted language. In these cases, being able to quickly \nchange pieces of the code without needing to re-compile and re-deploy the project\noutweighs the benefits (in terms of safety, execution speed, and portability)\nthat languages like Rust bring. </p>\n<p>Working with and iterating on a Rust codebase in those circumstances, with frequent\nbut small codebase changes, would be needlessly time-consuming.\nIf you have a small codebase with few or no runtime dependencies, then it wouldn't\nbe worth it to use Rust.</p>\n<h2 id=\"demo-devops-project-for-aws\">Demo DevOps Project for AWS</h2>\n<p>We'll briefly cover some of the libraries typically used for an AWS-focused \nDevOps tool in a walk-through of a small demo Rust project here. \nThis aims to provide a small example that uses some of the libraries you'll likely\nwant if you’re writing a CLI-based DevOps tool in Rust. 
Specifically for this \nexample we'll show a tool that does some basic operations against AWS S3 \n(creating new buckets, adding files to buckets, listing the contents of buckets).</p>\n<h3 id=\"project-structure\">Project structure</h3>\n<p>For AWS integration we're going to utilize the <a href=\"https://www.rusoto.org/\">Rusoto</a> library.\nSpecifically for our modest demo Rust DevOps tool, we're going to pull in the \n<a href=\"https://docs.rs/rusoto_core/0.45.0/rusoto_core/\">rusoto_core</a> and the \n<a href=\"https://docs.rs/rusoto_s3/0.45.0/rusoto_s3/\">rusoto_s3</a> crates (in Rust a <em>crate</em>\nis akin to a library or package).</p>\n<p>We're also going to use the <a href=\"https://docs.rs/structopt/0.3.16/structopt/\">structopt</a> crate\nfor our CLI options. This is a handy, batteries-included CLI library that makes \nit easy to create a CLI interface around a Rust struct. </p>\n<p>The tool operates by matching the CLI option and arguments the user passes in \nwith a <a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs#L211\"><code>match</code> expression</a>.</p>\n<p>We can then use this to match on that part of the CLI option struct we've defined \nand call the appropriate functions for that option.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">match opt {\n    Opt::Create { bucket: bucket_name } =&gt; {\n        println!(&quot;Attempting to create a bucket called: {}&quot;, bucket_name);\n        let demo = S3Demo::new(bucket_name);\n        create_demo_bucket(&amp;demo);\n    },\n</code></pre>\n<p>This matches on the <a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs#L182\"><code>Create</code></a>\nvariant of the <code>Opt</code> enum. 
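</p>\n<p>To make that shape concrete without pulling in any dependencies, here is a rough, standard-library-only sketch of the same enum-and-match pattern. The variant and field names are simplified stand-ins, and the hand-rolled <code>parse_opt</code> function fakes a small part of what <code>structopt</code> actually derives for us (flags, help text, and error messages):</p>

```rust
// Simplified stand-in for the CLI options structopt derives in the real tool.
#[derive(Debug, PartialEq)]
enum Opt {
    Create { bucket: String },
    List { bucket: String },
}

// Hand-rolled parsing, playing the role of structopt's generated parser.
fn parse_opt(args: &[&str]) -> Result<Opt, String> {
    match args {
        ["create", bucket] => Ok(Opt::Create { bucket: bucket.to_string() }),
        ["list", bucket] => Ok(Opt::List { bucket: bucket.to_string() }),
        _ => Err("usage: <create|list> <bucket>".to_string()),
    }
}

fn main() {
    // Dispatch on the parsed option, just like the `match opt` above.
    match parse_opt(&["create", "my-bucket"]).unwrap() {
        Opt::Create { bucket } => println!("Attempting to create a bucket called: {}", bucket),
        Opt::List { bucket } => println!("Listing objects in: {}", bucket),
    }
}
```

<p>The real definitions live in the linked <code>main.rs</code>; the point is only the pattern of one enum variant per subcommand, matched once in <code>main</code>.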
</p>\n<p>We then use <code>S3Demo::new(bucket_name)</code> to create a new <code>S3Client</code>, which we can\nuse in the standalone <code>create_demo_bucket</code> function we've defined \nto create a new S3 bucket.</p>\n<p>The tool is fairly simple, with most of the code located in \n<a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs\">src/main.rs</a>.</p>\n<h3 id=\"building-the-rust-project\">Building the Rust project</h3>\n<p>Before you build the code in this project, you will need to install Rust. \nPlease follow <a href=\"https://www.rust-lang.org/tools/install\">the official install instructions here</a>.</p>\n<p>The default build tool for Rust is called Cargo. It's worth getting familiar \nwith <a href=\"https://doc.rust-lang.org/cargo/guide/\">the docs for Cargo</a>,\nbut here's a quick overview for building the project.</p>\n<p>To build the project, run the following from the root of the \n<a href=\"https://github.com/fpco/rust-aws-devops\">git repo</a>:</p>\n<pre><code>cargo build\n</code></pre>\n<p>You can then use <code>cargo run</code> to run the code or execute the code directly\nwith <code>./target/debug/rust-aws-devops</code>:</p>\n<pre><code>$ .&#x2F;target&#x2F;debug&#x2F;rust-aws-devops \n\nRunning tool\nRustAWSDevops 0.1.0\nMike McGirr &lt;[email protected]&gt;\n\nUSAGE:\n    rust-aws-devops &lt;SUBCOMMAND&gt;\n\nFLAGS:\n    -h, --help       Prints help information\n    -V, --version    Prints version information\n\nSUBCOMMANDS:\n    add-object       Add the specified file to the bucket\n    create           Create a new bucket with the given name\n    delete           Try to delete the bucket with the given name\n    delete-object    Remove the specified object from the bucket\n    help             Prints this message or the help of the given subcommand(s)\n    list             Try to find the bucket with the given name and list its objects\n</code></pre>\n<p>This shows 
the nice CLI help output automatically created for us \nby <code>structopt</code>.</p>\n<p>If you're ready to build a release version (with optimizations turned on, which \nwill make compilation take slightly longer) run the following:</p>\n<pre><code>cargo build --release\n</code></pre>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>As this small demo showed, it's not difficult to get started using Rust to write\nDevOps tools. Even then, we didn't need to make a trade-off between ease of\ndevelopment and fast, performant code. </p>\n<p>Hopefully the next time you're writing a new piece of DevOps software, \nanything from a simple CLI tool for a specific DevOps operation to \nthe next Kubernetes, you'll consider reaching for Rust.\nAnd if you have further questions about Rust, or need help implementing your Rust \nproject, please feel free to reach out to FP Complete for Rust engineering \nand training!</p>\n<p>Want to learn more Rust? Check out our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a>. And for more information, check out our <a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/",
        "slug": "rust-for-devops-tooling",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Using Rust for DevOps tooling",
        "description": "A beginner's guide to writing your DevOps tools in Rust.",
        "updated": null,
        "date": "2020-09-09",
        "year": 2020,
        "month": 9,
        "day": 9,
        "taxonomies": {
          "tags": [
            "devops",
            "rust",
            "insights"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mike McGirr",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/rust-for-devops-tooling/",
        "components": [
          "blog",
          "rust-for-devops-tooling"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "introduction",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#introduction",
            "title": "Introduction",
            "children": []
          },
          {
            "level": 2,
            "id": "what-makes-the-rust-language-unique",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#what-makes-the-rust-language-unique",
            "title": "What Makes the Rust Language Unique",
            "children": []
          },
          {
            "level": 2,
            "id": "rust-and-devops",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#rust-and-devops",
            "title": "Rust and DevOps",
            "children": [
              {
                "level": 3,
                "id": "why-you-would-want-to-write-your-devops-tools-in-rust",
                "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#why-you-would-want-to-write-your-devops-tools-in-rust",
                "title": "Why you would want to write your DevOps tools in Rust",
                "children": [
                  {
                    "level": 4,
                    "id": "os-targets-and-cross-compiling-rust-for-different-architectures",
                    "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#os-targets-and-cross-compiling-rust-for-different-architectures",
                    "title": "OS targets and Cross Compiling Rust for different architectures",
                    "children": []
                  },
                  {
                    "level": 4,
                    "id": "small-executables",
                    "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#small-executables",
                    "title": "Small Executables",
                    "children": []
                  },
                  {
                    "level": 4,
                    "id": "rust-is-fast",
                    "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#rust-is-fast",
                    "title": "Rust is fast",
                    "children": []
                  }
                ]
              },
              {
                "level": 3,
                "id": "why-would-you-not-want-to-write-your-devops-tools-in-rust",
                "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#why-would-you-not-want-to-write-your-devops-tools-in-rust",
                "title": "Why would you not want to write your DevOps tools in Rust?",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "demo-devops-project-for-aws",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#demo-devops-project-for-aws",
            "title": "Demo DevOps Project for AWS",
            "children": [
              {
                "level": 3,
                "id": "project-structure",
                "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#project-structure",
                "title": "Project structure",
                "children": []
              },
              {
                "level": 3,
                "id": "building-the-rust-project",
                "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#building-the-rust-project",
                "title": "Building the Rust project",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 2540,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/",
            "title": "Cloud Vendor Neutrality"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
            "title": "Levana NFT Launch"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/",
            "title": "Rust: Of course it compiles, right?"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
            "title": "Deploying Rust with Windows Containers on Kubernetes"
          },
          {
            "permalink": "https://tech.fpcomplete.com/rust/",
            "title": "FP Complete Rust"
          }
        ]
      },
      {
        "relative_path": "blog/http-status-codes-async-rust.md",
        "colocated_path": null,
        "content": "<p>This blog post is a direct follow up on my previous blog post on <a href=\"https://tech.fpcomplete.com/blog/different-levels-async-rust/\">different levels of async in Rust</a>. You may want to check that one out before diving in here.</p>\n<p>Alright, so now we know that we can make our programs asynchronous by using non-blocking I/O calls. But last time we only saw examples that remained completely sequential, defeating the whole purpose of async. Let's change that with something more sophisticated.</p>\n<p>A few months ago I needed to ensure that all the URLs for a domain name resolved to either a real web page (200 status code) or redirected to somewhere else with a real web page. To make that happen, I needed a program that would:</p>\n<ul>\n<li>Read all the URLs in a text file, one URL per line</li>\n<li>Produce a CSV file containing the URL and its status code</li>\n</ul>\n<p>To make this simple, we're going to take a lot of shortcuts like:</p>\n<ul>\n<li>Hard-coding the input file path for the URLs</li>\n<li>Printing out the CSV output to standard output</li>\n<li>Using a simple <code>println!</code> for generating CSV output instead of using a library</li>\n<li>Allow any errors to crash the entire program\n<ul>\n<li>In fact, as you'll see later, we're really treating this as a requirement: if any HTTP requests have an error, the program <em>must</em> terminate with an error code, so we know something went wrong</li>\n</ul>\n</li>\n</ul>\n<p>For the curious: the original version of this was a <a href=\"https://twitter.com/snoyberg/status/1265526242486468616\">really short Haskell program</a> that had these properties. 
For fun a few weeks back, I rewrote it in <a href=\"https://twitter.com/snoyberg/status/1296716412262780928?s=20\">two</a> <a href=\"https://twitter.com/snoyberg/status/1296718361766887424?s=20\">ways</a> in Rust, which ultimately led to this pair of blog posts.</p>\n<h2 id=\"fully-blocking\">Fully blocking</h2>\n<p>Like last time, I recommend following along with my code. I'll kick this off with <code>cargo new httpstatus</code>. And then to avoid further futzing with our <code>Cargo.toml</code>, let's add our dependencies preemptively:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[dependencies]\ntokio = { version = &quot;0.2.22&quot;, features = [&quot;full&quot;] }\nreqwest = { version = &quot;0.10.8&quot;, features = [&quot;blocking&quot;] }\nasync-channel = &quot;1.4.1&quot;\nis_type = &quot;0.2.1&quot;\n</code></pre>\n<p>That <code>features = [&quot;blocking&quot;]</code> should hopefully grab your attention. The <code>reqwest</code> library provides an optional, fully blocking API. That seems like a great place to get started. 
Here's a nice, simple program that does what we need:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F; To use .lines() before, just like last time\nuse std::io::BufRead;\n\n&#x2F;&#x2F; We&#x27;ll return _some_ kind of an error\nfn main() -&gt; Result&lt;(), Box&lt;dyn std::error::Error&gt;&gt; {\n    &#x2F;&#x2F; Open the file for input\n    let file = std::fs::File::open(&quot;urls.txt&quot;)?;\n    &#x2F;&#x2F; Make a buffered version so we can read lines\n    let buffile = std::io::BufReader::new(file);\n\n    &#x2F;&#x2F; CSV header\n    println!(&quot;URL,Status&quot;);\n\n    &#x2F;&#x2F; Create a client so we can make requests\n    let client = reqwest::blocking::Client::new();\n\n    for line in buffile.lines() {\n        &#x2F;&#x2F; Error handling on reading the lines in the file\n        let line = line?;\n        &#x2F;&#x2F; Make a request and send it, getting a response\n        let resp = client.get(&amp;line).send()?;\n        &#x2F;&#x2F; Print the status code\n        println!(&quot;{},{}&quot;, line, resp.status().as_u16());\n    }\n    Ok(())\n}\n</code></pre>\n<p>Thanks to Rust's <code>?</code> syntax, error handling is pretty easy here. In fact, there are basically no gotchas here. <code>reqwest</code> makes this code really easy to write!</p>\n<p>Once you put a <code>urls.txt</code> file together, such as the following:</p>\n<pre><code>https:&#x2F;&#x2F;www.wikipedia.org\nhttps:&#x2F;&#x2F;www.wikipedia.org&#x2F;path-the-does-not-exist\nhttp:&#x2F;&#x2F;wikipedia.org\n</code></pre>\n<p>You'll hopefully get output such as:</p>\n<pre><code>URL,Status\nhttps:&#x2F;&#x2F;www.wikipedia.org,200\nhttps:&#x2F;&#x2F;www.wikipedia.org&#x2F;path-the-does-not-exist,404\nhttp:&#x2F;&#x2F;wikipedia.org,200\n</code></pre>\n<p>The logic above is pretty easy to follow, and hopefully the inline comments explain anything confusing. 
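</p>\n<p>As a quick refresher on that <code>?</code> syntax: it unwraps the <code>Ok</code> value, or returns early from the function with the <code>Err</code>, converted into the declared error type via <code>From</code>. A minimal, standard-library-only sketch (the function name here is made up for illustration):</p>

```rust
use std::num::ParseIntError;

// `?` unwraps the Ok value, or returns early with the Err converted
// into this function's declared error type.
fn parse_and_double(s: &str) -> Result<i64, ParseIntError> {
    let n: i64 = s.parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("not a number").is_err());
}
```

<p>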
With that idea in mind, let's up our game a bit.</p>\n<h2 id=\"ditching-the-blocking-api\">Ditching the blocking API</h2>\n<p>Let's first move away from the blocking API in <code>reqwest</code>, but still keep the sequential nature of the program. This involves four relatively minor changes to the code, all spelled out below:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::io::BufRead;\n\n&#x2F;&#x2F; First change: add the Tokio runtime\n#[tokio::main]\n&#x2F;&#x2F; Second: turn this into an async function\nasync fn main() -&gt; Result&lt;(), Box&lt;dyn std::error::Error&gt;&gt; {\n    let file = std::fs::File::open(&quot;urls.txt&quot;)?;\n    let buffile = std::io::BufReader::new(file);\n\n    println!(&quot;URL,Status&quot;);\n\n    &#x2F;&#x2F; Third change: Now we make an async Client\n    let client = reqwest::Client::new();\n\n    for line in buffile.lines() {\n        let line = line?;\n\n        &#x2F;&#x2F; Fourth change: We need to .await after send()\n        let resp = client.get(&amp;line).send().await?;\n\n        println!(&quot;{},{}&quot;, line, resp.status().as_u16());\n    }\n    Ok(())\n}\n</code></pre>\n<p>The program is still fully sequential: we fully send a request, then get the response, before we move on to the next URL. But we're at least ready to start playing with different async approaches.</p>\n<h2 id=\"where-blocking-is-fine\">Where blocking is fine</h2>\n<p>If you remember from last time, we had a bit of a philosophical discussion on the nature of blocking, and concluded that ultimately some blocking is OK in a program. 
In order to both simplify what we do here and provide some real-world recommendations, let's list all of the blocking I/O we're doing:</p>\n<ul>\n<li>Opening the file <code>urls.txt</code></li>\n<li>Reading lines from that file</li>\n<li>Outputting to <code>stdout</code> with <code>println!</code></li>\n<li>Implicitly closing the file descriptor</li>\n</ul>\n<p>Note that, even though we're sequentially running our HTTP requests right now, those are in fact using non-blocking I/O. Therefore, I haven't included anything related to HTTP in the list above. We'll start dealing with the sequential nature next.</p>\n<p>Returning to the four blocking I/O calls above, I'm going to make a bold statement: don't bother making them non-blocking. It's not actually terribly difficult to do the file I/O using <code>tokio</code> (we saw how last time). But we get virtually no benefit from doing so. The latency for local disk access, especially for a file as small as <code>urls.txt</code> is likely to be, and especially in contrast to a bunch of HTTP requests, is minuscule.</p>\n<p>Feel free to disagree with me, or to take on making those calls non-blocking as an exercise. But I'm going to focus instead on higher-value targets.</p>\n<h2 id=\"concurrent-requests\">Concurrent requests</h2>\n<p>The real problem here is that we have sequential HTTP requests going on. Instead, we would much prefer to make our requests concurrently. If we assume there are 100 URLs, and each request takes 1 second (hopefully an overestimation), a sequential algorithm can at best finish in 100 seconds. However, a concurrent algorithm could in theory finish all 100 requests in just 1 second. In reality that's pretty unlikely to happen, but it is completely reasonable to expect a significant speedup factor, depending on network conditions, number of hosts you're connecting to, and other similar factors.</p>\n<p>So how exactly do we do concurrency with <code>tokio</code>? 
The most basic answer is the <code>tokio::spawn</code> function. This spawns a new <em>task</em> in the <code>tokio</code> runtime. This is similar in principle to spawning a new system thread. But here, running and scheduling are managed by the runtime rather than the operating system. Let's take a first stab at spawning each HTTP request into its own task:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">tokio::spawn(async move {\n    let resp = client.get(&amp;line).send().await?;\n\n    println!(&quot;{},{}&quot;, line, resp.status().as_u16());\n});\n</code></pre>\n<p>That looks nice, but we have a problem:</p>\n<pre><code>error[E0277]: the `?` operator can only be used in an async block that returns `Result` or `Option` (or another type that implements `std::ops::Try`)\n  --&gt; src\\main.rs:16:24\n   |\n15 |           tokio::spawn(async move {\n   |  _________________________________-\n16 | |             let resp = client.get(&amp;line).send().await?;\n   | |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot use the `?` operator in an async block that returns `()`\n17 | |\n18 | |             println!(&quot;{},{}&quot;, line, resp.status().as_u16());\n19 | |         });\n   | |_________- this function should return `Result` or `Option` to accept `?`\n</code></pre>\n<p>Our task doesn't return a <code>Result</code>, and therefore has no way to complain about errors. This is actually indicating a far more serious issue, which we'll get to later. 
But for now, let's just pretend errors won't happen, and cheat a bit with <code>.unwrap()</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let resp = client.get(&amp;line).send().await.unwrap();\n</code></pre>\n<p>This also fails, now with an ownership issue:</p>\n<pre><code>error[E0382]: use of moved value: `client`\n  --&gt; src\\main.rs:15:33\n   |\n10 |       let client = reqwest::Client::new();\n   |           ------ move occurs because `client` has type `reqwest::async_impl::client::Client`, which does not implement the `Copy` trait\n</code></pre>\n<p>This one is easier to address. The <code>Client</code> is being shared by multiple tasks. But each task needs to make its own clone of the <code>Client</code>. If you read <a href=\"https://docs.rs/reqwest/0.10.8/reqwest/struct.Client.html\">the docs</a>, you'll see that this is recommended behavior:</p>\n<blockquote>\n<p>The <code>Client</code> holds a connection pool internally, so it is advised that you create one and <strong>reuse</strong> it.</p>\n<p>You do <strong>not</strong> have to wrap the <code>Client</code> it in an <code>Rc</code> or <code>Arc</code> to <strong>reuse</strong> it, because it already uses an <code>Arc</code> internally.</p>\n</blockquote>\n<p>Once we add this line before our <code>tokio::spawn</code>, our code will compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let client = client.clone();\n</code></pre>\n<p>Unfortunately, things fail pretty spectacularly at runtime:</p>\n<pre><code>URL,Status\nthread &#x27;thread &#x27;tokio-runtime-workerthread &#x27;tokio-runtime-worker&#x27; panicked at &#x27;&#x27; panicked at &#x27;tokio-runtime-workercalled `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: &quot;https:&#x2F;&#x2F;www.wikipedia.org&#x2F;path-the-does-not-exist&quot;, source: hyper::Error(Connect, ConnectError(&quot;dns 
error&quot;, Custom { kind: Interrupted, error: JoinError::Cancelled })) }called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: &quot;https:&#x2F;&#x2F;www.wikipedia.org&#x2F;&quot;, source: hyper::Error(Connect, ConnectError(&quot;dns error&quot;, Custom { kind: Interrupted, error: JoinError::Cancelled })) }&#x27; panicked at &#x27;&#x27;, &#x27;, called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: &quot;http:&#x2F;&#x2F;wikipedia.org&#x2F;&quot;, source: hyper::Error(Connect, ConnectError(&quot;dns error&quot;, Custom { kind: Interrupted, error: JoinError::Cancelled })) }src\\main.rssrc\\main.rs&#x27;, ::src\\main.rs1717:::241724\n</code></pre>\n<p>That's a big error message, but the important bit for us is a bunch of <code>JoinError::Cancelled</code> stuff all over the place.</p>\n<h2 id=\"wait-for-me\">Wait for me!</h2>\n<p>Let's talk through what's happening in our program:</p>\n<ol>\n<li>Initiate the Tokio runtime</li>\n<li>Create a <code>Client</code></li>\n<li>Open the file, start reading line by line</li>\n<li>For each line:\n<ul>\n<li>Spawn a new task</li>\n<li>That task starts making non-blocking I/O calls</li>\n<li>Those tasks go to sleep, to be rescheduled when data is ready</li>\n<li>When all is said and done, print out the CSV lines</li>\n</ul>\n</li>\n<li>Reach the end of the <code>main</code> function, which triggers the runtime to shut down</li>\n</ol>\n<p>The problem is that we reach (5) long before we finish (4). When this happens, all in-flight I/O will be cancelled, which leads to the error messages we saw above. Instead, we need to ensure we wait for each task to complete before we exit. The easiest way to do this is to call <code>.await</code> on the result of the <code>tokio::spawn</code> call. (Those results, by the way, are called <code>JoinHandle</code>s.) 
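</p>\n<p>For comparison, here is the same spawn-everything-then-wait shape using plain OS threads from the standard library (an analogy only, not the <code>tokio</code> API):</p>

```rust
use std::thread;

// Spawn a worker per input, then join them all; results come back
// in spawn order regardless of which thread finished first.
fn spawn_and_join(inputs: Vec<i32>) -> Vec<i32> {
    let mut handles = Vec::new();
    for i in inputs {
        // Each spawn starts work immediately on its own OS thread.
        handles.push(thread::spawn(move || i * 10));
    }
    // Only after everything is spawned do we wait, one handle at a time.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    assert_eq!(spawn_and_join(vec![0, 1, 2]), vec![0, 10, 20]);
}
```

<p>In the async version, that final wait is the <code>.await</code> on each <code>JoinHandle</code>. 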
However, doing so immediately will completely defeat the purpose of our concurrent work, since we will once again be sequential!</p>\n<p>Instead, we want to spawn all of the tasks, and then wait for them all to complete. One easy way to achieve this is to put all of the <code>JoinHandle</code>s into a <code>Vec</code>. Let's look at the code. And since we've made a bunch of changes since our last complete code dump, I'll show you the full current status of our source file:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::io::BufRead;\n\n#[tokio::main]\nasync fn main() -&gt; Result&lt;(), Box&lt;dyn std::error::Error&gt;&gt; {\n    let file = std::fs::File::open(&quot;urls.txt&quot;)?;\n    let buffile = std::io::BufReader::new(file);\n\n    println!(&quot;URL,Status&quot;);\n\n    let client = reqwest::Client::new();\n\n    let mut handles = Vec::new();\n\n    for line in buffile.lines() {\n        let line = line?;\n\n        let client = client.clone();\n        let handle = tokio::spawn(async move {\n            let resp = client.get(&amp;line).send().await.unwrap();\n\n            println!(&quot;{},{}&quot;, line, resp.status().as_u16());\n        });\n        handles.push(handle);\n    }\n\n    for handle in handles {\n        handle.await?;\n    }\n    Ok(())\n}\n</code></pre>\n<p>And finally we have a concurrent program! This is actually pretty good, but it has two flaws we'd like to fix:</p>\n<ol>\n<li>It doesn't properly handle errors, instead just using <code>.unwrap()</code>. I mentioned this above, and said our usage of <code>.unwrap()</code> was indicating a &quot;far more serious issue.&quot; That issue was the fact that the result values from spawning subthreads are never noticed by the main thread, which is really the core issue causing the cancellation we discussed above. 
It's always nice when type-driven error messages indicate a runtime bug in our code!</li>\n<li>There's no limitation on the number of concurrent tasks we'll spawn. Ideally, we'd rather have a job queue approach, with a dedicated number of worker tasks. This will let our program behave better as we increase the number of URLs in our input file.</li>\n</ol>\n<p><strong>NOTE</strong> It would be possible in the program above to skip the <code>spawn</code>s and collect a <code>Vec</code> of <code>Future</code>s, then <code>await</code> on those. However, that would once again end up sequential in nature. Spawning allows all of those <code>Future</code>s to run concurrently, and be polled by the <code>tokio</code> runtime itself. It would also be possible to use <a href=\"https://docs.rs/futures/0.3.5/futures/future/fn.join_all.html\"><code>join_all</code></a> to poll all of the <code>Future</code>s, but it <a href=\"https://github.com/tokio-rs/tokio/issues/2401#issuecomment-612858572\">has some performance issues</a>. So best to stick with <code>tokio::spawn</code>.</p>\n<p>Let's address the simpler one first: proper error handling.</p>\n<h2 id=\"error-handling\">Error handling</h2>\n<p>The basic concept of error handling is that we want the errors from the spawned tasks to be detected in the main tasks, and then cause the application to exit. One way to handle that is to return the <code>Err</code> values from the spawned tasks directly, and then pick them up with the <code>JoinHandle</code> that <code>spawn</code> returns. This sounds nice, but naively implemented will result in checking the error responses one at a time. 
Instead, we'd rather fail early, by detecting that (for example) the 57th request failed and immediately terminating the application.</p>\n<p>You <em>could</em> do some kind of a &quot;tell me which is the first <code>JoinHandle</code> that's ready,&quot; but it's not the way I initially implemented it, and some quick Googling indicated <a href=\"https://github.com/tokio-rs/tokio/issues/2401\">you'd have to be careful about which library functions you use</a>. Instead, we'll try a different approach using an <code>mpsc</code> (multi-producer, single-consumer).</p>\n<p>Here's the basic idea. Let's pretend there are 100 URLs in the file. We'll spawn 100 tasks. Each of those tasks will write a single value onto the <code>mpsc</code> channel: a <code>Result&lt;(), Error&gt;</code>. Then, in the <code>main</code> task, we'll read 100 values off of the channel. If any of them are <code>Err</code>, we exit the program immediately. Otherwise, if we read off 100 <code>Ok</code> values, we exit successfully.</p>\n<p>Before we read the file, we don't know how many lines will be in it. So we're going to use an unbounded channel. This isn't generally recommended practice, but it ties in closely with my second complaint above: we're spawning a separate task for each line in the file instead of doing something more intelligent like a job queue. In other words, if we can safely spawn N tasks, we can safely have an unbounded channel of size N.</p>\n<p>Alright, let's see the code in question!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::io::BufRead;\n\n#[tokio::main]\nasync fn main() -&gt; Result&lt;(), Box&lt;dyn std::error::Error&gt;&gt; {\n    let file = std::fs::File::open(&quot;urls.txt&quot;)?;\n    let buffile = std::io::BufReader::new(file);\n\n    println!(&quot;URL,Status&quot;);\n\n    let client = reqwest::Client::new();\n\n    &#x2F;&#x2F; Create the channel. 
tx will be the sending side (each spawned task),\n    &#x2F;&#x2F; and rx will be the receiving side (the main task after spawning).\n    let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel();\n\n    &#x2F;&#x2F; Keep track of how many lines are in the file, and therefore\n    &#x2F;&#x2F; how many tasks we spawned\n    let mut count = 0;\n\n    for line in buffile.lines() {\n        let line = line?;\n\n        let client = client.clone();\n        &#x2F;&#x2F; Each spawned task gets its own copy of tx\n        let tx = tx.clone();\n        tokio::spawn(async move {\n            &#x2F;&#x2F; Use a map to say: if the request went through\n            &#x2F;&#x2F; successfully, then print it. Otherwise:\n            &#x2F;&#x2F; keep the error\n            let msg = client.get(&amp;line).send().await.map(|resp| {\n                println!(&quot;{},{}&quot;, line, resp.status().as_u16());\n            });\n            &#x2F;&#x2F; And send the message to the channel. We ignore errors here.\n            &#x2F;&#x2F; An error during sending would mean that the receiving side\n            &#x2F;&#x2F; is already closed, which would indicate either programmer\n            &#x2F;&#x2F; error, or that our application is shutting down because\n            &#x2F;&#x2F; another task generated an error.\n            tx.send(msg).unwrap();\n        });\n\n        &#x2F;&#x2F; Increase the count of spawned tasks\n        count += 1;\n    }\n\n    &#x2F;&#x2F; Drop the sending side, so that we get a None when\n    &#x2F;&#x2F; calling rx.recv() one final time. 
This allows us to\n    &#x2F;&#x2F; test some extra assertions below\n    std::mem::drop(tx);\n\n    let mut i = 0;\n    loop {\n        match rx.recv().await {\n            &#x2F;&#x2F; All senders are gone, which must mean that\n            &#x2F;&#x2F; we&#x27;re at the end of our loop\n            None =&gt; {\n                assert_eq!(i, count);\n                break Ok(());\n            }\n            &#x2F;&#x2F; Something finished successfully, make sure\n            &#x2F;&#x2F; that we haven&#x27;t reached the final item yet\n            Some(Ok(())) =&gt; {\n                assert!(i &lt; count);\n            }\n            &#x2F;&#x2F; Oops, an error! Time to exit!\n            Some(Err(e)) =&gt; {\n                assert!(i &lt; count);\n                return Err(From::from(e));\n            }\n        }\n        i += 1;\n    }\n}\n</code></pre>\n<p>With this in place, we now have a proper concurrent program that does error handling correctly. Nifty! Before we hit the job queue, let's clean this up a bit.</p>\n<h2 id=\"workers\">Workers</h2>\n<p>The previous code works well. It allows us to spawn multiple worker tasks, and then wait for all of them to complete, handling errors when they occur. Let's generalize this! We're doing this now since it will make the final step in this blog post much easier.</p>\n<p>We'll put all of the code for this in a separate module of our project. The code will be mostly the same as what we had before, except we'll have a nice <code>struct</code> to hold onto our data, and we'll be more explicit about the error type. 
Put this code into <code>src/workers.rs</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use is_type::Is; &#x2F;&#x2F; fun trick, we&#x27;ll look at it below\nuse std::future::Future;\nuse tokio::sync::mpsc;\n\n&#x2F;&#x2F;&#x2F; Spawn and then run workers to completion, handling errors\npub struct Workers&lt;E&gt; {\n    count: usize,\n    tx: mpsc::UnboundedSender&lt;Result&lt;(), E&gt;&gt;,\n    rx: mpsc::UnboundedReceiver&lt;Result&lt;(), E&gt;&gt;,\n}\n\nimpl&lt;E: Send + &#x27;static&gt; Workers&lt;E&gt; {\n    &#x2F;&#x2F;&#x2F; Create a new Workers value\n    pub fn new() -&gt; Self {\n        let (tx, rx) = mpsc::unbounded_channel();\n        Workers { count: 0, tx, rx }\n    }\n\n    &#x2F;&#x2F;&#x2F; Spawn a new task to run inside this Workers\n    pub fn spawn&lt;T&gt;(&amp;mut self, task: T)\n    where\n        &#x2F;&#x2F; Make sure we can run the task\n        T: Future + Send + &#x27;static,\n        &#x2F;&#x2F; And a weird trick: make sure that the output\n        &#x2F;&#x2F; from the task is Result&lt;(), E&gt;\n        &#x2F;&#x2F; Equality constraints would make this much nicer\n        &#x2F;&#x2F; See: https:&#x2F;&#x2F;github.com&#x2F;rust-lang&#x2F;rust&#x2F;issues&#x2F;20041\n        T::Output: Is&lt;Type = Result&lt;(), E&gt;&gt;,\n    {\n        &#x2F;&#x2F; Get a new copy of the send side\n        let tx = self.tx.clone();\n        &#x2F;&#x2F; Spawn a new task\n        tokio::spawn(async move {\n            &#x2F;&#x2F; Run the provided task and get its result\n            let res = task.await;\n            &#x2F;&#x2F; Send the result to the channel\n            &#x2F;&#x2F; This should never fail, so we panic if something goes wrong\n            match tx.send(res.into_val()) {\n                Ok(()) =&gt; (),\n                &#x2F;&#x2F; could use .unwrap, but that would require Debug constraint\n                Err(_) =&gt; panic!(&quot;Impossible happened! 
tx.send failed&quot;),\n            }\n        });\n        &#x2F;&#x2F; One more worker to wait for\n        self.count += 1;\n    }\n\n    &#x2F;&#x2F;&#x2F; Finish running all of the workers, exiting when the first one errors or all of them complete\n    pub async fn run(mut self) -&gt; Result&lt;(), E&gt; {\n        &#x2F;&#x2F; Make sure we don&#x27;t wait for ourself here\n        std::mem::drop(self.tx);\n        &#x2F;&#x2F; How many workers have completed?\n        let mut i = 0;\n\n        loop {\n            match self.rx.recv().await {\n                None =&gt; {\n                    assert_eq!(i, self.count);\n                    break Ok(());\n                }\n                Some(Ok(())) =&gt; {\n                    assert!(i &lt; self.count);\n                }\n                Some(Err(e)) =&gt; {\n                    assert!(i &lt; self.count);\n                    return Err(e);\n                }\n            }\n            i += 1;\n        }\n    }\n}\n</code></pre>\n<p>Now in <code>src/main.rs</code>, we're going to get to focus on just our business logic... and error handling. Have a look at the new contents:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F; Indicate that we have another module\nmod workers;\n\nuse std::io::BufRead;\n\n&#x2F;&#x2F;&#x2F; Create a new error type to handle the two ways errors can happen.\n#[derive(Debug)]\nenum AppError {\n    IO(std::io::Error),\n    Reqwest(reqwest::Error),\n}\n\n&#x2F;&#x2F; And now implement some boilerplate From impls to support ? 
syntax\nimpl From&lt;std::io::Error&gt; for AppError {\n    fn from(e: std::io::Error) -&gt; Self {\n        AppError::IO(e)\n    }\n}\n\nimpl From&lt;reqwest::Error&gt; for AppError {\n    fn from(e: reqwest::Error) -&gt; Self {\n        AppError::Reqwest(e)\n    }\n}\n\n#[tokio::main]\nasync fn main() -&gt; Result&lt;(), AppError&gt; {\n    let file = std::fs::File::open(&quot;urls.txt&quot;)?;\n    let buffile = std::io::BufReader::new(file);\n\n    println!(&quot;URL,Status&quot;);\n\n    let client = reqwest::Client::new();\n    let mut workers = workers::Workers::new();\n\n    for line in buffile.lines() {\n        let line = line?;\n        let client = client.clone();\n        &#x2F;&#x2F; Use workers.spawn, and no longer worry about results\n        &#x2F;&#x2F; ? works just fine inside!\n        workers.spawn(async move {\n            let resp = client.get(&amp;line).send().await?;\n            println!(&quot;{},{}&quot;, line, resp.status().as_u16());\n            Ok(())\n        })\n    }\n\n    &#x2F;&#x2F; Wait for the workers to complete\n    workers.run().await\n}\n</code></pre>\n<p>There's more noise around error handling, but overall the code is easier to understand. Now that we have that out of the way, we're finally ready to tackle the last piece of this...</p>\n<h2 id=\"job-queue\">Job queue</h2>\n<p>Let's review again at a high level how we do error handling with workers. We set up a channel to allow each worker task to send its results to a single receiver, the main task. We used <code>mpsc</code>, or &quot;multi-producer single-consumer.&quot; That matches up with what we just described, right?</p>\n<p>OK, a job queue is kind of similar. We want to have a single task that reads lines from the file and feeds them into a channel. Then, we want multiple workers to read values from the channel. This is &quot;single-producer multi-consumer.&quot; Unfortunately, <code>tokio</code> doesn't provide such a channel out of the box. 
After I asked on Twitter, I <a href=\"https://twitter.com/gallabytes/status/1300193419084460033?s=20\">was recommended</a> to use <a href=\"https://crates.io/crates/async-channel\">async-channel</a>, which provides a &quot;multi-producer multi-consumer.&quot; That works for us!</p>\n<p>Thanks to our earlier <code>Workers</code> <code>struct</code> refactor, this is now pretty easy. Let's have a look at the modified <code>main</code> function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() -&gt; Result&lt;(), AppError&gt; {\n    let file = std::fs::File::open(&quot;urls.txt&quot;)?;\n    let buffile = std::io::BufReader::new(file);\n\n    println!(&quot;URL,Status&quot;);\n\n    &#x2F;&#x2F; Feel free to set this to any number (&gt; 0) you want\n    &#x2F;&#x2F; At a value of 4, this could comfortably fit in OS threads\n    &#x2F;&#x2F; But tasks are certainly up to the challenge, and will scale\n    &#x2F;&#x2F; up more nicely for large numbers and more complex applications\n    const WORKERS: usize = 4;\n    let client = reqwest::Client::new();\n    let mut workers = workers::Workers::new();\n    &#x2F;&#x2F; A buffer twice the number of workers is a common choice\n    let (tx, rx) = async_channel::bounded(WORKERS * 2);\n\n    &#x2F;&#x2F; Spawn the task to fill up the queue\n    workers.spawn(async move {\n        for line in buffile.lines() {\n            let line = line?;\n            tx.send(line).await.unwrap();\n        }\n        Ok(())\n    });\n\n    &#x2F;&#x2F; Spawn off the individual workers\n    for _ in 0..WORKERS {\n        let client = client.clone();\n        let rx = rx.clone();\n        workers.spawn(async move {\n            loop {\n                match rx.recv().await {\n                    &#x2F;&#x2F; uses Err to represent a closed channel due to tx being dropped\n                    Err(_) =&gt; break Ok(()),\n                    
Ok(line) =&gt; {\n                        let resp = client.get(&amp;line).send().await?;\n                        println!(&quot;{},{}&quot;, line, resp.status().as_u16());\n                    }\n                }\n            }\n        })\n    }\n\n    &#x2F;&#x2F; Wait for the workers to complete\n    workers.run().await\n}\n</code></pre>\n<p>And just like that, we have a concurrent job queue! It's everything we could have wanted!</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I'll admit, when I wrote the post last week, I didn't think I'd be going this deep into the topic. But once I started playing with solutions, I decided I wanted to implement a full job queue for this.</p>\n<p>I hope you found this topic interesting! If you want more Rust content, please <a href=\"https://twitter.com/snoyberg\">hit me up on Twitter</a>. Also, feel free to check out some of our other Rust content:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/pid1/\">Implementing pid1 with Rust and async/await</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/10/is-rust-functional/\">Is Rust functional?</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/",
        "slug": "http-status-codes-async-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "HTTP status codes with async Rust",
        "description": "Following up on a previous blog post, we'll examine different levels of async code while building a semi-useful program that checks web links",
        "updated": null,
        "date": "2020-09-02",
        "year": 2020,
        "month": 9,
        "day": 2,
        "taxonomies": {
          "tags": [
            "rust"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/http-status-codes-async-rust/",
        "components": [
          "blog",
          "http-status-codes-async-rust"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "fully-blocking",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#fully-blocking",
            "title": "Fully blocking",
            "children": []
          },
          {
            "level": 2,
            "id": "ditching-the-blocking-api",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#ditching-the-blocking-api",
            "title": "Ditching the blocking API",
            "children": []
          },
          {
            "level": 2,
            "id": "where-blocking-is-fine",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#where-blocking-is-fine",
            "title": "Where blocking is fine",
            "children": []
          },
          {
            "level": 2,
            "id": "concurrent-requests",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#concurrent-requests",
            "title": "Concurrent requests",
            "children": []
          },
          {
            "level": 2,
            "id": "wait-for-me",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#wait-for-me",
            "title": "Wait for me!",
            "children": []
          },
          {
            "level": 2,
            "id": "error-handling",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#error-handling",
            "title": "Error handling",
            "children": []
          },
          {
            "level": 2,
            "id": "workers",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#workers",
            "title": "Workers",
            "children": []
          },
          {
            "level": 2,
            "id": "job-queue",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#job-queue",
            "title": "Job queue",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 3821,
        "reading_time": 20,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/",
            "title": "Different levels of async in Rust"
          }
        ]
      },
      {
        "relative_path": "blog/different-levels-async-rust.md",
        "colocated_path": null,
"content": "<p>First there was cooperative multiprocessing. Then there were processes. An operating system could run multiple processes, each performing a series of sequential, blocking actions. Then came threads. A single process could spawn off multiple threads, each performing its own series of sequential, blocking actions. (And really, the story starts earlier, with hardware interrupts and the like, but hopefully you'll forgive a little simplification.)</p>\n<p>Sitting around and waiting for stuff? Ain't nobody got time for that. Spawning threads at the operating system level? That's too costly for a lot of what we do these days.</p>\n<p>Perhaps the first foray into asynchronous programming that really hit the mainstream was the Nginx web server, which boasted a huge increase in throughput by natively using asynchronous I/O system calls. Some programming languages, like Go, Erlang, and Haskell, built runtime systems that support spawning cheap green threads, and handle the muck of asynchronous system calls for you under the surface. Other languages, such as JavaScript and more recently Rust, provide explicit asynchronous support in the language.</p>\n<p>Given how everyone seems to be bending over backwards to make it easy for you to make your code async-friendly, it would be fair to assume that all code at all times should be async. And it would also be fair to guess that, if you stick the word <code>async</code> on a function, it's completely asynchronous. Unfortunately, neither of these assumptions is true. This post is intended to dive into this topic, in Rust, using a simple bit of code for motivation.</p>\n<p><strong>Prior knowledge</strong> This post will assume that you are familiar with the Rust programming language, as well as its <code>async/.await</code> syntax. 
If you'd like to brush up on either language basics or async code, I'd recommend checking out <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">FP Complete's Rust Crash Course</a>.</p>\n<p><strong>Update</strong> I've published a follow up to this post <a href=\"https://tech.fpcomplete.com/blog/http-status-codes-async-rust/\">covering a more sophisticated HTTP client example</a>.</p>\n<h2 id=\"blocking-vs-non-blocking-calls\">Blocking vs non-blocking calls</h2>\n<p>Just to make sure we're on the same page, I'm going to define here what a blocking versus non-blocking call is and explain how this impacts async vs sync. If you're already highly experienced with async programming, you can probably skip this section.</p>\n<p>For the most part, &quot;async code&quot; means code that relies on non-blocking, rather than blocking, system calls for performing I/O. By contrast, sync (or synchronous) code relies on blocking system calls. As a simple example, consider a web server that has 20 open sockets from web clients, and needs to read data from all of them. One approach would be to use the blocking <code>recv</code> system call. This will:</p>\n<ul>\n<li>Turn control of the current operating system thread over to the kernel</li>\n<li>Wait in the kernel until either new data is available, the connection dies, or some error occurs</li>\n<li>Wake up the operating system thread and let the program continue executing</li>\n</ul>\n<p>If you follow this approach, and you have the aforementioned 20 connections, you essentially have two choices:</p>\n<ol>\n<li>Have a single thread handle each of the connections one at a time</li>\n<li>Spawn 20 separate operating system threads, and let each of them handle a single connection</li>\n</ol>\n<p>(1) would be an abysmal client experience. If a slow client gets in line with a connection, you could easily end up waiting a <strong>long</strong> time to make your request. 
(Imagine going to a supermarket with a single checkout line, no self checkout, and the person at the front of the line is paying in coins.) (2) is much better, but spawning off those operating system threads is ultimately a relatively costly activity.</p>\n<p>Both of these approaches are synchronous. By contrast, an asynchronous approach could handle all 20 connections in a single operating system thread, with a basic approach of:</p>\n<ul>\n<li>Register a callback function to be triggered on new data availability</li>\n<li>Register all 20 of the connections to trigger that callback</li>\n<li>Within that function, check each socket to see if data is available, and if so, handle it</li>\n</ul>\n<p>Writing code like this manually can be fairly complicated, which is why many languages have added either <code>async</code> syntax or some kind of green thread based runtime. And it seems overall that this simply makes your program better. But let's test those ideas out in practice.</p>\n<h2 id=\"count-by-lines\">Count by lines</h2>\n<p>Let's write a simple, synchronous, single threaded, blocking program in Rust. It will take all of the lines in a file (hard-coded to <code>input.txt</code>), and print to standard output the number of characters on each line. It will exit the program on any errors. The program is pretty straightforward:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F; We want to use the lines method from this trait\nuse std::io::BufRead;\n\n&#x2F;&#x2F; Lets us use ? 
for simple error handling\nfn main() -&gt; Result&lt;(), std::io::Error&gt; {\n    &#x2F;&#x2F; Try to open the file\n    let file = std::fs::File::open(&quot;input.txt&quot;)?;\n    &#x2F;&#x2F; Create a buffered version of the file so we can use lines\n    let buffered = std::io::BufReader::new(file);\n\n    &#x2F;&#x2F; Iterate through each line in the file\n    for line in buffered.lines() {\n        &#x2F;&#x2F; But we get a Result each time, get rid of the errors\n        let line = line?;\n        &#x2F;&#x2F; And print out the line length and content\n        println!(&quot;{} {}&quot;, line.len(), line);\n    }\n\n    &#x2F;&#x2F; Everything went fine, so we return Ok\n    Ok(())\n}\n</code></pre>\n<p><strong>Recommendation</strong> I encourage readers to start playing along with this code themselves. Assuming you <a href=\"https://www.rust-lang.org/learn/get-started\">have installed Rust</a>, you can run <code>cargo new asyncpost</code> and then copy-paste the code above into <code>src/main.rs</code>. Then add in the <code>input.txt</code> file here:</p>\n<pre><code>Hello world\nHope you have a great day!\nGoodbye\n</code></pre>\n<p>And if you run <code>cargo run</code>, you should get the expected output of:</p>\n<pre><code>11 Hello world\n26 Hope you have a great day!\n7 Goodbye\n</code></pre>\n<p>I won't comment on running the code yourself any more, but I recommend you keep updating the code and running <code>cargo run</code> throughout reading this post.</p>\n<p>Anyway, back to async code. The code above is completely synchronous. Every I/O action will fully block the main (and only) thread in our program. 
To be crystal clear, let's see all the places this is relevant (ignoring error cases):</p>\n<ol>\n<li>Opening the file makes a blocking <code>open</code> system call</li>\n<li>As we iterate through the lines, the <code>BufRead</code> trait will implicitly be triggering multiple <code>read</code> system calls, which block waiting for data to be available from the file descriptor</li>\n<li>The <code>println!</code> macro will make <code>write</code> system calls on the <code>stdout</code> file descriptor, each of which is a blocking call</li>\n<li>Finally, when the <code>file</code> is dropped, a blocking <code>close</code> system call closes the file descriptor</li>\n</ol>\n<p><strong>NOTE</strong> I'm using POSIX system call terms here; things may be slightly different on some operating systems, and radically different on Windows. That shouldn't take away from the main thrust of the message here.</p>\n<h2 id=\"make-it-async\">Make it async!</h2>\n<p>The most straightforward way to write asynchronous programs in Rust is to use <code>async/await</code> syntax. Let's naively try simply converting our <code>main</code> function into something <code>async</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">async fn main() -&gt; Result&lt;(), std::io::Error&gt; {\n    &#x2F;&#x2F; code inside is unchanged\n}\n</code></pre>\n<p>That's going to fail (I <em>did</em> say naively). The reason is that you can't simply run an <code>async</code> function like <code>main</code>. Instead, you need to provide an executor that knows how to handle all of the work there. The most popular library for this is <code>tokio</code>, and it provides a nice convenience macro to make this really easy. 
First, let's modify our <code>Cargo.toml</code> file to add the <code>tokio</code> dependency, together with all optional features turned on (it will be convenient later):</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[dependencies]\ntokio = { version = &quot;0.2.22&quot;, features = [&quot;full&quot;] }\n</code></pre>\n<p>And then we stick the appropriate macro in front of our <code>main</code> function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() -&gt; Result&lt;(), std::io::Error&gt; {\n    &#x2F;&#x2F; unchanged\n}\n</code></pre>\n<p>And just like that, we have an asynchronous version of our program! Thank you very much everyone, have a great day, I'll see you later.</p>\n<h2 id=\"but-wait\">&quot;But wait!&quot;</h2>\n<p>&quot;But wait!&quot; you may be saying. &quot;How does Rust automatically know to rewrite all of those blocking, synchronous system calls into asynchronous, non-blocking ones just by sticking the <code>async</code> keyword on there???&quot;</p>\n<p>Answer: it doesn't. The cake is a lie.</p>\n<p>This is the first message I want to bring home. The <code>async</code> keyword allows for some special syntax (we'll see it in a little bit). It makes it much easier to write asynchronous programs. It relies on having some kind of an executor like <code>tokio</code> around to actually run things. But that's all it does. It does <em>not</em> provide any kind of asynchronous system call support. You've got to do that on your own.</p>\n<p>Fortunately for us, <code>tokio</code> <em>does</em> bring this to the table. Instead of using the <code>std</code> crate's versions of I/O functions, we'll instead lean on <code>tokio</code>'s implementation. 
I'm now going to follow one of my favorite development techniques, &quot;change the code and ask the compiler for help.&quot; Let's dive in!</p>\n<p>The first synchronous call we make is to open the file with <code>std::fs::File::open</code>. Fortunately for us, <code>tokio</code> provides a replacement for this method via its replacement <code>File</code> struct. We can simply swap out <code>std</code> with <code>tokio</code> and get the line:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let file = tokio::fs::File::open(&quot;input.txt&quot;)?;\n</code></pre>\n<p>Unfortunately, that's not going to compile, not even a little bit:</p>\n<pre><code>error[E0277]: the `?` operator can only be applied to values that implement `std::ops::Try`\n --&gt; src\\main.rs:8:16\n  |\n8 |     let file = tokio::fs::File::open(&quot;input.txt&quot;)?;\n  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  |                |\n  |                the `?` operator cannot be applied to type `impl std::future::Future`\n  |                help: consider using `.await` here: `tokio::fs::File::open(&quot;input.txt&quot;).await?`\n  |\n  = help: the trait `std::ops::Try` is not implemented for `impl std::future::Future`\n  = note: required by `std::ops::Try::into_result`\n</code></pre>\n<p>(To the observant among you: yes, I'm compiling this on Windows.)</p>\n<p>So what exactly does this mean? The <code>open</code> method from <code>std</code> returned a <code>Result</code> to represent &quot;this may have had an error.&quot; And we stuck a <code>?</code> after the <code>open</code> call to say &quot;hey, if there was an error, please exit this function with that error.&quot; But <code>tokio</code>'s <code>open</code> isn't returning a <code>Result</code>. Instead, it's returning a <code>Future</code>. 
This value represents a promise that, at some point in the future, we'll get back a <code>Result</code>.</p>\n<p>But I want the <code>Result</code> now! How do I force my program to wait for it? Easy: <code>.await</code>. The closer-to-compiling version of our code is:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let file = tokio::fs::File::open(&quot;input.txt&quot;).await?;\n</code></pre>\n<p>Now we're saying to the compiler:</p>\n<ol>\n<li>Open the file</li>\n<li>Wait for the <code>Future</code> to complete to tell me the file result is ready (via <code>.await</code>)</li>\n<li>Then, if there was an error, exit this function (via <code>?</code>)</li>\n</ol>\n<p>And we end up with the <code>file</code> variable holding a <code>tokio::fs::File</code> struct. Awesome!</p>\n<p>At this point, our code still doesn't compile, since there's a mismatch between the <code>std</code> and <code>tokio</code> sets of traits. If you want to have some fun, try to fix the code yourself. 
But I'll just show you the completed version here:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F; Replaces std::io::BufRead trait\nuse tokio::io::AsyncBufReadExt;\n&#x2F;&#x2F; We can&#x27;t generally use normal Iterators with async code\n&#x2F;&#x2F; Instead, we use Streams\nuse tokio::stream::StreamExt;\n\n#[tokio::main]\nasync fn main() -&gt; Result&lt;(), std::io::Error&gt; {\n    let file = tokio::fs::File::open(&quot;input.txt&quot;).await?;\n    let buffered = tokio::io::BufReader::new(file);\n\n    &#x2F;&#x2F; Since we can&#x27;t use a for loop, we&#x27;ll manually\n    &#x2F;&#x2F; create our Stream of lines\n    let mut lines = buffered.lines();\n\n    &#x2F;&#x2F; Now keep popping off another, waiting for each\n    &#x2F;&#x2F; I&#x2F;O action to complete via .await\n    while let Some(line) = lines.next().await {\n        &#x2F;&#x2F; Error handling again\n        let line = line?;\n        &#x2F;&#x2F; And print out the line length and content\n        println!(&quot;{} {}&quot;, line.len(), line);\n    }\n\n    Ok(())\n}\n</code></pre>\n<p>It's wordier, but it gets the job done. And now we have a fully asynchronous version of our program... right?</p>\n<p>Well, no, not really. I mentioned four blocking I/O calls above: <code>open</code>, <code>read</code> (from the file), <code>write</code> (to <code>stdout</code>), and <code>close</code>. By switching to <code>tokio</code>, <code>open</code> and <code>read</code> are now using async versions of these system calls. <code>close</code> is a bit more complicated, since it happens implicitly when dropping a value. Ultimately, this code falls back to the same <code>close</code> system call, since it uses <code>std</code>'s implementation of <code>File</code> under the surface. 
But for our purposes, let's pretend like this doesn't actually block.</p>\n<p>No, the more interesting thing remaining is the <code>println!</code> macro's usage of <code>write</code>. This is still fully blocking I/O. And what I find informative is that it's sitting in the middle of a fully asynchronous <code>while</code> loop leveraging <code>.await</code>! This hopefully drives home another one of the core messages I mentioned above: being async isn't a binary on/off. You can have programs that are more or less asynchronous, depending on how many of the system calls get replaced.</p>\n<p>We'll get to replacing <code>println!</code> at the end of this post, but I want to point out something somewhat striking first. If you look at the examples in the <code>tokio</code> docs (such as the <a href=\"https://docs.rs/tokio/0.2.22/tokio/io/index.html\"><code>tokio::io</code> module</a>), you'll notice that they're using <code>println!</code> themselves. Why would a library for async I/O use blocking calls?!?</p>\n<h2 id=\"it-s-not-always-worth-it\">It's not always worth it</h2>\n<p>Let's return to our web server with 20 connections. Let's pretend we wrote a half-async version of that web server. It uses non-blocking I/O for reading data from the clients. That means our web server will not need 20 independent threads, and it will know when data is available. However, like our program above, it's going to produce output (data sent back to the clients) using blocking I/O calls. How will this affect our web server's behavior?</p>\n<p>Well, it's better than the worst case we described above. We won't block the entire server because one client is sending a really slow request. We'll be able to wait until a client fully sends its request before we put together our response and send it back. However, since we're using a blocking call to send the data back, if that same slow client is also slow at <em>receiving</em> data, we'll be back to square one with a laggy server. 
And it won't just slow down sending of responses. We'll end up blocking <em>all</em> I/O, such as accepting new connections and receiving on existing connections.</p>\n<p>But let's pretend, just for a moment, that we know that each and every one of these clients has a super fast receive rate. The blocking <code>send</code> calls we make will always complete in something insanely fast like a nanosecond. Would we care that they're blocking calls? No, probably not. I don't mind blocking a system thread for one nanosecond. I care about having long and possibly indeterminate blocking I/O calls.</p>\n<p>The situation with <code>stdout</code> is closer to this. It's generally a safe assumption that outputting data to <code>stdout</code> is only going to block for a short duration of time. And therefore, most people think using <code>println!</code> is a fine thing to do in async code, and it's not worth rewriting to something more complex. Taking this a step further: many things we don't think of as blocking may, in fact, block. For example, reading and writing memory that is memory-mapped to files (via <code>mmap</code>) may involve blocking I/O. Generally, it's impossible to expunge all traces of blocking behavior in a program.</p>\n<p>But this &quot;it's not always worth it&quot; goes much deeper. Let's review our program above. With the new async I/O calls, our program is going to:</p>\n<ol>\n<li>Make a non-blocking system call to open a file descriptor</li>\n<li>Block (via <code>.await</code>) until the file is open</li>\n<li>In a loop:\n<ol>\n<li>Read data from the descriptor with a non-blocking system call</li>\n<li>Block (via <code>.await</code>) for a complete line to be read</li>\n<li>Make a blocking call to <code>write</code> to output data to <code>stdout</code></li>\n</ol>\n</li>\n<li>Make a blocking <code>close</code> system call</li>\n</ol>\n<p>Did moving from blocking to non-blocking calls help us at all? Absolutely not! 
Our program is inherently single threaded and sequential. We were previously blocking our main thread inside <code>open</code> and <code>read</code> system calls. Now we're blocking that same thread waiting for the non-blocking equivalents to complete.</p>\n<img style=\"max-width:100%\" alt=\"Blocking I/O with more steps\" src=\"/images/blog/blocking-more-steps.jpg\">\n<p>So just because we <em>can</em> make something asynchronous doesn't mean it's always better code. By changing our program above, we've made it significantly more complex, added extra dependencies, and almost certainly made it slower to run.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>To sum up what we covered in this post:</p>\n<ul>\n<li>Async code can be hard to write manually</li>\n<li>Rust (and other languages) provide async syntax to make it easier</li>\n<li>You need an executor to run async code</li>\n<li>But that's not enough: you need to replace blocking I/O calls with non-blocking calls</li>\n<li>Libraries like <code>tokio</code> provide both the executor and the non-blocking functions</li>\n<li>You can partially asyncify a program</li>\n<li>And it's not always worth converting to async</li>\n</ul>\n<p>I wanted to cover a more complex example of async in this post, but it's already on the long side. Instead, I'll follow up in a later post with a program that makes HTTP requests and checks their status codes. I'll update this post with a link when it's ready. Stay tuned!</p>\n<p><strong>Update</strong> And here's that updated blog post! <a href=\"https://tech.fpcomplete.com/blog/http-status-codes-async-rust/\">HTTP status codes with async Rust</a></p>\n<p>And finally...</p>\n<h2 id=\"appendix-non-blocking-output\">Appendix: non-blocking output</h2>\n<p>I promised you I'd end with an example of replacing <code>println!</code> with non-blocking I/O. Remember, this isn't something I'm generally recommending you do. 
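</p>\n<p>As an aside: when multiple tasks or threads really do need to share an output handle, a common way to keep their writes from interleaving is to funnel every message through a single writer over a channel. Here is a minimal sketch of that pattern using plain <code>std</code> threads and <code>mpsc</code> (the same idea carries over to an async channel such as <code>tokio::sync::mpsc</code>; all names here are mine, not from the original program):</p>

```rust
use std::io::Write;
use std::sync::mpsc;
use std::thread;

// Spawn a single writer that owns the output handle. Producers send
// whole messages over the channel, so their bytes can never interleave.
fn spawn_writer<W: Write + Send + 'static>(
    mut out: W,
    rx: mpsc::Receiver<String>,
) -> thread::JoinHandle<W> {
    thread::spawn(move || {
        for msg in rx {
            out.write_all(msg.as_bytes()).unwrap();
        }
        out // hand the output back once every sender has hung up
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // In the blog's program this would be stdout; a Vec<u8> keeps the
    // sketch self-contained and checkable.
    let writer = spawn_writer(Vec::new(), rx);

    for i in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || tx.send(format!("line {}\n", i)).unwrap());
    }

    drop(tx); // close the channel so the writer thread can finish
    let bytes = writer.join().unwrap();
    // Each message arrives whole, though the order between threads may vary.
    assert!(String::from_utf8(bytes)
        .unwrap()
        .lines()
        .all(|l| l.starts_with("line ")));
}
```

<p>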
But it's informative to see it in action.</p>\n<p>The simplest way to do this is to use <code>format!</code> to create a <code>String</code> with the content you want to output, and then write it with <code>tokio::io::stdout()</code>. (Note that this forces a heap allocation of a <code>String</code>, something that doesn't occur with <code>println!</code> usage.) This looks like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">&#x2F;&#x2F; Use the trait\nuse tokio::io::AsyncWriteExt;\n\n&#x2F;&#x2F; same code as before\n\nlet mut stdout = tokio::io::stdout();\nwhile let Some(line) = lines.next().await {\n    let line = line?;\n    &#x2F;&#x2F; Note the extra newline character!\n    let s = format!(&quot;{} {}\\n&quot;, line.len(), line);\n    stdout.write_all(s.as_bytes()).await?;\n}\n</code></pre>\n<p>This looks safe overall, and in our program it will behave correctly. In general, however, this code has problems. From the <a href=\"https://docs.rs/tokio/0.2.22/tokio/io/fn.stdout.html\">docs on <code>stdout</code></a>:</p>\n<blockquote>\n<p>In particular you should be aware that writes using <code>write_all</code> are not guaranteed to occur as a single write, so multiple threads writing data with <code>write_all</code> may result in interleaved output.</p>\n</blockquote>\n<p>Since our program is sequential anyway, we don't need to worry about multiple threads creating interleaved output. But in general, that would be a concern. One approach to address that would be to create a channel of messages to be sent to <code>stdout</code>.</p>\n<p>But the most popular solution would be to simply use <code>println!</code> and similar macros.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/",
        "slug": "different-levels-async-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Different levels of async in Rust",
        "description": "Often, developers today look at asynchronous I/O as a good thing, and as a binary choice. The reality is more nuanced. In this post, we'll explore the situation with a simple Rust example.",
        "updated": null,
        "date": "2020-08-24",
        "year": 2020,
        "month": 8,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "image": "images/blog/blocking-more-steps.jpg",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/different-levels-async-rust/",
        "components": [
          "blog",
          "different-levels-async-rust"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "blocking-vs-non-blocking-calls",
            "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/#blocking-vs-non-blocking-calls",
            "title": "Blocking vs non-blocking calls",
            "children": []
          },
          {
            "level": 2,
            "id": "count-by-lines",
            "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/#count-by-lines",
            "title": "Count by lines",
            "children": []
          },
          {
            "level": 2,
            "id": "make-it-async",
            "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/#make-it-async",
            "title": "Make it async!",
            "children": []
          },
          {
            "level": 2,
            "id": "but-wait",
            "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/#but-wait",
            "title": "\"But wait!\"",
            "children": []
          },
          {
            "level": 2,
            "id": "it-s-not-always-worth-it",
            "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/#it-s-not-always-worth-it",
            "title": "It's not always worth it",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/#conclusion",
            "title": "Conclusion",
            "children": []
          },
          {
            "level": 2,
            "id": "appendix-non-blocking-output",
            "permalink": "https://tech.fpcomplete.com/blog/different-levels-async-rust/#appendix-non-blocking-output",
            "title": "Appendix: non-blocking output",
            "children": []
          }
        ],
        "word_count": 3140,
        "reading_time": 16,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/",
            "title": "HTTP status codes with async Rust"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/",
            "title": "Rust: Of course it compiles, right?"
          }
        ]
      },
      {
        "relative_path": "blog/devops-for-developers.md",
        "colocated_path": null,
        "content": "<p>In this post, I describe my personal journey as a developer skeptical\nof the seemingly ever-growing, ever more complex, array of &quot;ops&quot;\ntools. I move towards adopting some of these practices, ideas and\ntools. I write about how this journey helps me to write software\nbetter and understand discussions with the ops team at work.</p>\n<div style=\"border:1px solid black;background-color:#f8f8f8;margin-bottom:1em;padding: 0.5em 0.5em 0 0.5em;\">\n<p><strong>Table of Contents</strong></p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#on-being-skeptical\">On being skeptical</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#the-humble-app\">The humble app</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#disk-failures-are-not-that-common\">Disk failures are not that common</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#auto-deployment-is-better-than-manual\">Auto-deployment is better than manual</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#backups-become-worth-it\">Backups become worth it</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#deployment-staging\">Deployment staging</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#packaging-with-docker-is-good\">Packaging with Docker is good</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#custodians-multiple-processes-are-useful\">Custodians: multiple processes are useful</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#kubernetes-provides-exactly-that\">Kubernetes provides exactly that</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#declarative-is-good-vendor-lock-in-is-bad\">Declarative is good, vendor lock-in is bad</a></li>\n<li><a 
href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#more-advanced-rollout\">More advanced rollout</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#relationship-between-code-and-deployed-state\">Relationship between code and deployed state</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#argocd\">ArgoCD</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#infra-as-code\">Infra-as-code</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#where-the-dev-meets-the-ops\">Where the dev meets the ops</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#what-we-do\">What we do</a></li>\n</ul>\n</div>\n<h2 id=\"on-being-skeptical\">On being skeptical</h2>\n<p>I would characterise my attitudes to adopting technology in two\nstages:</p>\n<ul>\n<li>Firstly, I am conservative and dismissive, in that I will usually\ndisregard any popular new technology as a bandwagon or trend. I'm a\nslow adopter.</li>\n<li>Secondly, when I actually encounter a situation where I've suffered,\nI'll then circle back to that technology and give it a try, and if I\ncan really find the nugget of technical truth in there, then I'll\nadopt it.</li>\n</ul>\n<p>Here are some things that I disregarded for a year or more before\ntrying: Emacs, Haskell, Git, Docker, Kubernetes, Kafka. 
The whole\nNoSQL trend came, wreaked havoc, and went, while I had my back turned,\nbut I am considering using Redis for a cache at the moment.</p>\n<h2 id=\"the-humble-app\">The humble app</h2>\n<p>If you’re a developer like me, you’re probably used to writing your\nsoftware, spending most of your time developing, and then finally\ndeploying your software by simply creating a machine, either a\ndedicated machine or a virtual machine, and then uploading a binary of\nyour software (or source code if it’s interpreted), and then running\nit with the copy-pasted config of systemd or simply running the\nsoftware inside GNU screen. It's a secret shame that I've done this,\nbut it's the reality.</p>\n<p>You might use nginx to reverse-proxy to the service. Maybe you set up\na PostgreSQL database or MySQL database on that machine. And then you\nwalk away and test out the system, and later you realise you need some\nslight changes to the system configuration. So you SSH into the system\nand make the small tweaks necessary, such as port settings, encoding\nsettings, or an additional package you forgot to add. Sound familiar?</p>\n<p>But on the whole, your work here is done and for most services this is\npretty much fine. There are plenty of services running that you have\nseen in the past 30 years that have been running like this.</p>\n<h2 id=\"disk-failures-are-not-that-common\">Disk failures are not that common</h2>\n<p>Rhetoric about processes going down due to a hardware failure is\nprobably overblown. Hard drives don’t crash very often. They don’t\nreally wear out as quickly as they used to, and you can be running a\nsystem for years before anything even remotely concerning happens.</p>\n<h2 id=\"auto-deployment-is-better-than-manual\">Auto-deployment is better than manual</h2>\n<p>When you start to iterate a little bit quicker, you get bored of\nmanually building and copying and restarting the binary on the\nsystem. 
This is especially noticeable if you forget the steps later\non.</p>\n<!-- Implementing Auto-Deployment -->\n<p>If you’re a little bit more advanced you might have some special\nscripts or post-merge git hooks, so that when you push to your repo it\nwould apply to the same machine and you have some associated token on\nyour CI machine that is capable of uploading a binary and running a\ncommand like copy and restart (e.g. SSH key or API\nkey). Alternatively, you might implement a polling system on the\nactual production system which will check if any updates have occurred\nin Git and if so pull down a new binary. This is how we were doing\nthings in e.g. 2013.</p>\n<h2 id=\"backups-become-worth-it\">Backups become worth it</h2>\n<p>Eventually, if you're lucky, your service starts to become slightly\nmore important; maybe it’s used in business and people actually are\nusing it and storing valuable things in the database. You start to\nthink that back-ups are a good idea and worth the investment.</p>\n<!-- Redundancy of DB -->\n<p>You probably also have a script to back up the database, or replicate\nit on a separate machine, for redundancy.</p>\n<h2 id=\"deployment-staging\">Deployment staging</h2>\n<p>Eventually, you might have a staged deployment strategy. So you might\nhave a developer testing machine, you might have a QA machine, a\nstaging machine, and finally a production machine. All of these are\nconfigured in pretty much the same way, but they are deployed at\ndifferent times and probably the system administrator is the only one\nwith access to deploy to production.</p>\n<!-- Continuum -->\n<p>It’s clear by this point that I’m describing a continuum from &quot;hobby\nproject&quot; to &quot;enterprise serious business synergy solutions&quot;.</p>\n<h2 id=\"packaging-with-docker-is-good\">Packaging with Docker is good</h2>\n<p>Docker effectively leads to collapsing all of your system dependencies\nfor your binary to run into one contained package. 
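</p>\n<p>To make that concrete, a typical multi-stage Dockerfile for a compiled service looks something like this (a sketch only; the image tags and the <code>myservice</code> binary name are made up):</p>

```dockerfile
# Build stage: compile the binary with the full toolchain available
FROM rust:1.46 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Runtime stage: copy only the finished binary into a slim image,
# leaving the compiler and build caches behind
FROM debian:buster-slim
COPY --from=builder /app/target/release/myservice /usr/local/bin/myservice
CMD ["myservice"]
```

<p>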
This is good,\nbecause dependency management is hell. It's also highly wasteful,\nbecause its level of granularity is very wide. But this is a trade-off\nwe accept for the benefits.</p>\n<h2 id=\"custodians-multiple-processes-are-useful\">Custodians: multiple processes are useful</h2>\n<p>Docker doesn’t have much to say about starting and restarting\nservices. I’ve explored using CoreOS with the hosting provider Digital\nOcean, and simply running a fresh virtual machine, with the given\nDocker image.</p>\n<p>However, you quickly run into the problem of starting up and tearing\ndown:</p>\n<ul>\n<li>When you start the service, you need certain liveness checks\nand health checks, so if the new service fails to start then you should\nnot stop the existing service from running; you should\nkeep the existing ones running.</li>\n<li>If the process fails at any time during running then you should also\nrestart the process. I thought about this point a lot, and came to the\nconclusion that it’s better to have your process be restarted than to\nassume that the reason it failed was so dangerous that the process\nshouldn’t start again. Probably it’s more likely that there is an\nexception or memory issue that happened in a pathological case which\nyou can investigate in your logging system. But it doesn’t mean that\nyour users should suffer by having downtime.</li>\n<li>The natural progression of this functionality is to support\ndifferent rollout strategies. Do you want to switch everything to the\nnew system in one go, or do you want it to be deployed piece-by-piece?</li>\n</ul>\n<!-- Summary: You Realise Worth Of Ops Tools -->\n<p>It’s hard to fully appreciate the added value of ops systems like\nKubernetes, Istio/Linkerd, Argo CD, Prometheus, Terraform, etc. 
until\nyou decide to design a complete architecture yourself, from scratch,\nthe way you want it to work in the long term.</p>\n<h2 id=\"kubernetes-provides-exactly-that\">Kubernetes provides exactly that</h2>\n<p>What system happens to accept Docker images, provide custodianship,\nrollout strategies, and trivial redeploys? Kubernetes.</p>\n<p>It provides the classical monitoring and custodian responsibilities\nthat plenty of other systems have provided in the past. However, unlike\nsimply running a process and testing if it’s fine and then turning off\nanother process, Kubernetes buys into Docker all the way.  Processes\nare isolated from each other, in both the network and the file\nsystem. Therefore, you can very reliably start and stop the services\non the same machine. Nothing about a process's machine state is\npersistent, therefore you are forced to design your programs in a way\nthat state is explicitly stored either ephemerally, or elsewhere.</p>\n<!-- Cloud Managed Databases Make This Practical -->\n<p>In the past it might have been a little bit scarier to have your database\nrunning in such a system: what if it automatically wipes out the\ndatabase process? With today’s cloud-based deployments, it's more\ncommon to use a managed database such as that provided by Amazon,\nDigital Ocean, Google or Azure. The whole problem of updating and\nbacking up your database can pretty much be put to one\nside. Therefore, you are free to mess with the configuration or\ntopology of your cluster as much as you like without affecting your\ndatabase.</p>\n<h2 id=\"declarative-is-good-vendor-lock-in-is-bad\">Declarative is good, vendor lock-in is bad</h2>\n<p>A very appealing feature of a deployment system like Kubernetes is\nthat everything is automatic and declarative. 
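</p>\n<p>As a small example, telling Kubernetes to keep three replicas of a service alive, restarting them when they die, is just a short manifest (a sketch; the names and image are made up):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 3            # the custodian keeps three copies running
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
        - name: myservice
          image: registry.example.com/myservice:1.0.0
          ports:
            - containerPort: 8080
```

<p>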
You stick all of your\nconfiguration in simple YAML files (which is also a curse because YAML\nhas its own warts and it's not common to find formal schemas for it).\nThis is also known as &quot;infrastructure as code&quot;.</p>\n<p>Ideally, you should have as much as possible about your infrastructure\nin code checked in to a repo so that you can reproduce it and track\nit.</p>\n<p>There is also a much more straight-forward path to migrate from one\nservice provider to another service provider. Kubernetes is supported\non all the major service providers (Google, Amazon, Azure), therefore\nyou are less vulnerable to vendor lock-in. They also all provide\nmanaged databases that are standard (PostgreSQL, for example) with\ntheir normal wire protocols. If you were using the vendor-specific\nAPIs to achieve some of this, you'd be stuck on one vendor. I, for\nexample, am not sure whether to go with Amazon or Azure on a big\npersonal project right now. If I use Kubernetes, I am mitigating risk.</p>\n<p>With something like Terraform you can go one step further, in which\nyou write code that can create your cluster completely from\nscratch. This also makes you more vendor-independent.</p>\n<h2 id=\"more-advanced-rollout\">More advanced rollout</h2>\n<p>Your load balancer and your DNS can also be in code. Typically a load\nbalancer that does the job is nginx. However, for more advanced\ndeployments such as A/B or green/blue deployments, you may need\nsomething more advanced like Istio or Linkerd.</p>\n<p>Do I really want to deploy a new feature to all of my users? Maybe,\nthat might be easier. Do I want to deploy a different way of marketing\nmy product on the website to all users at once? If I do that, then I\ndon’t exactly know how effective it is. So, I could perhaps do a\ndeployment in which half of my users see one page and half of the\nusers see another page. 
These kinds of deployments are\nstraight-forwardly achieved with Istio/Linkerd-type service meshes,\nwithout having to change any code in your app.</p>\n<h2 id=\"relationship-between-code-and-deployed-state\">Relationship between code and deployed state</h2>\n<p>Let's think further than this.</p>\n<p>You've set up your cluster with your provider, or Terraform. You've\nset up your Kubernetes deployments and services. You've set up your CI\nto build your project, produce a Docker image, and upload the images\nto your registry. So far so good.</p>\n<p>Suddenly, you’re wondering, how do I actually deploy this? How do I\ncall Kubernetes, with the correct credentials, to apply this new\nDocker image to the appropriate deployment?</p>\n<p>Actually, this is still an ongoing area of innovation. An obvious way\nto do it is: you put some credentials on your CI system that give it access to\nrun kubectl, then set the new image name on the deployment, and that will try\nto do a deployment. Maybe the deployment fails; you can look at that\nresult in your CI dashboard.</p>\n<p>However, the question comes up: what is currently actually deployed\non production? Do we really have infrastructure as code here?</p>\n<p>It’s not that I edited the file and that update suddenly got\nreflected. There’s no file anywhere in Git that contains what the\ncurrent image is. Head scratcher.</p>\n<p>Ideally, you would have a repository somewhere which states exactly\nwhich image should be deployed right now. And if you change it in a\ncommit, and then later revert that commit, you should expect that\nproduction is also reverted to reflect the code, right?</p>\n<h2 id=\"argocd\">ArgoCD</h2>\n<p>One system which attempts to address this is ArgoCD. They implement\nwhat they call &quot;GitOps&quot;. All state of the system is reflected in a Git\nrepo somewhere. 
In Argo CD, after your GitHub/Gitlab/Jenkins/Travis CI\nsystem has pushed your Docker image to the Docker repository, it makes\na gRPC call to Argo, which becomes aware of the new image. As an\nadmin, you can now trivially look in the UI and click &quot;Refresh&quot; to\nredeploy the new version.</p>\n<h2 id=\"infra-as-code\">Infra-as-code</h2>\n<p>The common running theme in all of this is\ninfrastructure-as-code. It’s immutability. It’s declarative. It’s\nreducing the number of steps that the human has to do or care\nabout. It’s about being able to rewind. It’s about redundancy. And\nit’s about scaling easily.</p>\n<!-- Circling Back -->\n<p>When you really try to architect your own system, and your business\nwill lose money in the case of ops mistakes, then all of these\nadvantages of infrastructure as code start looking\nreally attractive.</p>\n<p>Before you really sit down and think about this stuff, however, it\nis pretty hard to empathise or sympathise with the kind of concerns\nthat people using these systems have.</p>\n<!-- Downsides/Tax -->\n<p>There are some downsides to these tools, as with any:</p>\n<ul>\n<li>Docker is quite wasteful of time and space</li>\n<li>Kubernetes is undoubtedly complex, and leans heavily on YAML</li>\n<li><a href=\"https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/\">All abstractions are leaky</a>,\ntherefore tools like this all leak</li>\n</ul>\n<h2 id=\"where-the-dev-meets-the-ops\">Where the dev meets the ops</h2>\n<p>Now that I’ve started looking into these things and appreciating their\nuse, I interact a lot more with the ops side of our DevOps team at work,\nand I can also be way more helpful in assisting them with the\ninformation that they need, and also in writing apps which anticipate the\nkind of deployment that is going to happen. 
The most difficult\nchallenge is typically metrics and logging for run-of-the-mill apps;\nI’m not talking about high-performance apps.</p>\n<!-- An Exercise -->\n<p>One way to bridge the gap between your ops team and dev team,\ntherefore, might be an exercise meeting in which you have a dev\nperson literally sit down and design an app architecture and\ninfrastructure, from the ground up, using the existing tools that\nthey are aware of, and then your ops team can point out the\nadvantages and disadvantages of their proposed solution. Certainly,\nI think I would have benefited from such a mentorship, even for an\nhour or two.</p>\n<!-- Head-In-The-Sand Also Works -->\n<p>It may be that your dev team and your ops team are completely separate\nand everybody’s happy. The devs write code, they push it, and then it\nmagically works in production and nobody has any issues. That’s\ncompletely fine. If anything it would show that you have a very good\nprocess. In fact, that’s pretty much how I’ve worked for the past\neight years at this company.</p>\n<p>However, you could derive some benefit if your teams are having\ndifficulty communicating.</p>\n<p>Finally, the tools in the ops world aren't perfect, and they're made\nby us devs. If you have a hunch that you can do better than these\ntools, you should learn more about them, and you might be right.</p>\n<h2 id=\"what-we-do\">What we do</h2>\n<p>FP Complete are using a great number of these tools, and we're writing\nour own, too. If you'd like to know more, email us at\n<a href=\"mailto:[email protected]\">[email protected]</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/",
        "slug": "devops-for-developers",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps for (Skeptical) Developers",
        "description": null,
        "updated": null,
        "date": "2020-08-16",
        "year": 2020,
        "month": 8,
        "day": 16,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/devops-for-developers/",
        "components": [
          "blog",
          "devops-for-developers"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "on-being-skeptical",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#on-being-skeptical",
            "title": "On being skeptical",
            "children": []
          },
          {
            "level": 2,
            "id": "the-humble-app",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#the-humble-app",
            "title": "The humble app",
            "children": []
          },
          {
            "level": 2,
            "id": "disk-failures-are-not-that-common",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#disk-failures-are-not-that-common",
            "title": "Disk failures are not that common",
            "children": []
          },
          {
            "level": 2,
            "id": "auto-deployment-is-better-than-manual",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#auto-deployment-is-better-than-manual",
            "title": "Auto-deployment is better than manual",
            "children": []
          },
          {
            "level": 2,
            "id": "backups-become-worth-it",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#backups-become-worth-it",
            "title": "Backups become worth it",
            "children": []
          },
          {
            "level": 2,
            "id": "deployment-staging",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#deployment-staging",
            "title": "Deployment staging",
            "children": []
          },
          {
            "level": 2,
            "id": "packaging-with-docker-is-good",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#packaging-with-docker-is-good",
            "title": "Packaging with Docker is good",
            "children": []
          },
          {
            "level": 2,
            "id": "custodians-multiple-processes-are-useful",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#custodians-multiple-processes-are-useful",
            "title": "Custodians multiple processes are useful",
            "children": []
          },
          {
            "level": 2,
            "id": "kubernetes-provides-exactly-that",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#kubernetes-provides-exactly-that",
            "title": "Kubernetes provides exactly that",
            "children": []
          },
          {
            "level": 2,
            "id": "declarative-is-good-vendor-lock-in-is-bad",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#declarative-is-good-vendor-lock-in-is-bad",
            "title": "Declarative is good, vendor lock-in is bad",
            "children": []
          },
          {
            "level": 2,
            "id": "more-advanced-rollout",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#more-advanced-rollout",
            "title": "More advanced rollout",
            "children": []
          },
          {
            "level": 2,
            "id": "relationship-between-code-and-deployed-state",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#relationship-between-code-and-deployed-state",
            "title": "Relationship between code and deployed state",
            "children": []
          },
          {
            "level": 2,
            "id": "argocd",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#argocd",
            "title": "ArgoCD",
            "children": []
          },
          {
            "level": 2,
            "id": "infra-as-code",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#infra-as-code",
            "title": "Infra-as-code",
            "children": []
          },
          {
            "level": 2,
            "id": "where-the-dev-meets-the-ops",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#where-the-dev-meets-the-ops",
            "title": "Where the dev meets the ops",
            "children": []
          },
          {
            "level": 2,
            "id": "what-we-do",
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#what-we-do",
            "title": "What we do",
            "children": []
          }
        ],
        "word_count": 2618,
        "reading_time": 14,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/",
            "title": "DevOps for (Skeptical) Developers"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/",
            "title": "DevOps: Unifying Dev, Ops, and QA"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
            "title": "An Istio/mutual TLS debugging story"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
            "title": "Deploying Rust with Windows Containers on Kubernetes"
          }
        ]
      },
      {
        "relative_path": "blog/rust-at-fpco-2020.md",
        "colocated_path": null,
        "content": "<p>At FP Complete, we have long spoken about the three pillars of a software development language: productivity, robustness, and performance. Often times, these three pillars are in conflict with each other, or at least appear to be. Getting to market quickly (productivity) often involves skimping on quality assurance (robustness), or writing inefficient code (performance). Or you can write simple code which is easy to test and validate (productivity and robustness), but end up with a slow algorithm (performance). Optimizing the code takes time and may introduce new bugs.</p>\n<p>For the entire history of our company, our contention has been that while some level of trade-off here is inevitable, we can leverage better tools, languages, and methodologies to improve our standing on all of these pillars. We initially focused on Haskell, a functional programming language that uses a strong type system and offers decent performance. We still love and continue to use Haskell. However, realizing that code was only half the battle, we then began adopting DevOps methodologies and tools.</p>\n<p>We've watched with great interest as the Rust programming language has developed, matured, and been adopted in industry. Virtually all major technology companies are now putting significant effort behind Rust. Most recently, Microsoft has been <a href=\"https://youtu.be/NQBVUjdkLAA\">quite publicly embracing Rust</a>.</p>\n<p>In this post, I wanted to share some thoughts on why we're thrilled to see Rust's adoption in industry, what we're using Rust for at FP Complete, and give some advice to interested companies in how they can begin adopting this language.</p>\n<h2 id=\"why-rust\">Why Rust?</h2>\n<p>We're big believers in using the computer itself to help us write better code. Some of this can be done with methodologies like test-driven development (TDD). 
But there are two weak links in the chain of techniques like TDD:</p>\n<ul>\n<li>It requires active effort to think through what needs to be tested</li>\n<li>It's possible to ignore these test failures and ship broken code </li>\n</ul>\n<p>The latter might sound contrived, but we've seen it happen in industry. The limitations of testing are well known, and we've <a href=\"https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-multifaceted-testing/\">previously blogged about recommended testing strategies</a>. And don't get me wrong: testing is an absolutely vital part of software development, and you should be doing more of it!</p>\n<p>But industry experience has shown us that many bugs slip through testing. Perhaps the most common and dangerous class of bug is memory safety issues. These include buffer overruns, use-after-free and double-free. What is especially worrying about these classes of bugs is that, typically, the best case scenario is your program crashing. Worst case scenario includes major security and privacy breaches.</p>\n<p>The industry standard approach has been to bypass these bugs by using managed languages. Managed languages bypass explicit memory management and instead rely on garbage collection. This introduces some downsides, latency being the biggest one. Typically, garbage collected languages are more memory hungry as well. This is the typical efficiency-vs-correctness trade-off mentioned above. We've been quite happy to make that trade-off ourselves, using languages like Haskell and accepting some level of performance hit.</p>\n<p>Rust took a different approach, one we admire deeply. By introducing concepts around ownership and borrowing, Rust seeks to drastically reduce the presence of memory safety errors, without introducing the overhead of garbage collection. This fits completely with FP Complete's mindset of using better tools when possible.</p>\n<p>The downside to this is complexity. 
Understanding ownership can be a challenge. But see below for information on how to get started with Rust. This is an area where FP Complete as a company, and I personally, have taken a lot of interest.</p>\n<p>Going beyond memory safety issues, however, is the rest of the Rust language design. As a relatively new language, Rust has the opportunity to learn from many other languages on the market already. And in our opinion, it has selected some of the best features available from other languages, especially our beloved Haskell. Some of these features include:</p>\n<ul>\n<li>Strong typing</li>\n<li>Sum types (aka enums) and pattern matching</li>\n<li>Explicit error handling, but with a beautiful syntax</li>\n<li><a href=\"https://tech.fpcomplete.com/rust/pid1/\">Async syntax</a></li>\n<li>Functional style via closures and <a href=\"https://tech.fpcomplete.com/blog/2017/07/iterators-streams-rust-haskell/\"><code>Iterator</code> pipelines</a></li>\n</ul>\n<p>In other words: Rust has fully embraced the concepts of using better approaches to solve problems, and to steal great ideas that have been tried and tested. We believe Rust has the potential to drastically improve software quality in the world, and lead to more maintainable solutions. We think Rust can be instrumental in <a href=\"https://tech.fpcomplete.com/blog/2012/12/solving_the_software_crisis/\">solving the global software crisis</a>.</p>\n<h2 id=\"rust-at-fp-complete\">Rust at FP Complete</h2>\n<p>We've taken a three-pronged approach to Rust at FP Complete until now. This has included:</p>\n<ul>\n<li>Producing educational material for both internal and external audiences</li>\n<li>Using Rust for internal tooling</li>\n<li>Writing product code with Rust</li>\n</ul>\n<p>The primary educational offering we've created is our Rust Crash Course, which we'll provide at the end of this post. 
This course has been honed to address the most common pitfalls we've seen developers hit when onboarding with Rust.</p>\n<p>Also, as a personal project, I decided to see if Rust could be taught as a first programming language, and <a href=\"https://www.beginrust.com/\">I think it can</a>.</p>\n<p>For internal tooling and product code, we always have the debate: should we use Rust or Haskell? We've been giving our engineers more freedom to make that decision themselves in the past year. Personally, I'm still more comfortable with Haskell, which isn't really surprising: I've been using Haskell professionally longer than Rust has existed. But the progress we're seeing in Rust—both in the library ecosystem and the language itself—means that Rust becomes more competitive on an almost monthly basis.</p>\n<p>At this point, we have some specific situations where Rust is a clear winner:</p>\n<ul>\n<li>When performance is critical, we prefer Rust. Haskell is usually fast enough, but microoptimizing Haskell code ends up taking more time than writing it in Rust.</li>\n<li>For client-side code (e.g., command line tooling), we've been leaning towards Rust. Overall, it has better cross-OS support than Haskell.</li>\n<li>There are some domains that have much better library coverage in Rust than in Haskell, and we'll gravitate towards Rust there. (The same applies in the other direction too.)</li>\n<li>And as we're engineers who like playing with shiny tools, if someone wants to have extra fun, Rust is usually it. In most places in the world, Haskell would probably be considered the shiny toy. FP Complete is pretty exceptional there.</li>\n</ul>\n<p>We're beginning to expand to a fourth area of Rust at FP Complete: consulting services. The market for Rust has been steadily growing over the past few years. We believe at this point Rust is ready for much broader adoption, and we're eager to help companies adopt this wonderful language. 
If you're interested in learning more, please <a href=\"mailto:[email protected]\">contact our consulting team for more information</a>.</p>\n<h2 id=\"getting-started\">Getting started</h2>\n<p>How do you get started with a language like Rust? Fortunately, the tooling and documentation for Rust is top notch. We can strongly recommend checking out <a href=\"https://www.rust-lang.org/\">the Rust homepage</a> for guidance on installing Rust and getting started. The freely available <a href=\"https://doc.rust-lang.org/book/\">Rust book</a> is great too, covering many aspects of the language.</p>\n<p>That said, my recommendation is to check out our Rust Crash Course eBook (linked below). We've tried to focus this book on answering the most common questions about Rust first, and get you up and running quickly.</p>\n<p>If you're interested in getting your team started with Rust, you may also want to reach out to us for <a href=\"https://tech.fpcomplete.com/training/\">information on our training programs</a>.</p>\n<p>Want to read more about Rust? Check out the <a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust homepage</a>.</p>\n<p>Want to learn more about FP Complete offerings? 
Please <a href=\"https://tech.fpcomplete.com/contact-us/\">reach out to us any time</a>.</p>\n<p><a href=\"/rust/crash-course/\"><img class=\"w-100\" src=\"/images/cta/rust-crash-course.png\" /></a></p>\n<div class=\"d-none\">\n<!--HubSpot Call-to-Action Code --><span class=\"hs-cta-wrapper\" id=\"hs-cta-wrapper-0e10c2f9-a802-42ef-b462-81162079de11\"><span class=\"hs-cta-node hs-cta-0e10c2f9-a802-42ef-b462-81162079de11\" id=\"hs-cta-0e10c2f9-a802-42ef-b462-81162079de11\"><!--[if lte IE 8]><div id=\"hs-cta-ie-element\"></div><![endif]--><a href=\"https://cta-redirect.hubspot.com/cta/redirect/2814979/0e10c2f9-a802-42ef-b462-81162079de11\" ><img class=\"hs-cta-img\" id=\"hs-cta-img-0e10c2f9-a802-42ef-b462-81162079de11\" style=\"border-width:0px;width:1000px;max-width:100%;height:auto\" src=\"https://no-cache.hubspot.com/cta/default/2814979/0e10c2f9-a802-42ef-b462-81162079de11.png\"  alt=\"Rust-Crash-Course\"/></a></span><script charset=\"utf-8\" src=\"https://js.hscta.net/cta/current.js\"></script><script type=\"text/javascript\"> hbspt.cta.load(2814979, '0e10c2f9-a802-42ef-b462-81162079de11', {}); </script></span><!-- end HubSpot Call-to-Action Code -->\n</div>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-at-fpco-2020/",
        "slug": "rust-at-fpco-2020",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Rust at FP Complete, 2020 update",
        "description": "FP Complete has been using Rust for the past few years, and recently we have increased our focus on it. Read about why we think Rust is such an important piece of software going forward.",
        "updated": null,
        "date": "2020-06-29",
        "year": 2020,
        "month": 6,
        "day": 29,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/rust-at-fpco-2020/",
        "components": [
          "blog",
          "rust-at-fpco-2020"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "why-rust",
            "permalink": "https://tech.fpcomplete.com/blog/rust-at-fpco-2020/#why-rust",
            "title": "Why Rust?",
            "children": []
          },
          {
            "level": 2,
            "id": "rust-at-fp-complete",
            "permalink": "https://tech.fpcomplete.com/blog/rust-at-fpco-2020/#rust-at-fp-complete",
            "title": "Rust at FP Complete",
            "children": []
          },
          {
            "level": 2,
            "id": "getting-started",
            "permalink": "https://tech.fpcomplete.com/blog/rust-at-fpco-2020/#getting-started",
            "title": "Getting started",
            "children": []
          }
        ],
        "word_count": 1455,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
            "title": "Philosophies of Rust and Haskell"
          },
          {
            "permalink": "https://tech.fpcomplete.com/rust/",
            "title": "FP Complete Rust"
          }
        ]
      },
      {
        "relative_path": "blog/reducing-maintenance-costs-with-functional-programming.md",
        "colocated_path": null,
        "content": "<p>Most of the discussions we have in the\nsoftware industry today revolve around developer\nproductivity: How do we make it possible for\nfewer developers to produce more software in less\ntime? Reducing the upfront costs and delivering\nquickly is essentially the mantra of the startup\nworld: Move fast and break things.</p>\n<p>However, the vast majority of time in software\ndevelopment is not spent in the initial\ndevelopment phase. For any successful project, an\noverwhelming amount of time is spent on\nmaintaining that software. To keep your customers\nhappy, it’s vital to continue improving the\nsoftware, fixing bugs and enhancing performance.\nIf you are hamstrung on your ability to innovate,\nconstantly fighting bugs and delivering an\ninferior user experience, your competitors will\nbe able to outmaneuver you in the\nmarketplace.</p>\n<p>Functional programming is a significant\nparadigm shift in the software world over the\npast 10 years. Slowly but surely, it has moved\nfrom a niche feature of a few uncommonly used\nlanguages to a mainstay of even the most\nestablished languages. Unlike preceding\nparadigms, functional programming makes a focus\nof assisting in not just the productivity of\ndevelopers, but of long-term software\nmaintenance.</p>\n<h2 id=\"features-of-functional-programming\">Features of Functional Programming</h2>\n<p>Functional programming is a broad\nterm. Some languages describe themselves\nas functional, such as F#, Haskell and\nSwift. However, functional features are\nmaking their way into other languages.\nJavascript has multiple libraries\nimplementing functional paradigms. Rust,\na relative newcomer in the systems\nprogramming world, boasts many functional\nfeatures. C++ and Java have been adding\nlambdas and other functional features for\nyears. 
Many of the features below can be\nimplemented regardless of the language\nbeing used by your team.</p>\n<h2 id=\"immutable-data\">Immutable Data</h2>\n<p>The bane of many programs, especially\nconcurrent and network programs, is the\nfact that data changes in unexpected\nways. Functional programming advocates\nkeeping most of your data immutable. Once\ncreated, the data does not change. You\ncan share this data with other parts of\nyour program without fear of it being\nchanged or invalidated.</p>\n<p>Languages like Haskell and Rust make\nthis a cornerstone of their\nimplementation. C++ offers the ability to\nopt-in to immutability. Many Java coding\nguidelines recommend defaulting to\nimmutable data when possible.</p>\n<h2 id=\"declarative-programming\">Declarative Programming</h2>\n<p>Classic programming involves\ninstructing the computer which steps to\ntake to solve a problem. For example, to\nadd up the numbers in a list, an\nimperative programming approach might\nbe:</p>\n<ul>\n<li>Create a temporary variable to hold the sum</li>\n<li>Create a temporary variable to hold the current index</li>\n<li>Loop the index from 0 to the length of the list</li>\n<li>Add the value in the list at the index’s position to the sum</li>\n</ul>\n<p>This kind of imperative approach\nworks, but doesn’t scale particularly\nwell. As problems become more complex,\nthe imperative approach requires ever\nmore complicated solutions. It’s\ndifficult to separate logical components\ninto multiple separate loops without\nsacrificing performance. 
And in the era\nof multicore programming, creating a\nmultithreaded solution requires\nsignificant expertise with safe thread\nhandling.</p>\n<p>In functional programming, the\npreference is a declarative approach.\nSumming up a list is typically done\nas:</p>\n<ul>\n<li>Write a function to add two values together</li>\n<li>Fold over the list using the add function and 0 as an initial value</li>\n</ul>\n<p>This approach naturally translates\ninto a multicore solution. Instead of\neach loop needing to handle the\ncomplexities of thread management, a\nlibrary author can write a parallel fold\nonce. The caller can then replace their\nnon-parallel fold with a parallel fold\nand immediately gain the benefits of\nmulticore.</p>\n<p>By combining this approach with other\ndeclarative programming methods, like\nmapping, functional programming can\nexpress complex data pipeline operations\nas a composition of many individual,\nsimpler components. This forms the core\nof such well-known systems as Google’s\nMapReduce.</p>\n<h2 id=\"strong-typing\">Strong Typing</h2>\n<p>For years, the industry debate around\ntyped languages was usually between the\nC++ and Java families versus the Python\nand Ruby families. The former introduced\nsome sanity checks at compile time in\nexchange for lots of ceremony with\nexplicit type annotations. This improved\ncode maintenance somewhat, at the cost of\nsignificant developer productivity.\nPython and Ruby, by contrast, skipped the\ntype annotations entirely, leaving them\nas a runtime concern. This boosted\nproductivity, at the cost of\nmaintainability.</p>\n<p>The functional world went a different\nway: strong, expressive type systems with\ntype inference. Type inference avoided\nmuch of the boilerplate introduced by the\nC++-style of type systems, allowing\nproductivity on a par with Python and\nRuby. 
The strong type systems in\nfunctional languages allowed even more\nguarantees to be expressed in types,\nimproving maintainability beyond the\nlevels of C++ and Java.</p>\n<p>These days, even dynamically typed\nlanguages like Python are beginning to\nintroduce type systems due to the massive\ngains they are demonstrating. New\nlanguages like Rust are borrowing some of\nthe most popular type system features\nfrom functional languages like Haskell\nand OCaml: sum types, traits and\nmore.</p>\n<h2 id=\"introducing-functional-programming\">Introducing Functional Programming</h2>\n<p>It’s important to note that you do not\nneed to completely rewrite all of your\nsoftware in a functional programming\nlanguage to reap many of the benefits of\nfunctional programming. You can begin\nrolling out functional features in your\nexisting software today with improvements\nto your internal coding guidelines.\nFocusing on some of the features above,\nand many of the other inspirations from\nfunctional programming, is a great\nstart.</p>\n<p>One option is to train your team on\nfunctional programming techniques with an\nintensive training program in a\nfunctional programming language. Once\nyour team knows the concepts, it’s much\neasier to incorporate them in your Java,\nJavaScript, C# and other codebases.</p>\n<p>With the rise of microservices\narchitectures, a hybrid deployment model\nmay make a lot of sense. Oftentimes,\noffloading a particularly critical piece\nof business logic to a separate,\nwell-tested functional programming\ncodebase, connected via network APIs, can\nreduce the burden on the rest of your\nteam and increase the stability of your\nsoftware.</p>\n<p><a href=\"https://www.forbes.com/sites/forbestechcouncil/2020/04/10/reducing-maintenance-costs-with-functional-programming/\"><em>Original article on Forbes</em></a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/reducing-maintenance-costs-with-functional-programming/",
        "slug": "reducing-maintenance-costs-with-functional-programming",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Reducing Maintenance Costs With Functional Programming",
        "description": "Most of the discussions we have in the software industry today revolve around developer productivity",
        "updated": null,
        "date": "2020-04-10",
        "year": 2020,
        "month": 4,
        "day": 10,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wesley Crook",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/reducing-maintenance-costs-with-functional-programming/",
        "components": [
          "blog",
          "reducing-maintenance-costs-with-functional-programming"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "features-of-functional-programming",
            "permalink": "https://tech.fpcomplete.com/blog/reducing-maintenance-costs-with-functional-programming/#features-of-functional-programming",
            "title": "Features of Functional Programming",
            "children": []
          },
          {
            "level": 2,
            "id": "immutable-data",
            "permalink": "https://tech.fpcomplete.com/blog/reducing-maintenance-costs-with-functional-programming/#immutable-data",
            "title": "Immutable Data",
            "children": []
          },
          {
            "level": 2,
            "id": "declarative-programming",
            "permalink": "https://tech.fpcomplete.com/blog/reducing-maintenance-costs-with-functional-programming/#declarative-programming",
            "title": "Declarative Programming",
            "children": []
          },
          {
            "level": 2,
            "id": "strong-typing",
            "permalink": "https://tech.fpcomplete.com/blog/reducing-maintenance-costs-with-functional-programming/#strong-typing",
            "title": "Strong Typing",
            "children": []
          },
          {
            "level": 2,
            "id": "introducing-functional-programming",
            "permalink": "https://tech.fpcomplete.com/blog/reducing-maintenance-costs-with-functional-programming/#introducing-functional-programming",
            "title": "Introducing Functional Programming",
            "children": []
          }
        ],
        "word_count": 992,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/base-on-stackage.md",
        "colocated_path": null,
        "content": "<h2 id=\"preface-for-unaware\">Preface for unaware</h2>\n<p>When you install a particular version of GHC on your machine it comes with a collection of\n&quot;boot&quot; libraries. What does it mean to be a &quot;boot&quot; library? Quite simply, a library must\nbe used for implementation of GHC and other core components. Two such notable libraries are\n<a href=\"https://www.stackage.org/package/base\"><code>base</code></a> and\n<a href=\"https://www.stackage.org/package/ghc\"><code>ghc</code></a>. All the matching package names and\ntheir versions for a particular GHC release can be found in <a href=\"https://gitlab.haskell.org/ghc/ghc/wikis/commentary/libraries/version-history\">this\ntable</a></p>\n<p>The fact that a library comes wired-in with GHC means that there is never a need to\ndownload sources for the particular version from Hackage or elsewhere. In fact, there is\nreally no need to upload the sources on Hackage even for the purpose of building the\nHaddock for each individual package, since those are conveniently hosted on\n<a href=\"https://downloads.haskell.org/~ghc/latest/docs/html/libraries/index.html\">haskell.org</a></p>\n<p>That being said, Hackage has always been a central place for releasing a Haskell package\nand historically Hackage trustees would upload the exact version of almost every &quot;boot&quot;\npackage on Hackage. That is why, for example, we have\n<a href=\"https://hackage.haskell.org/package/bytestring-0.10.8.2\"><code>bytestring-0.10.8.2</code></a> available\non Hackage, despite that it comes with versions of GHC from <code>ghc-8.2.1</code> to <code>ghc-8.6.5</code> inclusive.</p>\n<p>Such an upload makes total sense. Any Haskeller using a core package as a dependency for\ntheir own package in a cabal file has a central place to look for available versions and\ndocumentation for those versions. 
In fact, some people have become so accustomed to this\nprocess that it has been discussed on\n<a href=\"https://mail.haskell.org/pipermail/haskell-cafe/2019-October/131618.html\">Haskell-Cafe</a>\nand a few other places when such a package was never uploaded:</p>\n<blockquote>\n<p>It's a crisis that the standard library is unavailable on Hackage...</p>\n</blockquote>\n<h2 id=\"the-problem\">The problem</h2>\n<p>A bit over half a year ago <code>ghc-8.8.1</code> was released, with the current latest being\n<code>ghc-8.8.3</code>. If you carefully inspect the <a href=\"https://gitlab.haskell.org/ghc/ghc/wikis/commentary/libraries/version-history\">table of core\npackages</a>\nand try to match them to the available versions on Hackage, you will quickly\nnotice that a few of them are missing. I personally don't know the exact reasoning\nbehind this, but from what I've heard it has something to do with the fact that\n<code>ghc-8.8.1</code> now depends on <code>Cabal-3.0</code>.</p>\n<p>The problem for us is that it also affects Stackage's web interface. Let's see how and why.</p>\n<h2 id=\"the-how\">The &quot;how&quot;</h2>\n<p>The &quot;how&quot; is very simple. Until recently, if a package was missing from Hackage, it would\nnot have been listed on Stackage either. This means that if you tried to follow a\ndependency of any package on <code>base-4.13.0.0</code> in nightly snapshots starting in September of\nlast year, you would not find it. As I noted before, not only was <code>base</code> missing, but a few\nothers as well.</p>\n<p>This problem also manifested itself as a funny-looking bug on Stackage. For every package\nin a list of dependencies, the count was always off by at least 1 when compared with the\nactual links in the list\n(e.g. <a href=\"https://www.stackage.org/nightly-2020-01-17/package/primitive-0.7.0.0#dependencies\">primitive</a>). This\nhad me puzzled at first. 
Only later did I realize that <code>base</code> was missing, and since\nalmost every package depends on it, it was counted, but not listed, causing the\nmismatch.</p>\n<h2 id=\"the-why\">The &quot;why&quot;</h2>\n<p>Stackage was structured in such a way that it always used Hackage as the source of truth for\navailable packages, except for the core packages, since those would always come bundled\nwith GHC. For example, if you look at the specification of the latest <a href=\"https://github.com/commercialhaskell/stackage-snapshots/blob/master/lts/15/3.yaml\">LTS-15.3\nsnapshot</a>\nyou will not find any of the core packages listed there, for they are decided by the GHC version, which\nin turn is specified in the snapshot.</p>\n<p>There are a few stages, tools, and actual people involved in making a Stackage snapshot\nhappen. Here are some of the steps in the pipeline:</p>\n<ul>\n<li>\n<p>a <a href=\"https://github.com/commercialhaskell/stackage/blob/master/README.md\">curated list of\npackages</a> that involves\npackage maintainers and sometimes Stackage curators.</p>\n</li>\n<li>\n<p>a <a href=\"https://github.com/commercialhaskell/curator/\">curator tool</a> that is used to\nconstruct the actual snapshot, build packages, run test suites and generate Haddocks.</p>\n</li>\n<li>\n<p>a\n<a href=\"https://github.com/fpco/stackage-server/blob/master/app/stackage-server-cron.hs\">stackage-server-cron</a>\ntool that runs at some interval and updates the\n<a href=\"https://www.stackage.org/\">stackage.org</a> database to reflect all of the above work in the\nform of package relations and their respective documentation.</p>\n</li>\n</ul>\n<p>The last step is of most interest to us because\n<a href=\"https://www.stackage.org/\">stackage.org</a> is the place where things were missing. 
Let's\nlook at some pieces of information the tool needs in order for <code>stackage-server</code> to create\na page for a package:</p>\n<ul>\n<li>The package name, its version and <a href=\"https://www.fpcomplete.com/blog/2018/07/pantry-part-2-trees-keys\">Pantry\nkeys</a> (cryptographic\nkeys that uniquely identify the contents of a source distribution)</li>\n<li>Previously generated Haddock and Hoogle files for each package</li>\n<li>The cabal file, so we can extract useful information about the package, such as description,\nlicense, maintainers, module names, etc.</li>\n<li>Optionally, README and changelog files from the source distribution, which can be served on a\npackage page as well.</li>\n</ul>\n<p>Information from the latter two bullet points is only available in the source\ndistribution tarballs. Packages that are defined in the snapshot do not pose a problem\nfor us, because by definition their sources are available from Hackage or any of <a href=\"https://hackage.haskell.org/mirrors.json\">its\nmirrors</a>. Core packages, on the other hand, are\ndifferent in the sense that they are always available in a build environment, so\ninformation about them is present when we build a package:</p>\n<pre data-lang=\"shell\" class=\"language-shell \"><code class=\"language-shell\" data-lang=\"shell\">$ stack --resolver lts-15.0 exec -- ghc-pkg describe base\nname:                 base\nversion:              4.13.0.0\nvisibility:           public\n...\n</code></pre>\n<p>The problem is that the <code>stackage-server-cron</code> tool is just an executable running\nsomewhere in the cloud, and it doesn't have such an environment. Therefore, until recently, we\nhad no means of getting the cabal files for core packages except by checking on\nHackage. 
With more and more core packages missing from Hackage, especially such critical\nones as <code>base</code> and <code>bytestring</code>, we had to come up with a solution.</p>\n<h2 id=\"solution\">Solution</h2>\n<p>Solving this problem should be simple, because all we really need is the cabal files. Haddock\nfor the missing packages had already been generated and was always available; it was only the extra\nbit of metadata that was needed in order to generate the appropriate links and\nthe package home page.</p>\n<p>The first place to look for cabal files was the GHC git repository. The GHC bundle, though, is\nquite different from the packages we are normally used to:</p>\n<ul>\n<li>Libraries that GHC depends on do not come from Hackage, as we already know; instead they\nare pinned as git submodules.</li>\n<li>Most of the packages that are defined in the <a href=\"https://gitlab.haskell.org/ghc/ghc\">GHC\nrepository</a> do not have cabal files. Instead they have\ntemplates that are used for generating cabal files for a particular architecture during\nthe build process.</li>\n</ul>\n<p>This means that the repository is not a good source for grabbing cabal files. Building GHC\nfrom source is a time-consuming process and we don't want to be doing that for every\nrelease just to get the cabal files we need. A better alternative is to simply <a href=\"https://www.haskell.org/ghc/download.html\">download a\ndistribution package</a> for a common operating\nsystem and extract the missing cabal files from there. We used the Linux x86_64 distribution for Debian,\nbut the choice of OS shouldn't really matter, since we only need high-level\ninformation from those cabal files.</p>\n<p>That was it. 
The only thing we really needed to do in order to get the missing core packages onto\nStackage was to collect all the missing cabal files and make them available to the\n<code>stackage-server-cron</code> tool.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>Going back to the origins of Stackage, it turns out that quite a few such core\npackages were missing; the most notable one was <code>ghc</code> itself. Only a handful of\nofficially released versions were ever uploaded to Hackage.</p>\n<p>From now on we have a special repository,\n<a href=\"https://github.com/commercialhaskell/core-cabal-files\">commercialhaskell/core-cabal-files</a>,\nwhere we can place cabal files for missing core packages, which the <code>stackage-server-cron</code>\ntool will pick up automatically. As usual with public repositories,\nanyone from the community is encouraged to submit pull requests whenever they notice\nthat a core package is not being listed on Stackage for a newly created snapshot.</p>\n<p>For the past few weeks the very first such core package missing from Hackage,\n<a href=\"https://www.stackage.org/lts-15.3/package/base-4.13.0.0\"><code>base-4.13.0.0</code></a>, has been\nincluded on Stackage, with recent notable additions being <code>bytestring-0.10.9.0</code>,\n<code>ghc-8.8.x</code> and <code>Cabal-3.0.1.0</code>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/base-on-stackage/",
        "slug": "base-on-stackage",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Get base onto stackage.org",
        "description": "Solving the problem of missing core packages from Stackage's website.",
        "updated": null,
        "date": "2020-03-11T13:45:00Z",
        "year": 2020,
        "month": 3,
        "day": 11,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Alexey Kuleshevich",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/base-on-stackage/",
        "components": [
          "blog",
          "base-on-stackage"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "preface-for-unaware",
            "permalink": "https://tech.fpcomplete.com/blog/base-on-stackage/#preface-for-unaware",
            "title": "Preface for unaware",
            "children": []
          },
          {
            "level": 2,
            "id": "the-problem",
            "permalink": "https://tech.fpcomplete.com/blog/base-on-stackage/#the-problem",
            "title": "The problem",
            "children": []
          },
          {
            "level": 2,
            "id": "the-how",
            "permalink": "https://tech.fpcomplete.com/blog/base-on-stackage/#the-how",
            "title": "The \"how\"",
            "children": []
          },
          {
            "level": 2,
            "id": "the-why",
            "permalink": "https://tech.fpcomplete.com/blog/base-on-stackage/#the-why",
            "title": "The \"why\"",
            "children": []
          },
          {
            "level": 2,
            "id": "solution",
            "permalink": "https://tech.fpcomplete.com/blog/base-on-stackage/#solution",
            "title": "Solution",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/base-on-stackage/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 1479,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/storing-generated-cabal-files.md",
        "colocated_path": null,
        "content": "<p>tl;dr: I'm moving towards recommending that hpack-using projects store their generated cabal files in their repos, and modifying Stack and Pantry to more strongly recommend this practice. This is a reversal of previous recommendations. <a href=\"https://github.com/commercialhaskell/stack/issues/5210\">Vote and comment on this proposal</a>.</p>\n<h2 id=\"backstory\">Backstory</h2>\n<p>Stack 2.0 switched over to using the Pantry library to manage dependencies. Pantry does a number of things, but at its core it focuses heavily on reproducibility. The idea is that, with a fully qualified package specification, you should always get the same source code. As an example, <code>https://example.com/foo.tar.gz</code> would not be a fully qualified package specification, because the content in that tarball could silently change without being detected. Instead, with Pantry, you would specify something like:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">size: 9526\nurl: https:&#x2F;&#x2F;github.com&#x2F;snoyberg&#x2F;filelock&#x2F;archive&#x2F;97e83ecc133cd60a99df8e1fa5a3c2739ad007dc.tar.gz\ncabal-file:\n  size: 1571\n  sha256: d97c2ee2b4f0c72b35cbaf04ad37cda2e9e6a2eb1e162b5c6ab084acb94f4634\nname: filelock\nversion: 0.1.1.2\nsha256: 78332e0d964cb2f24fdbb6b07c2a6a84a029c4fe540a0435993c85ad58eab051\npantry-tree:\n  size: 584\n  sha256: 19914e8fb09ffe2116cebb8b9d19ab51452594940f1e3770e01357b874c65767\n</code></pre>\n<p>Of course, writing these out by hand is tedious and annoying, so Stack uses Pantry to generate these values for you and put them in a lock file.</p>\n<p>Separately: Stack has long supported the ability to include hpack's <code>package.yaml</code> files in your source code, and to automate the generation of a <code>.cabal</code> file. There are two quirks we need to pay attention to with hpack:</p>\n<ul>\n<li>The cabal files it generates change from one version to the next. 
Some of those changes may be semantically meaningful. At the very least, each new version will stamp a different hpack version in the comments of the cabal file.</li>\n<li>hpack generation is a highly I/O-focused activity, looking at all of the files in a package. Furthermore, as <a href=\"https://github.com/commercialhaskell/stack/issues/4906\">I was recently reminded</a> it can refer to files outside of the specific package you're trying to build but inside the same Git repository or tarball.</li>\n</ul>\n<p>Finally, Stack and Pantry make a stark distinction between two different kinds of packages. <em>Immutable</em> packages are things which we can assume never change. These would be one of the following:</p>\n<ul>\n<li>A package on Hackage, specified by a name, version number, and information on the Hackage revision</li>\n<li>A tarball or ZIP file given by a file path or URL. While these absolutely <em>can</em> change over time, Pantry makes an explicit recommendation that only immutable packages should be used. And the hashes and file size stored in the lock file provide protection against changes.</li>\n<li>A Git or Mercurial repository, specified by a commit.</li>\n</ul>\n<p>On the other hand, mutable packages are packages stored as files on the file system. These are the packages that you are working on in your local project. Reproducibility is far less important here. We allow Stack to regularly check the timestamps and hashes of all of these files and determine when things need to be rebuilt.</p>\n<h2 id=\"the-conflict\">The conflict</h2>\n<p>There's been a debate for a while around how to manage your packages with Stack and hpack. The question is simple: do you store the generated <code>cabal</code> files in the repo? There are solid arguments in both directions:</p>\n<ul>\n<li>You shouldn't store the file, because generated files should in general not be stored in repositories. 
This can lead to unnecessary diffs, and when people are using different hpack versions, &quot;commit battles&quot; of the file jumping back and forth between different generated content.</li>\n<li>You should store the file, since for maximum reproducibility we want to ensure that we have identical cabal files as input to the build. Also, for people using build tools without built in support for hpack, it's more convenient to have a cabal file present.</li>\n</ul>\n<p>I've had this discussion off and on over the years with many different people, and before Stack 2 had personally settled on the first approach: not storing the cabal files. Then I started working on Pantry.</p>\n<h2 id=\"early-pantry\">Early Pantry</h2>\n<p>Earlier in the development of Pantry, I made a decision to focus on reproducibility. I quickly ran into a problem with hpack: I needed to be able to tell the package name and version of a package easily, but the only code path I had for that was parsing the cabal file. In order to support hpack files for this, I would need to write the entire package contents to the filesystem, run hpack on the resulting directory, and then parse the generated file.</p>\n<p>(I probably could have whipped up something hacky around parsing the hpack YAML file directly, but that felt like a can of worms.)</p>\n<p>Performing these steps each time Stack or Pantry needed to know a package name/version would have been prohibitively expensive, so I dismissed the option. I also considered caching the generated cabal file, but since the generated file contents would change version by version, I didn't follow that path, since it would violate reproducibility.</p>\n<h2 id=\"current-pantry\">Current Pantry</h2>\n<p>An early beta tester of Stack 2.0 complained about this change. While hpack worked perfectly for mutable, local packages, it no longer worked for immutable packages. 
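Concretely, the failing setup was an hpack-based package without a committed cabal file, used as a Git extra-dep in <code>stack.yaml</code>, something like this (the repository URL and commit here are made up for illustration):</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">extra-deps:\n- git: https:&#x2F;&#x2F;github.com&#x2F;example&#x2F;some-package\n  commit: 1234567890abcdef1234567890abcdef12345678\n</code></pre>\n<p>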
If you had a Git repository with a package, that repo didn't include the generated cabal file, <em>and</em> you wanted to use that repo as an extra-dep, things would fail. This didn't fail with Stack 1, so this was viewed (correctly) as a regression in functionality.</p>\n<p>However, Stack 2 was aiming for caching and reproducibility goals that Stack 1 hadn't achieved. If anyone remembers, Stack 1 had a bad tendency to reclone Git repos far more often than you would think it should need to. Pantry's caching ultimately solved that problem, and did so by relying on reproducibility.</p>\n<p>My initial recommendation was to require changing all Git repos used as extra-deps to include the generated cabal files. However, after further discussion with beta testers, we ended up changing Pantry instead. We added the ability to cache the generated cabal files (keyed on the version of hpack used). I was uneasy about this, but ultimately it seemed to work fine, and let us keep the functionality we wanted. So we shipped this in Pantry, in Stack 2, and continued recommending people not include generated cabal files.</p>\n<h2 id=\"the-problems-arise\">The problems arise</h2>\n<p>Unfortunately, things were far from rosy. There are now at least three problems I'm aware of with this situation:</p>\n<ul>\n<li>Continuing from before: people using build tools without hpack support are still out of luck with these repos.</li>\n<li>As raised in <a href=\"https://github.com/commercialhaskell/stack/issues/4906\">issue #4906</a>, due to how Pantry handles subdirectories in megarepos, cabal file generation will fail for extra-deps in some cases.</li>\n<li>Lock files have regularly become corrupted by changing generated cabal files. If you use a new version of Stack that uses a different version of hpack, it will generate a different cabal file, which will change the hashes associated with a package in the lock file. 
This can cause a lot of frustration between teams, and undermines the whole purpose of lock files in the first place.</li>\n</ul>\n<p>There are probably solutions to the second and third problem. But there's definitely no solution to the first short of including the cabal files again.</p>\n<h2 id=\"changes\">Changes</h2>\n<p>Based on all of this, I'm recommending that we make the following changes:</p>\n<ul>\n<li>Starting immediately: update docs and Stack templates to recommend checking in generated cabal files. This would involve a few minor doc improvements, and removing <code>*.cabal</code> from a few <code>.gitignore</code> files.</li>\n<li>For the next releases of Pantry and Stack, add a warning any time an immutable package does not include a <code>.cabal</code> file. Reference potentially this blog post, and warn that lock files may be broken by doing this.</li>\n<li>Personally: I'll start including the generated cabal files in my repos. Since I have a bunch of them, I'll appreciate people sending PRs to modify my .gitignore files and adding the generated files, as you discover them.</li>\n</ul>\n<p>For those who are truly set against including generated cabal files, all is not lost. For those cases, my recommendation would be pretty simple: keep the generated file out of your repository, and then generate a source tarball with <code>stack sdist</code> to be used as an extra-dep. This will essentially mirror the <code>stack upload</code> step you would follow to upload a package to Hackage.</p>\n<h2 id=\"next-steps\">Next steps</h2>\n<p>The changes necessary to make this a reality are small, and I'm happy to make the changes myself. I'm opening up a short discussion period for this topic, probably around a week, depending on how the discussion goes. If you have an opinion, please jump over to <a href=\"https://github.com/commercialhaskell/stack/issues/5210\">issue #5210</a> and either leave an emoji reaction or a comment.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/storing-generated-cabal-files/",
        "slug": "storing-generated-cabal-files",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Storing generated cabal files",
        "description": "A discussion of some trade-offs in Stack and Pantry's design and user recommendations",
        "updated": null,
        "date": "2020-03-04T08:26:00Z",
        "year": 2020,
        "month": 3,
        "day": 4,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/storing-generated-cabal-files/",
        "components": [
          "blog",
          "storing-generated-cabal-files"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "backstory",
            "permalink": "https://tech.fpcomplete.com/blog/storing-generated-cabal-files/#backstory",
            "title": "Backstory",
            "children": []
          },
          {
            "level": 2,
            "id": "the-conflict",
            "permalink": "https://tech.fpcomplete.com/blog/storing-generated-cabal-files/#the-conflict",
            "title": "The conflict",
            "children": []
          },
          {
            "level": 2,
            "id": "early-pantry",
            "permalink": "https://tech.fpcomplete.com/blog/storing-generated-cabal-files/#early-pantry",
            "title": "Early Pantry",
            "children": []
          },
          {
            "level": 2,
            "id": "current-pantry",
            "permalink": "https://tech.fpcomplete.com/blog/storing-generated-cabal-files/#current-pantry",
            "title": "Current Pantry",
            "children": []
          },
          {
            "level": 2,
            "id": "the-problems-arise",
            "permalink": "https://tech.fpcomplete.com/blog/storing-generated-cabal-files/#the-problems-arise",
            "title": "The problems arise",
            "children": []
          },
          {
            "level": 2,
            "id": "changes",
            "permalink": "https://tech.fpcomplete.com/blog/storing-generated-cabal-files/#changes",
            "title": "Changes",
            "children": []
          },
          {
            "level": 2,
            "id": "next-steps",
            "permalink": "https://tech.fpcomplete.com/blog/storing-generated-cabal-files/#next-steps",
            "title": "Next steps",
            "children": []
          }
        ],
        "word_count": 1425,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/safe-decimal-right-on-the-money.md",
        "colocated_path": null,
        "content": "<p>Fixed point decimal numbers are used for representing all kinds of data: percentages, temperatures, distances, mass, and many others. I would like to share an approach for safely and efficiently representing currency data in Haskell with <code>safe-decimal</code>.</p>\n<h2 id=\"problems-we-want-to-solve\">Problems we want to solve</h2>\n<h3 id=\"floating-point\">Floating point</h3>\n<p>I wonder how much money gets misplaced because programmers choose a floating point type for representing money. I will not attempt to convince you that using <code>Double</code> or <code>Float</code> for monetary values is unacceptable; it is a known fact. Values like <code>NaN</code>, <code>+/-Infinity</code> and <code>+/-0</code> have no meaning in handling money. In addition, the inability to represent most decimal values exactly should be enough reason to avoid floating point.</p>\n<h3 id=\"fixed-point-decimal\">Fixed point decimal</h3>\n<p>Floating point types make sense when numerical approximation is acceptable and you care primarily about performance rather than correctness. This is most common in numerical analysis, signal processing, and similar areas. In many other circumstances a type capable of representing decimal numbers exactly should be used instead. Unlike floating point, with a <code>Decimal</code> type we manually restrict how many digits after the decimal point we can have. This is called <a href=\"https://en.wikipedia.org/wiki/Fixed-point_arithmetic\">fixed-point number representation</a>. We use fixed-point numbers on a daily basis when paying in the store with cash or card, tracking distance with an odometer, and reading values off of a digital hydrometer or thermometer.</p>\n<p>We can represent fixed-point decimal numbers in Haskell by using an integral type\nfor the actual value, which is called the precision, and a scale parameter, which is used\nfor keeping track of how far from the right the decimal point is. 
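As a minimal sketch of that idea (a hypothetical helper for illustration, not part of <code>safe-decimal</code>), the stored integer and the scale recover the represented value like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">-- with scale s, a stored integer p denotes p &#x2F; 10^s\nfixedToRational :: Integer -&gt; Integer -&gt; Rational\nfixedToRational s p = fromInteger p &#x2F; 10 ^ s\n\n-- fixedToRational 4 12345 corresponds to 1.2345\n</code></pre>\n<p>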
In <code>safe-decimal</code> we\ndefine a <code>Decimal</code> type that allows us to choose a precision (<code>p</code>) and supply our scale\nparameter (<code>s</code>) as a type-level natural number:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">newtype Decimal r (s :: Nat) p = Decimal p\n  deriving (Ord, Eq, NFData, Functor, Generic)\n</code></pre>\n<p>Unlike with floating point numbers, we cannot move our decimal point without changing the scaling parameter and sometimes the precision as well. This means that when we use operations like multiplication or division we might have to do some rounding. The rounding strategy is selected at the type level with the <code>r</code> type variable. At the time of writing the most common rounding strategies have been implemented: <code>RoundHalfEven</code>, <code>RoundHalfUp</code>, <code>RoundHalfDown</code>, <code>RoundDown</code> and <code>RoundToZero</code>. There is a plan to add more in the future.</p>\n<h3 id=\"precision\">Precision</h3>\n<p>It is common to use a type like <code>Integer</code> for decimal representation, for\nstraightforward reasons:</p>\n<ul>\n<li><code>Integer</code> is easy to use</li>\n<li><code>Integer</code> can represent any number in the universe, if you have enough memory</li>\n</ul>\n<p>Let's look at an example, which starts with enabling an extension in Haskell. We need to\nturn on <code>DataKinds</code> so that we can use type-level natural numbers.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; :set -XDataKinds\n&gt;&gt;&gt; x = Decimal 12345 :: Decimal RoundHalfUp 4 Integer\n&gt;&gt;&gt; x\n1.2345\n&gt;&gt;&gt; x * 5\n6.1725\n&gt;&gt;&gt; roundDecimal (x * 5) :: Decimal RoundHalfUp 3 Integer\n6.173\n</code></pre>\n<p>The concrete <code>Decimal</code> type backed by <code>Integer</code> has a <code>Num</code> instance. 
That is why we\nwere able to use the literal <code>5</code> and GHC converted it to a <code>Decimal</code> for us. This is how the same numbers multiplied together look as <code>Double</code>:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; 1.2345 * 5 :: Double\n6.172499999999999\n</code></pre>\n<h3 id=\"storage-and-performance\">Storage and Performance</h3>\n<p><code>Integer</code> is nice, but in some applications <code>Integer</code> isn't an acceptable representation of our data. We might need to store decimal values in a database, transmit them over the network, or improve performance by storing numbers in an unboxed instead of a boxed array. It is faster to store a 64-bit integer value in a database than to convert a number to a sequence of bytes in a blob, as is necessary with <code>Integer</code>. Transmission over a network is another limitation that comes to mind: a 508-byte limit on a UDP packet can quickly become a problem for <code>Integer</code>-based values.</p>\n<p>The best way to solve this is to use fixed-width integer types such as <code>Int64</code>, <code>Int32</code>, <code>Word64</code>, etc. If more than 64 bits of precision is desired, there are packages that provide 128-bit, 256-bit, and other variants of signed/unsigned integers. All of them can be used\nwith <code>safe-decimal</code>, e.g.:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; import Data.Int (Int8, Int64)\n&gt;&gt;&gt; Decimal 12345 :: Decimal RoundHalfUp 6 Int64\n0.012345\n&gt;&gt;&gt; Decimal 123 :: Decimal RoundHalfUp 6 Int8\n0.000123\n</code></pre>\n<h3 id=\"bounds\">Bounds</h3>\n<p>Even discarding the desire for better performance and ignoring the memory constraints imposed on us, there are often types that have domain-specific bounds anyway. 
The most common example is when people use signed types like <code>Int</code> to represent values that have no sensible negative range; unsigned types like <code>Word</code> should be used instead for values that can never be negative.</p>\n<p>Some values that can be represented by a decimal number have a lower and upper bound that we can estimate. Percentages go from 0% to 100%, the total circulation of US dollars is about 14 trillion, and the surface temperature of a star is somewhere in the range of 225-40000K. If we use our domain-specific knowledge we can come up with some safe bounds, instead of blindly assuming that we need infinitely large values.</p>\n<p>Beware, though, that using integral types with bounds comes with a real danger: integer <a href=\"https://cwe.mitre.org/data/definitions/190.html\">overflow</a> and <a href=\"https://cwe.mitre.org/data/definitions/191.html\">underflow</a>. These are common causes of bugs in software that lead to a whole variety of exploits. This is the area where the protection in <code>safe-decimal</code> really shines, and here is an example of how it protects you:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; 123 + 4 :: Int8\n127\n&gt;&gt;&gt; 123 + 5 :: Int8\n-128\n&gt;&gt;&gt; x = Decimal 123 :: Decimal RoundHalfUp 6 Int8\n&gt;&gt;&gt; x\n0.000123\n&gt;&gt;&gt; plusDecimalBounded x (Decimal 4) :: Maybe (Decimal RoundHalfUp 6 Int8)\nJust 0.000127\n&gt;&gt;&gt; plusDecimalBounded x (Decimal 5) :: Maybe (Decimal RoundHalfUp 6 Int8)\nNothing\n</code></pre>\n<h3 id=\"runtime-exceptions\">Runtime exceptions</h3>\n<p>We know that division by zero will result in a <code>DivideByZero</code> exception:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; 1 `div` 0 :: Int\n*** Exception: divide by zero\n</code></pre>\n<p>Less well known is that while some integral operations result in 
silent overflows, others will cause runtime exceptions:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; -1 * minBound :: Int\n-9223372036854775808\n&gt;&gt;&gt; 1 `div` minBound :: Int\n-1\n&gt;&gt;&gt; minBound `div` (-1) :: Int\n*** Exception: arithmetic overflow\n</code></pre>\n<p>Floating point values also have a sad story for division by zero. You'd be surprised how often you can stumble upon those values online:</p>\n<img src=\"/images/blog/safe-decimal.jpeg\" style=\"max-width:95%\">\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; 0 &#x2F; 0 :: Double\nNaN\n&gt;&gt;&gt; 1 &#x2F; 0 :: Double\nInfinity\n&gt;&gt;&gt; -1 &#x2F; 0 :: Double\n-Infinity\n</code></pre>\n<p>Long story short, we want to be able to prevent all these issues from within pure code,\nwhich is exactly what <code>safe-decimal</code> will do for you:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; -1 * pure minBound :: Arith (Decimal RoundHalfUp 2 Int)\nArithError arithmetic overflow\n&gt;&gt;&gt; pure minBound &#x2F; (-1) :: Arith (Decimal RoundHalfUp 2 Int)\nArithError arithmetic overflow\n&gt;&gt;&gt; 1 &#x2F; 0 :: Arith (Decimal RoundHalfUp 2 Int)\nArithError divide by zero\n</code></pre>\n<p><code>Arith</code> is a monad defined in <code>safe-decimal</code> and is used for working with arithmetic\noperations that can fail for a variety of reasons. 
It is isomorphic to <code>Either SomeException</code>, which means there is a straightforward conversion from the <code>Arith</code> monad to\nothers that have a <code>MonadThrow</code> instance, with <code>arithM</code> and a few other helper functions:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; arithM (1 &#x2F; 0 :: Arith (Decimal RoundHalfUp 2 Int))\n*** Exception: divide by zero\n&gt;&gt;&gt; arithMaybe (1 &#x2F; 0 :: Arith (Decimal RoundHalfUp 2 Int))\nNothing\n</code></pre>\n<h2 id=\"decimal-for-crypto\">Decimal for crypto</h2>\n<p>At the beginning of the post I mentioned that we would implement a currency. Everyone seems to be implementing cryptocurrencies nowadays, so why don't we do the same?</p>\n<p>The most popular cryptocurrency at the time of writing is Bitcoin, so we'll use it for this\nexample. A few assumptions we are going to make before we start:</p>\n<ul>\n<li>The maximum amount is 21M BTC</li>\n<li>No negative amounts are allowed</li>\n<li>Precision is up to 8 decimal places</li>\n<li>The smallest expressible value is 0.00000001 BTC, which is one Satoshi. It is named after the pseudonymous Satoshi Nakamoto who published the seminal Bitcoin paper.</li>\n</ul>\n<h3 id=\"definition\">Definition</h3>\n<p>Here we'll demonstrate how we can represent Bitcoin with <code>safe-decimal</code>, and in case you\nwould like to follow along, here is <a href=\"https://gist.github.com/lehins/98d835b51c15270c2a600135b64d474d\">the\ngist</a> with all of the\ncode presented in this blog post. First, we declare the raw amount type <code>Satoshi</code> that will be\nused, so we can specify its bounds. 
Following that is the <code>Bitcoin</code> wrapper around the\n<code>Decimal</code> that specifies all we need to know in order to operate on this currency:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">{-# LANGUAGE DataKinds #-}\n{-# LANGUAGE NumericUnderscores #-}\n{-# LANGUAGE GeneralizedNewtypeDeriving #-}\n\nmodule Bitcoin (Bitcoin) where\n\nimport Data.Word\nimport Numeric.Decimal\nimport Data.Coerce\n\nnewtype Satoshi = Satoshi Word64 deriving (Show, Eq, Ord, Enum, Num, Real, Integral)\n\ninstance Bounded Satoshi where\n  minBound = Satoshi 0\n  maxBound = Satoshi 21_000_000_00000000\n\ndata NoRounding\n\ntype BitcoinDecimal = Decimal NoRounding 8 Satoshi\n\nnewtype Bitcoin = Bitcoin BitcoinDecimal deriving (Eq, Ord, Bounded)\n\ninstance Show Bitcoin where\n  show (Bitcoin b) = show b\n</code></pre>\n<p>The important parts of these definitions are:</p>\n<ul>\n<li>We are using a newtype wrapper around <code>Word64</code> with custom bounds, so that the library\ncan protect us from creating an invalid value. Using <code>Int64</code> would not have made a\ndifference in this case, but a type with fewer available bits would not be\nenough to hold large values.</li>\n<li>We define no rounding strategy to make sure that at no point rounding could cause money\nto appear or disappear.</li>\n<li>We do not export the constructor for the <code>Bitcoin</code> type to ensure that invalid values cannot\nbe constructed manually. 
Smart constructors will follow below, which can be exported if\nneeded.</li>\n</ul>\n<h3 id=\"construction-and-arithmetic\">Construction and arithmetic</h3>\n<p>Helper functions that perform zero-cost coercions from <code>Data.Coerce</code> will be used to go between\ntypes without making us repeat their signatures.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">toBitcoin :: BitcoinDecimal -&gt; Bitcoin\ntoBitcoin = coerce\n\nfromBitcoin :: Bitcoin -&gt; BitcoinDecimal\nfromBitcoin = coerce\n\nmkBitcoin :: MonadThrow m =&gt; Rational -&gt; m Bitcoin\nmkBitcoin r = Bitcoin &lt;$&gt; fromRationalDecimalBoundedWithoutLoss r\n\nplusBitcoins :: MonadThrow m =&gt; Bitcoin -&gt; Bitcoin -&gt; m Bitcoin\nplusBitcoins b1 b2 = toBitcoin &lt;$&gt; (fromBitcoin b1 `plusDecimalBounded` fromBitcoin b2)\n\nminusBitcoins :: MonadThrow m =&gt; Bitcoin -&gt; Bitcoin -&gt; m Bitcoin\nminusBitcoins b1 b2 = toBitcoin &lt;$&gt; (fromBitcoin b1 `minusDecimalBounded` fromBitcoin b2)\n</code></pre>\n<p><code>mkBitcoin</code> gives us a way to construct new values, while giving us the freedom to choose\nthe monad in which we want to fail by restricting to <code>MonadThrow</code>. For simplicity we'll\nstick to <code>IO</code>, but it could just as well be <code>Maybe</code>, <code>Either</code>, <code>Arith</code>, and many others.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; mkBitcoin 1.23\n1.23000000\n&gt;&gt;&gt; mkBitcoin (-1.23)\n*** Exception: arithmetic underflow\n</code></pre>\n<p>The examples below make it obvious that we are guarded from constructing invalid values from\n<code>Rational</code>:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; :set -XNumericUnderscores\n&gt;&gt;&gt; mkBitcoin 21_000_000.00000000\n21000000.00000000\n&gt;&gt;&gt; mkBitcoin 
21_000_000.00000001\n*** Exception: arithmetic overflow\n&gt;&gt;&gt; mkBitcoin 0.123456789\n*** Exception: PrecisionLoss (123456789 % 1000000000) to 8 decimal spaces\n</code></pre>\n<p>The same logic applies when operating on <code>Bitcoin</code> values. Nothing gets past: any operation that\ncould produce an invalid value will result in a failure.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; balance &lt;- mkBitcoin 10.05\n&gt;&gt;&gt; receiveAmount &lt;- mkBitcoin 2.345\n&gt;&gt;&gt; plusBitcoins balance receiveAmount\n12.39500000\n&gt;&gt;&gt; maliciousReceiveBitcoin &lt;- mkBitcoin 20999990.0\n&gt;&gt;&gt; plusBitcoins balance maliciousReceiveBitcoin\n*** Exception: arithmetic overflow\n&gt;&gt;&gt; arithEither $ plusBitcoins balance maliciousReceiveBitcoin\nLeft arithmetic overflow\n</code></pre>\n<p>Subtracting values is handled in the same fashion. Note that going below the lower bound\nwill be reported as underflow, which, contrary to popular belief, is a real term not only\nfor floating points, but for integers as well.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; balance &lt;- mkBitcoin 10.05\n&gt;&gt;&gt; sendAmount &lt;- mkBitcoin 1.01\n&gt;&gt;&gt; balance `minusBitcoins` sendAmount\n9.04000000\n&gt;&gt;&gt; sendAmountTooMuch &lt;- mkBitcoin 11.01\n&gt;&gt;&gt; balance `minusBitcoins` sendAmountTooMuch\n*** Exception: arithmetic underflow\n&gt;&gt;&gt; sendAmountMalicious &lt;- mkBitcoin 184467440737.09551616\n*** Exception: arithmetic overflow\n</code></pre>\n<p>I would like to emphasize that in the example above we did not have to check whether\n<code>balance</code> was sufficient for the amounts to be fully deducted from it. 
This means\nwe are automatically protected from incorrect transactions as well as <a href=\"https://consensys.github.io/smart-contract-best-practices/known_attacks/#integer-overflow-and-underflow\">very common attack\nvectors</a>,\nsome of which <a href=\"https://medium.com/@jeancvllr/the-value-overflow-incident-in-the-bitcoin-blockchain-15th-august-2010-a59a516e03db\">really did happen with\nBitcoin</a>\nand other cryptocurrencies.</p>\n<h3 id=\"num-and-fractional\">Num and Fractional</h3>\n<p>Using a special smart constructor is cool and all, but it would be cooler if we could use\nour regular math operators to work with <code>Bitcoin</code> values and utilize the GHC desugarer to\nautomatically convert numeric literal values too. For this we need instances of <code>Num</code>\nand <code>Fractional</code>. We can't create instances like these:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">instance Num Bitcoin where\n...\ninstance Fractional Bitcoin where\n...\n</code></pre>\n<p>because then we would have to use partial functions for failures, which is exactly what we\nwant to avoid. Moreover, some functions simply do not make sense for monetary\nvalues: multiplying or dividing two Bitcoin amounts is simply undefined. We'll\nhave to represent a special type of failure through an exception. This is a bit unfortunate, but we'll go with it anyway:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data UnsupportedOperation =\n  UnsupportedMultiplication | UnsupportedDivision\n  deriving Show\n\ninstance Exception UnsupportedOperation\n\ninstance Num (Arith Bitcoin) where\n  (+) = bindM2 plusBitcoins\n  (-) = bindM2 minusBitcoins\n  (*) = bindM2 (\\_ _ -&gt; throwM UnsupportedMultiplication)\n  abs = id\n  signum mb = fmap toBitcoin . signumDecimalBounded . 
fromBitcoin =&lt;&lt; mb\n  fromInteger i = toBitcoin &lt;$&gt; fromIntegerDecimalBoundedIntegral i\n\ninstance Fractional (Arith Bitcoin) where\n  (&#x2F;) = bindM2 (\\_ _ -&gt; throwM UnsupportedDivision)\n  fromRational = mkBitcoin\n</code></pre>\n<p>It is important to note that defining the instances above is strictly optional, and exporting helper functions that perform the same operations is preferable. Now that we have the instances, we can demonstrate their use:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; 7.8 + 10 - 0.4 :: Arith Bitcoin\nArith 17.40000000\n&gt;&gt;&gt; 7.8 - 10 + 0.4 :: Arith Bitcoin\nArithError arithmetic underflow\n&gt;&gt;&gt; 7.8 * 10 &#x2F; 0.4 :: Arith Bitcoin\nArithError UnsupportedMultiplication\n&gt;&gt;&gt; 7.8 &#x2F; 10 * 0.4 :: Arith Bitcoin\nArithError UnsupportedDivision\n&gt;&gt;&gt; 7.8 - 7.7 + 0.4 :: Arith Bitcoin\nArith 0.50000000\n&gt;&gt;&gt; 0.4 - 7.7 + 7.8 :: Arith Bitcoin\nArithError arithmetic underflow\n</code></pre>\n<p>The order of operations can play tricks on you, which probably serves as another reason to stick to exporting functions: <code>mkBitcoin</code>, <code>plusBitcoins</code>, <code>minusBitcoins</code>, and whatever other operations we might need.</p>\n<p>Let's take a look at a more realistic example where the amount sent is supplied to us as a <code>Scientific</code> value, likely from some JSON object, and we want to update the balance of our account. 
For simplicity's sake I will use a <code>State</code> monad, but the same approach will work just as well with whatever stateful setup you have.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">newtype Balance = Balance Bitcoin deriving Show\n\nsendBitcoin :: MonadThrow m =&gt; Balance -&gt; Scientific -&gt; m (Bitcoin, Balance)\nsendBitcoin startingBalance rawAmount =\n  flip runStateT startingBalance $ do\n    amount &lt;- toBitcoin &lt;$&gt; fromScientificDecimalBounded rawAmount\n    Balance balance &lt;- get\n    newBalance &lt;- minusBitcoins balance amount\n    put $ Balance newBalance\n    pure amount\n</code></pre>\n<p>Usage of this simple function will demonstrate the power of the approach taken in the\nlibrary as well as its limitations:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; balance &lt;- mkBitcoin 10.05\n&gt;&gt;&gt; sendBitcoin (Balance balance) 0.5\n(0.50000000,Balance 9.55000000)\n&gt;&gt;&gt; sendBitcoin (Balance balance) 1e-6\n(0.00000100,Balance 10.04999900)\n&gt;&gt;&gt; sendBitcoin (Balance balance) 1e+6\n*** Exception: arithmetic underflow\n&gt;&gt;&gt; arithEither $ sendBitcoin (Balance balance) (-1)\nLeft arithmetic underflow\n</code></pre>\n<p>We witness <code>Overflow</code>/<code>Underflow</code> errors as expected, but we get almost no information on where exactly the problem occurred and which value was responsible for it. This is something that can be fixed with customized exceptions, but for now we do achieve the most important goal, namely protecting our calculations from all the dangerous problems without doing any explicit checking.</p>\n<p>Nowhere in <code>sendBitcoin</code> did we have to validate our input, output, or intermediate values. Not a single <code>if then else</code> statement. 
This is because all of the information needed to determine the validity of the above operations was encoded into the type, and the library enforces that validity for the programmer.</p>\n<h3 id=\"mixing-decimal-types\">Mixing Decimal types</h3>\n<p>Although multiplying two <code>Bitcoin</code> values makes no sense, computing the product of an amount and a percentage makes perfect sense. So, how do we go about multiplying different decimals together?</p>\n<p>While demonstrating the interoperability of different decimal types, we'd also like to show how higher-precision integrals can be used with <code>Decimal</code>. In this example we'll use a <code>Word128</code>-backed <code>Decimal</code> for computing <a href=\"https://en.wikipedia.org/wiki/Future_value\">future value</a>. There are a couple of packages that provide 128-bit integral types, and it doesn't matter which one the type comes from.</p>\n<p>Our goal is to compute the savings account balance at 1.9% APY (Annual Percentage Yield) in 30 days if you start with 10,000 BTC and add 10 BTC each day.</p>\n<p>We will start by defining the rounding strategy implementation for the <code>Word128</code> type and\nspecifying the <code>Decimal</code> type we will be using for the computation:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">instance Round RoundHalfUp Word128 where\n  roundDecimal = roundHalfUp\n\ntype CDecimal = Decimal RoundHalfUp 33 Word128\n</code></pre>\n<p>The code below is not the implementation of the <code>FV</code> (Future Value) function as it is known in finance. It is a direct translation of how we think the accrual of interest works. 
In plain English, we can say that to compute the balance of the account tomorrow, we take the balance we have today, multiply it by the daily interest rate, and add the result to today's balance together with the amount we promised to top up daily.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">futureValue :: MonadThrow m =&gt; CDecimal -&gt; CDecimal -&gt; CDecimal -&gt; Int -&gt; m CDecimal\nfutureValue startBalance dailyRefill apy days = do\n  dailyScale &lt;- -- apy is in % and the year of 2020 is a leap year\n    fromIntegralDecimalBounded (100 * 366)\n  dailyRate &lt;- divideDecimalBoundedWithRounding apy dailyScale\n  let go curBalance day\n        | day &lt; days = do\n          accruedDaily &lt;- timesDecimalBoundedWithRounding curBalance dailyRate\n          nextDayBalance &lt;- sumDecimalBounded [curBalance, accruedDaily, dailyRefill]\n          go nextDayBalance (day + 1)\n        | otherwise = pure curBalance\n  go startBalance 0\n</code></pre>\n<p>The above implementation works on the <code>CDecimal</code> type. What we need to calculate is <code>Bitcoin</code>. This means we have to do some type conversions and scaling in order to match up the types of the <code>futureValue</code> function. 
Then we do some rounding and conversion again to reduce the precision and obtain the new <code>Balance</code>:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">futureValueBitcoin :: MonadThrow m =&gt; Balance -&gt; Bitcoin -&gt; Rational -&gt; Int -&gt; m (Balance, CDecimal)\nfutureValueBitcoin (Balance (Bitcoin balance)) (Bitcoin dailyRefill) apy days = do\n  balance&#x27; &lt;- scaleUpBounded (fromIntegral &lt;$&gt; castRounding balance)\n  dailyRefill&#x27; &lt;- scaleUpBounded (fromIntegral &lt;$&gt; castRounding dailyRefill)\n  apy&#x27; &lt;- fromRationalDecimalBoundedWithoutLoss apy\n  endBalance &lt;- futureValue balance&#x27; dailyRefill&#x27; apy&#x27; days\n  endBalanceRounded &lt;- integralDecimalToDecimalBounded (roundDecimal endBalance)\n  pure (Balance $ Bitcoin $ castRounding endBalanceRounded, endBalance)\n</code></pre>\n<p>Now we can compute what our balance will be in 30 days:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">computeBalance :: Arith (Balance, CDecimal)\ncomputeBalance = do\n  balance &lt;- Balance &lt;$&gt; 10000\n  topup &lt;- 10\n  futureValueBitcoin balance topup 1.9 30\n</code></pre>\n<p>Let's see what values we get and how they compare to the actual <code>FV</code> function that works on <code>Double</code> (for the curious, here is one possible implementation: <a href=\"https://github.com/numpy/numpy/blob/e94ed84010c60961f82860d146681d3fd607de4e/numpy/lib/financial.py#L36\">numpy.fv</a>):</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; fst &lt;$&gt; arithM computeBalance\nBalance 10315.81142818\n&gt;&gt;&gt; fv (1.9 &#x2F; 36600) 30 (-10) (-10000)\n10315.811428177167\n</code></pre>\n<p>That's pretty good. We get the accurately rounded result of our new balance. 
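</p>\n<p>As a side note, the <code>fv</code> function used in the comparison above is not defined in this post. A minimal <code>Double</code>-based sketch of the standard future value formula, assuming end-of-period payments and the argument order of the call above (rate, number of periods, payment, present value), could look something like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">-- A sketch of the textbook future value formula; not part of safe-decimal.\n-- Signs follow the numpy.fv convention: outflows are negative.\nfv :: Double -&gt; Int -&gt; Double -&gt; Double -&gt; Double\nfv rate nper pmt pv = negate (pv * k + pmt * (k - 1) &#x2F; rate)\n  where k = (1 + rate) ^ nper\n</code></pre>\n<p>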
But how\naccurate is the computed result before the rounding is applied? As accurate as 128 bits can\nprovide in the presence of rounding:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; snd &lt;$&gt; arithM computeBalance\n10315.811428176906130029412612348658890\n</code></pre>\n<p>We get much better accuracy here than we could with <code>Double</code>. This isn't surprising, since we have more bits at our disposal, but accuracy is not the only benefit of this calculation. The result is also deterministic! This is practically impossible to guarantee with floating point calculations across different platforms and architectures.</p>\n<h2 id=\"available-solutions\">Available solutions</h2>\n<p>A very common question people ask when a new library is announced is: &quot;What is wrong with the currently available solutions?&quot; That is a perfectly reasonable question, to which we hopefully have a compelling answer.</p>\n<p>We had a strong requirement for safety, correctness, and performance. 
This is a combination that none of the available libraries in the Haskell ecosystem could provide.</p>\n<p>I will use <code>Data.Fixed</code> from <code>base</code> as an example and list some of the limitations that prevented us from using it:</p>\n<ul>\n<li>\n<p>It is backed by <code>Integer</code>, which makes it slower than it should be for common cases.</p>\n</li>\n<li>\n<p>It truncates instead of supporting more useful rounding strategies:</p>\n</li>\n</ul>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">&gt;&gt;&gt; 5.39 :: Fixed E1\n5.3\n&gt;&gt;&gt; 5.499999999999 :: Fixed E1\n5.4\n</code></pre>\n<ul>\n<li>No built-in protection against runtime exceptions:</li>\n</ul>\n<pre><code>&gt;&gt;&gt; f = 5.49 :: Fixed E1\n&gt;&gt;&gt; f &#x2F; 0\n*** Exception: divide by zero\n</code></pre>\n<ul>\n<li>\n<p>There is a limited number of scaling types: <code>E0</code>, <code>E1</code>, <code>E2</code>, <code>E3</code>, <code>E6</code>, <code>E9</code>, and <code>E12</code>. It is possible to add new ones with <code>HasResolution</code>, but it is a bit inconvenient.</p>\n</li>\n<li>\n<p>No built-in ability to specify bounds. This means that there is no protection against things like negative values or going outside of artificially imposed limits.</p>\n</li>\n</ul>\n<p>Similar arguments can be applied to other libraries, especially the objection <a href=\"https://gist.github.com/lehins/727a86c71ff32e18a51b42d3dc0736fe\">regarding performance</a>. 
This objection is not unfounded: our benchmarks have revealed performance issues of practical relevance with existing implementations.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I encourage everyone who writes software for finance, blockchain, and other areas that require exact precision and safety of calculations to seriously consider all the implications of choosing the wrong data type for representing their numeric values.</p>\n<p>Haskell is a very safe language out of the box, but as you saw in this post, it does not offer the desired level of safety when it comes to operations on numeric values. Hopefully we were able to convince you that, at least for decimal numbers, such safety can be achieved with the <code>safe-decimal</code> library.</p>\n<p>If you feel like this post describes problems that are familiar to you and you are looking for a solution, please reach out to us and we will be glad to help.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/",
        "slug": "safe-decimal-right-on-the-money",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Decimal Safety Right on The Money",
        "description": "We developed a library for operating on decimal numbers that guards you from errors. Come learn how it works and how you can use it",
        "updated": null,
        "date": "2020-02-12T17:42:00Z",
        "year": 2020,
        "month": 2,
        "day": 12,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "blockchain"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Alexey Kuleshevich",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/safe-decimal-right-on-the-money/",
        "components": [
          "blog",
          "safe-decimal-right-on-the-money"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "problems-we-want-to-solve",
            "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#problems-we-want-to-solve",
            "title": "Problems we want to solve",
            "children": [
              {
                "level": 3,
                "id": "floating-point",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#floating-point",
                "title": "Floating point",
                "children": []
              },
              {
                "level": 3,
                "id": "fixed-point-decimal",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#fixed-point-decimal",
                "title": "Fixed point decimal",
                "children": []
              },
              {
                "level": 3,
                "id": "precision",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#precision",
                "title": "Precision",
                "children": []
              },
              {
                "level": 3,
                "id": "storage-and-performance",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#storage-and-performance",
                "title": "Storage and Performance",
                "children": []
              },
              {
                "level": 3,
                "id": "bounds",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#bounds",
                "title": "Bounds",
                "children": []
              },
              {
                "level": 3,
                "id": "runtime-exceptions",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#runtime-exceptions",
                "title": "Runtime exceptions",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "decimal-for-crypto",
            "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#decimal-for-crypto",
            "title": "Decimal for crypto",
            "children": [
              {
                "level": 3,
                "id": "definition",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#definition",
                "title": "Definition",
                "children": []
              },
              {
                "level": 3,
                "id": "construction-and-arithmetic",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#construction-and-arithmetic",
                "title": "Construction and arithmetic",
                "children": []
              },
              {
                "level": 3,
                "id": "num-and-fractional",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#num-and-fractional",
                "title": "Num and Fractional",
                "children": []
              },
              {
                "level": 3,
                "id": "mixing-decimal-types",
                "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#mixing-decimal-types",
                "title": "Mixing Decimal types",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "available-solutions",
            "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#available-solutions",
            "title": "Available solutions",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/safe-decimal-right-on-the-money/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 3370,
        "reading_time": 17,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/rust-devops.md",
        "colocated_path": null,
"content": "<p>On February 2, 2020, one of FP Complete's Lead Software Engineers—Mike McGirr—presented a webinar on using Rust for creating DevOps tooling.</p>\n<h2 id=\"webinar-outline\">Webinar Outline</h2>\n<p>FP Complete is hosting a functional programming\nwebinar, “Learn Rapid Rust with DevOps Success\nStrategies”: a beginner’s guide, including a sample Rust\ndemonstration, on writing your DevOps tools with Rust\nover Haskell. It is an introduction to Rust, with basic DevOps\nuse cases and the library ecosystem, airing on\nFebruary 5th, 2020.</p>\n<p>The webinar will be hosted by Mike McGirr, a DevOps\nSoftware Engineer at FP Complete. It will provide an\nabundance of Rust information with respect to\nfunctional programming and DevOps, covering the features (safety,\nspeed, and accuracy) that make Rust unique and contribute\nto its popularity, and the reasons it may be preferred over Haskell as a\nlanguage of choice for operating systems,\nweb browsers, and device drivers, among others. The\nwebinar offers an interesting opportunity to learn and\nuse Rust in developing real-world projects alongside\nHaskell or other functional programming languages\navailable today.</p>\n<h2 id=\"topics-covered\">Topics covered</h2>\n<p>During the webinar we will cover the following\ntopics:</p>\n<ul>\n<li>A quick intro and background into the Rust programming language</li>\n<li>Some scenarios and reasons why you would want to use Rust for writing your DevOps tooling (and some reasons why you wouldn’t)</li>\n<li>A small example of using the existing AWS libraries to create a basic DevOps tool</li>\n<li>How to Integrate FP into your Organization</li>\n</ul>\n<p>Mike McGirr, a Lead Software Engineer at FP\nComplete, will help us understand the reasoning that\nsupports using Rust over other functional programming\nlanguages offered in the market today.</p>\n<h2 id=\"more-about-your-host\">More about your host</h2>\n<p>The webinar will be hosted by Mike McGirr, a veteran\nDevOps Software Engineer at 
FP Complete. With years of\nexperience in DevOps software development, Mike will\nwalk us through a first in a series of Rust webinars\ndiscussing why we would, and how we could utilize Rust\nas a functional programming language to build DevOps\nover other functional programming languages available\nin the market today. Mike will also share with us a\nsmall example script written in Rust showing how Rust\nmay be used.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-devops/",
        "slug": "rust-devops",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Rust with DevOps Success Strategies",
"description": "Wednesday Feb 5th, 2020, at 10:00 AM PST. Webinar Outline: FP Complete is hosting a functional programming webinar, “Learn Rapid Rust with DevOps Success Strategies.” A beginner’s guide including a sample Rust demonstration on writing your DevOps tools with Rust over Haskell. An introduction to Rust, with basic DevOps use cases, and the library ecosystem, […]",
        "updated": null,
        "date": "2020-02-05",
        "year": 2020,
        "month": 2,
        "day": 5,
        "taxonomies": {
          "tags": [
            "devops",
            "rust",
            "insights"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mike McGirr",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/rust-devops/",
        "components": [
          "blog",
          "rust-devops"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "webinar-outline",
            "permalink": "https://tech.fpcomplete.com/blog/rust-devops/#webinar-outline",
            "title": "Webinar Outline",
            "children": []
          },
          {
            "level": 2,
            "id": "topics-covered",
            "permalink": "https://tech.fpcomplete.com/blog/rust-devops/#topics-covered",
            "title": "Topics covered",
            "children": []
          },
          {
            "level": 2,
            "id": "more-about-your-host",
            "permalink": "https://tech.fpcomplete.com/blog/rust-devops/#more-about-your-host",
            "title": "More about your host",
            "children": []
          }
        ],
        "word_count": 351,
        "reading_time": 2,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/",
            "title": "Collect in Rust, traverse in Haskell and Scala"
          }
        ]
      },
      {
        "relative_path": "blog/transformations-on-applicative-concurrent-computations.md",
        "colocated_path": null,
        "content": "<p>When deciding which language to use to solve challenges that require heavy concurrent algorithms, it's hard to not consider Haskell. Its immutable and persistent data structures reduce the introduction of accidental complexity, and the GHC runtime facilitates the creation of thousands of (green) threads without having to worry as much about the memory and performance costs.</p>\n<p>The epitome of Haskell's concurrent API is the <code>async</code> package, which provides higher-order functions (e.g. <a href=\"https://www.stackage.org/haddock/lts-12.24/async-2.2.1/Control-Concurrent-Async.html#v:race\"><code>race</code></a>, <a href=\"https://www.stackage.org/haddock/lts-12.24/async-2.2.1/Control-Concurrent-Async.html#v:mapConcurrently\"><code>mapConcurrently</code></a>, etc.) that allow us to run <code>IO</code> sub-routines and combine their results in various ways while executing concurrently. It also offers the type  <a href=\"https://www.stackage.org/haddock/lts-12.24/async-2.2.1/Control-Concurrent-Async.html#t:Concurrently\"><code>Concurrently</code></a> which allows developers to give normal sub-routines concurrent properties, and also provides <code>Applicative</code> and <code>Alternative</code>  instances that help in the creation of values from composing smaller sub-routines.</p>\n<p>In this blog post, we will discuss some of the drawbacks of using the <a href=\"https://www.stackage.org/haddock/lts-12.24/async-2.2.1/Control-Concurrent-Async.html#t:Concurrently\"><code>Concurrently</code></a> type when composing sub-routines. 
Then we will show how we can overcome these shortcomings by taking advantage of the structural nature of the <code>Applicative</code> and <code>Alternative</code> typeclasses; re-shaping and optimizing the execution of a tree of sub-routines.</p>\n<p>And, if you simply want to get these performance advantages in your Haskell code today, you can cut to the chase and begin using the new <a href=\"https://www.stackage.org/haddock/lts-12.24/unliftio-0.2.9.0/UnliftIO-Async.html#t:Conc\"><code>Conc</code></a> datatype we've introduced in <a href=\"https://www.stackage.org/lts-12.24/package/unliftio-0.2.9.0\"><code>unliftio</code> 0.2.9.0</a>.</p>\n<h2 id=\"the-drawbacks-of-concurrently\">The drawbacks of <code>Concurrently</code></h2>\n<p>Getting started with <code>Concurrently</code> is easy. We can wrap an <code>IO a</code> sub-routine with the <code>Concurrently</code> constructor, and then we can compose async values using the map (<code>&lt;$&gt;</code>), apply (<code>&lt;*&gt;</code>), and alternative (<code>&lt;|&gt;</code>) operators. An example might be:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">myPureFunction :: String -&gt; String -&gt; String -&gt; String\nmyPureFunction a b c = a ++ &quot; &quot; ++ b ++ &quot; &quot; ++ c\n\nmyComputation :: Concurrently String\nmyComputation =\n  myPureFunction\n  &lt;$&gt; Concurrently fetchStringFromAPI1\n  &lt;*&gt; (    Concurrently fetchStringFromAPI2_Region1\n       &lt;|&gt; Concurrently fetchStringFromAPI2_Region2\n       &lt;|&gt; Concurrently fetchStringFromAPI2_Region3\n       &lt;|&gt; Concurrently fetchStringFromAPI2_Region4)\n  &lt;*&gt; Concurrently fetchStringFromAPI3\n</code></pre>\n<p>Let's talk a bit on the drawbacks of this approach. How many threads do you think we need to make sure all these calls execute concurrently? 
Try to come up with a number and an explanation and then continue reading.</p>\n<p>I am guessing you are expecting this code to spawn six (6) threads, correct? One for each <code>IO</code> sub-routine that we are using. However, with the existing implementation of <code>Applicative</code> and <code>Alternative</code> in <code>Concurrently</code>, we will spawn at least ten (10) threads. Let's explore these instances to have a better understanding of what is going on:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">instance Applicative Concurrently where\n  pure = Concurrently . return\n  Concurrently fs &lt;*&gt; Concurrently as =\n    Concurrently $ (\\(f, a) -&gt; f a) &lt;$&gt; concurrently fs as\n\ninstance Alternative Concurrently where\n  Concurrently as &lt;|&gt; Concurrently bs =\n    Concurrently $ either id id &lt;$&gt; race as bs\n</code></pre>\n<p>First, let us expand the alternative calls in our example:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">    Concurrently fetchStringFromAPI2_Region1\n&lt;|&gt; Concurrently fetchStringFromAPI2_Region2\n&lt;|&gt; Concurrently fetchStringFromAPI2_Region3\n&lt;|&gt; Concurrently fetchStringFromAPI2_Region4\n\n--- is equivalent to\nConcurrently (\n  either id id &lt;$&gt;\n    race {- 2 threads -}\n      fetchStringFromAPI2_Region1\n      (either id id &lt;$&gt;\n         race {- 2 threads -}\n           fetchStringFromAPI2_Region2\n           (either id id &lt;$&gt;\n              race {- 2 threads -}\n                fetchStringFromAPI2_Region3\n                fetchStringFromAPI2_Region4))\n)\n</code></pre>\n<p>Next, let us expand the applicative calls:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">    Concurrently (myPureFunction &lt;$&gt; fetchStringFromAPI1)\n&lt;*&gt; Concurrently 
fetchStringFromAPI2\n&lt;*&gt; Concurrently fetchStringFromAPI3\n\n--- is equivalent to\n\nConcurrently (\n  (\\(f, a) -&gt; f a) &lt;$&gt;\n    concurrently {- 2 threads -}\n      ( (\\(f, a) -&gt; f a) &lt;$&gt;\n         concurrently {- 2 threads -}\n           (myPureFunction &lt;$&gt; fetchStringFromAPI1)\n           fetchStringFromAPI2\n      )\n      fetchStringFromAPI3\n)\n</code></pre>\n<p>You can tell that we always spawn two threads for each pair of sub-routines. Suppose we have 7 sub-routines we want to compose via <code>Applicative</code> or <code>Alternative</code>. Using this implementation, we would spawn at least 14 new threads, when at most 8 should do the job. For each composition we do, an extra thread is spawned to deal with bookkeeping.</p>\n<p>Another drawback to consider: what happens if one of the values in the call is a <code>pure</code> call? Given this code:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">pure foo &lt;|&gt; bar\n</code></pre>\n<p>We end up spawning a new thread (unnecessarily) to wait for <code>foo</code>, even though it has already been computed and should always win. As we mentioned before, Haskell is an excellent choice for concurrency because it makes spawning threads cheap; however, these threads don't come for free, and we should strive to avoid redundant thread creation.</p>\n<h2 id=\"introducing-the-conc-type\">Introducing the <code>Conc</code> type</h2>\n<p>To address the issues mentioned above, we implemented a new type called <code>Conc</code> in our <code>unliftio</code> package. 
It has the same purpose as <code>Concurrently</code>, but it offers some extra guarantees:</p>\n<ul>\n<li>There is going to be only a single bookkeeping thread for all <code>Applicative</code> and <code>Alternative</code> compositions.</li>\n<li>If we have <code>pure</code> calls in an <code>Applicative</code> or an <code>Alternative</code> composition, we will not spawn a new thread.</li>\n<li>We will optimize the code for trivial cases. For example, not spawning a thread when evaluating a single <code>Conc</code> value (instead of a composition of <code>Conc</code> values).</li>\n<li>We can compose more than <code>IO</code> sub-routines. Any monadic type that implements <code>MonadUnliftIO</code> is accepted.</li>\n<li>Child threads are always launched in an unmasked state, not the inherited state of the parent thread.</li>\n</ul>\n<p>The <code>Conc</code> type is defined as follows:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">data Conc m a where\n  Action :: m a -&gt; Conc m a\n  Apply  :: Conc m (v -&gt; a) -&gt; Conc m v -&gt; Conc m a\n  Then   :: Conc m a -&gt; Conc m b -&gt; Conc m b\n  LiftA2 :: (x -&gt; y -&gt; a) -&gt; Conc m x -&gt; Conc m y -&gt; Conc m a\n  Pure   :: a -&gt; Conc m a\n  Alt    :: Conc m a -&gt; Conc m a -&gt; Conc m a\n  Empty  :: Conc m a\n\ninstance MonadUnliftIO m =&gt; Applicative (Conc m) where\n  pure   = Pure\n  (&lt;*&gt;)  = Apply\n  (*&gt;)   = Then\n  liftA2 = LiftA2\n\ninstance MonadUnliftIO m =&gt; Alternative (Conc m) where\n  (&lt;|&gt;) = Alt\n</code></pre>\n<p>If you are familiar with <code>Free</code> types, this will look eerily similar. We are going to represent our concurrent computations as data so that we can later transform or evaluate them as we see fit. 
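</p>\n<p><code>Conc</code> values are built with <code>conc</code> and executed with <code>runConc</code>, both exported by <code>UnliftIO.Async</code>. As a minimal sketch, reusing the fetch functions from the example above:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">-- Run two sub-routines concurrently and pair their results;\n-- runConc evaluates the composed Conc tree.\nfetchBoth :: IO (String, String)\nfetchBoth =\n  runConc $ (,)\n    &lt;$&gt; conc fetchStringFromAPI1\n    &lt;*&gt; conc fetchStringFromAPI3\n</code></pre>\n<p>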
In this setting, our first example would look something like the following:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">myComputation :: Conc IO String\nmyComputation =\n  myPureFunction\n  &lt;$&gt; conc fetchStringFromAPI1\n  &lt;*&gt; (    conc fetchStringFromAPI2_Region1\n       &lt;|&gt; conc fetchStringFromAPI2_Region2\n       &lt;|&gt; conc fetchStringFromAPI2_Region3\n       &lt;|&gt; conc fetchStringFromAPI2_Region4)\n\n--- is equivalent to\n\nApply (myPureFunction &lt;$&gt; fetchStringFromAPI1)\n      (Alt (Action fetchStringFromAPI2_Region1)\n           (Alt (Action fetchStringFromAPI2_Region2)\n                (Alt (Action fetchStringFromAPI2_Region3)\n                     (Action fetchStringFromAPI2_Region4))))\n\n</code></pre>\n<p>You may notice we keep the tree structure of the <code>Concurrently</code> implementation. However, given that we are dealing with a pure data structure, we can modify our <code>Conc</code> value to something that is easier to evaluate. 
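</p>\n<p>To make this concrete, here is a hypothetical simplification pass (not part of <code>unliftio</code>'s API) that uses the <code>Applicative</code> laws to collapse compositions of <code>Pure</code> values without running any <code>IO</code>:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">-- Hypothetical pass: fold adjacent Pure nodes together,\n-- leaving Action, Alt, and mixed nodes untouched.\nsimplify :: Conc m a -&gt; Conc m a\nsimplify (Apply f v) =\n  case (simplify f, simplify v) of\n    (Pure g, Pure x) -&gt; Pure (g x)\n    (g, x)           -&gt; Apply g x\nsimplify (Alt a b) = Alt (simplify a) (simplify b)\nsimplify c = c\n</code></pre>\n<p>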
Indeed, thanks to the <code>Applicative</code> interface, we don't need to evaluate any of the <code>IO</code> sub-routines to do transformations (magic!).</p>\n<p>We have additional (internal) types that flatten all our alternative and applicative values:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">data Flat a\n  = FlatApp !(FlatApp a)\n  | FlatAlt !(FlatApp a) !(FlatApp a) ![FlatApp a]\n\ndata FlatApp a where\n  FlatPure   :: a -&gt; FlatApp a\n  FlatAction :: IO a -&gt; FlatApp a\n  FlatApply  :: Flat (v -&gt; a) -&gt; Flat v -&gt; FlatApp a\n  FlatLiftA2 :: (x -&gt; y -&gt; a) -&gt; Flat x -&gt; Flat y -&gt; FlatApp a\n</code></pre>\n<p>These types are equivalent to our <code>Conc</code> type, but with a few differences:</p>\n<ul>\n<li>The <code>Flat</code> type separates <code>Conc</code> values created via <code>Applicative</code> from the ones created via <code>Alternative</code>.</li>\n<li>The <code>FlatAlt</code> constructor flattens an <code>Alternative</code> tree into a list (helping us spawn all of its branches at once and facilitating the usage of a single bookkeeping thread).\n<ul>\n<li>Note that we represent this as an &quot;at least two&quot; list, similar to the non-empty list representation from the <code>semigroups</code> package.</li>\n</ul>\n</li>\n<li>The <code>Flat</code> and <code>FlatApp</code> types are not polymorphic in their monadic context, given that they rely directly on <code>IO</code>. 
We can transform the <code>m</code> parameter in our <code>Conc m a</code> type to <code>IO</code> via the <code>MonadUnliftIO</code> constraint.</li>\n</ul>\n<p>The first example of our blog post, when flattened, would look something like the following:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">FlatApp\n  (FlatApply\n    (FlatApp (FlatAction (myPureFunction &lt;$&gt; fetchStringFromAPI1)))\n    (FlatAlt (FlatAction fetchStringFromAPI2_Region1)\n             (FlatAction fetchStringFromAPI2_Region2)\n             [ FlatAction fetchStringFromAPI2_Region3\n             , FlatAction fetchStringFromAPI2_Region4 ]))\n</code></pre>\n<p>Using a <a href=\"https://github.com/fpco/unliftio/blob/2c68959a5f498bf4ea457fc4bd673b4dfce5c512/unliftio/src/UnliftIO/Internals/Async.hs#L617\"><code>flatten</code></a> function that transforms a <code>Conc</code> value into a <code>Flat</code> value, we can <a href=\"https://github.com/fpco/unliftio/blob/2c68959a5f498bf4ea457fc4bd673b4dfce5c512/unliftio/src/UnliftIO/Internals/Async.hs#L662\">later evaluate</a> the concurrent sub-routine tree in a way that is optimal for our use case.</p>\n<h2 id=\"performance\">Performance</h2>\n<p>So, given that the <code>Conc</code> API reduces the number of threads created via <code>Alternative</code>, our implementation should always perform better, correct? Sadly, it is not all peachy. To ensure that we get the result of the first thread that finishes in an <code>Alternative</code> composition, we make use of the STM API. This approach works great when we want to gather values from multiple concurrent threads. 
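</p>\n<p>To give a flavor of the technique (a simplified sketch, not <code>unliftio</code>'s actual implementation), racing two <code>IO</code> actions via STM can be expressed with a shared <code>TMVar</code> that the first finisher fills:</p>\n<pre data-lang=\"Haskell\" class=\"language-Haskell \"><code class=\"language-Haskell\" data-lang=\"Haskell\">import Control.Concurrent.Async (withAsync)\nimport Control.Concurrent.STM\n  (atomically, newEmptyTMVarIO, putTMVar, takeTMVar)\n\n-- Simplified sketch: whichever action finishes first fills the\n-- TMVar; the loser is cancelled when its withAsync scope exits.\nraceSTM :: IO a -&gt; IO a -&gt; IO a\nraceSTM left right = do\n  result &lt;- newEmptyTMVarIO\n  withAsync (left &gt;&gt;= atomically . putTMVar result) $ \\_ -&gt;\n    withAsync (right &gt;&gt;= atomically . putTMVar result) $ \\_ -&gt;\n      atomically (takeTMVar result)\n</code></pre>\n<p>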
Sadly, the STM monad doesn't <a href=\"https://www.oreilly.com/library/view/parallel-and-concurrent/9781449335939/ch10.html#sec_stm-cost\">scale too well when composing lots of reads</a>, making this approach prohibitive if you are composing tens of thousands of <code>Conc</code> values.</p>\n<p>Considering this limitation, we only use <code>STM</code> when an <code>Alternative</code> function is involved; otherwise, we rely on <code>MVar</code>s to compose the results of multiple threads via <code>Applicative</code>. We can do this painlessly because we can change the evaluator of the sub-routine tree created by <code>Conc</code> on the fly.</p>\n<h2 id=\"conclusions\">Conclusions</h2>\n<p>We showcased how we can model the composition of computations using an <code>Applicative</code> and <code>Alternative</code> tree, and then, taking advantage of these APIs, we transformed this computation tree into something easier to execute concurrently. We also took advantage of this <em>sub-routines as data</em> approach to switch the evaluator between <code>MVar</code> and <code>STM</code> compositions.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/transformations-on-applicative-concurrent-computations/",
        "slug": "transformations-on-applicative-concurrent-computations",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Transformations on Applicative Concurrent Computations",
        "description": "We wrote a data type called Conc, which provides for more efficient concurrent computations. Come read how you can use this in your Haskell code today!",
        "updated": null,
        "date": "2020-01-27T04:28:32Z",
        "year": 2020,
        "month": 1,
        "day": 27,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Román González",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/transformations-on-applicative-concurrent-computations/",
        "components": [
          "blog",
          "transformations-on-applicative-concurrent-computations"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "the-drawbacks-of-concurrently",
            "permalink": "https://tech.fpcomplete.com/blog/transformations-on-applicative-concurrent-computations/#the-drawbacks-of-concurrently",
            "title": "The drawbacks of Concurrently",
            "children": []
          },
          {
            "level": 2,
            "id": "introducing-the-conc-type",
            "permalink": "https://tech.fpcomplete.com/blog/transformations-on-applicative-concurrent-computations/#introducing-the-conc-type",
            "title": "Introducing the Conc type",
            "children": []
          },
          {
            "level": 2,
            "id": "performance",
            "permalink": "https://tech.fpcomplete.com/blog/transformations-on-applicative-concurrent-computations/#performance",
            "title": "Performance",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusions",
            "permalink": "https://tech.fpcomplete.com/blog/transformations-on-applicative-concurrent-computations/#conclusions",
            "title": "Conclusions",
            "children": []
          }
        ],
        "word_count": 1528,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/teaching-haskell-with-duet.md",
        "colocated_path": null,
        "content": "<h1 id=\"teaching-haskell-with-duet\">Teaching Haskell with Duet</h1>\n<p>Teaching Haskell to complete beginners is an enjoyable\nexperience. Haskell is foreign; many of its features are alien to\nother programmers. It's purely functional. It's non-strict. Its type\nsystem is among the most pervasive of practical languages.</p>\n<h2 id=\"simple-at-the-core\">Simple at the core</h2>\n<p>Haskell's core language is simple, though. It shares this with Lisp.\n<a href=\"https://mitpress.mit.edu/sites/default/files/sicp/full-text/sicp/book/node10.html\">Structure and Interpretation of Computer Programs by MIT</a>\nteaches Lisp beginning with a substitution model for function\napplication. This turns out to work well for Haskell too. This is how\nI've been teaching Haskell to beginners at FP Complete for our\nclients.</p>\n<p>For example, in SICP, they use the example:</p>\n<pre data-lang=\"lisp\" class=\"language-lisp \"><code class=\"language-lisp\" data-lang=\"lisp\">(+ (square 6) (square 10))\n</code></pre>\n<p>which reduces the function <code>square</code> to</p>\n<pre data-lang=\"lisp\" class=\"language-lisp \"><code class=\"language-lisp\" data-lang=\"lisp\">(+ (* 6 6) (* 10 10))\n</code></pre>\n<p>which reduces by multiplication to:</p>\n<pre data-lang=\"lisp\" class=\"language-lisp \"><code class=\"language-lisp\" data-lang=\"lisp\">(+ 36 100)\n</code></pre>\n<p>and finally to</p>\n<pre data-lang=\"lisp\" class=\"language-lisp \"><code class=\"language-lisp\" data-lang=\"lisp\">136\n</code></pre>\n<p>As they note in SICP,</p>\n<blockquote>\n<p>The purpose of the substitution is to help us think about procedure\napplication, not to provide a description of how the interpreter\nreally works. Typical interpreters do not evaluate procedure\napplications by manipulating the text of a procedure to substitute\nvalues for the formal parameters.</p>\n</blockquote>\n<p>That is, this is a <em>model</em>; it's not the real thing. 
In fact, if you\nreally eyeball the very first step, you might wonder which <code>(square ..)</code> argument is evaluated first between the two. Scheme doesn't\nspecify an argument order; it varies. An implementation may even\ninline the whole thing.</p>\n<p>Rather, if we think about programs in terms of a simple sequence of\nrewrites, we get a lot of bang for our buck in terms of reasoning and\nunderstanding.</p>\n<h2 id=\"the-right-language-to-model\">The right language to model</h2>\n<p>Motivated by this goal, I started thinking about automating this\nprocess, so that students could use this model more readily, and see\nthe shape of functions and algorithms visually. The solution I came up\nwith was a new language that is a subset of Haskell, which I'll cover\nin this post.</p>\n<p>The reason that it's not full Haskell is that Haskell has a lot of\nsurface-level syntactic sugar. Evaluating the real language is\ncomplicated and infeasible. The following contains too many things at\nonce to consider:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">quicksort1 :: (Ord a) =&gt; [a] -&gt; [a]\nquicksort1 [] = []\nquicksort1 (x:xs) =\n  let smallerSorted = quicksort1 [a | a &lt;- xs, a &lt;= x]\n      biggerSorted = quicksort1 [a | a &lt;- xs, a &gt; x]\n  in  smallerSorted ++ [x] ++ biggerSorted\n</code></pre>\n<p>Pattern matches at the definition, list syntax, comprehensions,\nlets. There's a lot going on here that makes a newbie's eyes glaze\nover. We have to start simpler. But how simple?</p>\n<p>GHC Haskell has a tiny language called Core, to which all Haskell\nprograms compile down. 
Its AST looks roughly like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Expr\n  = App Expr Expr\n  | Var Var\n  | Lam Name Type Expr\n  | Case Expr [Alt]\n  | Let Bind Expr\n  | Lit Literal\n</code></pre>\n<p>Evaluation of Core is simple. However, Core is also a little <em>too\nlow-level</em>, because it puts polymorphic types and type-class\ndictionaries as normal arguments, inlines a lot of things, looks\nunderneath boxed types like <code>Int</code> (into <code>I#</code>), and adds some extra\ncapabilities normal Haskell doesn't have that are only appropriate for\na compiler writer to see. The above function compiled to Core starts\nlike this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">quicksort1\n  = \\ (@ a_a1Zd) ($dOrd_a1Zf :: Ord a_a1Zd) (ds_d22B :: [a_a1Zd]) -&gt;\n      case ds_d22B of {\n        [] -&gt; GHC.Types.[] @ a_a1Zd;\n        : x_a1sG xs_a1sH -&gt;\n          ++\n            @ a_a1Zd\n            (quicksort1\n               @ a_a1Zd\n               $dOrd_a1Zf\n               (letrec {\n                  ds1_d22C [Occ=LoopBreaker] :: [a_a1Zd] -&gt; [a_a1Zd]\n           ...\n</code></pre>\n<p>We have to explain the special list syntax, module qualification,\npolymorphic types, dictionaries, etc. all in one go, besides the\nobvious challenge of the unreadable naming convention. Core is made\nfor compilers and compiler writers, not for humans.</p>\n<h2 id=\"duet\">Duet</h2>\n<p>Therefore I took a middle way. Last year I wrote\n<a href=\"https://github.com/chrisdone/duet\">a language called Duet</a>, which is\na Haskell subset made specifically for teaching at this period of\nlearning Haskell. Duet only has these language features: data types,\ntype-classes, top-level definitions, lambdas, case expressions, and\nsome literals (strings, integrals, rationals). 
Its main feature is\nsteppability: the ability to step through the code. Every step\nproduces a valid program.</p>\n<p>Returning to the SICP example with our new tool, here's the same\nprogram in Duet:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">square = \\x -&gt; x * x\nmain = square 6 + square 10\n</code></pre>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">chris@precision:~&#x2F;Work&#x2F;duet-lang&#x2F;duet$ duet run examples&#x2F;sicp.hs\n(square 6) + (square 10)\n((\\x -&gt; x * x) 6) + (square 10)\n(6 * 6) + (square 10)\n36 + (square 10)\n36 + ((\\x -&gt; x * x) 10)\n36 + (10 * 10)\n36 + 100\n136\n</code></pre>\n<p>Here we see a substitution model in action. Each line is a valid\nprogram! You can take any line from the output and run it from that\npoint.</p>\n<p>Unlike Scheme, Duet picks an argument order (left-to-right) for strict\nfunctions like integer operations.</p>\n<p><strong>Note</strong>: You can follow along at home by creating a file and running\nit using <a href=\"https://github.com/chrisdone/duet\">docker run</a>\non Linux, OS X or Windows.</p>\n<h2 id=\"folds\">Folds</h2>\n<p>Let's turn our attention to the teaching of folds, which is a classic\nhurdle to get newbies through, as it is a kind of forcing function for\na variety of topics.</p>\n<p>The right fold is classically defined like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">foldr f z []     = z\nfoldr f z (x:xs) = f x (foldr f z xs)\n</code></pre>\n<p>This is not a valid Duet program, because (1) it uses list syntax\n(lists aren't special), and (2) it uses case analysis at the\ndeclaration level. 
If you try substitution stepping these, you quickly\narrive at an awkward conversation about the difference between the\nseemingly three-argument function <code>foldr</code>, and lambdas, partial\napplication, currying, and pattern matching, and whether we're\ndefining two functions or one. Here is the same program in Duet:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data List a = Nil | Cons a (List a)\nfoldr = \\f -&gt; \\z -&gt; \\l -&gt;\n  case l of\n    Nil -&gt; z\n    Cons x xs -&gt; f x (foldr f z xs)\n</code></pre>\n<p>At the end of teaching the substitution model, I cover that <code>\\x y z</code>\nis syntactic sugar for <code>\\x -&gt; \\y -&gt; \\z -&gt; ...</code>, but only after the\nintuition has been solidified that all Haskell functions take one\nargument. They may return other functions. So the updated program is:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data List a = Nil | Cons a (List a)\nfoldr = \\f z l -&gt;\n  case l of\n    Nil -&gt; z\n    Cons x xs -&gt; f x (foldr f z xs)\n</code></pre>\n<p>Which is perfectly valid Haskell, and each part of it can be rewritten\npredictably.</p>\n<p>Let's look at comparing <code>foldr</code> with <code>foldl</code>.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data List a = Nil | Cons a (List a)\nfoldr = \\f z l -&gt;\n  case l of\n    Nil -&gt; z\n    Cons x xs -&gt; f x (foldr f z xs)\nfoldl = \\f z l -&gt;\n  case l of\n    Nil -&gt; z\n    Cons x xs -&gt; foldl f (f z x) xs\nlist = Cons 1 (Cons 2 Nil)\n</code></pre>\n<h2 id=\"folds-at-a-glance\">Folds at a glance</h2>\n<p>For a <em>quick</em> summary, we can use holes like in normal Haskell\nindicated by <code>_</code> or <code>_foo</code>. 
In Duet, these are ignored by the type\nsystem and the stepper, letting you run the stepper with holes in,\ntoo. They don't result in an error, so you can build up expressions\nwith them inside.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main_foldr = foldr _f _nil list\nmain_foldl = foldl _f _nil list\nlist = Cons 1 (Cons 2 (Cons 3 (Cons 4 Nil)))\n</code></pre>\n<p>(I increased the size of the list for a longer, more compelling\noutput.)</p>\n<p>We can pass <code>--concise</code>, which is a convenience flag to literally\nfilter out intermediate steps (cases, lambdas), which helps us see the\n&quot;high-level&quot; recursion. This flag is still under evaluation (no pun\nintended), but is useful here. Full output is worth studying with\nstudents too, but is too long to fit in this blog post. I will include\na snippet from a non-concise example below.</p>\n<p>The output looks like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">$ duet run examples&#x2F;folds-strictness.hs --main main_foldr --concise\nfoldr _f _nil list\n_f 1 (foldr _f _nil (Cons 2 (Cons 3 (Cons 4 Nil))))\n_f 1 (_f 2 (foldr _f _nil (Cons 3 (Cons 4 Nil))))\n_f 1 (_f 2 (_f 3 (foldr _f _nil (Cons 4 Nil))))\n_f 1 (_f 2 (_f 3 (_f 4 (foldr _f _nil Nil))))\n_f 1 (_f 2 (_f 3 (_f 4 _nil)))\n$ duet run examples&#x2F;folds-strictness.hs --main main_foldl --concise\nfoldl _f _nil list\nfoldl _f (_f _nil 1) (Cons 2 (Cons 3 (Cons 4 Nil)))\nfoldl _f (_f (_f _nil 1) 2) (Cons 3 (Cons 4 Nil))\nfoldl _f (_f (_f (_f _nil 1) 2) 3) (Cons 4 Nil)\nfoldl _f (_f (_f (_f (_f _nil 1) 2) 3) 4) Nil\n_f (_f (_f (_f _nil 1) 2) 3) 4\n</code></pre>\n<p>We can immediately see what the &quot;right&quot; part of <code>foldr</code>\nmeans. Experienced Haskellers can already see the teaching\nopportunities sprouting from the earth at this point. 
We're using O(n)\nspace here, building nested thunks, or using too much stack. Issues\nabound.</p>\n<p>Meanwhile, in <code>foldl</code>, we've shifted accumulation of the nested thunks\nto an argument of <code>foldl</code>, but at the end, we still have a nested\nthunk. Enter the strict left fold!</p>\n<p>We also see the argument order come into play: <code>_f</code> is applied to <code>1</code>\nfirst in <code>foldr</code> (<code>_f 1 (foldr ...)</code>), but last in <code>foldl</code> (<code>_f (_f _nil 1) ...</code>), which is another important part of understanding the\ndistinction between the two.</p>\n<h2 id=\"strict-folds\">Strict folds</h2>\n<p>To see the low-level mechanics, and as a precursor to teaching the strict\nfold, we ought to use an actual arithmetic operation (because you\ncan't strictly evaluate a <code>_</code> hole; by definition, it's missing):</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main_foldr = foldr (\\x y -&gt; x + y) 0 list\nmain_foldl = foldl (\\x y -&gt; x + y) 0 list\n</code></pre>\n<p>Both folds eventually yield:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">1 + (2 + 0)\n1 + 2\n3\n</code></pre>\n<p>And:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">((\\x y -&gt; x + y) 0 1) + 2\n((\\y -&gt; 0 + y) 1) + 2\n(0 + 1) + 2\n1 + 2\n3\n</code></pre>\n<p>(Here you can also easily see where the <code>0</code> lies in the tree.)</p>\n<p>Both of these exhibit the built-up thunk problem mentioned above.</p>\n<p>Duet has bang patterns, so we can define a strict fold like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data List a = Nil | Cons a (List a)\nfoldr = \\f z l -&gt;\n  case l of\n    Nil -&gt; z\n    Cons x xs -&gt; f x (foldr f z xs)\nfoldl = \\f z l -&gt;\n  case l of\n 
   Nil -&gt; z\n    Cons x xs -&gt; foldl f (f z x) xs\nfoldl_ = \\f z l -&gt;\n  case l of\n    Nil -&gt; z\n    Cons x xs -&gt;\n      case f z x of\n        !z_ -&gt; foldl_ f z_ xs\nlist = Cons 1 (Cons 2 Nil)\nmain_foldr = foldr (\\x y -&gt; x + y) 0 list\nmain_foldl = foldl (\\x y -&gt; x + y) 0 list\nmain_foldl_ = foldl_ (\\x y -&gt; x + y) 0 list\n</code></pre>\n<p>(We don't allow <code>'</code> as part of a variable name, as it's not really\nnecessary and is confusing for non-Haskeller beginners. An underscore\nsuffices.)</p>\n<p>Now, looking in detail without the <code>--concise</code> arg, just before the\nrecursion, we see the force of the addition:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">case Cons 1 (Cons 2 Nil) of\n  Nil -&gt; 0\n  Cons x xs -&gt;\n    case (\\x y -&gt; x + y) 0 x of\n      !z_ -&gt; foldl_ (\\x y -&gt; x + y) z_ xs\ncase (\\x y -&gt; x + y) 0 1 of\n  !z_ -&gt; foldl_ (\\x y -&gt; x + y) z_ (Cons 2 Nil)\ncase (\\y -&gt; 0 + y) 1 of\n  !z_ -&gt; foldl_ (\\x y -&gt; x + y) z_ (Cons 2 Nil)\ncase 0 + 1 of\n  !z_ -&gt; foldl_ (\\x y -&gt; x + y) z_ (Cons 2 Nil)\ncase 1 of\n  !z_ -&gt; foldl_ (\\x y -&gt; x + y) z_ (Cons 2 Nil)\nfoldl_ (\\x y -&gt; x + y) 1 (Cons 2 Nil)\n</code></pre>\n<p>And finally, taking a glance with <code>--concise</code>, we see:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">$ duet run examples&#x2F;folds-strictness.hs --main main_foldl_ --concise\nfoldl_ (\\x y -&gt; x + y) 0 list\nfoldl_ (\\x y -&gt; x + y) 1 (Cons 2 (Cons 3 (Cons 4 Nil)))\nfoldl_ (\\x y -&gt; x + y) 3 (Cons 3 (Cons 4 Nil))\nfoldl_ (\\x y -&gt; x + y) 6 (Cons 4 Nil)\nfoldl_ (\\x y -&gt; x + y) 10 Nil\n10\n</code></pre>\n<p>Which spells out quite clearly that now we are: (1) doing direct\nrecursion, and (2) calculating the accumulator with each recursion\nstep (<code>0</code>, <code>1</code>, <code>3</code>, 
<code>6</code>, <code>10</code>).</p>\n<h2 id=\"concluding\">Concluding</h2>\n<p>This post serves as both knowledge sharing for our team and a public\npost to show the kind of detailed training that we provide\nfor our clients.</p>\n<p>If you'd like Haskell training for your company,\n<a href=\"mailto:[email protected]\">contact us</a> to arrange a meeting.</p>\n<p>Want to read more about Haskell? Check out <a href=\"https://tech.fpcomplete.com/blog/\">our blog</a> and our <a href=\"https://tech.fpcomplete.com/haskell/\">Haskell\nhomepage</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/",
        "slug": "teaching-haskell-with-duet",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Teaching Haskell with Duet",
        "description": "Teaching Haskell to beginners with Duet",
        "updated": null,
        "date": "2019-12-30T05:18:36Z",
        "year": 2019,
        "month": 12,
        "day": 30,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/teaching-haskell-with-duet/",
        "components": [
          "blog",
          "teaching-haskell-with-duet"
        ],
        "summary": null,
        "toc": [
          {
            "level": 1,
            "id": "teaching-haskell-with-duet",
            "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/#teaching-haskell-with-duet",
            "title": "Teaching Haskell with Duet",
            "children": [
              {
                "level": 2,
                "id": "simple-at-the-core",
                "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/#simple-at-the-core",
                "title": "Simple at the core",
                "children": []
              },
              {
                "level": 2,
                "id": "the-right-language-to-model",
                "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/#the-right-language-to-model",
                "title": "The right language to model",
                "children": []
              },
              {
                "level": 2,
                "id": "duet",
                "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/#duet",
                "title": "Duet",
                "children": []
              },
              {
                "level": 2,
                "id": "folds",
                "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/#folds",
                "title": "Folds",
                "children": []
              },
              {
                "level": 2,
                "id": "folds-at-a-glance",
                "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/#folds-at-a-glance",
                "title": "Folds at a glance",
                "children": []
              },
              {
                "level": 2,
                "id": "strict-folds",
                "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/#strict-folds",
                "title": "Strict folds",
                "children": []
              },
              {
                "level": 2,
                "id": "concluding",
                "permalink": "https://tech.fpcomplete.com/blog/teaching-haskell-with-duet/#concluding",
                "title": "Concluding",
                "children": []
              }
            ]
          }
        ],
        "word_count": 2034,
        "reading_time": 11,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/async-exceptions-haskell-rust.md",
        "colocated_path": null,
        "content": "<p>Before getting started: no, there is no such thing as an async exception in Rust. I'll explain what I mean shortly. Notice the comma in the title :).</p>\n<p>GHC Haskell supports a feature called asynchronous (or async) exceptions. Normal, synchronous exceptions are generated by the currently running code from doing something like trying to read a file that doesn't exist. Asynchronous exceptions are generated from a <em>different</em> thread of execution, either another Haskell green thread, or the runtime system itself.</p>\n<p>Perhaps the best example of using async exceptions is the <code>timeout</code> function. This function will take a certain number of microseconds and an action to run. If the action completes in that time, all is well. If the action <em>doesn't</em> complete in that time, then the thread running that action receives an async exception.</p>\n<p>Rust does not have exceptions at all, much less async exceptions. (Yes, <code>panic</code>s behave fairly similarly to synchronous exceptions, but we'll ignore those in this context. They aren't relevant.) Rust also doesn't have a green thread-based runtime like Haskell does. There's basically no direct way to translate this async exception concept from Haskell into Rust.</p>\n<p>Or, at least, there wasn't. With Tokio, <code>async/.await</code>, executors, tasks, and futures, the story is quite different. A Haskell green thread looks quite a bit like a Rust task. Suddenly there's a <a href=\"https://docs.rs/tokio/0.2.6/tokio/time/fn.timeout.html\"><code>timeout</code> function in Tokio</a>. This post is going to compare the Haskell async exception mechanism to whatever powers Tokio's <code>timeout</code>. It's going to look at various trade-offs of the two different approaches. And I'll end with my own personal analysis.</p>\n<h2 id=\"async-exceptions-in-haskell\">Async exceptions in Haskell</h2>\n<p>The GHC Haskell runtime provides a green thread system. 
This means that there is a scheduler which assigns different green threads to actual OS threads to run on. These threads continue operating until they hit yield points. A common example of a yield point would be socket I/O. Take the pseudocode below:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">socket &lt;- openConnection address\nsend socket &quot;Hello world!&quot; -- yields\nmsg &lt;- recv socket -- yields\nputStrLn (&quot;Received message: &quot; ++ show msg)\n</code></pre>\n<p>Each time we perform what looks like blocking I/O in Haskell, in reality we are:</p>\n<ul>\n<li>Registering a wakeup call with the scheduler for when the socket completes its send or receive</li>\n<li>Putting the current green thread to sleep</li>\n<li>Getting woken up again when the scheduler has a free OS thread and there is data on the socket</li>\n</ul>\n<p>However, yield points happen far more often than just async I/O. Every time we perform any allocation, GHC automatically inserts a yield point. Since Haskell (unfortunately) tends to do a <em>lot</em> of heap allocation, this means that our code is implicitly littered with huge numbers of yield points. So much so that we can essentially assume that at any point in our execution, we may hit a yield point.</p>\n<p>And this brings us to async exceptions. Each green thread has its own queue of incoming async exceptions. And at each yield point, the runtime system will check if there are exceptions waiting on that queue. If so, it will pop one off the queue and throw it in the current green thread, where it can either be caught or, ultimately, take down the entire thread.</p>\n<p>My <a href=\"https://tech.fpcomplete.com/blog/2018/04/async-exception-handling-haskell/\">best practice advice</a> is to never <em>recover</em> from an async exception. Instead, you should only ever clean up your resources when an async exception occurs. 
In other words, if you ever catch an async exception, you may do some cleanup, but then you must immediately rethrow the exception.</p>\n<p>Since an async exception can occur anywhere, we have to be highly paranoid when writing resource-safe code in Haskell. For example, consider this pseudocode:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">h &lt;- openFile fp WriteMode\nsetPerms 0o600 h `onException` closeFile h\nuseFile h `finally` closeFile h\n</code></pre>\n<p>In a world without async exceptions, this is exception safe. We first open the file. If opening throws an exception, then the <code>openFile</code> call itself is responsible for releasing any resources it acquires. Next, if <code>setPerms</code> throws an exception, our <code>onException</code> call ensures that <code>closeFile</code> will close the file handle. And finally, when we call <code>useFile</code>, we use <code>finally</code> to ensure that <code>closeFile</code> will be called regardless of whether an exception occurs.</p>\n<p>However, in a world with async exceptions, lots more can go wrong:</p>\n<ul>\n<li>An exception can be generated between the call to <code>openFile</code> and <code>setPerms</code>, where there's no exception handler.</li>\n<li>An exception can be generated between the call to <code>setPerms</code> and <code>useFile</code>.</li>\n</ul>\n<p>Instead, in Haskell, we have to <em>mask</em> async exceptions, which temporarily stops them from being delivered. The code above could be written as:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">mask $ \\restore -&gt; do\n    h &lt;- openFile fp WriteMode\n    setPerms 0o600 h `onException` closeFile h\n    restore (useFile h) `finally` closeFile h\n</code></pre>\n<p>However, dealing with masking states is really complicated in general. 
So instead, we like to use helper functions like <code>bracket</code>:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">bracket (openFile fp WriteMode) closeFile $ \\h -&gt; do\n    setPerms 0o600 h\n    useFile h\n</code></pre>\n<p>There are many more details around implementation and usage of async exceptions in Haskell, but this is sufficient for our comparison for now.</p>\n<h2 id=\"canceled-futures-in-rust\">Canceled futures in Rust</h2>\n<p>The <code>Future</code> trait in Rust defines an abstraction for anything that can be <code>await</code>ed on. The core function is <code>poll</code>, which works something like this:</p>\n<ul>\n<li>Tell me if you're ready</li>\n<li>If you are ready, great! Tell me the completed value</li>\n<li>If you're not ready, I want to register a <code>Waker</code></li>\n</ul>\n<p>The <code>Waker</code> can then interact with the executor to make sure that the <em>task</em> which is <code>await</code>ing gets woken up when the <code>Future</code> is ready.</p>\n<p>In a simple async application in Rust, you'll have a task that waits on one <code>Future</code> at a time. For example, in pseudocode again:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">async {\n    let socket = open_connection(&amp;address);\n    socket::send(&quot;Hello world!&quot;).await;\n    let msg = socket::recv().await;\n    println!(&quot;Received message: {}&quot;, msg);\n}\n</code></pre>\n<p>Each of those <code>await</code>s is a yield point. The executor can allow another task to run, and will wake up the current task when the I/O is complete. 
This is very similar to the Haskell example I gave above.</p>\n<p>However, unlike Haskell:</p>\n<ul>\n<li>There is no queue of async exceptions sitting and waiting to kill our task</li>\n<li>There are no implicit yield points created by allocation</li>\n</ul>\n<p>If there are no async exceptions, how exactly does a <code>timeout</code> work in Rust? Well, instead of a task waiting for a <em>single</em> <code>Future</code> to complete, it waits for one of two <code>Future</code>s to complete. You can <a href=\"https://docs.rs/tokio/0.2.6/src/tokio/time/timeout.rs.html#134\">check out the code yourself</a>, but the basic idea is:</p>\n<ul>\n<li>Create two <code>Future</code>s\n<ul>\n<li>The action you want to try to run</li>\n<li>A timer that will complete when the timeout has expired</li>\n</ul>\n</li>\n<li>Whenever we <code>poll</code> to see if things are ready:\n<ul>\n<li>Check if the action is ready. If so: yay! Return its result as an <code>Ok</code></li>\n<li>Check if the timer is ready. If so: our timeout has expired, and we should return an <code>Err</code> saying how much time has elapsed.</li>\n<li>If neither is ready, say that we're not ready either and wait to get woken up again</li>\n</ul>\n</li>\n</ul>\n<p>Personally, I think this is a pretty elegant solution to the problem. Like the Haskell solution, it means that the action can only be stopped at a yield point. However, unlike the Haskell solution, yield points will be far less common in a Rust program, since we don't have the implicit sprinkling of yields caused by allocation.</p>\n<p>But now, let's talk about resource management. I made it clear that properly handling resources in the presence of async exceptions in Haskell is tricky. Not so in Rust! The standard way to handle resources is with RAII: you define a data type and stick a <code>Drop</code> on it. 
And in the world of cancellable <code>Future</code>s, this all works perfectly:</p>\n<ul>\n<li>The <code>Future</code> itself owns any resources it's using</li>\n<li>If the <code>timeout</code> triggers before the action completes, the <code>Future</code> in question is dropped</li>\n<li>When the <code>Future</code> is dropped, the resources it owns are also dropped</li>\n</ul>\n<p>The example below is more verbose than the Haskell equivalent above, but that's because we're defining a synthetic <code>Resource</code> struct. In real-life code, such structs would likely already exist.</p>\n<p>NOTE: You'll need at least Rust 1.39 to run the code below, and add a dependency on Tokio with a line like: <code>tokio = { version = &quot;0.2&quot;, features = [&quot;macros&quot;, &quot;time&quot;] }</code>.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use tokio::time::{delay_for, timeout};\nuse std::time::Duration;\n\nstruct Resource;\n\nimpl Resource {\n    fn new() -&gt; Self {\n        println!(&quot;acquire&quot;);\n        Resource\n    }\n}\n\nimpl Drop for Resource {\n    fn drop(&amp;mut self) {\n        println!(&quot;release&quot;);\n    }\n}\n\nasync fn worker() {\n    let _resource = Resource::new();\n    for i in 1..=10 {\n        delay_for(Duration::from_millis(100)).await;\n        println!(&quot;i == {}&quot;, i);\n    }\n}\n\n#[tokio::main]\nasync fn main() {\n    println!(&quot;Round 1&quot;);\n    let res = timeout(Duration::from_millis(2000), worker()).await;\n    println!(&quot;{:?}&quot;, res);\n\n    println!(&quot;\\n\\nRound 2&quot;);\n    let res = timeout(Duration::from_millis(1000), worker()).await;\n    println!(&quot;{:?}&quot;, res);\n\n    println!(&quot;\\n\\nRound 3&quot;);\n    let res = timeout(Duration::from_millis(500), worker()).await;\n    println!(&quot;{:?}&quot;, res);\n}\n</code></pre>\n<h2 id=\"my-analysis\">My analysis</h2>\n<p>The big point in Haskell's favor in all of 
this is its ability to preempt inside of computations. Whereas Rust's model lets you preempt most I/O actions, there won't be many yield points in other code. This can lead to lots of accidental blocking. There has been <a href=\"https://www.reddit.com/r/rust/comments/ebfj3x/stop_worrying_about_blocking_the_new_asyncstd/\">some</a> <a href=\"https://www.reddit.com/r/rust/comments/ebpzqx/do_not_stop_worrying_about_blocking_in_async/\">discussion</a> recently about possible mitigations of this issue at the executor level.</p>\n<p>Haskell's advantage here is diminished by the fact that, if you have code that does not allocate any memory, you don't get any yield points. However, in practice, this almost never happens. This did <a href=\"https://github.com/simonmar/async/issues/93\">affect some of my coworkers</a> recently, so it's not unheard of. But it's relatively rare, and you can insert yield points back into an optimized application with <a href=\"https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/using-optimisation.html#ghc-flag--fomit-yields\"><code>-fno-omit-yields</code></a>. You can argue that the fact that this <em>sometimes</em> fails spectacularly is even worse.</p>\n<p>I like the fact that, in Rust, you know exactly where your program may simply stop executing. Every time you see a <code>.await</code>, you know &quot;well, it's entirely possible that the executor will just drop me before I come back.&quot; And the fact that ownership, RAII, and dropping solves resource management exactly the same in async and synchronous Rust code is beautiful.</p>\n<p>Haskell pays a lot for the ability to kill threads with async exceptions. Every bit of code that manages resources needs to pay a cost in cognitive overhead. In practice, this truly does lead to a large number of bugs. Figuring out how and when to mask exceptions, and whether to have interruptible or uninterruptible masking (something I didn't really discuss), is another major curve ball. 
I think proper API design can mitigate a lot of the pain here. But the base library does <em>not</em> contain such API design, and bad practices abound.</p>\n<p>And finally, a question: how important are cancellable tasks/killable threads in practice? Being able to time things out is certainly powerful, in some cases. Racing two actions to see which one completes first? Less valuable in my opinion. I certainly teach it when I give Haskell training, but there are usually more elegant ways to solve the same problem.</p>\n<p>Since I'm stuck with async exceptions, I'll use <code>timeout</code> and <code>race</code> in Haskell, because <em>using</em> them isn't the dangerous part; having them in the first place is. Were I to design a runtime system for Haskell from the ground up, I'm not sure I'd introduce the concept. It certainly solves some really tricky problems, like interrupting long-running pure code. But I'm not convinced the feature really pulls its weight.</p>\n<p>On the other hand, in Rust, the feature is essentially free. The <code>Future</code> trait was designed to solve a bunch of general problems, and then at the library level it's possible to introduce a solution to cancel tasks. Pretty nifty.</p>\n<p>Finally, here's where these two languages are the same. They both elegantly and easily solve async I/O problems in general. You get to write blocking-style code without the blocking. And both of them have pretty complicated details under the surface (Haskell: masking, Rust: the <code>poll</code> method) which we can usually, and fortunately, ignore and leave to others to mess around with.</p>\n<h2 id=\"further-reading\">Further reading</h2>\n<p>Feel free to check out our <a href=\"https://tech.fpcomplete.com/haskell/\">Haskell</a> and <a href=\"https://tech.fpcomplete.com/rust/\">Rust</a> homepages for lots more content. 
If you're interested in learning all about exception handling in Haskell, check out our <a href=\"https://tech.fpcomplete.com/haskell/tutorial/exceptions/\">safe exception handling</a> tutorial. And if you want to learn about <code>async/await</code> in Rust, I'd recommend <a href=\"https://www.snoyman.com/blog/2019/12/rust-crash-course-08-down-dirty-future\">lessons 8 and 9 of the Rust Crash Course</a>.</p>\n<p class=\"text-center\">\n  <a class=\"button-coral\" href=\"https://www.fpcomplete.com/contact-us/\">\n    Set up an engineering consultation\n  </a>\n</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/async-exceptions-haskell-rust/",
        "slug": "async-exceptions-haskell-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Async Exceptions in Haskell, and Rust",
        "description": "Haskell and Rust both support asynchronous programming. Haskell includes a feature called async exceptions, which allow cancelling threads, but they come at a cost. See how Rust does the same job, and the relative trade-offs of each approach.",
        "updated": null,
        "date": "2019-12-24T05:56:21Z",
        "year": 2019,
        "month": 12,
        "day": 24,
        "taxonomies": {
          "tags": [
            "haskell",
            "rust"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/async-exceptions-haskell-rust/",
        "components": [
          "blog",
          "async-exceptions-haskell-rust"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "async-exceptions-in-haskell",
            "permalink": "https://tech.fpcomplete.com/blog/async-exceptions-haskell-rust/#async-exceptions-in-haskell",
            "title": "Async exceptions in Haskell",
            "children": []
          },
          {
            "level": 2,
            "id": "canceled-futures-in-rust",
            "permalink": "https://tech.fpcomplete.com/blog/async-exceptions-haskell-rust/#canceled-futures-in-rust",
            "title": "Canceled futures in Rust",
            "children": []
          },
          {
            "level": 2,
            "id": "my-analysis",
            "permalink": "https://tech.fpcomplete.com/blog/async-exceptions-haskell-rust/#my-analysis",
            "title": "My analysis",
            "children": []
          },
          {
            "level": 2,
            "id": "further-reading",
            "permalink": "https://tech.fpcomplete.com/blog/async-exceptions-haskell-rust/#further-reading",
            "title": "Further reading",
            "children": []
          }
        ],
        "word_count": 2127,
        "reading_time": 11,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/",
            "title": "Monads and GATs in nightly Rust"
          },
          {
            "permalink": "https://tech.fpcomplete.com/rust/",
            "title": "FP Complete Rust"
          }
        ]
      },
      {
        "relative_path": "blog/serverless-rust-wasm-cloudflare.md",
        "colocated_path": null,
        "content": "<p>I run a website for <a href=\"https://www.haskellers.com/\">Haskellers</a>. People are able to put their email addresses on this website for others to contact them. These email addresses were historically protected by Mailhide, which would use a Captcha to prevent bots from scraping that information. Unfortunately, Mailhide was shut down. And from there, <a href=\"https://www.sortasecret.com\">Sorta Secret</a> was born.</p>\n<p>Sorta Secret provides a pretty simple service, as well as a simple API. Using the <a href=\"https://www.sortasecret.com/v1/encrypt?secret=The+rebel+base+is+on+Hoth\">encrypt endpoint</a>, you can get an encrypted version of your secret. Using the <a href=\"https://www.sortasecret.com/v1/show?secret=a7a62a355f5a93b21b2490ee9ee3094a9c684d26780ba31e3c15a6e7b42c607bd41e1d49a4b24f5bdbef402df9fde145f4\">show endpoint</a>, you can get a webpage that will decrypt the information after passing a Recaptcha. That's basically it. You can <a href=\"https://www.haskellers.com/user/snoyberg\">go to my Haskellers profile</a> and click &quot;Reveal email address&quot; to see this in action.</p>\n<p>I originally wrote Sorta Secret a year ago in Rust using <code>actix-web</code> and deployed it, like most services we write at FP Complete, to our Kubernetes cluster. When Rust 1.39 was released with <code>async</code>/<code>await</code> support, and then Hyper 0.13 was released using that support, I decided I wanted to try rewriting against Hyper. But that's a story for another time.</p>\n<p>After that, more out of curiosity than anything else, I decided to rewrite it as a serverless application using Cloudflare Workers, a serverless platform that supports Rust and WASM. To quote the <a href=\"https://www.cloudflare.com/learning/serverless/what-is-serverless/\">Cloudflare page on the topic</a>:</p>\n<blockquote>\n<p>Serverless computing is a method of providing backend services on an as-used basis. 
A Serverless provider allows users to write and deploy code without the hassle of worrying about the underlying infrastructure.</p>\n</blockquote>\n<p>This post will describe my experiences doing this, what I thought worked well (and not so well), and why you may consider doing something like this yourself.</p>\n<h2 id=\"advantages\">Advantages</h2>\n<p>Let me start off with the major advantages of using Cloudflare Workers over my previous setup:</p>\n<ul>\n<li><strong>Geographic distribution</strong> A typical hosting setup, including the Kubernetes cluster I deploy to, is set up in a single geographic location. For an embarrassingly parallel application like this, having your code run in all of Cloudflare's data centers is pretty awesome.</li>\n<li><strong>Setup time/cost</strong> I already have access to a Kubernetes cluster. But for someone who doesn't already have a preexisting server or cluster to deploy their service, the time to set up a secure, high availability deployment environment, and the cost of running these machines, can be high. I'm currently paying $0 to host this service on Cloudflare.</li>\n<li><strong>Ease of testing/deployment</strong> The Cloudflare team has done a great job with the Wrangler tool. Deploying an update is a call to <code>wrangler publish</code>. I can do testing with <code>wrangler preview --watch</code>. This is pretty awesome. And the publishing is <em>fast</em>.</li>\n</ul>\n<h2 id=\"disadvantages\">Disadvantages</h2>\n<p>There are definitely some hurdles to overcome along the way.</p>\n<ul>\n<li><strong>Lack of examples</strong> I found it very difficult to get even basic things working correctly. I'm hoping this post helps with that.</li>\n<li><strong>WASM libraries didn't work perfectly</strong> Most libraries designed to help with WASM are targeted at the browser. In a Cloudflare Worker, for example, there's no <a href=\"https://developer.mozilla.org/en-US/docs/Web/API/Window\"><code>Window</code></a>. 
Instead, to call <a href=\"https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API\"><code>fetch</code></a>, I needed a <a href=\"https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerGlobalScope\"><code>ServiceWorkerGlobalScope</code></a>.</li>\n<li><strong>Slower dev cycle than I like</strong> While <code>wrangler preview</code> is awesome, it still takes quite a bit of time to see a change. Each code change requires recompiling the Rust code, packaging up the bundle, sending it to Cloudflare, and refreshing the page. Especially since I was using compile-time-checked HTML templates, this ended up being pretty slow.</li>\n<li><strong>Secrets management</strong> Unlike Kubernetes, there's no built-in secrets management in Cloudflare Workers. Someone on the Cloudflare team advised me that I could use their key/value store for secrets. I elected to be really dumb and compile the secrets (encryption key and Recaptcha secret key) directly into the executable.</li>\n<li><strong>Difficult debugging</strong> It seems that the combination of <code>async</code> code, panics, and the bridge to JavaScript results in error messages getting completely dropped, which makes debugging very difficult.</li>\n</ul>\n<p>That's enough motivation and demotivation for now. Let's see how this all fits together.</p>\n<h2 id=\"getting-started\">Getting started</h2>\n<p>The Cloudflare team has put together a very nice command line tool, <code>wrangler</code>, which happens to be written in Rust. Getting started with a brand new Cloudflare Workers Rust project is nice and easy; you don't even need to set up an account or provide any credentials.</p>\n<pre><code>cargo install wrangler\nwrangler generate wasm-worker https:&#x2F;&#x2F;github.com&#x2F;cloudflare&#x2F;rustwasm-worker-template.git\ncd wasm-worker\nwrangler preview --watch\n</code></pre>\n<p>The problem is that this template doesn't do much. 
There's a Rust function called <code>greet</code> that returns a <code>String</code>. That Rust function is exposed to the JavaScript world via <code>wasm-bindgen</code>. There's a small JavaScript wrapper that imports that function and calls it when a new request comes in. However, we want to do a lot more in this application:</p>\n<ul>\n<li>Perform routing inside Rust</li>\n<li>Perform async operations (specifically making requests to the Recaptcha server)</li>\n<li>Generate more than just 200 success status responses</li>\n<li>Parse submitted JSON bodies</li>\n<li>Use HTML templating</li>\n</ul>\n<p>So let's dive down the rabbit hole!</p>\n<h2 id=\"wasm-bindgen\">wasm-bindgen</h2>\n<p>I've played with WASM a bit before this project, but not much. Coming up to speed with <code>wasm-bindgen</code> was honestly pretty difficult for me, and involved a lot of trial-and-error. Ultimately, I discovered that I could probably get away with one of two approaches for the binding layer between the JavaScript and Rust worlds:</p>\n<ol>\n<li>Have a thin wrapper in JavaScript that produces simple JSON objects, and then use <code>serde</code> inside Rust to turn those into nice <code>struct</code>s</li>\n<li>Use the <code>Request</code> and <code>Response</code> types in <code>web-sys</code> directly</li>\n</ol>\n<p>I discovered the first approach first, and went with it. I briefly played with moving over to the second approach, but it involved a lot of overhaul to the code, so I ended up sticking with approach 1. Those more skilled with WASM may disagree with this choice. 
Anyway, here's what the JavaScript half of this looks like:</p>\n<pre data-lang=\"javascript\" class=\"language-javascript \"><code class=\"language-javascript\" data-lang=\"javascript\">const { respond_wrapper } = wasm_bindgen;\nawait wasm_bindgen(wasm)\n\nvar body;\nif (request.body) {\n    body = await request.text();\n} else {\n    body = &quot;&quot;;\n}\n\nvar headers = {};\nfor(var key of request.headers.keys()) {\n    headers[key] = request.headers.get(key);\n}\n\nconst response = await respond_wrapper({\n    method: request.method,\n    headers: headers,\n    url: request.url,\n    body: body,\n})\nreturn new Response(response.body, {\n    status: response.status,\n    headers: response.headers,\n})\n</code></pre>\n<p>Some interesting things to note here:</p>\n<ul>\n<li>I'm pulling in the entire request body as a string. That works for our case (the only request body is form data), but isn't intelligent enough in general.</li>\n<li>The <code>respond_wrapper</code> itself is returning a <code>Promise</code> on the JavaScript side. We're about to see some <code>wasm-bindgen</code> awesomeness.</li>\n<li>There's not much work to convert between the simplified JSON values and the real JavaScript objects.</li>\n</ul>\n<p>Now let's look at the Rust side of the equation. 
First we've got our <code>Request</code> and <code>Response</code> structs with appropriate <code>serde</code> deriving:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Deserialize)]\npub struct Request {\n    method: String,\n    headers: HashMap&lt;String, String&gt;,\n    url: String,\n    body: String, &#x2F;&#x2F; should really be Vec&lt;u8&gt;, I&#x27;m cheating here\n}\n\n#[derive(Serialize)]\npub struct Response {\n    status: u16,\n    headers: HashMap&lt;String, String&gt;,\n    body: String,\n}\n</code></pre>\n<p>Within the Rust world we want to deal exclusively with these types, and so our application lives inside a function with signature:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">async fn respond(req: Request) -&gt; Result&lt;Response, Box&lt;dyn std::error::Error&gt;&gt;\n</code></pre>\n<p>However, we can't export that to the JavaScript world. We need to ensure that our input and output types are things <code>wasm-bindgen</code> can handle. And to achieve that, we have a wrapper function that deals with the <code>serde</code> conversions and displaying the errors:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[wasm_bindgen]\npub async fn respond_wrapper(req: JsValue) -&gt; Result&lt;JsValue, JsValue&gt; {\n    let req = req.into_serde().map_err(|e| e.to_string())?;\n    let res = respond(req).await.map_err(|e| e.to_string())?;\n    let res = JsValue::from_serde(&amp;res).map_err(|e| e.to_string())?;\n    Ok(res)\n}\n</code></pre>\n<p>A <code>wasm-bindgen</code> function can accept <code>JsValue</code>s (and lots of other types), and can return a <code>Result&lt;JsValue, JsValue&gt;</code>. In the case of an <code>Err</code> return, we'll get a runtime exception in the JavaScript world. We make our function <code>pub</code> so it can be exported. 
And by marking it <code>async</code>, we generate a <code>Promise</code> on the JavaScript side that can be <code>await</code>ed.</p>\n<p>Other than that, it's some fairly standard <code>serde</code> stuff: converting from a <code>JsValue</code> into a <code>Request</code> via its <code>Deserialize</code> and converting a <code>Response</code> into a <code>JsValue</code> via its <code>Serialize</code>. In between those, we call our actual <code>respond</code> function, and map all error values into a <code>String</code> representation.</p>\n<h2 id=\"routing\">Routing</h2>\n<p>Our <code>respond</code> function receives a <code>Request</code>, and that <code>Request</code> has a <code>url: String</code> field. I was able to pull in the <code>url</code> crate directly, and then use its <code>Url</code> struct for easier processing:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let url: url::Url = req.url.parse()?;\n</code></pre>\n<p>Also, I wanted all requests to land on the <code>www.sortasecret.com</code> subdomain, so I added a bare domain redirect:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn redirect_to_www(mut url: url::Url) -&gt; Result&lt;Response, url::ParseError&gt; {\n    url.set_host(Some(&quot;www.sortasecret.com&quot;))?;\n    let mut headers = HashMap::new();\n    headers.insert(&quot;Location&quot;.to_string(), url.to_string());\n    Ok(Response {\n        status: 307,\n        body: format!(&quot;Redirecting to {}&quot;, url),\n        headers,\n    })\n}\n\nif url.host_str() == Some(&quot;sortasecret.com&quot;) {\n    return Ok(redirect_to_www(url)?);\n}\n</code></pre>\n<p>This is already giving us some nice type safety guarantees from the Rust world, which I'm very happy to take advantage of. Next comes the routing itself. 
If I were more of a purist, I would make sure I was checking the request methods correctly, returning 405 &quot;Method Not Allowed&quot; responses in some cases, and so on. Instead, I went for a very hacky implementation:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">Ok(match (req.method == &quot;GET&quot;, url.path()) {\n    (true, &quot;&#x2F;&quot;) =&gt; html(200, server::homepage_html()?),\n    (true, &quot;&#x2F;v1&#x2F;script.js&quot;) =&gt; js(200, server::script_js()?),\n    (false, &quot;&#x2F;v1&#x2F;decrypt&quot;) =&gt; {\n        let (status, body) = server::decrypt(&amp;req.body).await;\n        html(status, body)\n    }\n    (true, &quot;&#x2F;v1&#x2F;encrypt&quot;) =&gt; {\n        let (status, body) = server::encrypt(&amp;req.url.parse()?)?;\n        html(status, body)\n    }\n    (true, &quot;&#x2F;v1&#x2F;show&quot;) =&gt; {\n        let (status, body) = server::show_html(&amp;req.url.parse()?)?;\n        html(status, body)\n    }\n    (_method, path) =&gt; html(404, format!(&quot;Not found: {}&quot;, path)),\n})\n</code></pre>\n<p>This relies on some helper functions:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn html(status: u16, body: String) -&gt; Response {\n    let mut headers = HashMap::new();\n    headers.insert(&quot;Content-Type&quot;.to_string(), &quot;text&#x2F;html; charset=utf-8&quot;.to_string());\n    Response { status, headers, body }\n}\n\nfn js(status: u16, body: String) -&gt; Response {\n    let mut headers = HashMap::new();\n    headers.insert(&quot;Content-Type&quot;.to_string(), &quot;text&#x2F;javascript; charset=utf-8&quot;.to_string());\n    Response { status, headers, body }\n}\n</code></pre>\n<p>Let's dig into some of these route handlers.</p>\n<h2 id=\"templating\">Templating</h2>\n<p>I'm using the <code>askama</code> crate for templating. This provides compile-time-parsed templates. 
For me, this is great because:</p>\n<ul>\n<li>Errors are caught at compile time</li>\n<li>Fewer files need to be shipped to the deployed system</li>\n</ul>\n<p>The downside is you have to go through a complete compile/link step before you can see your changes.</p>\n<p>I'm happy to report that there were absolutely no issues using <code>askama</code> on this project. It compiled for WASM without any changes to the code.</p>\n<p>I have just one HTML template, which I use for both the homepage and the <code>/v1/show</code> route. There is only one variable in the template: the encrypted secret value. In the case of the homepage, we use some default message. For <code>/v1/show</code>, we use the value provided by the query string. Let's look at the entirety of the homepage logic:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Template)]\n#[template(path = &quot;homepage.html&quot;)]\nstruct Homepage {\n    secret: String,\n}\n\nfn make_homepage(keypair: &amp;Keypair) -&gt; Result&lt;String, Box&lt;dyn std::error::Error&gt;&gt; {\n    Ok(Homepage {\n        secret: keypair.encrypt(&quot;The secret message has now been decrypted, congratulations!&quot;)?,\n    }.render()?)\n}\n</code></pre>\n<p>Virtually all of the work is handled for us by <code>askama</code> itself. I defined a <code>struct</code>, added a few attributes, and then called <code>render()</code> on the value. Easy! 
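For intuition: a rendered template ultimately boils down to string interpolation over the declared fields; what `askama` adds is doing the template parsing and field checking at compile time. A toy stand-in using plain `format!` (purely illustrative, and not askama's actual mechanism or markup):

```rust
// Toy stand-in for the template render: interpolate the single template
// variable (the encrypted secret) into an HTML fragment. Illustration only.
fn render_homepage(secret: &str) -> String {
    format!("<p data-secret=\"{}\">Decrypting...</p>", secret)
}

fn main() {
    let html = render_homepage("0123abcd");
    assert!(html.contains("0123abcd"));
    println!("{}", html);
}
```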
I won't bore you with the details of the HTML here, but if you want, feel free to <a href=\"https://github.com/snoyberg/sortasecret/blob/219b29657fc296a08cdfbed06e049891633ab83b/templates/homepage.html\">check out homepage.html on Github</a>.</p>\n<p>The story for <code>script.js</code> is similar, except it takes the Recaptcha site key as a variable.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Template)]\n#[template(path = &quot;script.js&quot;, escape = &quot;none&quot;)]\nstruct Script&lt;&#x27;a&gt; {\n    site: &amp;&#x27;a str,\n}\n\npub(crate) fn script_js() -&gt; Result&lt;String, askama::Error&gt; {\n    Script {\n        site: super::secrets::RECAPTCHA_SITE,\n    }.render()\n}\n</code></pre>\n<h2 id=\"cryptography\">Cryptography</h2>\n<p>When I originally wrote Sorta Secret using <code>actix-web</code>, I used the <a href=\"https://crates.io/crates/sodiumoxide\">sodiumoxide</a> crate to access the sealedbox approach within <code>libsodium</code>. This provides a public key-based method of encrypting a secret. Unfortunately, <code>sodiumoxide</code> didn't compile trivially with WASM, which isn't surprising given that it's a binding to a C library. It may have been possible to brute force my way through this, but I decided to take a different approach.</p>\n<p>Instead, I moved over to the pure-Rust <a href=\"https://crates.io/crates/cryptoxide\">cryptoxide</a> crate. It doesn't provide the same high-level APIs as <code>sodiumoxide</code>, but it does provide <code>chacha20poly1305</code>, which is more than enough to implement symmetric key encryption.</p>\n<p>This meant I also needed to generate some random values to create nonces, which was my first debugging nightmare. 
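Some context on why random values matter here: an AEAD like `chacha20poly1305` requires a unique nonce for every encryption under the same key, and reusing one destroys the security guarantees. As a std-only sketch of the uniqueness requirement, here's one way to derive distinct 12-byte nonces from a counter (an illustration only; the real code draws random nonces instead):

```rust
// Derive distinct 12-byte nonces from a monotonically increasing counter.
// Illustration only: the actual project uses randomly generated nonces.
fn nonce_from_counter(counter: u64) -> [u8; 12] {
    let mut nonce = [0u8; 12];
    // Put the counter in the last 8 bytes, big-endian.
    nonce[4..].copy_from_slice(&counter.to_be_bytes());
    nonce // never reuse a nonce under the same key
}

fn main() {
    let a = nonce_from_counter(1);
    let b = nonce_from_counter(2);
    assert_ne!(a, b); // distinct counters yield distinct nonces
    println!("{:?}", a);
}
```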
I used the <code>getrandom</code> crate to generate the random values, and initially added the dependency as:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">getrandom = &quot;0.1.13&quot;\n</code></pre>\n<p>I naively assumed that it would automatically turn on the correct set of features to use WASM-relevant random data sources. Unfortunately, that wasn't the case. Instead, the calls to <code>getrandom</code> would simply <code>panic</code> about an unsupported backend. And while Cloudflare's preview system overall gives a great experience with error messages, the combination of a panic and a <code>Promise</code> meant that the exception was lost. By temporarily turning off the <code>async</code> bits and applying some other hacky workarounds, I eventually found the problem, and fixed it by replacing the above line with:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">getrandom = { version = &quot;0.1.13&quot;, features = [&quot;wasm-bindgen&quot;] }\n</code></pre>\n<p>If you're curious, you can <a href=\"https://github.com/snoyberg/sortasecret/blob/219b29657fc296a08cdfbed06e049891633ab83b/keypair/src/lib.rs#L97\">check out the encrypt and decrypt methods on Github</a>. One pleasant finding was that, once I got the code compiling, all of the tests passed the first time, which is always an experience I strive for in strongly typed languages.</p>\n<h2 id=\"parsing-query-strings\">Parsing query strings</h2>\n<p>Both the <code>/v1/encrypt</code> and <code>/v1/show</code> endpoints take a single query string parameter, <code>secret</code>. In the case of <code>encrypt</code>, this is a plaintext value. In the case of <code>show</code>, it's the encrypted ciphertext. However, they both parse initially to a <code>String</code>, so I used the same (poorly named) <code>struct</code> to handle parsing both of them. 
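Under the hood, query string parsing is just splitting on `&` and `=`. Here's a std-only illustration of looking up `secret` by hand (percent-decoding is deliberately omitted from this sketch; `serde_urlencoded`, used for real in the code that follows, handles it properly):

```rust
// Minimal query-string lookup: split pairs on '&', then split each pair
// on the first '='. Percent-decoding is intentionally omitted.
fn query_param<'a>(query: &'a str, key: &str) -> Option<&'a str> {
    query.split('&').find_map(|pair| {
        let (k, v) = pair.split_once('=')?;
        if k == key { Some(v) } else { None }
    })
}

fn main() {
    assert_eq!(query_param("secret=abc&x=1", "secret"), Some("abc"));
    assert_eq!(query_param("x=1", "secret"), None);
}
```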
If you remember from before, I already parsed the requested URL into a <code>url::Url</code> value. Using <code>serde_urlencoded</code> makes it easy to throw all of this together:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Deserialize, Debug)]\nstruct EncryptRequest {\n    secret: String,\n}\n\nimpl EncryptRequest {\n    fn from_url(url: &amp;url::Url) -&gt; Option&lt;Self&gt; {\n        serde_urlencoded::from_str(url.query()?).ok()\n    }\n}\n</code></pre>\n<p>Using this from the <code>encrypt</code> endpoint looks like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub(crate) fn encrypt(url: &amp;url::Url) -&gt; Result&lt;(u16, String), Box&lt;dyn std::error::Error&gt;&gt; {\n    match EncryptRequest::from_url(url) {\n        Some(encreq) =&gt; {\n            let keypair = make_keypair()?;\n            let encrypted = keypair.encrypt(&amp;encreq.secret)?;\n            Ok((200, encrypted))\n        }\n        None =&gt; Ok((400, &quot;Invalid parameters&quot;.into())),\n    }\n}\n</code></pre>\n<p>Feel free to <a href=\"https://github.com/snoyberg/sortasecret/blob/219b29657fc296a08cdfbed06e049891633ab83b/src/server.rs#L19\">check out the <code>show_html</code> endpoint too</a>.</p>\n<h2 id=\"parsing-json-request-body\">Parsing JSON request body</h2>\n<p>On the homepage and <code>/v1/show</code> page, we load up the <code>script.js</code> file to talk to the Recaptcha servers, get a token, and then send the encrypted secrets and that token to the <code>/v1/decrypt</code> endpoint. This data is sent in a <code>PUT</code> request with a JSON request body. 
We call this a <code>DecryptRequest</code>, and once again we can use <code>serde</code> to handle all of the parsing:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Deserialize)]\nstruct DecryptRequest {\n    token: String,\n    secrets: Vec&lt;String&gt;,\n}\n\npub(crate) async fn decrypt(body: &amp;str) -&gt; (u16, String) {\n    let decreq: DecryptRequest = match serde_json::from_str(body) {\n        Ok(x) =&gt; x,\n        Err(_) =&gt; return (400, &quot;Invalid request&quot;.to_string()),\n    };\n\n    ...\n}\n</code></pre>\n<p>At the beginning of this post, I mentioned the possibility of using the original JavaScript <code>Request</code> value instead of creating a simplified JSON representation of it. If we did so, we could call out to the <a href=\"https://developer.mozilla.org/en-US/docs/Web/API/Body/json\"><code>json</code> method</a> instead. As it stands now, converting the request body to a <code>String</code> and parsing with <code>serde</code> works just fine.</p>\n<p>I haven't measured it myself, but there are certainly performance and code size trade-offs to weigh when deciding which approach is best.</p>\n<h2 id=\"outgoing-http-requests\">Outgoing HTTP requests</h2>\n<p>The final major hurdle was making the outgoing HTTP request to the Recaptcha server. When I did my Hyper implementation of Sorta Secret, I used the <a href=\"https://crates.io/crates/surf\">surf crate</a>, which seemed at first to have WASM support. Unfortunately, I ended up running into two major (and difficult to debug) issues trying to use Surf for the WASM part of this:</p>\n<ul>\n<li>\n<p>The Surf code assumes that there will be a <code>Window</code>, and <code>panic</code>s if there isn't. Within Cloudflare, there isn't a Window available. Instead, I had to use a <code>ServiceWorkerGlobalScope</code>. Debugging this was again tricky because of the dropped error messages. 
But I eventually fixed this by tweaking the Surf codebase with a function like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub fn worker_global_scope() -&gt; Option&lt;web_sys::ServiceWorkerGlobalScope&gt; {\n    js_sys::global().dyn_into::&lt;web_sys::ServiceWorkerGlobalScope&gt;().ok()\n}\n</code></pre>\n</li>\n<li>\n<p>However, once I did this, I kept getting 400 invalid parameter responses from the Recaptcha servers. I eventually spun up a local server to dump all request information, used <code>ngrok</code> to make that service available to Cloudflare, and pointed the code at that <code>ngrok</code> hostname. I found out that it wasn't sending any request body at all.</p>\n</li>\n</ul>\n<p>I dug through the codebase a bit, and eventually found <a href=\"https://github.com/http-rs/surf/issues/26\">issue #26</a>, which demonstrated that body uploads weren't supported yet. I considered trying to patch the library to add that support, but after a few initial attempts it looks like that will require some deeper modifications than I was ready to attempt.</p>\n<p>So instead, I decided to go the opposite direction, and directly call the <code>fetch</code> API myself via the <code>web-sys</code> crate. This involves these logic steps:</p>\n<ul>\n<li>Create a <code>RequestInit</code> value</li>\n<li>Fill it with the appropriate request method and form data</li>\n<li>Create a <code>Request</code> from that <code>RequestInit</code> and the Recaptcha URL</li>\n<li>Get the global <code>ServiceWorkerGlobalScope</code></li>\n<li>Call <code>fetch</code> on it</li>\n<li>Convert some <code>Promise</code>s into <code>Future</code>s and <code>.await</code> them</li>\n<li>Use serde to convert the <code>JsValue</code> containing the JSON response body into a <code>VerifyResponse</code></li>\n</ul>\n<p>Got that? Great! 
Putting all of that together looks like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use web_sys::{Request, RequestInit, Response};\nlet mut opts = RequestInit::new();\nopts.method(&quot;POST&quot;);\nlet form_data = web_sys::FormData::new()?; &#x2F;&#x2F; web-sys should really require mut here...\nform_data.append_with_str(&quot;secret&quot;, body.secret)?;\nform_data.append_with_str(&quot;response&quot;, &amp;body.response)?;\nopts.body(Some(&amp;form_data));\nlet request = Request::new_with_str_and_init(\n    &quot;https:&#x2F;&#x2F;www.google.com&#x2F;recaptcha&#x2F;api&#x2F;siteverify&quot;,\n    &amp;opts,\n)?;\n\nrequest.headers().set(&quot;User-Agent&quot;, &quot;sortasecret&quot;)?;\n\nlet global = worker_global_scope().ok_or(VerifyError::NoGlobal)?;\nlet resp_value = JsFuture::from(global.fetch_with_request(&amp;request)).await?;\nlet resp: Response = resp_value.dyn_into()?;\nlet json = JsFuture::from(resp.json()?).await?;\nlet verres: VerifyResponse = json.into_serde()?;\n\nOk(verres)\n</code></pre>\n<p>And with that, all was well!</p>\n<h2 id=\"surprises\">Surprises</h2>\n<p>I've called out a few of these above, but let me collect some of my surprise points while implementing this.</p>\n<ul>\n<li>The lack of error messages during the <code>panic</code> and <code>async</code> combo was a real killer. Maybe there's a way to improve that situation that I haven't figured out yet.</li>\n<li>I was pretty surprised that <code>getrandom</code> would <code>panic</code> without the correct feature set.</li>\n<li>I was also surprised that Surf silently dropped all form data, and implicitly expected a <code>Window</code> context that wasn't there.</li>\n</ul>\n<p>On the Cloudflare side itself, the only real hurdles I hit were when it came to deploying to my own domain name instead of a <code>workers.dev</code> domain. The biggest gotcha was that I needed to fill in a dummy A record. 
I eventually <a href=\"https://community.cloudflare.com/t/setup-workers-on-personal-domain/88012\">found an explanation here</a>. I got more confused during the debugging of this due to DNS propagation issues, but that's entirely my own fault.</p>\n<p>Also, I shot myself in the foot with the <code>route</code> syntax in the <code>wrangler.toml</code>. I had initially put <code>www.sortasecret.com</code>, which meant it used workers to handle the homepage, but passed off requests for all other paths to my original <code>actix-web</code> service. I changed my <code>route</code> to be:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">route = &quot;*sortasecret.com&#x2F;*&quot;\n</code></pre>\n<p>I don't really blame the Cloudflare docs for that; it's pretty well spelled out, but I did overlook it.</p>\n<p>Once all of that was in place, it was wonderful to have access to the full suite of domain management tools for Cloudflare, such as HTTP to HTTPS redirection, and the ability to set virtual CNAMEs on the bare domain name. This made it trivial to set up my redirect from <code>sortasecret.com</code> to <code>www.sortasecret.com</code>.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I figured this rewrite would be a long one, and it was. I was unfamiliar with basically all of the technologies I ended up using: <code>wasm-bindgen</code>, Cloudflare Workers, and <code>web-sys</code>. 
Given all that, I'm not disappointed with the time investment.</p>\n<p>If I were to do this again, I'd probably factor out a significant number of common components to a <code>cloudflare</code> crate I could use, and provide things like:</p>\n<ul>\n<li>More fully powered <code>Request</code> and <code>Response</code> types</li>\n<li>A wrapper function to promote an <code>async fn (Request) -&gt; Result&lt;Response, Box&lt;dyn Error&gt;&gt;</code> into something that can be exported by <code>wasm-bindgen</code></li>\n<li>Helper functions for the <code>fetch</code> API</li>\n<li>Possibly wrap some of the other JavaScript and WASM APIs around things like JSON and crypto (though <code>cryptoxide</code> worked great for me)</li>\n</ul>\n<p>With those tools in place, I would definitely consider using Cloudflare Workers like this again. The cost and maintenance benefits are great, the performance promises to be great, and I get to keep the safety guarantees I love about Rust.</p>\n<p>Are others using Cloudflare Workers with Rust? Interested in it? Please <a href=\"https://twitter.com/snoyberg\">let me know on Twitter</a>.</p>\n<p>And if your company is considering options in the DevOps, serverless, or Rust space, please consider reaching out to our team to find out how we can help you.</p>\n<p class=\"text-center\">\n  <a class=\"button-coral\" href=\"https://www.fpcomplete.com/contact-us/\">\n    Set up an engineering consultation\n  </a>\n</p>\n<p><a href=\"https://tech.fpcomplete.com/blog/\">Read more from our blog</a> | <a href=\"https://tech.fpcomplete.com/rust/\">Rust</a> | <a href=\"https://tech.fpcomplete.com/platformengineering/\">DevOps</a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/",
        "slug": "serverless-rust-wasm-cloudflare",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Serverless Rust using WASM and Cloudflare",
        "description": "I recently rewrote a standard Kubernetes-deployed web service to use Cloudflare Workers. This post will explain how this process went, and how and why you may want to do the same.",
        "updated": null,
        "date": "2019-12-19T15:18:00Z",
        "year": 2019,
        "month": 12,
        "day": 19,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/serverless-rust-wasm-cloudflare/",
        "components": [
          "blog",
          "serverless-rust-wasm-cloudflare"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "advantages",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#advantages",
            "title": "Advantages",
            "children": []
          },
          {
            "level": 2,
            "id": "disadvantages",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#disadvantages",
            "title": "Disadvantages",
            "children": []
          },
          {
            "level": 2,
            "id": "getting-started",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#getting-started",
            "title": "Getting started",
            "children": []
          },
          {
            "level": 2,
            "id": "wasm-bindgen",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#wasm-bindgen",
            "title": "wasm-bindgen",
            "children": []
          },
          {
            "level": 2,
            "id": "routing",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#routing",
            "title": "Routing",
            "children": []
          },
          {
            "level": 2,
            "id": "templating",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#templating",
            "title": "Templating",
            "children": []
          },
          {
            "level": 2,
            "id": "cryptography",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#cryptography",
            "title": "Cryptography",
            "children": []
          },
          {
            "level": 2,
            "id": "parsing-query-strings",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#parsing-query-strings",
            "title": "Parsing query strings",
            "children": []
          },
          {
            "level": 2,
            "id": "parsing-json-request-body",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#parsing-json-request-body",
            "title": "Parsing JSON request body",
            "children": []
          },
          {
            "level": 2,
            "id": "outgoing-http-requests",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#outgoing-http-requests",
            "title": "Outgoing HTTP requests",
            "children": []
          },
          {
            "level": 2,
            "id": "surprises",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#surprises",
            "title": "Surprises",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/serverless-rust-wasm-cloudflare/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 3627,
        "reading_time": 19,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/",
            "title": "Rust: Of course it compiles, right?"
          }
        ]
      },
      {
        "relative_path": "blog/casa-and-stack.md",
        "colocated_path": null,
"content": "<p>This post is aimed at Haskellers who are roughly aware of how build\ninfrastructure works for Haskell.</p>\n<p>But the topic may have a general audience outside of the Haskell\ncommunity, so this post will briefly describe each part of the\ninfrastructure from the bottom up: compiling modules, building and\nconfiguring packages, and downloading and storing those packages\nonline.</p>\n<p>This post is a semi-continuation from <a href=\"https://tech.fpcomplete.com/blog/casa/\">last week's post on\nCasa</a>.</p>\n<h2 id=\"ghc\">GHC</h2>\n<p>GHC is the de facto standard Haskell compiler. It knows how to load\npackages and compile files, and produce binary libraries and\nexecutables. It has a small database of installed packages, with a\nsimple command-line interface for registering and querying them:</p>\n<pre><code>$ ghc-pkg register yourpackage\n$ ghc-pkg list\n</code></pre>\n<p>Apart from that, it doesn't know anything else about how to build\npackages or where to get them.</p>\n<h2 id=\"cabal\">Cabal</h2>\n<p>Cabal is the library which builds Haskell packages from a .cabal file\npackage description, which consists of a name, version, package\ndependencies and build flags. 
To build a Haskell package, you create a\nfile (typically <code>Setup.hs</code>), with contents roughly like:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">import Distribution.Simple -- from the Cabal library\nmain = defaultMain\n</code></pre>\n<p>This (referred to as a &quot;Simple&quot; build) creates a program that you can\nrun to configure, build and install your package.</p>\n<pre data-lang=\"shell\" class=\"language-shell \"><code class=\"language-shell\" data-lang=\"shell\">$ ghc Setup.hs\n$ .&#x2F;Setup configure # Checks dependencies via ghc-pkg\n$ .&#x2F;Setup build # Compiles the modules with GHC\n$ .&#x2F;Setup install # Runs the register step via ghc-pkg\n</code></pre>\n<p>This file tends to be included in the source repository of your\npackage. And modern package build tools tend to create this file\nautomatically if it doesn't already exist. The reason the build system\nworks like this is so that you can have custom build setups: you can\nmake pre/post build hooks and things like that.</p>\n<p>But the Cabal library doesn't download packages or manage projects\nconsisting of multiple packages, etc.</p>\n<h2 id=\"hackage\">Hackage</h2>\n<p><a href=\"https://hackage.haskell.org/\">Hackage</a> is an online archive of\nversioned package tarballs. Anyone can upload packages to this\narchive, where the package must have a version associated with it, so\nthat you can later download a specific instance of the package that\nyou want, e.g. <code>text-1.2.4.0</code>. Each package is restricted to a set of\nmaintainers (such as the author) who are able to upload to it.</p>\n<p>The Hackage admins and authors are able to revise the .cabal package\ndescription without publishing a new version, and regularly do. 
These\nnew revisions supersede previous revisions of the cabal files, while\nthe original revisions still remain available if specifically\nrequested (if supported by tooling being used).</p>\n<h2 id=\"cabal-install\">cabal-install</h2>\n<p>There is a program called <code>cabal-install</code> which is able to download\npackages from Hackage automatically and does some constraint solving\nto produce a build plan. A <em>build plan</em> is when the tool picks what\nversions of package dependencies your package needs to build.</p>\n<p>It might look like:</p>\n<ul>\n<li>base-4.12.0.0</li>\n<li>bytestring-0.10.10.0</li>\n<li>your-package-0.0</li>\n</ul>\n<p>Version bounds (&lt;2.1 and &gt;1.3) are used by <code>cabal-install</code> as\nheuristics to do the solving. It isn't actually known whether any of\nthese packages build together, or that the build plan will\nsucceed. It's a best guess.</p>\n<p>Finally, once it has a build plan, it uses both GHC and the Cabal\nlibrary to build Haskell packages, by creating the aforementioned\n<code>Setup.hs</code> automatically if it doesn't already exist, and running the\n<code>./Setup configure</code>, build, etc. step.</p>\n<h2 id=\"stackage\">Stackage</h2>\n<p>As mentioned, the build plans produced by <code>cabal-install</code> are a best\nguess based on constraint solving of version bounds. There is a matrix\nof possible build plans, and the particular one you get may be\nentirely novel, that no one has ever tried before. Some call this\n&quot;version hell&quot;.</p>\n<p>To rectify this situation, <a href=\"https://www.stackage.org/\">Stackage</a> is a\n&quot;stable Hackage&quot; service, which\n<a href=\"https://www.fpcomplete.com/blog/2014/05/stackage-server\">publishes known subsets of Hackage that are <em>known</em> to build and pass tests together</a>,\ncalled snapshots. There are nightly snapshots published, and long-term\nsnapshots called lts-1.0, lts-2.2, etc. 
which tend to steadily roll\nalong with the GHC release cycle. These LTS releases are intended to\nbe what people put in source control for their projects.</p>\n<p>The Stackage initiative has been running since it was announced\n<a href=\"https://www.yesodweb.com/blog/2012/11/stable-vetted-hackage\">in 2012</a>.</p>\n<h2 id=\"stack\">stack</h2>\n<p>The <code>stack</code> program was created to specifically make reproducible\nbuild plans based on Stackage. Authors include a <code>stack.yaml</code> file in their\nproject root, which looks like this:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">snapshot: lts-1.2\npackages: [mypackage1, mypackage2]\n</code></pre>\n<p>This tells <code>stack</code> that:</p>\n<ol>\n<li>We want to use the <code>lts-1.2</code> snapshot, therefore any package\ndependencies that we need for this project will come from there.</li>\n<li>That within this directory, there are two package directories that\nwe want to build.</li>\n</ol>\n<p>The snapshot also indicates which version of GHC is used to build that\nsnapshot; so <code>stack</code> also automatically downloads, installs and\nmanages the GHC version for the user. GHC releases tend to come out\nevery 6 months to one year, depending on scheduling, so it's common to\nhave several GHC versions installed on your machine at once. This is\nhandled transparently out of the box with <code>stack</code>.</p>\n<p>Additionally, we can add extra dependencies for when we have patched\nversions of upstream libraries, which happens a lot in the fast-moving\nworld of Haskell:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">snapshot: lts-1.2\npackages: [mypackage1, mypackage2]\nextra-deps: [&quot;bifunctors-5.5.4&quot;]\n</code></pre>\n<p>The build plan for Stack is easy: the snapshot is already a build\nplan. 
We just need to add our source packages and extra dependencies\non top of the pristine build plan.</p>\n<p>Finally, once it has a build plan, it uses both GHC and the Cabal\nlibrary to build Haskell packages, by creating the aforementioned\n<code>Setup.hs</code> automatically if it doesn't already exist, and running the\n<code>./Setup configure</code>, build, etc. step.</p>\n<h2 id=\"pantry\">Pantry</h2>\n<p>Since new revisions of cabal files can be made available at any time,\na package identifier like <code>bifunctors-5.5.4</code> is not reproducible. Its\nmeaning can change over time as new revisions become available. In\norder to get reproducible build plans, we have to track &quot;revisions&quot;\nsuch as <code>bifunctors-5.5.4@rev:1</code>.</p>\n<p>Stack has a library called Pantry to store all of this package\nmetadata into an sqlite database on the developer's machine. It does\nso in\n<a href=\"https://en.wikipedia.org/wiki/Content-addressable_storage\">a content-addressable way</a>\n(CAS),\nso that every variation on version and revision of a package has a\nunique SHA256 cryptographic hash summarising both the .cabal package\ndescription, and the complete contents of the package.</p>\n<p>This lets Stackage be exactly precise. 
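The content-addressing idea here is language-agnostic: the key is derived from the bytes themselves (a hash plus a length), so whoever fetches by key can verify exactly what they received. A toy sketch of such a store, written in Rust for brevity rather than Haskell, and with std's `DefaultHasher` standing in for SHA256 (a deliberate simplification; it is not collision-resistant and real Pantry keys are SHA256 plus length):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Toy content-addressable store: the key is (hash, length) of the content.
// DefaultHasher stands in for SHA256 purely for illustration.
fn cas_key(content: &[u8]) -> (u64, usize) {
    let mut h = DefaultHasher::new();
    content.hash(&mut h);
    (h.finish(), content.len())
}

fn main() {
    let mut store: HashMap<(u64, usize), Vec<u8>> = HashMap::new();
    let blob = b"name: bifunctors\nversion: 5.5.4\n".to_vec();
    let key = cas_key(&blob);
    store.insert(key, blob);

    // Fetching by key lets the client re-derive the key and verify the bytes.
    let fetched = store.get(&key).unwrap();
    assert_eq!(cas_key(fetched), key);
    println!("stored {} bytes", fetched.len());
}
```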
Stackage snapshots used to look\nlike this:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">packages:\n- hackage: List-0.5.2\n- hackage: ListLike-4.2.1\n...\n</code></pre>\n<p>Now it looks like this:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">packages:\n- hackage: ALUT-2.4.0.3@sha256:ab8c2af4c13bc04c7f0f71433ca396664a4c01873f68180983718c8286d8ee05,4118\n  pantry-tree:\n    size: 1562\n    sha256: c9968ebed74fd3956ec7fb67d68e23266b52f55b2d53745defeae20fbcba5579\n- hackage: ANum-0.2.0.2@sha256:c28c0a9779ba6e7c68b5bf9e395ea886563889bfa2c38583c69dd10aa283822e,1075\n  pantry-tree:\n    size: 355\n    sha256: ba7baa3fadf0a733517fd49c73116af23ccb2e243e08b3e09848dcc40de6bc90\n</code></pre>\n<p>So we're able to CAS identify the .cabal file by a hash and length,</p>\n<pre><code>ALUT-2.4.0.3@sha256:ab8c2af4c13bc04c7f0f71433ca396664a4c01873f68180983718c8286d8ee05,4118\n</code></pre>\n<p>And we're able to CAS identify the contents of the package:</p>\n<pre><code>pantry-tree:\n  size: 355\n  sha256: ba7baa3fadf0a733517fd49c73116af23ccb2e243e08b3e09848dcc40de6bc90\n</code></pre>\n<p>Additionally, each and every file within the package is\nCAS-stored. The &quot;pantry-tree&quot; refers to a list of CAS hash-len keys\n(which is also serialised to a binary blob and stored in the same CAS\nstore as the files inside the tarball themselves). With every file\nstored, we remove a lot of duplication that we had storing a whole\ntarball for every single variation of a package.</p>\n<p>Parenthetically, the <code>01-index.tar</code> that Hackage serves up with all\nthe latest <code>.cabal</code> files and revisions has to be downloaded <em>every\ntime</em>. As this file is quite large this is slow and wasteful.</p>\n<p>Another side point: Hackage Security is not needed or consulted for\nthis. 
CAS already allows us to know in advance whether what we are\nreceiving is correct or not.</p>\n<p>When switching to a newer snapshot, lots of packages will be updated,\nbut within each package, only a few files will have changed. Therefore\nwe only need to download those few files that are different. However,\nto achieve that, we need an online service capable of serving up those\nblobs by their SHA256...</p>\n<h2 id=\"enter-casa\">Enter Casa</h2>\n<p>As announced in our <a href=\"https://tech.fpcomplete.com/blog/casa/\">Casa</a> post, Casa stands for\n&quot;content-addressable storage archive&quot; (it also means &quot;home&quot; in\nRomance languages), and it is an online service we're announcing to\nstore packages in a content-addressable way.</p>\n<p>Now, the same process which produces Stackage snapshots can also:</p>\n<ul>\n<li>Download all package versions and revisions from Hackage, and store\nthem in a Pantry database.</li>\n<li>Download all Stackage snapshots, and store them in the same Pantry\ndatabase.</li>\n<li>Push all the unique CAS blobs stored in the Pantry database to\nCasa, completing the circle.</li>\n</ul>\n<p>Stack can now download all the assets it needs to build a package from\nCasa:</p>\n<ul>\n<li>Stackage snapshots.</li>\n<li>Cabal files.</li>\n<li>Individual package files.</li>\n</ul>\n<p>Furthermore, the snapshot format of Stackage supports specifying\nlocations other than Hackage, such as a Git repository at a given\ncommit, or a URL with a tarball. These would also be automatically\npushed to Casa, and Stack would download them from Casa automatically\nlike any other package. 
Parenthetically, Stackage does not currently\ninclude packages from outside of Hackage, but Stack's custom\nsnapshots--which use the same format--do support that.</p>\n<h2 id=\"internal-company-casas\">Internal Company Casas</h2>\n<p>Companies often run their own Hackage on their own network (or\nIP-limited public server) and upload their custom packages to it, to\nbe used by everyone in the company.</p>\n<p>With the advent of Stack, this became less necessary because it's trivial\nto fork any package on GitHub and then link to the Git repo in a\nstack.yaml. Plus, it's more reproducible, because you refer to a hash\nrather than a mutable version. Combined with the additional\nPantry-based SHA256+length described above, you don't have to trust\nGitHub to serve the right content, either.</p>\n<p>The <a href=\"https://github.com/fpco/casa\">Casa repository is here</a>, which\nincludes both the server and a (Haskell) client library with which you\ncan push arbitrary files to the Casa service. Additionally, to\npopulate your Casa server with everything from a given snapshot, or\nall of Hackage, you can use <code>casa-curator</code> from the\n<a href=\"https://github.com/commercialhaskell/curator\">curator</a> repo, which is\nwhat we use ourselves.</p>\n<p>If you're a company interested in running your own Casa server, please\n<a href=\"mailto:[email protected]\">contact us</a>. Or, if you'd like to\ndiscuss the possibility of caching packages in binary form and\ntherefore skipping the build step altogether, please\n<a href=\"mailto:[email protected]\">contact us</a>. 
Also\n<a href=\"mailto:[email protected]\">contact us</a> if you would like to\ndiscuss storing GHC binary releases in Casa and having Stack pull from\nit, to allow for a completely Casa-enabled toolchain.</p>\n<h2 id=\"summary\">Summary</h2>\n<p>Here's what we've brought to Haskell build infrastructure:</p>\n<ul>\n<li>Reliable, reproducible references to packages and their files.</li>\n<li>De-duplication of package files; fewer things to download, on your\ndev machine or on CI.</li>\n<li>A server that is easy to use and rely on.</li>\n<li>A trivially easy way to run an archive of your own.</li>\n</ul>\n<p>When you upgrade to Stack <code>master</code> or the next release of Stack, you\nwill automatically be using the Casa server.</p>\n<p>We believe this CAS architecture has use in other language ecosystems,\nnot just Haskell. See the <a href=\"https://tech.fpcomplete.com/blog/casa/\">Casa</a> post for more details.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/",
        "slug": "casa-and-stack",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Casa and Stack",
        "description": "Last week, we spoke about our new Content Addressable Storage Archive system. In this post, we'll be discussing how this affects Haskell tooling.",
        "updated": null,
        "date": "2019-12-16T12:13:00Z",
        "year": 2019,
        "month": 12,
        "day": 16,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/casa-and-stack/",
        "components": [
          "blog",
          "casa-and-stack"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "ghc",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#ghc",
            "title": "GHC",
            "children": []
          },
          {
            "level": 2,
            "id": "cabal",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#cabal",
            "title": "Cabal",
            "children": []
          },
          {
            "level": 2,
            "id": "hackage",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#hackage",
            "title": "Hackage",
            "children": []
          },
          {
            "level": 2,
            "id": "cabal-install",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#cabal-install",
            "title": "cabal-install",
            "children": []
          },
          {
            "level": 2,
            "id": "stackage",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#stackage",
            "title": "Stackage",
            "children": []
          },
          {
            "level": 2,
            "id": "stack",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#stack",
            "title": "stack",
            "children": []
          },
          {
            "level": 2,
            "id": "pantry",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#pantry",
            "title": "Pantry",
            "children": []
          },
          {
            "level": 2,
            "id": "enter-casa",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#enter-casa",
            "title": "Enter Casa",
            "children": []
          },
          {
            "level": 2,
            "id": "summary",
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/#summary",
            "title": "Summary",
            "children": []
          }
        ],
        "word_count": 1871,
        "reading_time": 10,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/casa/",
            "title": "Casa: Content-Addressable Storage Archive"
          }
        ]
      },
      {
        "relative_path": "blog/casa.md",
        "colocated_path": null,
"content": "<p>Casa stands for &quot;content-addressable storage archive&quot;; it also means\n&quot;home&quot; in Romance languages. It is an online service we're\nannouncing to store packages in a content-addressable way.</p>\n<p>It's the natural next step in our general direction towards\nreproducible builds and immutable infrastructure. Its first\napplication is in the most popular Haskell build tool,\n<a href=\"https://tech.fpcomplete.com/blog/casa-and-stack/\">Stack</a>. The <code>master</code> branch of this tool now\ndownloads its package indexes, metadata and content from this service.</p>\n<p>Although its primary use case was for Haskell, it could easily apply\nto other languages, such as Rust's Cargo package manager. This post\nwill focus on Casa in general. Next week, we'll dive into its\nimplications for Haskell build tooling.</p>\n<h2 id=\"content-addressable-storage-in-a-nutshell\">Content-addressable storage in a nutshell</h2>\n<p>CAS is primarily an addressing system:</p>\n<ul>\n<li>When you store content in the storage system, you generate a key for\nit by hashing the content, e.g. a SHA256.</li>\n<li>When you want to retrieve the content, you use this SHA256 key.</li>\n</ul>\n<p>Because the SHA256 refers to only this piece of content, you can\nvalidate that what you get out is what you put in originally. The\nlogic goes something like:</p>\n<ul>\n<li>Put &quot;Hello, World!&quot; into the system.</li>\n<li>Key is: <code>dffd6021bb2bd5b0af676290809ec3a53191dd81c7f70a4b28688a362182986f</code></li>\n<li>Later, request\n<code>dffd6021bb2bd5b0af676290809ec3a53191dd81c7f70a4b28688a362182986f</code>\nfrom the system.</li>\n<li>Receive back <code>content</code>, check that sha256sum(content) =\n<code>dffd6021bb2bd5b0af676290809ec3a53191dd81c7f70a4b28688a362182986f</code>.</li>\n<li>If so, great! If not, reject this content and raise an error.</li>\n</ul>\n<p>This is how Casa works. 
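That put/request/verify loop can be sketched in a few lines of Haskell. This is a toy in-memory store with a stand-in hash function, purely to show the shape of verify-on-retrieval; Casa itself uses a real SHA256 digest, not this toy hash:

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- Toy stand-in for SHA256 (assumption: any collision-resistant
-- digest would do for the sketch; do not use this in real code).
type Key = Int

hashContent :: String -> Key
hashContent = foldl (\h c -> h * 31 + fromEnum c) 7

type Store = Map Key String

-- Storing content yields the key derived from the content itself.
put :: String -> Store -> (Key, Store)
put content store =
  let key = hashContent content
  in (key, Map.insert key content store)

-- On retrieval, re-hash what came back and reject a mismatch.
get :: Key -> Store -> Maybe String
get key store = do
  content <- Map.lookup key store
  if hashContent content == key then Just content else Nothing

main :: IO ()
main = do
  let (key, store) = put "Hello, World!" Map.empty
  print (get key store)        -- the stored content, verified
  print (get (key + 1) store)  -- unknown key: Nothing
```

The important line is the re-hash inside `get`: the client never has to trust the store, only the arithmetic.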
Other popular systems that use this style of\naddressing are IPFS and, of course, Git.</p>\n<h2 id=\"casa-endpoints\">Casa endpoints</h2>\n<p>There is one simple download entry point to the service.</p>\n<ul>\n<li>GET <code>https://casa.fpcomplete.com/&lt;your key&gt;</code> -- to easily grab the content\nof a key with curl. This doesn't have an API version associated with\nit, because it will only ever accept a key and return a blob.</li>\n</ul>\n<p>These two are versioned because they accept and return JSON/binary\nformats that may change in the future:</p>\n<ul>\n<li>GET <code>https://casa.fpcomplete.com/v1/metadata/&lt;your key&gt;</code> -- to display\nmetadata about a value.</li>\n<li>POST <code>https://casa.fpcomplete.com/v1/pull</code> - we POST up to a thousand\nkey-len pairs in binary format (32 bytes for the key, 8 bytes for\nthe length) and the server will stream all the contents back to the\nclient in key-content pairs.</li>\n</ul>\n<p>Beyond 1000 keys, the client must make separate requests for the next\n1000, etc. This is due to request length limits intentionally applied\nto the server for protection.</p>\n<h2 id=\"protected-upload\">Protected upload</h2>\n<p>Upload is protected under the endpoint <code>/v1/push</code>. This is similar to\nthe pull format, but sends length-content pairs instead. The server\nstreamingly inserts these into the database.</p>\n<p>The current workflow here is that the operator of the archive sets up\na regular push system which accesses casa on a separate port which is\nnot publicly exposed. In the Haskell case, we pull from Stackage and\nHackage (two Haskell package repositories) every 15 minutes, and push\ncontent to Casa.</p>\n<p>Furthermore, rather than uploading packages as tarballs, we instead\nupload individual files. With this approach, we remove a tonne of\nduplication on the server. 
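To give a feel for the pull request body described above (32-byte key, 8-byte length per pair, at most 1000 pairs per request), here is a rough encoder. The big-endian byte order is an assumption on my part for illustration, not something the service documentation in this post specifies:

```haskell
import qualified Data.ByteString as BS
import Data.ByteString (ByteString)
import Data.Word (Word64, Word8)
import Data.Bits (shiftR)

-- One key-len pair: a 32-byte SHA256 digest followed by an
-- 8-byte length (assumed big-endian here).
encodeKeyLen :: ByteString -> Word64 -> ByteString
encodeKeyLen key len
  | BS.length key /= 32 = error "key must be 32 bytes (a SHA256 digest)"
  | otherwise = key <> BS.pack (word64be len)

-- Big-endian serialisation of a Word64 as 8 bytes.
word64be :: Word64 -> [Word8]
word64be n = [fromIntegral (n `shiftR` (8 * i)) | i <- [7,6..0]]

-- A request body is at most 1000 such pairs, concatenated.
encodeRequest :: [(ByteString, Word64)] -> ByteString
encodeRequest pairs
  | length pairs > 1000 = error "split into batches of at most 1000 pairs"
  | otherwise = BS.concat [encodeKeyLen k l | (k, l) <- pairs]
```

Each pair is exactly 40 bytes, so a full batch of 1000 keys costs the client a 40 kB upload, which is why the server can afford a strict request-size limit.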
Most new package uploads change only a few\nfiles, and yet an upgrading user has to download the whole package all\nover again.</p>\n<h2 id=\"service-characteristics\">Service characteristics</h2>\n<p>Here are some advantages of using CAS for package data:</p>\n<ol>\n<li>It's reproducible. You always get the package that you wanted.</li>\n<li>It's secure on the wire; a man-in-the-middle attack cannot alter a\npackage without the SHA256 changing, which can be trivially\nrejected. However, we connect over a TLS-encrypted HTTP connection\nto preserve privacy.</li>\n<li>You don't have to trust the server. It could get hacked, and you\ncould still trust content from it if it gives you content with the\ncorrect SHA256 digest.</li>\n<li>The client is protected from a DoS by a man-in-the-middle that\nmight send an infinitely sized blob in return; the client already\n<em>knows the length</em> of the blob, so it can streamingly consume only\nthis length, and check it against the SHA256.</li>\n<li>It's inherently mirror-able. Because we don't need to trust\nservers, anyone can be a mirror.</li>\n</ol>\n<p>Recall that each unique blob is a file from a package, a\ncabal file, a snapshot, or a tree rendered to a binary blob; this\nremoves a lot of redundancy. The storage requirements for Casa are\ntrivial. There are currently around 1,000,000 unique blobs (with the\nlargest file at 46MB). 
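Point 4 above deserves a second look: because the expected length is known before any bytes arrive, a client can bound how much it will ever read from an untrusted stream. A minimal sketch, using a hypothetical helper over lazy ByteStrings (the real client streams in chunks, but the bounding logic is the same):

```haskell
import qualified Data.ByteString.Lazy as BL
import Data.Int (Int64)

-- Take at most the advertised length from an untrusted stream.
-- We read one byte past the limit so an over-long stream is
-- detected and rejected rather than consumed without bound.
consumeBounded :: Int64 -> BL.ByteString -> Maybe BL.ByteString
consumeBounded expectedLen untrusted =
  let taken = BL.take (expectedLen + 1) untrusted
  in if BL.length taken == expectedLen
       then Just taken  -- caller still checks the SHA256 of this
       else Nothing     -- too short or too long: reject
```

Laziness matters here: `BL.take` never forces more than `expectedLen + 1` bytes of the stream, so even an "infinitely sized blob" costs the client a bounded amount of work.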
Rather than growing linearly with respect to\nthe number of uploaded package versions, we grow linearly with respect\nto unique files.</p>\n<h2 id=\"internal-company-casas\">Internal Company Casas</h2>\n<p>Companies often run their own package archive on their own network (or\nIP-limited public server) and upload their custom packages to it, to\nbe used by everyone in the company.</p>\n<p>Here are some reasons you might want to do that:</p>\n<ul>\n<li>Some organizations block outside Internet access, for security and\nretaining IP.</li>\n<li>Even if the download has integrity guarantees, organizations might\nnot want to reveal <em>what</em> is being downloaded, for privacy.</li>\n<li>An organization may, simply for speed reasons, want package\ndownloads to come from within the same network, rather than reaching\nacross the world, which can add significant latency.</li>\n</ul>\n<p>You can do the same with Casa.</p>\n<p>The <a href=\"https://github.com/fpco/casa\">Casa repository is here</a>, which\nincludes both the server and a binary for uploading and querying\nblobs.</p>\n<p>In the future we will include in the Casa server a trivial way to\nsupport mirroring, by querying keys on-demand from other Casa servers\n(including the main one run by us).</p>\n<h2 id=\"summary\">Summary</h2>\n<p>Here's what we've brought to the table with Casa:</p>\n<ul>\n<li>Reliable, reproducible references to packages and their files.</li>\n<li>De-duplication of package files; fewer things to download, on your\ndev machine or on CI.</li>\n<li>A server that is easy to use and rely on.</li>\n<li>A trivially easy way to run an archive of your own.</li>\n</ul>\n<p>We believe this CAS architecture has use in other language ecosystems,\nnot just Haskell. If you're a company interested in running your own\nCasa server, and/or updating your tooling, e.g. Cargo, to use this\nservice, please <a href=\"mailto:[email protected]\">contact us</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/casa/",
        "slug": "casa",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Casa: Content-Addressable Storage Archive",
        "description": "We're rolling out Casa, a Content Addressable Storage Archive targeted at reproducible build plans. Come learn about how it works and what it can do.",
        "updated": null,
        "date": "2019-12-09T12:13:00Z",
        "year": 2019,
        "month": 12,
        "day": 9,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/casa/",
        "components": [
          "blog",
          "casa"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "content-addressable-storage-in-a-nutshell",
            "permalink": "https://tech.fpcomplete.com/blog/casa/#content-addressable-storage-in-a-nutshell",
            "title": "Content-addressable storage in a nutshell",
            "children": []
          },
          {
            "level": 2,
            "id": "casa-endpoints",
            "permalink": "https://tech.fpcomplete.com/blog/casa/#casa-endpoints",
            "title": "Casa endpoints",
            "children": []
          },
          {
            "level": 2,
            "id": "protected-upload",
            "permalink": "https://tech.fpcomplete.com/blog/casa/#protected-upload",
            "title": "Protected upload",
            "children": []
          },
          {
            "level": 2,
            "id": "service-characteristics",
            "permalink": "https://tech.fpcomplete.com/blog/casa/#service-characteristics",
            "title": "Service characteristics",
            "children": []
          },
          {
            "level": 2,
            "id": "summary",
            "permalink": "https://tech.fpcomplete.com/blog/casa/#summary",
            "title": "Summary",
            "children": []
          }
        ],
        "word_count": 1018,
        "reading_time": 6,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/casa-and-stack/",
            "title": "Casa and Stack"
          }
        ]
      },
      {
        "relative_path": "blog/mainstream-ides-haskell.md",
        "colocated_path": null,
"content": "<h1 id=\"haskell-support-in-mainstream-ides\">Haskell Support in Mainstream IDEs</h1>\n<p>I've tested out the Haskell support of the top mainstream IDEs. Here's\na rundown of the current state of things.</p>\n<p>As a dyed-in-the-wool Emacs hacker I've never used any of the more\nrecent mainstream IDEs, so I can probably offer an unbiased review of\nthe support provided by each.</p>\n<p>Note: I tried approaching this as a client or prospective Haskell\nuser would, so for any manual intervention I had to do, I've used a tone\nthat indicates I'm not happy about having to do it, and anything that\ndoesn't just work I discard with little patience, as a real\nperson would do today. Even if I know there are probably extra\nmanual investigations that I could do knowing what I do about Haskell,\na normal user wouldn't have that advantage.</p>\n<h2 id=\"intellij-idea\">IntelliJ IDEA</h2>\n<p>I installed it according to the\n<a href=\"https://www.jetbrains.com/idea/download/\">instructions on the IntelliJ IDEA web site</a>. I\ndownloaded it to my Ubuntu laptop and installed it under\n<code>/opt/intellij</code>.</p>\n<p>After installing IntelliJ, running it opens up a splash screen. Rather\nthan starting a project, I went straight to the Configure-&gt;Plugins\nbutton. In the plugins list, I chose <code>IntelliJ-Haskell</code>. After that,\nit was suggested that I restart, so I hit <code>Restart IDE</code>.</p>\n<p>After restarting, on the splash screen I hit Create New Project and\nchose &quot;Haskell module&quot;. At this point, it asked me to &quot;Select the\nstack binary&quot;. I picked the one at <code>/home/chris/.local/bin/stack</code>, but\nsomeone else might find it under <code>/usr/local/bin/stack</code>. I hit Next.</p>\n<p>Warning: there was a long wait after this step. I entered my project\nname and proceeded. Opening the project workspace, it now claims &quot;busy\ninstalling hlint&quot;, which is a Haskell linting tool. 
It does this for\nvarious tools; hlint, hindent, stylish-haskell, hoogle. This took\neasily 15 minutes on my machine. Go make a cup of tea.</p>\n<p>Finally, after finishing this process, it's ready to go. Here are some\nthings that I observed work correctly:</p>\n<ol>\n<li>Compile errors when changing code on the fly. Slow, but works. You\ncan hit the &quot;Haskell Problems&quot; tab to see the actual compiler\nmessages.</li>\n<li>Hitting Ctrl and mousing over something, which is how you get\nmetadata in IDEA.\n<ol>\n<li>Go to definition of library code.</li>\n<li>Go to definition of local code.</li>\n<li>Type info at point.</li>\n<li>Go to definition of local bindings.</li>\n</ol>\n</li>\n</ol>\n<p>I tested this out by opening the Stack codebase itself. Took about 10\nseconds on &quot;Indexing...&quot; and then was ready.</p>\n<p>There's a very picky set of steps to opening an existing project\nproperly:</p>\n<ol>\n<li>You have to go &quot;Create project from existing source&quot;</li>\n<li>Choose &quot;Create from external model&quot;</li>\n<li>Choose the &quot;Haskell&quot; SDK.</li>\n</ol>\n<p>Then it should be good to go. Other ways didn't work for me and I got\nstuck.</p>\n<p>I've also seen that it's possible to define test and executable\ntargets quite reasonably.</p>\n<p>IntelliJ has support to &quot;optimize imports&quot; which will remove unneeded\nones, which is very common when refactoring. I'd call that feature a\nmust-have.</p>\n<p>Overall, this IDE experience is not bad. As a Haskeller, I could get\nby if I had to use this.</p>\n<h2 id=\"visual-studio-code\">Visual Studio Code</h2>\n<p>I followed along with the\n<a href=\"https://code.visualstudio.com/docs/setup/linux\">install instructions for Linux</a>. I\ndownloaded the .deb and ran <code>sudo apt install ./&lt;file&gt;.deb</code>.</p>\n<p>I launched Visual Studio Code from the Ubuntu Activities menu. 
It\ndisplays its full UI immediately, which was quite a lot faster than\nIntelliJ, which takes about 5 seconds before displaying a UI\nwindow. Not that I care about start-up times:\n<a href=\"https://www.gnu.org/fun/jokes/gnuemacs.acro.exp.html\">I use Emacs</a>.</p>\n<h3 id=\"visual-studio-code-haskero\">Visual Studio Code: Haskero</h3>\n<p>I went to the Customize section and then &quot;Tools and languages&quot;. Up\npops a menu for language choices (also quite quickly). I tried\ninstalling the\n<a href=\"https://marketplace.visualstudio.com/items?itemName=Vans.haskero\">Haskero</a>\nplugin, which, as I understand, is in spirit the same backend and\nfunctionality of IntelliJ-Haskell. It said &quot;This extension is enabled\nglobally&quot;.</p>\n<p>Assuming that it was ready to use, I looked for a way to create a\nproject. I didn't find one, so I opted to try opening an existing\nHaskell project: stack. I used File -&gt; Open Workspace and chose the\nrepository root directory.</p>\n<p>VSC reports &quot;Unable to watch for file changes in this large\nworkspace.&quot; I followed\n<a href=\"https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc\">the link which had a hint to increase the limit</a>. I\nedited my <code>sysctl.conf</code> file as instructed to allow VSC to watch all\nthe files in my project.</p>\n<p>Opening, for example, <code>src/main/Main.hs</code>, it opens quickly, but\ndoesn't appear to be doing any work like IntelliJ was. So I create\nsome obvious errors in the file to see whether anything works.</p>\n<p>After waiting a while, it seems that I have to save the file to see\nany kind of reaction from VSC. So I save the file and wait. 
I timed\nit:</p>\n<pre><code>$ date\nWed 13 Nov 10:09:38 CET 2019\n$ date\nWed 13 Nov 10:10:40 CET 2019\n</code></pre>\n<p>After a full minute, I got in the Problems tab the problem.</p>\n<p>It seems to be recompiling the whole project on every change. This\npretty much makes this plugin unusable. I don't think the author has\ntested this on a large project.</p>\n<p>In its current state, I would not recommend Haskero. I uninstalled it\nand decided to look at others.</p>\n<h3 id=\"visual-studio-code-haskelly\">Visual Studio Code: Haskelly</h3>\n<p>I decided to try the other similar offering called\nHaskelly. After a reload and re-opening Stack, I made an intentional\nerror in <code>src/main/Main.hs</code> again and found that nothing happened. No\nCPU usage by any process.</p>\n<p>There weren't any indicators on the screen of anything failing to\nwork. However, I had an intentional type error in my file that was not\nflagged up anywhere.</p>\n<p>Another plugin that I would rate as not usable. I uninstalled it.</p>\n<h3 id=\"visual-studio-code-haskell-language-server\">Visual Studio Code: Haskell Language Server</h3>\n<p>I installed the &quot;Haskell Language Server&quot;, which is supposed to be the\nlatest state of the art in language backends for Haskell.</p>\n<p>Enabling it, I see the message:</p>\n<blockquote>\n<p>hie executable missing, please make sure it is installed, see\ngithub.com/haskell/haskell-ide-engine.</p>\n</blockquote>\n<p>Apparently I have to manually install something. Okay, sure, why not?</p>\n<p>There's\n<a href=\"https://github.com/haskell/haskell-ide-engine\">a variety of installation methods</a>. I'm\nnot sure which one will work. But I already have <code>stack</code> installed, so\nI try the install from source option:</p>\n<pre><code>$ git clone https:&#x2F;&#x2F;github.com&#x2F;haskell&#x2F;haskell-ide-engine --recurse-submodules --depth 1\n</code></pre>\n<p>This seems to clone the whole world and takes a while. 
Definitely a\nget a cup of tea moment. After that was done, I went to the directory\nand ran this as per the instructions:</p>\n<pre><code>$ stack .&#x2F;install.hs help\n</code></pre>\n<p>I am presented with a myriad of options:</p>\n<pre><code>Targets:\n   [snip]\n    stack-build             Builds hie with all installed GHCs; with stack\n    stack-build-all         Builds hie for all installed GHC versions and the data files; with stack\n    stack-build-data        Get the required data-files for `hie` (Hoogle DB); with stack\n    stack-install-cabal     Install the cabal executable. It will install the required minimum version for hie (currently 2.4.1.0) if it isn&#x27;t already present in $PATH; with stack\n    stack-hie-8.4.2         Builds hie for GHC version 8.4.2; with stack\n    stack-hie-8.4.3         Builds hie for GHC version 8.4.3; with stack\n    stack-hie-8.4.4         Builds hie for GHC version 8.4.4; with stack\n    stack-hie-8.6.1         Builds hie for GHC version 8.6.1; with stack\n    stack-hie-8.6.2         Builds hie for GHC version 8.6.2; with stack\n    stack-hie-8.6.3         Builds hie for GHC version 8.6.3; with stack\n    stack-hie-8.6.4         Builds hie for GHC version 8.6.4; with stack\n    stack-hie-8.6.5         Builds hie for GHC version 8.6.5; with stack\n   [snip]\n</code></pre>\n<p>I lookup the GHC version that's being used by the stack source code:</p>\n<pre><code>~&#x2F;Work&#x2F;fpco&#x2F;stack$ stack ghc -- --version\nThe Glorious Glasgow Haskell Compilation System, version 8.2.2\n</code></pre>\n<p>Apparently the GHC version in use by stack is too old. At this point I\nstop and uninstall the plugin.</p>\n<h3 id=\"visual-studio-code-ghcid\">Visual Studio Code: ghcid</h3>\n<p>As a last resort, I tried one more plugin. But nothing seemed to\nhappen with this one either. So I uninstalled it.</p>\n<h2 id=\"sublimetext\">SublimeText</h2>\n<p>Another popular editor is SublimeText. 
I installed it via the apt\nrepository\n<a href=\"https://www.sublimetext.com/docs/3/linux_repositories.html#apt\">documented here</a>. I\ndecided to try the\n<a href=\"https://packagecontrol.io/packages/SublimeHaskell\">SublimeHaskell</a>\nplugin, which seems popular.</p>\n<p>Installing things in SublimeText is a little arcane: you first have\nto install &quot;Package Control&quot;. I don't remember which menu item this\nwas from. However, SublimeText installs this for you. Once that's\ndone, you have to use Tools-&gt;Command Palette, which is a kind of\nquick-access tool that's apparently common in SublimeText. In there\nyou have to literally type &quot;package control&quot; and then go to &quot;Package\nControl: Install Package&quot; and hit RET. Then you can type\nSublimeHaskell and hit RET. As an Emacs user, I'm not afraid of arcane\nUIs.</p>\n<p>After installing, it pops up a dialog with:</p>\n<blockquote>\n<p>No usable backends (hsdev, ghc-mod) found in PATH. [..] Please check\nor update your SublimeHaskell user settings or install hsdev or\nghc-mod.</p>\n</blockquote>\n<p>It displays a tab with the README from SublimeHaskell and I assume\nthis is where SublimeText is done helping me.</p>\n<p>Okay, let's install hsdev!</p>\n<p>I had to create a file <code>hsdev.yaml</code>:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">packages: []\nresolver: lts-13.29\nextra-deps:\n- hsdev-0.3.3.1\n- haddock-api-2.21.0\n- hdocs-0.5.3.1\n- network-3.0.1.1\n</code></pre>\n<p>And then ran:</p>\n<pre><code>$ stack install hsdev --stack-yaml hsdev.yaml\n</code></pre>\n<p>That took 5 minutes but succeeded. There isn't a &quot;next button&quot; in\nSublimeText, so I just restarted it. I did File-&gt;Open Folder and\nopened the stack directory and the <code>Main.hs</code> file.</p>\n<p>I see &quot;Inspecting stack&quot;, which indicates that it's actually doing\nsomething. 
However, after that finishes, I still don't see any error\nmessages for my type error. Finally, I make a new change and save the\nfile, and a little messages area pops up below.</p>\n<pre><code>Could not find module ‘Data.Aeson’\n</code></pre>\n<p>And so on for pretty much every library module in the project.</p>\n<p>At this point I can't find a menu or anything else to help me\nconfigure packages or anything related.</p>\n<p>Overall, it seems like SublimeText is <em>almost</em> workable. The cons\nappear to be the manual install process, and the complete lack of\nguidance in the user experience. I'm afraid I can't recommend this to\nclients either at the moment.</p>\n<h2 id=\"summary\">Summary</h2>\n<p>The story for Visual Studio Code is pretty dire. I did not find a\nstraightforward install or reliably working IDE for Haskell in Visual\nStudio Code, and therefore, at the moment, cannot recommend it to our\nclients. Perhaps a little work done on Haskero could bring it up to par.</p>\n<p>SublimeText falls over at the finish line. It seems like a little\nwork on the user experience could bring it up to par.</p>\n<p>IntelliJ IDEA, however, worked quite well with little to no intervention\nrequired. So I would indeed recommend it to clients.</p>\n<p>Read more about\n<a href=\"https://www.fpcomplete.com/blog\">Haskell development, tooling and business</a>\non our blog. <a href=\"mailto:[email protected]\">Email us</a> to set up a free\nconsultation with our engineering team to discuss editor options.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/",
        "slug": "mainstream-ides-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Haskell Support in Mainstream IDEs",
        "description": "An emacs user tests out the Haskell support of the top mainstream IDEs. Here's a rundown of the current state of things.",
        "updated": null,
        "date": "2019-12-02T05:30:00Z",
        "year": 2019,
        "month": 12,
        "day": 2,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/mainstream-ides-haskell/",
        "components": [
          "blog",
          "mainstream-ides-haskell"
        ],
        "summary": null,
        "toc": [
          {
            "level": 1,
            "id": "haskell-support-in-mainstream-ides",
            "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#haskell-support-in-mainstream-ides",
            "title": "Haskell Support in Mainstream IDEs",
            "children": [
              {
                "level": 2,
                "id": "intellij-idea",
                "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#intellij-idea",
                "title": "IntelliJ IDEA",
                "children": []
              },
              {
                "level": 2,
                "id": "visual-studio-code",
                "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#visual-studio-code",
                "title": "Visual Studio Code",
                "children": [
                  {
                    "level": 3,
                    "id": "visual-studio-code-haskero",
                    "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#visual-studio-code-haskero",
                    "title": "Visual Studio Code: Haskero",
                    "children": []
                  },
                  {
                    "level": 3,
                    "id": "visual-studio-code-haskelly",
                    "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#visual-studio-code-haskelly",
                    "title": "Visual Studio Code: Haskelly",
                    "children": []
                  },
                  {
                    "level": 3,
                    "id": "visual-studio-code-haskell-language-server",
                    "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#visual-studio-code-haskell-language-server",
                    "title": "Visual Studio Code: Haskell Language Server",
                    "children": []
                  },
                  {
                    "level": 3,
                    "id": "visual-studio-code-ghcid",
                    "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#visual-studio-code-ghcid",
                    "title": "Visual Studio Code: ghcid",
                    "children": []
                  }
                ]
              },
              {
                "level": 2,
                "id": "sublimetext",
                "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#sublimetext",
                "title": "SublimeText",
                "children": []
              },
              {
                "level": 2,
                "id": "summary",
                "permalink": "https://tech.fpcomplete.com/blog/mainstream-ides-haskell/#summary",
                "title": "Summary",
                "children": []
              }
            ]
          }
        ],
        "word_count": 1877,
        "reading_time": 10,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/using-packer-for-building-windows-server-images.md",
        "colocated_path": null,
        "content": "<p>Packer is a useful tool for creating pre-built machine images. While it's \nusually associated with creating Linux images for a variety of platforms,\nit also has first-class support for Windows. </p>\n<p>We'd like to explain why someone should consider adding Packer-made images\nand dive into the variety of ways it benefits a Windows server DevOps \nenvironment.</p>\n<h3 id=\"motivation\">Motivation</h3>\n<p>Pre-built images are useful in a number of ways.  Packer can use the same build\nconfiguration and provisioning recipe to create AWS AMIs and Azure machine \nimages that will be used in production, as well as the machine images for testing\nlocally in Virtualbox and Vagrant.  This allows teams to develop and test their \ncode using the same setup running in production, as well as the setup their \ncolleagues are using.</p>\n<p>In this kind of setup, you use Packer early in your development process. We \nfollow the workflow where we create the image first, then at any point in the \nfuture the image is available for development and deployments.  This shifts the\nwork that goes into installing software and configuring an image to long before \nyour deployment time. Therefore there's one less step at deployment and the\nWindows server image will come out of the gates fully configured and provisioned \nwith the correct software and settings.</p>\n<p>Using a pre-built image also has the added benefit that we're able to catch \nconfiguration and setup bugs early during the machine creation phase.  Any errors\nwhich would have occurred during deployment are caught early while we're creating\nour Windows server image.  We'll be confident that our pre-built Windows server\nimage will be ready at the time of deployment.</p>\n<p>This could be handy in any number of situations. Imagine a scenario where we need\nto install an outside piece of software on our Windows server.  
Maybe we need to\nset up our Windows server as \na <a href=\"https://puppet.com/docs/puppet/6.4/services_agent_windows.html#concept-577\">Puppet agent</a>\nprior to deployment.  As part of this we'd like to download the <code>.msi</code> package\nusing a simple PowerShell script during setup:</p>\n<pre><code>$msi_source = &quot;https:&#x2F;&#x2F;downloads.puppetlabs.com&#x2F;windows&#x2F;puppet6&#x2F;puppet-agent-6.4.2-x64.msi&quot;\n$msi_dest   = &quot;C:\\Windows\\Temp\\puppet-agent-6.4.2-x64.msi&quot;\nInvoke-WebRequest -Uri $msi_source -OutFile $msi_dest\n</code></pre>\n<p>Any issue downloading and retrieving that piece of software from its vendor \ncould delay our entire Windows server deployment and potentially cause downtime \nor production errors.  This sort of problem could arise for a number of reasons: </p>\n<ul>\n<li>There's an unexpected issue with the network preventing our server from \ngetting the file</li>\n<li>The software vendor's site is down</li>\n<li>There's even a humble typo in our download URL</li>\n</ul>\n<p>These sorts of DevOps pain points should not be allowed to occur at deployment.\nIf we instead started with a pre-built and pre-configured image for our production\nWindows servers, we could deploy new servers knowing that they would be safely \nprovisioned and set up to our liking. </p>\n<h3 id=\"what-is-packer\">What is Packer?</h3>\n<p>So far we've discussed why an engineer would use pre-built Windows images \nin their DevOps setup without discussing specific tools and methodology. 
\nLet's introduce the Packer tool and why it's such a good fit for this problem space.</p>\n<p><a href=\"https://www.packer.io/\">Packer</a> is an open-source tool developed by \n<a href=\"https://www.hashicorp.com/\">HashiCorp</a> for creating machine images.\nIt's an ideal tool to use for our purposes here where we want to create images for multiple \nplatforms (AWS, Azure, Virtualbox) from one build template file.</p>\n<p>At a high level, Packer works by allowing us to define which platform we'd like \nto create our machine or image for with a \n<a href=\"https://www.packer.io/docs/builders\">builder</a>. There are builders \nfor a variety of platforms and we'll touch on using a few of these in our example.</p>\n<p>The next thing that Packer lets us do is use \n<a href=\"https://www.packer.io/docs/provisioners\">provisioners</a> to define \nsteps we want Packer to run.  We define these <em>provisioning</em> steps \nin our Packer config file and Packer will use them to set up our machine\nimages identically, independent of the platforms we target with our builders.</p>\n<p>As we mentioned earlier, Packer has excellent Windows support.\nWe'll touch on using the <a href=\"https://www.packer.io/docs/provisioners/file.html\">file provisioner</a> \nas well as the <a href=\"https://www.packer.io/docs/provisioners/powershell.html\">powershell provisioner</a>\nin depth later. For now it's worth knowing that we can use the file provisioner \nto upload files to the Windows server machines we're building.  Likewise we can use \nthe PowerShell provisioner to run PowerShell scripts that we have on our host \nmachine (the one we're using to create our Windows server images from) on the \nWindows server we're building.</p>\n<h2 id=\"the-nitty-gritty-a-real-world-example\">The nitty gritty - a real world example</h2>\n<p>Packer works by using a JSON-formatted config file. 
\nThis config file is also referred to as the Packer \n<a href=\"https://www.packer.io/docs/templates\">build template</a>.\nYou specify the builders and provisioners for Packer that we discussed earlier \nwithin this build template.</p>\n<p>At this point if you would like to follow along and try the next few steps in \nthis example on your own, you should first install Packer on your machine.\nThe <a href=\"https://www.packer.io/intro/getting-started/install.html\">official install guide for Packer is here</a>\nand if you need to install Vagrant, then please follow the official \n<a href=\"https://www.vagrantup.com/docs/installation/\">install guide here</a>.\nAlso, check out the corresponding <a href=\"https://github.com/fpco/packer-windows\">code repository for this blog post here</a>.</p>\n<p>Packer is a mature, well-used tool and there are many excellent templates and \nexamples available for a variety of use cases.  For our example we're basing our\ntemplate code on the \n<a href=\"https://github.com/StefanScherer/packer-windows\">Packer Windows templates by Stefan Scherer</a>. \nThe set of templates available in that repository is an excellent resource for \ngetting started.  The build template specific to our example is \navailable in its entirety <a href=\"https://github.com/fpco/packer-windows/blob/master/windows_2019.json\">at the code repo</a> \nassociated with this blog, but we'll go over a few of the important details next.</p>\n<p>The first thing that we'd like to cover is the builder section. 
For the Vagrant \nbox builder we're using:</p>\n<pre><code>{\n  &quot;boot_wait&quot;: &quot;2m&quot;,\n  &quot;communicator&quot;: &quot;winrm&quot;,\n  &quot;cpus&quot;: 2,\n  &quot;disk_size&quot;: &quot;{{user `disk_size`}}&quot;,\n  &quot;floppy_files&quot;: [\n    &quot;{{user `autounattend`}}&quot;,\n    &quot;.&#x2F;scripts&#x2F;disable-screensaver.ps1&quot;,\n    &quot;.&#x2F;scripts&#x2F;disable-winrm.ps1&quot;,\n    &quot;.&#x2F;scripts&#x2F;enable-winrm.ps1&quot;,\n    &quot;.&#x2F;scripts&#x2F;microsoft-updates.bat&quot;,\n    &quot;.&#x2F;scripts&#x2F;win-updates.ps1&quot;,\n    &quot;.&#x2F;scripts&#x2F;unattend.xml&quot;,\n    &quot;.&#x2F;scripts&#x2F;sysprep.bat&quot;\n  ],\n  &quot;guest_additions_mode&quot;: &quot;disable&quot;,\n  &quot;guest_os_type&quot;: &quot;Windows2016_64&quot;,\n  &quot;headless&quot;: &quot;{{user `headless`}}&quot;,\n  &quot;iso_checksum&quot;: &quot;{{user `iso_checksum`}}&quot;,\n  &quot;iso_checksum_type&quot;: &quot;{{user `iso_checksum_type`}}&quot;,\n  &quot;iso_url&quot;: &quot;{{user `iso_url`}}&quot;,\n  &quot;memory&quot;: 2048,\n  &quot;shutdown_command&quot;: &quot;a:&#x2F;sysprep.bat&quot;,\n  &quot;type&quot;: &quot;virtualbox-iso&quot;,\n  &quot;vm_name&quot;: &quot;WindowsServer2019&quot;,\n  &quot;winrm_username&quot;: &quot;vagrant&quot;,\n  &quot;winrm_password&quot;: &quot;vagrant&quot;,\n  &quot;winrm_timeout&quot;: &quot;{{user `winrm_timeout`}}&quot;\n}\n</code></pre>\n<p>Here the line:</p>\n<pre><code>&quot;{{user `autounattend`}}&quot;,\n</code></pre>\n<p>is referring to the <code>autounattend</code> variable from the <code>variables</code> section of the \nPacker build template file:</p>\n<pre><code>&quot;variables&quot;: {\n    &quot;autounattend&quot;: &quot;.&#x2F;answer_files&#x2F;Autounattend.xml&quot;,\n</code></pre>\n<p>When you boot a Windows server installation image (like we're doing here with Packer)\nyou'll typically use the <code>Autounattend.xml</code> to automate installation 
instructions\nthat the user would normally be prompted for. Here we're mounting this file on \nthe virtual machine using the floppy drive (the <code>floppy_files</code> section). \nWe also use this functionality to load PowerShell scripts onto the virtual \nmachine. <code>win-updates.ps1</code>, for example, installs the latest updates at the\ntime the Windows server image is created.</p>\n<p>We're also going to add additional scripts to run with provisioners. These are \nin the <code>provisioners</code> section of the Packer build template and are independent \nof any specific platform specified by each of the <code>builders</code> section entries.</p>\n<p>The provisioners section in our build template looks like the following:</p>\n<pre><code>&quot;provisioners&quot;: [\n  {\n    &quot;execute_command&quot;: &quot;{{ .Vars }} cmd &#x2F;c \\&quot;{{ .Path }}\\&quot;&quot;,\n    &quot;scripts&quot;: [\n      &quot;.&#x2F;scripts&#x2F;vm-guest-tools.bat&quot;,\n      &quot;.&#x2F;scripts&#x2F;enable-rdp.bat&quot;\n    ],\n    &quot;type&quot;: &quot;windows-shell&quot;\n  },\n  {\n    &quot;scripts&quot;: [\n      &quot;.&#x2F;scripts&#x2F;debloat-windows.ps1&quot;\n    ],\n    &quot;type&quot;: &quot;powershell&quot;\n  },\n  {\n    &quot;restart_timeout&quot;: &quot;{{user `restart_timeout`}}&quot;,\n    &quot;type&quot;: &quot;windows-restart&quot;\n  },\n  {\n    &quot;execute_command&quot;: &quot;{{ .Vars }} cmd &#x2F;c \\&quot;{{ .Path }}\\&quot;&quot;,\n    &quot;scripts&quot;: [\n      &quot;.&#x2F;scripts&#x2F;pin-powershell.bat&quot;,\n      &quot;.&#x2F;scripts&#x2F;set-winrm-automatic.bat&quot;,\n      &quot;.&#x2F;scripts&#x2F;uac-enable.bat&quot;,\n      &quot;.&#x2F;scripts&#x2F;compile-dotnet-assemblies.bat&quot;,\n      &quot;.&#x2F;scripts&#x2F;dis-updates.bat&quot;\n    ],\n    &quot;type&quot;: &quot;windows-shell&quot;\n  }\n],\n</code></pre>\n<p>We're using both the <a 
href=\"https://www.packer.io/docs/provisioners/powershell.html\">powershell provisioner</a>\nas well as the <a href=\"https://www.packer.io/docs/provisioners/windows-shell.html\">Windows Shell provisioner</a>\nfor older Windows CMD scripts. The reason we're using provisioners to run these\nscripts instead of placing them in the floppy drive like we did in the <code>builder</code> \nfor the Vagrant box earlier is that these scripts are generic to all platforms \nwe'd like our build template to target. For that reason, we would like these to \nrun regardless of the platforms we're using our build template for.</p>\n<h3 id=\"creating-and-running-a-local-windows-server-in-vagrant\">Creating and running a local Windows server in Vagrant</h3>\n<p>For running our Windows server locally, the general overview is:</p>\n<ol>\n<li>First we will build our Windows server Vagrant box file with Packer</li>\n<li>We will add that box to Vagrant</li>\n<li>We'll then initialize it with our Vagrantfile template </li>\n<li>And finally we'll boot it</li>\n</ol>\n<p>Building the Packer box can be done with the \n<a href=\"https://www.packer.io/docs/commands/build.html\"><code>packer build</code></a> command.\nIn our example our Windows server build template is called\n<code>windows_2019.json</code> so we start the Packer build with:</p>\n<pre><code>packer build windows_2019.json\n</code></pre>\n<p>If we have multiple builders we can tell Packer that we would only like to \nuse the virtualbox type with the command:</p>\n<pre><code>packer build --only=virtualbox-iso windows_2019.json\n</code></pre>\n<p>(Note the <code>type</code> value we set earlier in our Vagrant box builder section of the \nPacker build template was: <code>&quot;type&quot;: &quot;virtualbox-iso&quot;,</code>).</p>\n<p>Next, we'll add the box to Vagrant with the <code>vagrant box add</code> command, which is \nused in the following way:</p>\n<pre><code>vagrant box add BOX_NAME BOX_FILE\n</code></pre>\n<p>Or more precisely 
for our example we're invoking this command as:</p>\n<pre><code>vagrant box add windows_2019_virtualbox windows_2019_virtualbox.box\n</code></pre>\n<p>We then need to initialize the box with our Vagrantfile template:</p>\n<pre><code>vagrant init --template vagrantfile-windows_2019.template windows_2019_virtualbox\n</code></pre>\n<p>and boot it with:</p>\n<pre><code>vagrant up\n</code></pre>\n<p>At this point we will have a fully provisioned and running Windows server in \nVagrant.</p>\n<p>The set of commands we used above to build and use our Packer build template is\nneatly encapsulated in the <a href=\"https://github.com/fpco/packer-windows/blob/master/Makefile\">Makefile targets</a>. \nIf you're using the example code in the <a href=\"https://github.com/fpco/packer-windows\">accompanying repo</a>\nfor this blog post you can simply run the following <code>make</code> commands:</p>\n<pre><code>make packer-build-box\nmake vagrant-add-box\nmake vagrant-init\nmake vagrant-up\n</code></pre>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>At this point even though we're only going to be using this Vagrant box and its\nassociated Vagrantfile for local testing purposes, we've eliminated the potential\nfor errors that could occur during our Windows server setup.  When we use this box\nfor future development and testing (or give it to other colleagues to do likewise)\nwe won't need to worry that one of our setup scripts may fail and we would\nneed to fix it in order to continue working. 
We've been able to eliminate an entire \ncategory of DevOps errors and a particular development pain point by using Packer \nto create our Windows server image.</p>\n<p>We also know that if we're able to build our box with Packer, and run the \nprovisioning steps, we'll have an image identical to our \nproduction images that we can use to test and work with.</p>\n<h3 id=\"next-steps\">Next steps</h3>\n<p>If this blog post sounded interesting, or you're curious about other \nways modern DevOps tools like Packer can improve your projects, you should \ncheck out our future blog posts. \nWe have a series coming soon on using tools like Packer to improve your DevOps \nenvironment.</p>\n<p>In future posts we'll cover ways to use Vagrant with Packer\nas well as how to use Packer to produce AWS AMIs to deploy your production environment.\nThese will be natural next steps if you want to pursue the topics covered in \nthis post further.</p>\n<p>We're also adding new DevOps posts all the time and you can sign up for our\n<a href=\"https://tech.fpcomplete.com/platformengineering/signup/\">DevOps mailing list</a>\nif you would like our latest DevOps articles delivered to your inbox.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/using-packer-for-building-windows-server-images/",
        "slug": "using-packer-for-building-windows-server-images",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Using Packer for building Windows Server Images",
        "description": "Packer is a useful tool for creating pre-built machine images. While it's usually associated with creating Linux images for a variety of platforms, it also has first class support for Windows. We'd like to explain why someone should consider adding Packer made images and dive into the variety of ways it benefits a Windows server DevOps environment.",
        "updated": null,
        "date": "2019-11-28T06:08:16Z",
        "year": 2019,
        "month": 11,
        "day": 28,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mike McGirr",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/using-packer-for-building-windows-server-images/",
        "components": [
          "blog",
          "using-packer-for-building-windows-server-images"
        ],
        "summary": null,
        "toc": [
          {
            "level": 3,
            "id": "motivation",
            "permalink": "https://tech.fpcomplete.com/blog/using-packer-for-building-windows-server-images/#motivation",
            "title": "Motivation",
            "children": []
          },
          {
            "level": 3,
            "id": "what-is-packer",
            "permalink": "https://tech.fpcomplete.com/blog/using-packer-for-building-windows-server-images/#what-is-packer",
            "title": "What is Packer?",
            "children": []
          },
          {
            "level": 2,
            "id": "the-nitty-gritty-a-real-world-example",
            "permalink": "https://tech.fpcomplete.com/blog/using-packer-for-building-windows-server-images/#the-nitty-gritty-a-real-world-example",
            "title": "The nitty gritty - a real world example",
            "children": [
              {
                "level": 3,
                "id": "creating-and-running-a-local-windows-server-in-vagrant",
                "permalink": "https://tech.fpcomplete.com/blog/using-packer-for-building-windows-server-images/#creating-and-running-a-local-windows-server-in-vagrant",
                "title": "Creating and running a local Windows server in Vagrant",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/using-packer-for-building-windows-server-images/#conclusion",
            "title": "Conclusion",
            "children": [
              {
                "level": 3,
                "id": "next-steps",
                "permalink": "https://tech.fpcomplete.com/blog/using-packer-for-building-windows-server-images/#next-steps",
                "title": "Next steps",
                "children": []
              }
            ]
          }
        ],
        "word_count": 1939,
        "reading_time": 10,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/how-to-be-successful-at-blockchain-development.md",
        "colocated_path": null,
        "content": "<h2 id=\"how-to-be-successful-at-blockchain-development\">How to be Successful at Blockchain Development</h2>\n<p>The following webinar discusses various strategies for Blockchain Development success and the many uses for Blockchain technology outside of an audit. The webinar also discusses best practices for integrating the Blockchain development strategies into your organization.</p>\n<p>The following topics are covered in the webinar:</p>\n<ul>\n<li>What is a Blockchain &amp; how do companies make use of them?</li>\n<li>What kinds of things might a company need help with?</li>\n<li>What makes Blockchain Challenging?\n<ul>\n<li>What issues need to be decided before proceeding?</li>\n<li>What are the downsides of using Blockchain?</li>\n</ul>\n</li>\n<li>What are the main security issues with Blockchain?</li>\n<li>What makes Blockchain challenging to implement?</li>\n<li>Best practices for developing with Blockchain!</li>\n</ul>\n<h2 id=\"watch-the-webinar\">Watch the Webinar</h2>\n<iframe allowfullscreen=\n            \"allowfullscreen\" height=\"315\" src=\n            \"https://www.youtube.com/embed/jngHo0Gzk6s\"\n            target=\"_blank\" width=\n            \"100%\"></iframe>\n<br/>\n<br/>\n<h2 id=\"do-you-know-fp-complete\">Do You Know FP Complete?</h2>\n<p>At FP Complete we build Next Generation Software to Solve Complex Problems.  We are your global full-stack technology partner that specializes in Server-Side Software, DevSecOps, Cloud Native Computing, and Advanced Programming Languages. We are a one-stop, full-stack technology shop that delivers agile, reliable, repeatable and highly secure software.  Want to learn more about us? 
<a href=\"https://tech.fpcomplete.com/contact-us/\"><strong>Talk to our Team.</strong></a></p>\n<iframe allowfullscreen=\n            \"allowfullscreen\" height=\"315\" src=\n            \"https://www.youtube.com/embed/WY8WjsoMa2I\"\n            target=\"_blank\" width=\n            \"100%\"></iframe>\n<br>\n<br>\n",
        "permalink": "https://tech.fpcomplete.com/blog/how-to-be-successful-at-blockchain-development/",
        "slug": "how-to-be-successful-at-blockchain-development",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "How to be Successful at Blockchain Development",
        "description": "We have been sponsoring free webinars for a few years now at FP Complete. The amount of feedback we have been receiving from the IT community has been overwhelmingly positive. We have been working towards producing a new webinar topic every month, and we plan to keep moving at that pace. In this webinar, Neil […]",
        "updated": null,
        "date": "2019-11-11",
        "year": 2019,
        "month": 11,
        "day": 11,
        "taxonomies": {
          "tags": [
            "blockchain",
            "insights"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Neil Mayhew",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/how-to-be-successful-at-blockchain-development/",
        "components": [
          "blog",
          "how-to-be-successful-at-blockchain-development"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "how-to-be-successful-at-blockchain-development",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-be-successful-at-blockchain-development/#how-to-be-successful-at-blockchain-development",
            "title": "How to be Successful at Blockchain Development",
            "children": []
          },
          {
            "level": 2,
            "id": "watch-the-webinar",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-be-successful-at-blockchain-development/#watch-the-webinar",
            "title": "Watch the Webinar",
            "children": []
          },
          {
            "level": 2,
            "id": "do-you-know-fp-complete",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-be-successful-at-blockchain-development/#do-you-know-fp-complete",
            "title": "Do You Know FP Complete?",
            "children": []
          }
        ],
        "word_count": 217,
        "reading_time": 2,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/when-children-processes-exit-debugging-story.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/when-children-processes-exit-debugging-story/",
        "slug": "when-children-processes-exit-debugging-story",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "When children processes exit - a debugging story",
        "description": "A debugging story revolving around children processes, pipes shared between them, and signals.",
        "updated": null,
        "date": "2019-07-01T02:10:00Z",
        "year": 2019,
        "month": 7,
        "day": 1,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/when-children-processes-exit-debugging-story.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/when-children-processes-exit-debugging-story/",
        "components": [
          "blog",
          "when-children-processes-exit-debugging-story"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/ann-stack-2.1.1-release.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/ann-stack-2.1.1-release/",
        "slug": "ann-stack-2-1-1-release",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "ANN: stack-2.1.1 release",
        "description": "Announcing the first release in the Stack 2 series, stack-2.1.1",
        "updated": null,
        "date": "2019-06-13T13:36:00Z",
        "year": 2019,
        "month": 6,
        "day": 13,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/ann-stack-2.1.1-release.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/ann-stack-2.1.1-release/",
        "components": [
          "blog",
          "ann-stack-2.1.1-release"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/blockchain-and-cryptocurrency-security-0.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/blockchain-and-cryptocurrency-security-0/",
        "slug": "blockchain-and-cryptocurrency-security-0",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Blockchain and Cryptocurrency Security",
        "description": "Blockchain and cryptocurrency are digital technologies that exist only as software code. Proving this code is doing what it is designed to do and that it's safe is why independent audits are critical. We explore why blockchain and cryptocurrency audits are important.",
        "updated": null,
        "date": "2019-05-30T12:14:00Z",
        "year": 2019,
        "month": 5,
        "day": 30,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "blockchain"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/blockchain-and-cryptocurrency-security-0.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/blockchain-and-cryptocurrency-security-0/",
        "components": [
          "blog",
          "blockchain-and-cryptocurrency-security-0"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/faking-sql-server-in-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/faking-sql-server-in-haskell/",
        "slug": "faking-sql-server-in-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Faking SQL Server in Haskell",
        "description": "We make an intercepting SQL Server proxy in Haskell, and implement it using standard libraries, in a streaming fashion.",
        "updated": null,
        "date": "2019-05-29T05:00:00Z",
        "year": 2019,
        "month": 5,
        "day": 29,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/faking-sql-server-in-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/faking-sql-server-in-haskell/",
        "components": [
          "blog",
          "faking-sql-server-in-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/maximizing_haskell_webinar_review.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/maximizing_haskell_webinar_review/",
        "slug": "maximizing-haskell-webinar-review",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Maximizing Haskell Webinar Review",
        "description": "Webinar review on maximizing the power of Haskell Success Program, a low-cost mentoring program to share our real-world “how-to” expertise with your team",
        "updated": null,
        "date": "2019-05-08T10:30:00Z",
        "year": 2019,
        "month": 5,
        "day": 8,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Greg Morancey",
          "html": "hubspot-blogs/maximizing_haskell_webinar_review.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/maximizing_haskell_webinar_review/",
        "components": [
          "blog",
          "maximizing_haskell_webinar_review"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stackage-changes-and-stack-2.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2019/04/stackage-changes-and-stack-2/",
        "slug": "stackage-changes-and-stack-2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stackage changes and Stack 2",
        "description": "We're working towards a Stack 2 release, as well as changes to Stackage. Learn more and help us test things before release!",
        "updated": null,
        "date": "2019-04-24T02:00:00Z",
        "year": 2019,
        "month": 4,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stackage-changes-and-stack-2.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2019/04/stackage-changes-and-stack-2/",
        "components": [
          "blog",
          "2019",
          "04",
          "stackage-changes-and-stack-2"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/why-stack-is-moving-its-ci-to-azure-pipelines.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/why-stack-is-moving-its-ci-to-azure-pipelines/",
        "slug": "why-stack-is-moving-its-ci-to-azure-pipelines",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Why Stack is moving its CI to Azure Pipelines",
        "description": "Stack is moving its CI to Azure Pipelines (from Travis CI, AppVeyor, and Gitlab CI).  This will simplify our configuration, and give us faster and more reliable automated builds.  We've also written documentation to help you set up Azure Pipelines for your own Haskell projects.",
        "updated": null,
        "date": "2019-04-12T08:35:00Z",
        "year": 2019,
        "month": 4,
        "day": 12,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/why-stack-is-moving-its-ci-to-azure-pipelines.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/why-stack-is-moving-its-ci-to-azure-pipelines/",
        "components": [
          "blog",
          "why-stack-is-moving-its-ci-to-azure-pipelines"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/rio-standard-library-for-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/rio-standard-library-for-haskell/",
        "slug": "rio-standard-library-for-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "RIO - a Standard Library for Haskell",
        "description": "Hear Alexey Kuleshevich, Software Engineer at FP Complete help us understand RIO, the standard library for Haskell. RIO is a Haskell library that provides a collection of solutions to some of the most common problems in the Haskell ecosystem.",
        "updated": null,
        "date": "2019-03-07T11:53:00Z",
        "year": 2019,
        "month": 3,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/rio-standard-library-for-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/rio-standard-library-for-haskell/",
        "components": [
          "blog",
          "rio-standard-library-for-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/quickcheck-hedgehog-validity.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/quickcheck-hedgehog-validity/",
        "slug": "quickcheck-hedgehog-validity",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "QuickCheck, Hedgehog, Validity",
        "description": "Property testing has so much potential to become a powerful tool for software development yet it remains mysterious to most developers.  This blog explores ways to utilize the power of property testing as it pertains to Haskell.  Learn about QuickCheck and HedgeHog, and “Validity-based Testing”.",
        "updated": null,
        "date": "2019-02-27T08:21:00Z",
        "year": 2019,
        "month": 2,
        "day": 27,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Tom Sydney Kerckhove",
          "html": "hubspot-blogs/quickcheck-hedgehog-validity.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/quickcheck-hedgehog-validity/",
        "components": [
          "blog",
          "quickcheck-hedgehog-validity"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/blockchain-programming-applications-beyond-cryptocurrency.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/blockchain-programming-applications-beyond-cryptocurrency/",
        "slug": "blockchain-programming-applications-beyond-cryptocurrency",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Blockchain Programming Applications Beyond Cryptocurrency",
        "description": "Blockchain has revolutionized the way monetary value can be transferred, but its abilities span far beyond cryptocurrencies. Discover five other ways that blockchain can be applied to continue revolutionizing the way data is transferred.",
        "updated": null,
        "date": "2019-02-20T13:45:00Z",
        "year": 2019,
        "month": 2,
        "day": 20,
        "taxonomies": {
          "tags": [
            "blockchain"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/blockchain-programming-applications-beyond-cryptocurrency.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/blockchain-programming-applications-beyond-cryptocurrency/",
        "components": [
          "blog",
          "blockchain-programming-applications-beyond-cryptocurrency"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/randomization-testing-for-an-sql-translator.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/randomization-testing-for-an-sql-translator/",
        "slug": "randomization-testing-for-an-sql-translator",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Randomization Testing for an SQL Translator",
        "description": "Learn how to implement an automatic translator from one SQL dialect to another in Haskell & how to use randomization tests to drive the translator forward.",
        "updated": null,
        "date": "2019-02-13T08:29:00Z",
        "year": 2019,
        "month": 2,
        "day": 13,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Alexey Kuleshevich",
          "html": "hubspot-blogs/randomization-testing-for-an-sql-translator.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/randomization-testing-for-an-sql-translator/",
        "components": [
          "blog",
          "randomization-testing-for-an-sql-translator"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/webassembly-in-rust.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/webassembly-in-rust/",
        "slug": "webassembly-in-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "WebAssembly in Rust: A Primer",
        "description": "Watch this webinar to hear Aniket Deshpande, Software & DevOps Engineer at FP Complete explain how to use WebAssembly in Rust. Topics include - Introduction to WebAssembly, setup and tooling for development, programming web applications in Rust, and demonstration of a small web application.",
        "updated": null,
        "date": "2019-02-12T08:05:00Z",
        "year": 2019,
        "month": 2,
        "day": 12,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/webassembly-in-rust.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/webassembly-in-rust/",
        "components": [
          "blog",
          "webassembly-in-rust"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/defining-exceptions-in-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/defining-exceptions-in-haskell/",
        "slug": "defining-exceptions-in-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Defining exceptions in Haskell",
        "description": "We have various ways of defining exceptions in Haskell, but how do we raise our exception types? Learn how to define exception in Haskell in this post.   ",
        "updated": null,
        "date": "2019-01-29T13:16:00Z",
        "year": 2019,
        "month": 1,
        "day": 29,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/defining-exceptions-in-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/defining-exceptions-in-haskell/",
        "components": [
          "blog",
          "defining-exceptions-in-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/when-rust-is-safer-than-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/when-rust-is-safer-than-haskell/",
        "slug": "when-rust-is-safer-than-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "When Rust is safer than Haskell",
        "description": "Haskell generally has better safety guarantees than Rust, there are some cases when Rust is safer than Haskell. This post explores when Rust is safe to use.",
        "updated": null,
        "date": "2019-01-17T10:09:00Z",
        "year": 2019,
        "month": 1,
        "day": 17,
        "taxonomies": {
          "tags": [
            "haskell",
            "rust"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/when-rust-is-safer-than-haskell.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/when-rust-is-safer-than-haskell/",
        "components": [
          "blog",
          "when-rust-is-safer-than-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/building-tuis-in-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/building-tuis-in-haskell/",
        "slug": "building-tuis-in-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Building Terminal User Interfaces in Haskell",
        "description": "This webinar will explain how to get up and running with making your own TUI applications by live-coding example TUIs with the brick. Terminal User Interfaces are text-based user interfaces for use from a terminal. Great for Haskell programmers to enhance productivity.",
        "updated": null,
        "date": "2018-12-03T11:10:00Z",
        "year": 2018,
        "month": 12,
        "day": 3,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/building-tuis-in-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/building-tuis-in-haskell/",
        "components": [
          "blog",
          "building-tuis-in-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/haskell-and-rust.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/11/haskell-and-rust/",
        "slug": "haskell-and-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Haskell and Rust",
        "description": "Learn more about how the Rust programming language shares many of the advantages offered by Haskell such as a strong type system, great tooling, polymorphism, immutability, concurrency, and great software testing methodologies.  Rust is a good choice when you need to squeeze in extra performance.",
        "updated": null,
        "date": "2018-11-26T09:33:00Z",
        "year": 2018,
        "month": 11,
        "day": 26,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Allen",
          "html": "hubspot-blogs/haskell-and-rust.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/2018/11/haskell-and-rust/",
        "components": [
          "blog",
          "2018",
          "11",
          "haskell-and-rust"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/is-rust-functional.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/10/is-rust-functional/",
        "slug": "is-rust-functional",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Is Rust functional?",
        "description": "Rust is an imperative systems programming language. Why does it have so much attention from functional programming advocates? Is it hiding a functional nature?",
        "updated": null,
        "date": "2018-10-17T20:02:00Z",
        "year": 2018,
        "month": 10,
        "day": 17,
        "taxonomies": {
          "tags": [
            "rust"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/is-rust-functional.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/2018/10/is-rust-functional/",
        "components": [
          "blog",
          "2018",
          "10",
          "is-rust-functional"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/http-status-codes-async-rust/",
            "title": "HTTP status codes with async Rust"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/",
            "title": "Monads and GATs in nightly Rust"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/",
            "title": "Rust: Of course it compiles, right?"
          },
          {
            "permalink": "https://tech.fpcomplete.com/rust/",
            "title": "FP Complete Rust"
          }
        ]
      },
      {
        "relative_path": "blog/development-workflows-in-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/development-workflows-in-haskell/",
        "slug": "development-workflows-in-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Development Workflows in Haskell",
        "description": "This webinar will help you take your workflow skills to the next level by exploring different approaches to Haskell development. From compiling a full application whenever you save a file, to experiment with in-progress code drafts over a REPL.",
        "updated": null,
        "date": "2018-10-17T13:04:00Z",
        "year": 2018,
        "month": 10,
        "day": 17,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/development-workflows-in-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/development-workflows-in-haskell/",
        "components": [
          "blog",
          "development-workflows-in-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/2018-haskell-survey-results.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018-haskell-survey-results/",
        "slug": "2018-haskell-survey-results",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "2018 Haskell Survey Results",
        "description": "As a major advocate for growing the Haskell community, we are proud to deliver our State of Haskell 2018 report. We are happy to report that Haskell is thriving and growing in diverse industries, satisfaction is high with Haskell tools, and Haskell is being used for large-scale commercial projects",
        "updated": null,
        "date": "2018-10-08T10:52:00Z",
        "year": 2018,
        "month": 10,
        "day": 8,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/2018-haskell-survey-results.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018-haskell-survey-results/",
        "components": [
          "blog",
          "2018-haskell-survey-results"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/resourcet-necessary-evil.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/10/resourcet-necessary-evil/",
        "slug": "resourcet-necessary-evil",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "ResourceT: A necessary evil",
        "description": "The resourcet library and its ResourceT newtype wrapper appear in a lot of code, especially in the conduit ecosystem. While ResourceT is absolutely necessary in some cases, this blog post claims that you should avoid overusing it.",
        "updated": null,
        "date": "2018-10-04T12:32:00Z",
        "year": 2018,
        "month": 10,
        "day": 4,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/resourcet-necessary-evil.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/10/resourcet-necessary-evil/",
        "components": [
          "blog",
          "2018",
          "10",
          "resourcet-necessary-evil"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/yesod-postgres-kubernetes-deployment.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/09/yesod-postgres-kubernetes-deployment/",
        "slug": "yesod-postgres-kubernetes-deployment",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Deploying Postgres based Yesod web application to Kubernetes using Helm",
        "description": "If you enjoy using Haskell and building web frameworks, then you definitely know about Yesod. Learn how to deploy a PostgresSQL web application using Kubernetes using Helm.",
        "updated": null,
        "date": "2018-09-20T15:45:00Z",
        "year": 2018,
        "month": 9,
        "day": 20,
        "taxonomies": {
          "tags": [
            "haskell",
            "devops",
            "kubernetes"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Sibi Prabakaran",
          "html": "hubspot-blogs/yesod-postgres-kubernetes-deployment.html",
          "blogimage": "/images/blog-listing/kubernetes.png"
        },
        "path": "/blog/2018/09/yesod-postgres-kubernetes-deployment/",
        "components": [
          "blog",
          "2018",
          "09",
          "yesod-postgres-kubernetes-deployment"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/containerization/",
            "title": "Containerization"
          }
        ]
      },
      {
        "relative_path": "blog/deploying_haskell_apps_with_kubernetes.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/deploying_haskell_apps_with_kubernetes/",
        "slug": "deploying-haskell-apps-with-kubernetes",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Deploying Haskell Apps with Kubernetes",
        "description": "This webinar describes how to Deploy Haskell applications using Kubernetes. Topics to be discussed include creation of a Kube cluster using Terraform and Kops, describe pods, deployments, services, load balancers, etc., deployment of a built image using kubectl and deploy, and more.",
        "updated": null,
        "date": "2018-09-11T16:24:00Z",
        "year": 2018,
        "month": 9,
        "day": 11,
        "taxonomies": {
          "tags": [
            "haskell",
            "devops"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/deploying_haskell_apps_with_kubernetes.html",
          "blogimage": "/images/blog-listing/kubernetes.png"
        },
        "path": "/blog/deploying_haskell_apps_with_kubernetes/",
        "components": [
          "blog",
          "deploying_haskell_apps_with_kubernetes"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
            "title": "Our history with containerization"
          },
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/containerization/",
            "title": "Containerization"
          }
        ]
      },
      {
        "relative_path": "blog/haskell-development-workflows-4-ways.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/08/haskell-development-workflows-4-ways/",
        "slug": "haskell-development-workflows-4-ways",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Haskell Development Workflows (4 ways)",
        "description": "Learn how FP Complete uses 4 different development workflows in Haskell. Learn about auto-compile on save via stack, REPL driven development via GHCi, Auto-reload GHCi REPL with ghcid, Emacs + Intero development. Using these will enhance your workflow substantially.",
        "updated": null,
        "date": "2018-08-23T12:45:00Z",
        "year": 2018,
        "month": 8,
        "day": 23,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Roman Gonzalez",
          "html": "hubspot-blogs/haskell-development-workflows-4-ways.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/08/haskell-development-workflows-4-ways/",
        "components": [
          "blog",
          "2018",
          "08",
          "haskell-development-workflows-4-ways"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/haskell-library-audit-reports.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/08/haskell-library-audit-reports/",
        "slug": "haskell-library-audit-reports",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Haskell Library Audit Reports",
        "description": "On behalf of Cardano Foundation, FP Complete has begun auditing Haskell open source libraries. Curious about the results? Come read more about it!",
        "updated": null,
        "date": "2018-08-09T14:47:00Z",
        "year": 2018,
        "month": 8,
        "day": 9,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/haskell-library-audit-reports.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/08/haskell-library-audit-reports/",
        "components": [
          "blog",
          "2018",
          "08",
          "haskell-library-audit-reports"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/specifying-dependencies.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/08/pantry-part-3/specifying-dependencies/",
        "slug": "specifying-dependencies",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Pantry, part 3: Specifying Dependencies",
        "description": "What's wrong with something like `resolver: lts-12.0`? In part 3 of a blog post series on Pantry, we'll find out.",
        "updated": null,
        "date": "2018-08-01T10:28:00Z",
        "year": 2018,
        "month": 8,
        "day": 1,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/specifying-dependencies.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/08/pantry-part-3/specifying-dependencies/",
        "components": [
          "blog",
          "2018",
          "08",
          "pantry-part-3",
          "specifying-dependencies"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/streaming-utf8-haskell-rust.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/07/streaming-utf8-haskell-rust/",
        "slug": "streaming-utf8-haskell-rust",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Streaming UTF-8 in Haskell and Rust",
        "description": "An investigation into getting Haskell-like error handling ergonomics into a Rust application dealing with streaming UTF-8 encoding and decoding.",
        "updated": null,
        "date": "2018-07-30T02:00:00Z",
        "year": 2018,
        "month": 7,
        "day": 30,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "rust",
            "conduit"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/streaming-utf8-haskell-rust.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/2018/07/streaming-utf8-haskell-rust/",
        "components": [
          "blog",
          "2018",
          "07",
          "streaming-utf8-haskell-rust"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/",
            "title": "Collect in Rust, traverse in Haskell and Scala"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/",
            "title": "Monads and GATs in nightly Rust"
          }
        ]
      },
      {
        "relative_path": "blog/pantry-part-2-trees-keys.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/07/pantry-part-2-trees-keys/",
        "slug": "pantry-part-2-trees-keys",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Pantry, part 2: Trees and keys",
        "description": "Part 2 in a blog series on Pantry for content addressable package storage in Haskell. In this edition, we'll motivate and describe a new format for representing package contents.",
        "updated": null,
        "date": "2018-07-23T22:31:00Z",
        "year": 2018,
        "month": 7,
        "day": 23,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/pantry-part-2-trees-keys.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/07/pantry-part-2-trees-keys/",
        "components": [
          "blog",
          "2018",
          "07",
          "pantry-part-2-trees-keys"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/pantry-part-1-package-index.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/07/pantry-part-1-package-index/",
        "slug": "pantry-part-1-package-index",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Pantry, part 1: The Package Index",
        "description": "Part 1 in a series of posts on introducing content addressable storage for packages in Stack and Stackage.",
        "updated": null,
        "date": "2018-07-19T08:09:00Z",
        "year": 2018,
        "month": 7,
        "day": 19,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/pantry-part-1-package-index.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/07/pantry-part-1-package-index/",
        "components": [
          "blog",
          "2018",
          "07",
          "pantry-part-1-package-index"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/deploying-rust-with-docker-and-kubernetes.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/",
        "slug": "deploying-rust-with-docker-and-kubernetes",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Deploying Rust with Docker and Kubernetes",
        "description": "Using a tiny Rust app to demonstrate deploying Rust with Docker and Kubernetes.",
        "updated": null,
        "date": "2018-07-17T14:36:00Z",
        "year": 2018,
        "month": 7,
        "day": 17,
        "taxonomies": {
          "tags": [
            "rust",
            "devops",
            "kubernetes"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Allen",
          "html": "hubspot-blogs/deploying-rust-with-docker-and-kubernetes.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/2018/07/deploying-rust-with-docker-and-kubernetes/",
        "components": [
          "blog",
          "2018",
          "07",
          "deploying-rust-with-docker-and-kubernetes"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
            "title": "Levana NFT Launch"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
            "title": "Our history with containerization"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
            "title": "Deploying Rust with Windows Containers on Kubernetes"
          },
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/containerization/",
            "title": "Containerization"
          }
        ]
      },
      {
        "relative_path": "blog/why-blockchain-and-cryptocurrency-audits.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/why-blockchain-and-cryptocurrency-audits/",
        "slug": "why-blockchain-and-cryptocurrency-audits",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Why blockchain and cryptocurrency audits?",
        "description": "Blockchain and cryptocurrency are digital technologies that exist only as software code. Proving this code is doing what it is designed to do and that it's safe is why independent audits are critical. We explore why blockchain and cryptocurrency audits are important.",
        "updated": null,
        "date": "2018-06-28T14:01:00Z",
        "year": 2018,
        "month": 6,
        "day": 28,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "blockchain"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/why-blockchain-and-cryptocurrency-audits.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/why-blockchain-and-cryptocurrency-audits/",
        "components": [
          "blog",
          "why-blockchain-and-cryptocurrency-audits"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/sed-a-debugging-story.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/06/sed-a-debugging-story/",
        "slug": "sed-a-debugging-story",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Sed: a debugging story",
        "description": "The semi-complete retelling of a debugging adventure involving sed, Windows, Haskell, and Stack. Come enjoy the ride!",
        "updated": null,
        "date": "2018-06-19T12:12:00Z",
        "year": 2018,
        "month": 6,
        "day": 19,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/sed-a-debugging-story.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/06/sed-a-debugging-story/",
        "components": [
          "blog",
          "2018",
          "06",
          "sed-a-debugging-story"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/hackathon-review-and-stack-maintenance.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/06/hackathon-review-and-stack-maintenance/",
        "slug": "hackathon-review-and-stack-maintenance",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Hackathon Review and Stack Maintenance",
        "description": "Review of the Haskell Hackathon following LambdaConf 2018, and some ideas that came from it for better Stack maintainership.",
        "updated": null,
        "date": "2018-06-13T15:46:00Z",
        "year": 2018,
        "month": 6,
        "day": 13,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/hackathon-review-and-stack-maintenance.html",
          "blogimage": "/images/blog-listing/functional.png",
          "author_avatar": "/images/leaders/michael-snoyman.png"
        },
        "path": "/blog/2018/06/hackathon-review-and-stack-maintenance/",
        "components": [
          "blog",
          "2018",
          "06",
          "hackathon-review-and-stack-maintenance"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/devops-to-prepare-for-a-blockchain-world.md",
        "colocated_path": null,
        "content": "<h2 id=\"introduction\">Introduction</h2>\n<p>As the world adopts blockchain technologies, your IT infrastructure and its predictability become critical. Many companies lack the levels of automation and control needed to survive in this high-opportunity, high-threat environment.</p>\n<p>Are your software, cloud, and server systems automated and robust enough? Do you have enough quality control for both your development and your online operations? Or will you join the list of companies bruised by huge data breaches and loss of control over their own computer systems? If you are involved in blockchain, or any industry for that matter, these are the questions you need to ask yourself.</p>\n<p>Blockchain will require you to put more information online than ever before, creating huge exposures for organizations that do not have a handle on their security. Modern DevOps technologies, including many open-source systems, offer powerful solutions that can improve your systems to a level suitable for use with blockchain.</p>\n<h2 id=\"are-companies-really-ready-for-blockchain-technology\">Are companies REALLY ready for Blockchain technology?</h2>\n<p>The answer is that most companies are NOT, and those who believe they are should audit or re-evaluate that belief. Blockchain puts data in public, leaving it vulnerable to outside attack if systems are not hardened and updated in a timely manner.</p>\n<p>Equifax had millions of records stolen; Heartland Payment Systems was hacked and eventually paid around $110 million; and an Airbus A400M crashed, killing everyone on board, after a manual software patch was installed incorrectly. These are a few of the many large organizations harmed by poorly implemented IT practices.</p>\n<p>Once hailed as unhackable, blockchains are now getting hacked. According to MIT Technology Review, hackers have stolen nearly $2 billion worth of cryptocurrency since the beginning of 2017.</p>\n<h2 id=\"big-question-why-companies-are-getting-hacked\">Big Question: Why Companies are getting hacked ?</h2>\n<p>Blockchain itself isn't always the problem. Sometimes the blockchain is secure but the IT infrastructure is not capable of supporting it. In many cases open firewalls, unencrypted data, poor testing, and manual errors were the real causes of the breach.</p>\n<p>So, the question to ask is: is your IT infrastructure secure and reliable enough to support blockchain technology?</p>\n<h2 id=\"what-is-an-it-factory\">What is an IT Factory ?</h2>\n<p>According to <a href=\"https://www.fpcomplete.com/our-team/\">Aaron Contorer</a>, founder and Chairman of FP Complete, an IT factory is divided into three parts:</p>\n<ol>\n<li>Development</li>\n<li>Deployment</li>\n<li>System Operations</li>\n</ol>\n<p>If the IT factory is implemented properly at each stage, it yields better IT services and a more reliable, scalable, and secure environment.</p>\n<p>Deployment is the bridge that carries software from a developer's laptop all the way to a scalable production system with operational monitoring. With DevOps practices, we can ensure all three stages of the IT factory are implemented.</p>\n<p>The key to building a working IT factory is automation that makes each step of the deployment process reliable. With microservice architectures, building and testing a reliable container-based system is much easier now than it used to be.</p>\n<p>The only way to ensure a reliable, reproducible system is to automate every step of the software life cycle. Companies that follow good DevOps practices have far more robust IT infrastructure than those that do NOT.</p>\n<h2 id=\"devops-for-blockchain\">DevOps for Blockchain</h2>\n<p>DevOps tooling serves blockchain well because it ensures all code is tracked, tested, deployed automatically, audited, and quality-assurance tested at each stage of the delivery pipeline.</p>\n<p>Other benefits of applying DevOps methods to blockchain are reduced operational cost, a faster software development and release cycle, better software quality, and higher productivity.</p>\n<p>The following DevOps methods, if implemented for blockchain, can be very helpful.</p>\n<p><strong>1. Engineer for Safety</strong></p>\n<ul>\n<li>With a proper version control platform like GitHub, source code can be viewed and tracked with a complete history of all changes to the code base</li>\n<li>Development tools should be pinned to the same versions, tracked, and uniform across the project</li>\n<li>A Continuous Integration (CI) pipeline must be implemented at the development stage to ensure nothing breaks on each commit. Tools such as Jenkins, Bamboo, AWS CodePipeline, and many more can help in setting up a proper CI pipeline</li>\n<li>Each commit should be tested through a test-case management system, with proper unit tests for each commit</li>\n<li>Each project should also have an issue tracking system such as Jira or GitLab to ensure all requests are properly tracked and closed</li>\n</ul>\n<p><strong>2. Deploy for Safety</strong></p>\n<ul>\n<li>Use Continuous Deployment via DevOps tools to ensure code is automatically deployed to each environment</li>\n<li>Each environment (development, testing, DR, production) should be a replica of the others</li>\n<li>Let automation set up all the infrastructure needed for a successful deployment of code</li>\n<li>Use infrastructure as code (IaC) to provision infrastructure, which reduces manual errors</li>\n<li>Sanity-check each deployment by running test cases to ensure each component is functioning as expected</li>\n<li>Run security testing after each deployment in each environment</li>\n<li>Ensure the system can be rolled back or rolled forward without manual intervention, for example via canary or blue-green deployments</li>\n<li>Use container-based deployments, which make deployments more reliable</li>\n</ul>\n<p><strong>3. Operate for Safety</strong></p>\n<ul>\n<li>Set up continuous automated monitoring and logging</li>\n<li>Set up anomaly detection and alerting mechanisms</li>\n<li>Set up automated response and recovery for any failures</li>\n<li>Ensure a highly available and scalable system for reliability</li>\n<li>Ensure data is encrypted for all outbound and inbound communication</li>\n<li>Separate admin powers, database powers, deployment powers, user access, and so on. The more the powers are separated, the lower the risk</li>\n</ul>\n<p><strong>4. Separate for Safety</strong></p>\n<ul>\n<li>Separate systems internally from each other using multiple small networks. For example: databases and backends on private subnets, UIs on public subnets</li>\n<li>Set up internal firewalls to ensure database systems are protected from direct outside access</li>\n<li>Separate responsibilities and credentials to reduce the risk of exposure</li>\n</ul>\n<p><strong>5. Human systems</strong></p>\n<p>Despite hardware and software safeguards, most breaches of blockchain systems today have happened because of &quot;people&quot; or &quot;human error&quot;.</p>\n<p>People often apply hacks or workarounds to get things running in production with no knowledge of the impact on the system. Sometimes these changes go undocumented, making them hard for the next person to fix. And sometimes asking others to log in to unauthorized systems by sharing credentials over calls paves a path to insecure systems.</p>\n<p>To prevent this, companies must:</p>\n<ul>\n<li>Train people to STOP making manual fixes to a broken system</li>\n<li>Train people NOT to enable &quot;social engineering&quot;, such as asking colleagues to log in to systems on their behalf, sharing passwords, and so on</li>\n</ul>\n<p><strong>6. Quality Assurance</strong></p>\n<ul>\n<li>Review the architecture and ensure best practices are followed throughout the product life cycle</li>\n<li>Ensure the code deployment pipeline has scope for penetration testing</li>\n<li>Audit metrics, logs, and systems weekly or monthly to check for threats</li>\n<li>Test every component and patch, with QA approval, before rolling it out to production</li>\n<li>Companies can also hire third parties to audit their systems on their behalf</li>\n</ul>\n<h2 id=\"how-to-get-there\">How to get there ?</h2>\n<p>The good news is: IT IS POSSIBLE. There is no need for giant or all-in-one solutions.</p>\n<p>Companies starting fresh should build reliability in from the earliest phase of development by focusing on the six points above, thinking about all of these areas in the &quot;plan and design&quot; phase itself.</p>\n<p>Companies already in production, or nearing it, do not need to start over. They can make incremental progress, but it needs to start TODAY.</p>\n<p>Automation is the one discipline in IT that reduces errors and steadily builds a more reliable system. Over time it saves money and resources that can be redirected to other areas.</p>\n<p>To conclude, <a href=\"https://www.fpcomplete.com\">FP Complete</a> is a leading DevOps consultancy. We excel at what we do, and if you are looking to implement DevOps for your blockchain, please feel free to reach out to us for a free consultation.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/",
        "slug": "devops-to-prepare-for-a-blockchain-world",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps to Prepare for a Blockchain World",
        "description": "This webinar describes how DevOps can be used to prepare any company that is interested in adopting blockchain technology. Many companies lack the level of automation and control needed to survive in this high-opportunity, high-threat environment, but DevOps technologies can offer powerful solutions.",
        "updated": null,
        "date": "2018-06-07T08:03:00Z",
        "year": 2018,
        "month": 6,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming",
            "devops"
          ],
          "tags": [
            "devops",
            "blockchain"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/devops-to-prepare-for-a-blockchain-world/",
        "components": [
          "blog",
          "devops-to-prepare-for-a-blockchain-world"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "introduction",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#introduction",
            "title": "Introduction",
            "children": []
          },
          {
            "level": 2,
            "id": "are-companies-really-ready-for-blockchain-technology",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#are-companies-really-ready-for-blockchain-technology",
            "title": "Are companies REALLY ready for Blockchain technology?",
            "children": []
          },
          {
            "level": 2,
            "id": "big-question-why-companies-are-getting-hacked",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#big-question-why-companies-are-getting-hacked",
            "title": "Big Question: Why Companies are getting hacked ?",
            "children": []
          },
          {
            "level": 2,
            "id": "what-is-an-it-factory",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#what-is-an-it-factory",
            "title": "What is an IT Factory ?",
            "children": []
          },
          {
            "level": 2,
            "id": "devops-for-blockchain",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#devops-for-blockchain",
            "title": "DevOps for Blockchain",
            "children": []
          },
          {
            "level": 2,
            "id": "how-to-get-there",
            "permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#how-to-get-there",
            "title": "How to get there ?",
            "children": []
          }
        ],
        "word_count": 1354,
        "reading_time": 7,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/practical-property-testing-in-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/practical-property-testing-in-haskell/",
        "slug": "practical-property-testing-in-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Practical Property Testing in Haskell",
        "description": "Learn how to implement proper property testing in Haskell, with real-world scenarios showing how property tests can interact with web services and database systems. Also covers testing with databases, testing with services, writing custom generators, and using testing combinators.",
        "updated": null,
        "date": "2018-05-10T11:53:00Z",
        "year": 2018,
        "month": 5,
        "day": 10,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/practical-property-testing-in-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/practical-property-testing-in-haskell/",
        "components": [
          "blog",
          "practical-property-testing-in-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/pinpointing-deadlocks-in-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/05/pinpointing-deadlocks-in-haskell/",
        "slug": "pinpointing-deadlocks-in-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Pinpointing deadlocks in Haskell",
        "description": "This blog post introduces a new technique that takes advantage of the Haskell Run Time System (RTS) and the timeout function to locate the exact line where a deadlock occurs in the code. It also shows how to debug concurrent Haskell programs, using the Dining Philosophers problem as an example.",
        "updated": null,
        "date": "2018-05-09T13:32:00Z",
        "year": 2018,
        "month": 5,
        "day": 9,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Alexey Kuleshevich",
          "html": "hubspot-blogs/pinpointing-deadlocks-in-haskell.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/2018/05/pinpointing-deadlocks-in-haskell/",
        "components": [
          "blog",
          "2018",
          "05",
          "pinpointing-deadlocks-in-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/haskell-library-talking-odbc-databases.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/05/haskell-library-talking-odbc-databases/",
        "slug": "haskell-library-talking-odbc-databases",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "A new Haskell library for talking to ODBC databases",
        "description": "A package/library for working with SQL Server databases from Haskell using ODBC, reliably, safely with good documentation on Windows, macOS (OS X) or Linux.",
        "updated": null,
        "date": "2018-05-01T16:14:00Z",
        "year": 2018,
        "month": 5,
        "day": 1,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/haskell-library-talking-odbc-databases.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/05/haskell-library-talking-odbc-databases/",
        "components": [
          "blog",
          "2018",
          "05",
          "haskell-library-talking-odbc-databases"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/async-exception-handling-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/04/async-exception-handling-haskell/",
        "slug": "async-exception-handling-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Asynchronous Exception Handling in Haskell",
        "description": "Learn about the asynchronous exception handling mechanism in Haskell, how to write exception-safe code that allows prompt termination and guaranteed resource cleanup.",
        "updated": null,
        "date": "2018-04-26T12:36:00Z",
        "year": 2018,
        "month": 4,
        "day": 26,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/async-exception-handling-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/04/async-exception-handling-haskell/",
        "components": [
          "blog",
          "2018",
          "04",
          "async-exception-handling-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/async-exceptions-haskell-rust/",
            "title": "Async Exceptions in Haskell, and Rust"
          },
          {
            "permalink": "https://tech.fpcomplete.com/haskell/syllabus/",
            "title": "Applied Haskell Syllabus"
          },
          {
            "permalink": "https://tech.fpcomplete.com/haskell/tutorial/exceptions/",
            "title": "Safe exception handling"
          }
        ]
      },
      {
        "relative_path": "blog/why-haskell-is-hot-for-cryptocurrencies.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/why-haskell-is-hot-for-cryptocurrencies/",
        "slug": "why-haskell-is-hot-for-cryptocurrencies",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Why Haskell is hot for cryptocurrencies",
        "description": "There are almost 1,600 cryptocurrencies available today and that number continues to grow. Standing out among these currencies is almost an impossible task - Unless your cryptocurrency is written in Haskell. Code is Money and Haskell is your best bet for a rock solid cryptocurrency.",
        "updated": null,
        "date": "2018-04-18T11:35:00Z",
        "year": 2018,
        "month": 4,
        "day": 18,
        "taxonomies": {
          "tags": [
            "blockchain",
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Niklas Hambüchen",
          "html": "hubspot-blogs/why-haskell-is-hot-for-cryptocurrencies.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/why-haskell-is-hot-for-cryptocurrencies/",
        "components": [
          "blog",
          "why-haskell-is-hot-for-cryptocurrencies"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/how-to-handle-asynchronous-exceptions-in-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/how-to-handle-asynchronous-exceptions-in-haskell/",
        "slug": "how-to-handle-asynchronous-exceptions-in-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "How to Handle Asynchronous Exceptions in Haskell",
        "description": "Many programming languages have exceptions. In order to write programs correctly in such languages, you need to write exception-safe code. This requirement applies to Haskell as well. However, in Haskell, we have an extra twist: asynchronous exceptions, which can be thrown to any thread at any time.",
        "updated": null,
        "date": "2018-04-12T13:46:00Z",
        "year": 2018,
        "month": 4,
        "day": 12,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/how-to-handle-asynchronous-exceptions-in-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/how-to-handle-asynchronous-exceptions-in-haskell/",
        "components": [
          "blog",
          "how-to-handle-asynchronous-exceptions-in-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/10-common-mistakes-to-avoid-in-fintech-software-development.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/10-common-mistakes-to-avoid-in-fintech-software-development/",
        "slug": "10-common-mistakes-to-avoid-in-fintech-software-development",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "10 Common Mistakes to Avoid in FinTech Software Development",
        "description": "There are a lot of uncertainties in the newer areas of the FinTech industry right now and knowing how to navigate these issues is not an easy task. Learn about the most common mistakes to avoid here. ",
        "updated": null,
        "date": "2018-02-28T12:13:00Z",
        "year": 2018,
        "month": 2,
        "day": 28,
        "taxonomies": {
          "categories": [
            "functional programming",
            "insights"
          ],
          "tags": [
            "fintech"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/10-common-mistakes-to-avoid-in-fintech-software-development.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/10-common-mistakes-to-avoid-in-fintech-software-development/",
        "components": [
          "blog",
          "10-common-mistakes-to-avoid-in-fintech-software-development"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/best-practices-for-developing-medical-device-software.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/02/best-practices-for-developing-medical-device-software/",
        "slug": "best-practices-for-developing-medical-device-software",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Best Practices for Developing Medical Device Software",
        "description": "Medical device regulation makes development more complex than typical software. Learn about the best practices for sound medical device software development. ",
        "updated": null,
        "date": "2018-02-07T12:30:00Z",
        "year": 2018,
        "month": 2,
        "day": 7,
        "taxonomies": {
          "tags": [
            "haskell",
            "regulated"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Niklas Hambüchen",
          "html": "hubspot-blogs/best-practices-for-developing-medical-device-software.html",
          "blogimage": "/images/blog-listing/pharmacology.png"
        },
        "path": "/blog/2018/02/best-practices-for-developing-medical-device-software/",
        "components": [
          "blog",
          "2018",
          "02",
          "best-practices-for-developing-medical-device-software"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cache-ci-builds-to-an-s3-bucket.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/02/cache-ci-builds-to-an-s3-bucket/",
        "slug": "cache-ci-builds-to-an-s3-bucket",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cache CI builds to an S3 Bucket",
        "description": "We're happy to announce a new tool, cache-s3, aimed at providing a consistent caching solution across CI tools.",
        "updated": null,
        "date": "2018-02-05T04:00:00Z",
        "year": 2018,
        "month": 2,
        "day": 5,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Alexey Kuleshevich",
          "html": "hubspot-blogs/cache-ci-builds-to-an-s3-bucket.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/02/cache-ci-builds-to-an-s3-bucket/",
        "components": [
          "blog",
          "2018",
          "02",
          "cache-ci-builds-to-an-s3-bucket"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/hash-based-package-downloads-part-2-of-2.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/01/hash-based-package-downloads-part-2-of-2/",
        "slug": "hash-based-package-downloads-part-2-of-2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Hash Based Package Downloads - part 2 of 2",
        "description": "A plan for implementing hash-based content addressing in the Haskell build ecosystem.",
        "updated": null,
        "date": "2018-01-31T06:00:00Z",
        "year": 2018,
        "month": 1,
        "day": 31,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/hash-based-package-downloads-part-2-of-2.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/01/hash-based-package-downloads-part-2-of-2/",
        "components": [
          "blog",
          "2018",
          "01",
          "hash-based-package-downloads-part-2-of-2"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fp-complete-and-cardano-blockchain-audit-partnership.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/fp-complete-and-cardano-blockchain-audit-partnership/",
        "slug": "fp-complete-and-cardano-blockchain-audit-partnership",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Complete and Cardano Blockchain Audit Partnership",
        "description": "FP Complete's software development specialists will provide a comprehensive review of Cardano’s code and technical documentation. FP Complete's focus on FinTech is a logical fit for blockchain technology providers.",
        "updated": null,
        "date": "2018-01-24T17:32:00Z",
        "year": 2018,
        "month": 1,
        "day": 24,
        "taxonomies": {
          "tags": [
            "blockchain"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/fp-complete-and-cardano-blockchain-audit-partnership.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/fp-complete-and-cardano-blockchain-audit-partnership/",
        "components": [
          "blog",
          "fp-complete-and-cardano-blockchain-audit-partnership"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/hash-based-package-downloads-part-1-of-2.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/01/hash-based-package-downloads-part-1-of-2/",
        "slug": "hash-based-package-downloads-part-1-of-2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Hash Based Package Downloads - part 1 of 2",
        "description": "Reproducible build plans are vital for many industries. Can we be doing more to make our build tools more reliable?",
        "updated": null,
        "date": "2018-01-23T10:19:00Z",
        "year": 2018,
        "month": 1,
        "day": 23,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/hash-based-package-downloads-part-1-of-2.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/01/hash-based-package-downloads-part-1-of-2/",
        "components": [
          "blog",
          "2018",
          "01",
          "hash-based-package-downloads-part-1-of-2"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/weakly-typed-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/01/weakly-typed-haskell/",
        "slug": "weakly-typed-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Weakly Typed Haskell",
        "description": "Haskell is often described as a strongly typed programming language. Does that mean that your Haskell code is automatically strongly typed?",
        "updated": null,
        "date": "2018-01-02T06:00:00Z",
        "year": 2018,
        "month": 1,
        "day": 2,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/weakly-typed-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2018/01/weakly-typed-haskell/",
        "components": [
          "blog",
          "2018",
          "01",
          "weakly-typed-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/parsing-command-line-arguments.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/12/parsing-command-line-arguments/",
        "slug": "parsing-command-line-arguments",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Parsing command line arguments",
        "description": "Learn about our recommendations on how to reliably parse command line arguments into commands, arguments, flags, configuration, settings, and instructions.",
        "updated": null,
        "date": "2017-12-28T05:15:00Z",
        "year": 2017,
        "month": 12,
        "day": 28,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Tom Sydney Kerckhove",
          "html": "hubspot-blogs/parsing-command-line-arguments.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/12/parsing-command-line-arguments/",
        "components": [
          "blog",
          "2017",
          "12",
          "parsing-command-line-arguments"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/building-haskell-apps-with-docker.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/12/building-haskell-apps-with-docker/",
        "slug": "building-haskell-apps-with-docker",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Building Haskell Apps with Docker",
        "description": "How do you build a runtime Docker image from Haskell code? This post will show you a few ways, including the newer multi-stage Docker build technique.",
        "updated": null,
        "date": "2017-12-21T09:30:00Z",
        "year": 2017,
        "month": 12,
        "day": 21,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "docker"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Deni Bertovic",
          "html": "hubspot-blogs/building-haskell-apps-with-docker.html",
          "blogimage": "/images/blog-listing/docker.png"
        },
        "path": "/blog/2017/12/building-haskell-apps-with-docker/",
        "components": [
          "blog",
          "2017",
          "12",
          "building-haskell-apps-with-docker"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-stack-1.6.1-release.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/12/announcing-stack-1.6.1-release/",
        "slug": "announcing-stack-1-6-1-release",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing Stack 1.6.1 release",
        "description": "The Stack build tool for Haskell, version 1.6.1, is now available. Come read about the new features.",
        "updated": null,
        "date": "2017-12-07T09:30:00Z",
        "year": 2017,
        "month": 12,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/announcing-stack-1.6.1-release.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/12/announcing-stack-1.6.1-release/",
        "components": [
          "blog",
          "2017",
          "12",
          "announcing-stack-1.6.1-release"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/lambda-conference-and-haskell-survey.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/lambda-conference-and-haskell-survey/",
        "slug": "lambda-conference-and-haskell-survey",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Lambda Conference and Haskell Survey",
        "description": "See Michael Snoyman discuss Haskell Monads at the Lambda World Conference in Cadiz, Spain. We also discuss how you can participate in our 2017 Haskell Survey.",
        "updated": null,
        "date": "2017-11-22T10:16:00Z",
        "year": 2017,
        "month": 11,
        "day": 22,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/lambda-conference-and-haskell-survey.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/lambda-conference-and-haskell-survey/",
        "components": [
          "blog",
          "lambda-conference-and-haskell-survey"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cryptographic-hashing-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/09/cryptographic-hashing-haskell/",
        "slug": "cryptographic-hashing-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cryptographic Hashing in Haskell",
        "description": "Cookbook-style blog post demonstrating how to do cryptographic hashing with the cryptonite library, with related work like base-16 encoding.",
        "updated": null,
        "date": "2017-09-12T16:33:00Z",
        "year": 2017,
        "month": 9,
        "day": 12,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/cryptographic-hashing-haskell.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/2017/09/cryptographic-hashing-haskell/",
        "components": [
          "blog",
          "2017",
          "09",
          "cryptographic-hashing-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/all-about-strictness.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/09/all-about-strictness/",
        "slug": "all-about-strictness",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "All About Strictness",
        "description": "Understanding how lazy evaluation affects your data in Haskell is vital to writing efficient programs. Get a crash course in the basics.",
        "updated": null,
        "date": "2017-08-28T20:33:00Z",
        "year": 2017,
        "month": 8,
        "day": 28,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/all-about-strictness.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/09/all-about-strictness/",
        "components": [
          "blog",
          "2017",
          "09",
          "all-about-strictness"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/tutorial/all-about-strictness/",
            "title": "All about strictness"
          }
        ]
      },
      {
        "relative_path": "blog/exiting-haskell-process.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/08/exiting-haskell-process/",
        "slug": "exiting-haskell-process",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Exiting a Haskell process",
        "description": "You've probably been able to successfully exit most of your Haskell processes. But there are some perhaps surprising corner cases worth mentioning.",
        "updated": null,
        "date": "2017-08-24T14:10:00Z",
        "year": 2017,
        "month": 8,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/exiting-haskell-process.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/08/exiting-haskell-process/",
        "components": [
          "blog",
          "2017",
          "08",
          "exiting-haskell-process"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/functional-programming-and-modern-devops.md",
        "colocated_path": null,
        "content": "<p>In this presentation, Aaron Contorer presents on how modern tools can\nbe used to reach the Engineering sweet spot.</p>\n<iframe width=\"100%\" height=\"315\"\nsrc=\"https://www.youtube.com/embed/ybSBCVhVWs8\" frameborder=\"0\"\nallow=\"accelerometer; autoplay; encrypted-media; gyroscope;\npicture-in-picture\" allowfullscreen></iframe>\n<br>\n<br>\n<h2 id=\"do-you-know-fp-complete\">Do you know FP Complete</h2>\n<p>At FP Complete, we do so many things to help companies it’s hard to\nencapsulate our impact in a few words. They say a picture is worth a\nthousand words, so a video has to be worth 10,000 words (at\nleast). Therefore, to tell all we can in as little time as possible,\ncheck out our explainer video. It’s only 108 seconds to get the full\nstory of FP Complete.</p>\n<iframe allowfullscreen=\n            \"allowfullscreen\" height=\"315\" src=\n            \"https://www.youtube.com/embed/JCcuSn_lFKs\"\n            target=\"_blank\" width=\n            \"100%\"></iframe>\n<br>\n<br>\n<p>Reach us to on <a href=\"mailto:[email protected]\">[email protected]</a> if you have suggestions or if\nyou would like to learn more about FP Complete and the services we\noffer.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/functional-programming-and-modern-devops/",
        "slug": "functional-programming-and-modern-devops",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Functional Programming and Modern DevOps",
        "description": "In this presentation, Aaron Contorer presents how modern tools can be used to reach the engineering sweet spot.",
        "updated": null,
        "date": "2017-08-11",
        "year": 2017,
        "month": 8,
        "day": 11,
        "taxonomies": {
          "tags": [
            "devops",
            "haskell",
            "insights"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/functional-programming-and-modern-devops/",
        "components": [
          "blog",
          "functional-programming-and-modern-devops"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "do-you-know-fp-complete",
            "permalink": "https://tech.fpcomplete.com/blog/functional-programming-and-modern-devops/#do-you-know-fp-complete",
            "title": "Do you know FP Complete",
            "children": []
          }
        ],
        "word_count": 162,
        "reading_time": 1,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-issue-triagers.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/08/stack-issue-triagers/",
        "slug": "stack-issue-triagers",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stack Issue Triagers",
        "description": "We're starting a new initiative, the Stack Issue Triagers. Come join the team and seize your destiny!",
        "updated": null,
        "date": "2017-08-07T04:20:00Z",
        "year": 2017,
        "month": 8,
        "day": 7,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stack-issue-triagers.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/08/stack-issue-triagers/",
        "components": [
          "blog",
          "2017",
          "08",
          "stack-issue-triagers"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/to-void-or-to-void.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/07/to-void-or-to-void/",
        "slug": "to-void-or-to-void",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "To Void or to void",
        "description": "The Void type can indicate the absence of any useful value. But should you use the Void type, or the void type variable?",
        "updated": null,
        "date": "2017-07-31T03:40:00Z",
        "year": 2017,
        "month": 7,
        "day": 31,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/to-void-or-to-void.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/07/to-void-or-to-void/",
        "components": [
          "blog",
          "2017",
          "07",
          "to-void-or-to-void"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/the-rio-monad.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/07/the-rio-monad/",
        "slug": "the-rio-monad",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The RIO Monad",
        "description": "Tying together unliftio, the ReaderT pattern, and a tale of two brackets: learn how the Stack codebase is leveraging the new RIO monad.",
        "updated": null,
        "date": "2017-07-24T03:30:00Z",
        "year": 2017,
        "month": 7,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/the-rio-monad.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/07/the-rio-monad/",
        "components": [
          "blog",
          "2017",
          "07",
          "the-rio-monad"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/tutorial/monad-transformers/",
            "title": "Monad Transformers"
          }
        ]
      },
      {
        "relative_path": "blog/announcing-new-unliftio-library.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/07/announcing-new-unliftio-library/",
        "slug": "announcing-new-unliftio-library",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: the new unliftio library",
        "description": "Learn about the brand new unliftio library, a new, simplified approach to running transformer actions in IO",
        "updated": null,
        "date": "2017-07-17T17:30:00Z",
        "year": 2017,
        "month": 7,
        "day": 17,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announcing-new-unliftio-library.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/07/announcing-new-unliftio-library/",
        "components": [
          "blog",
          "2017",
          "07",
          "announcing-new-unliftio-library"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/tutorial/monad-transformers/",
            "title": "Monad Transformers"
          }
        ]
      },
      {
        "relative_path": "blog/stacks-new-extensible-snapshots.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/07/stacks-new-extensible-snapshots/",
        "slug": "stacks-new-extensible-snapshots",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stack's New Extensible Snapshots",
        "description": "The next release of Stack will include a major overhaul to how snapshots and dependencies are managed.",
        "updated": null,
        "date": "2017-07-13T17:30:00Z",
        "year": 2017,
        "month": 7,
        "day": 13,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stacks-new-extensible-snapshots.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/07/stacks-new-extensible-snapshots/",
        "components": [
          "blog",
          "2017",
          "07",
          "stacks-new-extensible-snapshots"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/iterators-streams-rust-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/07/iterators-streams-rust-haskell/",
        "slug": "iterators-streams-rust-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Iterators and Streams in Rust and Haskell",
        "description": "Both Rust and Haskell have their own approaches for modeling streaming data. But what does it look like when we try to share these approaches between languages?",
        "updated": null,
        "date": "2017-07-10T13:20:00Z",
        "year": 2017,
        "month": 7,
        "day": 10,
        "taxonomies": {
          "tags": [
            "rust",
            "haskell",
            "conduit"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/iterators-streams-rust-haskell.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/2017/07/iterators-streams-rust-haskell/",
        "components": [
          "blog",
          "2017",
          "07",
          "iterators-streams-rust-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/",
            "title": "Collect in Rust, traverse in Haskell and Scala"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/",
            "title": "Monads and GATs in nightly Rust"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-at-fpco-2020/",
            "title": "Rust at FP Complete, 2020 update"
          },
          {
            "permalink": "https://tech.fpcomplete.com/rust/pid1/",
            "title": "Implementing pid1 with Rust and async/await"
          }
        ]
      },
      {
        "relative_path": "blog/tale-of-two-brackets.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/06/tale-of-two-brackets/",
        "slug": "tale-of-two-brackets",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "A Tale of Two Brackets",
        "description": "A debugging story covering such varied topics as ResourceT, monad-control, and different bracket type signatures",
        "updated": null,
        "date": "2017-06-26T20:52:00Z",
        "year": 2017,
        "month": 6,
        "day": 26,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/tale-of-two-brackets.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/06/tale-of-two-brackets/",
        "components": [
          "blog",
          "2017",
          "06",
          "tale-of-two-brackets"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/understanding-resourcet.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/06/understanding-resourcet/",
        "slug": "understanding-resourcet",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Understanding ResourceT",
        "description": "The ResourceT monad transformer allows you to acquire resources with exception safety. This post demonstrates from the ground up when you need it and how it works.",
        "updated": null,
        "date": "2017-06-19T08:52:00Z",
        "year": 2017,
        "month": 6,
        "day": 19,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/understanding-resourcet.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/06/understanding-resourcet/",
        "components": [
          "blog",
          "2017",
          "06",
          "understanding-resourcet"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/readert-design-pattern.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/06/readert-design-pattern/",
        "slug": "readert-design-pattern",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The ReaderT Design Pattern",
        "description": "Haskell isn't usually a language known for C++-style design patterns. Here's one counter-example, showing how to structure your applications.",
        "updated": null,
        "date": "2017-06-12T16:24:00Z",
        "year": 2017,
        "month": 6,
        "day": 12,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/readert-design-pattern.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/06/readert-design-pattern/",
        "components": [
          "blog",
          "2017",
          "06",
          "readert-design-pattern"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/library/rio/",
            "title": "rio: A standard library"
          }
        ]
      },
      {
        "relative_path": "blog/pure-functional-programming-part-2.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/05/pure-functional-programming-part-2/",
        "slug": "pure-functional-programming-part-2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "What pure functional programming is all about: Part 2",
        "description": "Exploring what pure functional programming is all about: what it means, reasoning about it, and performance gains. Part 2",
        "updated": null,
        "date": "2017-05-01T16:04:00Z",
        "year": 2017,
        "month": 5,
        "day": 1,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/pure-functional-programming-part-2.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/05/pure-functional-programming-part-2/",
        "components": [
          "blog",
          "2017",
          "05",
          "pure-functional-programming-part-2"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/pure-functional-programming.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/04/pure-functional-programming/",
        "slug": "pure-functional-programming",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "What pure functional programming is all about: Part 1",
        "description": "Exploring what pure functional programming is all about: what it means, reasoning about it, and performance gains.",
        "updated": null,
        "date": "2017-04-14T17:44:00Z",
        "year": 2017,
        "month": 4,
        "day": 14,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/pure-functional-programming.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/04/pure-functional-programming/",
        "components": [
          "blog",
          "2017",
          "04",
          "pure-functional-programming"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/ci-build-process-in-code-repository.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/04/ci-build-process-in-code-repository/",
        "slug": "ci-build-process-in-code-repository",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Your CI build process should be in your code repository",
        "description": "Having your build pipeline and environment defined in the code repository gives developers more control and makes branching and building old versions easy.",
        "updated": null,
        "date": "2017-04-07T16:00:00Z",
        "year": 2017,
        "month": 4,
        "day": 7,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/ci-build-process-in-code-repository.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/04/ci-build-process-in-code-repository/",
        "components": [
          "blog",
          "2017",
          "04",
          "ci-build-process-in-code-repository"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/",
            "title": "Announcing Amber, encrypted secrets management"
          },
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/cicd/",
            "title": "Continuous Integration and Deployment"
          }
        ]
      },
      {
        "relative_path": "blog/partial-patterns-do-blocks.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/03/partial-patterns-do-blocks/",
        "slug": "partial-patterns-do-blocks",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Partial patterns in do blocks: let vs return",
        "description": "If you're in a do block, and you need a partial pattern match, should you use a let? This blog post explains why not. Recommended reading for Haskellers.",
        "updated": null,
        "date": "2017-03-10T17:44:00Z",
        "year": 2017,
        "month": 3,
        "day": 10,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/partial-patterns-do-blocks.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/2017/03/partial-patterns-do-blocks/",
        "components": [
          "blog",
          "2017",
          "03",
          "partial-patterns-do-blocks"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/typed-process.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/02/typed-process/",
        "slug": "typed-process",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The typed-process library",
        "description": "The typed-process library is a newer library for launching external processes from Haskell. Learn why you may want to try it out.",
        "updated": null,
        "date": "2017-02-24T17:02:00Z",
        "year": 2017,
        "month": 2,
        "day": 24,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/typed-process.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/02/typed-process/",
        "components": [
          "blog",
          "2017",
          "02",
          "typed-process"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/immutability-docker-haskells-st-type.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/02/immutability-docker-haskells-st-type/",
        "slug": "immutability-docker-haskells-st-type",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Immutability, Docker, and Haskell's ST type",
        "description": "Immutability is a well-known constant in functional programming but relatively new in modern DevOps, and the parallels are worth examining.",
        "updated": null,
        "date": "2017-02-13T15:24:00Z",
        "year": 2017,
        "month": 2,
        "day": 13,
        "taxonomies": {
          "tags": [
            "haskell",
            "docker",
            "devops"
          ],
          "categories": [
            "functional programming",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/immutability-docker-haskells-st-type.html",
          "blogimage": "/images/blog-listing/docker.png"
        },
        "path": "/blog/2017/02/immutability-docker-haskells-st-type/",
        "components": [
          "blog",
          "2017",
          "02",
          "immutability-docker-haskells-st-type"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
            "title": "Our history with containerization"
          }
        ]
      },
      {
        "relative_path": "blog/monadmask-vs-monadbracket.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/02/monadmask-vs-monadbracket/",
        "slug": "monadmask-vs-monadbracket",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "MonadMask vs MonadBracket",
        "description": "The exceptions package has a MonadMask typeclass. Is it the right abstraction, or do we need something different?",
        "updated": null,
        "date": "2017-02-06T14:44:00Z",
        "year": 2017,
        "month": 2,
        "day": 6,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/monadmask-vs-monadbracket.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/02/monadmask-vs-monadbracket/",
        "components": [
          "blog",
          "2017",
          "02",
          "monadmask-vs-monadbracket"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/speeding-up-distributed-computation.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/01/speeding-up-distributed-computation/",
        "slug": "speeding-up-distributed-computation",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Speeding up a distributed computation in Haskell",
        "description": "By dropping down to lower-level programming, we were able to greatly improve the performance of a distributed application.",
        "updated": null,
        "date": "2017-01-18T15:10:00Z",
        "year": 2017,
        "month": 1,
        "day": 18,
        "taxonomies": {
          "tags": [
            "haskell",
            "data"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Francesco Mazzoli",
          "html": "hubspot-blogs/speeding-up-distributed-computation.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/2017/01/speeding-up-distributed-computation/",
        "components": [
          "blog",
          "2017",
          "01",
          "speeding-up-distributed-computation"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/green-threads-are-like-garbage-collection.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2017/01/green-threads-are-like-garbage-collection/",
        "slug": "green-threads-are-like-garbage-collection",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Green Threads are like Garbage Collection",
        "description": "Learn what green threads are and why they are important in programming languages like Haskell, Go, and Erlang. They can do more than simplify concurrent code.",
        "updated": null,
        "date": "2017-01-06T16:00:00Z",
        "year": 2017,
        "month": 1,
        "day": 6,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/green-threads-are-like-garbage-collection.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2017/01/green-threads-are-like-garbage-collection/",
        "components": [
          "blog",
          "2017",
          "01",
          "green-threads-are-like-garbage-collection"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/software-project-maintenance-is-where-haskell-shines.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/12/software-project-maintenance-is-where-haskell-shines/",
        "slug": "software-project-maintenance-is-where-haskell-shines",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Software project maintenance is where Haskell shines",
        "description": "Software Maintenance is the single biggest activity in developing successful software. Learn how Haskell makes this job better, saving your project time and money.",
        "updated": null,
        "date": "2016-12-30T20:00:00Z",
        "year": 2016,
        "month": 12,
        "day": 30,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/software-project-maintenance-is-where-haskell-shines.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/12/software-project-maintenance-is-where-haskell-shines/",
        "components": [
          "blog",
          "2016",
          "12",
          "software-project-maintenance-is-where-haskell-shines"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/concurrency-and-node.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/12/concurrency-and-node/",
        "slug": "concurrency-and-node",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Concurrency and Node",
        "description": "Is Haskell better at I/O than node.js? We think so. FP Complete compares concurrency approaches in node.js/Javascript with Haskell to make the case.",
        "updated": null,
        "date": "2016-12-07T02:00:00Z",
        "year": 2016,
        "month": 12,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Andrew Rademacher",
          "html": "hubspot-blogs/concurrency-and-node.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/12/concurrency-and-node/",
        "components": [
          "blog",
          "2016",
          "12",
          "concurrency-and-node"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/comparison-scala-and-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/11/comparison-scala-and-haskell/",
        "slug": "comparison-scala-and-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Do you like Scala? Give Haskell a try!",
        "description": ".",
        "updated": null,
        "date": "2016-11-29T12:15:00Z",
        "year": 2016,
        "month": 11,
        "day": 29,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Philipp Kant",
          "html": "hubspot-blogs/comparison-scala-and-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/11/comparison-scala-and-haskell/",
        "components": [
          "blog",
          "2016",
          "11",
          "comparison-scala-and-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/comparative-concurrency-with-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/11/comparative-concurrency-with-haskell/",
        "slug": "comparative-concurrency-with-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Comparative Concurrency with Haskell",
        "description": ".",
        "updated": null,
        "date": "2016-11-22T02:00:00Z",
        "year": 2016,
        "month": 11,
        "day": 22,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/comparative-concurrency-with-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/11/comparative-concurrency-with-haskell/",
        "components": [
          "blog",
          "2016",
          "11",
          "comparative-concurrency-with-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/scripting-in-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/11/scripting-in-haskell/",
        "slug": "scripting-in-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Scripting in Haskell",
        "description": ".",
        "updated": null,
        "date": "2016-11-22T02:00:00Z",
        "year": 2016,
        "month": 11,
        "day": 22,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/scripting-in-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/11/scripting-in-haskell/",
        "components": [
          "blog",
          "2016",
          "11",
          "scripting-in-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/mastering-time-to-market-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/11/mastering-time-to-market-haskell/",
        "slug": "mastering-time-to-market-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Mastering Time-to-Market with Haskell",
        "description": ".",
        "updated": null,
        "date": "2016-11-21T02:00:00Z",
        "year": 2016,
        "month": 11,
        "day": 21,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/mastering-time-to-market-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/11/mastering-time-to-market-haskell/",
        "components": [
          "blog",
          "2016",
          "11",
          "mastering-time-to-market-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/covariance-contravariance.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/11/covariance-contravariance/",
        "slug": "covariance-contravariance",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Covariance and Contravariance",
        "description": ".",
        "updated": null,
        "date": "2016-11-09T02:00:00Z",
        "year": 2016,
        "month": 11,
        "day": 9,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/covariance-contravariance.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/11/covariance-contravariance/",
        "components": [
          "blog",
          "2016",
          "11",
          "covariance-contravariance"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/library/unliftio/",
            "title": "unliftio: Lifting and unlifting IO actions"
          },
          {
            "permalink": "https://tech.fpcomplete.com/haskell/syllabus/",
            "title": "Applied Haskell Syllabus"
          },
          {
            "permalink": "https://tech.fpcomplete.com/haskell/tutorial/common-typeclasses/",
            "title": "Common Typeclasses"
          }
        ]
      },
      {
        "relative_path": "blog/exceptions-best-practices-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/11/exceptions-best-practices-haskell/",
        "slug": "exceptions-best-practices-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Exceptions Best Practices in Haskell",
        "description": ".",
        "updated": null,
        "date": "2016-11-07T02:00:00Z",
        "year": 2016,
        "month": 11,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/exceptions-best-practices-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/11/exceptions-best-practices-haskell/",
        "components": [
          "blog",
          "2016",
          "11",
          "exceptions-best-practices-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/tutorial/exceptions/",
            "title": "Safe exception handling"
          }
        ]
      },
      {
        "relative_path": "blog/static-compilation-with-stack.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/10/static-compilation-with-stack/",
        "slug": "static-compilation-with-stack",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Static compilation with Stack",
        "description": ".",
        "updated": null,
        "date": "2016-10-07T02:00:00Z",
        "year": 2016,
        "month": 10,
        "day": 7,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Tim Dysinger",
          "html": "hubspot-blogs/static-compilation-with-stack.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/10/static-compilation-with-stack/",
        "components": [
          "blog",
          "2016",
          "10",
          "static-compilation-with-stack"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/updated-hackage-mirroring.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/09/updated-hackage-mirroring/",
        "slug": "updated-hackage-mirroring",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Updated Hackage mirroring",
        "description": ".",
        "updated": null,
        "date": "2016-09-27T12:00:00Z",
        "year": 2016,
        "month": 9,
        "day": 27,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/updated-hackage-mirroring.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/09/updated-hackage-mirroring/",
        "components": [
          "blog",
          "2016",
          "09",
          "updated-hackage-mirroring"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/practical-haskell-simple-file-mirror-2.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/09/practical-haskell-simple-file-mirror-2/",
        "slug": "practical-haskell-simple-file-mirror-2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Practical Haskell: Simple File Mirror (Part 2)",
        "description": ".",
        "updated": null,
        "date": "2016-09-21T12:00:00Z",
        "year": 2016,
        "month": 9,
        "day": 21,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/practical-haskell-simple-file-mirror-2.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/09/practical-haskell-simple-file-mirror-2/",
        "components": [
          "blog",
          "2016",
          "09",
          "practical-haskell-simple-file-mirror-2"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/data-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/09/data-haskell/",
        "slug": "data-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Working with data in Haskell",
        "description": ".",
        "updated": null,
        "date": "2016-09-14T07:00:00Z",
        "year": 2016,
        "month": 9,
        "day": 14,
        "taxonomies": {
          "tags": [
            "haskell",
            "data"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/data-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/09/data-haskell/",
        "components": [
          "blog",
          "2016",
          "09",
          "data-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/practical-haskell-simple-file-mirror-1.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/09/practical-haskell-simple-file-mirror-1/",
        "slug": "practical-haskell-simple-file-mirror-1",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Practical Haskell: Simple File Mirror (Part 1)",
        "description": ".",
        "updated": null,
        "date": "2016-09-14T07:00:00Z",
        "year": 2016,
        "month": 9,
        "day": 14,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/practical-haskell-simple-file-mirror-1.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/09/practical-haskell-simple-file-mirror-1/",
        "components": [
          "blog",
          "2016",
          "09",
          "practical-haskell-simple-file-mirror-1"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/public-jenkins-server.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/08/public-jenkins-server/",
        "slug": "public-jenkins-server",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announce: public Jenkins CI server",
        "description": ".",
        "updated": null,
        "date": "2016-08-01T00:00:00Z",
        "year": 2016,
        "month": 8,
        "day": 1,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/public-jenkins-server.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/08/public-jenkins-server/",
        "components": [
          "blog",
          "2016",
          "08",
          "public-jenkins-server"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announce-safe-exceptions.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/06/announce-safe-exceptions/",
        "slug": "announce-safe-exceptions",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announce: safe-exceptions, for async exception safety",
        "description": ".",
        "updated": null,
        "date": "2016-06-29T00:00:00Z",
        "year": 2016,
        "month": 6,
        "day": 29,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announce-safe-exceptions.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/06/announce-safe-exceptions/",
        "components": [
          "blog",
          "2016",
          "06",
          "announce-safe-exceptions"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/async-exceptions-stm-deadlocks.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/06/async-exceptions-stm-deadlocks/",
        "slug": "async-exceptions-stm-deadlocks",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "async exceptions, STM, and deadlocks",
        "description": ".",
        "updated": null,
        "date": "2016-06-20T00:00:00Z",
        "year": 2016,
        "month": 6,
        "day": 20,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/async-exceptions-stm-deadlocks.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/06/async-exceptions-stm-deadlocks/",
        "components": [
          "blog",
          "2016",
          "06",
          "async-exceptions-stm-deadlocks"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/weigh-package.md",
        "colocated_path": null,
        "content": "<h2 id=\"work-motivation\">Work motivation</h2>\n<p>While working for various clients that needed fast binary\nserialization, we had discovered that the <a href=\n\"https://hackage.haskell.org/package/binary\"><code>binary</code></a>\nand <a href=\n\"https://hackage.haskell.org/package/cereal\"><code>cereal</code></a>\npackages are both inefficient and so we created the <a href=\n\"https://github.com/fpco/store\"><code>store</code></a> package.</p>\n<p>In the high-frequency trading sector, we had to decode and\nencode a binary protocol into Haskell data structures for analysis.\nDuring this process it was made apparent to us that while we had\nbeen attentive to micro-benchmark with the venerable <a href=\n\"https://hackage.haskell.org/package/criterion\"><code>criterion</code></a>\npackage, we hadn't put a lot of work into ensuring that memory\nusage was well studied. Bringing down allocations (and thus work,\nand garbage collection) was key to achieving reasonable speed.</p>\n<h2 id=\"let39s-measure-space\">Let's measure space</h2>\n<p>In response, let's measure space more, in an automatic way.</p>\n<p>The currently available way to do this is by compiling with\nprofiling enabled and adding call centers and then running our\nprogram with RTS options. 
For example, we write a program with an\n<code>SCC</code> cost centre, like this:</p>\n<pre><span class=\"hs-definition\">main</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">IO</span> <span class=\"hs-conid\">()</span>\n<span class=\"hs-definition\">main</span> <span class=\n\"hs-keyglyph\">=</span> <span class=\"hs-keyword\">do</span>\n  <span class=\"hs-keyword\">let</span> <span class=\n\"hs-varop\">!</span><span class=\"hs-keyword\">_</span> <span class=\n\"hs-keyglyph\">=</span> <span class=\n\"hs-comment\">{-# SCC myfunction_10 #-}</span> <span class=\n\"hs-varid\">myfunction</span> <span class=\"hs-num\">10</span>\n  <span class=\"hs-varid\">return</span> <span class=\n\"hs-conid\">()</span></pre>\n<p>Then compile with profiling enabled (<code>-prof</code>) and run\nwith <code>+RTS -P</code> and we get an output like this:</p>\n<pre><code>COST CENTRE       MODULE no. entries  ... bytes\nMAIN              MAIN   43  0        ... 760\nCAF:main1        Main   85  0        ... 0\nmain            Main   86  1        ... 0\nmyfunction_10  Main   87  1        ... 160</code></pre>\n<p>(Information omitted with <code>...</code> to save space.)</p>\n<p>That's great, exactly the kind of information we'd like to get.\nBut we want it in a more concise, programmatic fashion, on a test\nsuite level.</p>\n<h2 id=\"announcing-codeweighcode\">Announcing\n<code>weigh</code></h2>\n<p>To serve this purpose, I've written the <a href=\n\"https://github.com/fpco/weigh\"><code>weigh</code></a> package,\nwhich seeks to automate the measuring of memory usage of programs,\nin the same way that <code>criterion</code> does for timing of\nprograms.</p>\n<p>It doesn't promise perfect measurement and its results should be\ntaken with a grain of salt, but it's reproducible. 
Unlike timing, allocation is generally\nreliable provided you use something like <a href=\n\"https://haskellstack.org\"><code>stack</code></a> to pin the GHC\nversion and packages, so you can also make a test suite out of\nit.</p>\n<h2 id=\"how-it-works\">How it works</h2>\n<p>There is a simple DSL, like <code>hspec</code>, for writing out\nyour tests. It looks like this:</p>\n<pre><span class=\"hs-keyword\">import</span> <span class=\n\"hs-conid\">Weigh</span>\n<span class=\"hs-definition\">main</span> <span class=\n\"hs-keyglyph\">=</span>\n  <span class=\"hs-varid\">mainWith</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-keyword\">do</span> <span class=\n\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 0\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">0</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 1\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">1</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 2\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">2</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 3\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">3</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 10\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">10</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 100\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">100</span><span class=\n\"hs-layout\">)</span>\n  <span class=\"hs-keyword\">where</span> <span class=\n\"hs-varid\">count</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Integer</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span 
class=\"hs-conid\">()</span>\n        <span class=\"hs-varid\">count</span> <span class=\n\"hs-num\">0</span> <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-conid\">()</span>\n        <span class=\"hs-varid\">count</span> <span class=\n\"hs-varid\">a</span> <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-varid\">count</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-varid\">a</span> <span class=\n\"hs-comment\">-</span> <span class=\"hs-num\">1</span><span class=\n\"hs-layout\">)</span></pre>\n<p>This example weighs the function <code>count</code>, which\ncounts down to zero. We want to measure the bytes allocated to\nperform the action. The output is:</p>\n<pre><code>Case                Bytes  GCs  Check\nintegers count 0        0    0  OK\nintegers count 1       32    0  OK\nintegers count 2       64    0  OK\nintegers count 3       96    0  OK\nintegers count 10     320    0  OK\nintegers count 100  3,200    0  OK</code></pre>\n<p>Weee! We can now go around weighing everything! I encourage you\nto do that. Even Haskell newbies can make use of this to get a\nvague idea of how costly their code (or libraries they're using)\nis.</p>\n<h2 id=\"real-world-use-case:-codestorecode\">Real-world use-case:\n<code>store</code></h2>\n<p>I wrote a few tests, while developing <code>weigh</code>, for\nthe <code>store</code> package: encoding of lists, vectors and\nstorable vectors. Here's the <code>criterion</code> result for\nencoding a regular <code>Vector</code> type:</p>\n<pre><code>benchmarking encode/1kb normal (Vector Int32)/store\ntime                 3.454 μs   (3.418 μs .. 3.484 μs)\nbenchmarking encode/1kb normal (Vector Int32)/cereal\ntime                 19.56 μs   (19.34 μs .. 19.79 μs)\n<p>benchmarking encode/10kb normal (Vector Int32)/store\ntime                 33.09 μs   (32.73 μs .. 33.57 μs)\nbenchmarking encode/10kb normal (Vector Int32)/cereal\ntime                 202.7 μs   (201.2 μs .. 
204.6 μs)</code></pre></p>\n<p><code>store</code> is <b>6x</b> faster than <code>cereal</code>\nat encoding Int32 vectors. Great! Our job is done, we've overcome\nprevious limitations of binary encoding speed. Let's take a look at\nhow heavy this process is. Weighing the program on 1 million and 10\nmillion elements yields:</p>\n<pre>\n<code>   1,000,000 Boxed Vector Int     Encode: Store      88,008,584     140  OK\n   1,000,000 Boxed Vector Int     Encode: Cereal    600,238,200   1,118  OK\n<p>10,000,000 Boxed Vector Int     Encode: Store     880,078,896   1,384  OK\n10,000,000 Boxed Vector Int     Encode: Cereal  6,002,099,680  11,168  OK</code></pre></p>\n<p><code>store</code> is 6.8x more memory efficient than\n<code>cereal</code>. Excellent. But is our job really finished?\nTake a look at those allocations. To simply allocate a vector of\nthat size, it's:</p>\n<pre>\n<code>   1,000,000 Boxed Vector Int     Allocate            8,007,936       1  OK\n<p>10,000,000 Boxed Vector Int     Allocate           80,078,248       1  OK</code></pre></p>\n<p>While <code>store</code> is more efficient than\n<code>cereal</code>, how are we allocating 11x the amount of space\nnecessary? We looked into this in the codebase, it turned out more\ninlining was needed. After comprehensively applying the\n<code>INLINE</code> pragma to key methods and functions, the memory\nwas brought down to:</p>\n<pre>\n<code>   1,000,000 Boxed Vector Int     Allocate            8,007,936       1  OK\n   1,000,000 Boxed Vector Int     Encode: Store      16,008,568       2  OK\n   1,000,000 Boxed Vector Int     Encode: Cereal    600,238,200   1,118  OK\n<p>10,000,000 Boxed Vector Int     Allocate           80,078,248       1  OK\n10,000,000 Boxed Vector Int     Encode: Store     160,078,880       2  OK\n10,000,000 Boxed Vector Int     Encode: Cereal  6,002,099,680  11,168  OK</code></pre></p>\n<p>Now, <code>store</code> takes an additional 8MB to encode an 8MB\nvector, 80MB for an 80MB buffer. 
That's perfect 1:1 memory usage!\nLet's check out the new speed without these allocations:</p>\n<pre><code>benchmarking encode/1kb normal (Vector Int32)/store\ntime                 848.4 ns   (831.6 ns .. 868.6 ns)\nbenchmarking encode/1kb normal (Vector Int32)/cereal\ntime                 20.80 μs   (20.33 μs .. 21.20 μs)\nbenchmarking encode/10kb normal (Vector Int32)/store\ntime                 7.708 μs   (7.606 μs .. 7.822 μs)\nbenchmarking encode/10kb normal (Vector Int32)/cereal\ntime                 207.4 μs   (204.9 μs .. 210.3 μs)</code></pre>\n<p><code>store</code> is 4x faster than previously!\n<code>store</code> is also now 20x faster than <code>cereal</code>\nat encoding a vector of ints.</p>\n<h2 id=\"containers-vs-unordered-containers\">Containers vs unordered-containers</h2>\n<p>Another quick example: the Map structures from the\ncontainers and unordered-containers packages. Let's weigh how heavy <code>fromList</code> is\non 1 million elements. For fun, the keys are randomly generated\nrather than ordered. 
We force the list completely ahead of time,\nbecause we just want to see the allocations by the library, not our\ninput list.</p>\n<pre><span class=\"hs-definition\">fromlists</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Weigh</span> <span class=\"hs-conid\">()</span>\n<span class=\"hs-definition\">fromlists</span> <span class=\n\"hs-keyglyph\">=</span>\n  <span class=\"hs-keyword\">do</span> <span class=\n\"hs-keyword\">let</span> <span class=\"hs-varop\">!</span><span class=\n\"hs-varid\">elems</span> <span class=\"hs-keyglyph\">=</span>\n           <span class=\"hs-varid\">force</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-varid\">zip</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-varid\">randoms</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-varid\">mkStdGen</span> <span class=\n\"hs-num\">0</span><span class=\"hs-layout\">)</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-keyglyph\">[</span><span class=\"hs-conid\">Int</span><span class=\n\"hs-keyglyph\">]</span><span class=\"hs-layout\">)</span>\n                      <span class=\n\"hs-keyglyph\">[</span><span class=\"hs-num\">1</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Int</span> <span class=\n\"hs-keyglyph\">..</span> <span class=\n\"hs-num\">1000000</span><span class=\n\"hs-keyglyph\">]</span><span class=\"hs-layout\">)</span>\n     <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"Data.Map.Strict.fromList     (1 million)\"</span> <span class=\"hs-conid\">Data</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">Map</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">Strict</span><span class=\"hs-varop\">.</span><span class=\"hs-varid\">fromList</span> <span class=\"hs-varid\">elems</span>\n     <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"Data.Map.Lazy.fromList       (1 million)\"</span> <span class=\"hs-conid\">Data</span><span 
class=\"hs-varop\">.</span><span class=\"hs-conid\">Map</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">Lazy</span><span class=\"hs-varop\">.</span><span class=\"hs-varid\">fromList</span> <span class=\"hs-varid\">elems</span>\n     <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"Data.IntMap.Strict.fromList  (1 million)\"</span> <span class=\"hs-conid\">Data</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">IntMap</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">Strict</span><span class=\"hs-varop\">.</span><span class=\"hs-varid\">fromList</span> <span class=\"hs-varid\">elems</span>\n     <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"Data.IntMap.Lazy.fromList    (1 million)\"</span> <span class=\"hs-conid\">Data</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">IntMap</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">Lazy</span><span class=\"hs-varop\">.</span><span class=\"hs-varid\">fromList</span> <span class=\"hs-varid\">elems</span>\n     <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"Data.HashMap.Strict.fromList (1 million)\"</span> <span class=\"hs-conid\">Data</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">HashMap</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">Strict</span><span class=\"hs-varop\">.</span><span class=\"hs-varid\">fromList</span> <span class=\"hs-varid\">elems</span>\n     <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"Data.HashMap.Lazy.fromList   (1 million)\"</span> <span class=\"hs-conid\">Data</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">HashMap</span><span class=\"hs-varop\">.</span><span class=\"hs-conid\">Lazy</span><span class=\"hs-varop\">.</span><span class=\"hs-varid\">fromList</span> <span class=\"hs-varid\">elems</span></pre>\n<p>We clearly see that <code>IntMap</code> from\n<code>containers</code> is about 1.3x more memory efficient 
than\nthe generic <code>Ord</code>-based <code>Map</code>. However,\n<code>HashMap</code> wipes the floor with both of them (for\n<code>Int</code> keys, at least), using 6.3x less memory than\n<code>Map</code> and 4.8x less memory than <code>IntMap</code>:</p>\n<pre>\n<code>Data.Map.Strict.fromList     (1 million)  1,016,187,152  1,942  OK\nData.Map.Lazy.fromList       (1 million)  1,016,187,152  1,942  OK\nData.IntMap.Strict.fromList  (1 million)    776,852,648  1,489  OK\nData.IntMap.Lazy.fromList    (1 million)    776,852,648  1,489  OK\nData.HashMap.Strict.fromList (1 million)    161,155,384    314  OK\nData.HashMap.Lazy.fromList   (1 million)    161,155,384    314  OK</code></pre>\n<p>As you can see above, it takes just a few trivial lines of code\nto generate this result.</p>\n<h2 id=\"caveat\">Caveat</h2>\n<p>But beware: it's not going to be obvious exactly where\nallocations are coming from in a computation (if you need to know\nthat, use the profiler). It's better to consider a computation\nholistically: this is how much was allocated to produce this\nresult.</p>\n<p>Analysis at finer granularity is likely to be guess-work (beyond\neven what's available in profiling). For the brave, let's study\nsome examples of that.</p>\n<h2 id=\"interpreting-the-results:-codeintegercode\">Interpreting the\nresults: <code>Integer</code></h2>\n<p>Notice that in the table we generated, there is a rather odd\nincrease in allocations:</p>\n<pre><code>Case                Bytes  GCs  Check\nintegers count 0        0    0  OK\nintegers count 1       32    0  OK\nintegers count 2       64    0  OK\nintegers count 3       96    0  OK\nintegers count 10     320    0  OK\nintegers count 100  3,200    0  OK</code></pre>\n<p>What's the explanation for those bytes in each iteration?</p>\n<p>Refreshing our memory: the space taken up by a \"small\"\n<code>Integer</code> is <a href=\"https://stackoverflow.com/a/3256825/89574\">two machine\nwords</a>. On 64-bit, that's 16 bytes. 
<code>Integer</code> is\ndefined like this:</p>\n<pre><span class=\"hs-keyword\">data</span> <span class=\n\"hs-conid\">Integer</span>\n  <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-conid\">S</span><span class=\"hs-cpp\">#</span> <span class=\n\"hs-conid\">Int</span><span class=\n\"hs-cpp\">#</span>                            <span class=\n\"hs-comment\">-- small integers</span>\n  <span class=\"hs-keyglyph\">|</span> <span class=\n\"hs-conid\">J</span><span class=\"hs-cpp\">#</span> <span class=\n\"hs-conid\">Int</span><span class=\"hs-cpp\">#</span> <span class=\n\"hs-conid\">ByteArray</span><span class=\n\"hs-cpp\">#</span>                 <span class=\n\"hs-comment\">-- large integers</span></pre>\n<p>For the rest, we'd expect only 16 bytes per iteration, but we're\nseeing more than that. <b>Why?</b> Let's look at <a href=\n\"https://stackoverflow.com/questions/6121146/reading-ghc-core\">the\nCore</a> for <code>count</code>:</p>\n<pre><span class=\"hs-conid\">Main</span><span class=\n\"hs-varop\">.</span><span class=\n\"hs-varid\">main48</span> <span class=\"hs-keyglyph\">=</span> <span class=\"hs-sel\">__integer</span> <span class=\"hs-num\">0</span>\n<span class=\"hs-conid\">Main</span><span class=\n\"hs-varop\">.</span><span class=\n\"hs-varid\">main41</span> <span class=\"hs-keyglyph\">=</span> <span class=\"hs-sel\">__integer</span> <span class=\"hs-num\">1</span>\n<span class=\"hs-conid\">Rec</span> <span class=\"hs-layout\">{</span>\n<span class=\"hs-conid\">Main</span><span class=\n\"hs-varop\">.</span><span class=\n\"hs-varid\">main_count</span> <span class=\n\"hs-keyglyph\">[</span><span class=\"hs-conid\">Occ</span><span class=\n\"hs-keyglyph\">=</span><span class=\n\"hs-conid\">LoopBreaker</span><span class=\n\"hs-keyglyph\">]</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Integer</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\"hs-conid\">()</span>\n<span class=\"hs-keyglyph\">[</span><span 
class=\n\"hs-conid\">GblId</span><span class=\n\"hs-layout\">,</span> <span class=\"hs-conid\">Arity</span><span class=\"hs-keyglyph\">=</span><span class=\"hs-num\">1</span><span class=\"hs-layout\">,</span> <span class=\"hs-conid\">Str</span><span class=\"hs-keyglyph\">=</span><span class=\"hs-conid\">DmdType</span> <span class=\"hs-varop\">&lt;</span><span class=\"hs-conid\">S</span><span class=\"hs-layout\">,</span><span class=\"hs-conid\">U</span><span class=\"hs-varop\">&gt;</span><span class=\"hs-keyglyph\">]</span>\n<span class=\"hs-conid\">Main</span><span class=\n\"hs-varop\">.</span><span class=\n\"hs-varid\">main_count</span> <span class=\"hs-keyglyph\">=</span>\n  <span class=\"hs-keyglyph\">\\</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-varid\">ds_d4Am</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Integer</span><span class=\n\"hs-layout\">)</span> <span class=\"hs-keyglyph\">-&gt;</span>\n    <span class=\"hs-keyword\">case</span> <span class=\n\"hs-varid\">eqInteger</span><span class=\n\"hs-cpp\">#</span> <span class=\n\"hs-varid\">ds_d4Am</span> <span class=\"hs-conid\">Main</span><span class=\"hs-varop\">.</span><span class=\"hs-varid\">main48</span> <span class=\"hs-keyword\">of</span> <span class=\"hs-varid\">wild_a4Fq</span> <span class=\"hs-layout\">{</span> <span class=\"hs-sel\">__DEFAULT</span> <span class=\"hs-keyglyph\">-&gt;</span>\n    <span class=\"hs-keyword\">case</span> <span class=\n\"hs-varid\">ghc</span><span class=\"hs-comment\">-</span><span class=\n\"hs-varid\">prim</span><span class=\"hs-comment\">-</span><span class=\n\"hs-num\">0.4</span><span class=\"hs-varop\">.</span><span class=\n\"hs-num\">0.0</span><span class=\"hs-conop\">:</span><span class=\n\"hs-conid\">GHC</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">Prim</span><span class=\"hs-varop\">.</span><span class=\n\"hs-varid\">tagToEnum</span><span class=\n\"hs-cpp\">#</span> <span 
class=\"hs-keyglyph\">@</span> <span class=\n\"hs-conid\">Bool</span> <span class=\"hs-varid\">wild_a4Fq</span>\n    <span class=\"hs-keyword\">of</span> <span class=\n\"hs-keyword\">_</span> <span class=\n\"hs-keyglyph\">[</span><span class=\"hs-conid\">Occ</span><span class=\n\"hs-keyglyph\">=</span><span class=\n\"hs-conid\">Dead</span><span class=\"hs-keyglyph\">]</span> <span class=\"hs-layout\">{</span>\n      <span class=\"hs-conid\">False</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-conid\">Main</span><span class=\"hs-varop\">.</span><span class=\n\"hs-varid\">main_count</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-varid\">minusInteger</span> <span class=\n\"hs-varid\">ds_d4Am</span> <span class=\n\"hs-conid\">Main</span><span class=\"hs-varop\">.</span><span class=\n\"hs-varid\">main41</span><span class=\n\"hs-layout\">)</span><span class=\"hs-layout\">;</span>\n      <span class=\"hs-conid\">True</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-varid\">ghc</span><span class=\"hs-comment\">-</span><span class=\n\"hs-varid\">prim</span><span class=\"hs-comment\">-</span><span class=\n\"hs-num\">0.4</span><span class=\"hs-varop\">.</span><span class=\n\"hs-num\">0.0</span><span class=\"hs-conop\">:</span><span class=\n\"hs-conid\">GHC</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">Tuple</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">()</span>\n    <span class=\"hs-layout\">}</span>\n    <span class=\"hs-layout\">}</span>\n<span class=\"hs-definition\">end</span> <span class=\n\"hs-conid\">Rec</span> <span class=\"hs-layout\">}</span></pre>\n<p>The <code>eqInteger#</code> function is <a href=\n\"https://ghc.haskell.org/trac/ghc/wiki/PrimBool#Implementationdetails\">\na pretend-primop</a>, which apparently combines with\n<code>tagToEnum#</code> and is optimized away at <a href=\n\"https://ghc.haskell.org/trac/ghc/ticket/8317\">the code generation\nphase.</a> This 
should lead to an unboxed comparison of something\nlike <code>Int#</code>, which should not allocate. This leaves only\nthe subtraction operation, which should allocate one new 16-byte\n<code>Integer</code>.</p>\n<p>So where are those additional 16 bytes from? <a href=\n\"https://hackage.haskell.org/package/integer-gmp-1.0.0.1/docs/src/GHC.Integer.Type.html#minusInteger\">\nThe implementation of <code>minusInteger</code> for\n<code>Integer</code> types</a> is actually <code>x +\n-y</code>:</p>\n<pre><span class=\"hs-comment\">-- TODO</span>\n<span class=\n\"hs-comment\">-- | Subtract two 'Integer's from each other.</span>\n<span class=\"hs-definition\">minusInteger</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Integer</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-conid\">Integer</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\"hs-conid\">Integer</span>\n<span class=\"hs-definition\">minusInteger</span> <span class=\n\"hs-varid\">x</span> <span class=\"hs-varid\">y</span> <span class=\n\"hs-keyglyph\">=</span> <span class=\n\"hs-varid\">inline</span> <span class=\n\"hs-varid\">plusInteger</span> <span class=\n\"hs-varid\">x</span> <span class=\"hs-layout\">(</span><span class=\n\"hs-varid\">inline</span> <span class=\n\"hs-varid\">negateInteger</span> <span class=\n\"hs-varid\">y</span><span class=\"hs-layout\">)</span></pre>\n<p>This means we're allocating <b>one more <code>Integer</code></b>. That\nexplains the additional 16 bytes!</p>\n<p>There's a <code>TODO</code> there. 
I guess someone implemented\n<code>negateInteger</code> and <code>plusInteger</code> (which is\n<a href=\n\"https://hackage.haskell.org/package/integer-gmp-1.0.0.1/docs/src/GHC.Integer.Type.html#plusInteger\">\nnon-trivial</a>) and had enough.</p>\n<p>If we implement a second function <code>count'</code> that takes\nthis into account,</p>\n<pre><span class=\"hs-keyword\">import</span> <span class=\n\"hs-conid\">Weigh</span>\n<span class=\"hs-definition\">main</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">IO</span> <span class=\"hs-conid\">()</span>\n<span class=\"hs-definition\">main</span> <span class=\n\"hs-keyglyph\">=</span>\n  <span class=\"hs-varid\">mainWith</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-keyword\">do</span> <span class=\n\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 0\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">0</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 1\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">1</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 2\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">2</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count 3\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">3</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count' 0\"</span> <span class=\n\"hs-varid\">count'</span> <span class=\"hs-num\">0</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count' 1\"</span> <span class=\n\"hs-varid\">count'</span> <span class=\"hs-num\">1</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count' 2\"</span> <span class=\n\"hs-varid\">count'</span> <span 
class=\"hs-num\">2</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"integers count' 3\"</span> <span class=\n\"hs-varid\">count'</span> <span class=\"hs-num\">3</span><span class=\n\"hs-layout\">)</span>\n  <span class=\"hs-keyword\">where</span> <span class=\n\"hs-varid\">count</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Integer</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\"hs-conid\">()</span>\n        <span class=\"hs-varid\">count</span> <span class=\n\"hs-num\">0</span> <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-conid\">()</span>\n        <span class=\"hs-varid\">count</span> <span class=\n\"hs-varid\">a</span> <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-varid\">count</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-varid\">a</span> <span class=\n\"hs-comment\">-</span> <span class=\"hs-num\">1</span><span class=\n\"hs-layout\">)</span>\n        <span class=\"hs-varid\">count'</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Integer</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\"hs-conid\">()</span>\n        <span class=\"hs-varid\">count'</span> <span class=\n\"hs-num\">0</span> <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-conid\">()</span>\n        <span class=\"hs-varid\">count'</span> <span class=\n\"hs-varid\">a</span> <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-varid\">count'</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-varid\">a</span> <span class=\n\"hs-varop\">+</span> <span class=\"hs-layout\">(</span><span class=\n\"hs-comment\">-</span><span class=\"hs-num\">1</span><span class=\n\"hs-layout\">)</span><span class=\"hs-layout\">)</span></pre>\n<p>we get more reasonable allocations:</p>\n<pre><code>Case                Bytes  GCs  Check\nintegers count 0        0    0  OK\nintegers count 1       32    0  OK\nintegers count 2       64    0  OK\nintegers 
count 3       96    0  OK\n\nintegers count' 0       0    0  OK\nintegers count' 1      16    0  OK\nintegers count' 2      32    0  OK\nintegers count' 3      48    0  OK</code></pre>\n<p>It turns out that <code>count'</code> is 20% faster (from\n<code>criterion</code> benchmarks), but realistically, if speed\nmatters, we'd be using <code>Int</code>, which is practically 1000x\nfaster.</p>\n<p>What did we learn? Even something as simple as\n<code>Integer</code> subtraction doesn't behave as you would\nnaively expect.</p>\n<h2 id=\"considering-a-different-type:-codeintcode\">Considering a\ndifferent type: <code>Int</code></h2>\n<p>For comparison, let's look at <code>Int</code>:</p>\n<pre><span class=\"hs-keyword\">import</span> <span class=\n\"hs-conid\">Weigh</span>\n<span class=\"hs-definition\">main</span> <span class=\n\"hs-keyglyph\">=</span>\n  <span class=\"hs-varid\">mainWith</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-keyword\">do</span> <span class=\n\"hs-varid\">func</span> <span class=\n\"hs-str\">\"int count 0\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">0</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"int count 1\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">1</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"int count 10\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">10</span>\n               <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"int count 100\"</span> <span class=\n\"hs-varid\">count</span> <span class=\"hs-num\">100</span><span class=\n\"hs-layout\">)</span>\n  <span class=\"hs-keyword\">where</span> <span class=\n\"hs-varid\">count</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Int</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\"hs-conid\">()</span>\n        <span class=\"hs-varid\">count</span> <span 
class=\n\"hs-num\">0</span> <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-conid\">()</span>\n        <span class=\"hs-varid\">count</span> <span class=\n\"hs-varid\">a</span> <span class=\"hs-keyglyph\">=</span> <span class=\n\"hs-varid\">count</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-varid\">a</span> <span class=\n\"hs-comment\">-</span> <span class=\"hs-num\">1</span><span class=\n\"hs-layout\">)</span></pre>\n<p>The output is:</p>\n<pre><code>Case                Bytes  GCs  Check\nints count 1            0    0  OK\nints count 10           0    0  OK\nints count 1000000      0    0  OK</code></pre>\n<p>It allocates zero bytes. Why? Let's take a look at the Core:</p>\n<pre><span class=\"hs-conid\">Rec</span> <span class=\n\"hs-layout\">{</span>\n<span class=\"hs-conid\">Main</span><span class=\n\"hs-varop\">.$</span><span class=\n\"hs-varid\">wcount1</span> <span class=\n\"hs-keyglyph\">[</span><span class=\n\"hs-conid\">InlPrag</span><span class=\n\"hs-keyglyph\">=</span><span class=\n\"hs-keyglyph\">[</span><span class=\"hs-num\">0</span><span class=\n\"hs-keyglyph\">]</span><span class=\"hs-layout\">,</span> <span class=\n\"hs-conid\">Occ</span><span class=\"hs-keyglyph\">=</span><span class=\n\"hs-conid\">LoopBreaker</span><span class=\"hs-keyglyph\">]</span>\n  <span class=\"hs-keyglyph\">::</span> <span class=\n\"hs-varid\">ghc</span><span class=\"hs-comment\">-</span><span class=\n\"hs-varid\">prim</span><span class=\"hs-comment\">-</span><span class=\n\"hs-num\">0.4</span><span class=\"hs-varop\">.</span><span class=\n\"hs-num\">0.0</span><span class=\"hs-conop\">:</span><span class=\n\"hs-conid\">GHC</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">Prim</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">Int</span><span class=\"hs-cpp\">#</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\"hs-conid\">()</span>\n<span class=\"hs-keyglyph\">[</span><span 
class=\n\"hs-conid\">GblId</span><span class=\n\"hs-layout\">,</span> <span class=\"hs-conid\">Arity</span><span class=\"hs-keyglyph\">=</span><span class=\"hs-num\">1</span><span class=\"hs-layout\">,</span> <span class=\"hs-conid\">Caf</span><span class=\"hs-keyglyph\">=</span><span class=\"hs-conid\">NoCafRefs</span><span class=\"hs-layout\">,</span> <span class=\"hs-conid\">Str</span><span class=\"hs-keyglyph\">=</span><span class=\"hs-conid\">DmdType</span> <span class=\"hs-varop\">&lt;</span><span class=\"hs-conid\">S</span><span class=\"hs-layout\">,</span><span class=\"hs-num\">1</span><span class=\"hs-varop\">*</span><span class=\"hs-conid\">U</span><span class=\"hs-varop\">&gt;</span><span class=\"hs-keyglyph\">]</span>\n<span class=\"hs-conid\">Main</span><span class=\n\"hs-varop\">.$</span><span class=\n\"hs-varid\">wcount1</span> <span class=\"hs-keyglyph\">=</span>\n  <span class=\"hs-keyglyph\">\\</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-varid\">ww_s57C</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-varid\">ghc</span><span class=\"hs-comment\">-</span><span class=\n\"hs-varid\">prim</span><span class=\"hs-comment\">-</span><span class=\n\"hs-num\">0.4</span><span class=\"hs-varop\">.</span><span class=\n\"hs-num\">0.0</span><span class=\"hs-conop\">:</span><span class=\n\"hs-conid\">GHC</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">Prim</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">Int</span><span class=\"hs-cpp\">#</span><span class=\n\"hs-layout\">)</span> <span class=\"hs-keyglyph\">-&gt;</span>\n    <span class=\"hs-keyword\">case</span> <span class=\n\"hs-varid\">ww_s57C</span> <span class=\n\"hs-keyword\">of</span> <span class=\n\"hs-varid\">ds_X4Gu</span> <span class=\"hs-layout\">{</span>\n      <span class=\"hs-sel\">__DEFAULT</span> <span class=\n\"hs-keyglyph\">-&gt;</span>\n        <span class=\"hs-conid\">Main</span><span class=\n\"hs-varop\">.$</span><span 
class=\n\"hs-varid\">wcount1</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-varid\">ghc</span><span class=\n\"hs-comment\">-</span><span class=\"hs-varid\">prim</span><span class=\n\"hs-comment\">-</span><span class=\"hs-num\">0.4</span><span class=\n\"hs-varop\">.</span><span class=\"hs-num\">0.0</span><span class=\n\"hs-conop\">:</span><span class=\"hs-conid\">GHC</span><span class=\n\"hs-varop\">.</span><span class=\"hs-conid\">Prim</span><span class=\n\"hs-varop\">.-#</span> <span class=\n\"hs-varid\">ds_X4Gu</span> <span class=\"hs-num\">1</span><span class=\n\"hs-layout\">)</span><span class=\"hs-layout\">;</span>\n      <span class=\"hs-num\">0</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-varid\">ghc</span><span class=\"hs-comment\">-</span><span class=\n\"hs-varid\">prim</span><span class=\"hs-comment\">-</span><span class=\n\"hs-num\">0.4</span><span class=\"hs-varop\">.</span><span class=\n\"hs-num\">0.0</span><span class=\"hs-conop\">:</span><span class=\n\"hs-conid\">GHC</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">Tuple</span><span class=\"hs-varop\">.</span><span class=\n\"hs-conid\">()</span>\n    <span class=\"hs-layout\">}</span>\n<span class=\"hs-definition\">end</span> <span class=\n\"hs-conid\">Rec</span> <span class=\"hs-layout\">}</span></pre>\n<p>It's clear that GHC is able to optimize this tight loop, and\nunbox the <code>Int</code> into an <code>Int#</code>, which can be\nput into a register rather than being allocated by the GHC runtime\nallocator to be freed later.</p>\n<p>The lesson is not to take for granted that everything has a 1:1\nmemory mapping at runtime with your source, and to take each case\nin context.</p>\n<h2 id=\"data-structures\">Data structures</h2>\n<p>Finally, from our contrived examples we can take a look at\nuser-defined data types and observe some of the optimizations that\nGHC does for memory.</p>\n<p>Let's demonstrate that unpacking a data structure yields 
less\nmemory. Here is a data type that contains an <code>Int</code>:</p>\n<pre><span class=\"hs-keyword\">data</span> <span class=\n\"hs-conid\">HasInt</span> <span class=\n\"hs-keyglyph\">=</span> <span class=\n\"hs-conid\">HasInt</span> <span class=\n\"hs-varop\">!</span><span class=\"hs-conid\">Int</span>\n  <span class=\"hs-keyword\">deriving</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-conid\">Generic</span><span class=\"hs-layout\">)</span>\n<span class=\"hs-keyword\">instance</span> <span class=\n\"hs-conid\">NFData</span> <span class=\"hs-conid\">HasInt</span></pre>\n<p>Here are two data types which both contain a <code>HasInt</code>:\nthe first simply stores it, while the second unpacks it.</p>\n<pre><span class=\"hs-keyword\">data</span> <span class=\n\"hs-conid\">HasPacked</span> <span class=\n\"hs-keyglyph\">=</span> <span class=\n\"hs-conid\">HasPacked</span> <span class=\"hs-conid\">HasInt</span>\n  <span class=\"hs-keyword\">deriving</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-conid\">Generic</span><span class=\"hs-layout\">)</span>\n<span class=\"hs-keyword\">instance</span> <span class=\n\"hs-conid\">NFData</span> <span class=\"hs-conid\">HasPacked</span>\n\n<span class=\"hs-keyword\">data</span> <span class=\n\"hs-conid\">HasUnpacked</span> <span class=\n\"hs-keyglyph\">=</span> <span class=\n\"hs-conid\">HasUnpacked</span> <span class=\n\"hs-comment\">{-# UNPACK #-}</span> <span class=\n\"hs-varop\">!</span><span class=\"hs-conid\">HasInt</span>\n  <span class=\"hs-keyword\">deriving</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-conid\">Generic</span><span class=\"hs-layout\">)</span>\n<span class=\"hs-keyword\">instance</span> <span class=\n\"hs-conid\">NFData</span> <span class=\n\"hs-conid\">HasUnpacked</span></pre>\n<p>We can measure the difference by weighing them like this:</p>\n<pre><span class=\n\"hs-comment\">-- | Weigh: packing vs no packing.</span>\n<span 
class=\"hs-definition\">packing</span> <span class=\n\"hs-keyglyph\">::</span> <span class=\n\"hs-conid\">Weigh</span> <span class=\"hs-conid\">()</span>\n<span class=\"hs-definition\">packing</span> <span class=\n\"hs-keyglyph\">=</span>\n  <span class=\"hs-keyword\">do</span> <span class=\n\"hs-varid\">func</span> <span class=\n\"hs-str\">\"\\\\x -&gt; HasInt x\"</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-keyglyph\">\\</span><span class=\n\"hs-varid\">x</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-conid\">HasInt</span> <span class=\n\"hs-varid\">x</span><span class=\"hs-layout\">)</span> <span class=\n\"hs-num\">5</span>\n     <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"\\\\x -&gt; HasPacked (HasInt x)\"</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-keyglyph\">\\</span><span class=\n\"hs-varid\">x</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-conid\">HasPacked</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-conid\">HasInt</span> <span class=\n\"hs-varid\">x</span><span class=\"hs-layout\">)</span><span class=\n\"hs-layout\">)</span> <span class=\"hs-num\">5</span>\n     <span class=\"hs-varid\">func</span> <span class=\n\"hs-str\">\"\\\\x -&gt; HasUnpacked (HasInt x)\"</span> <span class=\n\"hs-layout\">(</span><span class=\"hs-keyglyph\">\\</span><span class=\n\"hs-varid\">x</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-conid\">HasUnpacked</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-conid\">HasInt</span> <span class=\n\"hs-varid\">x</span><span class=\"hs-layout\">)</span><span class=\n\"hs-layout\">)</span> <span class=\"hs-num\">5</span></pre>\n<p>The output is:</p>\n<pre><span class=\"hs-keyglyph\">\\</span><span class=\n\"hs-varid\">x</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-conid\">HasInt</span> <span class=\n\"hs-varid\">x</span>                      <span 
class=\n\"hs-num\">16</span>    <span class=\"hs-num\">0</span>  <span class=\n\"hs-conid\">OK</span>\n<span class=\"hs-keyglyph\">\\</span><span class=\n\"hs-varid\">x</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-conid\">HasPacked</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-conid\">HasInt</span> <span class=\n\"hs-varid\">x</span><span class=\"hs-layout\">)</span>          <span class=\"hs-num\">32</span>    <span class=\"hs-num\">0</span>  <span class=\"hs-conid\">OK</span>\n<span class=\"hs-keyglyph\">\\</span><span class=\n\"hs-varid\">x</span> <span class=\n\"hs-keyglyph\">-&gt;</span> <span class=\n\"hs-conid\">HasUnpacked</span> <span class=\n\"hs-layout\">(</span><span class=\n\"hs-conid\">HasInt</span> <span class=\n\"hs-varid\">x</span><span class=\"hs-layout\">)</span>        <span class=\"hs-num\">16</span>    <span class=\"hs-num\">0</span>  <span class=\"hs-conid\">OK</span></pre>\n<p>Voilà! Here we've demonstrated that:</p>\n<ul>\n<li><code>HasInt x</code> consists of the 8 byte header for the\nconstructor, and 8 bytes for the Int.</li>\n<li><code>HasPacked</code> has 8 bytes for the constructor, 8 bytes\nfor the first slot, then another 8 bytes for the\n<code>HasInt</code> constructor, finally 8 bytes for the\n<code>Int</code> itself.</li>\n<li><code>HasUnpacked</code> only allocates 8 bytes for the header,\nand 8 bytes for the <code>Int</code>.</li>\n</ul>\n<p>GHC did what we wanted!</p>\n<h2 id=\"summary\">Summary</h2>\n<p>We've looked at:</p>\n<ul>\n<li>What lead to this package.</li>\n<li>Propose that we start measuring our functions more, especially\nlibraries.</li>\n<li>How to use this package.</li>\n<li>Some of our use-cases at FP Complete.</li>\n<li>Caveats.</li>\n<li>Some contrived examples do not lead to obvious\nexplanations.</li>\n</ul>\n<p>Now I encourage you to try it out!</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/2016/05/weigh-package/",
        "slug": "weigh-package",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "weigh: Measuring allocations in Haskell",
        "description": ".",
        "updated": null,
        "date": "2016-05-27T00:00:00Z",
        "year": 2016,
        "month": 5,
        "day": 27,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/05/weigh-package/",
        "components": [
          "blog",
          "2016",
          "05",
          "weigh-package"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "containers-vs-unordered-containers",
            "permalink": "https://tech.fpcomplete.com/blog/2016/05/weigh-package/#containers-vs-unordered-containers",
            "title": "Containers vs unordered-containers",
            "children": []
          }
        ],
        "word_count": 6157,
        "reading_time": 31,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/library/containers/",
            "title": "containers: Maps, Sets, and more"
          }
        ]
      },
      {
        "relative_path": "blog/moving-stackage-nightly-ghc-8.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/05/moving-stackage-nightly-ghc-8/",
        "slug": "moving-stackage-nightly-ghc-8",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Moving Stackage Nightly to GHC 8.0",
        "description": ".",
        "updated": null,
        "date": "2016-05-26T18:15:00Z",
        "year": 2016,
        "month": 5,
        "day": 26,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/moving-stackage-nightly-ghc-8.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/05/moving-stackage-nightly-ghc-8/",
        "components": [
          "blog",
          "2016",
          "05",
          "moving-stackage-nightly-ghc-8"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/store-package.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/05/store-package/",
        "slug": "store-package",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "store: a new and efficient binary serialization library",
        "description": ".",
        "updated": null,
        "date": "2016-05-24T05:00:00Z",
        "year": 2016,
        "month": 5,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Sloan",
          "html": "hubspot-blogs/store-package.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/05/store-package/",
        "components": [
          "blog",
          "2016",
          "05",
          "store-package"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-security-gnupg-keys.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/05/stack-security-gnupg-keys/",
        "slug": "stack-security-gnupg-keys",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stack Security GnuPG Keys",
        "description": ".",
        "updated": null,
        "date": "2016-05-05T04:50:00Z",
        "year": 2016,
        "month": 5,
        "day": 5,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Tim Dysinger",
          "html": "hubspot-blogs/stack-security-gnupg-keys.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/05/stack-security-gnupg-keys/",
        "components": [
          "blog",
          "2016",
          "05",
          "stack-security-gnupg-keys"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/",
            "title": "Controlling access to Nomad clusters"
          }
        ]
      },
      {
        "relative_path": "blog/stackage-data-flow.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/04/stackage-data-flow/",
        "slug": "stackage-data-flow",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The Stackage data flow",
        "description": ".",
        "updated": null,
        "date": "2016-04-14T14:30:00Z",
        "year": 2016,
        "month": 4,
        "day": 14,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stackage-data-flow.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/04/stackage-data-flow/",
        "components": [
          "blog",
          "2016",
          "04",
          "stackage-data-flow"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/efficient-binary-serialization.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/03/efficient-binary-serialization/",
        "slug": "efficient-binary-serialization",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Efficient binary serialization",
        "description": ".",
        "updated": null,
        "date": "2016-03-14T05:45:00Z",
        "year": 2016,
        "month": 3,
        "day": 14,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/efficient-binary-serialization.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/03/efficient-binary-serialization/",
        "components": [
          "blog",
          "2016",
          "03",
          "efficient-binary-serialization"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/testing-ghc-with-stackage.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/02/testing-ghc-with-stackage/",
        "slug": "testing-ghc-with-stackage",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Testing GHC with Stackage",
        "description": ".",
        "updated": null,
        "date": "2016-02-22T12:00:00Z",
        "year": 2016,
        "month": 2,
        "day": 22,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/testing-ghc-with-stackage.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/02/testing-ghc-with-stackage/",
        "components": [
          "blog",
          "2016",
          "02",
          "testing-ghc-with-stackage"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/updated-haskell-travis-config.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/02/updated-haskell-travis-config/",
        "slug": "updated-haskell-travis-config",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Updated Haskell Travis config",
        "description": ".",
        "updated": null,
        "date": "2016-02-17T08:00:00Z",
        "year": 2016,
        "month": 2,
        "day": 17,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/updated-haskell-travis-config.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/02/updated-haskell-travis-config/",
        "components": [
          "blog",
          "2016",
          "02",
          "updated-haskell-travis-config"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/soh-status.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/01/soh-status/",
        "slug": "soh-status",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Status of School of Haskell 2.0",
        "description": ".",
        "updated": null,
        "date": "2016-01-06T08:00:00Z",
        "year": 2016,
        "month": 1,
        "day": 6,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Sloan",
          "html": "hubspot-blogs/soh-status.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/01/soh-status/",
        "components": [
          "blog",
          "2016",
          "01",
          "soh-status"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-travis.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/12/stack-travis/",
        "slug": "stack-travis",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Using Stack on Travis CI",
        "description": ".",
        "updated": null,
        "date": "2015-12-21T17:20:00Z",
        "year": 2015,
        "month": 12,
        "day": 21,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/stack-travis.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/12/stack-travis/",
        "components": [
          "blog",
          "2015",
          "12",
          "stack-travis"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-with-ghc-7-10-3.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/12/stack-with-ghc-7-10-3/",
        "slug": "stack-with-ghc-7-10-3",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Using Stack with GHC 7.10.3",
        "description": ".",
        "updated": null,
        "date": "2015-12-11T20:00:00Z",
        "year": 2015,
        "month": 12,
        "day": 11,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/stack-with-ghc-7-10-3.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/12/stack-with-ghc-7-10-3/",
        "components": [
          "blog",
          "2015",
          "12",
          "stack-with-ghc-7-10-3"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-stabilization.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/11/stack-stabilization/",
        "slug": "stack-stabilization",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stack entering stabilization phase",
        "description": ".",
        "updated": null,
        "date": "2015-11-07T05:00:00Z",
        "year": 2015,
        "month": 11,
        "day": 7,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/stack-stabilization.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/11/stack-stabilization/",
        "components": [
          "blog",
          "2015",
          "11",
          "stack-stabilization"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/new-haskell-ide-repo.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/10/new-haskell-ide-repo/",
        "slug": "new-haskell-ide-repo",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The new haskell-ide repo",
        "description": ".",
        "updated": null,
        "date": "2015-10-26T05:00:00Z",
        "year": 2015,
        "month": 10,
        "day": 26,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/new-haskell-ide-repo.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/10/new-haskell-ide-repo/",
        "components": [
          "blog",
          "2015",
          "10",
          "new-haskell-ide-repo"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stackage-badges.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/10/stackage-badges/",
        "slug": "stackage-badges",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stackage Badges",
        "description": ".",
        "updated": null,
        "date": "2015-10-19T05:00:00Z",
        "year": 2015,
        "month": 10,
        "day": 19,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stackage-badges.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/10/stackage-badges/",
        "components": [
          "blog",
          "2015",
          "10",
          "stackage-badges"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/seeking-software-and-systems-engineers.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/10/seeking-software-and-systems-engineers/",
        "slug": "seeking-software-and-systems-engineers",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Seeking Software and Systems Engineers (Telecommute)",
        "description": ".",
        "updated": null,
        "date": "2015-10-07T10:00:00Z",
        "year": 2015,
        "month": 10,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/seeking-software-and-systems-engineers.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/10/seeking-software-and-systems-engineers/",
        "components": [
          "blog",
          "2015",
          "10",
          "seeking-software-and-systems-engineers"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/retiring-fphc.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/10/retiring-fphc/",
        "slug": "retiring-fphc",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Retiring FP Haskell Center",
        "description": ".",
        "updated": null,
        "date": "2015-10-02T06:00:00Z",
        "year": 2015,
        "month": 10,
        "day": 2,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/retiring-fphc.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/10/retiring-fphc/",
        "components": [
          "blog",
          "2015",
          "10",
          "retiring-fphc"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-pvp.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/09/stack-pvp/",
        "slug": "stack-pvp",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "stack and the PVP",
        "description": ".",
        "updated": null,
        "date": "2015-09-21T06:00:00Z",
        "year": 2015,
        "month": 9,
        "day": 21,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stack-pvp.html",
          "blogimage": "/images/blog-listing/infrastructure.png"
        },
        "path": "/blog/2015/09/stack-pvp/",
        "components": [
          "blog",
          "2015",
          "09",
          "stack-pvp"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-more-binary-package-sharing.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/09/stack-more-binary-package-sharing/",
        "slug": "stack-more-binary-package-sharing",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "stack: more binary package sharing",
        "description": ".",
        "updated": null,
        "date": "2015-09-01T06:00:00Z",
        "year": 2015,
        "month": 9,
        "day": 1,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stack-more-binary-package-sharing.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/09/stack-more-binary-package-sharing/",
        "components": [
          "blog",
          "2015",
          "09",
          "stack-more-binary-package-sharing"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/new-in-depth-guide-stack.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/08/new-in-depth-guide-stack/",
        "slug": "new-in-depth-guide-stack",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "New in-depth guide to stack",
        "description": ".",
        "updated": null,
        "date": "2015-08-31T06:00:00Z",
        "year": 2015,
        "month": 8,
        "day": 31,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/new-in-depth-guide-stack.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/08/new-in-depth-guide-stack/",
        "components": [
          "blog",
          "2015",
          "08",
          "new-in-depth-guide-stack"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-ghc-windows.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/08/stack-ghc-windows/",
        "slug": "stack-ghc-windows",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "stack and GHC on Windows",
        "description": ".",
        "updated": null,
        "date": "2015-08-24T21:30:00Z",
        "year": 2015,
        "month": 8,
        "day": 24,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stack-ghc-windows.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/08/stack-ghc-windows/",
        "components": [
          "blog",
          "2015",
          "08",
          "stack-ghc-windows"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/bitrot-free-scripts.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2016/08/bitrot-free-scripts/",
        "slug": "bitrot-free-scripts",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Practical Haskell: Bitrot-free Scripts",
        "description": ".",
        "updated": null,
        "date": "2015-08-11T14:15:00Z",
        "year": 2015,
        "month": 8,
        "day": 11,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/bitrot-free-scripts.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2016/08/bitrot-free-scripts/",
        "components": [
          "blog",
          "2016",
          "08",
          "bitrot-free-scripts"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/haskell/tutorial/stack-script/",
            "title": "How to Script with Stack"
          }
        ]
      },
      {
        "relative_path": "blog/stack-docker.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/08/stack-docker/",
        "slug": "stack-docker",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "How stack can use Docker under the hood",
        "description": ".",
        "updated": null,
        "date": "2015-08-05T16:00:00Z",
        "year": 2015,
        "month": 8,
        "day": 5,
        "taxonomies": {
          "tags": [
            "haskell",
            "docker"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/stack-docker.html",
          "blogimage": "/images/blog-listing/docker.png"
        },
        "path": "/blog/2015/08/stack-docker/",
        "components": [
          "blog",
          "2015",
          "08",
          "stack-docker"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/platformengineering/containerization/",
            "title": "Containerization"
          }
        ]
      },
      {
        "relative_path": "blog/package-security-in-stack.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/07/package-security-in-stack/",
        "slug": "package-security-in-stack",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Package security in stack",
        "description": ".",
        "updated": null,
        "date": "2015-07-20T13:00:00Z",
        "year": 2015,
        "month": 7,
        "day": 20,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/package-security-in-stack.html",
          "blogimage": "/images/blog-listing/network-security.png"
        },
        "path": "/blog/2015/07/package-security-in-stack/",
        "components": [
          "blog",
          "2015",
          "07",
          "package-security-in-stack"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/why-is-stack-not-cabal.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/06/why-is-stack-not-cabal/",
        "slug": "why-is-stack-not-cabal",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Why is stack not cabal?",
        "description": ".",
        "updated": null,
        "date": "2015-06-24T13:00:00Z",
        "year": 2015,
        "month": 6,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mathieu Boespflug",
          "html": "hubspot-blogs/why-is-stack-not-cabal.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/06/why-is-stack-not-cabal/",
        "components": [
          "blog",
          "2015",
          "06",
          "why-is-stack-not-cabal"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stack-0-1-release.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/06/stack-0-1-release/",
        "slug": "stack-0-1-release",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "stack 0.1 released",
        "description": ".",
        "updated": null,
        "date": "2015-06-23T17:00:00Z",
        "year": 2015,
        "month": 6,
        "day": 23,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stack-0-1-release.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/06/stack-0-1-release/",
        "components": [
          "blog",
          "2015",
          "06",
          "stack-0-1-release"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-first-public-beta-stack.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/06/announcing-first-public-beta-stack/",
        "slug": "announcing-first-public-beta-stack",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "ANNOUNCING: first public beta of stack",
        "description": ".",
        "updated": null,
        "date": "2015-06-09T08:00:00Z",
        "year": 2015,
        "month": 6,
        "day": 9,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/announcing-first-public-beta-stack.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/06/announcing-first-public-beta-stack/",
        "components": [
          "blog",
          "2015",
          "06",
          "announcing-first-public-beta-stack"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/thousand-user-haskell-survey.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/thousand-user-haskell-survey/",
        "slug": "thousand-user-haskell-survey",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "What do Haskellers want? Over a thousand tell us",
        "description": ".",
        "updated": null,
        "date": "2015-05-22T20:00:00Z",
        "year": 2015,
        "month": 5,
        "day": 22,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/thousand-user-haskell-survey.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/05/thousand-user-haskell-survey/",
        "components": [
          "blog",
          "2015",
          "05",
          "thousand-user-haskell-survey"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/new-stackage-server.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/new-stackage-server/",
        "slug": "new-stackage-server",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The new Stackage Server",
        "description": ".",
        "updated": null,
        "date": "2015-05-22T07:30:00Z",
        "year": 2015,
        "month": 5,
        "day": 22,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/new-stackage-server.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/05/new-stackage-server/",
        "components": [
          "blog",
          "2015",
          "05",
          "new-stackage-server"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/inline-c.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/inline-c/",
        "slug": "inline-c",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Call C functions from Haskell without bindings",
        "description": ".",
        "updated": null,
        "date": "2015-05-20T10:00:00Z",
        "year": 2015,
        "month": 5,
        "day": 20,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Francesco Mazzoli",
          "html": "hubspot-blogs/inline-c.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/05/inline-c/",
        "components": [
          "blog",
          "2015",
          "05",
          "inline-c"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/psa-ghc-710-cabal-windows.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/psa-ghc-710-cabal-windows/",
        "slug": "psa-ghc-710-cabal-windows",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "PSA: GHC 7.10, cabal, and Windows",
        "description": ".",
        "updated": null,
        "date": "2015-05-19T08:20:00Z",
        "year": 2015,
        "month": 5,
        "day": 19,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/psa-ghc-710-cabal-windows.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/05/psa-ghc-710-cabal-windows/",
        "components": [
          "blog",
          "2015",
          "05",
          "psa-ghc-710-cabal-windows"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/secure-package-distribution.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/secure-package-distribution/",
        "slug": "secure-package-distribution",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Secure package distribution: ready to roll",
        "description": ".",
        "updated": null,
        "date": "2015-05-11T00:00:00Z",
        "year": 2015,
        "month": 5,
        "day": 11,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/secure-package-distribution.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/2015/05/secure-package-distribution/",
        "components": [
          "blog",
          "2015",
          "05",
          "secure-package-distribution"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/haskell-at-front-row.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/haskell-at-front-row/",
        "slug": "haskell-at-front-row",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Guest post: Haskell at Front Row",
        "description": ".",
        "updated": null,
        "date": "2015-05-09T18:00:00Z",
        "year": 2015,
        "month": 5,
        "day": 9,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/haskell-at-front-row.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/05/haskell-at-front-row/",
        "components": [
          "blog",
          "2015",
          "05",
          "haskell-at-front-row"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/haskell-web-server-in-5mb.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/haskell-web-server-in-5mb/",
        "slug": "haskell-web-server-in-5mb",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Haskell Web Server in a 5MB Docker Image",
        "description": ".",
        "updated": null,
        "date": "2015-05-06T11:00:00Z",
        "year": 2015,
        "month": 5,
        "day": 6,
        "taxonomies": {
          "tags": [
            "haskell",
            "docker",
            "devops"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Tim Dysinger",
          "html": "hubspot-blogs/haskell-web-server-in-5mb.html",
          "blogimage": "/images/blog-listing/docker.png"
        },
        "path": "/blog/2015/05/haskell-web-server-in-5mb/",
        "components": [
          "blog",
          "2015",
          "05",
          "haskell-web-server-in-5mb"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/school-of-haskell-2.0.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/school-of-haskell-2.0/",
        "slug": "school-of-haskell-2-0",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "School of Haskell 2.0",
        "description": ".",
        "updated": null,
        "date": "2015-05-04T17:00:00Z",
        "year": 2015,
        "month": 5,
        "day": 4,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Sloan",
          "html": "hubspot-blogs/school-of-haskell-2.0.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/05/school-of-haskell-2.0/",
        "components": [
          "blog",
          "2015",
          "05",
          "school-of-haskell-2.0"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/update-ghc-7-10-stackage.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/update-ghc-7-10-stackage/",
        "slug": "update-ghc-7-10-stackage",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Update on GHC 7.10 in Stackage",
        "description": ".",
        "updated": null,
        "date": "2015-04-30T18:38:00Z",
        "year": 2015,
        "month": 4,
        "day": 30,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/update-ghc-7-10-stackage.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/update-ghc-7-10-stackage/",
        "components": [
          "blog",
          "2015",
          "04",
          "update-ghc-7-10-stackage"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-stackage-install.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/announcing-stackage-install/",
        "slug": "announcing-stackage-install",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: stackage-install",
        "description": ".",
        "updated": null,
        "date": "2015-04-29T09:30:00Z",
        "year": 2015,
        "month": 4,
        "day": 29,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announcing-stackage-install.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/announcing-stackage-install/",
        "components": [
          "blog",
          "2015",
          "04",
          "announcing-stackage-install"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-stackage-upload.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/announcing-stackage-upload/",
        "slug": "announcing-stackage-upload",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: stackage-upload",
        "description": ".",
        "updated": null,
        "date": "2015-04-28T04:30:00Z",
        "year": 2015,
        "month": 4,
        "day": 28,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announcing-stackage-upload.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/announcing-stackage-upload/",
        "components": [
          "blog",
          "2015",
          "04",
          "announcing-stackage-upload"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/ghc-prof-flamegraph.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/ghc-prof-flamegraph/",
        "slug": "ghc-prof-flamegraph",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Flame graphs for GHC time profiles with ghc-prof-flamegraph",
        "description": ".",
        "updated": null,
        "date": "2015-04-27T13:00:00Z",
        "year": 2015,
        "month": 4,
        "day": 27,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Francesco Mazzoli",
          "html": "hubspot-blogs/ghc-prof-flamegraph.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/ghc-prof-flamegraph/",
        "components": [
          "blog",
          "2015",
          "04",
          "ghc-prof-flamegraph"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-stackage-cli.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/announcing-stackage-cli/",
        "slug": "announcing-stackage-cli",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: first release of Stackage CLI (Command Line Tools)",
        "description": ".",
        "updated": null,
        "date": "2015-04-20T08:38:00Z",
        "year": 2015,
        "month": 4,
        "day": 20,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announcing-stackage-cli.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/announcing-stackage-cli/",
        "components": [
          "blog",
          "2015",
          "04",
          "announcing-stackage-cli"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/future-of-soh-fphc.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/future-of-soh-fphc/",
        "slug": "future-of-soh-fphc",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The future of School of Haskell and FP Haskell Center",
        "description": ".",
        "updated": null,
        "date": "2015-04-15T05:00:00Z",
        "year": 2015,
        "month": 4,
        "day": 15,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/future-of-soh-fphc.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/future-of-soh-fphc/",
        "components": [
          "blog",
          "2015",
          "04",
          "future-of-soh-fphc"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stackage-view.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/stackage-view/",
        "slug": "stackage-view",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: stackage-view",
        "description": ".",
        "updated": null,
        "date": "2015-04-14T11:20:00Z",
        "year": 2015,
        "month": 4,
        "day": 14,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/stackage-view.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/stackage-view/",
        "components": [
          "blog",
          "2015",
          "04",
          "stackage-view"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-monad-unlift.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/announcing-monad-unlift/",
        "slug": "announcing-monad-unlift",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: monad-unlift",
        "description": ".",
        "updated": null,
        "date": "2015-04-08T11:00:00Z",
        "year": 2015,
        "month": 4,
        "day": 8,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announcing-monad-unlift.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/announcing-monad-unlift/",
        "components": [
          "blog",
          "2015",
          "04",
          "announcing-monad-unlift"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-lts-2.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/04/announcing-lts-2/",
        "slug": "announcing-lts-2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: LTS (Long Term Support) Haskell 2",
        "description": ".",
        "updated": null,
        "date": "2015-04-02T09:00:00Z",
        "year": 2015,
        "month": 4,
        "day": 2,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announcing-lts-2.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/04/announcing-lts-2/",
        "components": [
          "blog",
          "2015",
          "04",
          "announcing-lts-2"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/minghc-ghc-7-10.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/03/minghc-ghc-7-10/",
        "slug": "minghc-ghc-7-10",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "MinGHC for GHC 7.10",
        "description": ".",
        "updated": null,
        "date": "2015-03-27T09:30:00Z",
        "year": 2015,
        "month": 3,
        "day": 27,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/minghc-ghc-7-10.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/03/minghc-ghc-7-10/",
        "components": [
          "blog",
          "2015",
          "03",
          "minghc-ghc-7-10"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/composable-community-infrastructure.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/03/composable-community-infrastructure/",
        "slug": "composable-community-infrastructure",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Our composable community infrastructure",
        "description": ".",
        "updated": null,
        "date": "2015-03-26T05:50:00Z",
        "year": 2015,
        "month": 3,
        "day": 26,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mathieu Boespflug",
          "html": "hubspot-blogs/composable-community-infrastructure.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/03/composable-community-infrastructure/",
        "components": [
          "blog",
          "2015",
          "03",
          "composable-community-infrastructure"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/hackage-mirror.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/03/hackage-mirror/",
        "slug": "hackage-mirror",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Complete's Hackage mirror",
        "description": ".",
        "updated": null,
        "date": "2015-03-25T14:00:00Z",
        "year": 2015,
        "month": 3,
        "day": 25,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Emanuel Borsboom",
          "html": "hubspot-blogs/hackage-mirror.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/03/hackage-mirror/",
        "components": [
          "blog",
          "2015",
          "03",
          "hackage-mirror"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stackage-ghc-7-10.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/03/stackage-ghc-7-10/",
        "slug": "stackage-ghc-7-10",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stackage and GHC 7.10 update",
        "description": ".",
        "updated": null,
        "date": "2015-03-18T15:40:00Z",
        "year": 2015,
        "month": 3,
        "day": 18,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stackage-ghc-7-10.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/03/stackage-ghc-7-10/",
        "components": [
          "blog",
          "2015",
          "03",
          "stackage-ghc-7-10"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-executable-hash.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/03/announcing-executable-hash/",
        "slug": "announcing-executable-hash",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing executable-hash",
        "description": ".",
        "updated": null,
        "date": "2015-03-18T08:00:00Z",
        "year": 2015,
        "month": 3,
        "day": 18,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Sloan",
          "html": "hubspot-blogs/announcing-executable-hash.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/03/announcing-executable-hash/",
        "components": [
          "blog",
          "2015",
          "03",
          "announcing-executable-hash"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/upcoming-stackage-lts-2-0.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/03/upcoming-stackage-lts-2-0/",
        "slug": "upcoming-stackage-lts-2-0",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Upcoming Stackage LTS 2.0",
        "description": ".",
        "updated": null,
        "date": "2015-03-09T10:00:00Z",
        "year": 2015,
        "month": 3,
        "day": 9,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/upcoming-stackage-lts-2-0.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/03/upcoming-stackage-lts-2-0/",
        "components": [
          "blog",
          "2015",
          "03",
          "upcoming-stackage-lts-2-0"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/primitive-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/02/primitive-haskell/",
        "slug": "primitive-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Primitive Haskell",
        "description": ".",
        "updated": null,
        "date": "2015-02-17T10:00:00Z",
        "year": 2015,
        "month": 2,
        "day": 17,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/primitive-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/02/primitive-haskell/",
        "components": [
          "blog",
          "2015",
          "02",
          "primitive-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/commercial-haskell-sig.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/01/commercial-haskell-sig/",
        "slug": "commercial-haskell-sig",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Commercial Haskell Special Interest Group",
        "description": ".",
        "updated": null,
        "date": "2015-01-22T22:00:00Z",
        "year": 2015,
        "month": 1,
        "day": 22,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/commercial-haskell-sig.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/01/commercial-haskell-sig/",
        "components": [
          "blog",
          "2015",
          "01",
          "commercial-haskell-sig"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-mutable-containers-0-2.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/01/announcing-mutable-containers-0-2/",
        "slug": "announcing-mutable-containers-0-2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: mutable-containers 0.2",
        "description": ".",
        "updated": null,
        "date": "2015-01-07T11:00:00Z",
        "year": 2015,
        "month": 1,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announcing-mutable-containers-0-2.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/01/announcing-mutable-containers-0-2/",
        "components": [
          "blog",
          "2015",
          "01",
          "announcing-mutable-containers-0-2"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fphc-release-3.2.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/01/fphc-release-3.2/",
        "slug": "fphc-release-3-2",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "A New Release for the New Year: Announcing Release 3.2",
        "description": ".",
        "updated": null,
        "date": "2015-01-06T00:00:00Z",
        "year": 2015,
        "month": 1,
        "day": 6,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/fphc-release-3.2.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/01/fphc-release-3.2/",
        "components": [
          "blog",
          "2015",
          "01",
          "fphc-release-3.2"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-lts-haskell-1-0.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/01/announcing-lts-haskell-1-0/",
        "slug": "announcing-lts-haskell-1-0",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing LTS Haskell 1.0",
        "description": ".",
        "updated": null,
        "date": "2015-01-04T11:03:00Z",
        "year": 2015,
        "month": 1,
        "day": 4,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announcing-lts-haskell-1-0.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2015/01/announcing-lts-haskell-1-0/",
        "components": [
          "blog",
          "2015",
          "01",
          "announcing-lts-haskell-1-0"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/ghc-7-10-stackage-build.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/12/ghc-7-10-stackage-build/",
        "slug": "ghc-7-10-stackage-build",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "GHC 7.10RC1 Stackage build results",
        "description": ".",
        "updated": null,
        "date": "2014-12-24T10:20:00Z",
        "year": 2014,
        "month": 12,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/ghc-7-10-stackage-build.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/12/ghc-7-10-stackage-build/",
        "components": [
          "blog",
          "2014",
          "12",
          "ghc-7-10-stackage-build"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stackage-survey-results.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/12/stackage-survey-results/",
        "slug": "stackage-survey-results",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stackage: Survey results, easier usage, and LTS Haskell 0.X",
        "description": ".",
        "updated": null,
        "date": "2014-12-15T16:00:00Z",
        "year": 2014,
        "month": 12,
        "day": 15,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/stackage-survey-results.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/12/stackage-survey-results/",
        "components": [
          "blog",
          "2014",
          "12",
          "stackage-survey-results"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/dropping-ghc-74-support-fphc.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/12/dropping-ghc-74-support-fphc/",
        "slug": "dropping-ghc-74-support-fphc",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Dropping GHC 7.4 support in FP Haskell Center",
        "description": ".",
        "updated": null,
        "date": "2014-12-08T06:00:00Z",
        "year": 2014,
        "month": 12,
        "day": 8,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/dropping-ghc-74-support-fphc.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/12/dropping-ghc-74-support-fphc/",
        "components": [
          "blog",
          "2014",
          "12",
          "dropping-ghc-74-support-fphc"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/backporting-bug-fixes.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/12/backporting-bug-fixes/",
        "slug": "backporting-bug-fixes",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Backporting bug fixes: Towards LTS Haskell",
        "description": ".",
        "updated": null,
        "date": "2014-12-07T04:00:00Z",
        "year": 2014,
        "month": 12,
        "day": 7,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/backporting-bug-fixes.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/12/backporting-bug-fixes/",
        "components": [
          "blog",
          "2014",
          "12",
          "backporting-bug-fixes"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/experimental-packages-stackage.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/12/experimental-packages-stackage/",
        "slug": "experimental-packages-stackage",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Experimental package releases via Stackage Server",
        "description": ".",
        "updated": null,
        "date": "2014-12-01T06:00:00Z",
        "year": 2014,
        "month": 12,
        "day": 1,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/experimental-packages-stackage.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/12/experimental-packages-stackage/",
        "components": [
          "blog",
          "2014",
          "12",
          "experimental-packages-stackage"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stackage-open-source.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/11/stackage-open-source/",
        "slug": "stackage-open-source",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stackage server: new features and open source",
        "description": ".",
        "updated": null,
        "date": "2014-11-20T00:00:00Z",
        "year": 2014,
        "month": 11,
        "day": 20,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/stackage-open-source.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/11/stackage-open-source/",
        "components": [
          "blog",
          "2014",
          "11",
          "stackage-open-source"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/release-3.1.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/11/release-3.1/",
        "slug": "release-3-1",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "3.1 release changes",
        "description": ".",
        "updated": null,
        "date": "2014-11-06T00:00:00Z",
        "year": 2014,
        "month": 11,
        "day": 6,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/release-3.1.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/11/release-3.1/",
        "components": [
          "blog",
          "2014",
          "11",
          "release-3.1"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/new-stackage-features.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/10/new-stackage-features/",
        "slug": "new-stackage-features",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "New Stackage features",
        "description": ".",
        "updated": null,
        "date": "2014-10-20T00:00:00Z",
        "year": 2014,
        "month": 10,
        "day": 20,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/new-stackage-features.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/10/new-stackage-features/",
        "components": [
          "blog",
          "2014",
          "10",
          "new-stackage-features"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/conduit-stream-fusion.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/08/conduit-stream-fusion/",
        "slug": "conduit-stream-fusion",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "IAP: conduit stream fusion",
        "description": ".",
        "updated": null,
        "date": "2014-08-27T00:00:00Z",
        "year": 2014,
        "month": 8,
        "day": 27,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "conduit"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/conduit-stream-fusion.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/08/conduit-stream-fusion/",
        "components": [
          "blog",
          "2014",
          "08",
          "conduit-stream-fusion"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/iap-speeding-up-conduit.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/08/iap-speeding-up-conduit/",
        "slug": "iap-speeding-up-conduit",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "IAP: Speeding up conduit",
        "description": ".",
        "updated": null,
        "date": "2014-08-21T00:00:00Z",
        "year": 2014,
        "month": 8,
        "day": 21,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell",
            "conduit"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/iap-speeding-up-conduit.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/08/iap-speeding-up-conduit/",
        "components": [
          "blog",
          "2014",
          "08",
          "iap-speeding-up-conduit"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-stackage-server.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/08/announcing-stackage-server/",
        "slug": "announcing-stackage-server",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing Stackage Server",
        "description": ".",
        "updated": null,
        "date": "2014-07-31T16:25:00Z",
        "year": 2014,
        "month": 7,
        "day": 31,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/announcing-stackage-server.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/08/announcing-stackage-server/",
        "components": [
          "blog",
          "2014",
          "08",
          "announcing-stackage-server"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fphc-open-publish-announcement.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/07/fphc-open-publish-announcement/",
        "slug": "fphc-open-publish-announcement",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Haskell Center is Going Free",
        "description": ".",
        "updated": null,
        "date": "2014-07-30T16:25:00Z",
        "year": 2014,
        "month": 7,
        "day": 30,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/fphc-open-publish-announcement.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/07/fphc-open-publish-announcement/",
        "components": [
          "blog",
          "2014",
          "07",
          "fphc-open-publish-announcement"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/vectorbuilder-packed-conduit-yielding.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/07/vectorbuilder-packed-conduit-yielding/",
        "slug": "vectorbuilder-packed-conduit-yielding",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "vectorBuilder: packed-representation yielding for conduit",
        "description": ".",
        "updated": null,
        "date": "2014-05-20T16:25:00Z",
        "year": 2014,
        "month": 5,
        "day": 20,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/vectorbuilder-packed-conduit-yielding.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/07/vectorbuilder-packed-conduit-yielding/",
        "components": [
          "blog",
          "2014",
          "07",
          "vectorbuilder-packed-conduit-yielding"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/stackage-server.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/05/stackage-server/",
        "slug": "stackage-server",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Stackage Server",
        "description": ".",
        "updated": null,
        "date": "2014-05-20T11:46:00Z",
        "year": 2014,
        "month": 5,
        "day": 20,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chris Done",
          "html": "hubspot-blogs/stackage-server.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/05/stackage-server/",
        "components": [
          "blog",
          "2014",
          "05",
          "stackage-server"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/lenient-lower-bounds.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/05/lenient-lower-bounds/",
        "slug": "lenient-lower-bounds",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "GHC 7.8, transformers 0.3, and lenient lower bounds",
        "description": ".",
        "updated": null,
        "date": "2014-05-12T08:00:00Z",
        "year": 2014,
        "month": 5,
        "day": 12,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/lenient-lower-bounds.html",
          "blogimage": "/images/blog-listing/rust.png"
        },
        "path": "/blog/2014/05/lenient-lower-bounds/",
        "components": [
          "blog",
          "2014",
          "05",
          "lenient-lower-bounds"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/heartbleed.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/04/heartbleed/",
        "slug": "heartbleed",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The heartbleed bug and FP Haskell Center",
        "description": ".",
        "updated": null,
        "date": "2014-04-14T14:25:00Z",
        "year": 2014,
        "month": 4,
        "day": 14,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mike Meyer",
          "html": "hubspot-blogs/heartbleed.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/04/heartbleed/",
        "components": [
          "blog",
          "2014",
          "04",
          "heartbleed"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/mvp.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/04/mvp/",
        "slug": "mvp",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Calculating the Minimum Variance Portfolio in R, Pandas and IAP",
        "description": ".",
        "updated": null,
        "date": "2014-04-04T20:07:00Z",
        "year": 2014,
        "month": 4,
        "day": 4,
        "taxonomies": {
          "tags": [
            "haskell",
            "data"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Mike Meyer",
          "html": "hubspot-blogs/mvp.html",
          "blogimage": "/images/blog-listing/distributed-ledger.png"
        },
        "path": "/blog/2014/04/mvp/",
        "components": [
          "blog",
          "2014",
          "04",
          "mvp"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/monte-carlo-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2014/03/monte-carlo-haskell/",
        "slug": "monte-carlo-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Monte Carlo Analysis in Haskell",
        "description": ".",
        "updated": null,
        "date": "2014-03-20T08:26:00Z",
        "year": 2014,
        "month": 3,
        "day": 20,
        "taxonomies": {
          "tags": [
            "haskell",
            "data"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/monte-carlo-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2014/03/monte-carlo-haskell/",
        "components": [
          "blog",
          "2014",
          "03",
          "monte-carlo-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/call-for-entries.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/11/call-for-entries/",
        "slug": "call-for-entries",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Call For Entries",
        "description": ".",
        "updated": null,
        "date": "2013-11-25T13:12:00Z",
        "year": 2013,
        "month": 11,
        "day": 25,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Natalia Muska",
          "html": "hubspot-blogs/call-for-entries.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/11/call-for-entries/",
        "components": [
          "blog",
          "2013",
          "11",
          "call-for-entries"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/community-edition-announcement.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/11/community-edition-announcement/",
        "slug": "community-edition-announcement",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing FP Haskell Center Community Edition and Feature Upgrades",
        "description": ".",
        "updated": null,
        "date": "2013-11-18T13:12:00Z",
        "year": 2013,
        "month": 11,
        "day": 18,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Natalia Muska",
          "html": "hubspot-blogs/community-edition-announcement.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/11/community-edition-announcement/",
        "components": [
          "blog",
          "2013",
          "11",
          "community-edition-announcement"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/state-of-stackage.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/09/state-of-stackage/",
        "slug": "state-of-stackage",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The State of Stackage",
        "description": ".",
        "updated": null,
        "date": "2013-09-30T13:12:00Z",
        "year": 2013,
        "month": 9,
        "day": 30,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/state-of-stackage.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/09/state-of-stackage/",
        "components": [
          "blog",
          "2013",
          "09",
          "state-of-stackage"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/august-competition-winners.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/09/august-competition-winners/",
        "slug": "august-competition-winners",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "August Competition Winners",
        "description": ".",
        "updated": null,
        "date": "2013-09-17T13:12:00Z",
        "year": 2013,
        "month": 9,
        "day": 17,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Natalia Muska",
          "html": "hubspot-blogs/august-competition-winners.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/09/august-competition-winners/",
        "components": [
          "blog",
          "2013",
          "09",
          "august-competition-winners"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/snap-happstack-anything-else.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/08/snap-happstack-anything-else/",
        "slug": "snap-happstack-anything-else",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Snap, Happstack and anything else",
        "description": ".",
        "updated": null,
        "date": "2013-08-05T13:30:00Z",
        "year": 2013,
        "month": 8,
        "day": 5,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/snap-happstack-anything-else.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/08/snap-happstack-anything-else/",
        "components": [
          "blog",
          "2013",
          "08",
          "snap-happstack-anything-else"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/beta-refresh-one.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/07/beta-refresh-one/",
        "slug": "beta-refresh-one",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Haskell Center Beta Refresh",
        "description": ".",
        "updated": null,
        "date": "2013-07-17T20:00:00Z",
        "year": 2013,
        "month": 7,
        "day": 17,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Gregg Lebovitz",
          "html": "hubspot-blogs/beta-refresh-one.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/07/beta-refresh-one/",
        "components": [
          "blog",
          "2013",
          "07",
          "beta-refresh-one"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/ide-stackage.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/07/ide-stackage/",
        "slug": "ide-stackage",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "IDE and Stackage",
        "description": ".",
        "updated": null,
        "date": "2013-07-16T20:00:00Z",
        "year": 2013,
        "month": 7,
        "day": 16,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/ide-stackage.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/07/ide-stackage/",
        "components": [
          "blog",
          "2013",
          "07",
          "ide-stackage"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/competition-announcement.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/07/competition-announcement/",
        "slug": "competition-announcement",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Complete Launches FP Haskell Competition with $1,000 Cash Prize Each Month",
        "description": ".",
        "updated": null,
        "date": "2013-07-16T19:51:00Z",
        "year": 2013,
        "month": 7,
        "day": 16,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Ken Liu",
          "html": "hubspot-blogs/competition-announcement.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/07/competition-announcement/",
        "components": [
          "blog",
          "2013",
          "07",
          "competition-announcement"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/beta-activation-update.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/07/beta-activation-update/",
        "slug": "beta-activation-update",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Haskell Center Beta Sign-Up Still Open. Scheduled Activations Ongoing",
        "description": ".",
        "updated": null,
        "date": "2013-07-05T19:51:00Z",
        "year": 2013,
        "month": 7,
        "day": 5,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Ken Liu",
          "html": "hubspot-blogs/beta-activation-update.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/07/beta-activation-update/",
        "components": [
          "blog",
          "2013",
          "07",
          "beta-activation-update"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fp-haskell-center-beta-announcement.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/06/fp-haskell-center-beta-announcement/",
        "slug": "fp-haskell-center-beta-announcement",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Haskell Center Beta Released, and Beta Accounts Activated",
        "description": ".",
        "updated": null,
        "date": "2013-06-30T19:51:00Z",
        "year": 2013,
        "month": 6,
        "day": 30,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Ken Liu",
          "html": "hubspot-blogs/fp-haskell-center-beta-announcement.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/06/fp-haskell-center-beta-announcement/",
        "components": [
          "blog",
          "2013",
          "06",
          "fp-haskell-center-beta-announcement"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fp-haskell-center-beta-demo.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/06/fp-haskell-center-beta-demo/",
        "slug": "fp-haskell-center-beta-demo",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Haskell Center Beta Demo",
        "description": ".",
        "updated": null,
        "date": "2013-06-27T19:51:00Z",
        "year": 2013,
        "month": 6,
        "day": 27,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/fp-haskell-center-beta-demo.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/06/fp-haskell-center-beta-demo/",
        "components": [
          "blog",
          "2013",
          "06",
          "fp-haskell-center-beta-demo"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/beta-sign-up.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/06/beta-sign-up/",
        "slug": "beta-sign-up",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Haskell Center Beta Sign-Up",
        "description": ".",
        "updated": null,
        "date": "2013-06-17T14:42:00Z",
        "year": 2013,
        "month": 6,
        "day": 17,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Ken Liu",
          "html": "hubspot-blogs/beta-sign-up.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/06/beta-sign-up/",
        "components": [
          "blog",
          "2013",
          "06",
          "beta-sign-up"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/call-for-submissions.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/06/call-for-submissions/",
        "slug": "call-for-submissions",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Complete Launches Haskell in Real World Competition",
        "description": ".",
        "updated": null,
        "date": "2013-06-02T16:31:00Z",
        "year": 2013,
        "month": 6,
        "day": 2,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Ken Liu",
          "html": "hubspot-blogs/call-for-submissions.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/06/call-for-submissions/",
        "components": [
          "blog",
          "2013",
          "06",
          "call-for-submissions"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fp-haskell-center-video-blog.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/06/fp-haskell-center-video-blog/",
        "slug": "fp-haskell-center-video-blog",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Haskell Center Video Blog",
        "description": ".",
        "updated": null,
        "date": "2013-05-24T14:42:00Z",
        "year": 2013,
        "month": 5,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/fp-haskell-center-video-blog.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/06/fp-haskell-center-video-blog/",
        "components": [
          "blog",
          "2013",
          "06",
          "fp-haskell-center-video-blog"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/haskell-from-c.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/06/haskell-from-c/",
        "slug": "haskell-from-c",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Haskell from C: Where are the for Loops?",
        "description": ".",
        "updated": null,
        "date": "2013-03-19T16:31:00Z",
        "year": 2013,
        "month": 3,
        "day": 19,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chad Scherrer",
          "html": "hubspot-blogs/haskell-from-c.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/06/haskell-from-c/",
        "components": [
          "blog",
          "2013",
          "06",
          "haskell-from-c"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/learning-through-koans.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/03/learning-through-koans/",
        "slug": "learning-through-koans",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Learning Haskell Through Koans",
        "description": ".",
        "updated": null,
        "date": "2013-03-04T16:32:00Z",
        "year": 2013,
        "month": 3,
        "day": 4,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Chad Scherrer",
          "html": "hubspot-blogs/learning-through-koans.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/03/learning-through-koans/",
        "components": [
          "blog",
          "2013",
          "03",
          "learning-through-koans"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/soh-goes-public.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/03/soh-goes-public/",
        "slug": "soh-goes-public",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "School of Haskell Goes Public",
        "description": ".",
        "updated": null,
        "date": "2013-02-17T16:32:00Z",
        "year": 2013,
        "month": 2,
        "day": 17,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/soh-goes-public.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/03/soh-goes-public/",
        "components": [
          "blog",
          "2013",
          "03",
          "soh-goes-public"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announcing-case-studies.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/02/announcing-case-studies/",
        "slug": "announcing-case-studies",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Case studies of commercial Haskell use",
        "description": ".",
        "updated": null,
        "date": "2013-01-31T16:32:00Z",
        "year": 2013,
        "month": 1,
        "day": 31,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/announcing-case-studies.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/02/announcing-case-studies/",
        "components": [
          "blog",
          "2013",
          "02",
          "announcing-case-studies"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/school-of-haskell-goes-beta.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/01/school-of-haskell-goes-beta/",
        "slug": "school-of-haskell-goes-beta",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "School of Haskell Goes Beta",
        "description": ".",
        "updated": null,
        "date": "2013-01-21T16:32:00Z",
        "year": 2013,
        "month": 1,
        "day": 21,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/school-of-haskell-goes-beta.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/01/school-of-haskell-goes-beta/",
        "components": [
          "blog",
          "2013",
          "01",
          "school-of-haskell-goes-beta"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/solving_the_software_crisis.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/12/solving_the_software_crisis/",
        "slug": "solving-the-software-crisis",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Solving the Global Software Crisis",
        "description": ".",
        "updated": null,
        "date": "2012-11-19T16:32:00Z",
        "year": 2012,
        "month": 11,
        "day": 19,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/solving_the_software_crisis.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/12/solving_the_software_crisis/",
        "components": [
          "blog",
          "2012",
          "12",
          "solving_the_software_crisis"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/rust-at-fpco-2020/",
            "title": "Rust at FP Complete, 2020 update"
          }
        ]
      },
      {
        "relative_path": "blog/designing-the-haskell-ide.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/11/designing-the-haskell-ide/",
        "slug": "designing-the-haskell-ide",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Designing the Haskell IDE",
        "description": ".",
        "updated": null,
        "date": "2012-10-17T16:32:00Z",
        "year": 2012,
        "month": 10,
        "day": 17,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/designing-the-haskell-ide.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/11/designing-the-haskell-ide/",
        "components": [
          "blog",
          "2012",
          "11",
          "designing-the-haskell-ide"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/yesod-tutorial-1-my-first-web-site.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/10/yesod-tutorial-1-my-first-web-site/",
        "slug": "yesod-tutorial-1-my-first-web-site",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Yesod Tutorial 1. My First Web Site",
        "description": "1",
        "updated": null,
        "date": "2012-10-01T18:43:00Z",
        "year": 2012,
        "month": 10,
        "day": 1,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/yesod-tutorial-1-my-first-web-site.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/10/yesod-tutorial-1-my-first-web-site/",
        "components": [
          "blog",
          "2012",
          "10",
          "yesod-tutorial-1-my-first-web-site"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/yesod-tutorial-2-playing-with-routes-and-links.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/10/yesod-tutorial-2-playing-with-routes-and-links/",
        "slug": "yesod-tutorial-2-playing-with-routes-and-links",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Yesod Tutorial 2. Playing with Routes and Links",
        "description": ".",
        "updated": null,
        "date": "2012-10-01T16:32:00Z",
        "year": 2012,
        "month": 10,
        "day": 1,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/yesod-tutorial-2-playing-with-routes-and-links.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/10/yesod-tutorial-2-playing-with-routes-and-links/",
        "components": [
          "blog",
          "2012",
          "10",
          "yesod-tutorial-2-playing-with-routes-and-links"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/commercialuserneeds.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/09/commercialuserneeds/",
        "slug": "commercialuserneeds",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "What do commercial users want from Haskell?",
        "description": ".",
        "updated": null,
        "date": "2012-09-24T16:32:00Z",
        "year": 2012,
        "month": 9,
        "day": 24,
        "taxonomies": {
          "categories": [
            "functional programming"
          ],
          "tags": [
            "haskell"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/commercialuserneeds.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/09/commercialuserneeds/",
        "components": [
          "blog",
          "2012",
          "09",
          "commercialuserneeds"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/functional-patterns-in-c.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/09/functional-patterns-in-c/",
        "slug": "functional-patterns-in-c",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Functional Patterns in C++",
        "description": ".",
        "updated": null,
        "date": "2012-09-05T16:32:00Z",
        "year": 2012,
        "month": 9,
        "day": 5,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/functional-patterns-in-c.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/09/functional-patterns-in-c/",
        "components": [
          "blog",
          "2012",
          "09",
          "functional-patterns-in-c"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/ten-things-you-should-know-about-haskell-syntax.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/09/ten-things-you-should-know-about-haskell-syntax/",
        "slug": "ten-things-you-should-know-about-haskell-syntax",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Ten Things You Should Know About Haskell Syntax",
        "description": ".",
        "updated": null,
        "date": "2012-08-22T16:33:00Z",
        "year": 2012,
        "month": 8,
        "day": 22,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/ten-things-you-should-know-about-haskell-syntax.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/09/ten-things-you-should-know-about-haskell-syntax/",
        "components": [
          "blog",
          "2012",
          "09",
          "ten-things-you-should-know-about-haskell-syntax"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/5-day-haskell-course.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/08/5-day-haskell-course/",
        "slug": "5-day-haskell-course",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "5-Day Haskell Course",
        "description": ".",
        "updated": null,
        "date": "2012-08-13T16:33:00Z",
        "year": 2012,
        "month": 8,
        "day": 13,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/5-day-haskell-course.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/08/5-day-haskell-course/",
        "components": [
          "blog",
          "2012",
          "08",
          "5-day-haskell-course"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/joining-forces-to-advance-haskell.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/08/joining-forces-to-advance-haskell/",
        "slug": "joining-forces-to-advance-haskell",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Joining forces to advance Haskell",
        "description": ".",
        "updated": null,
        "date": "2012-07-16T16:33:00Z",
        "year": 2012,
        "month": 7,
        "day": 16,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/joining-forces-to-advance-haskell.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/08/joining-forces-to-advance-haskell/",
        "components": [
          "blog",
          "2012",
          "08",
          "joining-forces-to-advance-haskell"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/the-functor-pattern-in-c.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/07/the-functor-pattern-in-c/",
        "slug": "the-functor-pattern-in-c",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The Functor Pattern in C++",
        "description": ".",
        "updated": null,
        "date": "2012-06-20T16:33:00Z",
        "year": 2012,
        "month": 6,
        "day": 20,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/the-functor-pattern-in-c.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/07/the-functor-pattern-in-c/",
        "components": [
          "blog",
          "2012",
          "07",
          "the-functor-pattern-in-c"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/asynchronous-api-in-c-and-the-continuation-monad.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/06/asynchronous-api-in-c-and-the-continuation-monad/",
        "slug": "asynchronous-api-in-c-and-the-continuation-monad",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Asynchronous API in C++ and the Continuation Monad",
        "description": ".",
        "updated": null,
        "date": "2012-04-09T16:33:00Z",
        "year": 2012,
        "month": 4,
        "day": 9,
        "taxonomies": {
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/asynchronous-api-in-c-and-the-continuation-monad.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/06/asynchronous-api-in-c-and-the-continuation-monad/",
        "components": [
          "blog",
          "2012",
          "06",
          "asynchronous-api-in-c-and-the-continuation-monad"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/onamission.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/03/onamission/",
        "slug": "onamission",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "It's time for Functional Programming",
        "description": ".",
        "updated": null,
        "date": "2012-03-09T20:43:00Z",
        "year": 2012,
        "month": 3,
        "day": 9,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/onamission.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/03/onamission/",
        "components": [
          "blog",
          "2012",
          "03",
          "onamission"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/the-downfall-of-imperative-programming.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2012/04/the-downfall-of-imperative-programming/",
        "slug": "the-downfall-of-imperative-programming",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The Downfall of Imperative Programming",
        "description": "Multicore processing will force imperative programmers to become functional programmers whether they are ready or not. It's time to accept the future.",
        "updated": null,
        "date": "2012-03-09T16:33:00Z",
        "year": 2012,
        "month": 3,
        "day": 9,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "functional programming"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Bartosz Milewski",
          "html": "hubspot-blogs/the-downfall-of-imperative-programming.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2012/04/the-downfall-of-imperative-programming/",
        "components": [
          "blog",
          "2012",
          "04",
          "the-downfall-of-imperative-programming"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 232
  },
  {
    "name": "insights",
    "slug": "insights",
    "path": "/categories/insights/",
    "permalink": "https://tech.fpcomplete.com/categories/insights/",
    "pages": [
      {
        "relative_path": "blog/paradigm-shift-key-to-competing.md",
        "colocated_path": null,
        "content": "<p>It used to be that being technically mature was thought to be a good thing; now, that view is not so cut and dried.  As you look at topics like containerization, cloud migration, and DevOps, it is easy to see why young companies get to claim the term “Cloud Native.”  At the same time, those who have been in business for decades are frequently relegated to the legions of those needing ‘transforming.’  While this is, of course, an overgeneralization, it feels right more often than not.  So, what are the ‘mature’ to do? </p>\n<p>Talking to several older small and medium sized businesses, a few strategic changes help propel those who are thinking about tech ‘transformation’ into becoming better, faster, more cost-effective, and more secure.  These strategies include focusing on containerizing business logic, cloud-enabling their enterprise, and taking a fresh look at open source offerings for their infrastructure.  If we look at these topics from an executive seat rather than an engineering one, a path and a plan emerges. </p>\n<a href=\"/devops/why-what-how/\">\n<p style=\"text-align:center;font-size:2em;border-width: 3px 0;border-color:#ff8d6e;border-style: dashed;margin:1em 0;padding:0.25em 0;font-weight: bold\">\nCheck Out The Why, What, and How of DevSecOps\n</p>\n</a>\n<p>Containerization is not a new topic; it has just evolved.   We have all gone from monolithic solutions to distributed computing.  From there, we bought small Linux servers, and they felt like containers; then, virtualization came to market, and the VM became the new container.  Now, we have Docker and Kubernetes.  Docker containers represent a considerable paradigm shift in that they do not require a lot of hardware or yet another OS license…., and when managed by Kubernetes, they create an entire ecosystem with little overhead.  Kubernetes take Docker containers and handle horizontal scaling, fault tolerance, automated monitoring, etc. within a DevOps toolset and frame.   
What makes this setup even more impressive is Open Source; yet, supported by ‘the most prominent’ tech infrastructure firms. </p>\n<p>Once we start embracing modern container architectures, the conversation gets fascinating. All cloud and virtualization providers are now battling each other to get customers to deploy these standardized workloads onto their proprietary platforms.   While there are always a few complications, Docker and Kubernetes run on AWS, Azure, VMWare, GCP, etc., with little (or no) alterations if you follow the Open Source path. </p>\n<p>So imagine....once we were trying to figure out how to build in fault tolerance, scalability, continuous develop/deploy, and automate testing.....now all we need to do is follow a DevOps approach using Open Source frameworks like Docker and Kubernetes....and voila....you are there (well it isn’t that easy....but a darn sight easier than it used to be).  Oh....and by the way, all of this is far easier to deploy in the cloud than on-premise, but that is a topic for another day. </p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/paradigm-shift-key-to-competing/",
        "slug": "paradigm-shift-key-to-competing",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "A Paradigm Shift is Key to Competing",
        "description": "",
        "updated": null,
        "date": "2020-10-16",
        "year": 2020,
        "month": 10,
        "day": 16,
        "taxonomies": {
          "categories": [
            "devops",
            "insights"
          ],
          "tags": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wes Crook",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/paradigm-shift-key-to-competing/",
        "components": [
          "blog",
          "paradigm-shift-key-to-competing"
        ],
        "summary": null,
        "toc": [],
        "word_count": 485,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/devops-in-the-enterprise.md",
        "colocated_path": null,
        "content": "<p>Is it Enterprise DevOps or DevOps in the enterprise?  I guess it all depends on where you sit.  DevOps has been a significant change to how many modern technology organizations approach systems development and support.  While many have found it to be a major productivity boost, it represents a threat in &quot;BTTWWHADI&quot; evangelists in some organizations.  Let's start with two definitions: </p>\n<ul>\n<li>\n<p>DevOps: DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from Agile methodology. Credit: https://en.wikipedia.org/wiki/DevOps </p>\n</li>\n<li>\n<p>BTTWWHADI : This is shorthand for &quot;But That's The Way We Have Always Done It.&quot;  Credit: Unknown </p>\n</li>\n</ul>\n<h2 id=\"where-we-come-from\">Where we come from...</h2>\n<p>If we look at some successful Enterprise technology areas, they have had long term success by sticking with what works.  Cleanly partitioned technical responsibilities (analysts, developers, DBAs, network admins, sysadmins, etc.), a waterfall approach to development, a &quot;stay in your lane&quot; accountability matrix (e.g., you write the app, I'll get it platformed), rack 'em and stack 'em approach to hardware, etc.</p>\n<p>While no one can deny this type of discipline has served many well, Enterprise technology's current generation offers us a much more flexible approach.  Today, virtually all hardware is virtualized (on and off-premise), and cloud vendors offer things like platforms as a service, databases as a service, security as a service...etc.   
These innovations have allowed my companies to completely re-think how they want to be spending their technology resource (budget, people, mindshare)….with the most enlightened organizations quickly concluding that they should spend their human capital in spaces where they can create competitive advantages while purchasing those parts of their technology ecosystem what more commoditized.</p>\n<p>An example of this would be in a retail company to think more about creating business intelligence than setting up new hardware for a database server.   A database can be scaled in the cloud, leaving the retail enterprise more human capital to figure out how to drive revenue.  Those who are not embracing the change DevOps affords are most often using a BTTWWHADI argument.   </p>\n<h2 id=\"not-everyone-is-ready-for-a-revolution\">Not everyone is ready for a revolution...</h2>\n<p>So, if DevOps is such a revolution, why do you have so many corporations having such an issue trying to get DevOps strategies to work for them? The answer lies in culture. For DevOps to be effective, an organization needs to be willing to take out a blank sheet of paper and draw a picture of what could be if they tore down yesterday's constraints and looked toward today's innovations. They need to match that picture up against their current staff, recognize that many jobs (and many skills) need to be re-learned or acquired.  No longer is so much specialization required in many specific fixed assets (like data centers, computers, network devices, security devices, etc.)  In a modern DevOps world, much of the infrastructure is virtualized (giving rise to infrastructure as code). </p>\n<p>To some extent, this means that your infrastructure staff will start to look more and more like developers.  Instead of a team plugging in servers, routers, and load balancers into a network backbone, they will be using scripting to configure equivalent services on virtualized hardware. 
On the development and operational side, CI/CD pipelines and process automation drive out many manual processes involved in yesterday's software development lifecycle. For development, the beginnings of this revolution date back to test-driven development. Today's modern pipelines go from development through testing, integration, and deployment. While everything is automatable, many have stopping points in their pipeline where human interactions are required to review test results or require confirmation about final deployments to production.   Whether you are in infrastructure or development, BTTWWHADI just won't do and more.  To compete, everyone will need to skill up and focus on architecture, automation, XaaS, and scripting/coding to decrease time to market while improving quality and resilience. </p>\n<h2 id=\"so-what-s-the-big-deal\">So, what's the big deal…</h2>\n<p>DevOps can be a threat to those who aren't ready for it (the BTTWWHADI crowd).  If your job is configuring hardware or running manual software tests, you might see these functions being automated into 'coding' jobs.  This function change could pose a severe career problem for those team members who don't see this evolution coming and fail to get prepared through education and training.  Unprepared staff becomes resistive to change (understandably), yet, those who are prepared end up in a better position (read: more career security, mobility, and better paid) as automation experts are now far more sought after than traditional hardware configuration engineers (as a gross generalization).  Please do not misunderstand; traditional system engineers are still valuable members of most enterprise teams, but as DevOps and virtualization take hold, those jobs will change.  Get prepared, train your staff, and address the culture change head-on. </p>\n<p>If you need help with your journey, <a href=\"https://tech.fpcomplete.com/contact-us/\">contact FP Complete</a>.  This is who we are and what we do. 
</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/",
        "slug": "devops-in-the-enterprise",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps in the Enterprise: What could be better? What could go wrong?",
        "description": "",
        "updated": null,
        "date": "2020-10-09",
        "year": 2020,
        "month": 10,
        "day": 9,
        "taxonomies": {
          "categories": [
            "devops",
            "insights"
          ],
          "tags": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wes Crook",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/devops-in-the-enterprise/",
        "components": [
          "blog",
          "devops-in-the-enterprise"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "where-we-come-from",
            "permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#where-we-come-from",
            "title": "Where we come from...",
            "children": []
          },
          {
            "level": 2,
            "id": "not-everyone-is-ready-for-a-revolution",
            "permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#not-everyone-is-ready-for-a-revolution",
            "title": "Not everyone is ready for a revolution...",
            "children": []
          },
          {
            "level": 2,
            "id": "so-what-s-the-big-deal",
            "permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#so-what-s-the-big-deal",
            "title": "So, what's the big deal…",
            "children": []
          }
        ],
        "word_count": 827,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cloud-for-non-natives.md",
        "colocated_path": null,
        "content": "<p>Does this mean if you weren't born in the cloud, you'll never be as good as those who are?    </p>\n<p>When thinking about building from scratch or modernizing an existing technology environment, we tend to see one of a few different things happening: </p>\n<ul>\n<li>Staff will read up, and you will try it on your own. </li>\n<li>Managers will hire someone who says they have done it before. </li>\n<li>Leaders will engage a large software vendor or consulting firm to help get them to the promised land. </li>\n</ul>\n<p>While all of these strategies can work, we often find one of the following happens: </p>\n<ul>\n<li>Trial and error result in very expensive under delivery. </li>\n<li>Existing teams become disaffected and resistive because they perceive being left behind. </li>\n<li>Something gets delivered, but costs go up, and reliability goes down. </li>\n<li>New hires come in, make the magic happen, and then move on without leaving enough knowhow to continue without them. </li>\n<li>Vendors use proprietary software, and a new age of vendor lock-in ensues. </li>\n</ul>\n<p>There is a better way of approaching modernizing a business-focused, legacy world.  Our core approach at FP complete is: </p>\n<ul>\n<li>Be vendor agnostic </li>\n<li>Build a road map based on business outcomes </li>\n<li>Deeply understand and implement DevOps concepts </li>\n<li>Be ruthlessly focused on architecture from the start </li>\n<li>Containerize everything* </li>\n<li>Virtualize everything*</li>\n</ul>\n<p>While this approach is straightforward, staying focused on outcomes is the key: </p>\n<ul>\n<li>The business logic is the key to build your ecosystem once and properly so you can focus on what matters. </li>\n<li>Integrate security by design as security is a non-non-negotiable. </li>\n<li>All alerts and logs centrally as managing and operating via complete transparency is key. 
</li>\n<li>Ensure Containers are made to scale horizontally and be fault-tolerant from the start. </li>\n<li>Ensure you are on-prem and cloud-agnostic. </li>\n<li>Be open-source but get enterprise support. </li>\n</ul>\n<p>How do you get help without breaking the bank, compromising your values, or getting locked in? </p>\n<p>At FP Complete, we believe the way to get started is to: </p>\n<ul>\n<li>Build DevOps expertise, acquire DevOps Tooling. </li>\n<li>Get help constructing your roadmap to ensure technical focus aligns with business results. </li>\n<li>Get help designing how your applications will get containerized to be cloud-ready. </li>\n<li>Acquire Enterprise support for your newly open-sourced world. </li>\n</ul>\n<p>FP Complete has a unique track record in these activities.  We are not built on recurring revenue from long term consulting.   We are built on helping our customers build better software, run better technology operations, and achieve better business outcomes.  We come from diverse backgrounds and have serviced a myriad of industries.  We often find that others have already solved many of our client's problems, and our expertise lies in matching existing solutions to places where they are needed most. </p>\n<p>So, what is the best way to get started: </p>\n<ol>\n<li>Please send us a mail or call us up. </li>\n<li>We will walk through your aspirations and provide a high-level road map for achieving your goals at no cost. </li>\n<li>If you like what you see, invite us in for a POC based on a 100% ROI. </li>\n<li>Scale from there. </li>\n</ol>\n<p>If you are unsure about the claims in this post, shoot me an email...you won't get a bot response… you'll get me. </p>\n<p>*Note: the exceptions to these rules are usually around ultra-low latency requirements. </p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloud-for-non-natives/",
        "slug": "cloud-for-non-natives",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cloud for Non-Natives",
        "description": "Faster time to market and lower failure rate are the beginning of the many benefits DevOps offers companies. Discover the measurable metrics and KPIs, as well as the true business value DevOps offers.",
        "updated": null,
        "date": "2020-10-02",
        "year": 2020,
        "month": 10,
        "day": 2,
        "taxonomies": {
          "tags": [
            "devops",
            "insights"
          ],
          "categories": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wes Crook",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/cloud-for-non-natives/",
        "components": [
          "blog",
          "cloud-for-non-natives"
        ],
        "summary": null,
        "toc": [],
        "word_count": 545,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fpco-inc-5000-company.md",
        "colocated_path": null,
        "content": "<p class=\"text-center\"><img src=\"/images/blog/inc5000.jpg\" style=\"max-width:80%\"></p>\n<p><em>Inc.</em> magazine announced that FP Complete Corporation ranked 1,547 on its list of the fastest-growing privately held companies in America. </p>\n<p>“I’m extremely proud of our entire FP Complete team for what we’ve been able accomplish and to earn this accolade. Not only are we one of the fastest-growing companies, but so are a number of our customers.  It’s only the start of bigger and brighter things to come,” says FP Complete’s CEO, Wesley Crook. </p>\n<p>FP Complete has just recently introduced their new PaaS product, Kube360®, which they expect will generate a great deal of Enterprise interest and will be very disruptive in the DevSecOps marketplace.</p>\n<p>FP Complete is a global Software and Technology Services company.  In addition to developing its own Enterprise software products, FP Complete also provides IT Management Consulting and Software Development services to clients around the world.  FP Complete’s roots are in FinTech (Blockchain and Crypto Currencies), Healthcare and Financial Services. </p>\n<p>“The companies on this year’s Inc. 5000 come from nearly every realm of business,” says Inc. editor-in-chief Scott Omelianuk. “From health and software to media and hospitality, the 2020 list proves that no matter the sector, incredible growth is based on the foundations of tenacity and opportunism.” The Inc. 5000’s aggregate revenue was $209 billion in 2019, accounting for over 1 million jobs over the past three years.   Complete results of the Inc. 5000, including company profiles and an interactive database that can be sorted by industry, region and other criteria, can be found at <a href=\"https://www.inc.com/inc5000\">www.inc.com/inc5000</a>. </p>\n<h2 id=\"the-inc-5000-methodology\">The Inc. 5000 Methodology</h2>\n<p>The 2020 Inc. 5000 is ranked according to percentage revenue growth when comparing 2016 and 2019. 
To qualify, companies must have been founded and generating revenue by March 31, 2016. They had to be U.S.-based, privately held, for-profit, and independent—not subsidiaries or divisions of other companies—as of December 31, 2019. (Since then, a number of companies on the list have gone public or been acquired.) The minimum revenue required for 2016 is $100,000; the minimum for 2019 is $2 million. As always, Inc. reserves the right to decline applicants for subjective reasons. Companies on the Inc. 500 are featured in Inc.’s September issue. They represent the top tier of the Inc. 5000, which can be found at <a href=\"https://www.inc.com/inc5000\">https://www.inc.com/inc5000</a>. </p>\n<h2 id=\"about-inc-media\">About Inc. Media</h2>\n<p>The world’s most trusted business-media brand, Inc. offers entrepreneurs the knowledge, tools, connections, and community to build great companies. Its award-winning multiplatform content reaches more than 50 million people each month across a variety of channels including websites, newsletters, social media, podcasts, and print. Its prestigious Inc. 5000 list, produced every year since 1982, analyzes company data to recognize the fastest-growing privately held businesses in the United States. The global recognition that comes with inclusion in the 5000 gives the founders of the best businesses an opportunity to engage with an exclusive community of their peers, and the credibility that helps them drive sales and recruit talent. The associated Inc. 5000 Conference is part of a highly acclaimed portfolio of bespoke events produced by Inc. For more information, visit <a href=\"https://www.inc.com\">www.inc.com</a>. </p>\n<p>For more information on the Inc. 5000 Conference, visit <a href=\"https://conference.inc.com/\">http://conference.inc.com/</a>. </p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/fpco-inc-5000-company/",
        "slug": "fpco-inc-5000-company",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Complete Ranked #1547 Among Inc 5000 Companies!",
        "description": "Inc. magazine announced that FP Complete Corporation ranked 1,547 on its list of the fastest-growing privately held companies in America.",
        "updated": null,
        "date": "2020-08-12",
        "year": 2020,
        "month": 8,
        "day": 12,
        "taxonomies": {
          "categories": [
            "insights"
          ],
          "tags": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Staff",
          "blogimage": "/images/blog-listing/blog-2.png",
          "image": "images/blog/inc5000.jpg"
        },
        "path": "/blog/fpco-inc-5000-company/",
        "components": [
          "blog",
          "fpco-inc-5000-company"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "the-inc-5000-methodology",
            "permalink": "https://tech.fpcomplete.com/blog/fpco-inc-5000-company/#the-inc-5000-methodology",
            "title": "The Inc. 5000 Methodology",
            "children": []
          },
          {
            "level": 2,
            "id": "about-inc-media",
            "permalink": "https://tech.fpcomplete.com/blog/fpco-inc-5000-company/#about-inc-media",
            "title": "About Inc. Media",
            "children": []
          }
        ],
        "word_count": 555,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/understanding-devops-roles-and-responsibilities.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/understanding-devops-roles-and-responsibilities/",
        "slug": "understanding-devops-roles-and-responsibilities",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Understanding DevOps Roles and Responsibilities",
        "description": "Companies are implementing DevOps at an increasingly rapid rate. Discover the roles and responsibilities and how to implement DevOps into your latest project.",
        "updated": null,
        "date": "2020-07-24T13:12:00Z",
        "year": 2020,
        "month": 7,
        "day": 24,
        "taxonomies": {
          "categories": [
            "insights",
            "devops"
          ],
          "tags": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "html": "hubspot-blogs/understanding-devops-roles-and-responsibilities.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/understanding-devops-roles-and-responsibilities/",
        "components": [
          "blog",
          "understanding-devops-roles-and-responsibilities"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/preparing-for-cloud-computing-trends.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/preparing-for-cloud-computing-trends/",
        "slug": "preparing-for-cloud-computing-trends",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Preparing for Upcoming Cloud Computing Trends",
        "description": "Cloud Computing is growing at a rate 7 times faster than the rest of IT with no signs of slowing in the coming years. Discover all the trends businesses should be preparing for in order to succeed in 2020 and beyond. ",
        "updated": null,
        "date": "2020-07-24T11:05:00Z",
        "year": 2020,
        "month": 7,
        "day": 24,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "insights",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Team",
          "html": "hubspot-blogs/preparing-for-cloud-computing-trends.html",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/preparing-for-cloud-computing-trends/",
        "components": [
          "blog",
          "preparing-for-cloud-computing-trends"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/how-to-successfully-onboard-a-remote-workforce.md",
        "colocated_path": null,
        "content": "<p>The new realities of the coronavirus pandemic and the social distancing orders of state and local governments have forced many businesses to transition their workforces out of the traditional office space and into the remote home office environment.</p>\n<p>For over eight years, my company has relied exclusively on the remote home office environment for its workers and has gained extensive experience in the recruitment, onboarding and retention of these vital home remote workers in the U.S. and abroad.</p>\n<p>We have developed a highly successful program consisting of four stages: the remote interview process, preboarding, team integration and ongoing training/employee educational development. Based on my experience, here’s how you can navigate these four remote workforce onboarding stages.</p>\n<h2 id=\"the-interview-process\">The Interview Process</h2>\n<p>Over the past eight years, we have\ndiscovered that a successful onboarding\nprogram begins with a relevant and\ndetailed job description that informs the\napplicant of the company’s objectives and\ndelivery methods. When creating a job\ndescription for a remote position, it’s\nimportant to be as detailed as possible\nregarding the responsibilities of the\nposition, reporting structure and\nexplicit work requirements. You will also\nneed to pay close attention to the\nsoftware tool needs and skill level for\neach of your remote workers to guarantee\nthat everyone is working on the same\nsystem for efficiency and team management\npurposes. Interviews can be conducted via\nvideo conference using tools such as\nZoom.</p>\n<p>During the interview process, the\nremote applicant is further informed of\ndetailed job responsibilities and company\nexpectations. In each of their\ninterviews, the remote applicant should\ntypically meet at least three members of\nthe team that they may be working closely\nwith. 
In my experience, a remote\nworkforce is further developed into a\nteam when each team member has assisted\nin their recruitment. You can implement\nthis interview strategy by communicating\nwith team members and managers on\nspecific job requirements to include in\nthe job advertisement. I do not recommend\ndelegating the interview to a recruiter\nor someone without knowledge of the job\nfunctions. Instead, involve senior\nmembers of the team in the interview\nprocess.</p>\n<h2 id=\"preboarding\">Preboarding</h2>\n<p>Once hired, the new employee should\nundergo a detailed preboarding process.\nThis process should begin with a welcome\nemail that provides an itinerary for the\nfirst few weeks and a management team\ncontact to answer any questions. This new\nemployee onboarding itinerary should map\nout all the necessary employee forms to\ncomplete, contact and bio information for\neach of their new team members, a short\nsynopsis of department goals for the\nyear, along with any necessary client\nproject information.</p>\n<p>At my company, we also like to provide\nnew hires with a company handbook and\naccess to our Slack channel, so that they\ncan “meet and greet” their co-workers\nbefore their first day. To make them look\nand feel a part of our team at their home\noffice, we send all new hires a polo\nshirt and coffee mug. Slack is used in\nmost tech companies, but other\nbroad-based communications platforms can\nbe used for virtual “happy hours.” Making\na small investment in free gear also goes\na long way in establishing goodwill and\nhelping employees feel like they are part\nof the team.</p>\n<h2 id=\"onboarding\">Onboarding</h2>\n<p>Beginning with their first day at\nwork, we like to integrate new hires with\ntheir respective teams by having them\nparticipate in a Zoom welcome meeting\nwith their team. We also provide them\nwith an organization chart that\nidentifies key management personnel and\nincludes an overview of reporting\nrequirements. 
One tip that I have found\nhelpful is to assign new hires a\ndedicated mentor who can answer any\nquestions and help them succeed in\nreaching project goals and company\nobjectives. This solidifies the\nemployee’s connection with the work and\nhelps them feel more welcome.</p>\n<p>Also, weekly virtual one-on-one\nmeetings with supervisors can further the\nintegration process and help employees\nreach project and professional goals.\nHave your new remote worker participate\nin one-on-one meetings with your HR,\nfinance, IT and product development teams\nduring the first week of work. Each of\nthese teams plays an important role in\nthe onboarding process by training the\nnew employee on a wide array of company\nmatters, such as time entry and company\nproducts and services, along with an\noverview of your brand and\ncompetition.</p>\n<h2 id=\"ongoing-training\">Ongoing Training</h2>\n<p>The final piece to a successful\nonboarding program is ongoing training.\nThrough these virtual training sessions,\nmy company likes to provide additional\nsupport with processes and tools specific\nto its operations. In my experience, this\ncan help increase employee retention,\nfoster innovation and improve overall job\nperformance in a collaborative work\nenvironment.</p>\n<p>One way to provide ongoing training is\nthrough small weekly team meetings, where\nthe various small teams meet and go\nthrough hot topics or issues that have\ncome up in the past week on their team\nproject. These should be short,\nagenda-focused meetings that cover an\nissue that needs to be handled by the\nteam. I have found that a team-based\napproach to handling an issue works best\nin this type of meeting and brings the\nteam closer together while working\nthrough the issue resolution. 
Short\ntraining videos, strong mentor\nrelationships and one-on-one meetings are\nsome of the most successful ways to train\nyour remote workforce.</p>\n<h2 id=\"final-thoughts\">Final Thoughts</h2>\n<p>Relying on a remote workforce is not\nwithout its own set of challenges, such\nas coordinating virtual meetings across\nmultiple time zones. However, I\nhave found that success in remote worker\nrecruitment and retention stems from the\ndevelopment and implementation of an\nonboarding program that directly involves\nteam members and management in the\nrecruitment and interview process, makes\nthe new worker feel welcome, and\ncontinues with the educational and\nprofessional development of the worker\nthroughout their career.</p>\n<p><a href=\"https://www.forbes.com/sites/forbestechcouncil/2020/04/30/how-to-successfully-onboard-a-remote-workforce/\"><em>Original article on Forbes</em></a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/how-to-successfully-onboard-a-remote-workforce/",
        "slug": "how-to-successfully-onboard-a-remote-workforce",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "How To Successfully Onboard A Remote Workforce",
        "description": "The new realities of the coronavirus pandemic and the social distancing orders of state and local governments",
        "updated": null,
        "date": "2020-04-30",
        "year": 2020,
        "month": 4,
        "day": 30,
        "taxonomies": {
          "tags": [
            "insights"
          ],
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Wesley Crook",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/how-to-successfully-onboard-a-remote-workforce/",
        "components": [
          "blog",
          "how-to-successfully-onboard-a-remote-workforce"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "the-interview-process",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-successfully-onboard-a-remote-workforce/#the-interview-process",
            "title": "The Interview Process",
            "children": []
          },
          {
            "level": 2,
            "id": "preboarding",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-successfully-onboard-a-remote-workforce/#preboarding",
            "title": "Preboarding",
            "children": []
          },
          {
            "level": 2,
            "id": "onboarding",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-successfully-onboard-a-remote-workforce/#onboarding",
            "title": "Onboarding",
            "children": []
          },
          {
            "level": 2,
            "id": "ongoing-training",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-successfully-onboard-a-remote-workforce/#ongoing-training",
            "title": "Ongoing Training",
            "children": []
          },
          {
            "level": 2,
            "id": "final-thoughts",
            "permalink": "https://tech.fpcomplete.com/blog/how-to-successfully-onboard-a-remote-workforce/#final-thoughts",
            "title": "Final Thoughts",
            "children": []
          }
        ],
        "word_count": 979,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/16-smart-project-management-strategies-every-tech-leader-can-use.md",
        "colocated_path": null,
        "content": "<p>Task and project management is a must-have\nskill in the technology industry, especially for\ntech leaders. Most are handling multiple projects\nand demands on their time, so it’s important to\nbe able to prioritize and get everything\ndone.</p>\n<p>As some of the top professionals in the field,\nthe members of <a href=\n\"https://forbestechcouncil.com/\" rel=\n\"noopener noreferrer\" target=\"_blank\">Forbes\nTechnology Council</a> have spent years\ncultivating their project-management skills.\nBelow, they share their go-to project-management\nstrategies.</p>\n<div class=\"single-top\">\n  <div id=\"accordion\">\n    <div class=\"card active-main\">\n      <div class=\"card-header active\">\n        <a class=\"card-link\" data-toggle=\"collapse\"\n        href=\"#collapse1\">1. Let your team own the\n        projects they’re passionate about.</a>\n      </div>\n      <div class=\"collapse show\" data-parent=\n      \"#accordion\" id=\"collapse1\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>One management strategy is to create\n            an organization where people apply or\n            sign up for the projects that they are\n            passionate about. This requires that\n            leaders end centralized management and\n            disperse responsibility, creating a\n            self-managing organization. 
Those who are\n            passionate about a project manage it from\n            beginning to end, often completing\n            projects faster and with better results.\n            – <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Sergei-Anikin-CTO-Pipedrive/5fd8d8c1-61d0-4640-98b0-8fc9ec9ee655\"\n            rel=\"noopener\" target=\"_blank\">Sergei\n            Anikin</a>, <a href=\n            \"https://www.pipedrive.com/\" rel=\n            \"noopener\" target=\n            \"_blank\">Pipedrive</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse2\">2. Set\n        milestones and goals as a team.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse2\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>A lot of tasks we end up focusing on\n            are more related to activity than\n            productivity. To make sure our focus is\n            on productive tasks, the entire\n            organization must be aligned on the\n            organization’s goals and the tasks\n            everyone must do to contribute to those\n            goals. Once everyone understands their\n            function, setting and focusing on\n            milestones to accomplish larger tasks\n            leads to better progress. 
– <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Randy-Watkins-Chief-Technology-Officer-Critical-Start/a51bf75b-adac-4a54-ba2b-945781daaa7d\"\n            rel=\"noopener\" target=\"_blank\">Randy\n            Watkins</a>, <a href=\n            \"https://www.criticalstart.com/\" rel=\n            \"noopener\" target=\"_blank\">Critical\n            Start</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse3\">3. Have a\n        central communication tool.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse3\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>The first and most important step is\n            to define the goal of the project and\n            clarify expectations. All modern project\n            management comes down to managing\n            expectations. The circulatory system of\n            modern management is communication\n            channels. The key communication tool is a\n            task-management system combined with a\n            knowledge base—something like Jira with\n            Confluence. – <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Dennis-Turpitka-Founder-CEO-Apriorit/a4529862-bebb-493b-ae42-358aefb5c24e\"\n            rel=\"noopener\" target=\"_blank\">Dennis\n            Turpitka</a>, <a href=\n            \"https://www.apriorit.com/\" rel=\n            \"noopener\" target=\n            \"_blank\">Apriorit</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse4\">4. 
Create an\n        Eisenhower Matrix.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse4\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>I look to Eisenhower for inspiration,\n            and I utilize an Eisenhower Matrix daily.\n            I make four boxes with “Urgency” on the\n            x-axis and “Importance” on the y-axis.\n            This allows me to bucket tasks into four\n            categories: “Urgent/Important,”\n            “Urgent/Not Important,” “Not\n            Urgent/Important” and “Not Urgent/Not\n            Important.” It’s a powerful way to figure\n            out what needs to be done when. –\n            Michael Zaic, <a href=\n            \"https://www.wildskymedia.com/\" rel=\n            \"noopener\" target=\"_blank\">Wild Sky\n            Media</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse5\">5. Hold regular\n        standup meetings.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse5\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Quite a few principles fall under the\n            agile project-management framework, but\n            the one I find the most useful is having\n            regular standups. In these meetings, team\n            members go over what they’ve done and\n            what they’re going to do, as well as if\n            any roadblocks are in their way. This\n            allows employees to go over every project\n            they’re working on to give regular\n            updates. 
– <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Kison-Patel-CEO-Founder-DealRoom/8aa493e4-ef9c-44c2-aba4-eef6c3a2c2e1\"\n            rel=\"noopener\" target=\"_blank\">Kison\n            Patel</a>, <a href=\n            \"https://dealroom.net/\" rel=\"noopener\"\n            target=\"_blank\">DealRoom</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse6\">6. Manage\n        customer expectations.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse6\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Customers are notorious for adding to\n            the scope or making changes to what they\n            want. One of the best ways to deal with\n            it is by managing the customer’s\n            expectation of what they will get. This\n            may mean that, as a manager, you will\n            need to tell customers that their request\n            is out of scope and requires a\n            modification to the contract that may\n            affect cost and/or timelines. – <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Michael-Hoyt-Director-Cyber-Security-Operations-Life-Cycle-Engineering-Inc/d7f8c985-834e-4d0a-b6b5-61dc8347a1ab\"\n            rel=\"noopener\" target=\"_blank\">Michael\n            Hoyt</a>, <a href=\"https://www.lce.com/\"\n            rel=\"noopener\" target=\"_blank\">Life Cycle\n            Engineering, Inc</a>.</p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse7\">7. 
Treat your\n        days like sprints.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse7\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Time management is essential. I treat\n            my days as sprints with specific time\n            blocks for each activity. I leave two\n            blocks in the afternoon to return to what\n            I need to for additional review or\n            followup. I set specific times for\n            emails, phone calls, meetings, etc. And,\n            importantly, I do not let them interfere\n            with each other. – <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Wesley-Crook-CEO-FP-Complete/0f2a6f2f-5a65-462c-ab30-19770b2f6f02\"\n            rel=\"noopener\" target=\"_blank\">Wesley\n            Crook</a>, <a href=\n            \"/server_software_development_and_devops_engineers\"\n            rel=\"noopener\">FP Complete</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse8\">8. Monitor and\n        address positive and negative risk.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse8\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Organizations with agile projects\n            should realign their risk perceptions.\n            Although negative risk must be carefully\n            managed, teams should embrace positive\n            risk to maximize business value. Risk\n            matrices, risk burndown charts and\n            risk-modified user story maps should be\n            included on agile walls and must be\n            adjusted to help teams identify, monitor\n            and address both positive and negative\n            risk. 
– <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Christopher-Yang-Vice-President-Engineering-Corporate-Travel-Management/a2c233fc-76a5-4b8b-9e4a-3bb4884527c3\"\n            rel=\"noopener\" target=\n            \"_blank\">Christopher Yang</a>, <a href=\n            \"https://www.travelctm.com/\" rel=\n            \"noopener\" target=\"_blank\">Corporate\n            Travel Management</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse9\">9. Hire smarter\n        people and nurture new leaders.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse9\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>There is no greater joy as a leader\n            than seeing those you have nurtured\n            surpass you in talent and success. That\n            is your lasting legacy. Hire people\n            smarter than you and nurture their\n            leadership abilities. There is the old\n            adage of, “If you want to go fast, go\n            alone, but if you want to go far, go\n            together.” Develop a robust team of\n            leaders and allow them to succeed. 
–\n            <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Jos%C3%A9-Morey-Chief-Medical-Innovation-Officer-Liberty-BioSecurity/e2a1ca82-3b03-4935-8eef-0888e1a7f0eb\"\n            rel=\"noopener\" target=\"_blank\">José\n            Morey</a>, <a href=\n            \"https://www.libertybiosecurity.com/\"\n            rel=\"noopener\" target=\"_blank\">Liberty\n            BioSecurity</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse10\">10. Prioritize\n        projects that move the needle.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse10\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Tech leaders are constantly juggling\n            multiple projects and initiatives at\n            once. But you need to select and\n            prioritize projects that will make the\n            biggest difference. Nonessential projects\n            can actually result in productivity loss.\n            Selecting the right projects is actually\n            a skill that comes from an understanding\n            of business strategy combined with a\n            data-driven approach that will impact key\n            performance indicators. 
– <a href=\n            \"https://profiles.forbes.com/members/tech/profile/John-Shin-Managing-Director-RSI-Security/10aac0bb-b30f-4c24-bf66-b6ae4f232e8a\"\n            rel=\"noopener\" target=\"_blank\">John\n            Shin</a>, <a href=\n            \"https://www.rsisecurity.com/\" rel=\n            \"noopener\" target=\"_blank\">RSI\n            Security</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse11\">11. Leverage\n        managed services.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse11\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>If you lead an engineering or\n            development group and your tasks include\n            maintaining toolsets, managed services\n            can be a godsend. The same is true if\n            you’re a systems or application\n            administrator. Any service provider worth\n            their weight can take things off your\n            plate like admin and implementation, user\n            training, troubleshooting, support\n            issues, and the like. – <a href=\n            \"https://profiles.forbes.com/members/tech/profile/John-McDonald-CEO-ClearObject/bcc116ad-7b8d-440a-8322-7a0f2e0aa2a1\"\n            rel=\"noopener\" target=\"_blank\">John\n            McDonald</a>, <a href=\n            \"https://www.clearobject.com/\" rel=\n            \"noopener\" target=\n            \"_blank\">ClearObject</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse12\">12. 
Maintain a\n        culture of accountability.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse12\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Even before specific task- or\n            project-management skills come into play,\n            it is important to maintain a culture of\n            accountability. Start with yourself. Meet\n            your own commitments and admit mistakes.\n            Define your expectations. Ask for\n            commitments. Be open to feedback. Coach\n            people on how to be accountable and to\n            hold others accountable, and understand\n            what the consequences should be for poor\n            performance. – Steve Pao, <a href=\n            \"https://blog.hillwork.com/\" rel=\n            \"noopener\" target=\"_blank\">Hillwork,\n            LLC</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse13\">13. Lay out the\n        details ahead of time.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse13\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Describe all the details and lay down\n            all the plans even before the project is\n            launched. This move is often\n            underestimated, but it can really go a\n            long way. Laying a solid foundation for\n            projects will ensure that you are not\n            going to need to manage them daily. If\n            your team knows what to do, the process\n            will be smooth and successful. 
– <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Daria-Leshchenko-CEO-SupportYourApp-Inc/b19a25b9-c566-441c-b3a8-4e262dccd26d\"\n            rel=\"noopener\" target=\"_blank\">Daria\n            Leshchenko</a>, <a href=\n            \"https://supportyourapp.com/\" rel=\n            \"noopener\" target=\"_blank\">SupportYourApp\n            Inc</a>.</p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse14\">14. Stop\n        micromanaging your team.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse14\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Let your team members take full\n            ownership of their areas of\n            responsibility. Keep them loaded at 70%\n            to 80% to reduce stress levels and enable\n            creative thinking. To ensure effective\n            delivery, avoid any kind of\n            micromanagement and tactics control. It’s\n            ruinous for both sides. All in all, make\n            sure your team always understands your\n            “what” and can bring you their “how.” –\n            <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Aleksandr-Galkin-CEO-Co-founder-Competera-Competera/814eca80-15e4-4576-a4f9-094985dd12e0\"\n            rel=\"noopener\" target=\"_blank\">Aleksandr\n            Galkin</a>, Competera</p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse15\">15. 
Limit\n        distractions during your ‘focus time.’</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse15\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>Multitasking is a myth. To do deeper\n            work, you need to limit distractions. To\n            do that, you need cultural and individual\n            practices that allow people to go offline\n            for chunks of time and that respect that\n            time so that folks feel comfortable\n            turning off distractions and digging\n            deep. This singular and serial focus\n            allows you to “multitask” more because\n            you are not constantly switching tasks. –\n            <a href=\n            \"https://profiles.forbes.com/members/agency/profile/Amith-Nagarajan-Executive-Chairman-rasa-io/43f546be-367b-47a5-9213-1cb0af32bc21\"\n            rel=\"noopener\" target=\"_blank\">Amith\n            Nagarajan</a>, <a href=\n            \"https://rasa.io/?utm_source=forbes_council_profile\"\n            rel=\"noopener\" target=\n            \"_blank\">rasa.io</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n    <div class=\"card\">\n      <div class=\"card-header\">\n        <a class=\"collapsed card-link\" data-toggle=\n        \"collapse\" href=\"#collapse16\">16. Implement\n        good status-reporting practices.</a>\n      </div>\n      <div class=\"collapse\" data-parent=\"#accordion\"\n      id=\"collapse16\">\n        <div class=\"card-body\">\n          <div class=\"body-widget\">\n            <p>As a tech leader, I need to know the\n            high-level details of the project\n            (schedule, timeline, whether it’s on\n            track, if anyone needs my help removing\n            an obstacle, etc.). 
That way I stay\n            updated, know when I need to get involved\n            and can keep my schedule moving forward.\n            We use the Entrepreneurial Operating\n            System to keep our status reports and\n            meetings on track. – <a href=\n            \"https://profiles.forbes.com/members/tech/profile/Thomas-Griffin-Co-Founder-President-OptinMonster/c7b9bbc1-1ffe-487c-93a9-248bac65521d\"\n            rel=\"noopener\" target=\"_blank\">Thomas\n            Griffin</a>, <a href=\n            \"https://optinmonster.com/\" rel=\n            \"noopener\" target=\n            \"_blank\">OptinMonster</a></p>\n          </div>\n        </div>\n      </div>\n    </div>\n  </div>\n</div>\n<div class=\"single-citations\">\n  <div class=\"body-widget\">\n    <p><strong>Citation:</strong> <a href=\n    \"https://www.forbes.com/sites/forbestechcouncil/2020/04/22/16-smart-project-management-strategies-every-tech-leader-can-use/#46431b8d2fa3\"\n    target=\n    \"_blank\">https://www.forbes.com/sites/forbestechcouncil/2020/04/22/16-smart-project-management-strategies-every-tech-leader-can-use/#46431b8d2fa3</a></p>\n  </div>\n</div>\n",
        "permalink": "https://tech.fpcomplete.com/blog/16-smart-project-management-strategies-every-tech-leader-can-use/",
        "slug": "16-smart-project-management-strategies-every-tech-leader-can-use",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "16 Smart Project-Management Strategies Every Tech Leader Can Use",
        "description": "Task and project management is a must-have skill in the technology industry",
        "updated": null,
        "date": "2020-04-22",
        "year": 2020,
        "month": 4,
        "day": 22,
        "taxonomies": {
          "tags": [
            "insights"
          ],
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Forbes Technology Council",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/16-smart-project-management-strategies-every-tech-leader-can-use/",
        "components": [
          "blog",
          "16-smart-project-management-strategies-every-tech-leader-can-use"
        ],
        "summary": null,
        "toc": [],
        "word_count": 2420,
        "reading_time": 13,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/reasons-software-projects-fail.md",
        "colocated_path": null,
        "content": "<p>Tech teams often plunge into new software\nprojects with high hopes, making it all the more\nfrustrating if the project gets derailed. Tech\nleaders need to be aware of potential project\npitfalls ahead of time to avoid wasting time and\nbudget dollars.</p>\n<p>The experts of\n<a href=\"https://forbestechcouncil.com/\">Forbes Technology Council</a>\nhave overseen many\nprojects in their professional tenures. Below, 14\nof them share common reasons software projects\nflounder and what tech teams can do to avoid\nfalling into a trap.</p>\n<h2 id=\"1-not-understanding-the-needs-of-the-business\">1. Not Understanding The Needs Of The Business</h2>\n<p>One of the reasons software projects\nfail is the lack of understanding of the\nbusiness’ needs. The business must\nclearly articulate the requirements in\ndetail. There needs to be a precise\nmapping of features and functions to the\nbusiness’ needs. Assigning a seasoned\nbusiness leader to the project team is\nessential for success.</p>\n<p>-\n<a href=\"https://www.linkedin.com/in/wesleycrook\">Wesley Crook</a>, <a href=\"https://tech.fpcomplete.com/\">FP Complete</a></p>\n<h2 id=\"2-inability-to-reach-consensus-on-priorities\">2. Inability To Reach Consensus On Priorities</h2>\n<p>There are various reasons why software\ndevelopment projects fail, but a common\none that has a big impact is when the\nproject sponsors and project teams are\nnot clearly aligned on top priorities for\nthe project. Decomposing these priorities\ninto “must-haves,” “should-haves” and\n“could-haves” can provide a solid\nframework for the iteration and delivery\nof particular features. 
– <a href=\n\"https://www.linkedin.com/authwall?trk=gf&amp;trkInfo=AQHzYlpXLIQPhgAAAXHzdqQ4etdJ9l7cSaiMojiJkht39_vNpAbaNjtE1wcIHql_q-9GgTiActUGyowSajzRFIAUTje5jovixHEG8FrgLtdkEzMwm6JXB9hccK3LsS9Gi61GelI=&amp;originalReferer=&amp;sessionRedirect=https%3A%2F%2Fwww.linkedin.com%2Fin%2Fjahnibek\"\nrel=\"noopener\" target=\"_blank\">Jahn\nKarsybaev</a>, Prosource IT</p>\n<h2 id=\"3-lack-of-clarity-and-execution-strategy\">3. Lack Of Clarity And Execution Strategy</h2>\n<p>The primary goal of a software project\nis to solve a business’ problems. It\nrequires not only effective and efficient\nproject management and\nstakeholder-expectation management but\nalso a clear consensus by the entire\ngroup of stakeholders on the definition\nof the business’ problem and a robust\nexecution strategy to deliver software\nthat solves the business’ objectives.\nFailure to address any of the aspects\noutlined above results in a derailed\nproject. – <a href=\n\"https://twitter.com/technosip\" rel=\n\"noopener\" target=\"_blank\">Kartik\nAgarwal</a>, <a href=\n\"https://www.technosip.com/\" rel=\n\"noopener\" target=\"_blank\">TechnoSIP\nInc</a>.</p>\n<h2 id=\"4-not-starting-with-the-end-customer\">4. Not Starting With The End Customer</h2>\n<p>Sometimes software projects begin with\na great idea that is implemented (on time\nor late) and delivered only for\ndevelopers to discover that the problem\nthey solved wasn’t actually the problem\ntheir customer needed to be solved. Doing\nthe hard work of deeply understanding\nyour customers, what they need and what\nthey’re willing to pay for sets the\nceiling on project performance and can\nhelp refocus a team when things derail. –\n<a href=\"https://twitter.com/gyalif\" rel=\n\"noopener\" target=\"_blank\">Guy Yalif</a>,\n<a href=\"https://www.intellimize.com/\"\nrel=\"noopener\" target=\n\"_blank\">Intellimize</a></p>\n<h2 id=\"5-unclear-requirements\">5. 
Unclear Requirements</h2>\n<p>One of the most common reasons\nsoftware projects fail is unclear\nrequirements and the lack of a detailed\nexplanation. Very often clients\nthemselves are not sure exactly what they\nwant to see, and as a result, the project\ncannot move forward. Communicating with\nyour clients and asking them for their\ndetailed vision of the future of the\nproduct is the key to ensuring that the\nproject will not fail. – <a href=\n\"https://www.linkedin.com/authwall?trk=gf&amp;trkInfo=AQGlcXIBvNyUrQAAAXHzeXbgntsGO4GuS6nvwkqR8W7jfWklbIIf3_dZh9kNEGUlYMjT2flPLUfvF75K_dRwu5RxRi67xgOkiIxYKePSrHItXinarAa_wNzPFtXcYxJ-xQ8R7bE=&amp;originalReferer=&amp;sessionRedirect=https%3A%2F%2Fwww.linkedin.com%2Fin%2Fdesnues\"\nrel=\"noopener\" target=\"_blank\">Daria\nLeshchenko</a>, <a href=\n\"https://www.supportyourapp.com/\" rel=\n\"noopener\" target=\"_blank\">SupportYourApp\nInc</a>.</p>\n<h2 id=\"6-expecting-a-silver-bullet\">6. Expecting A ‘Silver Bullet’</h2>\n<p>Too often, enthusiasm arises from the\nfalse belief that a proverbial “silver\nbullet” will solve a given problem.\nHowever, proper solutions are rarely so\nsimple—they are a blend of methodology,\nstrategy and team support, not the result\nof a single action, technology or idea.\nTech leaders should encourage open\ncommunication and leverage participatory\ngroup decision-making to solve\nchallenges. –\n<a href=\"https://www.linkedin.com/in/christophertyang/\">Christopher Yang</a>,\n<a href=\"https://www.travelctm.com/\" rel=\n\"noopener\" target=\"_blank\">Corporate\nTravel Management</a></p>\n<h2 id=\"7-working-in-a-silo\">7. Working In A Silo</h2>\n<p>The biggest reason software projects\nfail is because teams embark on a journey\nto build something that is either not a\nbusiness need or does not address the\nright problem. Both reasons are a result\nof misalignment between the business and\ntech. 
To avoid this, it’s crucial to\nidentify the problem the business is\ntrying to solve and then work\ncollectively with the business and not in\na silo. – <a href=\n\"https://twitter.com/tanvirbhangoo\" rel=\n\"noopener\" target=\"_blank\">Tanvir\nBhangoo</a>, Freshii inc.</p>\n<h2 id=\"8-thinking-that-scope-can-be-defined-upfront\">8. Thinking That Scope Can Be Defined Upfront</h2>\n<p>While it is important to understand\nthe problem and define the use cases\nupfront, almost no project can be\nconsidered successful if it does not\nadapt to changing business requirements\nduring development. Unfortunately, some\ntech teams still insist on hitting the\noriginal goal, thus rendering their\neffort ineffective or even a failure. –\n<a href=\"https://twitter.com/songbac\"\nrel=\"noopener\" target=\"_blank\">Song Bac\nToh</a>, <a href=\n\"https://www.tatacommunications.com/\"\nrel=\"noopener\" target=\"_blank\">Tata\nCommunications</a></p>\n<h2 id=\"9-lack-of-coordination-and-detailed-planning\">9. Lack Of Coordination And Detailed Planning</h2>\n<p>Many software projects are late or\nfail due to a lack of good coordination\nand detailed planning. Teams need to\nimplement a bottom-up planning process\nthat identifies dependencies between\ndeliverables and includes estimates from\nthe engineers themselves. After the\nrelease plan is set, I run daily\n15-minute stand-up meetings where issues\nare surfaced and new risks are identified\nand managed. – <a href=\n\"https://twitter.com/dmariani\" rel=\n\"noopener\" target=\"_blank\">Dave\nMariani</a>, <a href=\n\"https://www.atscale.com/\" rel=\"noopener\"\ntarget=\"_blank\">AtScale</a></p>\n<h2 id=\"10-friction-caused-by-undefined-roles\">10. Friction Caused By Undefined Roles</h2>\n<p>Undefined roles often create friction\non project teams. Try using a DACI\nframework from the start to clearly\ndefine who has authority on what. 
For\nstuck projects, recalibrating on who is\nthe Driver, Approver, Contributor and\nInformed within the project can act as a\nhard reset, inspiring renewed\ncollaboration and autonomy. – <a href=\n\"https://www.linkedin.com/authwall?trk=gf&amp;trkInfo=AQHYaPCz2rx9GwAAAXHzfjmgRKSjFTTYqzEEGEOYD2cSFSlW3itlZuGdMqAOy5HQBpW6rpXWuU0IdSyW5uXBD0EBR4f618Sg39eWg0PzRIlUs7IL97gUzyIHcFm4LrPJFshGpFQ=&amp;originalReferer=&amp;sessionRedirect=https%3A%2F%2Fwww.linkedin.com%2Fin%2Fbavidar\"\nrel=\"noopener\" target=\"_blank\">Leore\nAvidar</a>, <a href=\"https://lob.com/\"\nrel=\"noopener\" target=\"_blank\">Lob.com\nInc</a>.</p>\n<h2 id=\"11-expecting-overcustomization-of-software\">11. Expecting Overcustomization Of Software</h2>\n<p>Oftentimes, we believe that software\ncan be customized to a level that will\ntailor to all needs. That’s a\nmisconception. Being realistic is\nimportant. Define the requirements\nregarding the software’s capability.\nMaking change requests as you go requires\nadjustments, but that’s the hat that will\nneed to be worn to avoid frustrations. –\n<a href=\n\"https://www.linkedin.com/authwall?trk=gf&amp;trkInfo=AQGtvfL4eY9LpgAAAXHzfzOgTOo1sAVOwBiiuT_m9Nho7DCjXpfTyPdkZu67qBzbBSrLQ2SoO8L_ncRTI8TRDUZktLzTwMhzC8QgDODIXoD3sWD7NbwA9LdRfuvvz9IXdt-qVGo=&amp;originalReferer=&amp;sessionRedirect=https%3A%2F%2Fwww.linkedin.com%2Fin%2Fbhavna-juneja-b3575459\"\nrel=\"noopener\" target=\"_blank\">Bhavna\nJuneja</a>, <a href=\n\"https://www.infinitysts.com/\" rel=\n\"noopener\" target=\"_blank\">Infinity, a\nStamford Technology Company</a></p>\n<h2 id=\"12-lack-of-discipline\">12. Lack Of Discipline</h2>\n<p>If we were to build a house and keep\nchanging the blueprint, the project\nbudget would spiral out of control and\ndeadline after deadline would be missed.\nCreate a vision of what project success\nlooks like. Lock it down and execute.\nEvery other great idea and detour can be\nconsidered for a later phase of the\nproject. 
– <a href=\n\"https://twitter.com/srpolakoff\" rel=\n\"noopener\" target=\"_blank\">Sam\nPolakoff</a>, <a href=\n\"https://www.nexterus.com/\" rel=\n\"noopener\" target=\"_blank\">Nexterus,\nInc</a>.</p>\n<h2 id=\"13-too-many-hands-in-the-dev-pot\">13. Too Many Hands In The Dev Pot</h2>\n<p>Establish (and limit) who’s involved\nfrom day one, whether you’re building\nin-house or not. This can be difficult\nfor larger tech companies with complex\nprocesses and communication channels. But\nin the app development world, such\ncomplexity is detrimental to crafting a\nfully realized product that matches\neveryone’s unique vision without falling\nprey to scope creep and a never-ending\nproject timeline. – <a href=\n\"https://twitter.com/dasjoshua\" rel=\n\"noopener\" target=\"_blank\">Joshua\nDavidson</a>, <a href=\n\"https://chopdawg.com/\" rel=\"noopener\"\ntarget=\"_blank\">ChopDawg.com</a></p>\n<h2 id=\"14-not-enough-emphasis-on-soft-skills\">14. Not Enough Emphasis On Soft Skills</h2>\n<p>A clear and meaningful focus on\nmanaging the change process is often\nlacking or insufficient. I’ve seen many\nsoftware projects in various categories\nand in an array of different types and\nsizes of organizations run into\nchallenges because they are super-focused\non the technical work but not applying\nenough energy toward training, coaching,\nteam building and soft skills. – <a href=\n\"https://twitter.com/AmithNagarajan\" rel=\n\"noopener\" target=\"_blank\">Amith\nNagarajan</a>, <a href=\n\"https://www.rasa.io/?utm_source=forbes_council_profile\"\nrel=\"noopener\" target=\n\"_blank\">rasa.io</a></p>\n<p><a href=\"https://www.forbes.com/sites/forbestechcouncil/2020/03/31/14-common-reasons-software-projects-fail-and-how-to-avoid-them/\"><em>Original article on Forbes</em></a></p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/",
        "slug": "reasons-software-projects-fail",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "14 Common Reasons Software Projects Fail (And How To Avoid Them)",
        "description": "Tech teams often plunge into new software projects with high hopes, making it all the more frustrating if the project gets derailed.",
        "updated": null,
        "date": "2020-03-31",
        "year": 2020,
        "month": 3,
        "day": 31,
        "taxonomies": {
          "categories": [
            "insights"
          ],
          "tags": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Forbes Technology Council",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/reasons-software-projects-fail/",
        "components": [
          "blog",
          "reasons-software-projects-fail"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "1-not-understanding-the-needs-of-the-business",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#1-not-understanding-the-needs-of-the-business",
            "title": "1. Not Understanding The Needs Of The Business",
            "children": []
          },
          {
            "level": 2,
            "id": "2-inability-to-reach-consensus-on-priorities",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#2-inability-to-reach-consensus-on-priorities",
            "title": "2. Inability To Reach Consensus On Priorities",
            "children": []
          },
          {
            "level": 2,
            "id": "3-lack-of-clarity-and-execution-strategy",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#3-lack-of-clarity-and-execution-strategy",
            "title": "3. Lack Of Clarity And Execution Strategy",
            "children": []
          },
          {
            "level": 2,
            "id": "4-not-starting-with-the-end-customer",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#4-not-starting-with-the-end-customer",
            "title": "4. Not Starting With The End Customer",
            "children": []
          },
          {
            "level": 2,
            "id": "5-unclear-requirements",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#5-unclear-requirements",
            "title": "5. Unclear Requirements",
            "children": []
          },
          {
            "level": 2,
            "id": "6-expecting-a-silver-bullet",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#6-expecting-a-silver-bullet",
            "title": "6. Expecting A ‘Silver Bullet’",
            "children": []
          },
          {
            "level": 2,
            "id": "7-working-in-a-silo",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#7-working-in-a-silo",
            "title": "7. Working In A Silo",
            "children": []
          },
          {
            "level": 2,
            "id": "8-thinking-that-scope-can-be-defined-upfront",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#8-thinking-that-scope-can-be-defined-upfront",
            "title": "8. Thinking That Scope Can Be Defined Upfront",
            "children": []
          },
          {
            "level": 2,
            "id": "9-lack-of-coordination-and-detailed-planning",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#9-lack-of-coordination-and-detailed-planning",
            "title": "9. Lack Of Coordination And Detailed Planning",
            "children": []
          },
          {
            "level": 2,
            "id": "10-friction-caused-by-undefined-roles",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#10-friction-caused-by-undefined-roles",
            "title": "10. Friction Caused By Undefined Roles",
            "children": []
          },
          {
            "level": 2,
            "id": "11-expecting-overcustomization-of-software",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#11-expecting-overcustomization-of-software",
            "title": "11. Expecting Overcustomization Of Software",
            "children": []
          },
          {
            "level": 2,
            "id": "12-lack-of-discipline",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#12-lack-of-discipline",
            "title": "12. Lack Of Discipline",
            "children": []
          },
          {
            "level": 2,
            "id": "13-too-many-hands-in-the-dev-pot",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#13-too-many-hands-in-the-dev-pot",
            "title": "13. Too Many Hands In The Dev Pot",
            "children": []
          },
          {
            "level": 2,
            "id": "14-not-enough-emphasis-on-soft-skills",
            "permalink": "https://tech.fpcomplete.com/blog/reasons-software-projects-fail/#14-not-enough-emphasis-on-soft-skills",
            "title": "14. Not Enough Emphasis On Soft Skills",
            "children": []
          }
        ],
        "word_count": 1390,
        "reading_time": 7,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/running-open-source-business.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/running-open-source-business/",
        "slug": "running-open-source-business",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "How We Run an Open Source Based Business",
        "description": "Open source and shareware are methods of working on the free (as in beer) software economy. Haskell, Docker, Blockchain, and DevOps rely on open source. Profit and business are compatible with open source through IT Consulting and value-added software engineering.",
        "updated": null,
        "date": "2019-03-26T10:13:00Z",
        "year": 2019,
        "month": 3,
        "day": 26,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/running-open-source-business.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/running-open-source-business/",
        "components": [
          "blog",
          "running-open-source-business"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/enhancing-file-durability-in-programs.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/enhancing-file-durability-in-programs/",
        "slug": "enhancing-file-durability-in-programs",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Enhancing File Durability in Your Programs",
        "description": "An unexpected shutdown should not affect the durability of the filesystem our programs rely upon. We should always strive to build systems that enhance the file durability of our programs. If we don't we will love valuable data and could even put lives at risk is the data is critical.",
        "updated": null,
        "date": "2019-03-12T08:15:00Z",
        "year": 2019,
        "month": 3,
        "day": 12,
        "taxonomies": {
          "tags": [
            "haskell"
          ],
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Roman Gonzalez",
          "html": "hubspot-blogs/enhancing-file-durability-in-programs.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/enhancing-file-durability-in-programs/",
        "components": [
          "blog",
          "enhancing-file-durability-in-programs"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/guide-to-open-source-maintenance.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2018/07/guide-to-open-source-maintenance/",
        "slug": "guide-to-open-source-maintenance",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Guide to open source maintenance",
        "description": "An opinionated guide to successfully maintaining open source projects.",
        "updated": null,
        "date": "2018-07-26T22:47:00Z",
        "year": 2018,
        "month": 7,
        "day": 26,
        "taxonomies": {
          "tags": [
            "insights"
          ],
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/guide-to-open-source-maintenance.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/2018/07/guide-to-open-source-maintenance/",
        "components": [
          "blog",
          "2018",
          "07",
          "guide-to-open-source-maintenance"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/10-common-mistakes-to-avoid-in-fintech-software-development.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/10-common-mistakes-to-avoid-in-fintech-software-development/",
        "slug": "10-common-mistakes-to-avoid-in-fintech-software-development",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "10 Common Mistakes to Avoid in FinTech Software Development",
        "description": "There are a lot of uncertainties in the newer areas of the FinTech industry right now and knowing how to navigate these issues is not an easy task. Learn about the most common mistakes to avoid here. ",
        "updated": null,
        "date": "2018-02-28T12:13:00Z",
        "year": 2018,
        "month": 2,
        "day": 28,
        "taxonomies": {
          "categories": [
            "functional programming",
            "insights"
          ],
          "tags": [
            "fintech"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/10-common-mistakes-to-avoid-in-fintech-software-development.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/10-common-mistakes-to-avoid-in-fintech-software-development/",
        "components": [
          "blog",
          "10-common-mistakes-to-avoid-in-fintech-software-development"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/selecting-the-right-level-of-quality-assurance.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/selecting-the-right-level-of-quality-assurance/",
        "slug": "selecting-the-right-level-of-quality-assurance",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Selecting the Right Level of Quality Assurance",
        "description": "Ensuring the quality of a new system is vital to its success. But knowing the correct level of (QA) Quality Assurance to invest in a project is important for balancing these successes with its potential ROI. Discover how to determine the correct level of investment with our useful QA scale.",
        "updated": null,
        "date": "2018-02-27T10:56:00Z",
        "year": 2018,
        "month": 2,
        "day": 27,
        "taxonomies": {
          "categories": [
            "insights"
          ],
          "tags": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/selecting-the-right-level-of-quality-assurance.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/selecting-the-right-level-of-quality-assurance/",
        "components": [
          "blog",
          "selecting-the-right-level-of-quality-assurance"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/signs-your-business-needs-a-devops-consultant.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/signs-your-business-needs-a-devops-consultant/",
        "slug": "signs-your-business-needs-a-devops-consultant",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Signs Your Business Needs a DevOps Consultant",
        "description": "Today’s business challenges cause issues with traditional deployment models. Find out why a DevOps consultant may be right for you. ",
        "updated": null,
        "date": "2018-01-18T15:06:00Z",
        "year": 2018,
        "month": 1,
        "day": 18,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "insights",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/signs-your-business-needs-a-devops-consultant.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/signs-your-business-needs-a-devops-consultant/",
        "components": [
          "blog",
          "signs-your-business-needs-a-devops-consultant"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/big-data-vs-business-intelligence-blog-post.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/big-data-vs-business-intelligence-blog-post/",
        "slug": "big-data-vs-business-intelligence-blog-post",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Big Data vs Business Intelligence: What’s the difference?",
        "description": "The relationship between big data and business intelligence, as well as how to leverage them, is important for making strategic decisions that spur growth. Explore this relationship and how it can help you reach your business objectives here. ",
        "updated": null,
        "date": "2018-01-10T12:58:00Z",
        "year": 2018,
        "month": 1,
        "day": 10,
        "taxonomies": {
          "tags": [
            "data"
          ],
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/big-data-vs-business-intelligence-blog-post.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/big-data-vs-business-intelligence-blog-post/",
        "components": [
          "blog",
          "big-data-vs-business-intelligence-blog-post"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/devops-value-how-to-measure-the-success-of-devops.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/devops-value-how-to-measure-the-success-of-devops/",
        "slug": "devops-value-how-to-measure-the-success-of-devops",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "DevOps Value: How to Measure the Success of DevOps",
        "description": "Faster time to market and a lower failure rate are just the beginning of the many benefits DevOps offers companies. Discover the measurable metrics and KPIs, as well as the true business value DevOps offers.",
        "updated": null,
        "date": "2018-01-04T13:51:00Z",
        "year": 2018,
        "month": 1,
        "day": 4,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Robert Bobbett",
          "html": "hubspot-blogs/devops-value-how-to-measure-the-success-of-devops.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/devops-value-how-to-measure-the-success-of-devops/",
        "components": [
          "blog",
          "devops-value-how-to-measure-the-success-of-devops"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/software-release-management-best-practices.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/software-release-management-best-practices/",
        "slug": "software-release-management-best-practices",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Software Release Management Best Practices",
        "description": "Release management is an ever-evolving practice. Discover today’s release management best practices and how they can help streamline your software deployment.",
        "updated": null,
        "date": "2017-12-13T13:11:00Z",
        "year": 2017,
        "month": 12,
        "day": 13,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/software-release-management-best-practices.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/software-release-management-best-practices/",
        "components": [
          "blog",
          "software-release-management-best-practices"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/techniques-for-success-with-offshore-software-development.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/techniques-for-success-with-offshore-software-development/",
        "slug": "techniques-for-success-with-offshore-software-development",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Techniques for Success with Offshore Software Development",
        "description": "Knowing what to look out for with offshore software development can be daunting. Aaron Contorer details how to make offshore software development successful. ",
        "updated": null,
        "date": "2017-12-06T14:07:00Z",
        "year": 2017,
        "month": 12,
        "day": 6,
        "taxonomies": {
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/techniques-for-success-with-offshore-software-development.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/techniques-for-success-with-offshore-software-development/",
        "components": [
          "blog",
          "techniques-for-success-with-offshore-software-development"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager/",
        "slug": "my-devops-journey-and-how-i-became-a-recovering-it-operations-manager",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "My DevOps Journey and How I Became a Recovering IT Operations Manager",
        "description": "Learn how containerization and automated deployments laid the groundwork for what would become known as DevOps at a Fortune 500 IT company.",
        "updated": null,
        "date": "2017-11-15T13:30:00Z",
        "year": 2017,
        "month": 11,
        "day": 15,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "devops",
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Steve Bogdan",
          "html": "hubspot-blogs/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager/",
        "components": [
          "blog",
          "my-devops-journey-and-how-i-became-a-recovering-it-operations-manager"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/distributing-packages-without-sysadmin.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/05/distributing-packages-without-sysadmin/",
        "slug": "distributing-packages-without-sysadmin",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Distributing our packages without a sysadmin",
        "description": ".",
        "updated": null,
        "date": "2015-05-13T00:00:00Z",
        "year": 2015,
        "month": 5,
        "day": 13,
        "taxonomies": {
          "tags": [
            "devops"
          ],
          "categories": [
            "insights",
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/distributing-packages-without-sysadmin.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/2015/05/distributing-packages-without-sysadmin/",
        "components": [
          "blog",
          "2015",
          "05",
          "distributing-packages-without-sysadmin"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/announce-ide-backend.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/03/announce-ide-backend/",
        "slug": "announce-ide-backend",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Announcing: open sourcing of ide-backend",
        "description": ".",
        "updated": null,
        "date": "2015-03-30T04:00:00Z",
        "year": 2015,
        "month": 3,
        "day": 30,
        "taxonomies": {
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/announce-ide-backend.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/2015/03/announce-ide-backend/",
        "components": [
          "blog",
          "2015",
          "03",
          "announce-ide-backend"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/fp-complete-software-pipeline.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2015/01/fp-complete-software-pipeline/",
        "slug": "fp-complete-software-pipeline",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Complete's software pipeline",
        "description": ".",
        "updated": null,
        "date": "2015-01-13T13:00:00Z",
        "year": 2015,
        "month": 1,
        "day": 13,
        "taxonomies": {
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "html": "hubspot-blogs/fp-complete-software-pipeline.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/2015/01/fp-complete-software-pipeline/",
        "components": [
          "blog",
          "2015",
          "01",
          "fp-complete-software-pipeline"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/beta-discount-last-call.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/08/beta-discount-last-call/",
        "slug": "beta-discount-last-call",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "4 Days Left to Get Beta User Discount",
        "description": ".",
        "updated": null,
        "date": "2013-08-27T13:12:00Z",
        "year": 2013,
        "month": 8,
        "day": 27,
        "taxonomies": {
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Natalia Muska",
          "html": "hubspot-blogs/beta-discount-last-call.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/2013/08/beta-discount-last-call/",
        "components": [
          "blog",
          "2013",
          "08",
          "beta-discount-last-call"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/academic-accounts.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/07/academic-accounts/",
        "slug": "academic-accounts",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FP Complete Announces Free Academic Accounts",
        "description": ".",
        "updated": null,
        "date": "2013-07-25T19:30:00Z",
        "year": 2013,
        "month": 7,
        "day": 25,
        "taxonomies": {
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Natalia Muska",
          "html": "hubspot-blogs/academic-accounts.html",
          "blogimage": "/images/blog-listing/executive-insights.png"
        },
        "path": "/blog/2013/07/academic-accounts/",
        "components": [
          "blog",
          "2013",
          "07",
          "academic-accounts"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/joining-fp-complete.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/01/joining-fp-complete/",
        "slug": "joining-fp-complete",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Joining FP Complete",
        "description": ".",
        "updated": null,
        "date": "2013-01-03T16:32:00Z",
        "year": 2013,
        "month": 1,
        "day": 3,
        "taxonomies": {
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Andy Adams-Moran",
          "html": "hubspot-blogs/joining-fp-complete.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/01/joining-fp-complete/",
        "components": [
          "blog",
          "2013",
          "01",
          "joining-fp-complete"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/why-im-investing-in-fp-complete.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/2013/01/why-im-investing-in-fp-complete/",
        "slug": "why-im-investing-in-fp-complete",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Why I'm investing in FP Complete",
        "description": ".",
        "updated": null,
        "date": "2012-12-17T16:32:00Z",
        "year": 2012,
        "month": 12,
        "day": 17,
        "taxonomies": {
          "categories": [
            "insights"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Simon Peyton-Jones",
          "html": "hubspot-blogs/why-im-investing-in-fp-complete.html",
          "blogimage": "/images/blog-listing/functional.png"
        },
        "path": "/blog/2013/01/why-im-investing-in-fp-complete/",
        "components": [
          "blog",
          "2013",
          "01",
          "why-im-investing-in-fp-complete"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 27
  },
  {
    "name": "IT Compliance",
    "slug": "it-compliance",
    "path": "/categories/it-compliance/",
    "permalink": "https://tech.fpcomplete.com/categories/it-compliance/",
    "pages": [
      {
        "relative_path": "blog/pathway-to-information-security-management-and-certification.md",
        "colocated_path": null,
        "content": "<p><strong>The Pathway to Information Security Management and Certification</strong></p>\n<p>Information security is a complex area to handle well.  The possible risks to information assets and reputation, including computer systems and countless filing cabinets full of valuable proprietary information, are difficult to determine and bring under control.  Plus, this needs to be done in ways that don't unduly interfere with the legitimate use of information by authorized users. </p>\n<p>The most practical and cost-effective way to handle information security and governance obligations, and to be seen to be doing so, is to adopt an Information Security Management System (ISMS) that complies with an international standard such as SOC-2 or ISO 27001.  An ISMS is a framework of policies, processes and controls used to manage information security in a structured, systematic manner.</p>\n<p><strong>Why implement an ISMS and pursue an Information Security Certification?</strong></p>\n<ul>\n<li>Improves policies and procedures by addressing critical security-related processes and controls </li>\n<li>Minimizes the actual and perceived impact of data breaches </li>\n<li>Provides objective verification that there are controls on the security risks related to Information Assets </li>\n</ul>\n<p>At a high level, the ISMS will help minimize the costs of security incidents and enhance your brand.  
In more detail, the ISMS will be used to: </p>\n<ul>\n<li>systematically assess the organization's information risks in order to establish and prioritize its security requirements, primarily in terms of the need to protect the confidentiality, integrity and availability of information </li>\n<li>design a suite of security controls, both technical and non-technical in nature, to address any risks deemed unacceptable by management </li>\n<li>ensure that security controls satisfy compliance obligations under applicable laws, regulations and contracts (such as privacy laws, PCI and HIPAA) </li>\n<li>operate, manage and maintain the security controls </li>\n<li>monitor and continuously improve the protection of valuable information assets, for example updating the controls when the risks change (e.g. responding to novel hacker attacks or frauds, ideally in advance thereby preventing us from suffering actual incidents!). </li>\n</ul>\n<p><strong>Information Security Focus Areas</strong></p>\n<ul>\n<li>What is the proper scope for the organization? </li>\n<li>What are applicable areas and controls? </li>\n<li>Are the proper policies &amp; procedures documented? </li>\n<li>Is the organization living these values?  </li>\n</ul>\n<p><strong>What are the Outcomes</strong></p>\n<ul>\n<li>Improved InfoSec policies and procedures </li>\n<li>Confirmation of the implementation of Incident and Risk Management </li>\n<li>Completion of Asset and Risk register </li>\n<li>Implementation of an Information Security Management System (ISMS) for your scope </li>\n<li>Prepared for independent certification auditor </li>\n<li>Gain trust from customers and partners. 
</li>\n</ul>\n<p><strong>Information Security Certification Preparation Project</strong></p>\n<p><img src=\"/images/blog/info-sec-cert-prep-prep-project.png\" alt=\"Information Security Certification Preparation Project\" /></p>\n<p><strong>Key Project Activities</strong></p>\n<ul>\n<li>Define Certification Scope </li>\n<li>Perform Gap Assessment against the relevant standard (SOC-2, ISO 27001)</li>\n<li>Identify Documentation Requirements </li>\n<li>Identify Evidence Requirements </li>\n<li>Develop New Documents required for certification </li>\n<li>Perform Impact Assessment </li>\n<li>Maintain Data Flow diagrams </li>\n<li>Maintain Risk Register </li>\n<li>Prepare for Pre-Certification Audit </li>\n<li>Remediate findings from Pre-Cert Audit </li>\n<li>Prepare for Stage 1 and Stage 2 audits </li>\n<li>Obtain Standards Body Certification or audited Report </li>\n</ul>\n<p>FP Complete has extensive experience in the preparation of SOC-2 and ISO 27001 certifications, as well as many other security certifications.  Contact us if we can help your organization. </p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/pathway-to-information-security-management-and-certification/",
        "slug": "pathway-to-information-security-management-and-certification",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "The Pathway to Information Security Management and Certification ",
        "description": "Information security is a complex area to handle well.",
        "updated": null,
        "date": "2021-06-10",
        "year": 2021,
        "month": 6,
        "day": 10,
        "taxonomies": {
          "tags": [
            "compliance"
          ],
          "categories": [
            "IT Compliance"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Jeffrey Silver",
          "blogimage": "/images/blog-listing/distributed-ledger.png",
          "image": "images/blog/thumbs/intermediate-training-courses.png"
        },
        "path": "/blog/pathway-to-information-security-management-and-certification/",
        "components": [
          "blog",
          "pathway-to-information-security-management-and-certification"
        ],
        "summary": null,
        "toc": [],
        "word_count": 505,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/istio-mtls-debugging-story.md",
        "colocated_path": null,
        "content": "<p>Last week, our team was working on a feature enhancement to <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>. We work with clients in regulated industries, and one of the requirements was fully encrypted traffic throughout the cluster. While we've supported Istio's mutual TLS (mTLS) as an optional feature for end-user applications, not all of our built-in services were using mTLS strict mode. We were working on rolling out that support.</p>\n<p>One of the cornerstones of Kube360 is our centralized authentication system, which is primarily supplied by a service (called <code>k3dash</code>) that receives incoming traffic, performs authentication against an external identity provider (such as Okta, Azure AD, or others), and then provides those credentials to the other services within the cluster, such as the Kubernetes Dashboard or Grafana. This service in particular was giving some trouble.</p>\n<p>Before diving into the bugs and the debugging journey, however, let's review both Istio's mTLS support and relevant details of how <code>k3dash</code> operates.</p>\n<p><em>Interested in solving these kinds of problems? We're looking for experienced DevOps engineers to join our global team. We're hiring globally, and particularly looking for another US lead engineer. If you're interested, <a href=\"mailto:[email protected]\">send your CV to [email protected]</a>.</em></p>\n<h2 id=\"what-is-mtls\">What is mTLS?</h2>\n<p>In a typical Kubernetes setup, encrypted traffic comes into the cluster and hits a load balancer. That load balancer terminates the TLS connection, resulting in decrypted traffic. That decrypted traffic is then sent to the relevant service within the cluster. Since traffic within the cluster is typically considered safe, for many use cases this is an acceptable approach.</p>\n<p>But for some use cases, such as handling Personally Identifiable Information (PII), extra safeguards may be desired or required. 
In those cases, we would like to ensure that <em>all</em> network traffic, even traffic inside the same cluster, is encrypted. That gives extra guarantees against both snooping (reading data in transit) and spoofing (faking the source of data) attacks. This can help mitigate the impact of other flaws in the system.</p>\n<p>Implementing this complete data-in-transit encryption system manually requires a major overhaul to essentially every application in the cluster. You'll need to teach all of them to terminate their own TLS connections, issue certificates for all applications, and add a new Certificate Authority for all applications to respect.</p>\n<p>Istio's mTLS handles this outside of the application. It installs a sidecar that communicates with your application over a localhost connection, bypassing exposed network traffic. It uses sophisticated port forwarding rules (via IP tables) to redirect incoming and outgoing traffic to and from the pod to go via the sidecar. And the Envoy sidecar in the proxy handles all the logic of obtaining TLS certificates, refreshing keys, termination, etc.</p>\n<p>The way Istio handles all of this is pretty incredible. When it works, it works great. And when it fails, it can be disastrously difficult to debug. Which is what happened here (though thankfully it took less than a day to get to a conclusion). In the realm of <em>epic foreshadowment</em>, let me point out three specific points about Istio's mTLS worth mentioning.</p>\n<ul>\n<li>In strict mode, which is what we're going for, the Envoy sidecar will reject any incoming plaintext communication.</li>\n<li>Something I hadn't recognized at first, but now have fully internalized: normally, if you make an HTTP connection to a host that doesn't exist, you'll get a failed connection error. You definitely <em>won't</em> get an HTTP response. With Istio, however, you'll <em>always</em> make a successful outgoing HTTP connection, since your connection is going to Envoy itself. 
If the Envoy proxy cannot make the connection, it will return an HTTP response body with a 503 error message, like most proxies.</li>\n<li>The Envoy proxy has special handling for some protocols. Most importantly, if you make a plaintext HTTP outgoing connection, the Envoy proxy has sophisticated abilities to parse the outgoing request, understand details about various headers, and do intelligent routing.</li>\n</ul>\n<p>OK, that's mTLS. Let's talk about the other player here: <code>k3dash</code>.</p>\n<h2 id=\"k3dash-and-reverse-proxying\"><code>k3dash</code> and reverse proxying</h2>\n<p>The primary method <code>k3dash</code> uses to provide authentication credentials to other services inside the cluster is HTTP reverse proxying. This is a common technique, and common libraries exist for doing it. In fact, <a href=\"https://www.stackage.org/package/http-reverse-proxy\">I wrote one such library</a> years ago. We've already mentioned a common use case of reverse proxying: load balancing. In a reverse proxy situation, incoming traffic is received by one server, which analyzes the incoming request, performs some transformations, and then chooses a destination service to forward the request to.</p>\n<p>One of the most important aspects of reverse proxying is header management. There are a few different things you can do at the header level, such as:</p>\n<ul>\n<li>Remove hop-by-hop headers, such as <code>transfer-encoding</code>, which apply to a single hop and not the end-to-end communication between client and server.</li>\n<li>Inject new headers. For example, in <code>k3dash</code>, we regularly inject headers recognized by the final services for authentication purposes.</li>\n<li>Leave headers completely untouched. 
This is often the case with headers like <code>content-type</code>, where we typically want the client and final server to exchange data without any interference.</li>\n</ul>\n<p>As one <em>epic foreshadowment</em> example, consider the <code>Host</code> header in a typical reverse proxy situation. I may have a single load balancer handling traffic for a dozen different domain names, including domain names <code>A</code> and <code>B</code>. And perhaps I have a single service behind the reverse proxy serving the traffic for both of those domain names. I need to make sure that my load balancer forwards on the <code>Host</code> header to the final service, so it can decide how to respond to the request.</p>\n<p><code>k3dash</code> in fact uses the library linked above for its implementation, and is following fairly standard header forwarding rules, plus making some specific modifications within the application.</p>\n<p>I think that's enough backstory, and perhaps you're already beginning to piece together what went wrong based on my clues above. Anyway, let's dive in!</p>\n<h2 id=\"the-problem\">The problem</h2>\n<p>One of my coworkers, Sibi, got started on the Istio mTLS strict mode migration. He got strict mode turned on in a test cluster, and then began to figure out what was broken. I don't know all the preliminary changes he made. But when he reached out to me, he'd gotten us to a point where the Kubernetes load balancer was successfully receiving the incoming requests for <code>k3dash</code> and forwarding them along to <code>k3dash</code>. <code>k3dash</code> was able to log the user in and provide its own UI display. All good so far.</p>\n<p>However, following through from the main UI to the Kubernetes Dashboard would fail, and we'd end up with this error message in the browser:</p>\n<blockquote>\n<p>upstream connect error or disconnect/reset before headers. 
reset reason: connection failure</p>\n</blockquote>\n<p>Sibi believed this to be a problem with the <code>k3dash</code> codebase itself and asked me to step in to help debug.</p>\n<h2 id=\"the-wrong-rabbit-hole-and-incredible-laziness\">The wrong rabbit hole, and incredible laziness</h2>\n<p>This whole section is just a cathartic gripe session on how I foot-gunned myself. I'm entirely to blame for my own pain, as we're about to see.</p>\n<p>It seemed pretty clear that the outgoing connection from the <code>k3dash</code> pod to the <code>kubernetes-dashboard</code> pod was failing. (And this turned out to be a safe guess.) The first thing I wanted to do was make a simpler repro, which in this case involved <code>kubectl exec</code>ing into the <code>k3dash</code> container and <code>curl</code>ing to the in-cluster service endpoint. Essentially:</p>\n<pre><code>$ curl -ivvv http:&#x2F;&#x2F;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local&#x2F;\n*   Trying 172.20.165.228...\n* TCP_NODELAY set\n* Connected to kube360-kubernetes-dashboard.kube360-system.svc.cluster.local (172.20.165.228) port 80 (#0)\n&gt; GET &#x2F; HTTP&#x2F;1.1\n&gt; Host: kube360-kubernetes-dashboard.kube360-system.svc.cluster.local\n&gt; User-Agent: curl&#x2F;7.58.0\n&gt; Accept: *&#x2F;*\n&gt;\n&lt; HTTP&#x2F;1.1 503 Service Unavailable\nHTTP&#x2F;1.1 503 Service Unavailable\n&lt; content-length: 84\ncontent-length: 84\n&lt; content-type: text&#x2F;plain\ncontent-type: text&#x2F;plain\n&lt; date: Wed, 14 Jul 2021 15:29:04 GMT\ndate: Wed, 14 Jul 2021 15:29:04 GMT\n&lt; server: envoy\nserver: envoy\n&lt;\n* Connection #0 to host kube360-kubernetes-dashboard.kube360-system.svc.cluster.local left intact\nupstream connect error or disconnect&#x2F;reset before headers. reset reason: local reset\n</code></pre>\n<p>This reproed the problem right away. Great! 
I was now completely convinced that the problem was not <code>k3dash</code> specific, since neither <code>curl</code> nor <code>k3dash</code> could make the connection, and they both gave the same <code>upstream connect error</code> message. I could think of a few different reasons for this to happen, none of which were correct:</p>\n<ul>\n<li>The outgoing packets from the container were not being sent to the Envoy proxy. I strongly believed this one for a while. But if I'd thought a bit harder, I would have realized that this was completely impossible. That <code>upstream connect error</code> message was of course coming from the Envoy proxy itself! If we were having a normal connection failure, we would have received the error message at the TCP level, not as an HTTP 503 response code. Next!</li>\n<li>The Envoy sidecar was receiving the packets, but the mesh was confused enough that it couldn't figure out how to connect to the destination Envoy sidecar. This turned out to be partially right, but not in the way I thought.</li>\n</ul>\n<p>I futzed around with lots of different attempts here but was essentially stalled. Until Sibi noticed something fascinating. It turns out that the following, seemingly nonsensical command <em>did</em> work:</p>\n<pre><code>curl http:&#x2F;&#x2F;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local:443&#x2F;\n</code></pre>\n<p>For some reason, making an <em>insecure</em> HTTP request over 443, the <em>secure</em> HTTPS port, worked. This made no sense, of course. Why would using the wrong port fix everything? And this is where incredible laziness comes into play. You see, Kubernetes Dashboard's default configuration uses TLS, and requires all of that setup I mentioned above about passing around certificates and updating accepted Certificate Authorities. But you can turn off that requirement, and make it listen on plain text. 
Since (1) this was intracluster communication, and (2) we've always had strict mTLS on our roadmap, we decided to simply turn off TLS in the Kubernetes Dashboard. However, when doing so, I forgot to switch the port number from 443 to 80.</p>\n<p>Not to worry though! I <em>did</em> remember to correctly configure <code>k3dash</code> to communicate with Kubernetes Dashboard, using insecure HTTP, over port 443. Since both parties agreed on the port, it didn't matter that it was the wrong port.</p>\n<p>But this was all very frustrating. It meant that the &quot;repro&quot; wasn't a repro at all. <code>curl</code>ing on the wrong port was giving the same error message, but for a different reason. In the meantime, we went ahead and changed Kubernetes Dashboard to listen on port 80 and <code>k3dash</code> to connect on port 80. We thought there <em>may</em> be a possibility that the Envoy proxy was giving some special treatment to the port number, which in retrospect doesn't really make much sense. In any event, this ended in a situation where our &quot;repro&quot; wasn't a repro at all.</p>\n<h2 id=\"the-bug-is-in-k3dash\">The bug is in <code>k3dash</code></h2>\n<p>Now it was clear that Sibi was right. <code>curl</code> could connect, <code>k3dash</code> couldn't. The bug <em>must</em> be inside <code>k3dash</code>. But I couldn't figure out how. Being the author of essentially all the HTTP libraries involved in this toolchain, I began to worry that my HTTP client library itself may somehow be the source of the bug. I went down a rabbit hole there too, putting together some minimal sample programs outside <code>k3dash</code>. I <code>kubectl cp</code>ed them over and then ran them... and everything worked fine. Phew, my libraries were working, but not <code>k3dash</code>.</p>\n<p>Then I did the thing I should have done at the very beginning. I looked at the logs very, very carefully. Remember, <code>k3dash</code> is doing a reverse proxy. 
So, it receives an incoming request, modifies it, makes the new request, and then sends a modified response back. The logs included the modified outgoing HTTP request (some fields modified to remove private information):</p>\n<pre><code>2021-07-15 05:20:39.820662778 UTC ServiceRequest Request {\n  host                 = &quot;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local&quot;\n  port                 = 80\n  secure               = False\n  requestHeaders       = [(&quot;X-Real-IP&quot;,&quot;127.0.0.1&quot;),(&quot;host&quot;,&quot;test-kube360-hostname.hidden&quot;),(&quot;upgrade-insecure-requests&quot;,&quot;1&quot;),(&quot;user-agent&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;accept&quot;,&quot;text&#x2F;html,application&#x2F;xhtml+xml,application&#x2F;xml;q=0.9,image&#x2F;avif,image&#x2F;webp,image&#x2F;apng,*&#x2F;*;q=0.8,application&#x2F;signed-exchange;v=b3;q=0.9&quot;),(&quot;sec-gpc&quot;,&quot;1&quot;),(&quot;referer&quot;,&quot;http:&#x2F;&#x2F;test-kube360-hostname.hidden&#x2F;dash&quot;),(&quot;accept-language&quot;,&quot;en-US,en;q=0.9&quot;),(&quot;cookie&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;x-forwarded-for&quot;,&quot;192.168.0.1&quot;),(&quot;x-forwarded-proto&quot;,&quot;http&quot;),(&quot;x-request-id&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;x-envoy-attempt-count&quot;,&quot;3&quot;),(&quot;x-envoy-internal&quot;,&quot;true&quot;),(&quot;x-forwarded-client-cert&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;Authorization&quot;,&quot;&lt;REDACTED&gt;&quot;)]\n  path                 = &quot;&#x2F;&quot;\n  queryString          = &quot;&quot;\n  method               = &quot;GET&quot;\n  proxy                = Nothing\n  rawBody              = False\n  redirectCount        = 0\n  responseTimeout      = ResponseTimeoutNone\n  requestVersion       = HTTP&#x2F;1.1\n}\n</code></pre>\n<p>I tried to leave in enough content here to give you the same overwhelmed sense that I had looking at it. 
Keep in mind the <code>requestHeaders</code> field is in practice about three times as long. Anyway, with the slimmed down headers, and all my hints throughout, see if you can guess what the problem is.</p>\n<p>Ready? It's the <code>Host</code> header! Let's take a quote from the <a href=\"https://istio.io/latest/docs/ops/configuration/traffic-management/traffic-routing/\">Istio traffic routing documentation</a>. Regarding HTTP traffic, it says:</p>\n<blockquote>\n<p>Requests are routed based on the port and <em><code>Host</code></em> header, rather than port and IP. This means the destination IP address is effectively ignored. For example, <code>curl 8.8.8.8 -H &quot;Host: productpage.default.svc.cluster.local&quot;</code>, would be routed to the <code>productpage</code> Service.</p>\n</blockquote>\n<p>See the problem? <code>k3dash</code> is behaving like a standard reverse proxy, and including the <code>Host</code> header, which is almost always the right thing to do. But not here! In this case, that <code>Host</code> header we're forwarding is confusing Envoy. Envoy is trying to connect to something (<code>test-kube360-hostname.hidden</code>) that doesn't respond to its mTLS connections. That's why we get the <code>upstream connect error</code>. And that's why we got the same response as when we used the wrong port number, since Envoy is configured to only receive incoming traffic on a port that the service is actually listening to.</p>\n<h2 id=\"the-fix\">The fix</h2>\n<p>After all of that, the fix is rather anticlimactic:</p>\n<pre data-lang=\"diff\" class=\"language-diff \"><code class=\"language-diff\" data-lang=\"diff\">-(\\(h, _) -&gt; not (Set.member h _serviceStripHeaders))\n+-- Strip out host headers, since they confuse the Envoy proxy\n+(\\(h, _) -&gt; not (Set.member h _serviceStripHeaders) &amp;&amp; h &#x2F;= &quot;Host&quot;)\n</code></pre>\n<p>We already had logic in <code>k3dash</code> to strip away specific headers for each service. 
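</p>
<p>As a side note on why the diff above compares against <code>Host</code> while the logged header is lowercase <code>host</code>: in the usual <code>http-types</code> representation, header names are case-insensitive <code>CI ByteString</code> values, so the comparison matches either capitalization. The following is a minimal, self-contained sketch of that filtering predicate; the <code>stripHeaders</code> name and the sample headers are hypothetical illustrations, not the actual <code>k3dash</code> code:</p>

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Hypothetical sketch, not the actual k3dash source: drop any header in
-- a per-service strip set, plus the Host header.
import           Data.ByteString (ByteString)
import qualified Data.CaseInsensitive as CI
import qualified Data.Set as Set

type HeaderName = CI.CI ByteString

-- Keep a header only if it is neither in the strip set nor Host.
-- CI equality means "Host" also matches the lowercase "host".
stripHeaders :: Set.Set HeaderName
             -> [(HeaderName, ByteString)]
             -> [(HeaderName, ByteString)]
stripHeaders toStrip =
  filter (\(h, _) -> not (Set.member h toStrip) && h /= "Host")

main :: IO ()
main = print $ stripHeaders
  (Set.fromList ["transfer-encoding"])
  [ ("host", "test-kube360-hostname.hidden")
  , ("transfer-encoding", "chunked")
  , ("accept", "text/html")
  ]
```

<p>Running the sketch drops both the hop-by-hop header and the <code>Host</code> header, regardless of capitalization, leaving only <code>accept</code>.</p>
<p>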
And it turns out this logic was primarily used to strip out the <code>Host</code> header for services that got confused when they saw it! Now we just need to strip away the <code>Host</code> header for all the services instead. Fortunately, none of our services perform any logic based on the <code>Host</code> header, so with that in place, we should be good. We deployed the new version of <code>k3dash</code>, and voilà! Everything worked.</p>\n<h2 id=\"the-moral-of-the-story\">The moral of the story</h2>\n<p>I walked away from this adventure with a much better understanding of how Istio interacts with applications, which is great. I got a great reminder to look more carefully at log messages before hardening my assumptions about the source of a bug. And I got a great kick in the pants for being lazy about port number fixes.</p>\n<p>All in all, it was about six hours of debugging fun. And to quote a great Hebrew phrase about it, &quot;היה טוב, וטוב שהיה&quot; (it was good, and good that it <em>was</em> (in the past)).</p>\n<hr />\n<p>As I mentioned above, we're actively looking for new DevOps candidates, especially US-based candidates. 
If you're interested in working with a global team of experienced DevOps, Rust, and Haskell engineers, consider <a href=\"mailto:[email protected]\">sending us your CV</a>.</p>\n<p>And if you're looking for a solid Kubernetes platform, batteries included, so you can offload this kind of tedious debugging to some other unfortunate souls (read: us), <a href=\"https://tech.fpcomplete.com/products/kube360/\">check out Kube360</a>.</p>\n<p>If you liked this article, you may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-kubernetes-windows/\">Deploying Rust with Windows Containers on Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/\">Cloud Vendor Neutrality</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">Secure defaults with Kubernetes Security with Kube360</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
        "slug": "istio-mtls-debugging-story",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "An Istio/mutual TLS debugging story",
        "description": "While rolling out Istio's strict mTLS mode in our Kube360 product, we ran into an interesting corner case problem.",
        "updated": null,
        "date": "2021-07-20",
        "year": 2021,
        "month": 7,
        "day": 20,
        "taxonomies": {
          "tags": [
            "kubernetes",
            "regulated"
          ],
          "categories": [
            "devops",
            "kube360",
            "it-compliance"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/devops.png",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/istio-mtls-debugging-story.png"
        },
        "path": "/blog/istio-mtls-debugging-story/",
        "components": [
          "blog",
          "istio-mtls-debugging-story"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "what-is-mtls",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#what-is-mtls",
            "title": "What is mTLS?",
            "children": []
          },
          {
            "level": 2,
            "id": "k3dash-and-reverse-proxying",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#k3dash-and-reverse-proxying",
            "title": "k3dash and reverse proxying",
            "children": []
          },
          {
            "level": 2,
            "id": "the-problem",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-problem",
            "title": "The problem",
            "children": []
          },
          {
            "level": 2,
            "id": "the-wrong-rabbit-hole-and-incredible-laziness",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-wrong-rabbit-hole-and-incredible-laziness",
            "title": "The wrong rabbit hole, and incredible laziness",
            "children": []
          },
          {
            "level": 2,
            "id": "the-bug-is-in-k3dash",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-bug-is-in-k3dash",
            "title": "The bug is in k3dash",
            "children": []
          },
          {
            "level": 2,
            "id": "the-fix",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-fix",
            "title": "The fix",
            "children": []
          },
          {
            "level": 2,
            "id": "the-moral-of-the-story",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-moral-of-the-story",
            "title": "The moral of the story",
            "children": []
          }
        ],
        "word_count": 2642,
        "reading_time": 14,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          }
        ]
      }
    ],
    "page_count": 2
  },
  {
    "name": "kub360",
    "slug": "kub360",
    "path": "/categories/kub360/",
    "permalink": "https://tech.fpcomplete.com/categories/kub360/",
    "pages": [
      {
        "relative_path": "blog/continuous-integration-delivery-best-practices.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/continuous-integration-delivery-best-practices/",
        "slug": "continuous-integration-delivery-best-practices",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Best practices when implementing continuous integration and delivery",
        "description": "Although, there are countless reasons to ditch the old ways of development and adopt DevOps practices, the change from one to the another can be an intimidating task. Use these best practices to ensure your company succeeds during these transitions. ",
        "updated": null,
        "date": "2018-04-11T12:49:00Z",
        "year": 2018,
        "month": 4,
        "day": 11,
        "taxonomies": {
          "categories": [
            "devops",
            "kub360"
          ],
          "tags": [
            "devops"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Deni Bertovic",
          "html": "hubspot-blogs/continuous-integration-delivery-best-practices.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/continuous-integration-delivery-best-practices/",
        "components": [
          "blog",
          "continuous-integration-delivery-best-practices"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 1
  },
  {
    "name": "kube360",
    "slug": "kube360",
    "path": "/categories/kube360/",
    "permalink": "https://tech.fpcomplete.com/categories/kube360/",
    "pages": [
      {
        "relative_path": "blog/istio-mtls-debugging-story.md",
        "colocated_path": null,
        "content": "<p>Last week, our team was working on a feature enhancement to <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>. We work with clients in regulated industries, and one of the requirements was fully encrypted traffic throughout the cluster. While we've supported Istio's mutual TLS (mTLS) as an optional feature for end-user applications, not all of our built-in services were using mTLS strict mode. We were working on rolling out that support.</p>\n<p>One of the cornerstones of Kube360 is our centralized authentication system, which is primarily supplied by a service (called <code>k3dash</code>) that receives incoming traffic, performs authentication against an external identity provider (such as Okta, Azure AD, or others), and then provides those credentials to the other services within the clusters, such as the Kubernetes Dashboard or Grafana. This service in particular was giving some trouble.</p>\n<p>Before diving into the bugs and the debugging journey, however, let's review both Istio's mTLS support and relevant details of how <code>k3dash</code> operates.</p>\n<p><em>Interested in solving these kinds of problems? We're looking for experienced DevOps engineers to join our global team. We're hiring globally, and particularly looking for another US lead engineer. If you're interesting, <a href=\"mailto:[email protected]\">send your CV to [email protected]</a>.</em></p>\n<h2 id=\"what-is-mtls\">What is mTLS?</h2>\n<p>In a typical Kubernetes setup, encrypted traffic comes into the cluster and hits a load balancer. That load balancer terminates the TLS connection, resulting in the decrypted traffic. That decrypted traffic is then sent to the relevant service within the cluster. Since traffic within the cluster is typically considered safe, for many use cases this is an acceptable approach.</p>\n<p>But for some use cases, such as handling Personally Identifiable Information (PII), extra safeguards may be desired or required. 
In those cases, we would like to ensure that <em>all</em> network traffic, even traffic inside the same cluster, is encrypted. That gives extra guarantees against both snooping (reading data in transit) and spoofing (faking the source of data) attacks. This can help mitigate the impact of other flaws in the system.</p>\n<p>Implementing this complete data-in-transit encryption system manually requires a major overhaul to essentially every application in the cluster. You'll need to teach all of them to terminate their own TLS connections, issue certificates for all applications, and add a new Certificate Authority for all applications to respect.</p>\n<p>Istio's mTLS handles this outside of the application. It installs a sidecar that communicates with your application over a localhost connection, bypassing exposed network traffic. It uses sophisticated port forwarding rules (via IP tables) to redirect incoming and outgoing traffic to and from the pod to go via the sidecar. And the Envoy sidecar in the proxy handles all the logic of obtaining TLS certificates, refreshing keys, termination, etc.</p>\n<p>The way Istio handles all of this is pretty incredible. When it works, it works great. And when it fails, it can be disastrously difficult to debug. Which is what happened here (though thankfully it took less than a day to get to a conclusion). In the realm of <em>epic foreshadowment</em>, let me point out three specific points about Istio's mTLS worth mentioning.</p>\n<ul>\n<li>In strict mode, which is what we're going for, the Envoy sidecar will reject any incoming plaintext communication.</li>\n<li>Something I hadn't recognized at first, but now have fully internalized: normally, if you make an HTTP connection to a host that doesn't exist, you'll get a failed connection error. You definitely <em>won't</em> get an HTTP response. With Istio, however, you'll <em>always</em> make a successful outgoing HTTP connection, since your connection is going to Envoy itself. 
If the Envoy proxy cannot make the connection, it will return an HTTP 503 response with an error message in the body, like most proxies.</li>\n<li>The Envoy proxy has special handling for some protocols. Most importantly, if you make a plaintext HTTP outgoing connection, the Envoy proxy has sophisticated abilities to parse the outgoing request, understand details about various headers, and do intelligent routing.</li>\n</ul>\n<p>OK, that's mTLS. Let's talk about the other player here: <code>k3dash</code>.</p>\n<h2 id=\"k3dash-and-reverse-proxying\"><code>k3dash</code> and reverse proxying</h2>\n<p>The primary method <code>k3dash</code> uses to provide authentication credentials to other services inside the cluster is HTTP reverse proxying. This is a common technique, and common libraries exist for doing it. In fact, <a href=\"https://www.stackage.org/package/http-reverse-proxy\">I wrote one such library</a> years ago. We've already mentioned a common use case of reverse proxying: load balancing. In a reverse proxy situation, incoming traffic is received by one server, which analyzes the incoming request, performs some transformations, and then chooses a destination service to forward the request to.</p>\n<p>One of the most important aspects of reverse proxying is header management. There are a few different things you can do at the header level, such as:</p>\n<ul>\n<li>Remove hop-by-hop headers, such as <code>transfer-encoding</code>, which apply to a single hop and not the end-to-end communication between client and server.</li>\n<li>Inject new headers. For example, in <code>k3dash</code>, we regularly inject headers recognized by the final services for authentication purposes.</li>\n<li>Leave headers completely untouched. 
This is often the case with headers like <code>content-type</code>, where we typically want the client and final server to exchange data without any interference.</li>\n</ul>\n<p>As one <em>epic foreshadowment</em> example, consider the <code>Host</code> header in a typical reverse proxy situation. I may have a single load balancer handling traffic for a dozen different domain names, including domain names <code>A</code> and <code>B</code>. And perhaps I have a single service behind the reverse proxy serving the traffic for both of those domain names. I need to make sure that my load balancer forwards on the <code>Host</code> header to the final service, so it can decide how to respond to the request.</p>\n<p><code>k3dash</code> in fact uses the library linked above for its implementation, and is following fairly standard header forwarding rules, plus making some specific modifications within the application.</p>\n<p>I think that's enough backstory, and perhaps you're already beginning to piece together what went wrong based on my clues above. Anyway, let's dive in!</p>\n<h2 id=\"the-problem\">The problem</h2>\n<p>One of my coworkers, Sibi, got started on the Istio mTLS strict mode migration. He got strict mode turned on in a test cluster, and then began to figure out what was broken. I don't know all the preliminary changes he made. But when he reached out to me, he'd gotten us to a point where the Kubernetes load balancer was successfully receiving the incoming requests for <code>k3dash</code> and forwarding them along to <code>k3dash</code>. <code>k3dash</code> was able to log the user in and provide its own UI display. All good so far.</p>\n<p>However, following through from the main UI to the Kubernetes Dashboard would fail, and we'd end up with this error message in the browser:</p>\n<blockquote>\n<p>upstream connect error or disconnect/reset before headers. 
reset reason: connection failure</p>\n</blockquote>\n<p>Sibi believed this to be a problem with the <code>k3dash</code> codebase itself and asked me to step in to help debug.</p>\n<h2 id=\"the-wrong-rabbit-hole-and-incredible-laziness\">The wrong rabbit hole, and incredible laziness</h2>\n<p>This whole section is just a cathartic gripe session on how I foot-gunned myself. I'm entirely to blame for my own pain, as we're about to see.</p>\n<p>It seemed pretty clear that the outgoing connection from the <code>k3dash</code> pod to the <code>kubernetes-dashboard</code> pod was failing. (And this turned out to be a safe guess.) The first thing I wanted to do was make a simpler repro, which in this case involved <code>kubectl exec</code>ing into the <code>k3dash</code> container and <code>curl</code>ing to the in-cluster service endpoint. Essentially:</p>\n<pre><code>$ curl -ivvv http:&#x2F;&#x2F;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local&#x2F;\n*   Trying 172.20.165.228...\n* TCP_NODELAY set\n* Connected to kube360-kubernetes-dashboard.kube360-system.svc.cluster.local (172.20.165.228) port 80 (#0)\n&gt; GET &#x2F; HTTP&#x2F;1.1\n&gt; Host: kube360-kubernetes-dashboard.kube360-system.svc.cluster.local\n&gt; User-Agent: curl&#x2F;7.58.0\n&gt; Accept: *&#x2F;*\n&gt;\n&lt; HTTP&#x2F;1.1 503 Service Unavailable\nHTTP&#x2F;1.1 503 Service Unavailable\n&lt; content-length: 84\ncontent-length: 84\n&lt; content-type: text&#x2F;plain\ncontent-type: text&#x2F;plain\n&lt; date: Wed, 14 Jul 2021 15:29:04 GMT\ndate: Wed, 14 Jul 2021 15:29:04 GMT\n&lt; server: envoy\nserver: envoy\n&lt;\n* Connection #0 to host kube360-kubernetes-dashboard.kube360-system.svc.cluster.local left intact\nupstream connect error or disconnect&#x2F;reset before headers. reset reason: local reset\n</code></pre>\n<p>This reproed the problem right away. Great! 
I was now completely convinced that the problem was not <code>k3dash</code> specific, since neither <code>curl</code> nor <code>k3dash</code> could make the connection, and they both gave the same <code>upstream connect error</code> message. I could think of a few different reasons for this to happen, none of which were correct:</p>\n<ul>\n<li>The outgoing packets from the container were not being sent to the Envoy proxy. I strongly believed this one for a while. But if I'd thought a bit harder, I would have realized that this was completely impossible. That <code>upstream connect error</code> message was of course coming from the Envoy proxy itself! If we were having a normal connection failure, we would have received the error message at the TCP level, not as an HTTP 503 response code. Next!</li>\n<li>The Envoy sidecar was receiving the packets, but the mesh was confused enough that it couldn't figure out how to connect to the destination Envoy sidecar. This turned out to be partially right, but not in the way I thought.</li>\n</ul>\n<p>I futzed around with lots of different attempts here but was essentially stalled. Until Sibi noticed something fascinating. It turns out that the following, seemingly nonsensical command <em>did</em> work:</p>\n<pre><code>curl http:&#x2F;&#x2F;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local:443&#x2F;\n</code></pre>\n<p>For some reason, making an <em>insecure</em> HTTP request over 443, the <em>secure</em> HTTPS port, worked. This made no sense, of course. Why would using the wrong port fix everything? And this is where incredible laziness comes into play. You see, Kubernetes Dashboard's default configuration uses TLS, and requires all of that setup I mentioned above about passing around certificates and updating accepted Certificate Authorities. But you can turn off that requirement, and make it listen on plain text. 
Since (1) this was intracluster communication, and (2) we've always had strict mTLS on our roadmap, we decided to simply turn off TLS in the Kubernetes Dashboard. However, when doing so, I forgot to switch the port number from 443 to 80.</p>\n<p>Not to worry though! I <em>did</em> remember to correctly configure <code>k3dash</code> to communicate with Kubernetes Dashboard, using insecure HTTP, over port 443. Since both parties agreed on the port, it didn't matter that it was the wrong port.</p>\n<p>But this was all very frustrating. It meant that the &quot;repro&quot; wasn't a repro at all. <code>curl</code>ing on the wrong port was giving the same error message, but for a different reason. In the meantime, we went ahead and changed Kubernetes Dashboard to listen on port 80 and <code>k3dash</code> to connect on port 80. We thought there <em>may</em> be a possibility that the Envoy proxy was giving some special treatment to the port number, which in retrospect doesn't really make much sense. In any event, this ended in a situation where our &quot;repro&quot; wasn't a repro at all.</p>\n<h2 id=\"the-bug-is-in-k3dash\">The bug is in <code>k3dash</code></h2>\n<p>Now it was clear that Sibi was right. <code>curl</code> could connect, <code>k3dash</code> couldn't. The bug <em>must</em> be inside <code>k3dash</code>. But I couldn't figure out how. Being the author of essentially all the HTTP libraries involved in this toolchain, I began to worry that my HTTP client library itself may somehow be the source of the bug. I went down a rabbit hole there too, putting together some minimal sample programs outside <code>k3dash</code>. I <code>kubectl cp</code>ed them over and then ran them... and everything worked fine. Phew, my libraries were working, but not <code>k3dash</code>.</p>\n<p>Then I did the thing I should have done at the very beginning. I looked at the logs very, very carefully. Remember, <code>k3dash</code> is doing a reverse proxy. 
So, it receives an incoming request, modifies it, makes the new request, and then sends a modified response back. The logs included the modified outgoing HTTP request (some fields modified to remove private information):</p>\n<pre><code>2021-07-15 05:20:39.820662778 UTC ServiceRequest Request {\n  host                 = &quot;kube360-kubernetes-dashboard.kube360-system.svc.cluster.local&quot;\n  port                 = 80\n  secure               = False\n  requestHeaders       = [(&quot;X-Real-IP&quot;,&quot;127.0.0.1&quot;),(&quot;host&quot;,&quot;test-kube360-hostname.hidden&quot;),(&quot;upgrade-insecure-requests&quot;,&quot;1&quot;),(&quot;user-agent&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;accept&quot;,&quot;text&#x2F;html,application&#x2F;xhtml+xml,application&#x2F;xml;q=0.9,image&#x2F;avif,image&#x2F;webp,image&#x2F;apng,*&#x2F;*;q=0.8,application&#x2F;signed-exchange;v=b3;q=0.9&quot;),(&quot;sec-gpc&quot;,&quot;1&quot;),(&quot;referer&quot;,&quot;http:&#x2F;&#x2F;test-kube360-hostname.hidden&#x2F;dash&quot;),(&quot;accept-language&quot;,&quot;en-US,en;q=0.9&quot;),(&quot;cookie&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;x-forwarded-for&quot;,&quot;192.168.0.1&quot;),(&quot;x-forwarded-proto&quot;,&quot;http&quot;),(&quot;x-request-id&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;x-envoy-attempt-count&quot;,&quot;3&quot;),(&quot;x-envoy-internal&quot;,&quot;true&quot;),(&quot;x-forwarded-client-cert&quot;,&quot;&lt;REDACTED&gt;&quot;),(&quot;Authorization&quot;,&quot;&lt;REDACTED&gt;&quot;)]\n  path                 = &quot;&#x2F;&quot;\n  queryString          = &quot;&quot;\n  method               = &quot;GET&quot;\n  proxy                = Nothing\n  rawBody              = False\n  redirectCount        = 0\n  responseTimeout      = ResponseTimeoutNone\n  requestVersion       = HTTP&#x2F;1.1\n}\n</code></pre>\n<p>I tried to leave in enough content here to give you the same overwhelmed sense that I had looking at it. 
Keep in mind the <code>requestHeaders</code> field is in practice about three times as long. Anyway, with the slimmed down headers, and all my hints throughout, see if you can guess what the problem is.</p>\n<p>Ready? It's the <code>Host</code> header! Let's take a quote from the <a href=\"https://istio.io/latest/docs/ops/configuration/traffic-management/traffic-routing/\">Istio traffic routing documentation</a>. Regarding HTTP traffic, it says:</p>\n<blockquote>\n<p>Requests are routed based on the port and <em><code>Host</code></em> header, rather than port and IP. This means the destination IP address is effectively ignored. For example, <code>curl 8.8.8.8 -H &quot;Host: productpage.default.svc.cluster.local&quot;</code>, would be routed to the <code>productpage</code> Service.</p>\n</blockquote>\n<p>See the problem? <code>k3dash</code> is behaving like a standard reverse proxy, and including the <code>Host</code> header, which is almost always the right thing to do. But not here! In this case, that <code>Host</code> header we're forwarding is confusing Envoy. Envoy is trying to connect to something (<code>test-kube360-hostname.hidden</code>) that doesn't respond to its mTLS connections. That's why we get the <code>upstream connect error</code>. And that's why we got the same response as when we used the wrong port number, since Envoy is configured to only receive incoming traffic on a port that the service is actually listening to.</p>\n<h2 id=\"the-fix\">The fix</h2>\n<p>After all of that, the fix is rather anticlimactic:</p>\n<pre data-lang=\"diff\" class=\"language-diff \"><code class=\"language-diff\" data-lang=\"diff\">-(\\(h, _) -&gt; not (Set.member h _serviceStripHeaders))\n+-- Strip out host headers, since they confuse the Envoy proxy\n+(\\(h, _) -&gt; not (Set.member h _serviceStripHeaders) &amp;&amp; h &#x2F;= &quot;Host&quot;)\n</code></pre>\n<p>We already had logic in <code>k3dash</code> to strip away specific headers for each service. 
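To make the fix concrete outside of Haskell, here is a hedged Python sketch of the same header-filtering idea. The function and variable names are invented, not taken from <code>k3dash</code>:

```python
# Hypothetical Python sketch of the header filtering done in k3dash
# (the real implementation is Haskell; names here are invented).
# HTTP header names compare case-insensitively, so normalize before testing.

def forwardable_headers(headers, strip_headers=frozenset()):
    # Drop the Host header -- Envoy routes on it, so forwarding the
    # external hostname misdirects the request -- plus any per-service
    # strip list, mirroring the one-line diff above.
    blocked = {h.lower() for h in strip_headers} | {'host'}
    return [(name, value) for name, value in headers if name.lower() not in blocked]
```

A proxy would apply this to the incoming request's header list before building the outgoing request; everything else about the request is forwarded unchanged.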
And it turns out this logic was primarily used to strip out the <code>Host</code> header for services that got confused when they saw it! Now we just need to strip away the <code>Host</code> header for all the services instead. Fortunately, none of our services perform any logic based on the <code>Host</code> header, so with that in place, we should be good. We deployed the new version of <code>k3dash</code>, and voilà! Everything worked.</p>\n<h2 id=\"the-moral-of-the-story\">The moral of the story</h2>\n<p>I walked away from this adventure with a much better understanding of how Istio interacts with applications, which is great. I got a great reminder to look more carefully at log messages before hardening my assumptions about the source of a bug. And I got a great kick in the pants for being lazy about port number fixes.</p>\n<p>All in all, it was about six hours of debugging fun. And to sum it up with a great Hebrew phrase, &quot;היה טוב, וטוב שהיה&quot; (it was good, and good that it <em>was</em> (in the past)).</p>\n<hr />\n<p>As I mentioned above, we're actively looking for new DevOps candidates, especially US-based candidates. 
If you're interested in working with a global team of experienced DevOps, Rust, and Haskell engineers, consider <a href=\"mailto:[email protected]\">sending us your CV</a>.</p>\n<p>And if you're looking for a solid Kubernetes platform, batteries included, so you can offload this kind of tedious debugging to some other unfortunate souls (read: us), <a href=\"https://tech.fpcomplete.com/products/kube360/\">check out Kube360</a>.</p>\n<p>If you liked this article, you may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-kubernetes-windows/\">Deploying Rust with Windows Containers on Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/\">Cloud Vendor Neutrality</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">Secure defaults with Kubernetes Security with Kube360</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
        "slug": "istio-mtls-debugging-story",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "An Istio/mutual TLS debugging story",
        "description": "While rolling out Istio's strict mTLS mode in our Kube360 product, we ran into an interesting corner case problem.",
        "updated": null,
        "date": "2021-07-20",
        "year": 2021,
        "month": 7,
        "day": 20,
        "taxonomies": {
          "tags": [
            "kubernetes",
            "regulated"
          ],
          "categories": [
            "devops",
            "kube360",
            "it-compliance"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/devops.png",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/istio-mtls-debugging-story.png"
        },
        "path": "/blog/istio-mtls-debugging-story/",
        "components": [
          "blog",
          "istio-mtls-debugging-story"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "what-is-mtls",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#what-is-mtls",
            "title": "What is mTLS?",
            "children": []
          },
          {
            "level": 2,
            "id": "k3dash-and-reverse-proxying",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#k3dash-and-reverse-proxying",
            "title": "k3dash and reverse proxying",
            "children": []
          },
          {
            "level": 2,
            "id": "the-problem",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-problem",
            "title": "The problem",
            "children": []
          },
          {
            "level": 2,
            "id": "the-wrong-rabbit-hole-and-incredible-laziness",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-wrong-rabbit-hole-and-incredible-laziness",
            "title": "The wrong rabbit hole, and incredible laziness",
            "children": []
          },
          {
            "level": 2,
            "id": "the-bug-is-in-k3dash",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-bug-is-in-k3dash",
            "title": "The bug is in k3dash",
            "children": []
          },
          {
            "level": 2,
            "id": "the-fix",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-fix",
            "title": "The fix",
            "children": []
          },
          {
            "level": 2,
            "id": "the-moral-of-the-story",
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-moral-of-the-story",
            "title": "The moral of the story",
            "children": []
          }
        ],
        "word_count": 2642,
        "reading_time": 14,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          }
        ]
      },
      {
        "relative_path": "blog/kube360s-kubernetes-security-focus.md",
        "colocated_path": null,
        "content": "<p>Security is a multifaceted concept with broad-reaching applications\nacross multiple domains. As a complete server deployment solution,\nKube360 touches on many different pieces of the security puzzle. FP\nComplete has years of experience helping our clients boost Kubernetes\nsecurity with Kubernetes in particular and cloud deployments in general.\nWe’ve accumulated many of our learned best practices and included them\nas sensible and secure defaults in\n<a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>.</p>\n<p>This article analyzes a few different security domains and how Kube360\nprovides a security focus out of the box.</p>\n<h2 id=\"federated-authentication\">Federated authentication</h2>\n<p>One of the primary Kubernetes security weaknesses we regularly see in\nKubernetes installations is user authentication. The most common\napproach to connecting to a cluster seems to be:</p>\n<ul>\n<li>\n<p>Create a single service account with full permissions on the cluster</p>\n</li>\n<li>\n<p>Generate a kubeconfig file for that service account</p>\n</li>\n<li>\n<p>Share that file with all developers and operators</p>\n</li>\n<li>\n<p>Paste that kubeconfig file into secrets configuration for CI scripts</p>\n</li>\n</ul>\n<p>The default configuration of Kubernetes, unfortunately, makes this kind\nof setup enticingly inviting. 
In exchange for simplicity, teams are\ntrapped with many weaknesses:</p>\n<ul>\n<li>\n<p>This kubeconfig file becomes a single point of failure for\ncompromising the entire cluster</p>\n</li>\n<li>\n<p>Migrating clusters is a painful and laborious experience</p>\n</li>\n<li>\n<p>It is all too easy to grant too many permissions to a user or CI\njob, leading to accidental breakage</p>\n</li>\n<li>\n<p>Offboarding a staff member is difficult and dangerous</p>\n</li>\n</ul>\n<p>In Kube360, we’ve deployed federated authentication out of the box.\nKube360 is configured to work with your originating directory service by\ndefault. Virtually all popular platforms—including Microsoft 365,\nGoogle, and Okta—are supported. Instead of maintaining yet another set\nof credentials, users can reuse their existing accounts, complete with\ntwo-factor authentication and other deterrents to intrusion.</p>\n<p><img src=\"/images/blog/kube360-security/cli-access.png\" alt=\"Command line access\" /></p>\n<p>Credentials in Kube360 are also short-lived. Instead of receiving a\npermanent kubeconfig file, Kube360 ships with web-based and command-line\ntooling for issuing and updating temporary credentials. This follows\nKubernetes security best practices and minimizes the impact of security\nbreaches. When combined with the central directory, offboarded staff\nwill quickly and automatically lose access to the cluster.</p>\n<p>And finally, by leveraging existing authentication systems, we believe\nKube360 can <strong>democratize</strong> operations. The deployment systems and\ndashboards are often a complete black box to anyone outside of the\nengineering organization. 
With federated login, every team member—from\nCEO to sysadmin—can easily access the cluster and gain insight.</p>\n<p><img src=\"/images/blog/kube360-security/monitoring.png\" alt=\"Monitoring\" /></p>\n<h3 id=\"role-based-authorization\">Role-based authorization</h3>\n<p>The final piece of the puzzle is role-based authorization. Kube360 ties\ntogether directory service groups, such as Microsoft 365 security\ngroups, with Kubernetes’s RBAC system. By defining a group to role\nmapping and providing a limited set of permissions for each role, users\ncan be granted access to only the systems they need. View-only\npermissions, single-namespace permissions, and more can be granted to\nvarious teams. And our customized dashboard views allow for an\nintegrated experience for every member of the team.</p>\n<h2 id=\"namespacing-and-isolation\">Namespacing and isolation</h2>\n<p>Kube360 encourages multitenancy within an organization. Instead of\nseparating out various applications into separate clusters, we encourage\nhosting multiple applications within a single cluster. We also recommend\nhosting the development, staging, QA, and production environments within\nthe same environment. This reduces hardware costs, simplifies\noperations, and provides a unified insight into what the organization is\nrunning.</p>\n<p>This kind of multitenancy does introduce security concerns. Done poorly,\nmultitenancy can allow one application to accidentally intrude upon\nanother via network traffic or by leveraging shared Kubernetes\nresources, especially secrets. This can elevate a security intrusion\ninto one application to a cluster-wide compromise.</p>\n<p>Kube360 encourages the isolation of applications by default. We\nrecommend and provide tooling and documentation around a\nnamespace-driven isolation strategy. 
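The group-to-role mapping and namespace-scoped permissions described above can be pictured with a small sketch. The group and role names below are invented for illustration, not Kube360's actual configuration:

```python
# Invented illustration of a directory-group to Kubernetes-role mapping
# of the kind described above; real Kube360 configuration will differ.

GROUP_TO_ROLES = {
    'eng-platform': {'cluster-admin'},
    'eng-app':      {'edit:app-namespace'},
    'qa':           {'view:app-namespace', 'view:staging-namespace'},
    'exec':         {'view-all'},
}

def roles_for(groups):
    # A user's effective roles are the union over their directory groups;
    # unknown groups grant nothing.
    roles = set()
    for group in groups:
        roles |= GROUP_TO_ROLES.get(group, set())
    return roles
```

Offboarding then reduces to removing the user from the directory groups: with no matching groups, the user holds no roles, and the short-lived credentials stop being issued.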
This allows segmenting the\nnetwork traffic between applications, restricting user access via RBAC\nrules, and minimizing the advertisement of secret data across the cluster.</p>\n<p>Combined with our central application visualization tooling via ArgoCD,\nnamespacing is simple, natural, easy to work with, and a sensible\ndefault.</p>\n<p><img src=\"/images/blog/kube360-security/argocd.png\" alt=\"ArgoCD\" /></p>\n<h2 id=\"encryption-in-transit\">Encryption in transit</h2>\n<p>Increasingly, organizations and regulators are insisting on encrypting\nall network traffic. Load balancers with TLS termination solve the\nexternal traffic issue with minimal application impact. Kube360 provides\nboth external-DNS and cert-manager to automate the acquisition and usage\nof TLS certificates, making it trivial to ensure all incoming\nconnections are properly encrypted.</p>\n<p>But intra-cluster traffic encryption is typically an invasive,\ntime-consuming, and error-prone activity. To address this, Kube360 ships with\nIstio service mesh out of the box. By default, every pod launched into a\nKube360 cluster will have an Istio sidecar responsible for encrypting\nnetwork traffic. Not only will incoming traffic from the outside world\nbe encrypted when hitting the cluster, but traffic from the load\nbalancer to the pods and in between pods in a microservices architecture\nwill all be encrypted.</p>\n<p><img src=\"/images/blog/kube360-security/istio.png\" alt=\"Istio\" /></p>\n<p>Encryption is handled via mutual TLS (mTLS). By default, traffic in\ndifferent namespaces leverages separate keys for additional isolation.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>Modern organizations are under enormous pressure to ship features and\nanswer user demands. The unfortunate reality is that security often\ntakes a back seat. Modern DevOps tooling and cloud services can be a\nboon to productivity. 
But insecure defaults can lead to weaknesses.</p>\n<p>We’ve given you a small taste of what we’ve done in Kube360 to\nstrengthen tooling by default. Our tooling works hand in hand with your\nengineers to create a multifaceted, multilayered approach to security,\nwhich will protect your services, your customers, and your organization.</p>\n<p>If you’d like to learn about other security features within Kube360,\nplease <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us to set up a\nconsultation</a>\nwith our engineering team to explore how Kube360 can help you.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/",
        "slug": "kube360s-kubernetes-security-focus",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Secure defaults with Kubernetes Security with Kube360",
        "description": "Security is a multifaceted concept with broad-reaching applications. Boost Kubernetes Security with Kube360",
        "updated": null,
        "date": "2021-01-20",
        "year": 2021,
        "month": 1,
        "day": 20,
        "taxonomies": {
          "tags": [
            "devops",
            "insights",
            "kube360"
          ],
          "categories": [
            "kube360"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Staff",
          "keywords": "kubernetes security",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/kube360s-kubernetes-security-focus/",
        "components": [
          "blog",
          "kube360s-kubernetes-security-focus"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "federated-authentication",
            "permalink": "https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/#federated-authentication",
            "title": "Federated authentication",
            "children": [
              {
                "level": 3,
                "id": "role-based-authorization",
                "permalink": "https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/#role-based-authorization",
                "title": "Role-based authorization",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "namespacing-and-isolation",
            "permalink": "https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/#namespacing-and-isolation",
            "title": "Namespacing and isolation",
            "children": []
          },
          {
            "level": 2,
            "id": "encryption-in-transit",
            "permalink": "https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/#encryption-in-transit",
            "title": "Encryption in transit",
            "children": []
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 957,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
            "title": "Canary Deployment with Kubernetes and Istio"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
            "title": "An Istio/mutual TLS debugging story"
          },
          {
            "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/",
            "title": "Meet the Measurements of Effective IT with Kube360"
          }
        ]
      },
      {
        "relative_path": "blog/ci-cd-in-kube360.md",
        "colocated_path": null,
        "content": "<p>Continuous Integration and Continuous Delivery are two concepts that\ncome up frequently in modern DevOps. At a high level, <strong>continuous\nintegration</strong> (CI) describes the processes and tooling used to ensure\nthat software changes can be tested and built in an automated fashion.\n<strong>Continuous delivery</strong> (CD) adds to that concept by providing the setup\nneeded for newly built software releases to be deployed to production or\ndevelopment environments.</p>\n<p>While logically, CI and CD are distinct processes, practically, they are\noften combined into a single step. Many organizations will use the same\ntool—such as Jenkins, GitHub Actions, Gitlab CI, or many others—to\nbuild, test, and deploy the new version of their code.</p>\n<h2 id=\"limitations-of-combined-ci-cd\">Limitations of combined CI/CD</h2>\n<p>CI/CD is certainly a viable approach, and it's one we have used many\ntimes in the past at FP Complete. However, we're also aware of some its\nlimitations. Let's explore some of the weaknesses of this approach.</p>\n<h3 id=\"permissions-model\">Permissions model</h3>\n<p>A combined CI/CD pipeline will end up needing several different\npermissions, including:</p>\n<ul>\n<li>\n<p>Read access to the source code repository</p>\n</li>\n<li>\n<p>Push access to the binary artifact repository, e.g., the Docker\nregistry</p>\n</li>\n<li>\n<p>Update access to the production hosting, e.g., the Kubernetes\ncluster</p>\n</li>\n</ul>\n<p>These permissions are typically passed in via secret variables to the CI\nsystem. By combining CI/CD into a single process, we essentially require\nthat any users with maintainer access to the job get access to all these\nabilities. 
Furthermore, if a nefarious (or faulty) code change that leaks secret\nvariables from CI is merged into the source code, production\ncluster tokens may be revealed.</p>\n<p>Following the principle of least privilege, reducing a CI system's\naccess is preferable.</p>\n<h3 id=\"multi-cluster-deployments\">Multi-cluster deployments</h3>\n<p>Baking CD into the primary CI job of an application means that <em>any</em>\ncluster that wants to run that application needs to modify the\noriginating CI job. In many cases, this isn't a blocker. There's a\nsource repository with the application code, and the application runs on\njust one cluster in one production environment. Hard-coding information\ninto the primary CI job about that one cluster and environment feels\nnatural.</p>\n<p>But not all software works like this. Providing CI, QA, staging, and\nproduction environments is one common obstacle. Does the CI job update\nall these environments on each commit? Is there a unique mapping from\ndifferent branches to different environments?</p>\n<p>Multi-cloud or multi-region deployments take this concept further. If\nyour application needs to be geolocated, configuring the originating CI\njob to update all the different clouds and regions can be a burden.</p>\n<h3 id=\"upstream-tools\">Upstream tools</h3>\n<p>Continuing this theme is the presence of upstream software. If you're\ndeploying an open-source application or vendor-provided software, you\nlikely have no direct control of their CI job. And modifying that job to\ntrigger a deployment within your cluster is likely not an option — not\nto mention a security nightmare.</p>\n<h3 id=\"cycling-cluster-credentials\">Cycling cluster credentials</h3>\n<p>Tying the story together is cluster credentials. As part of normal cluster\nmaintenance, you will likely rotate credentials for cluster access. This\nis a good security practice in general and may be absolutely required\nwhen performing certain kinds of updates. 
When you have dozens of\napplications on several different CI systems, each with its own set of\ncluster credentials hardcoded into CI secret variables, such rotations\nbecome onerous.</p>\n<h2 id=\"minimize-ci\">Minimize CI</h2>\n<p>Instead, with\n<a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>, we decided\nto go the route of minimizing the role of Continuous Integration to:</p>\n<ul>\n<li>\n<p>Build the software</p>\n</li>\n<li>\n<p>Test the software</p>\n</li>\n<li>\n<p>Produce a binary artifact (a Docker image in the case of Kubernetes\ndeployments)</p>\n</li>\n<li>\n<p>Publish that binary artifact (in our case to a Docker registry)</p>\n</li>\n</ul>\n<p>This addresses the four concerns above:</p>\n<ul>\n<li>\n<p><strong>Permissions model:</strong> We've reduced the CI system's permissions to\nthings that do not directly impact what is running on the production\nsystem. The CI system now can read source code and write to the\nDocker registry. This latter step can cause issues in production by\noverwriting an existing tag with a new image. However, if you deploy\nbased on SHA256 hashes, you're protected even against this. The CI\nsystem can no longer directly modify what is running in production.</p>\n</li>\n<li>\n<p><strong>Multi-cluster deployments:</strong> The CI system knows nothing about\n<em>any</em> clusters. The responsibility to notify the clusters of an\navailable update is handled separately, as described below.</p>\n</li>\n<li>\n<p><strong>Upstream tools:</strong> We are now modeling our custom-written\nproprietary software in the same way we would deploy upstream tools.\nA vendor or open-source provider will write software, build it, and\npublish a release. 
We are now shifting the in-house software model\nto behave the same way: CI publishes a release, and deployment\nresponsibility picks up from there.</p>\n</li>\n<li>\n<p><strong>Cycling cluster credentials:</strong> This step is no longer necessary\nsince the CI system maintains no such credentials.</p>\n</li>\n</ul>\n<p>But this still raises the question: how do we handle <a href=\"https://tech.fpcomplete.com/platformengineering/cicd/\">Continuous\nDeployment</a> in Kube360?</p>\n<h2 id=\"dedicated-in-cluster-cd\">Dedicated in-cluster CD</h2>\n<p>Kube360 ships with ArgoCD out of the box. ArgoCD provides in-cluster\nContinuous Deployment. It relies on a GitOps workflow. Each deployed\napplication tracks a Git repository, which defines the Kubernetes\nmanifest files. These manifest files contain a fully declarative\nstatement of how to deploy an application, including which version of\nthe Docker image to deploy. This provides centralized definitions of\nyour software stack and the auditability and provenance of deployments\nthrough Git history.</p>\n<p>This has some apparent downsides. Deploying new versions of software is\nnow a multi-step process, including updating the source code, waiting\nfor CI, updating the Git repository, and then updating ArgoCD. You must\nnow maintain multiple copies of manifest files for different clusters\nand different environments.</p>\n<p>With careful planning, all of these can be overcome. And as you'll see\nbelow, some of these downsides can be a benefit for some industries.</p>\n<h3 id=\"overlays\">Overlays</h3>\n<p>In our recommended setup, we strongly leverage overlays for creating\nmodifications of application deployments. Instead of duplicating your\nmanifest files, you take a base set of manifests and apply overlays for\ndifferent clusters or different environments. 
This cuts down on\nrepetition and helps ensure that your QA and staging environments are\naccurately testing what you deploy to production.</p>\n<h3 id=\"autosync\">Autosync</h3>\n<p>Each deployment can choose to either enable or disable auto-sync. With\nauto-sync enabled, each push to the manifest file Git repository will\nautomatically update the code running in production. When disabled, the\ncentral dashboard will indicate which applications are fully\nsynchronized and which are lagging the Git repository of manifest files.</p>\n<p><img src=\"/images/blog/ci-cd-in-kube360/autosync.png\" alt=\"Autosync\" /></p>\n<p>Which option you choose depends upon your security and regulatory\nconcerns. Development environments typically enable auto-sync for ease\nof testing. In some cases, auto-sync makes perfect sense for production;\nin others, you may want to strongly limit the permissions model around updating a\nproduction deployment and make the final deploy step the responsibility\nof a quality auditor.</p>\n<h3 id=\"autoupdate\">Autoupdate</h3>\n<p>An advantage of the GitOps workflow is that each update to the\ndeployment is accompanied by a Git commit reflecting the new version of\nthe Docker image. A downside is that this requires an explicit action by\nan engineer to create this commit. In some cases, it's desirable to\nautomatically update the manifest repository each time a new Docker\nimage is available.</p>\n<p>To allow for this, FP Complete has developed a tool for automatic\nmanagement of these Git repositories, providing fast updates without an\noperator's involvement. When paired with auto-sync, this can provide\na fully automated deployment process on each commit to a source\nrepository. 
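The core of such an autoupdate step can be sketched in a few lines. FP Complete's actual tool is separate and proprietary; the names and manifest layout below are invented for illustration:

```python
# Generic sketch of an autoupdate step: rewrite the image reference in
# manifest text to the newly published tag, yielding the content of the
# Git commit that the CD system will sync. Names here are invented.

def bump_image(manifest_text, image, old_tag, new_tag):
    # Replace every reference to image:old_tag with image:new_tag.
    return manifest_text.replace(image + ':' + old_tag, image + ':' + new_tag)
```

Committing the rewritten manifest to the GitOps repository is what drives the deployment; with auto-sync disabled, the change instead shows as out-of-sync in the dashboard until an operator approves it.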
But by keeping this as an optional component, you retain\nfull flexibility to create a pipeline in line with your goals and\nsecurity needs.</p>\n<h3 id=\"permissions-management\">Permissions management</h3>\n<p>Permissions for modifying your cluster always stay within your cluster.\nYou no longer need to distribute Kubernetes tokens to external CI\nsystems. Within the cluster, Kube360 grants ArgoCD access to update\ndeployments. When deploying to a new cluster, no update of Kubernetes\nconfiguration is needed. Once you copy over Docker registry tokens, you\ncan read in the Docker images and deploy directly into the local\ncluster.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>At FP Complete, we believe it is vital to balance developer productivity\nand quality guarantees. One of the best ways to optimize this balance is\nto leverage great tools to improve both productivity and quality\nsimultaneously. With Kube360, we've leveraged best-in-class tooling with\ninnovative best practices to provide a simple, secure, and productive\nbuild and deploy pipeline. To learn more, <a href=\"https://tech.fpcomplete.com/contact-us/\">contact our team</a>!</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/",
        "slug": "ci-cd-in-kube360",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Improve productivity and quality with CI/CD in Kube360",
        "description": "At FP Complete, we believe it is vital to balance developer productivity and quality guarantees. Explore how with CI/CD in Kube360",
        "updated": null,
        "date": "2021-01-13",
        "year": 2021,
        "month": 1,
        "day": 13,
        "taxonomies": {
          "categories": [
            "kube360"
          ],
          "tags": [
            "devops",
            "insights",
            "kube360"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Staff",
          "keywords": "ci/cd",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/ci-cd-in-kube360/",
        "components": [
          "blog",
          "ci-cd-in-kube360"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "limitations-of-combined-ci-cd",
            "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#limitations-of-combined-ci-cd",
            "title": "Limitations of combined CI/CD",
            "children": [
              {
                "level": 3,
                "id": "permissions-model",
                "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#permissions-model",
                "title": "Permissions model",
                "children": []
              },
              {
                "level": 3,
                "id": "multi-cluster-deployments",
                "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#multi-cluster-deployments",
                "title": "Multi-cluster deployments",
                "children": []
              },
              {
                "level": 3,
                "id": "upstream-tools",
                "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#upstream-tools",
                "title": "Upstream tools",
                "children": []
              },
              {
                "level": 3,
                "id": "cycling-cluster-credentials",
                "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#cycling-cluster-credentials",
                "title": "Cycling cluster credentials",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "minimize-ci",
            "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#minimize-ci",
            "title": "Minimize CI",
            "children": []
          },
          {
            "level": 2,
            "id": "dedicated-in-cluster-cd",
            "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#dedicated-in-cluster-cd",
            "title": "Dedicated in-cluster CD",
            "children": [
              {
                "level": 3,
                "id": "overlays",
                "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#overlays",
                "title": "Overlays",
                "children": []
              },
              {
                "level": 3,
                "id": "autosync",
                "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#autosync",
                "title": "Autosync",
                "children": []
              },
              {
                "level": 3,
                "id": "autoupdate",
                "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#autoupdate",
                "title": "Autoupdate",
                "children": []
              },
              {
                "level": 3,
                "id": "permissions-management",
                "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#permissions-management",
                "title": "Permissions management",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "conclusion",
            "permalink": "https://tech.fpcomplete.com/blog/ci-cd-in-kube360/#conclusion",
            "title": "Conclusion",
            "children": []
          }
        ],
        "word_count": 1396,
        "reading_time": 7,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/",
            "title": "Meet the Measurements of Effective IT with Kube360"
          }
        ]
      },
      {
        "relative_path": "blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360.md",
        "colocated_path": null,
        "content": "<h2 id=\"why-fp-complete-chose-kubernetes-for-container-orchestration-in-kube360\">Why FP Complete chose Kubernetes for Container Orchestration in Kube360</h2>\n<p><em>While it is a small competition field, alternatives to Kubernetes for\ncontainer orchestration exist and have their own merits and drawbacks.\nToday, we'll cover the major players and explain why FP Complete\nultimately decided to base\n<a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a> on its namesake,\nKubernetes.</em></p>\n<p>When companies and organizations move towards containerized workflows\nfor their software and applications, various options exist to handle\nthese containers' deployments and day-to-day management. Which option a\ncompany decides to use has an enormous impact on its long-run success.\nThe container orchestration tools currently available take their\ndifferent approaches to how developers can deploy new placements,\nmonitor existing apps, and handle and debug failures.</p>\n<h2 id=\"kubernetes-and-its-advantages\">Kubernetes and its advantages</h2>\n<p>Kubernetes is an open-source container orchestration system initially\ndesigned by Google and now maintained by the Cloud Native Computing\nFoundation. Google, Red Hat, and many other major tech companies\ncurrently develop Kubernetes, along with a large community of\nopen-source contributors.</p>\n<p>Kubernetes provides container scheduling, cluster management, logging\nand monitoring, service discovery, secrets management, and other\nservices that Docker-based containerized applications require.\nKubernetes' container orchestration functionality is comprised of a\ncollection of interoperating services. These services provide a layer of\nabstraction over the underlying servers, storage, and other cloud\ninfrastructure components that the containerized applications need to\noperate. 
To do this, Kubernetes defines a set of primitives, known as\nObjects, that can be used to manage the underlying resources at a high\nlevel. This provides a cloud-provider-agnostic layer\nwhere each container workflow's setup and deployment (and the services\nthey require) are defined in terms of these Kubernetes Objects.</p>\n<p><strong>A significant advantage Kubernetes has over its cloud orchestration\ncompetitors is the ecosystem of specialized services, tools, and other\nDevOps applications explicitly written with Kubernetes in mind. Because\nKubernetes has become the de-facto standard for container orchestration,\nthere is a wide variety of options to choose from for various parts of a\ntypical DevOps deployment.</strong></p>\n<h2 id=\"competitors-to-kubernetes\">Competitors to Kubernetes</h2>\n<h3 id=\"nomad\">Nomad</h3>\n<p>Nomad is currently the most robust alternative to Kubernetes but\nlags significantly behind in adoption and support. Nomad is an\nopen-source container orchestration system developed by HashiCorp and\nmainly targets the same use cases as Kubernetes. Besides adoption, where\nKubernetes and Nomad primarily differ is in their implementation.\nWhereas Kubernetes is a collection of interoperating pieces, Nomad is a\nsingle monolithic binary. In Kubernetes, the service that handles\ncoordination and storage is separate from the API controllers that take\ncare of the state. With Nomad, the resource manager and scheduler are\nwrapped into a single system. Nomad's design focuses on being more of a\npure scheduler. As a result, it leaves out the complexity that can come\nwith the Kubernetes networking model.</p>\n<p>Given Nomad's less widespread adoption and a smaller community, the\nexistential risk with Nomad is that HashiCorp could go under and leave\nNomad unsupported. <strong>With Kubernetes, this risk is minimal since many\nmajor companies have committed to supporting Kubernetes' development and\nadoption</strong>. 
Nomad's smaller base of development support and features\nalso hinders adoption of the overall product.</p>\n<h3 id=\"docker-compose-and-docker-swarm\">Docker Compose and Docker Swarm</h3>\n<p>Docker Compose may seem like an odd choice to include on this list.\nHowever, it targets a similar use case as Kubernetes in a far more\nlimited way. Docker Compose is a tool that allows multi-container\ndeployments to be defined in a single file. Docker Compose can then be\nused to create and start all the services described in this file.</p>\n<p>Docker Compose is typically used for limited development, prototyping,\nand testing use cases but is combined with Docker Swarm for larger\ndeployments. Docker Swarm is another container orchestration tool,\ncomposed of a Swarm manager and individual container nodes. Docker\nSwarm development has recently slowed, and the project and its platform\nwill likely cease to be supported soon.</p>\n<h3 id=\"apache-mesos-with-marathon\">Apache Mesos with Marathon</h3>\n<p>Apache Mesos is a resource manager typically used in tandem with the\nMarathon framework, which provides a container orchestration platform.\nIn this setup, Marathon depends on Mesos to provide resource management.\nMesos with Marathon faces similar issues to Nomad regarding the\ncommunity support and development energy behind it. Additionally, users\nmay find that significant and sometimes simple features they need\nare locked away in Marathon's enterprise version.</p>\n<p><strong>With Kubernetes, there is no risk of features or functionality\nbeing locked into a proprietary enterprise version. 
Once a user has\nadopted Kubernetes and defined their containerized applications in\nKubernetes Objects, they are free to switch between different Kubernetes\ndeployment services.</strong></p>\n<h2 id=\"why-kubernetes-is-the-best-choice-for-kube360\">Why Kubernetes is the best choice for Kube360</h2>\n<p>FP Complete chose Kubernetes because it was the best container\norchestration system on which to build Kube360. Kubernetes provided FP Complete\nthe right material and ingredients to create a competitive product.\nKubernetes stood out as #1 by meeting the dual requirements of being well\nadopted and supported by the DevOps community and, at the same time, not\nbeing beholden to one company and its financial interests. To learn more about\nKube360, visit our <a href=\"https://tech.fpcomplete.com/blog/kube360-overview/\">overview\narticle</a> or <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us for a free demo today</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/",
        "slug": "why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Kubernetes is best for container orchestration",
        "description": "While it's a small competition field, alternatives to Kubernetes for container orchestration exist. We cover the major players and explain how we decided on Kube360",
        "updated": null,
        "date": "2021-01-06",
        "year": 2021,
        "month": 1,
        "day": 6,
        "taxonomies": {
          "categories": [
            "kube360"
          ],
          "tags": [
            "devops",
            "insights",
            "kube360"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Staff",
          "keywords": "container orchestration",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/",
        "components": [
          "blog",
          "why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "why-fp-complete-chose-kubernetes-for-container-orchestration-in-kube360",
            "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/#why-fp-complete-chose-kubernetes-for-container-orchestration-in-kube360",
            "title": "Why FP Complete chose Kubernetes for Container Orchestration in Kube360",
            "children": []
          },
          {
            "level": 2,
            "id": "kubernetes-and-its-advantages",
            "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/#kubernetes-and-its-advantages",
            "title": "Kubernetes and its advantages",
            "children": []
          },
          {
            "level": 2,
            "id": "competitors-to-kubernetes",
            "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/#competitors-to-kubernetes",
            "title": "Competitors to Kubernetes",
            "children": [
              {
                "level": 3,
                "id": "nomad",
                "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/#nomad",
                "title": "Nomad",
                "children": []
              },
              {
                "level": 3,
                "id": "docker-compose-and-docker-swarm",
                "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/#docker-compose-and-docker-swarm",
                "title": "Docker Compose and Docker Swarm",
                "children": []
              },
              {
                "level": 3,
                "id": "apache-mesos-with-marathon",
                "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/#apache-mesos-with-marathon",
                "title": "Apache Mesos with Marathon",
                "children": []
              }
            ]
          },
          {
            "level": 2,
            "id": "why-kubernetes-is-the-best-choice-for-kube360",
            "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/#why-kubernetes-is-the-best-choice-for-kube360",
            "title": "Why Kubernetes is the best choice for Kube360",
            "children": []
          }
        ],
        "word_count": 851,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/",
            "title": "Meet the Measurements of Effective IT with Kube360"
          }
        ]
      },
      {
        "relative_path": "blog/kube360-overview.md",
        "colocated_path": null,
        "content": "<h2 id=\"introduction\">Introduction</h2>\n<p>At the start of a new year, it is traditional to summarize what we have\nlearned over the past year and apply those lessons to the coming year.\nThis past year has been an outlier in so many ways. Hence, the usual\nplatitudes about lessons learned may seem out of place. Nonetheless,\npeople in a leadership position do not have the luxury of ignoring the\ndifficult situation we all face; they have an urgent responsibility to\nmove their organization to safety in these very trying times.</p>\n<p>This article will focus on one specific resource that organizational\nleadership must manage extraordinarily well: information technology.\nEvery year, pundits point out that IT is more important than ever\nbefore. This past year drove home how true this tired cliché really is.\nOrganizational leaders don't need to hear this repeated. They want and\nneed to hear from us technologists the answer to &quot;how do we make this\nmost vital tool truly effective?&quot;</p>\n<p>Before providing an answer, we need to define what <em><strong>effective</strong></em>\n<em><strong>IT</strong></em> means. To do that, let's look at some of the problems IT faced\nduring pandemic 2020:</p>\n<ol>\n<li>\n<p>As most workers moved off-site, remote work added whole new layers\nof inter-worker communication issues that few companies had\nexperienced before. IT was at the forefront of providing solutions\nto these issues at an incredibly rapid pace.</p>\n</li>\n<li>\n<p>Serving customers online moved from being just one more channel to\nthe most important, and often the only channel. Online applications\nbecame front and center for nearly every business and service\norganization, private and public alike.</p>\n</li>\n<li>\n<p>The vast uncertainties in market functioning created by the pandemic\nchallenged enterprise resource planning like never before. 
Getting\ntimely information on every aspect of organizational functioning was\nno longer a desired goal but a necessity to survive.</p>\n</li>\n</ol>\n<p>To meet these challenges, IT programs and projects that, in the past,\nmight have been implemented over many months or even years had to be\ncompleted in weeks or days. This was necessary to ensure all\norganizational operations, both in- and out-facing, continued to\nfunction at all, let alone smoothly.</p>\n<p>Hence, in 2021 <em><strong>effective</strong></em> <em><strong>IT</strong></em> means IT that can build\napplications that are</p>\n<ul>\n<li>\n<p>scalable instantly and reliably,</p>\n</li>\n<li>\n<p>adaptable to rapidly changing requirements,</p>\n</li>\n<li>\n<p>deployable almost instantaneously,</p>\n</li>\n<li>\n<p>resilient against security threats and</p>\n</li>\n<li>\n<p>capable of ensuring customer and organizational privacy.</p>\n</li>\n</ul>\n<p>These measures of effectiveness are not new. What is new is that they\nare no longer a &quot;nice to have&quot; goal but a matter of organizational\nsurvival.</p>\n<p>The good news is that <em>how</em> to achieve this type of effectiveness is\nsomething that technologists have been talking about for the past\ndecade. The rest of this article will show you how to stop talking about\neffective IT in your organization and start doing it now.</p>\n<h2 id=\"kubernetes-21st-century-it-infrastructure\">Kubernetes – 21<sup>st</sup> Century IT Infrastructure</h2>\n<p>Let's start with &quot;scalable instantly and reliably.&quot; By now, most of us\nhave heard the metaphor that our IT applications need to be cattle, not\npets. Suppose we want extremely high reliability and scalability with\nminimum stress on IT resources. In that case, we don't want our IT staff\nfiddling around and wasting time figuring out why an app stopped working\non a particular pet VM. 
We want them to be able to immediately kill the\nnon-functioning &quot;cattle&quot; and redeploy an exact replica to get the\napplication right back up. That is precisely what containerized\napplications allow us to do.</p>\n<p>Remember we said we want minimum stress on IT resources? That means we\ndon't want IT staff redeploying a new container when an old one goes\ndown. We want our systems to be <em>self-healing</em>. In other words, we want\nredeployments of failed applications to happen automatically. Even more\nthan that, we want multiple containers with <em>the exact same application</em>\nto be deployed to handle spikes in demand automatically. Then, we want\nan automatic scale down of that deployment after peak demand passes to\nconserve compute resources. Even better, we want the underlying\ninfrastructure resources themselves to automatically scale up and down\nto handle peaks and valleys of demand.</p>\n<p>Kubernetes is the tool that manages container &quot;cattle&quot; and can do all of\nthe above and more. Despite there being several alternatives out there,\nwhether in the cloud or on-premise, Kubernetes has become the de-facto\nstandard tool for container orchestration. All the major cloud and\nsoftware vendors now fully support Kubernetes, precisely because it is\nnot &quot;owned&quot; by any of them. 
For this reason, along with its rapid\nadoption, there has been enormous growth of ancillary tooling available\nfor Kubernetes.</p>\n<p>In sum, containerizing applications and having Kubernetes orchestrate\nthese containers is now the baseline tool requirement for effective IT.\n<a href=\"https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/\">You can read more about the advantages of Kubernetes\nhere.</a></p>\n<h2 id=\"continuous-deployment\">Continuous Deployment</h2>\n<p>Let's now turn to the next measures of effective IT, &quot;adaptable to\nrapidly changing requirements and deployable almost instantaneously.&quot;\nWe've all heard about the magic of continuous integration/continuous\ndeployment (CI/CD). We are told that CI/CD is a technique developed by\nthe millennial generation of tech giants. It allows organizations to\ncontinuously add new features to applications while remaining confident\nthat the new and/or improved features work as defined without\ninterfering with the rest of the application's functioning.</p>\n<p>While almost always lumped together, the two halves of CI/CD have\ndifferent roles. The role of CI is <em>integration</em>, which means\n<em>integration testing</em> – add some code and make sure everything works\njust as before, except for the new functionality, which also works as\ndescribed. Once CI has confirmed that we have a well-tested version of\nthe application, it automatically packages it up. It stores it with a\nstamp indicating what version of our code it represents. The second role\nis that of CD, which is deployment – getting our packaged application\nout into the world where it can be used.</p>\n<p>Deployment can happen in many ways and many places for different\npurposes. Hence it is useful to separate CI from CD. This allows us to\nuse a CD system tailored to our deployment environment's specific needs,\nwhich, for our current purposes, is Kubernetes. 
There are multiple\noptions, but one of the most widely used CD systems for Kubernetes\nis Argo CD. Best-of-breed\nCI/CD systems are based on the following critical DevOps principles:</p>\n<ul>\n<li>\n<p>Infrastructure as Code</p>\n</li>\n<li>\n<p>Version tracking</p>\n</li>\n<li>\n<p>Automated workflows</p>\n</li>\n<li>\n<p>One source of truth</p>\n</li>\n</ul>\n<p>Argo CD uses Git repositories to implement all these principles, so it\nis known as a GitOps tool. <a href=\"https://tech.fpcomplete.com/blog/ci-cd-in-kube360/\">You can learn much more about Argo CD and\nits many advantages\nhere.</a></p>\n<h2 id=\"security-in-2021\">Security in 2021</h2>\n<p>The last measure of effective IT is not something new. Everyone knows\nsecurity and privacy are critical. What is new are the levels of threat\norganizations face when the whole world is interconnected. The recent\nSolarWinds episode demonstrates how IT is now literally the front line\nfor warring nations.</p>\n<p>Businesses don't have the luxury of disconnecting from the internet —\nbut security and privacy access controls can make using online systems\ndifficult and even unpleasant. The result is that organizational staff\noften take shortcuts to avoid the barriers and save themselves time.\nUnfortunately, these shortcuts then serve as vectors of attack for\nhackers.</p>\n<p>The key lesson is that <strong>making security easy to use is critically\nimportant in making security effective</strong>. FP Complete has created a\nsecurity tool for Kubernetes (and other platforms) that makes security\nmuch easier to implement and use. 
Among its key features:</p>\n<ul>\n<li>\n<p>Leverage existing user directories and credentials</p>\n</li>\n<li>\n<p>Grant everyone in your organization access to the cluster</p>\n</li>\n<li>\n<p>Ensure credentials are all per-user, time-based, and never\ncopy-pasted through screens</p>\n</li>\n<li>\n<p>Carry a single set of credentials across all add-ons provided with\nKube360</p>\n</li>\n<li>\n<p>Provide easy command-line access to the Kubernetes cluster,\nleveraging secure and easy credential acquisition</p>\n</li>\n</ul>\n<p><a href=\"https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">You can learn more about our authentication\ntool here</a>.</p>\n<h2 id=\"kube360-kubernetes-with-batteries-included\">Kube360 – Kubernetes with Batteries Included</h2>\n<p>In our shortlist of measures that define effective IT in 2021, we left\nout all the standard measures that have long been best practice, for\nexample:</p>\n<ul>\n<li>\n<p>Observability – manage, monitor, and check the health of\napplications and infrastructure</p>\n</li>\n<li>\n<p>Auditability – track all interactions with applications and\ninfrastructure for forensic analysis</p>\n</li>\n<li>\n<p>Compliance – ensure infrastructure and applications meet government\nand industry standards</p>\n</li>\n</ul>\n<p>By now, it should be evident that meeting these measures will require\nyou to add even more tools to your Kubernetes cluster. We all know from\nexperience that choosing the right tool and integrating many different\ntools is extremely time-consuming and expensive. We seem to be stuck in\nan unsolvable paradox: on the one hand, we need a multi-headed hydra\ntool to implement effective IT in our organizations; on the other hand,\nbuilding this tool will be too time-consuming and therefore mean we\nwon't be effective the way we need to be.</p>\n<p>Wouldn't it be nice if there was an off-the-shelf tool with all the\n&quot;batteries included&quot;? 
We need a tool that allows us to easily deploy\nKubernetes clusters in the cloud and on premise, which includes and\nintegrates the many tools we need to meet all the measures of effective\nIT we've discussed thus far. Fortunately, there is such a tool. <a href=\"https://tech.fpcomplete.com/products/kube360/\">It's\nFP Complete's Kube360. You can learn more about\nit here</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/",
        "slug": "kube360-overview",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Meet the Measurements of Effective IT with Kube360",
        "description": "Meet All of the Measures of Effective IT with FP Complete's Kube360",
        "updated": null,
        "date": "2020-12-30",
        "year": 2020,
        "month": 12,
        "day": 30,
        "taxonomies": {
          "tags": [
            "devops",
            "insights",
            "kube360"
          ],
          "categories": [
            "kube360"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete Staff",
          "keywords": "effective IT",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/kube360-overview/",
        "components": [
          "blog",
          "kube360-overview"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "introduction",
            "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/#introduction",
            "title": "Introduction",
            "children": []
          },
          {
            "level": 2,
            "id": "kubernetes-21st-century-it-infrastructure",
            "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/#kubernetes-21st-century-it-infrastructure",
            "title": "Kubernetes – 21st Century IT Infrastructure",
            "children": []
          },
          {
            "level": 2,
            "id": "continuous-deployment",
            "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/#continuous-deployment",
            "title": "Continuous Deployment",
            "children": []
          },
          {
            "level": 2,
            "id": "security-in-2021",
            "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/#security-in-2021",
            "title": "Security in 2021",
            "children": []
          },
          {
            "level": 2,
            "id": "kube360-kubernetes-with-batteries-included",
            "permalink": "https://tech.fpcomplete.com/blog/kube360-overview/#kube360-kubernetes-with-batteries-included",
            "title": "Kube360 – Kubernetes with Batteries Included",
            "children": []
          }
        ],
        "word_count": 1568,
        "reading_time": 8,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": [
          {
            "permalink": "https://tech.fpcomplete.com/blog/why-fpcomplete-chose-kubernetes-for-container-orchestration-kube360/",
            "title": "Kubernetes is best for container orchestration"
          }
        ]
      },
      {
        "relative_path": "blog/fintech-best-practices-devops-priorities-for-financial-technology-applications.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/fintech-best-practices-devops-priorities-for-financial-technology-applications/",
        "slug": "fintech-best-practices-devops-priorities-for-financial-technology-applications",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "FinTech best practices: DevOps Priorities for Financial Technology Applications",
        "description": "Modern software development is complicated, but developing software for the FinTech industry adds a whole new dimension of complexity. Adopting modern DevOps principals will ensure your software adheres to FinTech best practices. This blog explains how you can get started and be successful.",
        "updated": null,
        "date": "2018-04-05T12:21:00Z",
        "year": 2018,
        "month": 4,
        "day": 5,
        "taxonomies": {
          "categories": [
            "devops",
            "kube360"
          ],
          "tags": [
            "devops",
            "fintech"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Aaron Contorer",
          "html": "hubspot-blogs/fintech-best-practices-devops-priorities-for-financial-technology-applications.html",
          "blogimage": "/images/blog-listing/devops.png"
        },
        "path": "/blog/fintech-best-practices-devops-priorities-for-financial-technology-applications/",
        "components": [
          "blog",
          "fintech-best-practices-devops-priorities-for-financial-technology-applications"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/nat-gateways-in-amazon-govcloud.md",
        "colocated_path": null,
        "content": "",
        "permalink": "https://tech.fpcomplete.com/blog/nat-gateways-in-amazon-govcloud/",
        "slug": "nat-gateways-in-amazon-govcloud",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "NAT Gateways in Amazon GovCloud",
        "description": "Since AWS GovCloud has no managed NAT gateways this task is left for you to set up. This post is the third in a series to explain how you can make it work.",
        "updated": null,
        "date": "2017-11-30T14:25:00Z",
        "year": 2017,
        "month": 11,
        "day": 30,
        "taxonomies": {
          "categories": [
            "devops",
            "kube360"
          ],
          "tags": [
            "devops",
            "aws",
            "govcloud"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Yghor Kerscher",
          "html": "hubspot-blogs/nat-gateways-in-amazon-govcloud.html",
          "blogimage": "/images/blog-listing/cloud-computing.png"
        },
        "path": "/blog/nat-gateways-in-amazon-govcloud/",
        "components": [
          "blog",
          "nat-gateways-in-amazon-govcloud"
        ],
        "summary": null,
        "toc": [],
        "word_count": 0,
        "reading_time": 0,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 7
  },
  {
    "name": "rust",
    "slug": "rust",
    "path": "/categories/rust/",
    "permalink": "https://tech.fpcomplete.com/categories/rust/",
    "pages": [
      {
        "relative_path": "blog/rust-asref-asderef.md",
        "colocated_path": null,
        "content": "<p>What's wrong with this program?</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    match option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>The compiler gives us a wonderful error message, including a hint on how to fix it:</p>\n<pre><code>error[E0382]: borrow of partially moved value: `option_name`\n --&gt; src\\main.rs:7:22\n  |\n4 |         Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n  |              ---- value partially moved here\n...\n7 |     println!(&quot;{:?}&quot;, option_name);\n  |                      ^^^^^^^^^^^ value borrowed here after partial move\n  |\n  = note: partial move occurs because value has type `String`, which does not implement the `Copy` trait\nhelp: borrow this field in the pattern to avoid moving `option_name.0`\n  |\n4 |         Some(ref name) =&gt; println!(&quot;Name is {}&quot;, name),\n  |              ^^^\n</code></pre>\n<p>The issue here is that our pattern match on <code>option_name</code> moves the <code>Option&lt;String&gt;</code> value into the match. We can then no longer use <code>option_name</code> after the <code>match</code>. But this is disappointing, because our usage of <code>option_name</code> and <code>name</code> inside the pattern match doesn't actually require moving the value at all! Instead, borrowing would be just fine.</p>\n<p>And that's exactly what the <code>note</code> from the compiler says. We can use the <code>ref</code> keyword in the <a href=\"https://doc.rust-lang.org/stable/reference/patterns.html#identifier-patterns\">identifier pattern</a> to change this behavior and, instead of <em>moving</em> the value, we'll borrow a reference to the value. 
Now we're free to reuse <code>option_name</code> after the <code>match</code>. That version of the code looks like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    match option_name {\n        Some(ref name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>For the curious, you can <a href=\"https://doc.rust-lang.org/std/keyword.ref.html\">read more about the <code>ref</code> keyword</a>.</p>\n<h2 id=\"more-idiomatic\">More idiomatic</h2>\n<p>While this is <em>working</em> code, in my opinion and experience, it's not idiomatic. It's far more common to put the borrow on <code>option_name</code>, like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    match &amp;option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>I like this version more, since it's blatantly obvious that we have no intention of moving <code>option_name</code> in the pattern match. Now <code>name</code> still remains as a reference, <code>println!</code> can use it as a reference, and everything is fine.</p>\n<p>The fact that this code works, however, is a specifically added feature of the language. Before <a href=\"https://rust-lang.github.io/rfcs/2005-match-ergonomics.html\">RFC 2005 &quot;match ergonomics&quot; landed in 2016</a>, the code above would have failed. 
That's because we tried to match the <code>Some</code> constructor against a <em>reference</em> to an <code>Option</code>, and those types don't match up. To borrow the RFC's terminology, getting that code to work would require &quot;a bit of a dance&quot;:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    match &amp;option_name {\n        &amp;Some(ref name) =&gt; println!(&quot;Name is {}&quot;, name),\n        &amp;None =&gt; println!(&quot;No name provided&quot;),\n    }\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>Now all of the types really line up explicitly:</p>\n<ul>\n<li>We have an <code>&amp;Option&lt;String&gt;</code></li>\n<li>We can therefore match on a <code>&amp;Some</code> variant or a <code>&amp;None</code> variant</li>\n<li>In the <code>&amp;Some</code> variant, we need to make sure we borrow the inner value, so we add a <code>ref</code> keyword</li>\n</ul>\n<p>Fortunately, with RFC 2005 in place, this extra noise isn't needed, and we can simplify our pattern match as above. The Rust language is better for this change, and the masses can rejoice.</p>\n<h2 id=\"introducing-as-ref\">Introducing as_ref</h2>\n<p>But what if we didn't have RFC 2005? Would we be required to use the awkward syntax above forever? Thanks to a helper method, no. The problem in our code is that <code>&amp;option_name</code> is a reference to an <code>Option&lt;String&gt;</code>. And we want to pattern match on the <code>Some</code> and <code>None</code> constructors, and capture a <code>&amp;String</code> instead of a <code>String</code> (avoiding the move). RFC 2005 implements that as a direct language feature. 
But there's also a method on <code>Option</code> that does just this: <code>as_ref</code>.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;T&gt; Option&lt;T&gt; {\n    pub const fn as_ref(&amp;self) -&gt; Option&lt;&amp;T&gt; {\n        match *self {\n            Some(ref x) =&gt; Some(x),\n            None =&gt; None,\n        }\n    }\n}\n</code></pre>\n<p>This is another way of avoiding the &quot;dance,&quot; by capturing it in the method definition itself. But thankfully, there's a great language ergonomics feature that captures this pattern, and automatically applies this rule for us. Meaning that <code>as_ref</code> isn't really necessary any more... right?</p>\n<h2 id=\"side-rant-ergonomics-in-rust\">Side rant: ergonomics in Rust</h2>\n<p>I absolutely love the ergonomics features of Rust. There is no &quot;but&quot; in my love for RFC 2005. There is, however, a concern around learning and teaching a language with these kinds of ergonomics. These kinds of features work 99% of the time. But when they fail, as we're about to see, it can come as a large shock.</p>\n<p>I'm guessing most Rustaceans, at least those that learned the language after 2016, never considered the fact that there was something weird about being able to pattern match a <code>Some</code> from an <code>&amp;Option&lt;String&gt;</code> value. It feels natural. It <em>is</em> natural. But because you were never forced to confront this while learning the language, at some point in the distant future you'll crash into a wall when this ergonomic feature doesn't kick in.</p>\n<p>I kind of wish there was a <code>--no-ergonomics</code> flag that we could turn on when learning the language to force us to confront all of these details. But there isn't. I'm hoping blog posts like this help out. 
Anyway, &lt;/rant&gt;.</p>\n<h2 id=\"when-rfc-2005-fails\">When RFC 2005 fails</h2>\n<p>We can fairly easily create a contrived example of match ergonomics failing to solve our problem. Let's &quot;improve&quot; our program above by factoring out the greet logic to its own helper function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn try_greet(option_name: Option&lt;&amp;String&gt;) {\n    match option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n}\n\nfn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    try_greet(&amp;option_name);\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>This code won't compile:</p>\n<pre><code>error[E0308]: mismatched types\n  --&gt; src\\main.rs:10:15\n   |\n10 |     try_greet(&amp;option_name);\n   |               ^^^^^^^^^^^^\n   |               |\n   |               expected enum `Option`, found `&amp;Option&lt;String&gt;`\n   |               help: you can convert from `&amp;Option&lt;T&gt;` to `Option&lt;&amp;T&gt;` using `.as_ref()`: `&amp;option_name.as_ref()`\n   |\n   = note:   expected enum `Option&lt;&amp;String&gt;`\n           found reference `&amp;Option&lt;String&gt;`\n</code></pre>\n<p>Now we've bypassed any ability to use match ergonomics at the call site. With what we know about <code>as_ref</code>, it's easy enough to fix this. 
But, at least in my experience, the first time someone runs into this kind of error, it's a bit surprising, since most of us have never previously thought about the distinction between <code>Option&lt;&amp;T&gt;</code> and <code>&amp;Option&lt;T&gt;</code>.</p>\n<p>These kinds of errors tend to pop up when combining together other helper functions, such as <code>map</code>, which circumvent the need for explicit pattern matching.</p>\n<p>As an aside, you could solve this compile error pretty easily, without resorting to <code>as_ref</code>. Instead, you could change the type signature of <code>try_greet</code> to take a <code>&amp;Option&lt;String&gt;</code> instead of an <code>Option&lt;&amp;String&gt;</code>, and then allow the match ergonomics to kick in within the body of <code>try_greet</code>. One reason not to do this is that, as mentioned, this was all a contrived example to demonstrate a failure. But the other reason is more important: neither <code>&amp;Option&lt;String&gt;</code> nor <code>Option&lt;&amp;String&gt;</code> are good argument types. Let's explore that next.</p>\n<h2 id=\"when-as-ref-fails\">When as_ref fails</h2>\n<p>We're taught pretty early in our Rust careers that, when receiving an argument to a function, we should prefer taking references to slices instead of references to owned objects. In other words:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn greet_good(name: &amp;str) {\n    println!(&quot;Name is {}&quot;, name);\n}\n\nfn greet_bad(name: &amp;String) {\n    println!(&quot;Name is {}&quot;, name);\n}\n</code></pre>\n<p>And in fact, if you pass this code by <code>clippy</code>, it will tell you to change the signature of <code>greet_bad</code>. 
The <a href=\"https://rust-lang.github.io/rust-clippy/master/index.html#ptr_arg\">clippy lint description</a> provides a great explanation of this, but suffice it to say that <code>greet_good</code> is more general in what it accepts than <code>greet_bad</code>.</p>\n<p>The same logic applies to <code>try_greet</code>. Why should we accept <code>Option&lt;&amp;String&gt;</code> instead of <code>Option&lt;&amp;str&gt;</code>? And interestingly, clippy doesn't complain in this case like it did in <code>greet_bad</code>. To see why, let's change our signature like so and see what happens:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn try_greet(option_name: Option&lt;&amp;str&gt;) {\n    match option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n}\n\nfn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    try_greet(option_name.as_ref());\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>This code no longer compiles:</p>\n<pre><code>error[E0308]: mismatched types\n  --&gt; src\\main.rs:10:15\n   |\n10 |     try_greet(option_name.as_ref());\n   |               ^^^^^^^^^^^^^^^^^^^^ expected `str`, found struct `String`\n   |\n   = note: expected enum `Option&lt;&amp;str&gt;`\n              found enum `Option&lt;&amp;String&gt;`\n</code></pre>\n<p>This is another example of ergonomics failing. You see, when you call a function with an argument of type <code>&amp;String</code>, but the function expects a <code>&amp;str</code>, <a href=\"https://doc.rust-lang.org/book/ch15-02-deref.html#implicit-deref-coercions-with-functions-and-methods\">deref coercion</a> kicks in and will perform a conversion for you. This is a piece of Rust ergonomics that we all rely on regularly, and every once in a while it completely fails to help us. This is one of those times. 
The compiler will not automatically convert an <code>Option&lt;&amp;String&gt;</code> into an <code>Option&lt;&amp;str&gt;</code>.</p>\n<p>(You can also read more about <a href=\"https://doc.rust-lang.org/nomicon/coercions.html\">coercions in the nomicon</a>.)</p>\n<p>Fortunately, there's another helper method on <code>Option</code> that does this for us. <code>as_deref</code> works just like <code>as_ref</code>, but additionally performs a <code>deref</code> method call on the value. Its implementation in <code>std</code> is interesting:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;T: Deref&gt; Option&lt;T&gt; {\n    pub fn as_deref(&amp;self) -&gt; Option&lt;&amp;T::Target&gt; {\n        self.as_ref().map(|t| t.deref())\n    }\n}\n</code></pre>\n<p>But we can also implement it more explicitly to see the behavior spelled out:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::ops::Deref;\n\nfn try_greet(option_name: Option&lt;&amp;str&gt;) {\n    match option_name {\n        Some(name) =&gt; println!(&quot;Name is {}&quot;, name),\n        None =&gt; println!(&quot;No name provided&quot;),\n    }\n}\n\nfn my_as_deref&lt;T: Deref&gt;(x: &amp;Option&lt;T&gt;) -&gt; Option&lt;&amp;T::Target&gt; {\n    match *x {\n        None =&gt; None,\n        Some(ref t) =&gt; Some(t.deref())\n    }\n}\n\nfn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    try_greet(my_as_deref(&amp;option_name));\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<p>And to bring this back to something closer to real-world code, here's a case where combining <code>as_deref</code> and <code>map</code> leads to much cleaner code than you'd otherwise have:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn greet(name: &amp;str) {\n    println!(&quot;Name is 
{}&quot;, name);\n}\n\nfn main() {\n    let option_name: Option&lt;String&gt; = Some(&quot;Alice&quot;.to_owned());\n    option_name.as_deref().map(greet);\n    println!(&quot;{:?}&quot;, option_name);\n}\n</code></pre>\n<h2 id=\"real-ish-life-example\">Real-ish life example</h2>\n<p>Like most of my blog posts, this one was inspired by some real world code. To simplify the concept down a bit, I was parsing a config file, and ended up with an <code>Option&lt;String&gt;</code>. I needed some code that would either provide the value from the config, or default to a static string in the source code. Without <code>as_deref</code>, I could have used <code>STATIC_STRING_VALUE.to_string()</code> to get types to line up, but that would have been ugly and inefficient. Here's a somewhat intact representation of that code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use serde::Deserialize;\n\n#[derive(Deserialize)]\nstruct Config {\n    some_value: Option&lt;String&gt;\n}\n\nconst DEFAULT_VALUE: &amp;str = &quot;my-default-value&quot;;\n\nfn main() {\n    let mut file = std::fs::File::open(&quot;config.yaml&quot;).unwrap();\n    let config: Config = serde_yaml::from_reader(&amp;mut file).unwrap();\n    let value = config.some_value.as_deref().unwrap_or(DEFAULT_VALUE);\n    println!(&quot;value is {}&quot;, value);\n}\n</code></pre>\n<p>Want to learn more Rust with FP Complete? Check out these links:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training courses</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"/tags/rust/\">Rust tagged articles</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust homepage</a></li>\n</ul>\n",
        "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/",
        "slug": "rust-asref-asderef",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Rust's as_ref vs as_deref",
        "description": "A short analysis of when to use the Option methods as_ref and as_deref",
        "updated": null,
        "date": "2021-07-05",
        "year": 2021,
        "month": 7,
        "day": 5,
        "taxonomies": {
          "tags": [
            "rust"
          ],
          "categories": [
            "functional programming",
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "author_avatar": "/images/leaders/michael-snoyman.png",
          "image": "images/blog/thumbs/rust-asref-asderef.png"
        },
        "path": "/blog/rust-asref-asderef/",
        "components": [
          "blog",
          "rust-asref-asderef"
        ],
        "summary": null,
        "toc": [
          {
            "level": 2,
            "id": "more-idiomatic",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#more-idiomatic",
            "title": "More idiomatic",
            "children": []
          },
          {
            "level": 2,
            "id": "introducing-as-ref",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#introducing-as-ref",
            "title": "Introducing as_ref",
            "children": []
          },
          {
            "level": 2,
            "id": "side-rant-ergonomics-in-rust",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#side-rant-ergonomics-in-rust",
            "title": "Side rant: ergonomics in Rust",
            "children": []
          },
          {
            "level": 2,
            "id": "when-rfc-2005-fails",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#when-rfc-2005-fails",
            "title": "When RFC 2005 fails",
            "children": []
          },
          {
            "level": 2,
            "id": "when-as-ref-fails",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#when-as-ref-fails",
            "title": "When as_ref fails",
            "children": []
          },
          {
            "level": 2,
            "id": "real-ish-life-example",
            "permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#real-ish-life-example",
            "title": "Real-ish life example",
            "children": []
          }
        ],
        "word_count": 1822,
        "reading_time": 10,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      },
      {
        "relative_path": "blog/cloning-reference-method-calls.md",
        "colocated_path": null,
        "content": "<p>This semi-surprising corner case came up in some recent <a href=\"https://tech.fpcomplete.com/training/\">Rust training</a> I was giving. I figured a short write-up may help some others in the future.</p>\n<p>Rust's language design focuses on ergonomics. The goal is to make common patterns easy to write on a regular basis. This overall works out very well. But occasionally, you end up with a surprising outcome. And I think this situation is a good example.</p>\n<p>Let's start off by pretending that method syntax doesn't exist at all. Let's say I've got a <code>String</code>, and I want to clone it. I know that there's a <code>Clone::clone</code> method, which takes a <code>&amp;String</code> and returns a <code>String</code>. We can leverage that like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn uses_string(x: String) {\n    println!(&quot;I consumed the String! {}&quot;, x);\n}\n\nfn main() {\n    let name = &quot;Alice&quot;.to_owned();\n    let name_clone = Clone::clone(&amp;name);\n    uses_string(name);\n    uses_string(name_clone);\n}\n</code></pre>\n<p>Notice that I needed to pass <code>&amp;name</code> to <code>clone</code>, not simply <code>name</code>. If I did the latter, I would end up with a type error:</p>\n<pre><code>error[E0308]: mismatched types\n --&gt; src\\main.rs:7:35\n  |\n7 |     let name_clone = Clone::clone(name);\n  |                                   ^^^^\n  |                                   |\n  |                                   expected reference, found struct `String`\n  |                                   help: consider borrowing here: `&amp;name`\n</code></pre>\n<p>And that's because Rust won't automatically borrow a reference from function arguments. You need to explicitly say that you want to borrow the value. Cool.</p>\n<p>But now I've remembered that method syntax <em>is</em>, in fact, a thing. 
So let's go ahead and use it!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let name_clone = (&amp;name).clone();\n</code></pre>\n<p>Remembering that <code>clone</code> takes a <code>&amp;String</code> and not a <code>String</code>, I've gone ahead and helpfully borrowed from <code>name</code> before calling the <code>clone</code> method. And I needed to wrap up that whole expression in parentheses, otherwise it will be parsed incorrectly by the compiler.</p>\n<p>That all works, but it's clearly not the way we want to write code in general. Instead, we'd like to forgo the parentheses and the <code>&amp;</code> symbol. And fortunately, we can! Most Rustaceans early on learn that you can simply do this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let name_clone = name.clone();\n</code></pre>\n<p>In other words, when we use method syntax, we can call <code>.clone()</code> on either a <code>String</code> <em>or</em> a <code>&amp;String</code>. That's because with a <a href=\"https://doc.rust-lang.org/stable/reference/expressions/method-call-expr.html\">method call expression</a>, &quot;the receiver may be automatically dereferenced or borrowed in order to call a method.&quot; Essentially, the compiler follows these steps:</p>\n<ul>\n<li>What's the type of <code>name</code>? OK, it's a <code>String</code></li>\n<li>Is there a method available that takes a <code>String</code> as the receiver? Nope.</li>\n<li>OK, try borrowing it. Is there a method available that takes a <code>&amp;String</code> as the receiver? Yes. Use that!</li>\n</ul>\n<p>And, for the most part, this works exactly as you'd expect. Until it doesn't. Let's start off with a confusing error message. 
Let's say I've got a helper function to loudly clone a <code>String</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn clone_loudly(x: &amp;String) -&gt; String {\n    println!(&quot;Cloning {}&quot;, x);\n    x.clone()\n}\n\nfn uses_string(x: String) {\n    println!(&quot;I consumed the String! {}&quot;, x);\n}\n\nfn main() {\n    let name = &quot;Alice&quot;.to_owned();\n    let name_clone = clone_loudly(&amp;name);\n    uses_string(name);\n    uses_string(name_clone);\n}\n</code></pre>\n<p>Looking at <code>clone_loudly</code>, I realize that I can easily generalize this to more than just a <code>String</code>. The only two requirements are that the type must implement <code>Display</code> (for the <code>println!</code> call) and <code>Clone</code>. Let's go ahead and implement that, accidentally forgetting about the <code>Clone</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::fmt::Display;\nfn clone_loudly&lt;T: Display&gt;(x: &amp;T) -&gt; T {\n    println!(&quot;Cloning {}&quot;, x);\n    x.clone()\n}\n</code></pre>\n<p>As you'd expect, this doesn't compile. However, the error message given may be surprising. If you're like me, you were probably expecting an error message about missing a <code>Clone</code> bound on <code>T</code>. 
In fact, we get something else entirely:</p>\n<pre><code>error[E0308]: mismatched types\n --&gt; src\\main.rs:4:5\n  |\n2 | fn clone_loudly&lt;T: Display&gt;(x: &amp;T) -&gt; T {\n  |                 - this type parameter - expected `T` because of return type\n3 |     println!(&quot;Cloning {}&quot;, x);\n4 |     x.clone()\n  |     ^^^^^^^^^ expected type parameter `T`, found `&amp;T`\n  |\n  = note: expected type parameter `T`\n                  found reference `&amp;T`\n</code></pre>\n<p>Strangely enough, the <code>.clone()</code> seems to have succeeded, but returned a <code>&amp;T</code> instead of a <code>T</code>. That's because the method call expression is following the same steps as above with <code>String</code>, namely:</p>\n<ul>\n<li>What's the type of <code>x</code>? OK, it's a <code>&amp;T</code></li>\n<li>Is there a <code>clone</code> method available that takes a <code>&amp;T</code> as the receiver? Nope, since we don't know that <code>T</code> implements the <code>Clone</code> trait.</li>\n<li>OK, try borrowing it. Is there a method available that takes a <code>&amp;&amp;T</code> as the receiver? <a href=\"https://doc.rust-lang.org/1.48.0/src/core/clone.rs.html#222-227\">Interestingly yes</a>.</li>\n</ul>\n<p>Let's dig in on that <code>Clone</code> implementation a bit. Removing a bit of noise so we can focus on the important bits:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl&lt;T&gt; Clone for &amp;T {\n    fn clone(self: &amp;&amp;T) -&gt; &amp;T {\n        *self\n    }\n}\n</code></pre>\n<p>Since references are <code>Copy</code>able, derefing a reference to a reference results in copying the inner reference value. 
What I find fascinating, and slightly concerning, is that we have two orthogonal features in the language:</p>\n<ul>\n<li>Method call syntax automatically causing borrows</li>\n<li>The ability to implement traits for both a type and a reference to that type</li>\n</ul>\n<p>When combined, there's some level of ambiguity about <em>which</em> trait implementation will end up being used.</p>\n<p>In this example, we're fortunate that the code didn't compile. We ended up with nothing more than a confusing error message. I haven't yet run into a real life issue where this behavior can result in code which compiles but does the wrong thing. It's certainly theoretically possible, but seems unlikely to occur unintentionally. That said, if anyone has been bitten by this, I'd be very interested to hear the details.</p>\n<p>So the takeaway: autoborrowing and derefing as part of method call syntax is a great feature of the language. It would be a major pain to use Rust without it. I'm glad it's present. Having traits implemented for references is a great feature, and I wouldn't want to use the language without it.</p>\n<p>But every once in a while, these two things bite us. Caveat emptor.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/cloning-reference-method-calls/",
        "slug": "cloning-reference-method-calls",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Cloning a reference and method call syntax in Rust",
        "description": "A short example of a possibly surprising impact of how method resolution works in Rust",
        "updated": null,
        "date": "2020-12-28",
        "year": 2020,
        "month": 12,
        "day": 28,
        "taxonomies": {
          "categories": [
            "functional programming",
            "rust"
          ],
          "tags": [
            "rust"
          ]
        },
        "authors": [],
        "extra": {
          "author": "Michael Snoyman",
          "blogimage": "/images/blog-listing/rust.png",
          "image": "images/blog/method-syntax-autoborrow-surprise.png",
          "author_avatar": "/images/leaders/michael-snoyman.png"
        },
        "path": "/blog/cloning-reference-method-calls/",
        "components": [
          "blog",
          "cloning-reference-method-calls"
        ],
        "summary": null,
        "toc": [],
        "word_count": 975,
        "reading_time": 5,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 2
  },
  {
    "name": "smart contracts",
    "slug": "smart-contracts",
    "path": "/categories/smart-contracts/",
    "permalink": "https://tech.fpcomplete.com/categories/smart-contracts/",
    "pages": [
      {
        "relative_path": "blog/blockchain-technology-smart-contracts-save-money.md",
        "colocated_path": null,
        "content": "<p>With the cost of goods rising and quality workers and resources increasingly scarce, saving money and time in your day-to-day business operations is paramount. Adopting blockchain technology into those operations can give you back valuable time, save you money, reduce dependency on workers, and modernize your business for good. Blockchain technology can help your business save money and resources in many ways, but one profound way is through the use of smart contracts.</p>\n<p>Smart contracts are software contracts that execute predefined logic based on the parameters coded into the system. They are digital agreements that automatically run transactions between parties, increasing speed, accuracy, and integrity in payment and performance. Smart contracts are also legally enforceable if they comply with contract law.</p>\n<p>A smart contract aims to provide transactional security while reducing surplus transaction costs. It can automate the execution of an agreement so that all parties are immediately sure of the outcome without the need for intermediary involvement. For example, instead of hiring a department to handle contract review and purchasing, your business can run smart contracts that enforce the same procedures more effectively at substantial cost savings. Your business can also use smart contracts to manage corporate documents, regulatory compliance procedures, cross-border financial transactions, real property ownership, supply management, and the chronology of ownership of your business IP, materials, and licenses.</p>\n<p>Finance and banking are prime examples of industries that have benefited from smart contract applications. Smart contracts track corporate spending, stock trading, investing, lending, and borrowing. 
Smart contracts are also used in corporate mergers and acquisitions, and frequently to configure or reconfigure entire corporate structures.</p>\n<p>Below is an illustration of how smart contracts work:</p>\n<p><img src=\"/images/blog/how-smart-contracts-work.png\" alt=\"How smart contracts work\" /></p>\n<p>As you can imagine, blockchain technology and smart contracts are still developing. They have some roadblocks and implementation challenges. Still, these pitfalls cannot take away from the many benefits blockchain technology offers to businesses needing to save money and resources.</p>\n<p>FP Complete Corporation has direct experience <a href=\"https://www.fpblock.com\">working with blockchain technologies</a>, most recently the <a href=\"https://tech.fpcomplete.com/blog/levana-nft-launch/\">Levana NFT launch</a>, which relied on blockchain technology written by one of our engineers. Previously, one of our senior engineers released a video titled “<a href=\"https://www.youtube.com/watch?v=jngHo0Gzk6s\">How to be Successful at Blockchain Development</a>,” highlighting our expertise in this area in detail. If you want to learn more about how we can help you with blockchain technology, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us today</a>.</p>\n",
        "permalink": "https://tech.fpcomplete.com/blog/blockchain-technology-smart-contracts-save-money/",
        "slug": "blockchain-technology-smart-contracts-save-money",
        "ancestors": [
          "_index.md",
          "blog/_index.md"
        ],
        "title": "Blockchain Technology, Smart Contracts, and Your Company",
        "description": "How Blockchain Technology and Smart Contracts Can Help You and Your Company Save Money and Resources Now!",
        "updated": null,
        "date": "2022-01-16",
        "year": 2022,
        "month": 1,
        "day": 16,
        "taxonomies": {
          "tags": [
            "blockchain",
            "smart contracts"
          ],
          "categories": [
            "blockchain",
            "smart contracts"
          ]
        },
        "authors": [],
        "extra": {
          "author": "FP Complete",
          "keywords": "blockchain, NFT, cryptocurrency, smart contracts",
          "blogimage": "/images/blog-listing/blockchain.png"
        },
        "path": "/blog/blockchain-technology-smart-contracts-save-money/",
        "components": [
          "blog",
          "blockchain-technology-smart-contracts-save-money"
        ],
        "summary": null,
        "toc": [],
        "word_count": 440,
        "reading_time": 3,
        "assets": [],
        "draft": false,
        "lang": "en",
        "lower": null,
        "higher": null,
        "translations": [],
        "backlinks": []
      }
    ],
    "page_count": 1
  }
]