[
{
"name": "blockchain",
"slug": "blockchain",
"path": "/categories/blockchain/",
"permalink": "https://tech.fpcomplete.com/categories/blockchain/",
"pages": [
{
"relative_path": "blog/blockchain-technology-smart-contracts-save-money.md",
"colocated_path": null,
"content": "<p>With the cost of goods only going up and the increased scarcity of quality workers and resources, saving money and time in your day-to-day business operations is paramount. Therefore, adopting blockchain technology into your traditional day-to-day business operations is key to giving you back valuable time, saving you money, creating less dependency on workers, and modernizing your business operations for good. There are many ways blockchain technology can help you and your business save money and resources, but one profound way is through the use of smart contracts.</p>\n<p>Smart contracts are software contracts that execute predefined logic based on the parameters coded into the system. Smart contracts are digital agreements that automatically run transactions between parties, increasing speed, accuracy, and integrity in payment and performance. In addition, smart contracts are legally enforceable if they comply with contract law. </p>\n<p>The smart contract aims to provide transactional security while reducing surplus transaction costs. In addition, smart contracts can automate the execution of an agreement so that all parties are immediately sure of the outcome without the need for intermediary involvement. For example, instead of hiring a department to handle contract review and purchasing, your business can run smart contracts that enforce the same procedures more effectively at substantial cost savings. In addition, your business can use smart contracts to manage your corporate documents, regulatory compliance procedures, cross-border financial transactions, real property ownership, supply management, and the chronology of ownership of your business IP, materials, and licenses. </p>\n<p>Finance and banking are prime examples of industries that have benefited from smart contract applications. Smart contracts track corporate spending, stock trading, investing, lending, and borrowing. 
Smart contracts are also used in corporate mergers and acquisitions and are frequently used to configure or reconfigure entire corporate structures. </p>\n<p>Below is an illustration of how smart contracts work:</p>\n<p><img src=\"/images/blog/how-smart-contracts-work.png\" alt=\"How smart contracts work\" /></p>\n<p>As you can imagine, blockchain technology and smart contracts are still developing. They do have some roadblocks and implementation challenges. Still, these pitfalls and hassles cannot take away from the many benefits blockchain technology offers to businesses needing to save money and resources.</p>\n<p>FP Complete Corporation has direct experience <a href=\"https://www.fpblock.com\">working with blockchain technologies</a>, most recently the <a href=\"https://tech.fpcomplete.com/blog/levana-nft-launch/\">Levana NFT launch</a>, which relied on blockchain technology written by one of our engineers. Previously, one of our senior engineers released a video titled “<a href=\"https://www.youtube.com/watch?v=jngHo0Gzk6s\">How to be Successful at Blockchain Development</a>,” highlighting our expertise in this area in detail. If you want to learn more about how we can help you with blockchain technology, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us today</a>.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/blockchain-technology-smart-contracts-save-money/",
"slug": "blockchain-technology-smart-contracts-save-money",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Blockchain Technology, Smart Contracts, and Your Company",
"description": "How Blockchain Technology and Smart Contracts Can Help You and Your Company Save Money and Resources Now!",
"updated": null,
"date": "2022-01-16",
"year": 2022,
"month": 1,
"day": 16,
"taxonomies": {
"tags": [
"blockchain",
"smart contracts"
],
"categories": [
"blockchain",
"smart contracts"
]
},
"authors": [],
"extra": {
"author": "FP Complete",
"keywords": "blockchain, NFT, cryptocurrency, smart contracts",
"blogimage": "/images/blog-listing/blockchain.png"
},
"path": "/blog/blockchain-technology-smart-contracts-save-money/",
"components": [
"blog",
"blockchain-technology-smart-contracts-save-money"
],
"summary": null,
"toc": [],
"word_count": 440,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/hedera-platform-audit.md",
"colocated_path": null,
"content": "<p><strong>FP Complete Publishes Results of Independent 3rd Party Audits of Hedera Platform and New Hedera Token Service</strong></p>\n<p><em>FP Complete Corporation development specialists conducted a comprehensive review of Hedera's code and technical documentation</em></p>\n<p><strong>Zug, Switzerland – February 9, 2021 –</strong> As part of its goal to deliver\ntransparency to the development community,\n<a href=\"http://www.hedera.com/\">Hedera Hashgraph</a>, the enterprise-grade public\ndistributed ledger, engaged FP Complete, an IT engineering specialist,\nto perform an independent audit of the engineering work by Hedera's\ndevelopment team on the Hedera platform, including the new Hedera Token\nService. The full completed audit reports can be found at:</p>\n<ul>\n<li><a href=\"https://hedera.com/fp-complete-hedera\">Hedera Platform</a></li>\n<li><a href=\"https://hedera.com/fp-complete-hts\">Hedera Token Service</a></li>\n</ul>\n<p>Founded by the former head of Microsoft's own in-house engineering\ntools, Aaron Contorer, FP Complete Corporation is the world's leading\nsupplier of commercial-grade tools and engineering for advanced\nprogramming languages, distributed systems, blockchain, and DevOps\ntechnologies. FP Complete performed an in-depth code review to examine\nthe Hedera software quality, focusing on robustness, security, and\naudibility.</p>\n<p>FP Complete also completed a review of Hedera's code and technical\ndocumentation, enabling the development team to use this ongoing work to\noptimize the engineering methods, tools, and coding standards used to\nimplement the Hedera network. The publication of these results\ndemonstrates the Company's commitment to technical rigor and\ntransparency.</p>\n<p>Dr. 
Leemon Baird, co-founder and Chief Scientist of Hedera Hashgraph,\ncomments: "These third-party audits by FP Complete illustrate our\ncommitment to high-quality engineering, project transparency, and a\nrigorous and independent auditing process. We are pleased to be able to\npublish these audit results today and look forward to sharing additional\naudit findings as they are completed in the future."</p>\n<p>Wesley Crook, CEO of FP Complete, comments: "We have worked with the\nHedera team to conduct a third-party audit of their codebase to assess\nsecurity, stability, and correctness. Our team of software, blockchain,\nand network architecture experts has provided feedback throughout the\ndevelopment process."</p>\n<hr />\n<h2 id=\"about-hedera\">About Hedera</h2>\n<p>Hedera is a decentralized enterprise-grade public network on which\nanyone can build secure, fair applications with near real-time finality.\nThe platform is owned and governed by a council of the world's leading\norganizations including Avery Dennison, Boeing, Dentons, Deutsche\nTelekom, DLA Piper, eftpos, FIS (WorldPay), Google, IBM, LG Electronics,\nMagalu, Nomura, Swirlds, Tata Communications, University College London\n(UCL), Wipro, and Zain Group.</p>\n<p>For more information, visit\nhttps://www.hedera.com, or follow us on Twitter\nat <a href=\"https://twitter.com/hedera\">@hedera</a>, Telegram at\n<a href=\"https://t.me/hederahashgraph\">t.me/hederahashgraph</a>, or Discord at\n<a href=\"https://www.hedera.com/discord\">www.hedera.com/discord</a>. The Hedera\nwhitepaper can be found at\n<a href=\"https://hedera.com/papers\">www.hedera.com/papers</a>.</p>\n<h2 id=\"about-fp-complete\">About FP Complete</h2>\n<p>FP Complete is an advanced server-side software development and DevOps\nconsulting Company. 
We specialize in helping FinTech companies solve\ntheir unique set of problems related to data and information integrity,\ndata security, architectural design, systems integration, and regulatory\ncompliance. We are recognized worldwide for our contributions to the\nfunctional programming community using the Haskell programming language.\nOur people and processes have helped countless companies increase the\nvelocity and quality of their delivered software products. From Fortune\n500 biotech companies to small blockchain FinTech software companies, we\nhave solved unique and complicated problems with expert results.</p>\n<p><a href=\"https://www.fpcomplete.com/\">https://www.fpcomplete.com/</a></p>\n",
"permalink": "https://tech.fpcomplete.com/blog/hedera-platform-audit/",
"slug": "hedera-platform-audit",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Hedera Platform Audit",
"description": "FP Complete has conducted a third party audit of the Hedera Platform and New Hedera Token Service. Check out the press release for more information.",
"updated": null,
"date": "2021-02-09",
"year": 2021,
"month": 2,
"day": 9,
"taxonomies": {
"tags": [
"blockchain"
],
"categories": [
"blockchain"
]
},
"authors": [],
"extra": {
"author": "FP Complete Staff",
"blogimage": "/images/blog-listing/distributed-ledger.png",
"image": "images/blog/hedera-platform-audit.png"
},
"path": "/blog/hedera-platform-audit/",
"components": [
"blog",
"hedera-platform-audit"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "about-hedera",
"permalink": "https://tech.fpcomplete.com/blog/hedera-platform-audit/#about-hedera",
"title": "About Hedera",
"children": []
},
{
"level": 2,
"id": "about-fp-complete",
"permalink": "https://tech.fpcomplete.com/blog/hedera-platform-audit/#about-fp-complete",
"title": "About FP Complete",
"children": []
}
],
"word_count": 538,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
}
],
"page_count": 2
},
{
"name": "devops",
"slug": "devops",
"path": "/categories/devops/",
"permalink": "https://tech.fpcomplete.com/categories/devops/",
"pages": [
{
"relative_path": "blog/partnership-portworx-pure-storage.md",
"colocated_path": null,
"content": "<p><strong>FP Complete Corporation Announces Partnership with Portworx by Pure Storage to Streamline World-Class DevOps Consulting Services with State-of-the-Art, End-To-End Storage and Data Management Solution for Kubernetes Projects.</strong></p>\n<p><strong>Charlotte, North Carolina (August 31, 2022)</strong> – FP Complete Corporation, a global technology partner that specializes in DevSecOps, Cloud Native Computing, and Advanced Server-Side Programming Languages today announced that it has partnered with Portworx by Pure Storage to bring an integrated solution to customers seeking DevSecOps consulting services for the management of persistent storage, data protection, disaster recovery, data security, and hybrid data migrations.</p>\n<p>The partnership between FP Complete Corporation and Portworx will be integral in providing FP Complete's DevSecOps and Cloud Enablement clients with a data storage platform designed to run in a container that supports any cloud physical storage on any Kubernetes distribution.</p>\n<p>Portworx Enterprise gets right to the heart of what developers and Kubernetes admins want: data to behave like a cloud service. Developers and Admins wish to request Storage based on their requirements (capacity, performance level, resiliency level, security level, access, protection level, and more) and let the data management layer figure out all the details. Portworx PX-Backup adds enterprise-grade point-and-click backup and recovery for all applications running on Kubernetes, even if they are stateless.</p>\n<p>Portworx shortens development timelines and headaches for companies moving from on-prem to cloud. 
In addition, the integration between FP Complete Corporation and Portworx allows the easy exchange of best practices information, so design and storage run in parallel.</p>\n<p>Gartner predicts that by 2025, more than 85% of global organizations will be running containerized applications in production, up from less than 35% in 2019<sup>1</sup>. As container adoption increases and more applications are being deployed in the enterprise, these organizations want more options to manage stateful and persistent data associated with these modern applications.</p>\n<p>"It is my pleasure to announce that Pure Storage can now be utilized by our world-class engineers needing a fully integrated, end-to-end storage and data management solution for our DevSecOps clients with complicated Kubernetes projects. Pure Storage is known globally for its strength in the storage industry, and this partnership offers strong support for our business," said Wes Crook, CEO of FP Complete Corporation.</p>\n<p>“There can be zero doubt that most new cloud-native apps are built on containers and orchestrated by Kubernetes. Unfortunately, the early development on containers resulted in lots of data access and availability issues due to a lack of enterprise-grade persistent storage data management and low data visibility. 
With Portworx and the aid of Kubernetes experts like FP Complete, we can offer customers a rock-solid, enterprise-class, cloud-native development platform that delivers end-to-end application and data lifecycle management that significantly lowers the risks and costs of operating cloud-native application infrastructure,” said Venkat Ramakrishnan, VP, Engineering, Cloud Native Business Unit, Pure Storage.</p>\n<div><u><strong>About FP Complete Corporation</strong></u></div>\nFounded in 2012 by Aaron Contorer, former Microsoft executive, FP Complete Corporation is known globally as the one-stop, full-stack technology shop that delivers agile, reliable, repeatable, and highly secure software. In 2019, we launched our flagship platform, Kube360®, which is a fully managed enterprise Kubernetes-based DevOps ecosystem. With Kube360, FP Complete is now well positioned to provide a complete suite of products and solutions to our clients on their journey towards cloudification, containerization, and DevOps best practices. The Company's mission is to deliver superior software engineering to build great software for our clients. FP Complete Corporation serves more than 200 global clients and employs over 70 people worldwide. It has won many awards and made the Inc. 5000 list in 2020 for being one of the 5000 fastest-growing private companies in America. For more information about FP Complete Corporation, visit its website at <a href=\"https://www.fpcomplete.com/\">www.fpcomplete.com</a>.\n<p><sup>1</sup> <small>Arun Chandrasekaran, <a href=\"https://www.gartner.com/en/documents/3988395\">Best Practices for Running Containers and Kubernetes in Production</a>, Gartner, August 2020</small></p>\n",
"permalink": "https://tech.fpcomplete.com/blog/partnership-portworx-pure-storage/",
"slug": "partnership-portworx-pure-storage",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "FP Complete Corporation Announces Partnership with Portworx by Pure Storage",
"description": "FP Complete Corporation Announces Partnership with Portworx by Pure Storage to Streamline World-Class DevOps Consulting Services with State-of-the-Art, End-To-End Storage and Data Management Solution for Kubernetes Projects.",
"updated": null,
"date": "2022-08-29",
"year": 2022,
"month": 8,
"day": 29,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "FP Complete Staff",
"keywords": "Portworx Pure Storage",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/partnership-portworx-pure-storage/",
"components": [
"blog",
"partnership-portworx-pure-storage"
],
"summary": null,
"toc": [],
"word_count": 669,
"reading_time": 4,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/canary-deployment-istio.md",
"colocated_path": null,
"content": "<p>Istio is a service mesh that transparently adds various capabilities\nlike observability, traffic management and security to your\ndistributed collection of microservices. It comes with various\nfunctionalities like circuit breaking, granular traffic routing, mTLS\nmanagement, authentication and authorization polices, ability to do\nchaos testing etc.</p>\n<p>In this post, we will explore on how to do canary deployments of our\napplication using Istio.</p>\n<h2 id=\"what-is-canary-deployment\">What is Canary Deployment</h2>\n<p>Using Canary deployment strategy, you release a new version of your\napplication to a small percentage of the production traffic. And then\nyou monitor your application and gradually expand its percentage of\nthe production traffic.</p>\n<p>For a canary deployment to be shipped successfully, you need good\nmonitoring in place. Based on your exact use case, you might want to\ncheck various metrics like performance, user experience or <a href=\"https://en.wikipedia.org/wiki/Bounce_rate\">bounce\nrate</a>.</p>\n<h2 id=\"pre-requisites\">Pre requisites</h2>\n<p>This post assumes that following components are already provisioned or\ninstalled:</p>\n<ul>\n<li>Kubernetes cluster</li>\n<li>Istio</li>\n<li>cert-manager: (Optional, required if you want to provision TLS\ncertificates)</li>\n<li>Kiali (Optional)</li>\n</ul>\n<h2 id=\"istio-concepts\">Istio Concepts</h2>\n<p>For this specific deployment, we will be using three specific features\nof Istio's traffic management capabilities:</p>\n<ul>\n<li><a href=\"https://istio.io/latest/docs/concepts/traffic-management/#virtual-services\">Virtual Service</a>: Virtual Service describes how traffic flows to\na set of destinations. Using Virtual Service you can configure how\nto route the requests to a service within the mesh. 
It contains a\nbunch of routing rules that are evaluated, and then a decision is\nmade on where to route the incoming request (or even reject it if no\nroutes match).</li>\n<li><a href=\"https://istio.io/latest/docs/concepts/traffic-management/#gateways\">Gateway</a>: Gateways are used to manage your inbound and outbound\ntraffic. They allow you to specify the virtual hosts and their\nassociated ports that need to be opened to allow traffic\ninto the cluster.</li>\n<li><a href=\"https://istio.io/latest/docs/reference/config/networking/destination-rule/\">Destination Rule</a>: This is used to configure how a client in\nthe mesh interacts with your service. It's used for configuring TLS\nsettings of <a href=\"https://istio.io/latest/docs/reference/config/networking/sidecar/\">your sidecar</a>, splitting your service into subsets, and\nsetting the load balancing strategy for your clients.</li>\n</ul>\n<p>For a canary deployment, the destination rule plays a major role, as\nit's what we will use to split the service into subsets and\nroute traffic accordingly.</p>\n<h2 id=\"application-deployment\">Application deployment</h2>\n<p>For our canary deployment, we will be using the following versions of\nthe application:</p>\n<ul>\n<li><a href=\"https://httpbin.org/\">httpbin.org</a>: This will be version one (v1) of our\napplication. This is the application that's already deployed, and\nyour aim is to partially replace it with a newer version of the\napplication.</li>\n<li><a href=\"https://github.com/psibi/tornado-websocket-example\">websocket app</a>: This will be version two (v2) of the\napplication, which has to be gradually introduced.</li>\n</ul>\n<p>Note that in the real world, both versions would share\nthe same code. For our example, we are just taking two arbitrary\napplications to make testing easier.</p>\n<p>Our assumption is that we already have version one of our application\ndeployed. So let's deploy that initially. 
We will write our usual\nKubernetes resources for it. The deployment manifest for the version\none application:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: httpbin\n namespace: canary\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: httpbin\n version: v1\n template:\n metadata:\n labels:\n app: httpbin\n version: v1\n spec:\n containers:\n - image: docker.io/kennethreitz/httpbin\n imagePullPolicy: IfNotPresent\n name: httpbin\n ports:\n - containerPort: 80\n</code></pre>\n<p>And let's create a corresponding service for it:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: httpbin\n name: httpbin\n namespace: canary\nspec:\n ports:\n - name: httpbin\n port: 8000\n targetPort: 80\n - name: tornado\n port: 8001\n targetPort: 8888\n selector:\n app: httpbin\n type: ClusterIP\n</code></pre>\n<p>SSL certificate for the application which will use cert-manager:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: cert-manager.io/v1\nkind: Certificate\nmetadata:\n name: httpbin-ingress-cert\n namespace: istio-system\nspec:\n secretName: httpbin-ingress-cert\n issuerRef:\n name: letsencrypt-dns-prod\n kind: ClusterIssuer\n dnsNames:\n - canary.33test.dev-sandbox.fpcomplete.com\n</code></pre>\n<p>And the Istio resources for the application:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.istio.io/v1alpha3\nkind: Gateway\nmetadata:\n name: httpbin-gateway\n namespace: canary\nspec:\n selector:\n istio: ingressgateway\n servers:\n - hosts:\n - canary.33test.dev-sandbox.fpcomplete.com\n port:\n name: https-httpbin\n number: 443\n protocol: HTTPS\n tls:\n credentialName: httpbin-ingress-cert\n mode: SIMPLE\n - 
hosts:\n - canary.33test.dev-sandbox.fpcomplete.com\n port:\n name: http-httpbin\n number: 80\n protocol: HTTP\n tls:\n httpsRedirect: true\n---\napiVersion: networking.istio.io/v1alpha3\nkind: VirtualService\nmetadata:\n name: httpbin\n namespace: canary\nspec:\n gateways:\n - httpbin-gateway\n hosts:\n - canary.33test.dev-sandbox.fpcomplete.com\n http:\n - route:\n - destination:\n host: httpbin.canary.svc.cluster.local\n port:\n number: 8000\n</code></pre>\n<p>The above resources define the gateway and virtual service. You can see\nthat we are using TLS here and redirecting HTTP to HTTPS.</p>\n<p>We also have to make sure that the namespace has Istio injection enabled:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: v1\nkind: Namespace\nmetadata:\n labels:\n app.kubernetes.io/component: httpbin\n istio-injection: enabled\n name: canary\n</code></pre>\n<p>I have the above set of k8s resources managed via\n<a href=\"https://kustomize.io/\">kustomize</a>. Let's deploy them to get the initial environment, which\nconsists of only the v1 (httpbin) application:</p>\n<pre data-lang=\"shellsession\" class=\"language-shellsession \"><code class=\"language-shellsession\" data-lang=\"shellsession\">❯ kustomize build overlays/istio_canary > istio.yaml\n❯ kubectl apply -f istio.yaml\nnamespace/canary created\nservice/httpbin created\ndeployment.apps/httpbin created\ngateway.networking.istio.io/httpbin-gateway created\nvirtualservice.networking.istio.io/httpbin created\n❯ kubectl apply -f overlays/istio_canary/certificate.yaml\ncertificate.cert-manager.io/httpbin-ingress-cert created\n</code></pre>\n<p>Now I can go and verify in my browser that my application is actually\nup and running:</p>\n<p><img src=\"/images/istio_httpbin_application.png\" alt=\"httpbin: Version 1 application\" /></p>\n<p>Now comes the interesting part. 
We have to deploy version two of\nour application and make sure around 20% of our traffic goes to\nit. Let's write the deployment manifest for it:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: httpbin-v2\n namespace: canary\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: httpbin\n version: v2\n template:\n metadata:\n labels:\n app: httpbin\n version: v2\n spec:\n containers:\n - image: psibi/tornado-websocket:v0.3\n imagePullPolicy: IfNotPresent\n name: tornado\n ports:\n - containerPort: 8888\n</code></pre>\n<p>And now the destination rule to split the service:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.istio.io/v1alpha3\nkind: DestinationRule\nmetadata:\n name: httpbin\n namespace: canary\nspec:\n host: httpbin.canary.svc.cluster.local\n subsets:\n - labels:\n version: v1\n name: v1\n - labels:\n version: v2\n name: v2\n</code></pre>\n<p>Finally, let's modify the virtual service to split 20% of the\ntraffic to the newer version:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.istio.io/v1alpha3\nkind: VirtualService\nmetadata:\n name: httpbin\n namespace: canary\nspec:\n gateways:\n - httpbin-gateway\n hosts:\n - canary.33test.dev-sandbox.fpcomplete.com\n http:\n - route:\n - destination:\n host: httpbin.canary.svc.cluster.local\n port:\n number: 8000\n subset: v1\n weight: 80\n - destination:\n host: httpbin.canary.svc.cluster.local\n port:\n number: 8001\n subset: v2\n weight: 20\n</code></pre>\n<p>Now if you go back to the browser and refresh a number of\ntimes (note that we route only 20% of the traffic to the new\ndeployment), you will eventually see the new application:</p>\n<p><img src=\"/images/istio_tornado_application.png\" alt=\"websocket: Version 2 
application\" /></p>\n<h2 id=\"testing-deployment\">Testing deployment</h2>\n<p>Let's do around 10 curl requests to our endpoint to see how the\ntraffic is getting routed:</p>\n<pre data-lang=\"shellsession\" class=\"language-shellsession \"><code class=\"language-shellsession\" data-lang=\"shellsession\">❯ seq 10 | xargs -Iz curl -s https://canary.33test.dev-sandbox.fpcomplete.com | rg "<title>"\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n<title>tornado WebSocket example</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n<title>tornado WebSocket example</title>\n</code></pre>\n<p>You can confirm that out of the 10 requests, 2 were routed\nto the websocket (v2) application. If you have <a href=\"https://kiali.io/\">Kiali</a> deployed,\nyou can even visualize the above traffic flow:</p>\n<p><img src=\"/images/istio_kiali.png\" alt=\"Kiali visualization\" /></p>\n<p>That summarizes our post on how to achieve a canary deployment using\nIstio. While this post shows a basic example, traffic steering and\nrouting are among the core features of Istio, and it offers various\nways to configure its routing decisions. You can find\nfurther details in the <a href=\"https://istio.io/latest/docs/concepts/traffic-management/#virtual-services\">official docs</a>. 
You can also use a\ncontroller like <a href=\"https://argoproj.github.io/argo-rollouts/features/traffic-management/istio/\">Argo Rollouts with Istio</a> to perform canary\ndeployments and use additional features like <a href=\"https://argoproj.github.io/argo-rollouts/features/analysis/\">analysis</a> and\n<a href=\"https://argoproj.github.io/argo-rollouts/features/experiment/\">experiment</a>.</p>\n<hr />\n<p>If you're looking for a solid Kubernetes platform, batteries included,\nwith first-class support for Istio, <a href=\"https://tech.fpcomplete.com/products/kube360/\">check out Kube360</a>.</p>\n<p>If you liked this article, you may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/\">An Istio/mutual TLS debugging story</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-kubernetes-windows/\">Deploying Rust with Windows Containers on Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/\">Cloud Vendor Neutrality</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">Secure defaults with Kubernetes Security with Kube360</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
"slug": "canary-deployment-istio",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Canary Deployment with Kubernetes and Istio",
"description": "Want to do canary deployments in your Kubernetes cluster? Read up on our recommended step-by-step process",
"updated": null,
"date": "2022-03-24",
"year": 2022,
"month": 3,
"day": 24,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"DevOps",
"istio",
"Kubernetes"
]
},
"authors": [],
"extra": {
"author": "Sibi Prabakaran",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/canary-deployment-istio/",
"components": [
"blog",
"canary-deployment-istio"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-canary-deployment",
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#what-is-canary-deployment",
"title": "What is Canary Deployment",
"children": []
},
{
"level": 2,
"id": "pre-requisites",
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#pre-requisites",
"title": "Pre requisites",
"children": []
},
{
"level": 2,
"id": "istio-concepts",
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#istio-concepts",
"title": "Istio Concepts",
"children": []
},
{
"level": 2,
"id": "application-deployment",
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#application-deployment",
"title": "Application deployment",
"children": []
},
{
"level": 2,
"id": "testing-deployment",
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/#testing-deployment",
"title": "Testing deployment",
"children": []
}
],
"word_count": 1364,
"reading_time": 7,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/cloud-native.md",
"colocated_path": null,
"content": "<p>You hear "go Cloud-Native," but if you're like many, you wonder, "what does that mean, and how can applying a Cloud-Native strategy help my company's Dev Team be more productive?"\nAt a high level, Cloud-Native architecture means adapting to the many new possibilities—but a very different set of architectural constraints—offered by the cloud compared to traditional on-premises infrastructure.</p>\n<p>Cloud-Native architecture optimizes systems and software for the cloud. This optimization creates an efficient way to utilize the platform by streamlining the processes and workflows. This is accomplished by harnessing the cloud's inherent strengths: </p>\n<ul>\n<li>its flexibility, </li>\n<li>on-demand infrastructure; and </li>\n<li>robust managed services. </li>\n</ul>\n<p>Cloud-native computing couples these strengths with cloud-optimized technologies such as microservices, containers, and continuous delivery. Cloud-Native takes advantage of the cloud's distributed, scalable and adaptable nature. By doing this, Cloud-Native will maximize your dev team's focus on writing code, reducing operational tasks, creating business value, and keeping your customers happy by building high-impact applications faster, without compromising on quality. You might even think you can’t do cloud-native without using one of the big cloud providers- this simply isn’t true, many of the benefits of cloud-native are the approaches and emphasis on better tooling around automation.</p>\n<h2 id=\"why-move-to-cloud-native-now\">Why Move to Cloud-Native Now?</h2>\n<p><em>#1 - High-Frequency Software Release</em></p>\n<p>Faster and more frequent updates and new features releases allow your organization to respond to user needs in near real-time, increasing user retention. For example, new software versions with novel features can be released incrementally and more often as they become available. 
In addition, Cloud-Native makes high-frequency software releases possible via continuous integration (CI) and continuous deployment (CD), where full version commits are no longer needed. Instead, one can modify, test, and commit just a few lines of code continuously and automatically to meet changing customer trends, thereby giving your organization an edge. </p>\n<p><em>#2 - Automatic Software Updates</em></p>\n<p>One of the most valuable Cloud-Native features is automation. For example, updates are deployed automatically without interfering with core applications or the user base. Automated redundancies for infrastructure can automatically move applications between data centers as needed with little to zero human intervention. Even scalability, testing, and resource allocation can be automated. There are many available automation tools in the marketplace, such as FP Complete Corporation's widely accepted tool, <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>.</p>\n<p><em>#3 - Greater Protection from Software Failures</em></p>\n<p>Isolation of containers is another important Cloud-Native feature. Software failures and bugs can be traced to a specific microservice version, then rolled back or fixed quickly. Software fixes can be tested in isolation without compromising the stability of the entire application. On the other hand, if there's a widespread failure, automation can restore the application to a previous stable state, minimizing downtime. Automated DevOps testing before code goes to production (for example, linting and software scrubbing) drives faster bug detection and resolution, reducing the risk of bugs in production.</p>\n<h2 id=\"wow-cloud-native-seems-perfect-what-s-the-catch\">WOW – Cloud-Native Seems Perfect – What's the Catch?</h2>\n<p>Switching over to Cloud-Native architecture requires a thorough assessment of your existing application setup. 
The biggest question you and your team need to ask before making any moves is, "should our business modernize our current applications, or should we build new applications from scratch and utilize Cloud-Native development practices?"</p>\n<p>If you choose to modernize your existing application, you will save time and money by capitalizing on the cloud's agility, flexibility, and scalability. Your dev team can retain existing application functionality and business logic, re-architect it into a Cloud-Native app, and containerize it to utilize the cloud platform's strengths.</p>\n<p>You can also build a net-new application using Cloud-Native development practices instead of upgrading your legacy applications. Building from scratch may make more sense from a corporate culture, risk management, and regulatory compliance standpoint. You keep running your old application code unchanged while developing and phasing in the new platform. Building new applications also frees dev teams from prior architectural constraints, allowing developers to experiment and deliver innovation to users.</p>\n<h2 id=\"three-essential-tools-for-successful-cloud-native-architecture\">Three Essential Tools for Successful Cloud-Native Architecture</h2>\n<p>Whether you decide to create a new Cloud-Native application or modernize your existing ones, your dev team needs these three tools for a successful implementation of Cloud-Native architecture:</p>\n<ol>\n<li><em>Microservices Architecture</em>. </li>\n</ol>\n<p>A Cloud-Native microservice architecture is considered a "best practice" approach for creating cloud applications because each application is composed of a set of services. Each service runs its own processes and communicates through clearly defined APIs, which provide good foundations for continuous delivery. 
With microservices, ideally each service is independently deployable. This allows each service to be updated without interfering with another service. This results in:</p>\n<ul>\n<li>reduced downtime for users; </li>\n<li>simplified troubleshooting; and </li>\n<li>minimized disruptions even when a problem is identified, which allows for high-frequency updates and continuous delivery. </li>\n</ul>\n<ol start=\"2\">\n<li><em>Container-based Infrastructure Platform</em>.</li>\n</ol>\n<p>Now that your microservice architecture is broken down into individual container-based services, the next essential tool is a system to manage all those containers automatically, known as a 'container orchestrator'. The most widely accepted platform is Kubernetes, an open-source system originally developed at Google and now maintained by the Cloud Native Computing Foundation. It runs containerized applications, controls automated deployment, storage, scaling, scheduling, load balancing, and updates, and monitors containers across clusters of hosts. Kubernetes runs on all major public clouds, including Azure, AWS, Google Cloud Platform, and Oracle Cloud.</p>\n<ol start=\"3\">\n<li><em>CI/CD Pipeline</em>.</li>\n</ol>\n<p>A CI/CD pipeline is the third essential tool for a Cloud-Native environment to work seamlessly. Continuous integration and continuous delivery embody a set of operating principles and a collection of practices that allow dev teams to deliver code changes more frequently and reliably. By automating deployment processes, the CI/CD pipeline allows your dev team to focus on:</p>\n<ul>\n<li>meeting business requirements; </li>\n<li>code quality; and </li>\n<li>security. \nCI/CD tools preserve the environment-specific parameters that must be included with each delivery. 
CI/CD automation then performs any necessary service calls to web servers, databases, and other services that may require a restart or other procedures when applications are deployed.</li>\n</ul>\n<h2 id=\"cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use\">Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?</h2>\n<p>As you can probably guess, countless tools make up a Cloud-Native architecture. Unfortunately, these tools are complex, require separate authentication, and frequently do not interact with each other. In essence, you are expected to integrate these cloud tools yourself as a user. We at FP Complete became frustrated with this approach. So, to save time and provide a turn-key solution, we created Kube360. Kube360 puts all the necessary tools into one easy-to-use toolbox, accessed via a single sign-on and operating as a fully integrated environment. Kube360 combines best practices, technologies, and processes into one complete package, and it has proven an effective tool at multiple customer deployments. In addition, Kube360 supports multiple cloud providers and on-premises infrastructure. Kube360 is vendor agnostic, fully customizable, and has no vendor lock-in.</p>\n<p><strong>Kube360 - Centralized Management</strong>. Kube360 employs centralized management, which increases your dev team's productivity through:</p>\n<ul>\n<li>single-sign-on functionality; </li>\n<li>faster installation and setup;</li>\n<li>quick access to all tools; and</li>\n<li>automation of logs, backups, and alerts.</li>\n</ul>\n<p>This simplified administration hides frequent login complexities and allows single sign-on through existing company identity management. Kube360 also streamlines tool authentication and access, eliminating many standard security holes. 
In the background, Kube360 automatically runs everyday tasks such as backups, log aggregation, and alerts.</p>\n<p><strong>Kube360 - Automated Features</strong>. Kube360's automated features include:</p>\n<ul>\n<li>automatic backups of the etcd config;</li>\n<li>log aggregation and indexing of all services; and</li>\n<li>an integrated monitoring and alert framework.</li>\n</ul>\n<p><strong>Kube360 - Kubernetes Tooling Features</strong>. Kube360 simplifies Kubernetes management and allows you to take advantage of many cloud-native features such as:</p>\n<ul>\n<li>autoscaling, to stay cost-efficient as demand on your systems grows and shrinks;</li>\n<li>high availability;</li>\n<li>health checks; and</li>\n<li>integrated secrets management.</li>\n</ul>\n<p><strong>Kube360 - Service Mesh</strong>.</p>\n<ul>\n<li>Mutual TLS-based encryption within the cluster</li>\n<li>Tracing tools</li>\n<li>Traffic rerouting</li>\n<li>Canary deployments</li>\n</ul>\n<p><strong>Kube360 - Integration</strong>.</p>\n<ul>\n<li>Integrates into existing AWS & Azure infrastructures</li>\n<li>Deploys into existing VPCs</li>\n<li>Leverages existing subnets</li>\n<li>Communicates with components outside of Kube360</li>\n<li>Supports multiple clusters per organization</li>\n<li>Installed by the FP Complete team or by the customer</li>\n</ul>\n<p>As you can see, Kube360 is one of the most comprehensive tools you can rely on for Cloud-Native architecture. Kube360 is your one-stop, fully integrated enterprise Kubernetes ecosystem. Kube360 standardizes containerization, software deployment, fault tolerance, auto-scaling, auto-healing, and security by design. Kube360's modular, standardized architecture mitigates proprietary lock-in, high support costs, and obsolescence. In addition, Kube360 delivers a seamless deployment experience for you and your team.\nFind out how Kube360 can make your business more efficient, more reliable, and more secure, all in a fraction of the time. 
Speed up your dev team's productivity - <a href=\"https://tech.fpcomplete.com/contact-us/\">Contact us today!</a></p>\n",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/",
"slug": "cloud-native",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Confused about Cloud-Native? Want to speed up your dev team's productivity?",
"description": "Learn about Cloud-Native architecture.",
"updated": null,
"date": "2022-01-17",
"year": 2022,
"month": 1,
"day": 17,
"taxonomies": {
"tags": [
"kubernetes",
"cloud native"
],
"categories": [
"devsecops",
"devops"
]
},
"authors": [],
"extra": {
"author": "FP Complete",
"keywords": "devsecops, devops",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/cloud-native/",
"components": [
"blog",
"cloud-native"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "why-move-to-cloud-native-now",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/#why-move-to-cloud-native-now",
"title": "Why Move to Cloud-Native Now?",
"children": []
},
{
"level": 2,
"id": "wow-cloud-native-seems-perfect-what-s-the-catch",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/#wow-cloud-native-seems-perfect-what-s-the-catch",
"title": "WOW – Cloud-Native Seems Perfect – What's the Catch?",
"children": []
},
{
"level": 2,
"id": "three-essential-tools-for-successful-cloud-native-architecture",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/#three-essential-tools-for-successful-cloud-native-architecture",
"title": "Three Essential Tools for Successful Cloud-Native Architecture",
"children": []
},
{
"level": 2,
"id": "cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/#cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
"title": "Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?",
"children": []
}
],
"word_count": 1482,
"reading_time": 8,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/levana-nft-launch.md",
"colocated_path": null,
"content": "<p><em>FP Complete Corporation, headquartered in Charlotte, North Carolina, is a global technology company building next-generation software to solve complex problems. We specialize in Server-Side Software Engineering, DevSecOps, Cloud-Native Computing, Distributed Ledger, and Advanced Programming Languages. We have been a full-stack technology partner in business for 10+ years, delivering reliable, repeatable, and highly secure software. Our team of engineers, strategically located in over 13 countries, offers our clients one-stop advanced software engineering no matter their size.</em></p>\n<p>For the past few months, the FP Complete engineering team has been working with <a href=\"https://levana.finance/\">Levana Protocol</a> on a DeFi platform for leveraged assets on the Terra blockchain. But more recently, we've additionally been helping launch the <a href=\"https://meteors.levana.finance/\">Levana Dragons meteor shower</a>. This NFT launch completed in the middle of last week, and to date is the largest single NFT event in the Terra ecosystem. We were very excited to be a part of this. You can read more about the NFT launch itself on <a href=\"https://blog.levana.finance/recap-of-the-levana-meteor-shower-128919193f9b\">the Levana Protocol blog post</a>.</p>\n<p>We received a lot of positive feedback about the smoothness of this launch, which was pretty wonderful feedback to hear. People expressed interest in learning about the technical decisions we made that led to such a smooth event. We also had a few hiccups occur during the launch and post-launch that are worth addressing as well.</p>\n<p>So strap in for a journey involving cloud technologies, DevOps practices, Rust, React, and—of course—Dragons.</p>\n<h2 id=\"overview-of-the-event\">Overview of the event</h2>\n<p>The Levana Dragons meteor shower was an event consisting of 44 separate "showers", or drops during which NFT meteors would be issued. 
Participants in a shower competed by contributing UST (a Terra-specific stablecoin tied to US Dollars) to a specific Terra wallet. Contributions from a single wallet across the shower were aggregated into a single contribution, and contributions of a higher amount resulted in a better meteor. At the least granular level, this meant stratification into legendary, ancient, rare, and common meteors. But higher contributions also led to a greater likelihood of receiving an egg inside your meteor.</p>\n<p>Each shower was separated from the next by 1 hour, and we opened up the site about 24 hours before the first shower occurred. That means the site was active for contributions for about 67 hours straight. Then, following the showers, we needed to mint the actual NFTs, ship them to users' wallets, and open up the "cave" page where users could view their NFTs.</p>\n<p>So all told, this was an event that spanned many days, had lots of bouts of high activity, involved a game that incorporated many financial transactions, and any downtime, slowness, or poor behavior could result in user frustration or worse. On top of that, given the short timeframe this event was intended to be active, attacks such as DDoS taking down the site could be catastrophic for the success of the showers. And the absolute worst case would be a compromise allowing an attacker to redirect funds to a different wallet.</p>\n<p>All that said, let's dive in.</p>\n<h2 id=\"backend-server\">Backend server</h2>\n<p>A major component of the meteor drop was to track contributions to the destination wallet, and provide high level data back to users about these activities. This kind of high level data included the floor prices per shower, the timestamps of the upcoming drops, total meteors a user had acquired so far, and more. All this information is publicly available on the blockchain, and in principle could have been written as frontend logic. 
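The per-wallet aggregation, rarity stratification, and floor-price summaries described above can be sketched in Rust. This is a simplified model, not the production code: the rarity thresholds and the qualifying minimum used here are illustrative assumptions, as is the shape of a contribution record.

```rust
use std::collections::HashMap;

// A simplified contribution record (illustrative, not the production model).
struct Contribution {
    wallet: &'static str,
    ust: u64, // whole UST, for simplicity
}

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum Rarity {
    Legendary,
    Ancient,
    Rare,
    Common,
}

/// Sum every wallet's contributions across a shower into one total.
fn aggregate(contributions: &[Contribution]) -> HashMap<&'static str, u64> {
    let mut totals = HashMap::new();
    for c in contributions {
        *totals.entry(c.wallet).or_insert(0) += c.ust;
    }
    totals
}

/// Map an aggregated total to a meteor rarity. Thresholds are hypothetical.
fn stratify(total_ust: u64) -> Rarity {
    match total_ust {
        t if t >= 1_000 => Rarity::Legendary,
        t if t >= 100 => Rarity::Ancient,
        t if t >= 25 => Rarity::Rare,
        _ => Rarity::Common,
    }
}

/// Floor price: the smallest aggregated contribution that still earned a
/// meteor (an $8 qualifying minimum is assumed here).
fn floor_price(totals: &HashMap<&'static str, u64>) -> Option<u64> {
    totals.values().copied().filter(|&t| t >= 8).min()
}

fn main() {
    let txs = [
        Contribution { wallet: "terra1aaa", ust: 30 },
        Contribution { wallet: "terra1aaa", ust: 80 }, // same wallet, aggregated
        Contribution { wallet: "terra1bbb", ust: 9 },
        Contribution { wallet: "terra1ccc", ust: 3 }, // below the minimum
    ];
    let totals = aggregate(&txs);
    assert_eq!(stratify(totals["terra1aaa"]), Rarity::Ancient); // 110 UST total
    assert_eq!(stratify(totals["terra1bbb"]), Rarity::Common);
    assert_eq!(floor_price(&totals), Some(9));
    println!("ok");
}
```

The key point is that aggregation happens per wallet before stratification, so many small contributions from one wallet count the same as one large contribution.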
However, the overhead of having every visitor to the site downloading essentially the entire history of transactions with the destination wallet would have made the site unusable.</p>\n<p>Instead, we implemented a backend web server. We used Rust (with Axum) for this for multiple reasons:</p>\n<ul>\n<li>We're <a href=\"https://tech.fpcomplete.com/rust/\">very familiar with Rust</a></li>\n<li>Rust is a high performance language, and there were serious concerns about needing to withstand surges in traffic and DDoS attacks</li>\n<li>Due to CosmWasm already heavily leveraging Rust, Rust was already in use on the project</li>\n</ul>\n<p>The server was responsible for keeping track of configuration data (like the shower timestamps and destination wallet address), downloading transaction information from the blockchain (using the <a href=\"https://fcd.terra.dev/apidoc\">Full Client Daemon</a>), and answering queries to the frontend (described next) providing this information.</p>\n<p>We could have kept data in a mutable database like PostgreSQL, but instead we decided to keep all data in memory and download from scratch from the blockchain on each application load. Given the size of the data, these two decisions initially seemed very wise. We'll see some outcomes of this when we analyze performance and look at some of our mistakes below.</p>\n<h2 id=\"react-frontend\">React frontend</h2>\n<p>The primary interface users interacted with was a standard React frontend application. We used TypeScript, but otherwise stuck with generic tools and libraries wherever possible. We didn't end up using any state management libraries or custom CSS systems. Another thing to note is that this frontend is going to expand and evolve over time to include additional functionality around the evolving NFT concept, some of which has already happened, and we'll discuss below.</p>\n<p>One specific item that popped up was mobile optimization. 
Initially, the plan was for the meteor shower site to be desktop-only. After a few beta runs, it became apparent that the majority of users were using mobile devices. As a DAO, a primary goal of Levana is to allow for distributed governance of all products and services, and therefore we felt it vital to be responsive to this community request. Redesigning the interface for mobile and then rewriting the relevant HTML and CSS took up a decent chunk of time.</p>\n<h2 id=\"hosting-infrastructure\">Hosting infrastructure</h2>\n<p>Many DApps sites are exclusively client side, leveraging frontend logic interacting with the blockchain and smart contracts exclusively. For these kinds of sites, hosting options like Vercel work out very nicely. However, as described above, this application was a combo frontend/backend. Instead of splitting the hosting between two different options, we decided to host both the static frontend app and the backend dynamic app in a single place.</p>\n<p>At FP Complete, we typically use Kubernetes for this kind of deployment. In this case, however, we went with Amazon ECS. This isn't a terribly large delta from our standard Kubernetes deployments, following many of the same patterns: container-based application, rolling deployments with health checks, autoscaling and load balancers, externalized TLS cert management, and centralized monitoring and logging. No major issues there.</p>\n<p>Additionally, to help reduce burden on the backend application and provide a better global experience for the site, we put Amazon CloudFront in front of the application, which allowed caching the static files in data centers around the world.</p>\n<p>Finally, we codified all of this infrastructure using Terraform, our standard tool for Infrastructure as Code.</p>\n<h2 id=\"gitlab\">GitLab</h2>\n<p>GitLab is a standard part of our FP Complete toolchain. We leverage it for internal projects for its code hosting, issue tracking, Docker registry, and CI integration. 
While we will often adapt our tools to match our client needs, in this case we ended up using our standard tool, and things went very well.</p>\n<p>We ended up with a four-stage CI build process:</p>\n<ol>\n<li>Lint and build the frontend code, producing an artifact with the built static assets</li>\n<li>Build a static Rust application from the backend, embedding the static files from (1), and run standard Rust lints (<code>clippy</code> and <code>fmt</code>), producing an artifact with the single-file compiled binary</li>\n<li>Generate a Docker image from the static binary in (2)</li>\n<li>Deploy the new Docker image to either the dev or prod ECS cluster</li>\n</ol>\n<p>Steps (3) and (4) are set up to only run on the <code>master</code> and <code>prod</code> branches. This kind of automated deployment setup made it easy for our distributed team to get changes into a real environment for review quickly. However, it also opened a security hole we needed to address.</p>\n<h2 id=\"aws-lockdown\">AWS lockdown</h2>\n<p>Due to the nature of this application, any kind of downtime during the active showers could have resulted in a lot of egg on our faces and a missed opportunity for the NFT raise. However, there was a far scarier potential outcome. Changing a single config value in production—the destination wallet—would have enabled a nefarious actor to siphon away funds intended for NFTs. This was the primary concern we had during the launch.</p>\n<p>We considered multiple social engineering approaches to the problem, such as advertising to potential users the correct wallet address they should be using. However, we decided that most likely users would not be checking addresses before sending their funds. 
We <em>did</em> set up an emergency "shower halted" page and put in place an on-call team to detect issues and deploy such measures if necessary, but fortunately nothing along those lines occurred.</p>\n<p>However, during the meteor shower, we did institute an AWS account lockdown. This included:</p>\n<ul>\n<li>Switching <a href=\"https://tech.fpcomplete.com/products/zehut/\">Zehut</a>, a tool we use for granting temporary AWS credentials, into read-only credentials mode</li>\n<li>Disabling GitLab CI's production credentials, so that GitLab users could not cause a change in prod</li>\n</ul>\n<p>We additionally vetted all other components in the DNS resolution pipeline, such as the domain name registrar, Route 53, and other AWS services used for hosting.</p>\n<p>These are generally good practices, and over time we intend to refine the AWS permissions setup for Levana's AWS account in general. However, this launch was the first time we needed to use AWS for app deployment, and time did not permit a thorough AWS permissions analysis and configuration.</p>\n<h2 id=\"during-the-shower\">During the shower</h2>\n<p>As I just mentioned, during the shower we had an on-call team ready to jump into action and a playbook to address potential issues. Issues essentially fell into three categories:</p>\n<ol>\n<li>The site is slow, down, or otherwise misbehaving</li>\n<li>The site is actively malicious, serving the wrong content and potentially scamming people</li>\n<li>Some kind of social engineering attack is underway</li>\n</ol>\n<p>The FP Complete team was responsible for observing (1) and (2). I'll be honest that this is not our strong suit. We are a team that typically builds backends and designs DevOps solutions, not an on-call operations team. However, we were the experts in both the DevOps hosting and the app itself. 
Fortunately, no major issues popped up, and the on-call team got to sit on their hands the whole time.</p>\n<p>Out of an abundance of caution, we did take a few extra steps before the showers started to ensure we were ready for any attack:</p>\n<ol>\n<li>We bumped the replica count in ECS from 2 desired instances to 5. We had autoscaling in place already, but we wanted extra buffer just to be safe.</li>\n<li>We increased the instance size from 512 CPU units to 2048 CPU units.</li>\n</ol>\n<p>In all of our pre-launch load testing, we had seen that 512 CPU units was sufficient to handle 100,000 requests per second per instance with a 99th percentile latency of 3.78ms. With these bumped limits in production, and in the middle of the highest activity on the site, we were very pleased to see the following CPU and memory usage graphs:</p>\n<p><img src=\"/images/blog/levana-nft/cpu.png\" alt=\"CPU usage\" /></p>\n<p><img src=\"/images/blog/levana-nft/memory.png\" alt=\"Memory usage\" /></p>\n<p>This was a nice testament to the power of a Rust-written web service, combined with proper autoscaling and CloudFront caching.</p>\n<h2 id=\"image-creation\">Image creation</h2>\n<p>Alright, let's put the app itself to the side for a second. We knew that, at the end of the shower, we would need to quickly mint NFTs for every wallet that donated more than $8 during a single shower. There are a few problems with this:</p>\n<ul>\n<li>We had no idea how many users would contribute.</li>\n<li>Generating the images is a relatively slow process.</li>\n<li>Making the images available on IPFS—necessary for how NFTs work—was potentially going to be a bottleneck.</li>\n</ul>\n<p>What we ended up doing was writing a Python script that pregenerated 100,000 or so meteor images. We did this generation directly on an Amazon EC2 instance. Then, instead of uploading the images to an IPFS hosting/pinning service, we ran the IPFS daemon directly on this EC2 instance. 
We additionally backed up all the images on S3 for redundant storage. Then we launched a <em>second</em> EC2 instance for redundant IPFS hosting.</p>\n<p>This Python script not only generated the images, but also generated a CSV file mapping each image's Content ID (IPFS address) to various pieces of metadata about the meteor image, such as the meteor body. We'll use this CID/meteor image metadata mapping for correct minting next.</p>\n<p>All in all, this worked just fine. However, there were some hurdles getting there, and we have plans to change this going forward in future stages of the NFT evolution. We'll mention those below.</p>\n<h2 id=\"minting\">Minting</h2>\n<p>Once the shower finished, we needed to get NFTs into user wallets as quickly as possible. That meant we needed two different things:</p>\n<ol>\n<li>All the NFT images on IPFS, which we had.</li>\n<li>A set of CSV files providing the NFTs to be generated, together with all of their metadata and owners.</li>\n</ol>\n<p>The former was handled by the previous step. The latter was handled by additional pieces of Rust tooling we wrote that leveraged the same internal libraries we wrote for the backend application. The purpose of this tooling was to:</p>\n<ul>\n<li>Aggregate the total set of contributions from the blockchain.</li>\n<li>Stratify contributions into individual meteors of different rarity.</li>\n<li>Apply the appropriate algorithms to randomly decide which meteors receive an egg and which don't.</li>\n<li>Assign eggs among the meteors.</li>\n<li>Assign additional metadata to the meteors.</li>\n<li>Choose an appropriate and unique meteor image for each meteor based on its needed metadata. (This relies on the Python-generated CSV file above.)</li>\n</ul>\n<p>This process produced a few different pieces of data:</p>\n<ul>\n<li>CSV files for meteor NFT generation. 
There's nothing secret about these; you could reconstruct them yourself by analyzing the NFT minting on the blockchain.</li>\n<li>The distribution of attributes (such as essence, crystals, distance, etc.) among the meteors, for calculating the rarity of individual traits. Again, this can be derived easily from public information.</li>\n<li>A file that tracks the meteor/egg mapping. This is the one outcome of this process that is a closely guarded secret.</li>\n</ul>\n<p>This final point is also influencing the design of the next few stages of this project. Specifically, while a smart contract would be the more natural way to interact with NFTs in general, we cannot expose the meteor/egg mapping on the blockchain. Therefore, the "cracking" phase (which will allow users to exchange meteors for their potential eggs) will need to work with another backend application.</p>\n<p>In any event, this metadata-generation process was something we tested multiple times on data from our beta runs, and we were ready to produce the files and send them over to Knowhere.art for minting soon after the shower. I believe users got NFTs in their wallets within 8 hours of the end of the shower, which was a pretty good timeframe overall.</p>\n<h2 id=\"opening-the-cave\">Opening the cave</h2>\n<p>The final step was opening the cave, a new page on the meteor site that allows users to view their meteors. This phase was achieved by updating the configuration values of the backend to include:</p>\n<ul>\n<li>The smart contract address of the NFT collection</li>\n<li>The total number of meteors</li>\n<li>The trait distribution</li>\n</ul>\n<p>Once we switched the config values, the cave opened up, and users were able to access it. Besides pulling the static information mentioned above from the server, all cave page interactions occur fully client side, with the client querying the blockchain using the Terra.js library.</p>\n<p>And that's where we're at today. 
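The cave opening was effectively a configuration flip on the backend: once the required values were populated, the page became available. A minimal sketch of that pattern, with hypothetical field names (the post doesn't show the real config shape):

```rust
// A sketch of a config-gated feature. Field names are hypothetical; the
// real backend's configuration model isn't shown in the post.
#[derive(Default)]
struct CaveConfig {
    nft_contract: Option<String>, // smart contract address of the collection
    total_meteors: Option<u64>,
    // trait distribution omitted for brevity
}

impl CaveConfig {
    /// The cave is open only once all required values are populated.
    fn cave_open(&self) -> bool {
        self.nft_contract.is_some() && self.total_meteors.is_some()
    }
}

fn main() {
    let mut cfg = CaveConfig::default();
    assert!(!cfg.cave_open()); // before the config switch: cave closed

    cfg.nft_contract = Some("terra1examplecontract".to_string());
    cfg.total_meteors = Some(100_000);
    assert!(cfg.cave_open()); // after the switch: users can enter
    println!("ok");
}
```

Gating on `Option` fields keeps the "closed" and "open" states in the config itself, so opening the cave requires only a config update, not a redeploy of new code.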
The showers completed, users got their meteors, the cave is open, and we're back to work on implementing the cracking phase of this project. W00t!</p>\n<h2 id=\"problems\">Problems</h2>\n<p>Overall, this project went pretty smoothly in production. However, there were a few gotcha moments worth mentioning.</p>\n<h3 id=\"fcd-rate-limiting\">FCD rate limiting</h3>\n<p>The biggest issue we hit during the showers, and the one that had the biggest potential to break everything, was FCD rate limiting. We'd done extensive testing prior to the real showers on testnet, with many volunteer testers in addition to bots. We never ran into a single example that I'm aware of where rate limiting kicked in.</p>\n<p>However, the real production showers ran into such rate limiting issues about 10 showers into the event. (We'll look at how they manifested in a moment.) There are multiple potential contributing factors for this:</p>\n<ul>\n<li>There was simply far greater activity in the real event than we had tested for.</li>\n<li>Most of our testing was limited to just 10 showers, and the real event went for 44.</li>\n<li>There may be different rate limiting rules for FCD on mainnet versus testnet.</li>\n</ul>\n<p>Whatever the case, we began to notice the rate limiting when we tried to roll out a new feature. We implemented the Telescope functionality, which allowed users to see the historical floor prices in previous showers.</p>\n<p><img src=\"/images/blog/levana-nft/telescope.png\" alt=\"Telescope\" /></p>\n<p>After pushing the change to ECS, however, we noticed that the new deployment didn't go live. The reason was that, during the initial data load process, the new processes were receiving rate limiting responses and dying. We tried fixing this by adding a delay or other kinds of retry logic. However, none of these combinations allowed the application to begin processing requests within ECS's readiness check period. 
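The delay-and-retry logic we tried can be sketched generically; this is an illustration of the approach, not the code we shipped, and the simulated rate-limited endpoint is a stand-in for FCD.

```rust
use std::{thread, time::Duration};

/// Retry a fallible operation with a fixed delay between attempts -- the
/// kind of simple retry logic we tried against FCD rate limiting (and which,
/// as noted above, couldn't finish within ECS's readiness window).
fn retry<T, E>(
    attempts: u32,
    delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last = op();
    for _ in 1..attempts {
        if last.is_ok() {
            return last;
        }
        thread::sleep(delay); // back off before the next attempt
        last = op();
    }
    last
}

fn main() {
    // Simulated endpoint that rate-limits the first two calls.
    let mut calls = 0;
    let result = retry(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 {
            Err("429 Too Many Requests")
        } else {
            Ok("transaction page")
        }
    });
    assert_eq!(result, Ok("transaction page"));
    assert_eq!(calls, 3); // succeeded on the third attempt
    println!("ok");
}
```

The structural problem with this approach is visible in the sketch: total startup time grows with the number of rate-limited responses, which is exactly what blew past the readiness check period.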
(We could have simply turned off health checks, but that would have opened a new can of worms.)</p>\n<p>This problem was fairly critical. Not being able to roll out new features or bug fixes was worrying. But more troubling was the lack of autohealing. The existing instances continued to run fine, because they only needed to download small amounts of data from FCD to stay up-to-date, and therefore never triggered the rate limiting. But if any of those instances went down, ECS wouldn't be able to replace them with healthy instances.</p>\n<p>Fortunately, we had already written the majority of a caching solution in prior weeks, and had not finished the work because we thought it wasn't a priority. After a few hair-raising hours of effort, we got a solution in place which:</p>\n<ul>\n<li>Saved all transactions to a YAML file (a binary format would have been a better choice, but YAML was the easiest to roll out)</li>\n<li>Uploaded this YAML file to S3</li>\n<li>Ran this save/upload process on a loop, updating every 10 minutes</li>\n<li>Modified the application logic to start off by first downloading the YAML file from S3, and then doing a delta load from there using FCD</li>\n</ul>\n<p>This reduced startup time significantly, bypassed the rate limiting completely, and allowed us to roll out new features and not worry about the entire site going down.</p>\n<h3 id=\"ipfs-hosting\">IPFS hosting</h3>\n<p>FP Complete's DevOps approach is decidedly cloud-focused. For large blob storage, our go-to solution is almost always cloud-based blob storage, which would be S3 in the case of Amazon. We had zero experience with large scale IPFS data hosting prior to this project, which presented a unique challenge.</p>\n<p>As mentioned, we didn't want to go with one of the IPFS pinning services, since the rate limiting may have prevented us from uploading all the pregenerated images. (Rate limiting is beginning to sound like a pattern here...) 
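Returning briefly to the FCD fix: the snapshot-plus-delta startup load described above can be sketched as follows. The `Tx` shape and the fetch callback are simplified stand-ins for the real S3 snapshot download and FCD delta queries.

```rust
// Sketch of a snapshot-plus-delta startup load: begin from a cached
// transaction list (in production, a YAML file pulled from S3), then fetch
// only transactions newer than the cache's high-water mark.
#[derive(Debug, Clone, PartialEq)]
struct Tx {
    height: u64, // block height, used as the high-water mark
    amount: u64,
}

/// Start from the cached transactions, then fetch only what's newer.
fn delta_load(cache: Vec<Tx>, fetch_since: impl Fn(u64) -> Vec<Tx>) -> Vec<Tx> {
    let high_water = cache.last().map(|t| t.height).unwrap_or(0);
    let mut all = cache;
    all.extend(fetch_since(high_water)); // only the small tail hits the API
    all
}

fn main() {
    let cached = vec![
        Tx { height: 100, amount: 5 },
        Tx { height: 200, amount: 9 },
    ];
    // Pretend the API returns just the transactions after height 200.
    let loaded = delta_load(cached, |since| {
        assert_eq!(since, 200);
        vec![Tx { height: 250, amount: 12 }]
    });
    assert_eq!(loaded.len(), 3);
    assert_eq!(loaded.last().unwrap().height, 250);
    println!("ok");
}
```

Because the delta request covers only the tail of the history, startup stays fast and well under any rate limit, regardless of how large the full transaction history grows.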
Being comfortable with S3, we initially tried hosting the images using <a href=\"https://github.com/ipfs/go-ds-s3\">go-ds-s3</a>, a plugin for the <code>ipfs</code> CLI that uses S3 for storage. We still don't know why, but this never worked correctly for us. Instead, we resorted to storing the raw image data on Amazon EBS, which is more expensive and less durable, but actually worked. To fix the durability issue, we backed up all the raw image files to S3.</p>\n<p>Overall, however, we're not happy with this outcome. The cost for this hosting is relatively high, and we haven't set up a truly fault-tolerant, highly available hosting solution. At this point, we would like to switch over to an IPFS pinning service, such as Pinata. Now that the images are available on IPFS, issuing API calls to pin those files should be easier than uploading the complete images. We're planning on using this as a framework going forward for other images, namely:</p>\n<ul>\n<li>Generate the raw images on EC2</li>\n<li>Upload for durability to S3</li>\n<li>Run <code>ipfs</code> locally to make the images available on IPFS</li>\n<li>Pin the images to a service like Pinata</li>\n<li>Take down the EC2 instance</li>\n</ul>\n<p>The next issue we ran into was... RATE LIMITING, again. This time, we discovered that Cloudflare's IPFS gateway was rate limiting users on downloading their meteor images, resulting in a situation where users would see only some of their meteors appear in their cave page. 
We solved this one by sticking CloudFront in front of the S3 bucket holding the meteor images and serving from there instead.</p>\n<p>Going forward, when it's available, <a href=\"https://blog.cloudflare.com/introducing-r2-object-storage/\">Cloudflare R2</a> is a promising alternative to the S3+CloudFront offering, due to reduced storage cost and entirely removed bandwidth costs.</p>\n<h2 id=\"lessons-learned\">Lessons learned</h2>\n<p>This project was a great mix of leveraging existing expertise and taking on some new challenges. Some of the top lessons we learned here were:</p>\n<ol>\n<li>We got a lot of experience working directly with the LCD and FCD APIs for Terra from Rust code. Previously, with our DeFi work, this almost exclusively sat behind Terra.js usage.</li>\n<li>IPFS was a brand-new topic for us, and we got to play with some pretty extreme cases right off the bat. Understanding the concepts in pinning and gateways will help us immensely with future NFT work.</li>\n<li>Since ECS is a relatively unusual technology for us, we got to learn quite a few of the idiosyncrasies it has versus Kubernetes, our more standard toolchain.</li>\n<li>While rate limiting is a concept we're familiar with and have worked with many times in the past, these particular obstacles were all new, and each of them was surprising in a different way. Typically, we would have some simpler workarounds for these rate limiting issues, such as using authenticated requests. Having to solve each problem in such an extreme way was surprising.</li>\n<li>And while we've been involved in blockchain and smart contract work for years, this was our first time working directly with NFTs. This was probably the simplest lesson learned. 
The API for querying the NFT contracts is <a href=\"https://github.com/CosmWasm/cw-nfts/blob/main/packages/cw721/README.md\">fairly straightforward</a>, and represented a small portion of the time spent on this project.</li>\n</ol>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>We're very excited to have been part of such a successful event as the Levana Dragons NFT meteor shower. This was a fun site to work on, with a huge and active user base, and some interesting challenges. It was great to pair our standard cloud DevOps practices with common blockchain and smart contract practices. And using Rust brought some great advantages we're quite happy with.</p>\n<p>Going forward, we look forward to continuing to evolve the backend, frontend, and DevOps of this project, just like the NFTs themselves will be evolving. Happy dragon luck to all!</p>\n<p><em>Interested in learning more? Check out these relevant articles:</em></p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps, part 1</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n<li><a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360®</a></li>\n<li><a href=\"https://tech.fpcomplete.com/products/zehut/\">Zehut</a></li>\n</ul>\n<p><em>Does this kind of work sound interesting? Consider <a href=\"https://tech.fpcomplete.com/jobs/\">applying to work at FP Complete</a>.</em></p>\n",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
"slug": "levana-nft-launch",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Levana NFT Launch",
"description": "We were excited to recently help Levana Protocol with their NFT launch. This blog post explains some technical details behind the scenes that allowed this to happen.",
"updated": null,
"date": "2021-11-17",
"year": 2021,
"month": 11,
"day": 17,
"taxonomies": {
"tags": [
"blockchain",
"rust",
"devops"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Wesley Crook",
"keywords": "blockchain, NFT, cryptocurrency, Terra",
"blogimage": "/images/blog-listing/blockchain.png",
"image": "images/blog/thumbs/levana-nft-launch.png"
},
"path": "/blog/levana-nft-launch/",
"components": [
"blog",
"levana-nft-launch"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "overview-of-the-event",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#overview-of-the-event",
"title": "Overview of the event",
"children": []
},
{
"level": 2,
"id": "backend-server",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#backend-server",
"title": "Backend server",
"children": []
},
{
"level": 2,
"id": "react-frontend",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#react-frontend",
"title": "React frontend",
"children": []
},
{
"level": 2,
"id": "hosting-infrastructure",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#hosting-infrastructure",
"title": "Hosting infrastructure",
"children": []
},
{
"level": 2,
"id": "gitlab",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#gitlab",
"title": "GitLab",
"children": []
},
{
"level": 2,
"id": "aws-lockdown",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#aws-lockdown",
"title": "AWS lockdown",
"children": []
},
{
"level": 2,
"id": "during-the-shower",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#during-the-shower",
"title": "During the shower",
"children": []
},
{
"level": 2,
"id": "image-creation",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#image-creation",
"title": "Image creation",
"children": []
},
{
"level": 2,
"id": "minting",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#minting",
"title": "Minting",
"children": []
},
{
"level": 2,
"id": "opening-the-cave",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#opening-the-cave",
"title": "Opening the cave",
"children": []
},
{
"level": 2,
"id": "problems",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#problems",
"title": "Problems",
"children": [
{
"level": 3,
"id": "fcd-rate-limiting",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#fcd-rate-limiting",
"title": "FCD rate limiting",
"children": []
},
{
"level": 3,
"id": "ipfs-hosting",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#ipfs-hosting",
"title": "IPFS hosting",
"children": []
}
]
},
{
"level": 2,
"id": "lessons-learned",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#lessons-learned",
"title": "Lessons learned",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 3960,
"reading_time": 20,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/blockchain-technology-smart-contracts-save-money/",
"title": "Blockchain Technology, Smart Contracts, and Your Company"
}
]
},
{
"relative_path": "blog/announcing-amber-ci-secret-tool.md",
"colocated_path": null,
"content": "<p>Years ago, <a href=\"https://travis-ci.org/\">Travis CI</a> introduced a method for passing secret values from your repository into the Travis CI system. This method relies on encryption to ensure that anyone can provide a new secret, but only the CI system itself can read those secrets. I've always thought that the Travis approach to secrets was one of the best around, and was disappointed that other CI tools continued to use the more standard "set and update secrets in a web interface" approach. (We'll get into the advantages of the encrypted-secrets approach a bit later.)</p>\n<p>Fast-forward to earlier this year, and for running <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a> deployment jobs, we found that the secrets-in-CI-web-interface approach simply wasn't scaling. So I hacked together a quick script that used GPG and symmetric key encryption to encrypt a <code>secrets.sh</code> file containing the relevant secrets for CI (or, really, CD in this case). This worked, but had some downsides.</p>\n<p>A few weeks ago, I finally bit the bullet and rewrote this ugly script. Instead of using GPG and symmetric key encryption, I used <a href=\"https://lib.rs/crates/sodiumoxide\"><code>sodiumoxide</code></a> and public key encryption. This addressed essentially all the pain points I had with our CD setup. However, this tool was very much custom-built for Kube360.</p>\n<p>Over the weekend, I extracted the general-purpose components of this tool into a <a href=\"https://github.com/fpco/amber\">new open source repository</a>. This blog post is announcing the first public release of Amber, a tool geared at CI/CD systems for better management of secret data over time. There's basic information in that repo to describe how to use the tool. 
This blog post is intended to go into more detail on why I believe encrypted-secrets is a better approach than web-interface-of-secrets.</p>\n<h2 id=\"the-pain-points\">The pain points</h2>\n<p>There are two primary issues with the standard CI secrets management approach:</p>\n<ol>\n<li>It can be tedious to manage a large number of values inside a web interface. I've personally made mistakes copy-pasting values. And if you ever need to run a script locally for testing purposes, copying all the values out each time is an even bigger pain. (More on that below.)</li>\n<li>It's completely reasonable for secret values to change over time. However, there's no evidence of this in the source repository feeding into the CI system. Instead, the changes happen opaquely, and can never be observed as having changed, nor an old build faithfully reproduced with the original values. (This is pretty similar to why we believe <a href=\"https://tech.fpcomplete.com/blog/2017/04/ci-build-process-in-code-repository/\">your CI build process should be in your code repository</a>.)</li>\n</ol>\n<p>With encrypted values within a repository, both of these things change. Adding new encrypted values is now a command line call, which for many of us is less tedious and more foolproof than web interfaces. The encrypted secrets are stored in the Git repository itself, so as values change over time, the files provide evidence of that fact. And checking out an old commit from the repository will allow you to rerun a build with exactly the same secrets as when the commit was made.</p>\n<h2 id=\"why-public-key\">Why public key</h2>\n<p>One of the important changes I made from the GPG script mentioned above was public key, instead of symmetric key, encryption. With symmetric key encryption, you use the same key to encrypt and decrypt data. That means that all people who want to encrypt a value into the repository need access to a piece of secret data. 
While encrypting new secret values isn't <em>that</em> common an activity, requiring access to that secret data is best avoided.</p>\n<p>Instead, with public key encryption, we generate a secret key and public key. The public key lives inside the repository, in the same file as the secrets themselves. With that in place, anyone with access to the repo can encrypt new values, without any ability to read existing values.</p>\n<p>Further, since the public key is available in the repository, Amber is able to perform sanity checks to ensure that its secret key matches up with the public key in the repository. While the encryption algorithms we use provide the ability to ensure message integrity, this self-check provides for nicer diagnostics, clearly distinguishing "message corrupted" from "looks like you're using the wrong secret key for this repository."</p>\n<h2 id=\"minimizing-deltas\">Minimizing deltas</h2>\n<p>Amber is optimized for the Git repository case. This includes wanting to minimize the deltas when updating secrets. This resulted in three design decisions:</p>\n<ul>\n<li>\n<p>The config file format is YAML. Its whitespace-sensitive formatting makes it a great choice to minimize the number of lines affected when updating a secret. While other formats (like TOML) would have been great choices too, I stuck with YAML as, anecdotally, it seems to have stronger overall language support for people wishing to write companion tools.</p>\n</li>\n<li>\n<p>In addition to storing the secret name and encrypted value (the ciphertext), Amber additionally includes a SHA256 digest of the secret. This means that, if you encrypt the same value twice, Amber can detect this and avoid generating a new ciphertext. 
This has the additional benefit of letting users check if they know the secret value without being able to decrypt the file.</p>\n</li>\n<li>\n<p>The most natural representation of this data would be a YAML mapping, something like:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">secrets:\n NAME1:\n sha256: deadbeef\n cipher: abc123\n</code></pre>\n<p>However, in most languages, the ordering of keys in a mapping is arbitrary. This makes it harder to read these files, and means that arbitrary minor changes may result in large deltas. Instead, Amber stores secrets in an array:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">secrets:\n- name: NAME1\n sha256: deadbeef\n cipher: abc123\n</code></pre>\n</li>\n</ul>\n<p>This all works together to achieve what for me is the goal of secrets in a repository: you can trivially see in a <code>git diff</code> which secret values were added, removed, or updated.</p>\n<h2 id=\"local-running\">Local running</h2>\n<p>Ideally, production deployments are only ever run from the official CI/CD system designated for that. However:</p>\n<ol>\n<li>Sometimes during development it's much easier to iterate by doing non-production deployments from your local system.</li>\n<li>As a realist, I have to admit that even the best-run DevOps teams may occasionally need to bend the rules for expediency or better debugging of a production issue.</li>\n</ol>\n<p>For Kube360, it wasn't unreasonable to have about a dozen secret values for a standard deployment. Copy/pasting all of those to your local machine each time you want to debug an issue wasn't feasible. This encouraged some worst practices, such as keeping the secret values in a plain-text shell script file locally. For a development cluster, that's not the worst thing in the world. 
But lax security practices in dev tend to bleed into prod too easily.</p>\n<p>Copying a single secret value from CI secrets or a team password manager is a completely different story. It takes 30 seconds at the beginning of a debug session. I have no objection to doing so.</p>\n<p>Even this may be something we can bypass with cloud secrets managers, which I'll mention below.</p>\n<h2 id=\"what-s-with-the-name\">What's with the name?</h2>\n<p>As we all know, there are two hard problems in computer science:</p>\n<ol>\n<li>Cache invalidation</li>\n<li>Naming things</li>\n<li>Off-by-one errors</li>\n</ol>\n<p>I named this tool Amber based on Jurassic Park, and the idea of some highly important data (dinosaur DNA) being trapped in amber under layers of sediment. This fit in nicely with my image of storing encrypted secrets inside the commits of a Git repository. But since I just finished playing "Legend of Zelda: Skyward Sword," a more appropriate image seems to be:</p>\n<p><img src=\"/images/blog/amber-zelda.png\" alt=\"Zelda trapped in amber\" /></p>\n<h2 id=\"implementation\">Implementation</h2>\n<p>I wrote this tool in Rust. It's a pretty small codebase currently, clocking in at only 445 SLOC of Rust code. It's also a pretty simple overall implementation, if anyone is interested in a first project to contribute to.</p>\n<h2 id=\"future-enhancements\">Future enhancements</h2>\n<p>Future enhancements will be driven by internal and customer needs at FP Complete, as well as feedback we receive on the issue tracker and pull requests. I have a few ideas ranging from concrete to nebulous for enhancements:</p>\n<ul>\n<li>Masking values. Currently, <code>amber exec</code> will simply run the child process without modifying its output at all. A standard CI system feature is to mask secret values from output. Implementing such a change in Amber should be straightforward. 
(<a href=\"https://github.com/fpco/amber/issues/1\">Issue #1</a>)</li>\n<li>Tie-ins with cloud secrets management systems. Currently, Amber's only source of the secret key is via environment variables. There are many use cases where grabbing the data from a secrets manager, such as AWS Secrets Manager or Azure Key Vault, would be a better choice. In particular, during deployments, this could allow delegating access to secrets to existing cloud-native permissions mechanisms. See <a href=\"https://github.com/fpco/amber/issues/2\">issue #2</a> and <a href=\"https://github.com/fpco/amber/pull/4\">pull request #4</a> for some more information. One possible approach here is to follow a pattern of naming the secret based on the public key, leading to a zero-config approach to discovering the secret key (since the public key is already in the repository).</li>\n<li>Additional platform support. Currently, we're building executables for x86-64 on Linux (static via musl), Windows, and Mac. Cross compilation support from Rust is great, and one of the reasons I prefer writing CI tools like this in Rust. However, the <code>sodiumoxide</code> library depends on <code>libsodium</code>, so additional GitHub Actions setup will be necessary to get these builds working.</li>\n<li>Auto-generation of passwords. In our Kube360 work, a common need is to generate a temporary password to be used by different components in the system (e.g., an OpenID Connect client secret used by both the Identity Provider and Service Provider). A simple <code>amber gen-password CLIENT_SECRET</code> subcommand may be nice.</li>\n<li>I haven't released this code to <a href=\"https://crates.io/\">crates</a>, but if there's interest I'd be happy to do so.</li>\n<li>Support for encrypted files in addition to encrypted environment variables. 
I haven't really thought through what the interface for this may look like.</li>\n</ul>\n<h2 id=\"get-started\">Get started</h2>\n<p>There are <a href=\"https://github.com/fpco/amber#readme\">instructions in the repo</a> for getting started with Amber. The basic steps are:</p>\n<ul>\n<li>Download the executable from <a href=\"https://github.com/fpco/amber/releases\">the release page</a> or build it yourself</li>\n<li>Use <code>amber init</code> to create an <code>amber.yaml</code> file and a secret key</li>\n<li>Store the secret key somewhere safe, like your password manager, and additionally within your CI system's secrets\n<ul>\n<li>In theory, this is the last value you'll ever store there!</li>\n</ul>\n</li>\n<li>Add your secrets with <code>amber encrypt</code></li>\n<li>Commit <code>amber.yaml</code> to your repository</li>\n<li>Modify your CI scripts to download the Amber executable and use <code>amber exec</code> to run commands that need secrets</li>\n</ul>\n<h2 id=\"more-from-fp-complete\">More from FP Complete</h2>\n<p>FP Complete is an IT consulting firm specializing in server-side development, DevOps, Rust, and Haskell. A large part of our consulting involves improving and automating build and deployment pipelines. If you're interested in additional help from FP Complete in one of these domains, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us</a>.</p>\n<p>Interested in working with a team of DevOps, Rust, and Haskell engineers to solve real world problems? We're actively <a href=\"https://tech.fpcomplete.com/jobs/\">hiring senior and lead DevOps engineers</a>.</p>\n<p>Want to read more? Check out:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/\">Our blog</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Our Rust homepage</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/",
"slug": "announcing-amber-ci-secret-tool",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Announcing Amber, encrypted secrets management",
"description": "We've released a new tool, Amber, to help better manage secrets in Git repositories for CI purposes. Read more about the motivation and how to get started.",
"updated": null,
"date": "2021-08-17",
"year": 2021,
"month": 8,
"day": 17,
"taxonomies": {
"tags": [
"kubernetes",
"rust"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/devops.png",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/announcing-amber.png"
},
"path": "/blog/announcing-amber-ci-secret-tool/",
"components": [
"blog",
"announcing-amber-ci-secret-tool"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "the-pain-points",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#the-pain-points",
"title": "The pain points",
"children": []
},
{
"level": 2,
"id": "why-public-key",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#why-public-key",
"title": "Why public key",
"children": []
},
{
"level": 2,
"id": "minimizing-deltas",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#minimizing-deltas",
"title": "Minimizing deltas",
"children": []
},
{
"level": 2,
"id": "local-running",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#local-running",
"title": "Local running",
"children": []
},
{
"level": 2,
"id": "what-s-with-the-name",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#what-s-with-the-name",
"title": "What's with the name?",
"children": []
},
{
"level": 2,
"id": "implementation",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#implementation",
"title": "Implementation",
"children": []
},
{
"level": 2,
"id": "future-enhancements",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#future-enhancements",
"title": "Future enhancements",
"children": []
},
{
"level": 2,
"id": "get-started",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#get-started",
"title": "Get started",
"children": []
},
{
"level": 2,
"id": "more-from-fp-complete",
"permalink": "https://tech.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#more-from-fp-complete",
"title": "More from FP Complete",
"children": []
}
],
"word_count": 1874,
"reading_time": 10,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/istio-mtls-debugging-story.md",
"colocated_path": null,
"content": "<p>Last week, our team was working on a feature enhancement to <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>. We work with clients in regulated industries, and one of the requirements was fully encrypted traffic throughout the cluster. While we've supported Istio's mutual TLS (mTLS) as an optional feature for end-user applications, not all of our built-in services were using mTLS strict mode. We were working on rolling out that support.</p>\n<p>One of the cornerstones of Kube360 is our centralized authentication system, which is primarily supplied by a service (called <code>k3dash</code>) that receives incoming traffic, performs authentication against an external identity provider (such as Okta, Azure AD, or others), and then provides those credentials to the other services within the clusters, such as the Kubernetes Dashboard or Grafana. This service in particular was giving some trouble.</p>\n<p>Before diving into the bugs and the debugging journey, however, let's review both Istio's mTLS support and relevant details of how <code>k3dash</code> operates.</p>\n<p><em>Interested in solving these kinds of problems? We're looking for experienced DevOps engineers to join our global team. We're hiring globally, and particularly looking for another US lead engineer. If you're interesting, <a href=\"mailto:[email protected]\">send your CV to [email protected]</a>.</em></p>\n<h2 id=\"what-is-mtls\">What is mTLS?</h2>\n<p>In a typical Kubernetes setup, encrypted traffic comes into the cluster and hits a load balancer. That load balancer terminates the TLS connection, resulting in the decrypted traffic. That decrypted traffic is then sent to the relevant service within the cluster. Since traffic within the cluster is typically considered safe, for many use cases this is an acceptable approach.</p>\n<p>But for some use cases, such as handling Personally Identifiable Information (PII), extra safeguards may be desired or required. 
In those cases, we would like to ensure that <em>all</em> network traffic, even traffic inside the same cluster, is encrypted. That gives extra guarantees against both snooping (reading data in transit) and spoofing (faking the source of data) attacks. This can help mitigate the impact of other flaws in the system.</p>\n<p>Implementing this complete data-in-transit encryption system manually requires a major overhaul to essentially every application in the cluster. You'll need to teach all of them to terminate their own TLS connections, issue certificates for all applications, and add a new Certificate Authority for all applications to respect.</p>\n<p>Istio's mTLS handles this outside of the application. It installs a sidecar that communicates with your application over a localhost connection, bypassing exposed network traffic. It uses sophisticated port forwarding rules (via iptables) to redirect incoming and outgoing traffic to and from the pod to go via the sidecar. And the Envoy sidecar in the pod handles all the logic of obtaining TLS certificates, refreshing keys, termination, etc.</p>\n<p>The way Istio handles all of this is pretty incredible. When it works, it works great. And when it fails, it can be disastrously difficult to debug. Which is what happened here (though thankfully it took less than a day to get to a conclusion). In the realm of <em>epic foreshadowment</em>, let me point out three specific aspects of Istio's mTLS worth mentioning.</p>\n<ul>\n<li>In strict mode, which is what we're going for, the Envoy sidecar will reject any incoming plaintext communication.</li>\n<li>Something I hadn't recognized at first, but now have fully internalized: normally, if you make an HTTP connection to a host that doesn't exist, you'll get a failed connection error. You definitely <em>won't</em> get an HTTP response. With Istio, however, you'll <em>always</em> make a successful outgoing HTTP connection, since your connection is going to Envoy itself. 
If the Envoy proxy cannot make the connection, it will return an HTTP response body with a 503 error message, like most proxies.</li>\n<li>The Envoy proxy has special handling for some protocols. Most importantly, if you make a plaintext HTTP outgoing connection, the Envoy proxy has sophisticated abilities to parse the outgoing request, understand details about various headers, and do intelligent routing.</li>\n</ul>\n<p>OK, that's mTLS. Let's talk about the other player here: <code>k3dash</code>.</p>\n<h2 id=\"k3dash-and-reverse-proxying\"><code>k3dash</code> and reverse proxying</h2>\n<p>The primary method <code>k3dash</code> uses to provide authentication credentials to other services inside the cluster is HTTP reverse proxying. This is a common technique, and common libraries exist for doing it. In fact, <a href=\"https://www.stackage.org/package/http-reverse-proxy\">I wrote one such library</a> years ago. We've already mentioned a common use case of reverse proxying: load balancing. In a reverse proxy situation, incoming traffic is received by one server, which analyzes the incoming request, performs some transformations, and then chooses a destination service to forward the request to.</p>\n<p>One of the most important aspects of reverse proxying is header management. There are a few different things you can do at the header level, such as:</p>\n<ul>\n<li>Remove hop-by-hop headers, such as <code>transfer-encoding</code>, which apply to a single hop and not the end-to-end communication between client and server.</li>\n<li>Inject new headers. For example, in <code>k3dash</code>, we regularly inject headers recognized by the final services for authentication purposes.</li>\n<li>Leave headers completely untouched. 
This is often the case with headers like <code>content-type</code>, where we typically want the client and final server to exchange data without any interference.</li>\n</ul>\n<p>As one <em>epic foreshadowment</em> example, consider the <code>Host</code> header in a typical reverse proxy situation. I may have a single load balancer handling traffic for a dozen different domain names, including domain names <code>A</code> and <code>B</code>. And perhaps I have a single service behind the reverse proxy serving the traffic for both of those domain names. I need to make sure that my load balancer forwards on the <code>Host</code> header to the final service, so it can decide how to respond to the request.</p>\n<p><code>k3dash</code> in fact uses the library linked above for its implementation, and is following fairly standard header forwarding rules, plus making some specific modifications within the application.</p>\n<p>I think that's enough backstory, and perhaps you're already beginning to piece together what went wrong based on my clues above. Anyway, let's dive in!</p>\n<h2 id=\"the-problem\">The problem</h2>\n<p>One of my coworkers, Sibi, got started on the Istio mTLS strict mode migration. He got strict mode turned on in a test cluster, and then began to figure out what was broken. I don't know all the preliminary changes he made. But when he reached out to me, he'd gotten us to a point where the Kubernetes load balancer was successfully receiving the incoming requests for <code>k3dash</code> and forwarding them along to <code>k3dash</code>. <code>k3dash</code> was able to log the user in and provide its own UI display. All good so far.</p>\n<p>However, following through from the main UI to the Kubernetes Dashboard would fail, and we'd end up with this error message in the browser:</p>\n<blockquote>\n<p>upstream connect error or disconnect/reset before headers. 
reset reason: connection failure</p>\n</blockquote>\n<p>Sibi believed this to be a problem with the <code>k3dash</code> codebase itself and asked me to step in to help debug.</p>\n<h2 id=\"the-wrong-rabbit-hole-and-incredible-laziness\">The wrong rabbit hole, and incredible laziness</h2>\n<p>This whole section is just a cathartic gripe session on how I foot-gunned myself. I'm entirely to blame for my own pain, as we're about to see.</p>\n<p>It seemed pretty clear that the outgoing connection from the <code>k3dash</code> pod to the <code>kubernetes-dashboard</code> pod was failing. (And this turned out to be a safe guess.) The first thing I wanted to do was make a simpler repro, which in this case involved <code>kubectl exec</code>ing into the <code>k3dash</code> container and <code>curl</code>ing to the in-cluster service endpoint. Essentially:</p>\n<pre><code>$ curl -ivvv http://kube360-kubernetes-dashboard.kube360-system.svc.cluster.local/\n* Trying 172.20.165.228...\n* TCP_NODELAY set\n* Connected to kube360-kubernetes-dashboard.kube360-system.svc.cluster.local (172.20.165.228) port 80 (#0)\n> GET / HTTP/1.1\n> Host: kube360-kubernetes-dashboard.kube360-system.svc.cluster.local\n> User-Agent: curl/7.58.0\n> Accept: */*\n>\n< HTTP/1.1 503 Service Unavailable\nHTTP/1.1 503 Service Unavailable\n< content-length: 84\ncontent-length: 84\n< content-type: text/plain\ncontent-type: text/plain\n< date: Wed, 14 Jul 2021 15:29:04 GMT\ndate: Wed, 14 Jul 2021 15:29:04 GMT\n< server: envoy\nserver: envoy\n<\n* Connection #0 to host kube360-kubernetes-dashboard.kube360-system.svc.cluster.local left intact\nupstream connect error or disconnect/reset before headers. reset reason: local reset\n</code></pre>\n<p>This reproed the problem right away. Great! 
I was now completely convinced that the problem was not <code>k3dash</code> specific, since neither <code>curl</code> nor <code>k3dash</code> could make the connection, and they both gave the same <code>upstream connect error</code> message. I could think of a few different reasons for this to happen, none of which were correct:</p>\n<ul>\n<li>The outgoing packets from the container were not being sent to the Envoy proxy. I strongly believed this one for a while. But if I'd thought a bit harder, I would have realized that this was completely impossible. That <code>upstream connect error</code> message was of course coming from the Envoy proxy itself! If we were having a normal connection failure, we would have received the error message at the TCP level, not as an HTTP 503 response code. Next!</li>\n<li>The Envoy sidecar was receiving the packets, but the mesh was confused enough that it couldn't figure out how to connect to the destination Envoy sidecar. This turned out to be partially right, but not in the way I thought.</li>\n</ul>\n<p>I futzed around with lots of different attempts here but was essentially stalled. Until Sibi noticed something fascinating. It turns out that the following, seemingly nonsensical command <em>did</em> work:</p>\n<pre><code>curl http://kube360-kubernetes-dashboard.kube360-system.svc.cluster.local:443/\n</code></pre>\n<p>For some reason, making an <em>insecure</em> HTTP request over 443, the <em>secure</em> HTTPS port, worked. This made no sense, of course. Why would using the wrong port fix everything? And this is where incredible laziness comes into play. You see, Kubernetes Dashboard's default configuration uses TLS, and requires all of that setup I mentioned above about passing around certificates and updating accepted Certificate Authorities. But you can turn off that requirement, and make it listen on plain text. 
Since (1) this was intracluster communication, and (2) we've always had strict mTLS on our roadmap, we decided to simply turn off TLS in the Kubernetes Dashboard. However, when doing so, I forgot to switch the port number from 443 to 80.</p>\n<p>Not to worry though! I <em>did</em> remember to correctly configure <code>k3dash</code> to communicate with Kubernetes Dashboard, using insecure HTTP, over port 443. Since both parties agreed on the port, it didn't matter that it was the wrong port.</p>\n<p>But this was all very frustrating. It meant that the "repro" wasn't a repro at all. <code>curl</code>ing on the wrong port was giving the same error message, but for a different reason. In the meantime, we went ahead and changed Kubernetes Dashboard to listen on port 80 and <code>k3dash</code> to connect on port 80. We thought there <em>may</em> be a possibility that the Envoy proxy was giving some special treatment to the port number, which in retrospect doesn't really make much sense. In any event, this confirmed that our "repro" had never been a repro at all.</p>\n<h2 id=\"the-bug-is-in-k3dash\">The bug is in <code>k3dash</code></h2>\n<p>Now it was clear that Sibi was right. <code>curl</code> could connect, <code>k3dash</code> couldn't. The bug <em>must</em> be inside <code>k3dash</code>. But I couldn't figure out how. Being the author of essentially all the HTTP libraries involved in this toolchain, I began to worry that my HTTP client library itself may somehow be the source of the bug. I went down a rabbit hole there too, putting together some minimal sample programs outside <code>k3dash</code>. I <code>kubectl cp</code>ed them over and then ran them... and everything worked fine. Phew, my libraries were working, but not <code>k3dash</code>.</p>\n<p>Then I did the thing I should have done at the very beginning. I looked at the logs very, very carefully. Remember, <code>k3dash</code> is doing a reverse proxy. 
So, it receives an incoming request, modifies it, makes the new request, and then sends a modified response back. The logs included the modified outgoing HTTP request (some fields modified to remove private information):</p>\n<pre><code>2021-07-15 05:20:39.820662778 UTC ServiceRequest Request {\n host = "kube360-kubernetes-dashboard.kube360-system.svc.cluster.local"\n port = 80\n secure = False\n requestHeaders = [("X-Real-IP","127.0.0.1"),("host","test-kube360-hostname.hidden"),("upgrade-insecure-requests","1"),("user-agent","<REDACTED>"),("accept","text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"),("sec-gpc","1"),("referer","http://test-kube360-hostname.hidden/dash"),("accept-language","en-US,en;q=0.9"),("cookie","<REDACTED>"),("x-forwarded-for","192.168.0.1"),("x-forwarded-proto","http"),("x-request-id","<REDACTED>"),("x-envoy-attempt-count","3"),("x-envoy-internal","true"),("x-forwarded-client-cert","<REDACTED>"),("Authorization","<REDACTED>")]\n path = "/"\n queryString = ""\n method = "GET"\n proxy = Nothing\n rawBody = False\n redirectCount = 0\n responseTimeout = ResponseTimeoutNone\n requestVersion = HTTP/1.1\n}\n</code></pre>\n<p>I tried to leave in enough content here to give you the same overwhelmed sense that I had looking at it. Keep in mind the <code>requestHeaders</code> field is in practice about three times as long. Anyway, with the slimmed down headers, and all my hints throughout, see if you can guess what the problem is.</p>\n<p>Ready? It's the <code>Host</code> header! Let's take a quote from the <a href=\"https://istio.io/latest/docs/ops/configuration/traffic-management/traffic-routing/\">Istio traffic routing documentation</a>. Regarding HTTP traffic, it says:</p>\n<blockquote>\n<p>Requests are routed based on the port and <em><code>Host</code></em> header, rather than port and IP. This means the destination IP address is effectively ignored. 
For example, <code>curl 8.8.8.8 -H "Host: productpage.default.svc.cluster.local"</code>, would be routed to the <code>productpage</code> Service.</p>\n</blockquote>\n<p>See the problem? <code>k3dash</code> is behaving like a standard reverse proxy, and including the <code>Host</code> header, which is almost always the right thing to do. But not here! In this case, that <code>Host</code> header we're forwarding is confusing Envoy. Envoy is trying to connect to something (<code>test-kube360-hostname.hidden</code>) that doesn't respond to its mTLS connections. That's why we get the <code>upstream connect error</code>. And that's why we got the same response as when we used the wrong port number, since Envoy is configured to only receive incoming traffic on a port that the service is actually listening to.</p>\n<h2 id=\"the-fix\">The fix</h2>\n<p>After all of that, the fix is rather anticlimactic:</p>\n<pre data-lang=\"diff\" class=\"language-diff \"><code class=\"language-diff\" data-lang=\"diff\">-(\\(h, _) -> not (Set.member h _serviceStripHeaders))\n+-- Strip out host headers, since they confuse the Envoy proxy\n+(\\(h, _) -> not (Set.member h _serviceStripHeaders) && h /= "Host")\n</code></pre>\n<p>We already had logic in <code>k3dash</code> to strip away specific headers for each service. And it turns out this logic was primarily used to strip out the <code>Host</code> header for services that got confused when they saw it! Now we just need to strip away the <code>Host</code> header for all the services instead. Fortunately, none of our services perform any logic based on the <code>Host</code> header, so with that in place, we should be good. We deployed the new version of <code>k3dash</code>, and voilà! Everything worked.</p>\n<h2 id=\"the-moral-of-the-story\">The moral of the story</h2>\n<p>I walked away from this adventure with a much better understanding of how Istio interacts with applications, which is great. 
I got a great reminder to look more carefully at log messages before hardening my assumptions about the source of a bug. And I got a great kick in the pants for being lazy about port number fixes.</p>\n<p>All in all, it was about six hours of debugging fun. And to quote a great Hebrew phrase on it, "היה טוב, וטוב שהיה" (it was good, and good that it <em>was</em> (in the past)).</p>\n<hr />\n<p>As I mentioned above, we're actively looking for new DevOps candidates, especially US based candidates. If you're interested in working with a global team of experienced DevOps, Rust, and Haskell engineers, consider <a href=\"mailto:[email protected]\">sending us your CV</a>.</p>\n<p>And if you're looking for a solid Kubernetes platform, batteries included, so you can offload this kind of tedious debugging to some other unfortunate souls (read: us), <a href=\"https://tech.fpcomplete.com/products/kube360/\">check out Kube360</a>.</p>\n<p>If you liked this article, you may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-kubernetes-windows/\">Deploying Rust with Windows Containers on Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/\">Cloud Vendor Neutrality</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">Secure defaults with Kubernetes Security with Kube360</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
"slug": "istio-mtls-debugging-story",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "An Istio/mutual TLS debugging story",
"description": "While rolling out Istio's strict mTLS mode in our Kube360 product, we ran into an interesting corner case problem.",
"updated": null,
"date": "2021-07-20",
"year": 2021,
"month": 7,
"day": 20,
"taxonomies": {
"tags": [
"kubernetes",
"regulated"
],
"categories": [
"devops",
"kube360",
"it-compliance"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/devops.png",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/istio-mtls-debugging-story.png"
},
"path": "/blog/istio-mtls-debugging-story/",
"components": [
"blog",
"istio-mtls-debugging-story"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-mtls",
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#what-is-mtls",
"title": "What is mTLS?",
"children": []
},
{
"level": 2,
"id": "k3dash-and-reverse-proxying",
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#k3dash-and-reverse-proxying",
"title": "k3dash and reverse proxying",
"children": []
},
{
"level": 2,
"id": "the-problem",
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-problem",
"title": "The problem",
"children": []
},
{
"level": 2,
"id": "the-wrong-rabbit-hole-and-incredible-laziness",
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-wrong-rabbit-hole-and-incredible-laziness",
"title": "The wrong rabbit hole, and incredible laziness",
"children": []
},
{
"level": 2,
"id": "the-bug-is-in-k3dash",
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-bug-is-in-k3dash",
"title": "The bug is in k3dash",
"children": []
},
{
"level": 2,
"id": "the-fix",
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-fix",
"title": "The fix",
"children": []
},
{
"level": 2,
"id": "the-moral-of-the-story",
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/#the-moral-of-the-story",
"title": "The moral of the story",
"children": []
}
],
"word_count": 2642,
"reading_time": 14,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
"title": "Canary Deployment with Kubernetes and Istio"
}
]
},
{
"relative_path": "blog/cloud-vendor-neutrality.md",
"colocated_path": null,
"content": "<p>Earlier this week, Amazon removed Parler from its platform. As a company hosting a network service on a cloud provider today, should you worry about such actions from cloud vendors? And what steps should you be taking now?</p>\n<p>In this post, we'll explore some of the risks associated with being tied to a single vendor, and the costs involved in breaking the dependency. I'll also give some recommendations on low hanging fruit.</p>\n<p>Ultimately, how far down the vendor neutrality path you want to go is a company specific risk mitigation strategy. In this post, we'll explore the raw information, but deeper analysis would be based on your company's specific situation. As usual, if you would like more direct help from the team at FP Complete in understanding these topics, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us for a consultation</a>.</p>\n<h2 id=\"what-is-vendor-neutrality\">What is vendor neutrality?</h2>\n<p>Vendor neutrality is not a binary. There are various levels on a spectrum from an application that leverages many vendor-specific services to an application which runs on any Linux machine in the world. Achieving complete vendor neutrality is almost never the goal. Instead, most companies interested in this topic are looking to reduce their dependencies where reasonable.</p>\n<p>To be more concrete, let's say you're on Amazon, and you're looking into what database options to use in your application. Your team comes up with three options:</p>\n<ol>\n<li>Build it using DynamoDB, an Amazon-specific proprietary offering</li>\n<li>Build it using PostgreSQL hosted on Amazon's RDS service</li>\n<li>Build it using PostgreSQL which your team manages themselves</li>\n</ol>\n<p>Option (1) provides no vendor neutrality. If you, for any reason, decide to leave Amazon, you'll need to rewrite large parts of your application to move from DynamoDB. 
This may be a significant undertaking, introducing a major barrier to exit from Amazon.</p>\n<p>Option (2), while still leveraging an Amazon service, does not fall into that same trap. Your application will speak to PostgreSQL, an open source database that can be hosted anywhere in the world. If you're dissatisfied with RDS, you can migrate to another offering fairly easily. PostgreSQL hosted offerings are available on other cloud providers. And by using RDS, you'll get some features more easily, such as backups and replication.</p>\n<p>Option (3) is the most vendor neutral. You'll be forced to implement all features of PostgreSQL you want yourself. Maybe this will entail creating a Docker image with a fully configured PostgreSQL instance. Moving this to Azure or on-prem is even easier than option (2). But we may be at the point of diminishing returns, as we'll discuss below.</p>\n<p>To summarize: vendor neutrality is a spectrum measuring how tied you are to a specific vendor, and how difficult it would be to move to a different one.</p>\n<h2 id=\"advantages-of-vendor-neutrality\">Advantages of vendor neutrality</h2>\n<p>The current situation with Parler is an extreme example of the advantages of vendor neutrality. I would imagine most companies doing business with Amazon don't have a reasonable expectation that Amazon would decide to remove them from their platform. Again, this is a risk assessment scenario, and you need to analyze the risk for your own business. A company hosting uncensored political discourse is in a different risk category from someone running a personal blog.</p>\n<p>But this is far from the only advantage of vendor neutrality. Let's analyze some of the most common concerns I've seen for companies to remain vendor neutral.</p>\n<ul>\n<li><strong>Price sensitivity</strong> Cloud costs can be a major part of a company's budget, and costs can vary radically between providers. 
Various providers are also willing to give large incentives for companies to switch platforms. But if you've designed your application deeply around one provider, the cost of switching may well exceed the long term cost savings, leaving you at your current provider's mercy.</li>\n<li><strong>Regulatory obligations</strong> Some governments may have requirements that your software run on specific vendor hardware, or specific on-prem environments. Building up your software around one provider may prevent you from offering your services in those cases.</li>\n<li><strong>Client preference</strong> Similarly, if you provide managed software to companies, they may have a built-in cloud provider preference. If you've built your software on Google Cloud, but they have a corporate policy that all new projects live on Azure, you may lose the sale.</li>\n<li><strong>Geographic distribution</strong> For lowest latency, you'll want to put your services as close to the clients as possible. And it may turn out that the provider you've chosen simply doesn't have a presence there. Or a competitor may be closer. Or a service you want to peer with is on a different provider, and the data costs will be much lower if you switch providers.</li>\n</ul>\n<p>There are many more examples; this isn't an exhaustive list. What I want to motivate here is that vendor neutrality isn't just a fringe ideal for companies afraid of platform eviction. There are many reasons a normal company in its normal course of business may wish to be vendor neutral. You should analyze these cases, as well as others that may apply to your company, and assess the value of neutrality.</p>\n<h2 id=\"costs-of-vendor-neutrality\">Costs of vendor neutrality</h2>\n<p>Vendor neutrality does not come for free. A primary value proposition of most cloud providers is quick time to market. By leveraging existing services, your team can offload creation and maintenance of complex systems. 
Eschewing such services and building from scratch will impact your time to market, and potentially have other impacts (like an increased bug rate, reduced reliability, etc.).</p>\n<p>I often see engineers decrying the evils of vendor lock-in without taking these costs into account. As a business, you'll need to find a way to adequately and accurately measure these costs as you make decisions, instead of turning it into a quasi-religious crusade against all forms of lock-in.</p>\n<p>With these trade-offs in mind, I'll finish off this post by explaining some of the most bang-for-the-buck moves you can make, which:</p>\n<ul>\n<li>Move you much farther along the vendor neutral spectrum</li>\n<li>Do not cost significant engineering work, if undertaken early on and designed correctly</li>\n<li>Provide additional benefits whenever possible</li>\n</ul>\n<h2 id=\"leverage-open-source-tools\">Leverage open source tools</h2>\n<p>The hardest lock-in to overcome is dedication to a proprietary tool. Without naming names, some large 6-letter database companies have built quite a reputation for leveraging lock-in with major increases in licensing fees. Once you're tied into that model, it's difficult to disengage.</p>\n<p>Open source tools provide a major protection against this. Assuming the licenses are correct—and you should be sure to check that—no one can ever take your open source tools away from you. Sure, a provider may decide to stop maintaining the software. Or perhaps future releases may be closed source instead. Or perhaps they won't address your bug reports without paying for a support contract. But ultimately, you retain lots of freedom to take the software, modify it as necessary, and deploy it everywhere.</p>\n<p>There has long been a debate over the features and maturity of proprietary versus open source tooling. As always, we cannot make our decisions in a vacuum, and the flexibility of open source is not the be-all and end-all for a business. 
However, in the past decade in particular, open source has come to dominate large parts of the deployment space.</p>\n<p>To pick on the example above: while DynamoDB is a powerful and flexible database option on AWS, it's far from unique. Cassandra, Redis, PostgreSQL, and dozens of other open source databases are readily available, with companies offering support, commercial hosting, and paid consulting services.</p>\n<p>We've seen a major shift occur as well in the software development language space. Many of the biggest tech companies in the world not only <em>use</em> open source languages, but provide their own complete language ecosystems, free of charge. Google's Go, Microsoft's .NET Core, Mozilla's <a href=\"https://tech.fpcomplete.com/rust/\">Rust</a>, and Apple's Swift are some prime examples.</p>\n<p>Far from being the scrappy underdog, we've seen a shift where open source is the de facto standard, and proprietary options are viewed as niche. You're no longer trading quality for flexibility. You can often have your cake and eat it too.</p>\n<h3 id=\"kubernetes\">Kubernetes</h3>\n<p>I decided to give one open source player its own subsection in this context. Kubernetes is an orchestration management tool, managing various cloud resources for hosting containerized applications in both Linux and Windows. The first notable thing in this context is that Kubernetes has effectively supplanted other proprietary and cloud-specific offerings. Those offerings still exist, but from a market share standpoint, Kubernetes is clearly in a dominant position.</p>\n<p>The second thing to note is that Kubernetes is a tool supported by many of the largest cloud providers. Google created Kubernetes, Microsoft provides significant support, and all three top cloud providers (Google, Azure, and AWS) offer native Kubernetes services.</p>\n<p>The final thing to note is that Kubernetes really goes beyond a single service. In many ways, it functions as a cloud abstraction layer. 
When you use Kubernetes, you often write your applications to target Kubernetes <em>instead of</em> targeting the underlying vendor. Instead of using a cloud Load Balancer, you'll use an ingress and service in Kubernetes. This drastically reduces the cost of remaining vendor neutral.</p>\n<p>As a plug, in <a href=\"https://tech.fpcomplete.com/products/kube360/\">our own Kubernetes offering</a>, we've focused on combining commonly used open source components to provide a batteries-included experience with minimized vendor lock-in. We've already used it internally and for customers to easily migrate services between different cloud providers, and from the cloud to on-prem.</p>\n<div class=\"text-center\"><a href=\"/products/kube360\" class=\"button-coral\">Learn more about Kube360</a></div>\n<h2 id=\"high-value-cloud-services\">High value cloud services</h2>\n<p>Some cloud services provide an interesting combination of delivering high value with minimal lock-in costs. The greatest example of that is blob storage services, such as S3. The durability and availability guarantees cloud providers offer around your data are far greater than most teams would be able to provide on their own. The cost of usage is significantly lower than rolling your own solution using block storage in the cloud. And finally: the lock-in risks tend to be small. There are tools available to abstract the different vendor APIs for blob storage (and we include such a tool in Kube360). And even without such tools, generally the impact on a codebase from blob storage selection is minimal.</p>\n<p>Another example is services which host open source offerings. The RDS example above fits in nicely here. 
We generally recommend using hosted database offerings from cloud providers, since the cost is close to what you would pay to set it up yourself, you get lots of features quickly, and migration to a different option is trivial.</p>\n<p>And one final example is services like load balancers and auto-scaling groups. These are services that are impossible to implement fully yourself, would be far more expensive to implement to any extent using cloud virtual machines, and introduce virtually no lock-in. If you're moving from AWS to Azure, you'll need to change your infrastructure code to use Azure equivalents to those services. But generally, these can be seen at the same level of commodity as the virtual machines themselves. You're paying for a fairly standard service, you're rarely locking yourself in to a vendor-specific feature.</p>\n<h2 id=\"multicloud-vs-hybrid-cloud\">Multicloud vs hybrid cloud</h2>\n<p>In previous discussions, the topic of vendor neutrality typically introduces the two confusing terms "multicloud" and "hybrid cloud." There is some disagreement in the tech space around what the former term means, but I'm going to define these two terms as:</p>\n<ul>\n<li><strong>Multicloud</strong> means that your service is capable of running on multiple different cloud providers and/or on-prem environments, but each environment will be autonomous from others</li>\n<li><strong>Hybrid cloud</strong> means that you can simultaneously run your service on multiple cloud providers, and they will replicate data, load balance, and perform other intelligent operations between the different providers</li>\n</ul>\n<p>Multicloud is a much easier thing to attain than hybrid cloud. Hybrid cloud introduces many new kinds of distributed systems failure models, as well as risks around major data transfer costs and latencies. 
There are certainly some potential advantages for hybrid cloud setups, but in our experience the much lower hanging fruit is in targeting multicloud.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>Summing up, there are many reasons a company may decide to keep their applications vendor neutral. Each of these reasons can be seen as a risk mitigation strategy, and a proper risk assessment and cost analysis should be performed. While current events have people's attention on vendor eviction, plenty of other reasons exist.</p>\n<p>On the other hand, vendor neutrality is not free, and should not be pursued to the detriment of the business. Finding high value, low cost moves to increase your neutrality is your best bet. Such moves may include:</p>\n<ul>\n<li>Opting for open source where possible</li>\n<li>Using a platform like <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kubernetes</a> that encourages more neutrality</li>\n<li>Opting for cloud services that are more easily swappable, such as load balancers</li>\n</ul>\n<p>If you would like more information or help with a vendor neutrality risk assessment, we would love to chat.</p>\n<div class=\"text-center\"><a href=\"/contact-us/\" class=\"button-coral\">Contact us for more information</a></div>\n<p>If you liked this post, you may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/why-we-built-kube360/\">Why we built Kube360</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/understanding-cloud-deployments/\">Understanding Cloud Software Deployments</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/",
"slug": "cloud-vendor-neutrality",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloud Vendor Neutrality",
"description": "Amazon recently removed Parler from its platform, causing some people to ask if and how they should protect themselves from cloud providers. In this post, we'll explore costs and benefits of keeping yourself cloud vendor neutral, and how to approach it expediently.",
"updated": null,
"date": "2021-01-13",
"year": 2021,
"month": 1,
"day": 13,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/devops.png",
"image": "images/blog/cloud-vendor-neutrality.png"
},
"path": "/blog/cloud-vendor-neutrality/",
"components": [
"blog",
"cloud-vendor-neutrality"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-vendor-neutrality",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#what-is-vendor-neutrality",
"title": "What is vendor neutrality?",
"children": []
},
{
"level": 2,
"id": "advantages-of-vendor-neutrality",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#advantages-of-vendor-neutrality",
"title": "Advantages of vendor neutrality",
"children": []
},
{
"level": 2,
"id": "costs-of-vendor-neutrality",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#costs-of-vendor-neutrality",
"title": "Costs of vendor neutrality",
"children": []
},
{
"level": 2,
"id": "leverage-open-source-tools",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#leverage-open-source-tools",
"title": "Leverage open source tools",
"children": [
{
"level": 3,
"id": "kubernetes",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#kubernetes",
"title": "Kubernetes",
"children": []
}
]
},
{
"level": 2,
"id": "high-value-cloud-services",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#high-value-cloud-services",
"title": "High value cloud services",
"children": []
},
{
"level": 2,
"id": "multicloud-vs-hybrid-cloud",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#multicloud-vs-hybrid-cloud",
"title": "Multicloud vs hybrid cloud",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2235,
"reading_time": 12,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
"title": "Canary Deployment with Kubernetes and Istio"
},
{
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
"title": "An Istio/mutual TLS debugging story"
}
]
},
{
"relative_path": "blog/rust-kubernetes-windows.md",
"colocated_path": null,
"content": "<p>A few years back, we <a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">published a blog post</a> about deploying a Rust application using Docker and Kubernetes. That application was a Telegram bot. We're going to do something similar today, but with a few meaningful differences:</p>\n<ol>\n<li>We're going to be deploying a web app. Don't get too excited: this will be an incredibly simply piece of code, basically copy-pasted from the <a href=\"https://actix.rs/docs/application/\">actix-web documentation</a>.</li>\n<li>We're going to build the deployment image on Github Actions</li>\n<li>And we're going to be building this using Windows Containers instead of Linux. (Sorry for burying the lead.)</li>\n</ol>\n<p>We put this together for testing purposes when rolling out Windows support in our <a href=\"https://tech.fpcomplete.com/products/kube360/\">managed Kubernetes product, Kube360®</a> here at FP Complete. I wanted to put this post together to demonstrate a few things:</p>\n<ul>\n<li>How pleasant and familiar Windows Containers workflows were versus the more familiar Linux approaches</li>\n<li>Github Actions work seamlessly for building Windows Containers</li>\n<li>With the correct configuration, Kubernetes is a great platform for deploying Windows Containers</li>\n<li>And, of course, how wonderful the Rust toolchain is on Windows</li>\n</ul>\n<p>Alright, let's dive in! And if any of those topics sound interesting, and you'd like to learn more about FP Complete offerings, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us for more information on our offerings</a>.</p>\n<h2 id=\"prereqs\">Prereqs</h2>\n<p>Quick sidenote before we dive in. Windows Containers only run on Windows machines. Not even all Windows machines will support Windows Containers. You'll need Windows 10 Pro or a similar license, and have Docker installed on that machine. 
You'll also need to ensure that Docker is set to use Windows instead of Linux containers.</p>\n<p>If you have all of that set up, you'll be able to follow along with most of the steps below. If not, you won't be able to build or run the Docker images on your local machine.</p>\n<p>Also, for running the application on Kubernetes, you'll need a Kubernetes cluster with Windows nodes. I'll be using the FP Complete Kube360 test cluster on Azure in this blog post, though we've previously tested it on both AWS and on-prem clusters too.</p>\n<h2 id=\"the-rust-application\">The Rust application</h2>\n<p>The source code for this application will be, by far, the most uninteresting part of this post. As mentioned, it's basically a copy-paste of an example straight from the actix-web documentation featuring mutable state. It turns out this was a great way to test out basic Kubernetes functionality like health checks, replicas, and autohealing.</p>\n<p>We're going to build this using the latest stable Rust version as of writing this post, so create a <code>rust-toolchain</code> file with the contents:</p>\n<pre><code>1.47.0\n</code></pre>\n<p>Our <code>Cargo.toml</code> file will be pretty vanilla, just adding in the dependency on <code>actix-web</code>:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[package]\nname = "windows-docker-web"\nversion = "0.1.0"\nauthors = ["Michael Snoyman <[email protected]>"]\nedition = "2018"\n\n[dependencies]\nactix-web = "3.1"\n</code></pre>\n<p>If you want to see the <code>Cargo.lock</code> file I compiled with, it's <a href=\"https://github.com/fpco/windows-docker-web/blob/f8a3192e63f2e699cc67716488a633f5e0893446/Cargo.lock\">available in the source repo</a>.</p>\n<p>And finally, the actual code in <code>src/main.rs</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use actix_web::{get, web, App, HttpServer};\nuse 
std::sync::Mutex;\n\nstruct AppState {\n counter: Mutex<i32>,\n}\n\n#[get("/")]\nasync fn index(data: web::Data<AppState>) -> String {\n let mut counter = data.counter.lock().unwrap();\n *counter += 1;\n format!("Counter is at {}", counter)\n}\n\n#[actix_web::main]\nasync fn main() -> std::io::Result<()> {\n let host = "0.0.0.0:8080";\n println!("Trying to listen on {}", host);\n let app_state = web::Data::new(AppState {\n counter: Mutex::new(0),\n });\n HttpServer::new(move || App::new().app_data(app_state.clone()).service(index))\n .bind(host)?\n .run()\n .await\n}\n</code></pre>\n<p>This code creates an application state (a mutex of an <code>i32</code>), defines a single <code>GET</code> handler that increments that variable and prints the current value, and then hosts this on <code>0.0.0.0:8080</code>. Not too shabby.</p>\n<p>If you're following along with the code, now would be a good time to <code>cargo run</code> and make sure you're able to load up the site on your <code>localhost:8080</code>.</p>\n<h2 id=\"dockerfile\">Dockerfile</h2>\n<p>If this is your first foray into Windows Containers, you may be surprised to hear me say "Dockerfile." Windows Container images can be built with the same kind of Dockerfiles you're used to from the Linux world. This even supports more advanced features, such as multistage Dockerfiles, which we're going to take advantage of here.</p>\n<p>There are a number of different base images provided by Microsoft for Windows Containers. We're going to be using Windows Server Core. It provides enough capabilities for installing Rust dependencies (which we'll see shortly), without including too many unneeded extras. Nanoserver is a much lighter-weight image, but it doesn't play nicely with the Microsoft Visual C++ runtime we're using for the <code>-msvc</code> Rust target.</p>\n<p><strong>NOTE</strong> I've elected to use the <code>-msvc</code> target here instead of <code>-gnu</code> for two reasons. 
Firstly, it's closer to the actual use cases we need to support in Kube360, and therefore made a better test case. Also, as the default target for Rust on Windows, it seemed appropriate. It should be possible to set up a more minimal nanoserver-based image based on the <code>-gnu</code> target, if someone's interested in a "fun" side project.</p>\n<p>The <a href=\"https://github.com/fpco/windows-docker-web/blob/f8a3192e63f2e699cc67716488a633f5e0893446/Dockerfile\">complete Dockerfile is available on Github</a>, but let's step through it more carefully. As mentioned, we'll be performing a multistage build. We'll start with the build image, which will install the Rust build toolchain and compile our application. We start off by using the Windows Server Core base image and switching the shell back to the standard <code>cmd.exe</code>:</p>\n<pre><code>FROM mcr.microsoft.com/windows/servercore:1809 as build\n\n# Restore the default Windows shell for correct batch processing.\nSHELL ["cmd", "/S", "/C"]\n</code></pre>\n<p>Next we're going to install the Visual Studio buildtools necessary for building Rust code:</p>\n<pre><code># Download the Build Tools bootstrapper.\nADD https://aka.ms/vs/16/release/vs_buildtools.exe /vs_buildtools.exe\n\n# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload,\n# excluding workloads and components with known issues.\nRUN vs_buildtools.exe --quiet --wait --norestart --nocache \\\n --installPath C:\\BuildTools \\\n --add Microsoft.Component.MSBuild \\\n --add Microsoft.VisualStudio.Component.Windows10SDK.18362 \\\n --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64\t\\\n || IF "%ERRORLEVEL%"=="3010" EXIT 0\n</code></pre>\n<p>And then we'll modify the entrypoint to include the environment modifications necessary to use those buildtools:</p>\n<pre><code># Define the entry point for the docker container.\n# This entry point starts the developer command prompt and launches the PowerShell shell.\nENTRYPOINT 
["C:\\\\BuildTools\\\\Common7\\\\Tools\\\\VsDevCmd.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]\n</code></pre>\n<p>Next up is installing <code>rustup</code>, which is fortunately pretty easy:</p>\n<pre><code>RUN curl -fSLo rustup-init.exe https://win.rustup.rs/x86_64\nRUN start /w rustup-init.exe -y -v && echo "Error level is %ERRORLEVEL%"\nRUN del rustup-init.exe\n\nRUN setx /M PATH "C:\\Users\\ContainerAdministrator\\.cargo\\bin;%PATH%"\n</code></pre>\n<p>Then we copy over the relevant source files and kick off a build, storing the generated executable in <code>c:\\output</code>:</p>\n<pre><code>COPY Cargo.toml /project/Cargo.toml\nCOPY Cargo.lock /project/Cargo.lock\nCOPY rust-toolchain /project/rust-toolchain\nCOPY src/ /project/src\nRUN cargo install --path /project --root /output\n</code></pre>\n<p>And with that, we're done with our build! Time to jump over to our runtime image. We don't need the Visual Studio buildtools in this image, but we do need the Visual C++ runtime:</p>\n<pre><code>FROM mcr.microsoft.com/windows/servercore:1809\n\nADD https://download.microsoft.com/download/6/A/A/6AA4EDFF-645B-48C5-81CC-ED5963AEAD48/vc_redist.x64.exe /vc_redist.x64.exe\nRUN c:\\vc_redist.x64.exe /install /quiet /norestart\n</code></pre>\n<p>With that in place, we can copy over our executable from the build image and set it as the default <code>CMD</code> in the image:</p>\n<pre><code>COPY --from=build c:/output/bin/windows-docker-web.exe /\n\nCMD ["/windows-docker-web.exe"]\n</code></pre>\n<p>And just like that, we've got a real life Windows Container. If you'd like to, you can test it out yourself by running:</p>\n<pre><code>> docker run --rm -p 8080:8080 fpco/windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446\n</code></pre>\n<p>If you connect to port 8080, you should see our painfully simple app. 
Hurrah!</p>\n<h2 id=\"building-with-github-actions\">Building with Github Actions</h2>\n<p>One of the nice things about using a multistage Dockerfile for performing the build is that our CI scripts become very simple. Instead of needing to set up an environment with the correct build tools or any other configuration, our script:</p>\n<ul>\n<li>Logs into the Docker Hub registry</li>\n<li>Performs a <code>docker build</code></li>\n<li>Pushes to the Docker Hub registry</li>\n</ul>\n<p>The downside is that there is no build caching at play with this setup. There are multiple methods to mitigate this problem, such as creating helper build images that pre-bake the dependencies. Or you can perform the builds on the host on CI and only use the Dockerfile for generating the runtime image. Those are interesting tweaks to try out another time. </p>\n<p>Taking the simple multistage approach, though, we have the following in our <code>.github/workflows/container.yml</code> file:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">name: Build a Windows container\n\non:\n push:\n branches: [master]\n\njobs:\n build:\n runs-on: windows-latest\n\n steps:\n - uses: actions/checkout@v1\n\n - name: Build and push\n shell: bash\n run: |\n echo "${{ secrets.DOCKER_HUB_TOKEN }}" | docker login --username fpcojenkins --password-stdin\n IMAGE_ID=fpco/windows-docker-web:$GITHUB_SHA\n docker build -t $IMAGE_ID .\n docker push $IMAGE_ID\n</code></pre>\n<p>I like following the convention of tagging my images with the Git SHA of the commit. Other people prefer different tagging schemes; it's all up to you.</p>\n<h2 id=\"manifest-files\">Manifest files</h2>\n<p>Now that we have a working Windows Container image, the next step is to deploy it to our Kube360 cluster. Generally, we use ArgoCD and Kustomize for managing app deployments within Kube360, which lets us keep a very nice Gitops workflow. 
For this blog post, though, I'll show you the raw manifest files instead. This will also let us play with the <code>k3</code> command line tool, which happens to be written in Rust.</p>\n<p>First we'll have a Deployment manifest to manage the pods running the application itself. Since this is a simple Rust application, we can put very low resource limits on this. We're going to disable the Istio sidecar, since it's not compatible with Windows. We're going to ask Kubernetes to use the Windows machines to host these pods. And we're going to set up some basic health checks. All told, this is what our manifest file looks like:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: windows-docker-web\n labels:\n app.kubernetes.io/component: webserver\nspec:\n replicas: 1\n minReadySeconds: 5\n selector:\n matchLabels:\n app.kubernetes.io/component: webserver\n template:\n metadata:\n labels:\n app.kubernetes.io/component: webserver\n annotations:\n sidecar.istio.io/inject: "false"\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: windows-docker-web\n image: fpco/windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446\n ports:\n - name: http\n containerPort: 8080\n readinessProbe:\n httpGet:\n path: /\n port: 8080\n initialDelaySeconds: 10\n periodSeconds: 10\n livenessProbe:\n httpGet:\n path: /\n port: 8080\n initialDelaySeconds: 10\n periodSeconds: 10\n resources:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 128Mi\n cpu: 100m\n</code></pre>\n<p>Awesome, that's by far the most complicated of the three manifests. 
Next we'll put a fairly stock-standard Service in front of that deployment:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: v1\nkind: Service\nmetadata:\n name: windows-docker-web\n labels:\n app.kubernetes.io/component: webserver\nspec:\n ports:\n - name: http\n port: 80\n targetPort: http\n type: ClusterIP\n selector:\n app.kubernetes.io/component: webserver\n</code></pre>\n<p>This exposes a service on port 80 and targets the <code>http</code> port (port 8080) inside the deployment. Finally, we have our Ingress. Kube360 uses external DNS to automatically set DNS records, and cert-manager to automatically grab TLS certificates. Our manifest looks like this:</p>\n<pre data-lang=\"yaml\" class=\"language-yaml \"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n annotations:\n cert-manager.io/cluster-issuer: letsencrypt-ingress-prod\n kubernetes.io/ingress.class: nginx\n nginx.ingress.kubernetes.io/force-ssl-redirect: "true"\n name: windows-docker-web\nspec:\n rules:\n - host: windows-docker-web.az.fpcomplete.com\n http:\n paths:\n - backend:\n serviceName: windows-docker-web\n servicePort: 80\n tls:\n - hosts:\n - windows-docker-web.az.fpcomplete.com\n secretName: windows-docker-web-tls\n</code></pre>\n<p>Now that we have our application inside a Docker image, and we have our manifest files to instruct Kubernetes on how to run it, we just need to deploy these manifests and we'll be done.</p>\n<h2 id=\"launch\">Launch</h2>\n<p>With our manifests in place, we can finally deploy them. You can use <code>kubectl</code> directly to do this. Since I'm deploying to Kube360, I'm going to use the <code>k3</code> command line tool, which automates the process of logging in, getting temporary Kubernetes credentials, and providing those to the <code>kubectl</code> command via an environment variable. 
These steps could be run on Windows, Mac, or Linux. But since we've done the rest of this post on Windows, I'll use my Windows machine for this too.</p>\n<pre><code>> k3 init test.az.fpcomplete.com\n> k3 kubectl apply -f deployment.yaml\nWeb browser opened to https://test.az.fpcomplete.com/k3-confirm?nonce=c1f764d8852f4ff2a2738fb0a2078e68\nPlease follow the login steps there (if needed).\nThen return to this terminal.\nPolling the server. Please standby.\nChecking ...\nThanks, got the token response. Verifying token is valid\nRetrieving a kubeconfig for use with k3 kubectl\nKubeconfig retrieved. You are now ready to run kubectl commands with `k3 kubectl ...`\ndeployment.apps/windows-docker-web created\n> k3 kubectl apply -f ingress.yaml\ningress.networking.k8s.io/windows-docker-web created\n> k3 kubectl apply -f service.yaml\nservice/windows-docker-web created\n</code></pre>\n<p>I told <code>k3</code> to use the <code>test.az.fpcomplete.com</code> cluster. On the first <code>k3 kubectl</code> call, it detected that I did not have valid credentials for the cluster, and opened up my browser to a page that allowed me to log in. One of the design goals in Kube360 is to strongly leverage existing identity providers, such as Azure AD, Google Directory, Okta, Microsoft 365, and others. This is not only more secure than copy-pasting <code>kubeconfig</code> files with permanent credentials around, but more user friendly. 
As you can see, the process above was pretty automated.</p>\n<p>It's easy enough to check that the pods are actually running and healthy:</p>\n<pre><code>> k3 kubectl get pods\nNAME READY STATUS RESTARTS AGE\nwindows-docker-web-5687668cdf-8tmn2 1/1 Running 0 3m2s\n</code></pre>\n<p>Initially, the ingress controller looked like this while it was getting TLS certificates:</p>\n<pre><code>> k3 kubectl get ingress\nNAME CLASS HOSTS ADDRESS PORTS AGE\ncm-acme-http-solver-zlq6j <none> windows-docker-web.az.fpcomplete.com 80 0s\nwindows-docker-web <none> windows-docker-web.az.fpcomplete.com 80, 443 3s\n</code></pre>\n<p>And after cert-manager gets the TLS certificate, it will switch over to:</p>\n<pre><code>> k3 kubectl get ingress\nNAME CLASS HOSTS ADDRESS PORTS AGE\nwindows-docker-web <none> windows-docker-web.az.fpcomplete.com 52.151.225.139 80, 443 90s\n</code></pre>\n<p>And finally, our site is live! Hurrah, a Rust web application compiled for Windows and running on Kubernetes inside Azure.</p>\n<p><strong>NOTE</strong> Depending on when you read this post, the web app may or may not still be live, so don't be surprised if you don't get a response if you try to connect to that host.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>This post was a bit light on actual Rust code, but heavy on a lot of Windows scripting. As I think many Rustaceans already know, the dev experience for Rust on Windows is top notch. What may not have been obvious is how pleasant the Docker experience is on Windows. There are definitely some pain points, like the large images involved and needing to install the VC runtime. But overall, with a bit of cargo-culting, it's not too bad. 
And finally, having a cluster with Windows support ready via Kube360 makes deployment a breeze.</p>\n<p>If anyone has follow up questions about anything here, please <a href=\"https://twitter.com/snoyberg\">reach out to me on Twitter</a> or <a href=\"https://tech.fpcomplete.com/contact-us/\">contact our team at FP Complete</a>. In addition to our <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360 product offering</a>, FP Complete provides many related services, including:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/platformengineering/\">DevOps consulting</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust consulting and training</a></li>\n<li><a href=\"https://tech.fpcomplete.com/services/\">General training and consulting services</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/\">Haskell consulting and training</a></li>\n</ul>\n<p>If you liked this post, please check out some related posts:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">The Rust Crash Course eBook</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/understanding-cloud-auth/\">Understanding cloud auth</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
"slug": "rust-kubernetes-windows",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Deploying Rust with Windows Containers on Kubernetes",
"description": "An example of deploying Rust inside a Windows Containers as a web service hosted on Kubernetes",
"updated": null,
"date": "2020-10-26",
"year": 2020,
"month": 10,
"day": 26,
"taxonomies": {
"tags": [
"rust",
"devops",
"kubernetes"
],
"categories": [
"functional programming",
"devops"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png",
"image": "images/blog/rust-windows-kube360.png"
},
"path": "/blog/rust-kubernetes-windows/",
"components": [
"blog",
"rust-kubernetes-windows"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "prereqs",
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#prereqs",
"title": "Prereqs",
"children": []
},
{
"level": 2,
"id": "the-rust-application",
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#the-rust-application",
"title": "The Rust application",
"children": []
},
{
"level": 2,
"id": "dockerfile",
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#dockerfile",
"title": "Dockerfile",
"children": []
},
{
"level": 2,
"id": "building-with-github-actions",
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#building-with-github-actions",
"title": "Building with Github Actions",
"children": []
},
{
"level": 2,
"id": "manifest-files",
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#manifest-files",
"title": "Manifest files",
"children": []
},
{
"level": 2,
"id": "launch",
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#launch",
"title": "Launch",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2573,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
"title": "Canary Deployment with Kubernetes and Istio"
},
{
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
"title": "An Istio/mutual TLS debugging story"
}
]
},
{
"relative_path": "blog/paradigm-shift-key-to-competing.md",
"colocated_path": null,
"content": "<p>It used to be that being technically mature was thought to be a good thing; now, that view is not so cut and dried. As you look at topics like containerization, cloud migration, and DevOps, it is easy to see why young companies get to claim the term “Cloud Native.” At the same time, those who have been in business for decades are frequently relegated to the legions of those needing ‘transforming.’ While this is, of course, an overgeneralization, it feels right more often than not. So, what are the ‘mature’ to do? </p>\n<p>Talking to several older small and medium sized businesses, a few strategic changes help propel those who are thinking about tech ‘transformation’ into becoming better, faster, more cost-effective, and more secure. These strategies include focusing on containerizing business logic, cloud-enabling their enterprise, and taking a fresh look at open source offerings for their infrastructure. If we look at these topics from an executive seat rather than an engineering one, a path and a plan emerges. </p>\n<a href=\"/devops/why-what-how/\">\n<p style=\"text-align:center;font-size:2em;border-width: 3px 0;border-color:#ff8d6e;border-style: dashed;margin:1em 0;padding:0.25em 0;font-weight: bold\">\nCheck Out The Why, What, and How of DevSecOps\n</p>\n</a>\n<p>Containerization is not a new topic; it has just evolved. We have all gone from monolithic solutions to distributed computing. From there, we bought small Linux servers, and they felt like containers; then, virtualization came to market, and the VM became the new container. Now, we have Docker and Kubernetes. Docker containers represent a considerable paradigm shift in that they do not require a lot of hardware or yet another OS license…., and when managed by Kubernetes, they create an entire ecosystem with little overhead. Kubernetes take Docker containers and handle horizontal scaling, fault tolerance, automated monitoring, etc. within a DevOps toolset and frame. 
What makes this setup even more impressive is that it is Open Source, yet supported by ‘the most prominent’ tech infrastructure firms. </p>\n<p>Once we start embracing modern container architectures, the conversation gets fascinating. All cloud and virtualization providers are now battling each other to get customers to deploy these standardized workloads onto their proprietary platforms. While there are always a few complications, Docker and Kubernetes run on AWS, Azure, VMWare, GCP, etc., with little (or no) alteration if you follow the Open Source path. </p>\n<p>So imagine... once we were trying to figure out how to build in fault tolerance, scalability, continuous development/deployment, and automated testing... now all we need to do is follow a DevOps approach using Open Source frameworks like Docker and Kubernetes... and voila... you are there (well, it isn’t that easy... but a darn sight easier than it used to be). Oh... and by the way, all of this is far easier to deploy in the cloud than on-premise, but that is a topic for another day. </p>\n",
"permalink": "https://tech.fpcomplete.com/blog/paradigm-shift-key-to-competing/",
"slug": "paradigm-shift-key-to-competing",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "A Paradigm Shift is Key to Competing",
"description": "",
"updated": null,
"date": "2020-10-16",
"year": 2020,
"month": 10,
"day": 16,
"taxonomies": {
"categories": [
"devops",
"insights"
],
"tags": [
"devops",
"insights"
]
},
"authors": [],
"extra": {
"author": "Wes Crook",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/paradigm-shift-key-to-competing/",
"components": [
"blog",
"paradigm-shift-key-to-competing"
],
"summary": null,
"toc": [],
"word_count": 485,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/devops-in-the-enterprise.md",
"colocated_path": null,
"content": "<p>Is it Enterprise DevOps or DevOps in the enterprise? I guess it all depends on where you sit. DevOps has been a significant change to how many modern technology organizations approach systems development and support. While many have found it to be a major productivity boost, it represents a threat in "BTTWWHADI" evangelists in some organizations. Let's start with two definitions: </p>\n<ul>\n<li>\n<p>DevOps: DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from Agile methodology. Credit: https://en.wikipedia.org/wiki/DevOps </p>\n</li>\n<li>\n<p>BTTWWHADI : This is shorthand for "But That's The Way We Have Always Done It." Credit: Unknown </p>\n</li>\n</ul>\n<h2 id=\"where-we-come-from\">Where we come from...</h2>\n<p>If we look at some successful Enterprise technology areas, they have had long term success by sticking with what works. Cleanly partitioned technical responsibilities (analysts, developers, DBAs, network admins, sysadmins, etc.), a waterfall approach to development, a "stay in your lane" accountability matrix (e.g., you write the app, I'll get it platformed), rack 'em and stack 'em approach to hardware, etc.</p>\n<p>While no one can deny this type of discipline has served many well, Enterprise technology's current generation offers us a much more flexible approach. Today, virtually all hardware is virtualized (on and off-premise), and cloud vendors offer things like platforms as a service, databases as a service, security as a service...etc. 
These innovations have allowed many companies to completely rethink how they want to spend their technology resources (budget, people, mindshare), with the most enlightened organizations quickly concluding that they should spend their human capital in spaces where they can create competitive advantages while purchasing those parts of their technology ecosystem that are more commoditized.</p>\n<p>An example of this would be a retail company thinking more about creating business intelligence than about setting up new hardware for a database server. A database can be scaled in the cloud, leaving the retail enterprise more human capital to figure out how to drive revenue. Those who are not embracing the change DevOps affords are most often using a BTTWWHADI argument. </p>\n<h2 id=\"not-everyone-is-ready-for-a-revolution\">Not everyone is ready for a revolution...</h2>\n<p>So, if DevOps is such a revolution, why do so many corporations have such an issue getting DevOps strategies to work for them? The answer lies in culture. For DevOps to be effective, an organization needs to be willing to take out a blank sheet of paper and draw a picture of what could be if they tore down yesterday's constraints and looked toward today's innovations. They need to match that picture up against their current staff and recognize that many jobs (and many skills) need to be re-learned or acquired. No longer is so much specialization required in specific fixed assets (data centers, computers, network devices, security devices, etc.). In a modern DevOps world, much of the infrastructure is virtualized (giving rise to infrastructure as code). </p>\n<p>To some extent, this means that your infrastructure staff will start to look more and more like developers. Instead of a team plugging in servers, routers, and load balancers into a network backbone, they will be using scripting to configure equivalent services on virtualized hardware. 
On the development and operational side, CI/CD pipelines and process automation drive out many manual processes involved in yesterday's software development lifecycle. For development, the beginnings of this revolution date back to test-driven development. Today's modern pipelines go from development through testing, integration, and deployment. While everything is automatable, many have stopping points in their pipeline where human interaction is required to review test results or confirm final deployments to production. Whether you are in infrastructure or development, BTTWWHADI just won't do anymore. To compete, everyone will need to skill up and focus on architecture, automation, XaaS, and scripting/coding to decrease time to market while improving quality and resilience. </p>\n<h2 id=\"so-what-s-the-big-deal\">So, what's the big deal…</h2>\n<p>DevOps can be a threat to those who aren't ready for it (the BTTWWHADI crowd). If your job is configuring hardware or running manual software tests, you might see these functions being automated into 'coding' jobs. This function change could pose a severe career problem for those team members who don't see this evolution coming and fail to get prepared through education and training. Unprepared staff becomes resistant to change (understandably), yet those who are prepared end up in a better position (read: more career security, mobility, and better pay) as automation experts are now far more sought after than traditional hardware configuration engineers (as a gross generalization). Please do not misunderstand; traditional system engineers are still valuable members of most enterprise teams, but as DevOps and virtualization take hold, those jobs will change. Get prepared, train your staff, and address the culture change head-on. </p>\n<p>If you need help with your journey, <a href=\"https://tech.fpcomplete.com/contact-us/\">contact FP Complete</a>. This is who we are and what we do. </p>\n",
"permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/",
"slug": "devops-in-the-enterprise",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps in the Enterprise: What could be better? What could go wrong?",
"description": "",
"updated": null,
"date": "2020-10-09",
"year": 2020,
"month": 10,
"day": 9,
"taxonomies": {
"categories": [
"devops",
"insights"
],
"tags": [
"devops",
"insights"
]
},
"authors": [],
"extra": {
"author": "Wes Crook",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/devops-in-the-enterprise/",
"components": [
"blog",
"devops-in-the-enterprise"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "where-we-come-from",
"permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#where-we-come-from",
"title": "Where we come from...",
"children": []
},
{
"level": 2,
"id": "not-everyone-is-ready-for-a-revolution",
"permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#not-everyone-is-ready-for-a-revolution",
"title": "Not everyone is ready for a revolution...",
"children": []
},
{
"level": 2,
"id": "so-what-s-the-big-deal",
"permalink": "https://tech.fpcomplete.com/blog/devops-in-the-enterprise/#so-what-s-the-big-deal",
"title": "So, what's the big deal…",
"children": []
}
],
"word_count": 827,
"reading_time": 5,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/cloud-for-non-natives.md",
"colocated_path": null,
"content": "<p>Does this mean if you weren't born in the cloud, you'll never be as good as those who are? </p>\n<p>When thinking about building from scratch or modernizing an existing technology environment, we tend to see one of a few different things happening: </p>\n<ul>\n<li>Staff will read up, and you will try it on your own. </li>\n<li>Managers will hire someone who says they have done it before. </li>\n<li>Leaders will engage a large software vendor or consulting firm to help get them to the promised land. </li>\n</ul>\n<p>While all of these strategies can work, we often find one of the following happens: </p>\n<ul>\n<li>Trial and error result in very expensive under delivery. </li>\n<li>Existing teams become disaffected and resistive because they perceive being left behind. </li>\n<li>Something gets delivered, but costs go up, and reliability goes down. </li>\n<li>New hires come in, make the magic happen, and then move on without leaving enough knowhow to continue without them. </li>\n<li>Vendors use proprietary software, and a new age of vendor lock-in ensues. </li>\n</ul>\n<p>There is a better way of approaching modernizing a business-focused, legacy world. Our core approach at FP complete is: </p>\n<ul>\n<li>Be vendor agnostic </li>\n<li>Build a road map based on business outcomes </li>\n<li>Deeply understand and implement DevOps concepts </li>\n<li>Be ruthlessly focused on architecture from the start </li>\n<li>Containerize everything* </li>\n<li>Virtualize everything*</li>\n</ul>\n<p>While this approach is straightforward, staying focused on outcomes is the key: </p>\n<ul>\n<li>The business logic is the key to build your ecosystem once and properly so you can focus on what matters. </li>\n<li>Integrate security by design as security is a non-non-negotiable. </li>\n<li>All alerts and logs centrally as managing and operating via complete transparency is key. </li>\n<li>Ensure Containers are made to scale horizontally and be fault-tolerant from the start. 
</li>\n<li>Ensure you are on-prem and cloud-agnostic. </li>\n<li>Be open-source but get enterprise support. </li>\n</ul>\n<p>How do you get help without breaking the bank, compromising your values, or getting locked in? </p>\n<p>At FP Complete, we believe the way to get started is to: </p>\n<ul>\n<li>Build DevOps expertise and acquire DevOps tooling. </li>\n<li>Get help constructing your roadmap to ensure technical focus aligns with business results. </li>\n<li>Get help designing how your applications will get containerized to be cloud-ready. </li>\n<li>Acquire enterprise support for your newly open-sourced world. </li>\n</ul>\n<p>FP Complete has a unique track record in these activities. We are not built on recurring revenue from long-term consulting. We are built on helping our customers build better software, run better technology operations, and achieve better business outcomes. We come from diverse backgrounds and have serviced a myriad of industries. We often find that others have already solved many of our clients' problems, and our expertise lies in matching existing solutions to the places where they are needed most. </p>\n<p>So, what is the best way to get started? </p>\n<ol>\n<li>Send us an email or give us a call. </li>\n<li>We will walk through your aspirations and provide a high-level road map for achieving your goals at no cost. </li>\n<li>If you like what you see, invite us in for a POC based on a 100% ROI. </li>\n<li>Scale from there. </li>\n</ol>\n<p>If you are unsure about the claims in this post, shoot me an email. You won't get a bot response; you'll get me. </p>\n<p>*Note: the exceptions to these rules are usually around ultra-low latency requirements. </p>\n",
"permalink": "https://tech.fpcomplete.com/blog/cloud-for-non-natives/",
"slug": "cloud-for-non-natives",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloud for Non-Natives",
"description": "Faster time to market and lower failure rate are the beginning of the many benefits DevOps offers companies. Discover the measurable metrics and KPIs, as well as the true business value DevOps offers.",
"updated": null,
"date": "2020-10-02",
"year": 2020,
"month": 10,
"day": 2,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops",
"insights"
]
},
"authors": [],
"extra": {
"author": "Wes Crook",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/cloud-for-non-natives/",
"components": [
"blog",
"cloud-for-non-natives"
],
"summary": null,
"toc": [],
"word_count": 545,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/rust-for-devops-tooling.md",
"colocated_path": null,
"content": "<p>A beginner's guide to writing your DevOps tools in Rust.</p>\n<h2 id=\"introduction\">Introduction</h2>\n<p>In this blog post we'll cover some basic DevOps use cases for Rust and why \nyou would want to use it.\nAs part of this, we'll also cover a few common libraries you will likely use\nin a Rust-based DevOps tool for AWS.</p>\n<p>If you're already familiar with writing DevOps tools in other languages,\nthis post will explain why you should try Rust.</p>\n<p>We'll cover why Rust is a particularly good choice of language to write your DevOps\ntooling and critical cloud infrastructure software in.\nAnd we'll also walk through a small demo DevOps tool written in Rust. \nThis project will be geared towards helping someone new to the language ecosystem \nget familiar with the Rust project structure.</p>\n<p>If you're brand new to Rust, and are interested in learning the language, you may want to start off with our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a>.</p>\n<h2 id=\"what-makes-the-rust-language-unique\">What Makes the Rust Language Unique</h2>\n<blockquote>\n<p>Rust is a systems programming language focused on three goals: safety, speed, \nand concurrency. It maintains these goals without having a garbage collector, \nmaking it a useful language for a number of use cases other languages aren’t \ngood at: embedding in other languages, programs with specific space and time \nrequirements, and writing low-level code, like device drivers and operating systems. </p>\n</blockquote>\n<p><em>The Rust Book (first edition)</em></p>\n<p>Rust was initially created by Mozilla and has since gained widespread adoption and\nsupport. 
As the quote from the Rust book alludes to, it was designed to fill the \nsame space that C++ or C would (in that it doesn’t have a garbage collector or a runtime).\nBut Rust also incorporates zero-cost abstractions and many concepts that you would\nexpect in a higher-level language (like Go or Haskell).\nFor that, and many other reasons, Rust's uses have expanded well beyond that\noriginal space as a safe, low-level systems language.</p>\n<p>Rust's ownership system is extremely useful in efforts to write correct and \nresource-efficient code. Ownership is one of the killer features of the Rust \nlanguage and helps programmers catch classes of resource errors at compile time \nthat other languages miss or ignore.</p>\n<p>Rust is an extremely performant and efficient language, comparable to the speeds \nyou see with idiomatic everyday C or C++.\nAnd since there isn’t a garbage collector in Rust, it’s a lot easier to get \npredictable, deterministic performance.</p>\n<h2 id=\"rust-and-devops\">Rust and DevOps</h2>\n<p>What makes Rust unique also makes it very useful for areas ranging from robotics \nto rocketry, but are those qualities relevant for DevOps?\nDo we care if we have efficient executables or fine-grained control over \nresources, or is Rust a bit overkill for what we typically need in DevOps?</p>\n<p><em>Yes and no.</em></p>\n<p>Rust is clearly useful for situations where performance is crucial and actions \nneed to occur in a deterministic and consistent way. That obviously translates to \nlow-level places where previously C and C++ were the only game in town. \nIn those situations, before Rust, people simply had to accept the inherent risk and \nadditional development costs of working on a large code base in those languages.\nRust now allows us to operate in those areas but without the risk that C and C++\ncan add.</p>\n<p>But with DevOps and infrastructure programming we aren't constrained by those \nrequirements. 
For DevOps we've been able to choose from languages like Go, Python, \nor Haskell because we're not strictly limited by the use case to languages without \ngarbage collectors. Since we can reach for other languages, you might argue \nthat using Rust is a bit overkill, but let's go over a few points to counter this.</p>\n<h3 id=\"why-you-would-want-to-write-your-devops-tools-in-rust\">Why you would want to write your DevOps tools in Rust</h3>\n<ul>\n<li>Small executables relative to other options like Go or Java</li>\n<li>Easy to port across different OS targets</li>\n<li>Efficient with resources (which helps cut down on your AWS bill) </li>\n<li>One of the fastest languages (even when compared to C)</li>\n<li>Zero-cost abstractions - Rust is a low-level, performant language which also\ngives us the benefits of a high-level language with its generics and abstractions.</li>\n</ul>\n<p>To elaborate on some of these points a bit further:</p>\n<h4 id=\"os-targets-and-cross-compiling-rust-for-different-architectures\">OS targets and Cross Compiling Rust for different architectures</h4>\n<p>For DevOps it's also worth mentioning the (relative) ease with which you can \nport your Rust code across different architectures and different OSes. 
</p>\n<p>Using the official Rust toolchain installer <code>rustup</code>, it's easy to get the \nstandard library for your target platform.\nRust <a href=\"https://doc.rust-lang.org/nightly/rustc/platform-support.html\">supports a great number of platforms</a>\nwith different tiers of support.\nThe docs for the <code>rustup</code> tool have <a href=\"https://rust-lang.github.io/rustup/cross-compilation.html\">a section</a>\ncovering how you can access pre-compiled artifacts for various architectures.\nTo install the target platform for an architecture (other than the host platform, which is installed by default)\nyou simply need to run <code>rustup target add</code>:</p>\n<pre><code>$ rustup target add x86_64-pc-windows-msvc \ninfo: downloading component 'rust-std' for 'x86_64-pc-windows-msvc'\ninfo: installing component 'rust-std' for 'x86_64-pc-windows-msvc'\n</code></pre>\n<p>Cross compilation is already built into the Rust compiler by default. \nOnce the <code>x86_64-pc-windows-msvc</code> target is installed, you can build for Windows \nwith the <code>cargo</code> build tool using the <code>--target</code> flag:</p>\n<pre><code>cargo build --target=x86_64-pc-windows-msvc\n</code></pre>\n<p>(The default target is always the host architecture.)</p>\n<p>If one of your dependencies links to a native (i.e. non-Rust) library, you will\nneed to make sure that those cross-compile as well. Doing <code>rustup target add</code>\nonly installs the Rust standard library for that target. 
However, for the other \ntools that are often needed when cross-compiling, there is the handy\n<a href=\"https://github.com/rust-embedded/cross\">github.com/rust-embedded/cross</a> tool.\nThis is essentially a wrapper around cargo which does all cross compilation in \nDocker images that have all the necessary bits (linkers) and pieces installed.</p>\n<h4 id=\"small-executables\">Small Executables</h4>\n<p>A key unique feature of Rust is that it doesn't need a runtime or a garbage collector.\nCompare this to languages like Python or Haskell: with Rust, the lack of any runtime\ndependencies (Python) or system libraries (as with Haskell) is a huge advantage \nfor portability.</p>\n<p>For practical purposes, as far as DevOps is concerned, this portability means \nthat Rust executables are much easier to deploy than scripts.\nWith Rust, compared to Python or Bash, we don't need to set up the environment for \nour code ahead of time. This frees us up from having to worry whether the runtime \ndependencies for the language are set up.</p>\n<p>In addition to that, with Rust you're able to produce 100% static executables for \nLinux using the MUSL libc (and by default Rust will statically link all Rust code). \nThis means that you can deploy your Rust DevOps tool's binaries across your Linux \nservers without having to worry whether the correct <code>libc</code> or other libraries were \ninstalled beforehand.</p>\n<p>Creating static executables for Rust is simple. 
As we discussed before, when covering\ndifferent OS targets, it's easy with Rust to switch the target you're building against.\nTo compile static executables for the Linux MUSL target, all you need to do is add \nthe <code>musl</code> target with:</p>\n<pre><code>$ rustup target add x86_64-unknown-linux-musl\n</code></pre>\n<p>Then you can use this new target to build your Rust project as a fully static \nexecutable with:</p>\n<pre><code>$ cargo build --target x86_64-unknown-linux-musl\n</code></pre>\n<p>As a result of not having a runtime or a garbage collector, Rust executables \ncan be extremely small. For example, there is a common DevOps tool called \nCredStash that was originally written in Python but has since been \nported to Go (GCredStash) and now Rust (RuCredStash).</p>\n<p>Comparing the executable sizes of the Rust versus Go implementations of CredStash,\nthe Rust executable is nearly a quarter of the size of the Go variant. </p>\n<table><thead><tr><th>Implementation</th><th>Executable Size</th></tr></thead><tbody>\n<tr><td>Rust CredStash: (RuCredStash Linux amd64)</td><td>3.3 MB</td></tr>\n<tr><td>Go CredStash: (GCredStash Linux amd64 v0.3.5)</td><td>11.7 MB</td></tr>\n</tbody></table>\n<p>Project links:</p>\n<ul>\n<li><a href=\"https://github.com/psibi/rucredstash\">github.com/psibi/rucredstash</a></li>\n<li><a href=\"https://github.com/winebarrel/gcredstash\">github.com/winebarrel/gcredstash</a></li>\n</ul>\n<p>This is by no means a perfect comparison, and 8 MB may not seem like a lot, but\nconsider the advantage of automatically having executables that are a quarter of the \nsize you would typically expect. </p>\n<p>This cuts down on the size your Docker images, AWS AMIs, or Azure VM images need\nto be - and that helps speed up the time it takes to spin up new deployments.</p>\n<p>With a tool of this size, the benefit of an executable that is 75% smaller than it \nwould otherwise be is not immediately apparent. 
On this scale the difference, 8 MB,\nis still quite cheap.\nBut with larger tools (or collections of tools and Rust-based software) the benefits\nadd up and the difference begins to be a practical and worthwhile consideration.</p>\n<p>The Rust implementation was also not strictly written with the resulting size of \nthe executable in mind. So if executable size were an even more important \nfactor, other changes could be made - but that's beyond the scope of this post.</p>\n<h4 id=\"rust-is-fast\">Rust is fast</h4>\n<p>Rust is very fast, even for common idiomatic everyday Rust code. Not only that,\nit's arguably easier to work with than C and C++, and easier to catch errors in your \ncode.</p>\n<p>For the Fortunes benchmark (which exercises the ORM, \ndatabase connectivity, dynamic-size collections, sorting, server-side templates, \nXSS countermeasures, and character encoding) Rust is second and third, only lagging \nbehind the first-place C++ based framework by 4 percent. </p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-fortunes.png\" style=\"max-width:95%\">\n<p>In the benchmark for database access for a single query, Rust is first and second:</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-single-query.png\" style=\"max-width:95%\">\n<p>And in a composite of all the benchmarks, Rust-based frameworks are in second and third place.</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-composite.png\" style=\"max-width:95%\">\n<p>Of course, language and framework benchmarks are not real life; however, this is \nstill a fair comparison of the languages as they relate to others (within the context \nand the focus of the benchmark).</p>\n<p>Source: <a href=\"https://www.techempower.com/benchmarks/\">https://www.techempower.com/benchmarks</a></p>\n<h3 id=\"why-would-you-not-want-to-write-your-devops-tools-in-rust\">Why would you not want to write your DevOps tools in Rust?</h3>\n<p>For medium to large projects, it’s important to have a type system 
and compile \ntime checks like those in Rust versus what you would find in something like Python\nor Bash.\nThe latter languages let you get away with things far more readily. This makes \ndevelopment much "faster" in one sense.</p>\n<p>Certain situations, especially those involving small project codebases, would \nbenefit more from using an interpreted language. In these cases, being able to quickly \nchange pieces of the code without needing to re-compile and re-deploy the project\noutweighs the benefits (in terms of safety, execution speed, and portability)\nthat languages like Rust bring. </p>\n<p>Working with and iterating on a Rust codebase in those circumstances, with frequent\nbut small codebase changes, would be needlessly time-consuming.\nIf you have a small codebase with few or no runtime dependencies, then it wouldn't\nbe worth it to use Rust.</p>\n<h2 id=\"demo-devops-project-for-aws\">Demo DevOps Project for AWS</h2>\n<p>We'll briefly cover some of the libraries typically used for an AWS-focused \nDevOps tool in a walk-through of a small demo Rust project here. \nThis aims to provide a small example that uses some of the libraries you'll likely\nwant if you’re writing a CLI-based DevOps tool in Rust. 
Specifically for this \nexample we'll show a tool that does some basic operations against AWS S3 \n(creating new buckets, adding files to buckets, listing the contents of buckets).</p>\n<h3 id=\"project-structure\">Project structure</h3>\n<p>For AWS integration we're going to utilize the <a href=\"https://www.rusoto.org/\">Rusoto</a> library.\nSpecifically, for our modest demo Rust DevOps tool we're going to pull in the \n<a href=\"https://docs.rs/rusoto_core/0.45.0/rusoto_core/\">rusoto_core</a> and the \n<a href=\"https://docs.rs/rusoto_s3/0.45.0/rusoto_s3/\">rusoto_s3</a> crates (in Rust a <em>crate</em>\nis akin to a library or package).</p>\n<p>We're also going to use the <a href=\"https://docs.rs/structopt/0.3.16/structopt/\">structopt</a> crate\nfor our CLI options. This is a handy, batteries-included CLI library that makes \nit easy to create a CLI interface around a Rust struct. </p>\n<p>The tool operates by matching the CLI options and arguments the user passes in \nwith a <a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs#L211\"><code>match</code> expression</a>.</p>\n<p>We can then use this to match on the part of the CLI option struct we've defined \nand call the appropriate functions for that option.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">match opt {\n Opt::Create { bucket: bucket_name } => {\n println!("Attempting to create a bucket called: {}", bucket_name);\n let demo = S3Demo::new(bucket_name);\n create_demo_bucket(&demo);\n },\n</code></pre>\n<p>This matches on the <a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs#L182\"><code>Create</code></a>\nvariant of the <code>Opt</code> enum. 
</p>\n<p>We then use <code>S3Demo::new(bucket_name)</code> to create a new <code>S3Client</code>, which we can\nuse in the standalone <code>create_demo_bucket</code> function that we've defined, \nwhich will create a new S3 bucket.</p>\n<p>The tool is fairly simple, with most of the code located in \n<a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs\">src/main.rs</a>.</p>\n<h3 id=\"building-the-rust-project\">Building the Rust project</h3>\n<p>Before you build the code in this project, you will need to install Rust. \nPlease follow <a href=\"https://www.rust-lang.org/tools/install\">the official install instructions here</a>.</p>\n<p>The default build tool for Rust is called Cargo. It's worth getting familiar \nwith <a href=\"https://doc.rust-lang.org/cargo/guide/\">the docs for Cargo</a>,\nbut here's a quick overview for building the project.</p>\n<p>To build the project, run the following from the root of the \n<a href=\"https://github.com/fpco/rust-aws-devops\">git repo</a>:</p>\n<pre><code>cargo build\n</code></pre>\n<p>You can then use <code>cargo run</code> to run the code or execute the code directly\nwith <code>./target/debug/rust-aws-devops</code>:</p>\n<pre><code>$ ./target/debug/rust-aws-devops \n\nRunning tool\nRustAWSDevops 0.1.0\nMike McGirr <[email protected]>\n\nUSAGE:\n rust-aws-devops <SUBCOMMAND>\n\nFLAGS:\n -h, --help Prints help information\n -V, --version Prints version information\n\nSUBCOMMANDS:\n add-object Add the specified file to the bucket\n create Create a new bucket with the given name\n delete Try to delete the bucket with the given name\n delete-object Remove the specified object from the bucket\n help Prints this message or the help of the given subcommand(s)\n list Try to find the bucket with the given name and list its objects\n</code></pre>\n<p>This will output the nice CLI help text automatically created for us \nby <code>structopt</code>.</p>\n<p>If you're ready to build 
a release version (with optimizations turned on, which \nwill make compilation take slightly longer) run the following:</p>\n<pre><code>cargo build --release\n</code></pre>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>As this small demo showed, it's not difficult to get started using Rust to write\nDevOps tools. And even then we didn't need to make a trade-off between ease of\ndevelopment and performant, fast code. </p>\n<p>Hopefully the next time you're writing a new piece of DevOps software, \nanything from a simple CLI tool for a specific DevOps operation to the next Kubernetes, \nyou'll consider reaching for Rust.\nAnd if you have further questions about Rust, or need help implementing your Rust \nproject, please feel free to reach out to FP Complete for Rust engineering \nand training!</p>\n<p>Want to learn more Rust? Check out our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a>. And for more information, check out our <a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a>.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/",
"slug": "rust-for-devops-tooling",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Using Rust for DevOps tooling",
"description": "A beginner's guide to writing your DevOps tools in Rust.",
"updated": null,
"date": "2020-09-09",
"year": 2020,
"month": 9,
"day": 9,
"taxonomies": {
"tags": [
"devops",
"rust",
"insights"
],
"categories": [
"functional programming",
"devops"
]
},
"authors": [],
"extra": {
"author": "Mike McGirr",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "/blog/rust-for-devops-tooling/",
"components": [
"blog",
"rust-for-devops-tooling"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introduction",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#introduction",
"title": "Introduction",
"children": []
},
{
"level": 2,
"id": "what-makes-the-rust-language-unique",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#what-makes-the-rust-language-unique",
"title": "What Makes the Rust Language Unique",
"children": []
},
{
"level": 2,
"id": "rust-and-devops",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#rust-and-devops",
"title": "Rust and DevOps",
"children": [
{
"level": 3,
"id": "why-you-would-want-to-write-your-devops-tools-in-rust",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#why-you-would-want-to-write-your-devops-tools-in-rust",
"title": "Why you would want to write your DevOps tools in Rust",
"children": [
{
"level": 4,
"id": "os-targets-and-cross-compiling-rust-for-different-architectures",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#os-targets-and-cross-compiling-rust-for-different-architectures",
"title": "OS targets and Cross Compiling Rust for different architectures",
"children": []
},
{
"level": 4,
"id": "small-executables",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#small-executables",
"title": "Small Executables",
"children": []
},
{
"level": 4,
"id": "rust-is-fast",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#rust-is-fast",
"title": "Rust is fast",
"children": []
}
]
},
{
"level": 3,
"id": "why-would-you-not-want-to-write-your-devops-tools-in-rust",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#why-would-you-not-want-to-write-your-devops-tools-in-rust",
"title": "Why would you not want to write your DevOps tools in Rust?",
"children": []
}
]
},
{
"level": 2,
"id": "demo-devops-project-for-aws",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#demo-devops-project-for-aws",
"title": "Demo DevOps Project for AWS",
"children": [
{
"level": 3,
"id": "project-structure",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#project-structure",
"title": "Project structure",
"children": []
},
{
"level": 3,
"id": "building-the-rust-project",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#building-the-rust-project",
"title": "Building the Rust project",
"children": []
}
]
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/rust-for-devops-tooling/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2540,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/cloud-vendor-neutrality/",
"title": "Cloud Vendor Neutrality"
},
{
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
"title": "Levana NFT Launch"
},
{
"permalink": "https://tech.fpcomplete.com/blog/of-course-it-compiles-right/",
"title": "Rust: Of course it compiles, right?"
},
{
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
"title": "Deploying Rust with Windows Containers on Kubernetes"
},
{
"permalink": "https://tech.fpcomplete.com/rust/",
"title": "FP Complete Rust"
}
]
},
{
"relative_path": "blog/devops-unifying-dev-ops-qa.md",
"colocated_path": null,
"content": "<p>The term DevOps has been around for many years. Small and big companies adopt DevOps concepts for different purposes, e.g. to increase the quality of software. In this blog post, we define DevOps, present its pros and cons, highlight a few concepts and see how these can impact the entire organization.</p>\n<h2 id=\"what-is-devops\">What is DevOps?</h2>\n<p>At a high level, DevOps is understood as a technical, organizational and cultural shift in a company to run software more efficiently, reliably, and securely. From this first definition, we can see that DevOps is much more than "use tool X" or "move to the cloud". DevOps starts with the understanding that development (Dev), operations (Ops) and quality assurance (QA) are not treated as siloed disciplines anymore. Instead, they all come together in shared processes and responsibilities across collaborating teams. DevOps achieves this through various techniques. In the section "Implementation", we present a few of these concepts.</p>\n<h2 id=\"benefits\">Benefits</h2>\n<p>Benefits of applying DevOps include:</p>\n<ul>\n<li>Cost savings through higher efficiency.</li>\n<li>Faster software iteration cycles, where updates take less time from development to running in production.</li>\n<li>More security, reliability, and fault tolerance when running software.</li>\n<li>Stronger bonds between different stakeholders in the organization including non-technical staff.</li>\n<li>Enable more data-driven decisions.</li>\n</ul>\n<p>Let's have a look <em>how</em> these benefits can be achieved by applying DevOps ideas:</p>\n<h2 id=\"how-to-implement-devops\">How to implement DevOps</h2>\n<h3 id=\"automation-and-continuous-integration-ci-continuous-delivery-cd\">Automation and Continuous Integration (CI) / Continuous Delivery (CD)</h3>\n<p>Automation refers to a key aspect of the engineering-driven part of DevOps. 
With automation, we aim to reduce the need for human action, and thus the possibility of human error, as far as possible by sending your software through an automated and well-understood pipeline of actions. These automated actions can build your software, run unit tests, integrate it with existing systems, run system tests, deploy it, and provide feedback on each step. What we are\ndescribing here is usually referred to as <strong>Continuous Integration (CI)</strong> and <strong>Continuous Delivery (CD)</strong>. Adopting CI/CD invests in a low-risk and low-cost way of crossing the chasm between "software that is working on an engineer's laptop" and "software that running securely and reliably on production servers".</p>\n<p>CI/CD is usually tied to a platform on top of which the automated actions are run, e.g., Gitlab. The platform accepts software that should be passed through the pipeline, executes the automated actions on servers which are usually abstracted away, and provides feedback to the engineering team. These actions can be highly customized and tied together in different ways. For example, one action only compiles the source code and provides the build artifacts to subsequent actions. Another action can be responsible for running a test-suite, another one can deploy software. Such actions can be defined for different types of software: A website can be automatically deployed to a server, or a Desktop application can be made available to your customers without human interaction.</p>\n<p>Besides the fact that CI/CD can be used for all kinds of software, there are other advantages to consider:</p>\n<ol>\n<li><strong>The CI/CD pipeline is well-understood and maintained by the teams</strong>: the actions that are run in a pipeline can be flexibly updated, extended, etc. 
<a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#infra-as-code\">Infrastructure as Code</a> can be a powerful concept here.</li>\n<li><strong>Run in standardized environments</strong>: Version conflicts between tools and configuration or dependency mismatches only have to be fixed once when the pipeline is built. Once a pipeline is working, it will continue to work as the underlying servers and their software versions don't change. No more conflicts between operating systems, tools, versions of tools across different engineers. Pipelines are highly reproducible. Containerization can be a game-changer here.</li>\n<li><strong>Feedback</strong>: Actions sometimes fail, e.g. because a unit test does not pass. The CI/CD platform usually allows different reporting mechanisms: E-mail someone, update the project status on your repository overview page, block subsequent actions or cancel other pipelines.</li>\n</ol>\n<p>The next sections cover more DevOps concepts that benefit from automation.</p>\n<h3 id=\"multiple-environments\">Multiple Environments</h3>\n<p>The CI/CD can be extended by deploying software to different environments. These deployments can happen in individual actions defined in your pipeline. Besides the production environment, which runs user-facing software, staging and testing environments can be defined where software is deployed to. For example, a testing environment can be used by the engineering team for peer-reviewing and validating software changes. Once the team agreed on new software, it can be deployed to a staging environment. A usual purpose of the staging environment is to mimic the production environment as closely as possible. Further tests can be run in a staging environment to make sure the software is ready to be used by real users. Finally, the software reaches production-readiness and is deployed to a production environment. Such a production deployment can be designed using a gradual rollout, i.e. 
canary deployments.</p>\n<p>Different environments not only realize different semantics and confidence levels of running software, e.g. as described in the previous paragraph, but also serve as an agreed-upon view on software in the entire organization. Multi-environment deployments make your software and quality thereof easier to understand. This is because of the gained insights when running software, in particular on infrastructure that is close to a production setting. Generally, running software gives much more insights into the performance, reliability, security, production-readiness and overall quality. Different teams, e.g. security experts or a dedicated QA-team (if your organization follows this practice) can be consulted at different software quality stages, i.e. different environments in which software runs. Additionally, non-technical staff can use environments, e.g. specialized ones for demo purposes.</p>\n<p>Ultimately, integrating multiple environments structures QA and smoothens the interactions between different teams.</p>\n<h3 id=\"fail-early\">Fail early</h3>\n<p>No matter how well things are working in an organization that builds software, bugs happen and bugs are expensive. The cost of bugs can be projected to the manpower invested into fixing the bug, the loss of reputation due to angry customers, and generally negative business impact. Since we can't fully avoid bugs, there exist concepts to reduce both the frequency and impact of bugs. "Fail early" is one of these concepts.</p>\n<p>The basic idea is to catch bugs and other flaws in your software as early in the development process as possible. When software is developed, unit tests, compiler errors and peer reviews count towards the early and cheap mechanisms to detect and fix flaws. Ideally, a unit test tells the developer that the software is not correct, or, a second pair of eyes reveals a potential performance issue during a code review. 
In both cases, not much time and effort is lost and the flaw can be easily fixed. However, other bugs might make it through these initial checks and land in testing or staging environments. Other types of tests and QA should be in place there to check the software quality. In the worst case, a bug outlives all checks and reaches production. There, bugs have a much higher impact and require more effort from many stakeholders, e.g. the bug fix by the engineering team and the apology to the customers.</p>\n<p>Cheap checks, such as running a test suite in an automated pipeline, should therefore be executed early, because flaws discovered later in the process result in higher costs. Thus, failing early increases cost efficiency.</p>\n<h3 id=\"rollbacks\">Rollbacks</h3>\n<p>DevOps can also help to react quickly to changes. One example of a sudden change is a bug, as described in the last section, which is discovered in the production environment. Rollbacks, for example as manually triggered pipelines, can restore a production service to a working state in a timely manner. This is useful when the bug is a hard one and needs hours to be identified and fixed. These hours of degraded customer experience or even downtime make paying customers unhappy. A faster mechanism is desired, one which minimizes the gap between a faulty system and a recovered system. A rollback can be a fast and effective way to recover system state without prolonged customer exposure to the failure.</p>\n<h3 id=\"policies\">Policies</h3>\n<p>DevOps concepts pose a challenge to security and permission management, as they span the entire organization. Policies can help to formulate authorizations and rules during operations. 
For example, an organization may need to implement the following security requirements:</p>\n<ul>\n<li>A deployment or rollback in production should not be triggered by anyone but a well-defined set of people in authority.</li>\n<li>Some actions in a CI/CD pipeline should always be run, while other actions are intended to be triggered manually or only run under certain conditions.</li>\n<li>The developers might require slightly different permissions than a dedicated QA team to perform their day-to-day work.</li>\n<li>Humans and machine users can have different capabilities but should always have the least privileges assigned to them.</li>\n</ul>\n<p>The authentication and authorization tools provided by CI/CD providers or cloud vendors can help to design such policies according to your organizational needs.</p>\n<h3 id=\"observability\">Observability</h3>\n<p>As software is running and users are interacting with your applications, insights such as error rates, performance statistics, resource usage, etc. can help to identify bottlenecks, mitigate future issues, and drive business decisions through data. There are two major ways to establish different forms of observability:</p>\n<ul>\n<li><strong>Logging</strong>: Events in text form that software outputs to inform about the application's status and health. Different types of logging messages, e.g. indicating the severity of an error event, can help to aggregate and display log messages in a central place, where they can be used by engineering teams for debugging purposes.</li>\n<li><strong>Metrics</strong>: Information about the running software that is not generated by the application itself. For example, the CPU or memory usage of the underlying machine that runs the software, network statistics, HTTP error rates, etc. As with logging, metrics can help to spot bottlenecks and mitigate them before they have a business impact. 
Visualizing aggregated metrics data facilitates communication across technical and non-technical teams and enables data-driven decisions. Metrics dashboards can strengthen the shared ownership of software across teams.</li>\n</ul>\n<p>Logging and metrics can help to define goals and to align a development team with a QA team, for example.</p>\n<h2 id=\"disadvantages\">Disadvantages</h2>\n<p>So far, we have only looked at the benefits and characteristics of DevOps. Let's have a brief look at the other side of the coin by commenting on the possible negative side effects and disadvantages of adopting DevOps concepts.</p>\n<ul>\n<li>\n<p>The investment into DevOps can be huge, as it is a company-wide, multi-discipline, and multi-team transformation that not only requires technical implementation effort but also training for people, and re-structuring and aligning teams.</p>\n</li>\n<li>\n<p>This goes along with the first point, but it's worth emphasizing: the cultural impact on your organization can be challenging due to human factors. While a new automation mechanism can be estimated and implemented reasonably well, tracking the progress of changing how people communicate, feel ownership, and align to new processes is hard, and in the short term it might not yield the efficiency gains DevOps promises. Due to its high impact, DevOps is a long-term investment.</p>\n</li>\n<li>\n<p>The technical backbone of DevOps, e.g. CI/CD pipelines, cloud vendors, and integrated authorization and authentication, likely results in increased expenses through new contracts and licenses with new players. However, thanks to the dominance of open source in modern DevOps tooling, e.g. Kubernetes, vendor lock-in can be avoided.</p>\n</li>\n</ul>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>In this blog post, we explored the definition of DevOps and presented several DevOps concepts and use-cases. Furthermore, we evaluated its benefits and disadvantages. 
Adopting DevOps is an investment into a low-friction and automated way of developing, testing, and running software. Technical improvements, e.g. automation, as well as increased collaboration between teams of different disciplines, ultimately improve your organization's efficiency in the long term.</p>\n<p>However, DevOps is not only a technical effort; it also impacts the entire company, e.g. how teams communicate with each other, how issues are resolved, and what teams feel responsible for. Finding the right balance and choosing the best concepts and tools for your teams represents a challenge. We can help you identify and carry out the DevOps transformation in your organization.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/",
"slug": "devops-unifying-dev-ops-qa",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps: Unifying Dev, Ops, and QA",
"description": "The term DevOps has been around for many years. Small and big companies adopt DevOps concepts for different purposes, e.g. to increase the quality of software. In this blog post, we define DevOps, present its pros and cons, highlight a few concepts and see how these can impact the entire organization.",
"updated": null,
"date": "2020-08-24",
"year": 2020,
"month": 8,
"day": 24,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"insights"
]
},
"authors": [],
"extra": {
"author": "Moritz Hoffmann",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/devops-unifying-dev-ops-qa/",
"components": [
"blog",
"devops-unifying-dev-ops-qa"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-devops",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#what-is-devops",
"title": "What is DevOps?",
"children": []
},
{
"level": 2,
"id": "benefits",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#benefits",
"title": "Benefits",
"children": []
},
{
"level": 2,
"id": "how-to-implement-devops",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#how-to-implement-devops",
"title": "How to implement DevOps",
"children": [
{
"level": 3,
"id": "automation-and-continuous-integration-ci-continuous-delivery-cd",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#automation-and-continuous-integration-ci-continuous-delivery-cd",
"title": "Automation and Continuous Integration (CI) / Continuous Delivery (CD)",
"children": []
},
{
"level": 3,
"id": "multiple-environments",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#multiple-environments",
"title": "Multiple Environments",
"children": []
},
{
"level": 3,
"id": "fail-early",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#fail-early",
"title": "Fail early",
"children": []
},
{
"level": 3,
"id": "rollbacks",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#rollbacks",
"title": "Rollbacks",
"children": []
},
{
"level": 3,
"id": "policies",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#policies",
"title": "Policies",
"children": []
},
{
"level": 3,
"id": "observability",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#observability",
"title": "Observability",
"children": []
}
]
},
{
"level": 2,
"id": "disadvantages",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#disadvantages",
"title": "Disadvantages",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2023,
"reading_time": 11,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-deployments/",
"title": "Understanding Cloud Software Deployments"
}
]
},
{
"relative_path": "blog/devops-for-developers.md",
"colocated_path": null,
"content": "<p>In this post, I describe my personal journey as a developer skeptical\nof the seemingly ever-growing, ever more complex, array of "ops"\ntools. I move towards adopting some of these practices, ideas and\ntools. I write about how this journey helps me to write software\nbetter and understand discussions with the ops team at work.</p>\n<div style=\"border:1px solid black;background-color:#f8f8f8;margin-bottom:1em;padding: 0.5em 0.5em 0 0.5em;\">\n<p><strong>Table of Contents</strong></p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#on-being-skeptical\">On being skeptical</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#the-humble-app\">The humble app</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#disk-failures-are-not-that-common\">Disk failures are not that common</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#auto-deployment-is-better-than-manual\">Auto-deployment is better than manual</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#backups-become-worth-it\">Backups become worth it</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#deployment-staging\">Deployment staging</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#packaging-with-docker-is-good\">Packaging with Docker is good</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#custodians-multiple-processes-are-useful\">Custodians multiple processes are useful</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#kubernetes-provides-exactly-that\">Kubernetes provides exactly that</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#declarative-is-good-vendor-lock-in-is-bad\">Declarative is good, vendor lock-in is bad</a></li>\n<li><a 
href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#more-advanced-rollout\">More advanced rollout</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#relationship-between-code-and-deployed-state\">Relationship between code and deployed state</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#argocd\">ArgoCD</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#infra-as-code\">Infra-as-code</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#where-the-dev-meets-the-ops\">Where the dev meets the ops</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/devops-for-developers/#what-we-do\">What we do</a></li>\n</ul>\n</div>\n<h2 id=\"on-being-skeptical\">On being skeptical</h2>\n<p>I would characterise my attitudes to adopting technology in two\nstages:</p>\n<ul>\n<li>Firstly, I am conservative and dismissive, in that I will usually\ndisregard any popular new technology as a bandwagon or trend. I'm a\nslow adopter.</li>\n<li>Secondly, when I actually encounter a situation where I've suffered,\nI'll then circle back to that technology and give it a try, and if I\ncan really find the nugget of technical truth in there, then I'll\nadopt it.</li>\n</ul>\n<p>Here are some things that I disregarded for a year or more before\ntrying: Emacs, Haskell, Git, Docker, Kubernetes, Kafka. 
The whole\nNoSQL trend came, wreaked havoc, and went, while I had my back turned,\nbut I am considering using Redis for a cache at the moment.</p>\n<h2 id=\"the-humble-app\">The humble app</h2>\n<p>If you’re a developer like me, you’re probably used to writing your\nsoftware, spending most of your time developing, and then finally\ndeploying your software by simply creating a machine, either a\ndedicated machine or a virtual machine, and then uploading a binary of\nyour software (or source code if it’s interpreted), and then running\nit with the copy-pasted config of systemd or simply running the\nsoftware inside GNU screen. It's a secret shame that I've done this,\nbut it's the reality.</p>\n<p>You might use nginx to reverse-proxy to the service. Maybe you set up\na PostgreSQL database or MySQL database on that machine. And then you\nwalk away and test out the system, and later you realise you need some\nslight changes to the system configuration. So you SSH into the system\nand make the small tweaks necessary, such as port settings, encoding\nsettings, or an additional package you forgot to add. Sound familiar?</p>\n<p>But on the whole, your work here is done and for most services this is\npretty much fine. Plenty of the services you have seen in the past 30\nyears have been running like this.</p>\n<h2 id=\"disk-failures-are-not-that-common\">Disk failures are not that common</h2>\n<p>Rhetoric about processes going down due to a hardware failure is\nprobably overblown. Hard drives don’t crash very often. They don’t\nreally wear out as quickly as they used to, and you can be running a\nsystem for years before anything even remotely concerning happens.</p>\n<h2 id=\"auto-deployment-is-better-than-manual\">Auto-deployment is better than manual</h2>\n<p>When you start to iterate a little bit quicker, you get bored of\nmanually building and copying and restarting the binary on the\nsystem. 
This is especially noticeable if you forget the steps later\non.</p>\n<!-- Implementing Auto-Deployment -->\n<p>If you’re a little bit more advanced you might have some special\nscripts or post-merge git hooks, so that when you push to your repo it\nwould deploy to the same machine, and you have some associated token on\nyour CI machine that is capable of uploading a binary and running a\ncommand like copy and restart (e.g. SSH key or API\nkey). Alternatively, you might implement a polling system on the\nactual production system which will check if any updates have occurred\nin git and if so pull down a new binary. This is how we were doing\nthings in e.g. 2013.</p>\n<h2 id=\"backups-become-worth-it\">Backups become worth it</h2>\n<p>Eventually, if you're lucky, your service starts to become slightly\nmore important; maybe it’s used in business and people actually are\nusing it and storing valuable things in the database. You start to\nthink that back-ups are a good idea and worth the investment.</p>\n<!-- Redundancy of DB -->\n<p>You probably also have a script to back up the database, or replicate\nit on a separate machine, for redundancy.</p>\n<h2 id=\"deployment-staging\">Deployment staging</h2>\n<p>Eventually, you might have a staged deployment strategy. So you might\nhave a developer testing machine, you might have a QA machine, a\nstaging machine, and finally a production machine. All of these are\nconfigured in pretty much the same way, but they are deployed at\ndifferent times and probably the system administrator is the only one\nwith access to deploy to production.</p>\n<!-- Continuum -->\n<p>It’s clear by this point that I’m describing a continuum from "hobby\nproject" to "enterprise serious business synergy solutions".</p>\n<h2 id=\"packaging-with-docker-is-good\">Packaging with Docker is good</h2>\n<p>Docker effectively leads to collapsing all of your system dependencies\nfor your binary to run into one contained package. 
This is good,\nbecause dependency management is hell. It's also highly wasteful,\nbecause its level of granularity is very coarse. But this is a trade-off\nwe accept for the benefits.</p>\n<h2 id=\"custodians-multiple-processes-are-useful\">Custodians multiple processes are useful</h2>\n<p>Docker doesn’t have much to say about starting and restarting\nservices. I’ve explored using CoreOS with the hosting provider Digital\nOcean, and simply running a fresh virtual machine, with the given\nDocker image.</p>\n<p>However, you quickly run into the problem of starting up and tearing\ndown:</p>\n<ul>\n<li>When you start the service, you need certain liveness checks\nand health checks, so if the service fails to start then you should\nnot stop the existing service from running, for example. You should\nkeep the existing ones running.</li>\n<li>If the process fails at any time while running then you should also\nrestart the process. I thought about this point a lot, and came to the\nconclusion that it’s better to have your process be restarted than to\nassume that the reason it failed was so dangerous that the process\nshouldn’t start again. Probably it’s more likely that there is an\nexception or memory issue that happened in a pathological case which\nyou can investigate in your logging system. But it doesn’t mean that\nyour users should suffer by having downtime.</li>\n<li>The natural progression of this functionality is to support\ndifferent rollout strategies. Do you want to switch everything to the\nnew system in one go, or do you want it to be deployed piece-by-piece?</li>\n</ul>\n<!-- Summary: You Realise Worth Of Ops Tools -->\n<p>It’s hard to fully appreciate the added value of ops systems like\nKubernetes, Istio/Linkerd, Argo CD, Prometheus, Terraform, etc. 
until\nyou decide to design a complete architecture yourself, from scratch,\nthe way you want it to work in the long term.</p>\n<h2 id=\"kubernetes-provides-exactly-that\">Kubernetes provides exactly that</h2>\n<p>What system happens to accept Docker images, provide custodianship,\nrollout strategies, and trivial redeploys? Kubernetes.</p>\n<p>It provides the classical monitoring and custodian responsibilities\nthat plenty of other systems have provided in the past. However, unlike\nsimply running a process and testing if it’s fine and then turning off\nanother process, Kubernetes buys into Docker all the way. Processes\nare isolated from each other, in both the network and the file\nsystem. Therefore, you can very reliably start and stop the services\non the same machine. Nothing about a process's machine state is\npersistent; therefore you are forced to design your programs in a way\nthat state is explicitly stored either ephemerally, or elsewhere.</p>\n<!-- Cloud Managed Databases Make This Practical -->\n<p>In the past it might have been a little bit scarier to have your database\nrunning in such a system: what if it automatically wipes out the\ndatabase process? With today’s cloud-based deployments, it's more\ncommon to use a managed database such as those provided by Amazon,\nDigital Ocean, Google or Azure. The whole problem of updating and\nbacking up your database can pretty much be put to one\nside. Therefore, you are free to mess with the configuration or\ntopology of your cluster as much as you like without affecting your\ndatabase.</p>\n<h2 id=\"declarative-is-good-vendor-lock-in-is-bad\">Declarative is good, vendor lock-in is bad</h2>\n<p>A very appealing feature of a deployment system like Kubernetes is\nthat everything is automatic and declarative. 
You stick all of your\nconfiguration in simple YAML files (which is also a curse because YAML\nhas its own warts and it's not common to find formal schemas for it).\nThis is also known as "infrastructure as code".</p>\n<p>Ideally, you should have as much as possible about your infrastructure\nin code, checked into a repo, so that you can reproduce it and track\nit.</p>\n<p>There is also a much more straight-forward path to migrate from one\nservice provider to another. Kubernetes is supported\non all the major service providers (Google, Amazon, Azure), therefore\nyou are less vulnerable to vendor lock-in. They also all provide\nmanaged databases that are standard (PostgreSQL, for example) with\ntheir normal wire protocols. If you were using vendor-specific\nAPIs to achieve some of this, you'd be stuck on one vendor. I, for\nexample, am not sure whether to go with Amazon or Azure on a big\npersonal project right now. If I use Kubernetes, I am mitigating risk.</p>\n<p>With something like Terraform you can go one step further, in which\nyou write code that can create your cluster completely from\nscratch. This further mitigates vendor lock-in.</p>\n<h2 id=\"more-advanced-rollout\">More advanced rollout</h2>\n<p>Your load balancer and your DNS can also be in code. Typically a load\nbalancer that does the job is nginx. However, for more advanced\ndeployments such as A/B or blue/green deployments, you may need\nsomething more advanced like Istio or Linkerd.</p>\n<p>Do I really want to deploy a new feature to all of my users? Maybe,\nthat might be easier. Do I want to deploy a different way of marketing\nmy product on the website to all users at once? If I do that, then I\ndon’t exactly know how effective it is. So, I could perhaps do a\ndeployment in which half of my users see one page and half of the\nusers see another page. 
These kinds of deployments are\nstraight-forwardly achieved with Istio/Linkerd-type service meshes,\nwithout having to change any code in your app.</p>\n<h2 id=\"relationship-between-code-and-deployed-state\">Relationship between code and deployed state</h2>\n<p>Let's think further than this.</p>\n<p>You've set up your cluster with your provider, or Terraform. You've\nset up your Kubernetes deployments and services. You've set up your CI\nto build your project, produce a Docker image, and upload the images\nto your registry. So far so good.</p>\n<p>Suddenly, you’re wondering, how do I actually deploy this? How do I\ncall Kubernetes, with the correct credentials, to apply this new\nDocker image to the appropriate deployment?</p>\n<p>Actually, this is still an ongoing area of innovation. An obvious way\nto do it is: you put some details on your CI system that has access to\nrun kubectl, then set the image with the image name and that will try\nto do a deployment. Maybe the deployment fails; you can look at that\nresult in your CI dashboard.</p>\n<p>However, the question comes up: what is currently actually deployed\nin production? Do we really have infrastructure as code here?</p>\n<p>It’s not that I edited the file and that update suddenly got\nreflected. There’s no file anywhere in Git that contains what the\ncurrent image is. Head scratcher.</p>\n<p>Ideally, you would have a repository somewhere which states exactly\nwhich image should be deployed right now. And if you change it in a\ncommit, and then later revert that commit, you should expect that\nproduction is also reverted to reflect the code, right?</p>\n<h2 id=\"argocd\">ArgoCD</h2>\n<p>One system which attempts to address this is ArgoCD. They implement\nwhat they call "GitOps". All state of the system is reflected in a Git\nrepo somewhere. 
In Argo CD, after your GitHub/Gitlab/Jenkins/Travis CI\nsystem has pushed your Docker image to the Docker repository, it makes\na gRPC call to Argo, which becomes aware of the new image. As an\nadmin, you can now trivially look in the UI and click "Refresh" to\nredeploy the new version.</p>\n<h2 id=\"infra-as-code\">Infra-as-code</h2>\n<p>The common running theme in all of this is\ninfrastructure-as-code. It’s immutability. It’s declarative. It’s\nreducing the number of steps that the human has to do or care\nabout. It’s about being able to rewind. It’s about redundancy. And\nit’s about scaling easily.</p>\n<!-- Circling Back -->\n<p>When you really try to architect your own system, and your business\nwill lose money in the case of ops mistakes, then all of these\nadvantages of infrastructure as code start looking really attractive.</p>\n<p>But before you really sit down and think about this stuff, it\nis pretty hard to empathise or sympathise with the kind of concerns\nthat people using these systems have.</p>\n<!-- Downsides/Tax -->\n<p>There are some downsides to these tools, as with any:</p>\n<ul>\n<li>Docker is quite wasteful of time and space</li>\n<li>Kubernetes is undoubtedly complex, and leans heavily on YAML</li>\n<li><a href=\"https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/\">All abstractions are leaky</a>,\ntherefore tools like this all leak</li>\n</ul>\n<h2 id=\"where-the-dev-meets-the-ops\">Where the dev meets the ops</h2>\n<p>Now that I’ve started looking into these things and appreciating their\nuse, I interact a lot more with the ops side of our DevOps team at work,\nand I can also be way more helpful in assisting them with the\ninformation that they need, and also writing apps which anticipate the\nkind of deployment that is going to happen. 
The most difficult\nchallenge typically is metrics and logging for run-of-the-mill apps;\nI’m not talking about high-performance apps.</p>\n<!-- An Exercise -->\n<p>One way to bridge the gap between your ops team and dev team,\ntherefore, might be an exercise meeting in which you do have a dev\nperson literally sit down and design an app architecture and\ninfrastructure, from the ground up, using the existing tools that\nthey are aware of, and then your ops team can point out the\nadvantages and disadvantages of their proposed solution. Certainly,\nI think I would have benefited from such a mentorship, even for an\nhour or two.</p>\n<!-- Head-In-The-Sand Also Works -->\n<p>It may be that your dev team and your ops team are completely separate\nand everybody’s happy. The devs write code, they push it, and then it\nmagically works in production and nobody has any issues. That’s\ncompletely fine. If anything it would show that you have a very good\nprocess. In fact, that's pretty much how I've worked for the past\neight years at this company.</p>\n<p>However, you could derive some benefit if your teams are having\ndifficulty communicating.</p>\n<p>Finally, the tools in the ops world aren't perfect, and they're made\nby us devs. If you have a hunch that you can do better than these\ntools, you should learn more about them, and you might be right.</p>\n<h2 id=\"what-we-do\">What we do</h2>\n<p>FP Complete are using a great number of these tools, and we're writing\nour own, too. If you'd like to know more, email us at\n<a href=\"mailto:[email protected]\">[email protected]</a>.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/",
"slug": "devops-for-developers",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps for (Skeptical) Developers",
"description": null,
"updated": null,
"date": "2020-08-16",
"year": 2020,
"month": 8,
"day": 16,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"functional programming",
"devops"
]
},
"authors": [],
"extra": {
"author": "Chris Done",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/devops-for-developers/",
"components": [
"blog",
"devops-for-developers"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "on-being-skeptical",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#on-being-skeptical",
"title": "On being skeptical",
"children": []
},
{
"level": 2,
"id": "the-humble-app",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#the-humble-app",
"title": "The humble app",
"children": []
},
{
"level": 2,
"id": "disk-failures-are-not-that-common",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#disk-failures-are-not-that-common",
"title": "Disk failures are not that common",
"children": []
},
{
"level": 2,
"id": "auto-deployment-is-better-than-manual",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#auto-deployment-is-better-than-manual",
"title": "Auto-deployment is better than manual",
"children": []
},
{
"level": 2,
"id": "backups-become-worth-it",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#backups-become-worth-it",
"title": "Backups become worth it",
"children": []
},
{
"level": 2,
"id": "deployment-staging",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#deployment-staging",
"title": "Deployment staging",
"children": []
},
{
"level": 2,
"id": "packaging-with-docker-is-good",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#packaging-with-docker-is-good",
"title": "Packaging with Docker is good",
"children": []
},
{
"level": 2,
"id": "custodians-multiple-processes-are-useful",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#custodians-multiple-processes-are-useful",
"title": "Custodians multiple processes are useful",
"children": []
},
{
"level": 2,
"id": "kubernetes-provides-exactly-that",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#kubernetes-provides-exactly-that",
"title": "Kubernetes provides exactly that",
"children": []
},
{
"level": 2,
"id": "declarative-is-good-vendor-lock-in-is-bad",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#declarative-is-good-vendor-lock-in-is-bad",
"title": "Declarative is good, vendor lock-in is bad",
"children": []
},
{
"level": 2,
"id": "more-advanced-rollout",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#more-advanced-rollout",
"title": "More advanced rollout",
"children": []
},
{
"level": 2,
"id": "relationship-between-code-and-deployed-state",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#relationship-between-code-and-deployed-state",
"title": "Relationship between code and deployed state",
"children": []
},
{
"level": 2,
"id": "argocd",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#argocd",
"title": "ArgoCD",
"children": []
},
{
"level": 2,
"id": "infra-as-code",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#infra-as-code",
"title": "Infra-as-code",
"children": []
},
{
"level": 2,
"id": "where-the-dev-meets-the-ops",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#where-the-dev-meets-the-ops",
"title": "Where the dev meets the ops",
"children": []
},
{
"level": 2,
"id": "what-we-do",
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/#what-we-do",
"title": "What we do",
"children": []
}
],
"word_count": 2618,
"reading_time": 14,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/canary-deployment-istio/",
"title": "Canary Deployment with Kubernetes and Istio"
},
{
"permalink": "https://tech.fpcomplete.com/blog/devops-for-developers/",
"title": "DevOps for (Skeptical) Developers"
},
{
"permalink": "https://tech.fpcomplete.com/blog/devops-unifying-dev-ops-qa/",
"title": "DevOps: Unifying Dev, Ops, and QA"
},
{
"permalink": "https://tech.fpcomplete.com/blog/istio-mtls-debugging-story/",
"title": "An Istio/mutual TLS debugging story"
},
{
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
"title": "Deploying Rust with Windows Containers on Kubernetes"
}
]
},
{
"relative_path": "blog/our-history-containerization.md",
"colocated_path": null,
"content": "<p>FP Complete has been working with containerization (or OS-level virtualization) since before it was popularized by Docker. What follows is a brief history of how and why we got started using containers, and how our use of containerization has evolved as new technology has emerged.</p>\n<h2 id=\"brief-history\">Brief history</h2>\n<p>Our first foray into containerization started at the beginning of the company, when we were building a web-based integrated development environment for Haskell. We needed a secure and cost-effective way to be able to compile and run Haskell code on the server side. While giving each active user their own virtual machine with dedicated CPU and memory would have satisfied the first requirement (security), it would have been far from cost-effective. GHC, the de-facto standard Haskell compiler, is notoriously resource hungry, so the VM would have to be quite large (it's not uncommon to need 4 GB or more of RAM to compile a fairly straightforward piece of software). We needed a way to share CPU and memory resources between multiple users securely and be able to shift load around a cluster of virtual machines to keep usage balanced and prevent one heavy user from impacting the experience of other users on the same VM. This sounds like a job for container orchestration! Unfortunately, Docker didn't exist yet, let alone Kubernetes. The state of the art for Linux containers at the time was LXC, which was mostly a collection of shell scripts that helped with using the Linux kernel features that underlie all Linux container solutions, but at a much lower level than Docker. 
On top of this we built everything we needed to distribute "images" of a base filesystem plus overlay for local changes, isolated container networks, and the ability to shift load based on VM and container utilization -- that is, many of the things Docker and Kubernetes do now, but tailored specifically for our application's needs.</p>\n<p>When Docker came on the scene, we embraced it despite some early growing pains, since it was much easier to use and more general purpose than our "bespoke" system and we thought it likely that it would soon become a de-facto standard, which is exactly what happened. For internal and customer solutions, Docker allowed us to create much more nimble and efficient deployment solutions that satisfied the requirement for immutable infrastructure. Prior to Docker, we achieved immutability by building VM images and spinning up virtual machines, a much slower and heavier process than building a Docker image and running it on an already-provisioned VM. This also allowed us to run multiple applications isolated from one another on a single VM without worrying about interference.</p>\n<p>Finally, Kubernetes arrived. While it was not the first orchestration platform, it was the first that wholeheartedly standardized on Docker containers. Once again we embraced it, despite some early growing pains, due to its ease of use, multi-cloud support, fast pace of improvement, and backing of a major company (Google). We once again bet that Kubernetes would become the de-facto standard, which is again exactly what happened. With Kubernetes, instead of having to think about which VM a container would run on, we can have a cluster of general-purpose nodes and let the orchestrator worry about what runs on which node. This lets us squeeze yet more efficiency out of our resources. 
Due to its ease of use and built-in support for common rollout strategies, we can give developers the ability to deploy their apps directly, and since it is so easy to tie into CI/CD pipelines we can drastically simplify automated deployment processes.</p>\n<p>Going forward, we continue to keep up with the latest developments in containerization and are constantly evaluating new and alternative technologies, to stay at the forefront of DevOps.</p>\n<h2 id=\"why-we-really-like-it\">Why we really like it</h2>\n<ul>\n<li>\n<p>Supports <a href=\"https://tech.fpcomplete.com/platformengineering/immutable-infrastructure/\">immutable infrastructure</a>.</p>\n</li>\n<li>\n<p>Fast build and deployment processes.</p>\n</li>\n<li>\n<p>Low overhead and efficient use of compute resources.</p>\n</li>\n<li>\n<p>Easy integration with CI/CD pipelines.</p>\n</li>\n<li>\n<p>Isolation of applications from others running on the same machine.</p>\n</li>\n<li>\n<p>Bundles dependencies with the application, so they can be tested together and there's no risk of deploying to an incorrect environment.</p>\n</li>\n<li>\n<p>Developers on various platforms can build and test the application in a consistent environment.</p>\n</li>\n</ul>\n<h2 id=\"limitations-of-the-technology\">Limitations of the technology</h2>\n<ul>\n<li>\n<p>Containers and container orchestration are most mature on Linux, although Docker and Kubernetes do now support running Windows containers on machines running Windows, and most modern server operating systems have support for some kind of containerization (but not necessarily Docker or Kubernetes).</p>\n</li>\n<li>\n<p>Containers and container orchestration add additional layers of abstraction and complexity. This can, at times, make diagnosing problems more difficult.</p>\n</li>\n<li>\n<p>Legacy applications can be tricky to containerize since they assume they are running on a persistent machine rather than an ephemeral one. 
While this can be mitigated using persistent volumes, it makes the containerization strategy less straightforward.</p>\n</li>\n<li>\n<p>While properly configured containers are relatively secure, all containers running on a host share a single operating system kernel, which means there is greater risk that a process can use a security vulnerability to "break out" of its container than when using VMs.</p>\n</li>\n</ul>\n<h2 id=\"resources\">Resources</h2>\n<p>From FP Complete:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/platformengineering/containerization/\">Introduction to Containerization concepts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/platformengineering/immutable-infrastructure/\">Introduction to Immutable Infrastructure concepts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/deploying_haskell_apps_with_kubernetes/\">Webinar: Deploying Haskell apps with Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Blog post: Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/02/immutability-docker-haskells-st-type/\">Blog post: Immutability, Docker, and Haskell's ST type</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/01/containerize-legacy-app/\">Blog post: Containerizing a legacy application: an overview</a></li>\n</ul>\n<p>From the web:</p>\n<ul>\n<li><a href=\"https://www.docker.com/resources/what-container\">What is a container?</a></li>\n<li><a href=\"https://www.docker.com/get-started\">Get started with Docker</a></li>\n<li><a href=\"https://kubernetes.io/docs/concepts/\">Kubernetes concepts</a></li>\n<li><a href=\"https://kubernetes.io/docs/setup/\">Getting started with Kubernetes</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
"slug": "our-history-containerization",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Our history with containerization",
"description": "FP Complete has a long history of working with containers, beginning before Docker existed and staying ahead of advances in the technology.",
"updated": null,
"date": "2020-08-13",
"year": 2020,
"month": 8,
"day": 13,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"docker",
"kubernetes"
]
},
"authors": [],
"extra": {
"author": "FP Complete Team",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/our-history-containerization/",
"components": [
"blog",
"our-history-containerization"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "brief-history",
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/#brief-history",
"title": "Brief history",
"children": []
},
{
"level": 2,
"id": "why-we-really-like-it",
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/#why-we-really-like-it",
"title": "Why we really like it",
"children": []
},
{
"level": 2,
"id": "limitations-of-the-technology",
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/#limitations-of-the-technology",
"title": "Limitations of the technology",
"children": []
},
{
"level": 2,
"id": "resources",
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/#resources",
"title": "Resources",
"children": []
}
],
"word_count": 957,
"reading_time": 5,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/cloud-deployment-models-advantages-and-disadvantages.md",
"colocated_path": null,
"content": "<p>In this post we show a couple of options when it comes to a cloud\ndeployment model. Depending on the needs of your organization, some\noptions may suit you better than others.</p>\n<h1 id=\"private-cloud\">Private Cloud</h1>\n<p>A private cloud is cloud infrastructure that only members of your organization\ncan utilize. It is typically owned and managed by the organization itself and\nis hosted on premises, but it could also be managed by a third party in a secure\ndatacenter. This deployment model is best suited for organizations that deal\nwith sensitive data and/or are required to uphold certain security standards by\nvarious regulations.</p>\n<p>Advantages:</p>\n<ul>\n<li>Organization specific</li>\n<li>High degree of security and level of control</li>\n<li>Ability to choose your resources (i.e., specialized hardware)</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Lack of elasticity and capacity to scale (bursts)</li>\n<li>Higher cost</li>\n<li>Requires a significant amount of engineering effort</li>\n</ul>\n<h1 id=\"public-cloud\">Public Cloud</h1>\n<p>Public cloud refers to cloud infrastructure that is located and\naccessed over the public network. It provides a convenient way to\nburst and scale your project depending on usage and is typically\npay-per-use. Popular examples include <a href=\"https://aws.amazon.com\">Amazon AWS</a>,\n<a href=\"https://cloud.google.com/\">Google Cloud Platform</a> and <a href=\"https://azure.microsoft.com/\">Microsoft\nAzure</a>.</p>\n<p>Advantages:</p>\n<ul>\n<li>Scalability/Flexibility/Bursting</li>\n<li>Cost effective</li>\n<li>Ease of use</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Shared resources</li>\n<li>Operated by third party</li>\n<li>Unreliability</li>\n<li>Less secure</li>\n</ul>\n<h1 id=\"hybrid-cloud\">Hybrid Cloud</h1>\n<p>This type of cloud infrastructure assumes that you are hosting your system on both\nprivate and public clouds. 
One use case might be regulation requiring data\nto be stored in a locked-down private data center while the application's\nprocessing components run on the public cloud and talk to the private\ncomponents over a secure tunnel.</p>\n<p>Another example is hosting most of the system inside a private cloud and having\na clone of the system on the public cloud to allow for rapid scaling and\naccommodating bursts of new usage that would otherwise not be possible on the\nprivate cloud.</p>\n<p>Advantages:</p>\n<ul>\n<li>Cost effective</li>\n<li>Scalability/Flexibility</li>\n<li>Balance of convenience and security</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Same disadvantages as the public cloud</li>\n</ul>\n<h1 id=\"multi-cloud\">Multi-Cloud</h1>\n<p>This option is a variant of the hybrid cloud but we refer to it when we mean\n"using multiple public cloud providers". It is mostly used for mission critical\nsystems that want to minimize the amount of downtime if a specific service on\na particular cloud goes down (e.g., the S3 outage of 2017 that took down a lot\nof web services with it). This is arguably the most advanced option and\nsacrifices convenience for security and reliability. It requires significant\nexpertise and engineering effort to get right since platforms vary widely\nin the types of resources and services they provide, often in subtle ways.</p>\n<p>When choosing a cloud deployment model, weigh the advantages and disadvantages of\neach option as it relates to your business objectives. </p>\n<p>If you liked this post you may also like: <a href=\"https://tech.fpcomplete.com/blog/intro-to-devops-on-govcloud/\">Introduction to DevOps on AWS Gov Cloud</a></p>\n",
"permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/",
"slug": "cloud-deployment-models-advantages-and-disadvantages",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloud Deployment Models: Advantages and Disadvantages",
"description": "Choosing the correct Cloud Deployment Model is crucial. Discover the advantages and disadvantages of each and how to choose the best one for your organization.",
"updated": null,
"date": "2020-08-07T13:41:00Z",
"year": 2020,
"month": 8,
"day": 7,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "FP Complete Team",
"blogimage": "/images/blog-listing/deployment.png"
},
"path": "/blog/cloud-deployment-models-advantages-and-disadvantages/",
"components": [
"blog",
"cloud-deployment-models-advantages-and-disadvantages"
],
"summary": null,
"toc": [
{
"level": 1,
"id": "private-cloud",
"permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#private-cloud",
"title": "Private Cloud",
"children": []
},
{
"level": 1,
"id": "public-cloud",
"permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#public-cloud",
"title": "Public Cloud",
"children": []
},
{
"level": 1,
"id": "hybrid-cloud",
"permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#hybrid-cloud",
"title": "Hybrid Cloud",
"children": []
},
{
"level": 1,
"id": "multi-cloud",
"permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#multi-cloud",
"title": "Multi-Cloud",
"children": []
}
],
"word_count": 486,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/understanding-cloud-auth.md",
"colocated_path": null,
"content": "<p>The topics of authentication and authorization usually appear simple but turn out to hide significant complexity. That's because, at its core, auth is all about answering two questions:</p>\n<ul>\n<li>Who are you?</li>\n<li>What are you allowed to do?</li>\n</ul>\n<p>However, the devil is in the details. Seasoned IT professionals, software developers, and even typical end users are fairly accustomed at this point to many of the most common requirements and pain points around auth.</p>\n<p>Cloud authentication and authorization is not drastically different from non-cloud systems, at least in principle. However, there are a few things about the cloud and its common use cases that introduce some curve balls:</p>\n<ul>\n<li>As with most auth systems, cloud providers each have their own idiosyncrasies</li>\n<li>Cloud auth systems have almost always been designed from the outset to work API first, and interact with popular web technologies</li>\n<li>Security is usually taken very seriously in the cloud, leading to workflows arguably more complex than other systems</li>\n<li>Cloud services themselves typically need some method to authenticate to the cloud, e.g. a virtual machine gaining access to private blob storage</li>\n<li>Many modern DevOps tools are commonly deployed to cloud systems, and introduce extra layers of complexity and indirection</li>\n</ul>\n<p>This blog post series is going to cover the full picture of authentication and authorization, approached with a cloud mindset. There is significant overlap with non-cloud systems in this, but we'll be covering those details as well to give a complete picture. Once we have those concepts and terms in place, we'll be ready to tackle the quirks of individual cloud providers and commonly used tooling.</p>\n<h2 id=\"goals-of-authentication\">Goals of authentication</h2>\n<p>We're going to define authentication as proving your identity to a service provider. 
A service provider can be anything from a cloud provider offering virtual machines, to your webmail system, to a bouncer at a bar who has your name on a list. The identity is an equally flexible concept, and could be "my email address" or "my user ID in a database" or "my full name."</p>\n<p>To help motivate the concepts we'll be introducing, let's understand what goals we're trying to achieve with typical authentication systems.</p>\n<ul>\n<li>Allow a user to prove who he/she is</li>\n<li>Minimize the number of passwords a user has to memorize</li>\n<li>Minimize the amount of work IT administrators have to do to create new user accounts, maintain them, and ultimately shut them down\n<ul>\n<li>That last point is especially important; no one wants the engineer who was just fired to still be able to authenticate to one of the systems</li>\n</ul>\n</li>\n<li>Provide security against common attack vectors, like compromised passwords or lost devices</li>\n<li>Provide a relatively easy-to-use method for user authentication</li>\n<li>Allow a computer program/application/service (let's call all of these apps) to prove what it is</li>\n<li>Provide a simple way to allocate, securely transmit, and store credentials necessary for those proofs</li>\n<li>Ensure that credentials can be revoked when someone leaves a company or an app is no longer desired (or is compromised)</li>\n</ul>\n<h2 id=\"goals-of-authorization\">Goals of authorization</h2>\n<p>Once we know the identity of something or someone, the next question is: what are they allowed to do? That's where authorization comes into play. 
A good authorization system provides these kinds of features:</p>\n<ul>\n<li>Fine grained control, when necessary, of who can do what</li>\n<li>Ability to grant common sets of permissions as a bundle, avoiding tedium and mistakes</li>\n<li>A centralized collection of authorization rules</li>\n<li>Ability to revoke a permission, and see that change propagated quickly to multiple systems</li>\n<li>Ability to delegate permissions from one identity to another\n<ul>\n<li>For example: if I'm allowed to read a file on some cloud storage server, it would be nice if I could let my mail client do that too, without the mail program pretending it's me</li>\n</ul>\n</li>\n<li>To avoid mistakes, it would be nice to assume a smaller set of permissions when performing some operations\n<ul>\n<li>For example: as a super user/global admin/root user, I'd like to be able to say "I don't want to accidentally delete system files right now"</li>\n</ul>\n</li>\n</ul>\n<p>In simple systems, the two concepts of authentication and authorization are straightforward. For example, on a single-user computer system, my username would be my identity, I would authenticate using my password, and as that user I would be authorized to do anything on the computer system.</p>\n<p>However, most modern systems end up with many additional layers of complexity. Let's step through what some of these concepts are.</p>\n<h2 id=\"users-and-policies\">Users and policies</h2>\n<p>A basic concept of authentication would be a <em>user</em>. This typically would refer to a real human being accessing some service. Depending on the system, they may use identifiers like usernames or email addresses. User accounts are often given to non-users, like automated processes or Continuous Integration (CI) jobs. However, most modern systems would recommend using a service account (discussed below) or similar instead.</p>\n<p>Sometimes, the user is the end of the story. 
When I log into my personal Gmail account, I'm allowed to read and write emails in that account. However, when dealing with multiuser shared systems, some form of permissions management comes along as well. Most cloud providers have a robust and sophisticated set of policies, where you can specify fine-grained individual permissions within a policy.</p>\n<p>As an example, with AWS, the S3 file storage service provides an array of individual actions from the obvious (read, write, and delete an object) to the more obscure (like setting retention policies on an object). You can also specify which files can be affected by these permissions, allowing a user to, for example, have read and write access in one directory, but read-only access in another.</p>\n<p>Managing all of these individual permissions each time for each user is tedious and error-prone. It makes it difficult to understand what a user can actually do. Common practice is to create a few policies across your organization, and assign them appropriately to each user, trying to minimize the number of permissions granted out.</p>\n<h2 id=\"groups\">Groups</h2>\n<p>Within the world of authorization, groups are a natural extension of users and policies. Odds are you'll have multiple users and multiple policies. And odds are that you'll have groups of users who need to have similar sets of policy documents. You <em>could</em> create a large master policy that encompasses the smaller policies, but that could be difficult to maintain. You could also apply each individual policy document to each user, but that's difficult to keep track of.</p>\n<p>Instead, with groups, you can assign multiple policies to a group, and multiple groups to a user. 
If you have a billing team that needs access to the billing dashboard, plus the list of all users in the system, you may have a <code>BillingDashboard</code> policy as well as a <code>ListUsers</code> policy, and assign both policies to a <code>BillingTeam</code> group. You may then also assign the <code>ListUsers</code> policy to the <code>Operators</code> group.</p>\n<h2 id=\"roles\">Roles</h2>\n<p>There's a downside to the policies-and-groups setup described above. Even if I'm a superadmin on my cloud account, I may not want to have the responsibility of all those powers at all times. It's far too easy to accidentally destroy vital resources like a database server. Often, we would like to artificially limit our permissions while operating with a service.</p>\n<p>Roles allow us to do this. With roles, we create a named role for some set of operations, assign a set of policies to it, and provide some way for users to <em>assume</em> that role. When you assume that role, you can perform actions using that set of permissions, but audit trails will still be able to trace back to the original user who performed the actions.</p>\n<p>Arguably a cloud best practice is to grant users only enough permissions to assume various roles, and otherwise be unable to perform any meaningful actions. This forces a higher level of stated intent when interacting with cloud APIs.</p>\n<h2 id=\"service-accounts\">Service accounts</h2>\n<p>Some cloud providers and tools support the concept of a service account. While users <em>can</em> be used for both real human beings and services, there is often a mismatch. For example, we typically want to enable multi-factor authentication on real user accounts, but alternative authentication schemes on services.</p>\n<p>One approach to this is service accounts. 
Service accounts vary among different providers, but typically allow defining some kind of service, receiving some secure token or password, and assigning either roles or policies to that service account.</p>\n<p>In some cases, such as Amazon's EC2, you can assign roles directly to cloud machines, allowing programs running on those machines to easily and securely assume those roles, without needing to store any kinds of tokens or secrets. This concept nicely ties in with roles for users, making role-based management of both users and services an emerging best practice in industry.</p>\n<h2 id=\"rbac-vs-acl\">RBAC vs ACL</h2>\n<p>The system described above is known as Role Based Access Control, or RBAC. Many people are likely familiar with the related concept known as Access Control Lists, or ACL. With ACLs, administrators typically have more work to do, specifically managing large numbers of resources and assigning users to each of those per-resource lists. Using groups or roles significantly simplifies the job of the operator, and reduces the likelihood of misapplied permissions.</p>\n<h2 id=\"single-sign-on\">Single sign-on</h2>\n<p>Most modern DevOps platforms have multiple systems, each requiring separate authentication. For example, in a modern Kubernetes-based deployment, you're likely to have:</p>\n<ul>\n<li>The underlying cloud vendor\n<ul>\n<li>Both command line and web based access</li>\n</ul>\n</li>\n<li>Kubernetes itself\n<ul>\n<li>Both command line access and the Kubernetes Dashboard</li>\n</ul>\n</li>\n<li>A monitoring dashboard</li>\n<li>A log aggregation system</li>\n<li>Other company-specific services</li>\n</ul>\n<p>That's in addition to maintaining a company's standard directory, such as Active Directory or G Suite. Maintaining this level of duplication among user accounts is time consuming, costly, and dangerous. 
Furthermore, while it's reasonable to securely lock down a single account via MFA and other mechanisms, expecting users to maintain such information for all of these systems securely is unreasonable. And some of these systems don't even provide such security mechanisms.</p>\n<p>Instead, single sign-on provides a standards-based, secure, and simple method for authenticating to these various systems. In some cases, user accounts still need to be created in each individual system. In those cases, automated user provisioning is ideal. We'll talk about some of that in later posts. In other cases, like AWS's identity provider mechanism, it's possible for temporary identifiers to be generated on-the-fly for each SSO-based login, with roles assigned.</p>\n<p>Deeper questions arise about where permissions management is handled. Should the central directory, like Active Directory, maintain permissions information for all systems? Should a single role in the directory represent permissions information in all of the associated systems? Should a separate set of role mappings be maintained for each service?</p>\n<p>Typically, organizations end up including some of each, depending on the functionality available in the underlying tooling, and organizational discretion on how much information to include in a directory.</p>\n<h2 id=\"going-deeper\">Going deeper</h2>\n<p>What we've covered here sets the stage for understanding many cloud-specific authentication and authorization schemes. Going forward, we're going to cover a look into common auth protocols, followed by a review of specific cloud providers and tools, specifically AWS, Azure, and Kubernetes.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/",
"slug": "understanding-cloud-auth",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Understanding cloud auth",
"description": "Authentication and authorization are a core component to any secure system. In this overview post, we will begin analyzing common patterns in cloud auth",
"updated": null,
"date": "2020-07-29",
"year": 2020,
"month": 7,
"day": 29,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"insights"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/understanding-cloud-auth/",
"components": [
"blog",
"understanding-cloud-auth"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "goals-of-authentication",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#goals-of-authentication",
"title": "Goals of authentication",
"children": []
},
{
"level": 2,
"id": "goals-of-authorization",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#goals-of-authorization",
"title": "Goals of authorization",
"children": []
},
{
"level": 2,
"id": "users-and-policies",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#users-and-policies",
"title": "Users and policies",
"children": []
},
{
"level": 2,
"id": "groups",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#groups",
"title": "Groups",
"children": []
},
{
"level": 2,
"id": "roles",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#roles",
"title": "Roles",
"children": []
},
{
"level": 2,
"id": "service-accounts",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#service-accounts",
"title": "Service accounts",
"children": []
},
{
"level": 2,
"id": "rbac-vs-acl",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#rbac-vs-acl",
"title": "RBAC vs ACL",
"children": []
},
{
"level": 2,
"id": "single-sign-on",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#single-sign-on",
"title": "Single sign-on",
"children": []
},
{
"level": 2,
"id": "going-deeper",
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-auth/#going-deeper",
"title": "Going deeper",
"children": []
}
],
"word_count": 1863,
"reading_time": 10,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
"title": "Deploying Rust with Windows Containers on Kubernetes"
},
{
"permalink": "https://tech.fpcomplete.com/blog/understanding-cloud-deployments/",
"title": "Understanding Cloud Software Deployments"
},
{
"permalink": "https://tech.fpcomplete.com/platformengineering/security/",
"title": "Security in a DevOps World"
}
]
},
{
"relative_path": "blog/understanding-devops-roles-and-responsibilities.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/understanding-devops-roles-and-responsibilities/",
"slug": "understanding-devops-roles-and-responsibilities",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Understanding DevOps Roles and Responsibilities",
"description": "Companies are implementing DevOps at an increasingly rapid rate. Discover the roles and responsibilities and how to implement DevOps into your latest project.",
"updated": null,
"date": "2020-07-24T13:12:00Z",
"year": 2020,
"month": 7,
"day": 24,
"taxonomies": {
"categories": [
"insights",
"devops"
],
"tags": [
"devops"
]
},
"authors": [],
"extra": {
"author": "FP Complete Team",
"html": "hubspot-blogs/understanding-devops-roles-and-responsibilities.html",
"blogimage": "/images/blog-listing/executive-insights.png"
},
"path": "/blog/understanding-devops-roles-and-responsibilities/",
"components": [
"blog",
"understanding-devops-roles-and-responsibilities"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/preparing-for-cloud-computing-trends.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/preparing-for-cloud-computing-trends/",
"slug": "preparing-for-cloud-computing-trends",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Preparing for Upcoming Cloud Computing Trends",
"description": "Cloud Computing is growing at a rate 7 times faster than the rest of IT with no signs of slowing in the coming years. Discover all the trends businesses should be preparing for in order to succeed in 2020 and beyond. ",
"updated": null,
"date": "2020-07-24T11:05:00Z",
"year": 2020,
"month": 7,
"day": 24,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"insights",
"devops"
]
},
"authors": [],
"extra": {
"author": "FP Complete Team",
"html": "hubspot-blogs/preparing-for-cloud-computing-trends.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/preparing-for-cloud-computing-trends/",
"components": [
"blog",
"preparing-for-cloud-computing-trends"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/cloud-preparation-checklist.md",
"colocated_path": null,
"content": "<p>While moving to the cloud brings many benefits associated with it, we\nneed to also be aware of the pain points associated with such a move.\nThis post will discuss those pain points, provide ways to mitigate them, and\ngive you a checklist which can be used if you plan to migrate your\napplications to cloud. We will also discuss the advantages of\nmoving to the cloud.</p>\n<h2 id=\"common-pain-points\">Common pain points</h2>\n<p>One of the primary pain points in moving to the cloud is selecting the\nappropriate tools for a specific usecase. We have an abundance of tools\navailable, with many solving the same problem in different ways. To give\nyou a basic idea, this is the CNCF's (Cloud Native Computing\nFoundation) recommended path through the cloud native technologies:</p>\n<img src=\"/images/insights/cloud-prep-checklist/landscape.png\" alt=\"Cloud Native Landscape\" title=\"Cloud Native Landscape\" width=\"100%\">\n<p></p>\n<p>Picking the right tool is hard, and this is where having experience\nwith them comes in handy.</p>\n<p>Also, the existing knowledge of on-premises data centers may not be\ndirectly transferable when you plan to move to the cloud. An individual might\nhave to undergo a basic training to understand the terminology and the\nconcepts used by a particular cloud vendor. An on-premises system\nadministrator might be used to setting up firewalls via\n<a href=\"https://en.wikipedia.org/wiki/Iptables\">Iptables</a>, but he might also\nwant to consider using <a href=\"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html\">Security\ngroups</a>\nif he plans to accomplish the same goals in the AWS ecosystem (for EC2 instances).</p>\n<p>Another point to consider while moving to the cloud is the ease with which you\ncan easily get locked in to a single vendor. 
You might start using\nAmazon's <a href=\"https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html\">Auto Scaling\nGroups</a>\nto automatically handle the load of your application. But when you plan\nto switch to another cloud vendor, the migration might not be\nstraightforward. Switching between cloud services isn't easy, and if you want portability, you\nneed to make sure that your applications are built with a multi-cloud\nstrategy. This will allow you to easily switch between vendors if such a\nscenario arises. Taking advantage of containers and Kubernetes may give\nyou additional flexibility and ease portability between different cloud\nvendors.</p>\n<h2 id=\"advantages-of-moving\">Advantages of moving</h2>\n<p>Despite the pain points listed above, there are many advantages to\nmoving your applications to the cloud. Note that even a big media services\nprovider like\n<a href=\"https://netflixtechblog.com/four-reasons-we-choose-amazons-cloud-as-our-computing-platform-4aceb692afec\">Netflix</a>\nhas moved to the cloud instead of building and managing its own\ndata center solution.</p>\n<h3 id=\"cost\">Cost</h3>\n<p>One of the primary advantages of leveraging the cloud is avoiding\nthe cost of building your\nown data center. Building a secure data center is not trivial. By\noffloading this activity to an external cloud provider, you can instead build your\napplications on top of the infrastructure provided by them. This not\nonly saves the initial capital expenditure but also saves headaches from\nmaintaining hardware, such as replacing failing network switches. But note that\nswitching to the cloud will not magically save money. 
Depending on your\napplication's architecture and workload, you have to be aware of the\nchoices you make and ensure that they are cost efficient.</p>\n<h3 id=\"uptime\">Uptime</h3>\n<p>Cloud vendors provide SLAs (Service Level Agreements) where they state\ninformation about uptime and the guarantees they make. This is a\nsnapshot from the Amazon Compute SLA:</p>\n<p><img src=\"/images/insights/cloud-prep-checklist/sla.png\" alt=\"SLA\" title=\"SLA\" /></p>\n<p>All major cloud providers have historically provided excellent uptime,\nespecially for applications that properly leverage availability zones.\nBut depending on your specific\nuse case, you should define the acceptable uptime for your\napplication and make sure that your SLA matches it. Also, depending\non your requirements, you can architect your application with\nmulti-region deployments to provide better uptime in case there is an\noutage in one region.</p>\n<h3 id=\"security-and-compliance\">Security and Compliance</h3>\n<p>Cloud deployments provide an extra benefit when working in regulated industries\nor with government projects. In many cases, cloud vendors provide regulation-compliant\nhardware.\nBy using cloud providers, we can take advantage of the various\ncompliance standards (e.g., HIPAA, PCI) they meet.\nValidating an on-premises data center against such standards can be a time-consuming,\nexpensive process. Relying on already validated hardware can be faster, cheaper, easier,\nand more reliable.</p>\n<p>Broadening the security topic, cloud vendors typically also provide\na wide range of additional security tools.</p>\n<p>Despite these boons,\nproper care must still be taken, and best practices must still be followed,\nto deploy an application securely.\nAlso, be aware that running on compliant hardware does not automatically\nensure compliance of the software. 
Code and infrastructure must still meet\nvarious standards.</p>\n<h3 id=\"ease-of-scaling\">Ease of scaling</h3>\n<p>With cloud providers, you can easily add and remove machines or add more\npower (RAM, CPU, etc.) to them. The ease with which you can horizontally and\nvertically scale your application without worrying about your\ninfrastructure is powerful, and can revolutionize how you approach\nhardware allocation. As your application's load increases,\nyou can easily scale up in a few minutes.</p>\n<p>One of the perhaps surprising benefits of this is that you don't need to\npreemptively scale up your hardware. Many cloud deployments are able\nto reduce the total compute capacity available in a cluster, relying\non the speed of cloud providers to scale up in response to increases in demand.</p>\n<h3 id=\"focus-on-problem-solving\">Focus on problem solving</h3>\n<p>With no effort spent maintaining an on-premises data center, you can\ninstead put your effort into your application and the problem it solves.\nThis allows you to focus on your core business problems and your\ncustomers.</p>\n<p>Though not a technical consideration, cloud providers also run\nenergy-efficient data centers. As a case study,\n<a href=\"https://cloud.google.com/blog/topics/google-cloud-next/our-heads-in-the-cloud-but-were-keeping-the-earth-in-mind\">Google even uses machine learning technology to make its data centers\nmore\nefficient</a>.\nHence, running your\napplications in the cloud may also be the better environmental decision.</p>\n<h2 id=\"getting-ready-for-cloud\">Getting ready for Cloud</h2>\n<p>Once you are ready to migrate to the cloud, you can plan the next\nsteps and initiate the process. 
We have the following general checklist,\nwhich we tailor based on our clients' requirements:</p>\n<h3 id=\"checklist\">Checklist</h3>\n<ul>\n<li>Make a list of your applications and dependencies which need to be\nmigrated.</li>\n<li>Benchmark your applications to establish cloud performance\nKPIs (Key Performance Indicators).</li>\n<li>List any compliance requirements for your\napplication and plan how to meet them.</li>\n<li>Onboard relevant team members to the cloud service's user management\nsystem, ideally integrating with existing user directories and\nleveraging features like single sign-on and automated user provisioning.</li>\n<li>Establish access controls to your cloud service, relying on role-based\nauthorization techniques.</li>\n<li>Evaluate your migration options. You might want to re-architect your application\nto take advantage of cloud-native technologies. Or you might simply\ndecide to shift the existing application without any changes.</li>\n<li>Create your migration plan in a Runbook.</li>\n<li>Have a rollback plan in case the migration fails.</li>\n<li>Test your migration and rollback plans in a separate environment.</li>\n<li>Communicate about the migration to internal stakeholders and customers.</li>\n<li>Execute your cloud migration.</li>\n<li>Prune your on-premises infrastructure.</li>\n<li>Optimize your cloud infrastructure for your workloads.</li>\n</ul>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hope we were able to present the challenges involved in\nmigrating to the cloud and how to prepare for them. We have helped various\ncompanies with migrations and other DevOps services. Feel free to <a href=\"https://www.fpcomplete.com/contact-us/\">reach out to\nus</a> with any questions on\ncloud migrations or any of our other services.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/",
"slug": "cloud-preparation-checklist",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloud preparation checklist",
"description": "Considering a move to the cloud? Read up on cloud advantages, common pain points, and our recommended step by step process",
"updated": null,
"date": "2020-07-22",
"year": 2020,
"month": 7,
"day": 22,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Sibi Prabakaran",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/cloud-preparation-checklist/",
"components": [
"blog",
"cloud-preparation-checklist"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "common-pain-points",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#common-pain-points",
"title": "Common pain points",
"children": []
},
{
"level": 2,
"id": "advantages-of-moving",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#advantages-of-moving",
"title": "Advantages of moving",
"children": [
{
"level": 3,
"id": "cost",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#cost",
"title": "Cost",
"children": []
},
{
"level": 3,
"id": "uptime",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#uptime",
"title": "Uptime",
"children": []
},
{
"level": 3,
"id": "security-and-compliance",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#security-and-compliance",
"title": "Security and Compliance",
"children": []
},
{
"level": 3,
"id": "ease-of-scaling",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#ease-of-scaling",
"title": "Ease of scaling",
"children": []
},
{
"level": 3,
"id": "focus-on-problem-solving",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#focus-on-problem-solving",
"title": "Focus on problem solving",
"children": []
}
]
},
{
"level": 2,
"id": "getting-ready-for-cloud",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#getting-ready-for-cloud",
"title": "Getting ready for Cloud",
"children": [
{
"level": 3,
"id": "checklist",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#checklist",
"title": "Checklist",
"children": []
}
]
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/cloud-preparation-checklist/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 1276,
"reading_time": 7,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/devops-security-and-privacy-strategies.md",
"colocated_path": null,
"content": "<p>DevOps Security and Privacy—FP Complete’s\ncomprehensive, easy to understand guide designed\nto help you understand why they’re so critical to\nthe safety of your DevOps strategy.</p>\n<p>The following is a transcription of a live\nwebinar given by <a href=\"https://tech.fpcomplete.com/\">FP Complete</a>\nFounder and Chairman Aaron Contorer, on\n<a href=\"https://www.youtube.com/user/FPComplete\">FP Complete's YouTube Channel</a>.</p>\n<h2 id=\"introducing-aaron\">Introducing Aaron</h2>\n<p>I’m the Founder and Chairman of <a href=\"https://tech.fpcomplete.com/\">FP Complete</a>,\nwhere we help companies use state-of-the-art tools and\ntechniques to produce secure, lightning-fast,\nfeature-rich software, faster and more often.</p>\n<p>Before founding FP Complete, I was an\nexecutive at Microsoft, where I served as program\nmanager for distributed systems, and general\nmanager of Visual C++, the leading software\ndevelopment tool at that time. Also, I \narchitected MSN’s move to Internet-based server\nsoftware, served as the full-time technology\nadviser to Bill Gates, and I founded and ran the\ncompany’s Productivity Tools Team for complex\nsoftware engineering projects.</p>\n<p>Okay, so enough about me. Let’s begin this\ndiscussion by recognizing our industry’s\nunfortunate—but preventable—reality:</p>\n<h2 id=\"breaches-are-happening-far-too-often\">Breaches are happening far too often</h2>\n<p>We all know how bad the state of the\nworld is within security and privacy\nright now. Projects are getting very\ncomplicated. And I—just as a sample—want\nto point out that this is a very typical\nbreach. Monzo said that for six months\nunauthorized people had access to\npeople’s secret code numbers, their pin\nnumbers. I’m not singling them out at\nall, but rather saying… “This is very\ntypical.” They’re a bank, and they\ncompromised this type of data for months\nand months.</p>\n<p>How does it happen? 
It’s not only\nbecause of logging and monitoring not\nbeing in place, although that can be a\nbig factor. It’s because of complexity.\nHonestly, we’re all trying very hard to\ndo our jobs, but users keep asking and\nexecutives keep asking for new features.\nAnd that integration just creates point\nafter point where problems can happen,\nand things get overlooked.</p>\n<h2 id=\"opportunities-for-penetration-are-everywhere\">Opportunities for penetration are everywhere</h2>\n<p>I would argue that today’s\napplications are more about assembling\nbuilding blocks than they are about just\nwriting new code. But every time you\nincrease that complexity by adding more\nbuilding blocks, you increase the number\nof interface points between\ncomponents—the number of places where\nsomebody might have done something wrong.\nAnd so we’re really creating a system of\nentry points between component A and\ncomponent B. But entry points—that sounds\nlike something I would compromise if I\nwere a security violator, right?\nFurthermore, we’re manually configuring\nour systems. People aren’t using\ncontinuous deployment. And so there is\nsome wizard who’s supposed to go set up\nthe latest server or integrate it with\nthe database or integrate it with the web\nwith a firewall or whatever they’re\nsupposed to do. Every manual step creates\nfurther opportunities for penetration,\nfor defects, because people are\nimperfect. Even the best person in your\nteam doing a process a hundred times\nmight do it wrong, one or two times. An\nautomated scanner is going to find that\ntime, and it’s going to break into your\nsystem before you know it.</p>\n<h2 id=\"let-s-talk-devsecops\">Let’s talk DevSecOps</h2>\n<p>DevSecOps—DevOps with security stuck\nright in the middle. And I think that’s a\ngood way of looking at this problem. 
We\nwant to integrate all the different parts\nof our engineering into one pool of\nautomation, and include security and\nquality assurance as part of that\nautomated process. We talked earlier\nabout automated testing being part of our\nbuilds. But we want to go much farther\nthan that, as technical teams. We want to\nstart from the beginning of our projects,\ntalking about how secure they need to be.\nWhat are the risks that they’re supposed\nto defend against or not create? We\nwant every member of the team to\nunderstand that system downtime—because\nsomebody broke in and trashed it, or, even\nworse, privacy violations which you can\nnever undo, because when people’s\npersonal information has been published,\nyou can’t unpublish it—we need to let our\nteam members know that these are\npriorities and put them on the to-do list\nfor the project. And we can’t call\nsomething done if the security part isn’t\ndone. It’s not something we tack on at\nthe end. We don’t build unsecured,\ncrazy, poorly architected apps, and then\nat the end, ask someone to build a brick\nwall around them. Because as soon as one\nlittle person gets through the brick\nwall, it’s open season. So, we want the\nengineers to know everything they do\nshould be checked for security. That’s a\nculture change to say that it’s\neveryone’s job.</p>\n<p>We need to integrate quality assurance\nwith security, which means somebody is\nchecking the software we wrote for\nweaknesses; somebody is trying to break\nin or, at least, trying to run tools that\nwill show us common ways to break in and\nwhether they’re present.</p>\n<p>And we need to inspect our cloud\nsystems that are running to make sure\nthat our deployment, and our system\noperations and administration, is as\nsecure as we meant it to be. Did somebody\nomit a step? We want to discover that\nright away and fix it. 
Or, ideally,\nautomate the way we set up all of our\nsystems using, for example, an\norchestration software package to\nautomatically configure our servers, so\nit isn’t the case that late in the day,\npeople are more likely to make a mistake.\nWell-written scripts do just as\ngood a job even when people are tired.</p>\n<p>And we want to make sure that all of\nour systems are updated and patched, and\nnot tell people that security is a waste\nof time and they should get back to work\non features.</p>\n<h2 id=\"process-tips\">Process tips</h2>\n<p>To do all this, we need to have a\nsimple design. And I would encourage\npeople to focus on the idea that\nsimplicity and modular design are great\nways to make a system easier to check for\nsecurity holes.</p>\n<p>We want to make sure that credentials\nthat are used in our modular\nsystems—where one piece of software is\nlogging into another service or a\ndatabase—are kept in\nproperly secured credential storage. A\ncommon form of security violation is you\nlook at somebody’s source code and… Oh\nlook! There’s the password for the\ndatabase server right there …because the\napp had to connect to the server. That’s\ninappropriate design. There are special\ncredential storage services—your team\nshould use them.</p>\n<p>And we want to make sure that quality\ncontrol remains central to our culture,\nas developers of software, and that\nincludes DevOps, that includes system\nadministration. Too often, we have a good\npiece of software, and then it’s deployed\nincorrectly. And that’s where the problem\noccurs. So if you’re going to test\nwhether your code is written properly,\nmaybe also test whether the servers are\nconfigured properly, from time to time.\nIt’s time well spent.</p>\n<h2 id=\"how-to-strengthen-your-security\">How to strengthen your security</h2>\n<p>So how can you move forward on\nsecurity? 
The good news is, while it may\nsound like a scary and intimidating area,\nthere are lots of practical steps you can\ntake right now, and you don’t even have\nto take them all at the same time, you\ncan take them incrementally. Here are\nsome great steps though that I highly\nrecommend.</p>\n<p>One is that—in your engineering team,\nand if you have multiple teams—in each\nengineering team somebody is explicitly\nthe security person. Somebody knows that\nit’s their job to keep an eye out for\nsecurity issues and prevention and that\nif there’s a problem they’re the person\nwho’s going to hear about it. They should\nhave the power to look into anything they\nneed to make sure there isn’t a security\nhole in the system.</p>\n<p>Use best practices from other\ncompanies. This is a great idea\nthroughout all of DevOps, including\nDevSecOps. You don’t have to reinvent\nanything. You can learn best practices\nand get a checklist together of what\nother companies have found helpful to\nlook for to find opportunities to secure\nyour system incrementally. We just piece\nby piece chip away at the risks that are\npresent in our systems. We don’t have to\nwait until some magic day when all of\nsecurity happens at once.</p>\n<p>Teach your people about security. A\nlot of security problems happen because\none person didn’t realize… Who didn’t\nknow that you’re not supposed to put\npasswords in the source code where\neveryone can see them? Well, one person\ntyped a password into the source code,\nbut now it’s there for everyone. So be\nsure that training on security, how\nimportant it is, and how to do it, is\navailable to everyone on your team. And\nmake sure that there’s a checklist. Who\ntook the security training? Who’s not\nbeen to security training yet?</p>\n<p>Scary but true fact: You should,\naccording to PricewaterhouseCoopers, if\nyou want to be a normal IT operation, be\nspending 11 to 15% of your IT budget on\nsecurity overall. That’s a significant\nnumber. 
And I think we can all agree that\nwith more internetworking and more\nimporting of modules, we, if\nanything, could be worried that that\nnumber is going to go up. So automation\nthrough DevOps is really a way to keep a\nlid on that number. But I wouldn’t think\nof it as a way to make that number drive\ndown towards zero. Security is everyone’s\njob, and it’s going to remain that\nway.</p>\n<p>Beyond that, I’d say use the\nother techniques we talked about earlier\nin this presentation. You don’t have to\nbe the next Equifax, with no\nmonitoring. You don’t have to allow silly\nmistakes by having no automation. And you\ndon’t have to create more security holes\nby reinventing your own tools and\nprocesses instead of using components. Reuse is your\nfriend.</p>\n<h2 id=\"7-tech-ideas-you-can-start-now\">7 tech ideas you can start now</h2>\n<p>I won’t spend too long on this, but I\nwanted this for people who are more\nhands-on or the people who are\nsupervising hands-on engineers. These are\nsome practical steps that you can take to\nstart turning on pieces of security,\nright now. Every one of these—except\nperhaps service-oriented architecture—is\nsomething that literally you could task\nsomebody to do this week or next\nweek.</p>\n<p>These are straightforward tasks.</p>\n<ol>\n<li>Ensure all databases have firewalls on them. 
They’re a common data breach source!</li>\n<li>Use a password manager to generate secure passwords; enable two-factor authentication.</li>\n<li>Use roles and policies to assign specific permissions to users and services instead of running everything from root credentials or privileged users.</li>\n<li>Use bastion hosts or VPNs to limit access to internal machines.</li>\n<li>Use service-oriented architecture (SOA) to break off components that need high privilege.</li>\n<li>Include code analysis tools in the dev process and enforce fixes prior to deployment.</li>\n<li>Test your servers with automated scanners for break-in vulnerabilities.</li>\n</ol>\n<h2 id=\"fast-to-market-reliable-and-secure\">Fast to market, reliable, and secure</h2>\n<p>It’s a winning formula!</p>\n<p>So, in short, you have a choice to\nturn on DevOps to use a lot of technology\nthat’s been solved, a lot of best\npractices and engineering techniques that\nhave already been solved and tested at\nnumerous other companies—clients of ours,\nfamous internet companies, everyone. When\nI say “everyone”, the truth is only a\nminority of companies are already using\nproper DevOps. But enough are that\nyou don’t have to be the first; you don’t\nhave to be the pioneer. DevOps is a\nwinning formula that will get you to\nmarket faster, more reliably, and\nwith better security. Or you could be the\nnext Equifax or the next Capital One,\nwhich is the default situation.</p>\n<h2 id=\"need-help-with-devops-security-and-privacy\">Need help with DevOps Security and Privacy?</h2>\n<p>FP Complete offers corporations its\nDevOps Success Program, which provides\nadvanced privacy and security software\nengineering mentoring, among many other\nmoving parts in the DevOps world.</p>\n<p>For more information, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us</a>.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/",
"slug": "devops-security-and-privacy-strategies",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps Security and Privacy Strategies",
"description": "DevOps Security and Privacy—FP Complete’s comprehensive, easy to understand guide designed to help you understand why they’re so critical to the safety of your DevOps strategy. The following is a transcription of a live webinar given by FP Complete Founder and Chairman Aaron Contorer, on FP Complete’s YouTube Channel. I’m the Founder and Chairman of FP Complete, where we […]",
"updated": null,
"date": "2020-05-29",
"year": 2020,
"month": 5,
"day": 29,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"insights"
]
},
"authors": [],
"extra": {
"author": "Aaron Contorer",
"blogimage": "/images/blog-listing/network-security.png"
},
"path": "/blog/devops-security-and-privacy-strategies/",
"components": [
"blog",
"devops-security-and-privacy-strategies"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introducing-aaron",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#introducing-aaron",
"title": "Introducing Aaron",
"children": []
},
{
"level": 2,
"id": "breaches-are-happening-far-too-often",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#breaches-are-happening-far-too-often",
"title": "Breaches are happening far too often",
"children": []
},
{
"level": 2,
"id": "opportunities-for-penetration-are-everywhere",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#opportunities-for-penetration-are-everywhere",
"title": "Opportunities for penetration are everywhere",
"children": []
},
{
"level": 2,
"id": "let-s-talk-devsecops",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#let-s-talk-devsecops",
"title": "Let’s talk DevSecOps",
"children": []
},
{
"level": 2,
"id": "process-tips",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#process-tips",
"title": "Process tips",
"children": []
},
{
"level": 2,
"id": "how-to-strengthen-your-security",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#how-to-strengthen-your-security",
"title": "How to strengthen your security",
"children": []
},
{
"level": 2,
"id": "7-tech-ideas-you-can-start-now",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#7-tech-ideas-you-can-start-now",
"title": "7 tech ideas you can start now",
"children": []
},
{
"level": 2,
"id": "fast-to-market-reliable-and-secure",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#fast-to-market-reliable-and-secure",
"title": "Fast to market, reliable, and secure",
"children": []
},
{
"level": 2,
"id": "need-help-with-devops-security-and-privacy",
"permalink": "https://tech.fpcomplete.com/blog/devops-security-and-privacy-strategies/#need-help-with-devops-security-and-privacy",
"title": "Need help with DevOps Security and Privacy?",
"children": []
}
],
"word_count": 2005,
"reading_time": 11,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/rapid-devops-success.md",
"colocated_path": null,
"content": "<p>Continuous integration and deployment, monitoring and logging, and security and privacy—FP Complete’s comprehensive, easy to understand guide designed to help you learn why those three DevOps strategies collectively create an environment where high-quality software can be developed quicker and more efficiently than ever before.</p>\n<p>Aaron Contorer, founder and chairman of FP Complete, presented the following webinar. Read below for a transcript of the video.</p>\n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/5U11unR_py0\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n<h2 id=\"introducing-aaron\">Introducing Aaron</h2>\n<p>I’m the Founder and Chairman of <a href=\"https://tech.fpcomplete.com/\">FP Complete</a>,\nwhere we help companies use\nstate-of-the-art tools and techniques to\nproduce secure, lightning-fast,\nfeature-rich software, faster and more\noften.</p>\n<p>Before founding FP Complete, I was an\nexecutive at Microsoft, where I served as\nprogram manager for distributed systems,\nand general manager of Visual C++, the\nleading software development tool at that\ntime. Also, I architected MSN’s move\nto Internet-based server software, served\nas the full-time technology adviser to\nBill Gates, and I founded and ran the\ncompany’s Productivity Tools Team for\ncomplex software engineering\nprojects.</p>\n<p>Okay, so enough about me. Let’s begin\nthis presentation by stating the\nobvious:</p>\n<h2 id=\"software-development-is-complicated\">Software development is complicated</h2>\n<p>As information technology and software\npeople, it’s easy to recognize how things\nare changing at an astonishing speed. To\nkeep pace, we need tools and processes\nthat allow us to rapidly deploy better\ncode more frequently with fewer errors.\nIs that a high bar to reach? Yes, of\ncourse, it is. 
But it absolutely must be\nmet—that is <em>if</em> you\nwant your company to survive.</p>\n<h2 id=\"inefficiencies-are-everywhere\">Inefficiencies are everywhere</h2>\n<p>In most companies, I would argue that\nthe information technology team and the\nsoftware engineering team are not totally\ntrusted by the rest of the company.</p>\n<p>Of course, I don’t mean they’re not\ntrusted as in they’re not good, smart\npeople. What I mean is that they don’t\nmeet their deadlines, leading to sprints\nbecoming longer than initially expected,\nultimately causing everyone to feel\nrushed and end results to lack\nquality.</p>\n<h2 id=\"it-has-lost-management-s-trust\">IT has lost management’s trust</h2>\n<p>When management begins to not trust\nengineering and IT, a bad dynamic\ndevelops. No longer does the team get to\nfocus on building great things for their\nend-users. Instead, they’re forced to\nfocus on solving their struggles and\ndealing with interpersonal friction.</p>\n<p>Believe it or not, the problems we’re\nhaving aren’t people-problems. It’s not\nthat they lack good intentions or\nbrainpower.</p>\n<p>Instead, the problem is this:</p>\n<h2 id=\"modern-software-ancient-tech\">Modern software, ancient tech</h2>\n<p><strong>Modern software development can’t be performed using ancient technologies applied within simplistic workflows.</strong></p>\n<p>I often like to say…</p>\n<p><em>“The best craftsperson with a\nhandsaw cannot do woodworking as\nefficiently as a robotic cutting\ntool.”</em></p>\n<p>When we automate our work, it becomes\nfaster and easier to replicate. We don’t\nbuild in lots of mistakes. 
As a result,\nwe get to move on with our lives instead\nof going back and reworking things over\nand over again.</p>\n<p>When we automate with good tools and\nbetter processes programmed in, and we\nrepeat this same process every time,\neveryone can trust that our work will be\nperformed with quality, and our systems\nwill be more safe and secure.</p>\n<p>Sounds ideal, doesn’t it? Of course,\nit does.</p>\n<p>But how do you do it? How do you\nevolve from the environment you’re\noperating in today to the utopia DevOps\nstrategies will allow you to live and\nwork within well into the future?</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/",
"slug": "rapid-devops-success",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Webinar Review: Learn Rapid DevOps Success",
"description": "Continuous integration and deployment, monitoring and logging, and security and privacy—FP Complete’s comprehensive, easy to understand guide designed to help you learn why...",
"updated": null,
"date": "2020-05-29",
"year": 2020,
"month": 5,
"day": 29,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Aaron Contorer",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/rapid-devops-success/",
"components": [
"blog",
"rapid-devops-success"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introducing-aaron",
"permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#introducing-aaron",
"title": "Introducing Aaron",
"children": []
},
{
"level": 2,
"id": "software-development-is-complicated",
"permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#software-development-is-complicated",
"title": "Software development is complicated",
"children": []
},
{
"level": 2,
"id": "inefficiencies-are-everywhere",
"permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#inefficiencies-are-everywhere",
"title": "Inefficiencies are everywhere",
"children": []
},
{
"level": 2,
"id": "it-has-lost-management-s-trust",
"permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#it-has-lost-management-s-trust",
"title": "IT has lost management’s trust",
"children": []
},
{
"level": 2,
"id": "modern-software-ancient-tech",
"permalink": "https://tech.fpcomplete.com/blog/rapid-devops-success/#modern-software-ancient-tech",
"title": "Modern software, ancient tech",
"children": []
}
],
"word_count": 582,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/rust-devops.md",
"colocated_path": null,
"content": "<p>On February 2, 2020, one of FP Complete's Lead Software Engineers—Mike McGirr—presented a webinar on using Rust for creating DevOps tooling.</p>\n<h2 id=\"webinar-outline\">Webinar Outline</h2>\n<p>FP Complete is hosting a functional programming\nwebinar on, “Learn Rapid Rust with DevOps Success\nStrategies.” A beginner’s guide including sample Rust\ndemonstration on writing your DevOps tools with Rust\nover Haskell. An introduction to Rust, with basic DevOps\nuse cases, and the library ecosystem, airing on\nFebruary 5th, 2020.</p>\n<p>The webinar will be hosted by Mike McGirr, a DevOps\nSoftware Engineer at FP Complete which will provide an\nabundance of Rust information with respect to\nfunctional programming and DevOps, featuring (safety,\nspeed and accuracy) that make it unique and contributes\nto its popularity, and its possible preference as a\nlanguage of choice for operating systems over Haskell,\nweb browsers and device drivers among others. The\nwebinar offers an interesting opportunity to learn and\nuse Rust in developing real world projects aside from\nHaskell or other functional programming languages\navailable today.</p>\n<h2 id=\"topics-covered\">Topics covered</h2>\n<p>During the webinar we will cover the following\ntopics:</p>\n<ul>\n<li>A quick intro and background into the Rust programming language</li>\n<li>Some scenarios and reasons why you would want to use Rust for writing your DevOps tooling (and some reasons why you wouldn’t)</li>\n<li>A small example of using the existing AWS libraries to create a basic DevOps tool</li>\n<li>How to Integrate FP into your Organization</li>\n</ul>\n<p>Mike Mcgirr, a Lead Software Engineer at FP\nComplete,will help us understand reasoning that\nsupports using Rust over other functional programming\nlanguages offered in the market today.</p>\n<h2 id=\"more-about-your-host\">More about your host</h2>\n<p>The webinar will be hosted by Mike McGirr, a veteran\nDevOps Software Engineer at FP 
Complete. With years of\nexperience in DevOps software development, Mike will\nwalk us through a first in a series of Rust webinars\ndiscussing why we would, and how we could utilize Rust\nas a functional programming language to build DevOps\nover other functional programming languages available\nin the market today. Mike will also share with us a\nsmall example script written in Rust showing how Rust\nmay be used.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/rust-devops/",
"slug": "rust-devops",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Rust with DevOps Success Strategies",
"description": "Wednesday Feb 5th, 2020, at 10:00 AM PST. Webinar Outline: FP Complete is hosting a functional programming webinar on, “Learn Rapid Rust with DevOps Success Strategies.” A beginner’s guide including sample Rust demonstration on writing your DevOps tools with Rust over Hasell. An introduction to Rust, with basic DevOps use cases, and the library ecosystem, […]",
"updated": null,
"date": "2020-02-05",
"year": 2020,
"month": 2,
"day": 5,
"taxonomies": {
"tags": [
"devops",
"rust",
"insights"
],
"categories": [
"functional programming",
"devops"
]
},
"authors": [],
"extra": {
"author": "Mike McGirr",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "/blog/rust-devops/",
"components": [
"blog",
"rust-devops"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "webinar-outline",
"permalink": "https://tech.fpcomplete.com/blog/rust-devops/#webinar-outline",
"title": "Webinar Outline",
"children": []
},
{
"level": 2,
"id": "topics-covered",
"permalink": "https://tech.fpcomplete.com/blog/rust-devops/#topics-covered",
"title": "Topics covered",
"children": []
},
{
"level": 2,
"id": "more-about-your-host",
"permalink": "https://tech.fpcomplete.com/blog/rust-devops/#more-about-your-host",
"title": "More about your host",
"children": []
}
],
"word_count": 351,
"reading_time": 2,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/",
"title": "Collect in Rust, traverse in Haskell and Scala"
}
]
},
{
"relative_path": "blog/what_is_govcloud.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2019/05/what_is_govcloud/",
"slug": "what-is-govcloud",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "What is GovCloud?",
"description": "Devops, FedRAMP Compliance, and Making your Migration to GovCloud Successful - What is GovCloud?",
"updated": null,
"date": "2019-05-28T17:54:00Z",
"year": 2019,
"month": 5,
"day": 28,
"taxonomies": {
"tags": [
"devops",
"aws",
"govcloud"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "J Boyer",
"html": "hubspot-blogs/what_is_govcloud.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/2019/05/what_is_govcloud/",
"components": [
"blog",
"2019",
"05",
"what_is_govcloud"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/deploying_haskell_apps_with_kubernetes.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/deploying_haskell_apps_with_kubernetes/",
"slug": "deploying-haskell-apps-with-kubernetes",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Deploying Haskell Apps with Kubernetes",
"description": "This webinar describes how to Deploy Haskell applications using Kubernetes. Topics to be discussed include creation of a Kube cluster using Terraform and Kops, describe pods, deployments, services, load balancers, etc., deployment of a built image using kubectl and deploy, and more.",
"updated": null,
"date": "2018-09-11T16:24:00Z",
"year": 2018,
"month": 9,
"day": 11,
"taxonomies": {
"tags": [
"haskell",
"devops"
],
"categories": [
"functional programming",
"devops"
]
},
"authors": [],
"extra": {
"author": "Robert Bobbett",
"html": "hubspot-blogs/deploying_haskell_apps_with_kubernetes.html",
"blogimage": "/images/blog-listing/kubernetes.png"
},
"path": "/blog/deploying_haskell_apps_with_kubernetes/",
"components": [
"blog",
"deploying_haskell_apps_with_kubernetes"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
"title": "Our history with containerization"
},
{
"permalink": "https://tech.fpcomplete.com/platformengineering/containerization/",
"title": "Containerization"
}
]
},
{
"relative_path": "blog/devsecops.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/devsecops/",
"slug": "devsecops",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevSecOps - Putting the Sec in DevOps",
"description": "With today's tremendous security pressures, DevOps teams are moving to continuous development and integration, but continuous security is harder to integrate. To better understand how to secure your DevOps and protect your network read on.",
"updated": null,
"date": "2018-07-18T13:11:00Z",
"year": 2018,
"month": 7,
"day": 18,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Robert Bobbett",
"html": "hubspot-blogs/devsecops.html",
"blogimage": "/images/blog-listing/network-security.png"
},
"path": "/blog/devsecops/",
"components": [
"blog",
"devsecops"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/deploying-rust-with-docker-and-kubernetes.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/",
"slug": "deploying-rust-with-docker-and-kubernetes",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Deploying Rust with Docker and Kubernetes",
"description": "Using a tiny Rust app to demonstrate deploying Rust with Docker and Kubernetes.",
"updated": null,
"date": "2018-07-17T14:36:00Z",
"year": 2018,
"month": 7,
"day": 17,
"taxonomies": {
"tags": [
"rust",
"devops",
"kubernetes"
],
"categories": [
"functional programming",
"devops"
]
},
"authors": [],
"extra": {
"author": "Chris Allen",
"html": "hubspot-blogs/deploying-rust-with-docker-and-kubernetes.html",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "/blog/2018/07/deploying-rust-with-docker-and-kubernetes/",
"components": [
"blog",
"2018",
"07",
"deploying-rust-with-docker-and-kubernetes"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
"title": "Levana NFT Launch"
},
{
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
"title": "Our history with containerization"
},
{
"permalink": "https://tech.fpcomplete.com/blog/rust-kubernetes-windows/",
"title": "Deploying Rust with Windows Containers on Kubernetes"
},
{
"permalink": "https://tech.fpcomplete.com/platformengineering/containerization/",
"title": "Containerization"
}
]
},
{
"relative_path": "blog/devops-to-prepare-for-a-blockchain-world.md",
"colocated_path": null,
"content": "<h2 id=\"introduction\">Introduction</h2>\n<p>As the world adopts blockchain technologies, your IT infrastructure — and its\npredictability — become critical. Many companies lack the levels of automation\nand control needed to survive in this high-opportunity, high-threat environment.</p>\n<p>Are your software, cloud, and server systems automated and robust enough? Do you\nhave enough quality control for both your development and your online operations?\nOr will you join the list of companies bruised by huge data breaches and loss o\nf control over their own computer systems? If you are involved in blockchain, or\nany industry for that matter, these are the questions you need to ask yourself.</p>\n<p>Blockchain will require you to put more information online than ever before,\ncreating huge exposures for organizations that do not have a handle on their\nsecurity. Modern DevOps technologies, including many open-source systems, offer\npowerful solutions that can improve your systems to a level suitable for use with\nblockchain.</p>\n<h2 id=\"are-companies-really-ready-for-blockchain-technology\">Are companies REALLY ready for Blockchain technology?</h2>\n<p>The answer to it is most of the companies are NOT and those who are need to audit\nor reevaluate whether they are. The reason is BlockChain puts data to public making\nit prone to outside attacks if systems are not hardenend and updated on timely\nmanner.</p>\n<p>Big companies such as Equifax had millions of records stolen, Heartland credit\nprocessing was hacked and eventually had to pay 110 million and Airbus A400M due \nto wrong installation of manual software patch resulted in death of everyone on\non the plain. These are few of many such big companies that was hacked due to poorly\nimplemented IT technology.</p>\n<p>Once hailed as unhackable, blockchains are now getting hacked. 
According to a MIT\ntechnology review, hackers have stolen nearly $2 billion worth of cryptocurrency\nsince the beginning of 2017.</p>\n<h2 id=\"big-question-why-companies-are-getting-hacked\">Big Question: Why Companies are getting hacked ?</h2>\n<p>Blockchain itself isn't always the problem. Sometimes the blockchain is secure \nbut the IT infrastructure is not capable to supporting it. There are cases where \nopen firewalls, unencrypted data, poor testing and manual errors were reasons \nbehind the hacking.</p>\n<p>So, the question to ask is: Is the majority of your IT infrastructure secure \nand reliable enough to support Blockchain Technology ?</p>\n<h2 id=\"what-is-an-it-factory\">What is an IT Factory ?</h2>\n<p>IT factory as per <a href=\"https://www.fpcomplete.com/our-team/\">Aaron Contorer</a>, founder \nand Chariman of FP Complete is divided into 3 parts</p>\n<ol>\n<li>Development</li>\n<li>Deployment</li>\n<li>System Operations</li>\n</ol>\n<p>If IT factory is implemented properly at each stage it could result in a new and\nbetter IT services leading to a more reliable, scalable and secure environment.</p>\n<p>Deployment is a bridge that allows software running on a developer laptop all the\nway to a scalable system and running Ops for monitoring. With DevOps practice,\nwe can ensure all the three stages of IT factory implemented.</p>\n<p>But, the key to build a working IT factory is Automation that ensure each step\nin the deployment process is reliable. With microservices architecture ,building\nand testing a reliable containerized based system is much easier now compared to\nthe earlier days.</p>\n<p>The only way to ensure a reliable, reproducible system is if companies start\nautomating each step of their software life cycle journey. 
Companies that are ensuring\ngood DevOps practices have a robust IT infrastructure compared to those that are\nNOT.</p>\n<h2 id=\"devops-for-blockchain\">DevOps for Blockchain</h2>\n<p>DevOps tools helps BlockChain better as it can ensure all code is tracked, tested,\ndeployed automatically, audited and Quality Assurance tested along each stage of\nthe delivery pipeline.</p>\n<p>The other benefits of having DevOps methods implemented in BlockChain is that it \nreduces the overall operational cost to companies, speeds up the overall pace of \nsoftware development and release cycle, improves the software quality and increases\nthe productivity.</p>\n<p>The following DevOps methods, if implemented in Blockchain, can be very helpful</p>\n<p><strong>1. Engineer for Safety</strong></p>\n<ul>\n<li>With proper version control tool like GITHUB , source code can be viewed,\ntracked with proper history of all changes to the base</li>\n<li>Development tools used by developers should be of the same version, should be\ntracked and should be uniform across the project</li>\n<li>Continuous Integration (CI) pipeline must be implemented at the development\nstage to ensure nothing breaks on each commit. There are tools such as Jenkins,\nBamboo, Code Pipeline and many more that can help in setting up a proper CI .</li>\n<li>Each commit should be properly tested using test case management system with\nproper unit test cases for each commit</li>\n<li>Each Project should also have an Issue tracking system like JIRA, GITLAB etc\nto ensure all requests are properly tracked and closed.</li>\n</ul>\n<p><strong>2. 
Deploy for Safety</strong></p>\n<ul>\n<li>Continuous Deployment via DevOps tools to ensure code is automatically deployed\nto each environment</li>\n<li>Each environment (Development, Testing, DR, Production) should be a replica\nof each other</li>\n<li>Allow automation to setup all relevant infrastructure related to allow successful\ndeployment of code</li>\n<li>Setup infrastructure as code (IAC) to provision infrastructure that helps in\nreducing manual errors</li>\n<li>Sanity of each deployment by running test cases to ensure each component is\nfunctioning as expected</li>\n<li>Running Security testing after each Deployment on each environment</li>\n<li>Ensure system can be RollBack/Rollforward without any manual intervention like\nCanary/Blue-Green Deployment</li>\n<li>Use container based deployments that provide more reliability for deployments</li>\n</ul>\n<p><strong>3. Operate for Safety</strong></p>\n<ul>\n<li>Set up Continuous Automated Monitoring and Logging</li>\n<li>Set up Anomaly detection and alerting mechanism</li>\n<li>Set up Automated Response and Recovery for any failures</li>\n<li>Ensure a Highly Available and scalable system for reliability</li>\n<li>Ensure data is encrypted for all outbound and inbound communication</li>\n<li>Ensure separation of admin powers, database powers, deployment powers , user \naccess etc. The more the powers are separated the lesser the risk</li>\n</ul>\n<p><strong>4. Separate for Safety</strong></p>\n<ul>\n<li>Separate each system internally from each other by using multiple small networks.\nFor Eg: database/backend on private subnets while UI on public subnets</li>\n<li>Set Internal and MutFirewalls ensure the database systems are protected with no access</li>\n<li>Separate Responsibility and credentials for reduce risk of exposure</li>\n</ul>\n<p><strong>5. 
Human systems</strong></p>\n<p>Despite keeping hardware and software checks, most the breaking of blockchain\nsystems today has happened because of "People" or "Human Errors".</p>\n<p>Most people try hacks/workaround to get stuff working on production with no knowledge\non the impacts it could do on the system. Sometimes these stuff are not documented\nmaking it hard for the other person to fix it. Sometimes asking others to login\nto unauthorized systems by sharing credentials over calls paves a path for unsecure\nsystems</p>\n<p>To ensure companies must,</p>\n<ul>\n<li>Train people to STOP doing manual efforts to fix a broken system.</li>\n<li>Train people NOT to do "Social Engineering" like asking colleagues \nto login to systems on their behalf, sharing passwords etc.</li>\n</ul>\n<p><strong>6. Quality Assurance</strong></p>\n<ul>\n<li>Need to review the Architectural as well as best practices are ensured in the\nproduct life cycle</li>\n<li>Need to ensure the code deploy pipeline has scope for penetration Testing</li>\n<li>Need to ensure there is weekly/monthly auditing of metrics, logs , systems to\ncheck for threats to the systems</li>\n<li>Each component and patch on system should be tested and approved by QA before\nrolling out to Production</li>\n<li>Companies could also hire third parties to audit their system on their behalf</li>\n</ul>\n<h2 id=\"how-to-get-there\">How to get there ?</h2>\n<p>The good news is "IT IS POSSIBLE". There is no need for giant or all-in-one solutions.</p>\n<p>Companies that are starting fresh need to start at the early phase of development\nto building a reliable system by focussing on above 6 points mentioned above. They\nneed to start thinking on all areas in the "Plan and Design" phase itself.</p>\n<p>For companies who are already on production or nearing production does not need\nto have to start fresh . 
They can start making incremental progress but it needs\nto start TODAY.</p>\n<p>Automation is the only SCIENCE in IT that can reduce errors and help towards building \na more and more reliable system. It will in the future save money and resources that \ncan be redirected to focus on other areas.</p>\n<p>To conclude, <a href=\"https://www.fpcomplete.com\">FP Complete</a> has been a leading consultant \non providing DevOps services. We excel at what we do and if you are looking to implement \nDevOps in your BlockChain. Please feel free to reach out to us for free consultations.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/",
"slug": "devops-to-prepare-for-a-blockchain-world",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps to Prepare for a Blockchain World",
"description": "This webinar describes how Devops can be used to prepare any company that is interested in adopting blockchain technology. Many companies lack the level of automation and control needed to survive in this high-opportunity, high-threat environment but DevOps technologies can offer powerful solutions.",
"updated": null,
"date": "2018-06-07T08:03:00Z",
"year": 2018,
"month": 6,
"day": 7,
"taxonomies": {
"categories": [
"functional programming",
"devops"
],
"tags": [
"devops",
"blockchain"
]
},
"authors": [],
"extra": {
"author": "FP Complete Team",
"blogimage": "/images/blog-listing/distributed-ledger.png"
},
"path": "/blog/devops-to-prepare-for-a-blockchain-world/",
"components": [
"blog",
"devops-to-prepare-for-a-blockchain-world"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introduction",
"permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#introduction",
"title": "Introduction",
"children": []
},
{
"level": 2,
"id": "are-companies-really-ready-for-blockchain-technology",
"permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#are-companies-really-ready-for-blockchain-technology",
"title": "Are companies REALLY ready for Blockchain technology?",
"children": []
},
{
"level": 2,
"id": "big-question-why-companies-are-getting-hacked",
"permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#big-question-why-companies-are-getting-hacked",
"title": "Big Question: Why Companies are getting hacked ?",
"children": []
},
{
"level": 2,
"id": "what-is-an-it-factory",
"permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#what-is-an-it-factory",
"title": "What is an IT Factory ?",
"children": []
},
{
"level": 2,
"id": "devops-for-blockchain",
"permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#devops-for-blockchain",
"title": "DevOps for Blockchain",
"children": []
},
{
"level": 2,
"id": "how-to-get-there",
"permalink": "https://tech.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#how-to-get-there",
"title": "How to get there ?",
"children": []
}
],
"word_count": 1354,
"reading_time": 7,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/controlling-access-to-nomad-clusters.md",
"colocated_path": null,
"content": "<p>In this blog post, we will learn how to control access to nomad.</p>\n<h2 id=\"introduction\">Introduction</h2>\n<p>Nomad is an application scheduler, that helps you schedule application-processes\nefficiently, across multiple servers, and keep your infrastructure costs low.\nNomad is capable of scheduling containers, virtual machines, as well as isolated\nforked processes.</p>\n<p>There are other schedulers available, such as Kubernetes, Mesos or Docker Swarm,\nbut each has different mechanisms for securing access. By following this post,\nyou will understand the main components in securing your Nomad cluster, but the\noverall idea is valid across any of the other schedulers available.</p>\n<p>One of Nomad's selling points, and why you could consider it over tools like\nKubernetes, is that you can schedule not only containers, but also QEMU\nimages, LXC, isolated <code>fork/exec</code> processes, and even Java applications in a\nchroot(!). All you need is a driver implemented for Nomad. 
On the other hand,\nits community is smaller than Kubernetes, so the tradeoffs have to be measured\non a project-by-project basis.</p>\n<p>We will start by deploying a test cluster and configuring access control lists\n(ACLs).</p>\n<h2 id=\"overview\">Overview</h2>\n<ul>\n<li>Nomad uses tokens to authenticate client requests.</li>\n<li>Each token is associated with policies.</li>\n<li>Policies are a collection of rules to allow or deny operations on resources.</li>\n</ul>\n<p>In this tutorial, we will:</p>\n<ol>\n<li>Setup our environment to run nomad inside a Vagrant virtual machine for running experiments</li>\n<li>We generate a root/admin token (usually known as the "management" token) and activate ACLs</li>\n<li>Using the management token, we add a new "non-admin" policy and create a token associated with this new policy</li>\n<li>Use the "non-admin" token to demonstrate access control.</li>\n</ol>\n<h2 id=\"setup-the-environment\">Setup the environment</h2>\n<p>Pre-requisites:</p>\n<ul>\n<li>POSIX shell, such as GNU Bash</li>\n<li>Vagrant > <code>2.0.1</code></li>\n<li>Nomad demo <a href=\"https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/Vagrantfile\"><code>Vagrantfile</code></a></li>\n</ul>\n<p>We will run everything from within a virtual machine with all the necessary\nconfiguration and applications. 
Execute the following commands on your shell:</p>\n<pre><code>$ cd $(mktemp --directory)\n$ curl -LO https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/Vagrantfile\n$ vagrant up\n ...\n lines and lines of Vagrant output\n this might take a while\n ...\n$ vagrant ssh\n ...\n Message of the day greeting from VM\n Anything after this point is being executed inside the virtual machine\n ...\nvagrant@nomad:~$ nomad version\nNomad vX.X.X\nvagrant@nomad:~$ uname -n\nnomad\n</code></pre>\n<p>Depending on your system and the version of <code>Vagrantfile</code> used, the prompt may\nbe different.</p>\n<h2 id=\"setup-nomad\">Setup Nomad</h2>\n<p>We configure nomad to execute both as server and client for convenience, as\nopposed to a production environment where the server is remote and client is\nlocal to each machine or node. Create a <code>nomad-agent.conf</code> with the following\ncontents:</p>\n<pre><code>bind_addr = "0.0.0.0"\ndata_dir = "/var/lib/nomad"\nregion = "global"\nacl {\n enabled = true\n}\nserver {\n enabled = true\n bootstrap_expect = 1\n authoritative_region = "global"\n}\nclient {\n enabled = true\n}\n</code></pre>\n<p>Then, execute:</p>\n<pre><code>vagrant@nomad:~$ sudo nomad agent -config=nomad-agent.conf # sudo is needed to run as a client\n</code></pre>\n<p>You should see output indicating that Nomad is running.</p>\n<blockquote>\n<p>Clients need root access to be able to execute processes, while servers only\ncommunicate to synchronize state.</p>\n</blockquote>\n<h2 id=\"acl-bootstrap\">ACL Bootstrap</h2>\n<p>On another terminal, after running <code>vagrant ssh</code> from our temporary working\ndirectory, run the following command:</p>\n<pre><code>vagrant@nomad:~$ nomad acl bootstrap\n\nAccessor ID = 2f34299b-0403-074d-83e2-60511341a54c\nSecret ID = 9fff6a06-b991-22db-7fed-55f17918e846\nName = Bootstrap Token\nType = management\nGlobal = true\nPolicies = n/a\nCreate Time = 2018-02-14 19:09:23.424119008 +0000 UTC\nCreate Index = 
13\nModify Index = 13\n</code></pre>\n<p>This <code>Secret ID</code> is our <code>management</code> (admin) token. This token is valid globally\nand all operations are permitted. No policies are necessary while authenticating\nwith the management token, and so, none are configured by default.</p>\n<p>It is important to copy the <code>Accessor ID</code> and <code>Secret ID</code> to some file, for\nsafekeeping, as we will need these values later. For a production environment,\nit is safest to store these in a separate vault permanently.</p>\n<p>Once ACLs are on, all operations are denied <em>unless</em> a valid token is provided\nwith each request, and the operation we want is allowed by a policy associated\nwith the provided token.</p>\n<pre><code>vagrant@nomad:~$ nomad node-status\nError querying node status: Unexpected response code: 403 (Permission denied)\n\nvagrant@nomad:~$ export NOMAD_TOKEN='9fff6a06-b991-22db-7fed-55f17918e846' # Secret ID, above\nvagrant@nomad:~$ nomad node-status\n\nID DC Name Class Drain Status\n1f638a17 dc1 nomad <none> false ready\n</code></pre>\n<h2 id=\"designing-policies\">Designing policies</h2>\n<p>Policies are a collection of (ideally, non-overlapping) roles, that provide\naccess to different operations. 
The table below shows typical users of a Nomad\ncluster.</p>\n<table><thead><tr><th>Role</th><th>Namespace</th><th>Agent</th><th>Node</th><th>Remarks</th></tr></thead><tbody>\n<tr><td>Anonymous</td><td><code>deny</code></td><td><code>deny</code></td><td><code>deny</code></td><td>Unnecessary, as token-less requests are denied all operations.</td></tr>\n<tr><td>Developer</td><td><code>write</code></td><td><code>deny</code></td><td><code>read</code></td><td>Developers are permitted to debug their applications, but not to perform cluster management</td></tr>\n<tr><td>Logger</td><td><code>list-jobs</code>, <code>read-logs</code></td><td><code>deny</code></td><td><code>read</code></td><td>Automated log aggregators or analyzers that need read access to logs</td></tr>\n<tr><td>Job requester</td><td><code>submit-job</code></td><td><code>deny</code></td><td><code>deny</code></td><td>CI systems create new jobs, but don't interact with running jobs.</td></tr>\n<tr><td>Infrastructure</td><td><code>read</code></td><td><code>write</code></td><td><code>write</code></td><td>DevOps teams perform cluster management but seldom need to interact with running jobs.</td></tr>\n</tbody></table>\n<blockquote>\n<p>For namespace access, <code>read</code> is equivalent to\n<code>[read-job, list-jobs]</code>. <code>write</code> is equivalent to\n<code>[list-jobs, read-job, submit-job, read-logs, read-fs, dispatch-job]</code>.</p>\n</blockquote>\n<blockquote>\n<p>In the event that operators do need to have access to namespaces, one can\nalways create a token that has <em>both</em> Developer and Infrastructure policies\nattached. This is equivalent to having a <code>management</code> token.</p>\n</blockquote>\n<p>We have left out multi-region and multi-namespace setups here. We have assumed\neverything to be running under the <code>default</code> namespace. 
It should be noted that\non production deployments, with much larger needs, the policies could be\ndesigned per-namespace, and tracked between regions.</p>\n<h2 id=\"policy-specification\">Policy specification</h2>\n<p>Policies are expressed by a combination of rules Note that the <code>deny</code> rule will\npreside over any conflicting capability.</p>\n<p>Nomad accepts a JSON payload with the name and description of a policy, along\nwith a <em>quoted</em> JSON or HCL document with rules, like the following.</p>\n<pre><code>{\n "Description": "Agent and node management",\n "Name": "infrastructure",\n "Rules": "{\\"agent\\":{\\"policy\\":\\"write\\"},\\"node\\":{\\"policy\\":\\"write\\"}}"\n}\n</code></pre>\n<p>This policy matches what we have in the table above.\nCreate an <code>infrastructure.json</code> with the content above for use in the next step.</p>\n<blockquote>\n<p>TIP:</p>\n<p>To avoid error-prone quoting, one could write the policies in YAML:</p>\n<pre><code>Name: infrastructure\nDescription: Agent and node management\nRules:\n agent:\n policy: write\n node:\n policy: write\n</code></pre>\n<p>And then, convert them to JSON with the necessary quoting, by:</p>\n<pre><code>$ yaml2json < infrastructure.yaml | jq '.Rules = (.Rules | @text)' > infrastructure.json\n</code></pre>\n</blockquote>\n<h2 id=\"adding-a-policy\">Adding a policy</h2>\n<p>To add the policy, simply make an HTTP POST request to the server. 
The\n<code>NOMAD_TOKEN</code> below is the "management" token that we first created.</p>\n<pre><code>vagrant@nomad:~$ curl \\\n --request POST \\\n --data @infrastructure.json \\\n --header "X-Nomad-Token: ${NOMAD_TOKEN}" \\\n https://127.0.0.1:4646/v1/acl/policy/infrastructure\n\nvagrant@nomad:~$ nomad acl policy list\nName Description\ninfrastructure Agent and node management\n\nvagrant@nomad:~$ nomad acl policy info infrastructure\nName = infrastructure\nDescription = Agent and node management\nRules = {"agent":{"policy":"write"},"node":{"policy":"write"}}\nCreateIndex = 425\nModifyIndex = 425\n</code></pre>\n<h2 id=\"creating-a-token-for-a-policy\">Creating a token for a policy</h2>\n<p>We now create a token for the <code>infrastructure</code> policy, and attempt a few operations\nwith it:</p>\n<pre><code>vagrant@nomad:~$ nomad acl token create \\\n -name='devops-team' \\\n -type='client' \\\n -global='true' \\\n -policy='infrastructure'\n\nAccessor ID = 927ea7a4-e689-037f-be89-54a2cdbd338c\nSecret ID = 26832c8d-9315-c1ef-aabf-2058c8632da8\nName = devops-team\nType = client\nGlobal = true\nPolicies = [infrastructure]\nCreate Time = 2018-02-15 19:53:59.97900843 +0000 UTC\nCreate Index = 432\nModify Index = 432\n\nvagrant@nomad:~$ export NOMAD_TOKEN='26832c8d-9315-c1ef-aabf-2058c8632da8' # change the token to the new one with the "infrastructure" policy attached\nvagrant@nomad:~$ nomad status\nError querying jobs: Unexpected response code: 403 (Permission denied)\n\nvagrant@nomad:~$ nomad node-status\nID DC Name Class Drain Status\n1f638a17 dc1 nomad <none> false ready\n</code></pre>\n<p>As you can see, anyone with the <code>devops-team</code> token will be allowed to\nrun operations on nodes, but not on jobs -- i.e. on namespace resources.</p>\n<h2 id=\"where-to-go-next\">Where to go next</h2>\n<p>The example above demonstrates adding one of the policies from our list at the\nbeginning. 
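The other policies in the table follow the same pattern. As a sketch (the\npolicy name and description here are our own; adjust them to taste), the\nDeveloper role could be expressed like this:</p>\n<pre><code>{\n "Description": "Application debugging",\n "Name": "developer",\n "Rules": "{\\"namespace\\":{\\"default\\":{\\"policy\\":\\"write\\"}},\\"node\\":{\\"policy\\":\\"read\\"}}"\n}\n</code></pre>\n<p>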
Adding the rest of them and trying different commands could be a\ngood exercise.</p>\n<p>As a reference, the FP Complete team maintains a\n<a href=\"https://github.com/fpco/nomad-acl-policies\">repository</a> with\npolicies ready for use.</p>\n<h4 id=\"related-articles\">Related articles</h4>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/\">DevOps best practices: immutability</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/\">How to implement containers to streamline your DevOps workflow</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2016/05/stack-security-gnupg-keys/\">Stack security: GnuPG keys</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/",
"slug": "controlling-access-to-nomad-clusters",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Controlling access to Nomad clusters",
"description": "Learn how to control access to your Nomad clusters on a per-role basis. This will get you the benefits of application schedulers such as Nomad and Kubernetes with all the security guarantees your services need but without the complex and lengthy setup that some other popular tools demand.",
"updated": null,
"date": "2018-05-17T13:21:00Z",
"year": 2018,
"month": 5,
"day": 17,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops"
]
},
"authors": [],
"extra": {
"author": "FP Complete Team",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/2018/05/controlling-access-to-nomad-clusters/",
"components": [
"blog",
"2018",
"05",
"controlling-access-to-nomad-clusters"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introduction",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#introduction",
"title": "Introduction",
"children": []
},
{
"level": 2,
"id": "overview",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#overview",
"title": "Overview",
"children": []
},
{
"level": 2,
"id": "setup-the-environment",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#setup-the-environment",
"title": "Setup the environment",
"children": []
},
{
"level": 2,
"id": "setup-nomad",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#setup-nomad",
"title": "Setup Nomad",
"children": []
},
{
"level": 2,
"id": "acl-bootstrap",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#acl-bootstrap",
"title": "ACL Bootstrap",
"children": []
},
{
"level": 2,
"id": "designing-policies",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#designing-policies",
"title": "Designing policies",
"children": []
},
{
"level": 2,
"id": "policy-specification",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#policy-specification",
"title": "Policy specification",
"children": []
},
{
"level": 2,
"id": "adding-a-policy",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#adding-a-policy",
"title": "Adding a policy",
"children": []
},
{
"level": 2,
"id": "creating-a-token-for-a-policy",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#creating-a-token-for-a-policy",
"title": "Creating a token for a policy",
"children": []
},
{
"level": 2,
"id": "where-to-go-next",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#where-to-go-next",
"title": "Where to go next",
"children": [
{
"level": 4,
"id": "related-articles",
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#related-articles",
"title": "Related articles",
"children": []
}
]
}
],
"word_count": 1393,
"reading_time": 7,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/continuous-integration-delivery-best-practices.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/continuous-integration-delivery-best-practices/",
"slug": "continuous-integration-delivery-best-practices",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Best practices when implementing continuous integration and delivery",
"description": "Although there are countless reasons to ditch the old ways of development and adopt DevOps practices, the change from one to the other can be an intimidating task. Use these best practices to ensure your company succeeds during these transitions.",
"updated": null,
"date": "2018-04-11T12:49:00Z",
"year": 2018,
"month": 4,
"day": 11,
"taxonomies": {
"categories": [
"devops",
"kube360"
],
"tags": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Deni Bertovic",
"html": "hubspot-blogs/continuous-integration-delivery-best-practices.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/continuous-integration-delivery-best-practices/",
"components": [
"blog",
"continuous-integration-delivery-best-practices"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/fintech-best-practices-devops-priorities-for-financial-technology-applications.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/fintech-best-practices-devops-priorities-for-financial-technology-applications/",
"slug": "fintech-best-practices-devops-priorities-for-financial-technology-applications",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "FinTech best practices: DevOps Priorities for Financial Technology Applications",
"description": "Modern software development is complicated, but developing software for the FinTech industry adds a whole new dimension of complexity. Adopting modern DevOps principles will ensure your software adheres to FinTech best practices. This blog explains how you can get started and be successful.",
"updated": null,
"date": "2018-04-05T12:21:00Z",
"year": 2018,
"month": 4,
"day": 5,
"taxonomies": {
"categories": [
"devops",
"kube360"
],
"tags": [
"devops",
"fintech"
]
},
"authors": [],
"extra": {
"author": "Aaron Contorer",
"html": "hubspot-blogs/fintech-best-practices-devops-priorities-for-financial-technology-applications.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/fintech-best-practices-devops-priorities-for-financial-technology-applications/",
"components": [
"blog",
"fintech-best-practices-devops-priorities-for-financial-technology-applications"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/recover-your-elasticsearch.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2018/04/recover-your-elasticsearch/",
"slug": "recover-your-elasticsearch",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Recover your Elasticsearch",
"description": "When using Elasticsearch you may run into cluster problems that could lose data because of a corrupt index. All is not lost because there are ways to recover your Elasticsearch. Find out how to bring the cluster to a healthy state with minimal or no data loss in such a situation.",
"updated": null,
"date": "2018-04-03T13:42:00Z",
"year": 2018,
"month": 4,
"day": 3,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Alexey Kuleshevich",
"html": "hubspot-blogs/recover-your-elasticsearch.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/2018/04/recover-your-elasticsearch/",
"components": [
"blog",
"2018",
"04",
"recover-your-elasticsearch"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/without-performance-tests-we-will-have-a-bad-time-forever.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/without-performance-tests-we-will-have-a-bad-time-forever/",
"slug": "without-performance-tests-we-will-have-a-bad-time-forever",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Without performance tests, we will have a bad time, forever",
"description": "When writing Haskell code, you cannot assume performance is optimized. You must utilize automated testing and eliminate human inspection. Performance regression is not an option, or you will have a bad time.",
"updated": null,
"date": "2018-03-15T11:36:00Z",
"year": 2018,
"month": 3,
"day": 15,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"haskell"
]
},
"authors": [],
"extra": {
"author": "Niklas Hambüchen",
"html": "hubspot-blogs/without-performance-tests-we-will-have-a-bad-time-forever.html",
"blogimage": "/images/blog-listing/qa.png"
},
"path": "/blog/without-performance-tests-we-will-have-a-bad-time-forever/",
"components": [
"blog",
"without-performance-tests-we-will-have-a-bad-time-forever"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/how-to-implement-containers-to-streamline-your-devops-workflow.md",
"colocated_path": null,
"content": "<h1 id=\"what-are-docker-containers\">What are Docker Containers?</h1>\n<p>Docker containers are a form of "lightweight" virtualization. They allow a\nprocess or process group to run in an environment with its own file system,\nsomewhat like <code>chroot</code> jails, and also with its own process table, users and\ngroups, and, optionally, virtual network and resource limits. For most purposes,\nthe processes in a container think they have an entire OS to themselves and do\nnot have access to anything outside the container (unless explicitly granted).\nThis lets you precisely control the environment in which your processes run,\nallows multiple processes on the same (virtual) machine that have completely\ndifferent (even conflicting) requirements, and significantly increases isolation\nand container security.</p>\n<p>In addition to containers, Docker makes it easy to build and distribute images\nthat wrap up an application with its complete runtime environment.</p>\n<p>For more information, see\n<a href=\"https://www.cio.com/article/2924995/software/what-are-containers-and-why-do-you-need-them.html\">What are containers and why do you need them?</a>\nand\n<a href=\"https://containerjournal.com/2017/01/11/containers-devops-anyway/\">What Do Containers Have to Do with DevOps, Anyway?</a>.</p>\n<h1 id=\"containers-vs-virtual-machines-vms\">Containers vs Virtual Machines (VMs)</h1>\n<p>The difference between the "lightweight" virtualization of containers and the\n"heavyweight" virtualization of VMs boils down to the fact that, for the former, the\nvirtualization happens at the kernel level, while for the latter it happens at\nthe hypervisor level. 
In other words, all the containers on a machine share the\nsame kernel, and code in the kernel isolates the containers from each other\nwhereas each VM acts like separate hardware and has its own kernel.</p>\n<img alt=\"Docker Carrying Haskell.jpg\" sizes=\"(max-width: 320px) 100vw, 320px\" src=\"/images/hubspot/4536576cadee37e3ea1e0a35a83a97a55015af6773242ecda5f919a7f1628cc5.jpeg\" srcset=\"/images/hubspot/04a7b5b957c890331f8535859d7c8528eadf4d83c82ae65e86ea28fea6f82898.jpeg 160w, /images/hubspot/4536576cadee37e3ea1e0a35a83a97a55015af6773242ecda5f919a7f1628cc5.jpeg 320w, /images/hubspot/1652b04e09bee96b23e47adb5830543a1feac5a48d5488b22602cec12a1b131d.jpeg 480w, /images/hubspot/4a5e5498d817ee00db5fdc27b5827a41a41d07253d95f73e093809cd27d6ea45.jpeg 640w, /images/hubspot/d77567fd61f4146be574d81e707b90ca7f80f3005770d6ef527ff656eb9b913d.jpeg 800w, /images/hubspot/f9971781a2d67ed9b0b30a5798652fdc1975985603d9fde0b60bf89de73faa7a.jpeg 960w\" style=\"width: 320px; margin: 0px 0px 10px 10px; letter-spacing: -0.08px; float: right;\" width=\"320\">\n<p>Containers are much less resource intensive than VMs because they do not need\nto be allocated exclusive memory and file system space or have the overhead of\nrunning an entire operating system. This makes it possible to run many more\ncontainers on a machine than you would VMs. Containers start nearly as fast as\nregular processes (you don't have to wait for the OS to boot), and parts of the\nhost's file system can be easily "mounted" into the container's file system\nwithout any additional overhead of network file system protocols.</p>\n<p>On the other hand, isolation is less guaranteed. If not careful, you can\noversubscribe a machine by running containers that need more resources than the\nmachine has available (this can be mitigated by setting appropriate resource\nlimits on containers). 
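For example, with plain Docker one might cap a container's memory and CPU\nwhen starting it (the image name here is hypothetical):</p>\n<pre><code>$ docker run --memory=512m --cpus=1.5 example/myapp:1.0\n</code></pre>\n<p>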
While container security is an improvement over normal\nprocesses, the shared kernel means the attack surface is greater and there is\nmore risk of leakage between containers than there is between VMs.</p>\n<p>For more information, see <a href=\"https://blog.netapp.com/blogs/containers-vs-vms/\">Docker containers vs. virtual machines: What's the\ndifference?</a> and <a href=\"https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/\">DevOps Best\nPractices: Immutability</a>.</p>\n<h1 id=\"how-docker-containers-enhance-continuous-delivery-pipelines\">How Docker Containers Enhance Continuous Delivery Pipelines</h1>\n<p>There are, broadly, two areas where containers fit into your DevOps\nworkflow: for builds, and for deployment. They are often used together,\nbut do not have to be.</p>\n<h3 id=\"builds\">Builds</h3>\n<ul>\n<li>\n<p><strong>Synchronizing build environments:</strong> It can be difficult to keep\nbuild environments synchronized between developers and CI/CD\nservers, which can lead to unexpected build failures or changes in\nbehaviour. Docker images let you specify <em>exactly</em> the build tools,\nlibraries, and other dependencies (including their versions)\nrequired without needing to install them on individual machines, and\ndistribute those images easily. This way you can be sure that\neveryone is using exactly the same build environment.</p>\n</li>\n<li>\n<p><strong>Managing changes to build environments:</strong> Managing changes to\nbuild environments can also be difficult, since you need to roll\nthose out to all developers and build servers at the right time.\nThis can be especially tricky when there are multiple branches of\ndevelopment, some of which may need older or newer environments than\neach other. 
With Docker, you can specify a particular version of the\nbuild image along with the source code, which means a particular\nrevision of the source code will always build in the right\nenvironment.</p>\n</li>\n<li>\n<p><strong>Isolating build environments:</strong> One CI/CD server may have to build\nmultiple projects, which may have conflicting requirements for build\ntools, libraries, and other dependencies. By running each build in\nits own ephemeral container created from potentially different\nDocker images, you can be certain that these build environments\nwill not interfere with each other.</p>\n</li>\n</ul>\n<h3 id=\"deployment\">Deployment</h3>\n<ul>\n<li>\n<p><strong>Runtime environment bundled with the application:</strong> The CD system\nbuilds a complete Docker image which bundles the application's\nenvironment with the application itself and then deploys the whole\nimage as one "atomic" step. There is no chance for configuration\nmanagement scripts to fail at deployment time, and no risk of the\nsystem configuration being out of sync.</p>\n</li>\n<li>\n<p><strong>Preventing malicious changes:</strong> Container security is improved by\nusing immutable SHA digests to identify Docker images, which makes\nit much harder for a malicious actor to inject malware into your\napplication or its environment.</p>\n</li>\n<li>\n<p><strong>Easily roll back to a previous version:</strong> All it takes to roll\nback is to deploy a previous version of the Docker image. 
There is\nno worrying about system configuration changes needing to be\nmanually rolled back.</p>\n</li>\n<li>\n<p><strong>Zero downtime rollouts:</strong> In conjunction with container\norchestration tools like Kubernetes, it is easy to roll out new\nimage versions with zero downtime.</p>\n</li>\n<li>\n<p><strong>High availability and horizontal scaling:</strong> Container\norchestration tools like Kubernetes make it easy to distribute the\nsame image to containers on multiple servers, and add/remove\nreplicas at will or automatically.</p>\n</li>\n<li>\n<p><strong>Sharing a server between multiple applications:</strong> Multiple\napplications, or multiple versions of the same application (e.g. a\ndev and qa deployment), can run on the same server even if they have\nconflicting dependencies, since their runtime environments are\ncompletely separate.</p>\n</li>\n<li>\n<p><strong>Isolating applications:</strong> When multiple applications are deployed\nto a server in containers, they are isolated from one another.\nContainer security means each has its own file system, processes,\nand users, so there is less risk that they interfere with each other,\nintentionally or not. 
When data <em>does</em> need to be shared between\napplications, parts of the host file system can be mounted into\nmultiple containers, but this is something you have full control\nover.</p>\n</li>\n</ul>\n<p>For more information, see:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/03/continuous-integration/\">Continuous Integration: An Overview</a></li>\n<li><a href=\"https://docs.microsoft.com/en-us/dotnet/standard/containerized-lifecycle-architecture/docker-application-lifecycle/containers-foundation-for-devops-collaboration\">Containers as the foundation for DevOps collaboration</a></li>\n<li><a href=\"https://www.sumologic.com/blog/devops/how-containerization-enables-devops/\">Docker and DevOps -- Enabling DevOps Teams Through Containerization</a>.</li>\n</ul>\n<h1 id=\"implementing-containers-into-your-devops-workflow\">Implementing Containers into Your DevOps Workflow</h1>\n<p>Containers can be integrated into your DevOps toolchain incrementally.\nOften it makes sense to start with the build environment, and then move\non to the deployment environment. This is a very broad overview of the\nsteps for a simple approach, without delving into the technical details\nvery much or covering all the possible variations.</p>\n<h3 id=\"requirements\">Requirements</h3>\n<ul>\n<li>Docker Engine installed on build servers and/or application servers</li>\n<li>Access to a Docker Registry. This is where Docker images are stored\nand pulled. 
There are numerous services that provide registries, and\nit's also easy to run your own.</li>\n</ul>\n<h3 id=\"containerizing-the-build-environment\">Containerizing the build environment</h3>\n<p>Many CI/CD systems now include built-in Docker support or easily enable\nit through plugins, but <code>docker</code> is a command-line application which\ncan be called from any build script even if your CI/CD system does not\nhave explicit support.</p>\n<ol>\n<li>\n<p>Determine your build environment requirements and write\na <code>Dockerfile</code>, based on an existing Docker image, which specifies how\nto build the image for your build containers. If you\nalready use a configuration management tool, you can use it within\nthe Dockerfile. Always specify precise versions of base images and\ninstalled packages so that image builds are consistent and upgrades\nare deliberate.</p>\n</li>\n<li>\n<p>Build the image using <code>docker build</code> and push it to the Docker\nregistry using <code>docker push</code>.</p>\n</li>\n<li>\n<p>Create a <code>Dockerfile</code> for the application that is based on the build\nimage (specify the exact version of the base build image). This file\nbuilds the application, adds any required runtime dependencies that\naren't in the build image, and tests the application. A multi-stage\n<code>Dockerfile</code> can be used if you don't want the application\ndeployment image to include all the build dependencies.</p>\n</li>\n<li>\n<p>Modify CI build scripts to build the application image and push it\nto the Docker registry. 
The image should be tagged with the build\nnumber, and possibly additional information such as the name of the\nbranch.</p>\n</li>\n<li>\n<p>If you are not yet ready to deploy with Docker, you can extract the\nbuild artifacts from the resulting Docker image.</p>\n</li>\n</ol>\n<p>It is best to <em>also</em> integrate building the build image itself into your\nDevOps automation tools.</p>\n<h3 id=\"containerizing-deployment\">Containerizing deployment</h3>\n<p>This can be easier if your CD tool has support for Docker, but that is\nby no means necessary. We also recommend deploying to a container\norchestration system such as Kubernetes in most cases.</p>\n<p>Half the work has already been done, since the build process creates and\npushes an image containing the application and its environment.</p>\n<ul>\n<li>\n<p>If using Docker directly, now it's a matter of updating deployment\nscripts to use <code>docker run</code> on the application server with the\nimage and tag that was pushed in the previous section (after\nstopping any existing container). Ideally your application accepts\nits configuration via environment variables, in which case you use\nthe <code>-e</code> argument to specify those values depending on which\nstage is being deployed. If a configuration file is used, write it\nto the host file system and then use the <code>-v</code> argument to mount\nit to the correct path in the container.</p>\n</li>\n<li>\n<p>If using a container orchestration system such as Kubernetes, you\nwill typically have the deployment script connect to the\norchestration API endpoint to trigger an image update (e.g. 
using\n<code>kubectl set image</code>, a Helm chart, or, better yet, a\n<code>kustomization</code>).</p>\n</li>\n</ul>\n<p>Once deployed, tools such as Prometheus are well suited to Docker\ncontainer monitoring and alerting, but this can be plugged into existing\nmonitoring systems as well.</p>\n<p>FP Complete has implemented this kind of DevOps workflow, and\nsignificantly more complex ones, for many clients and would love to\ncount you among them! <a href=\"https://tech.fpcomplete.com/contact-us/\">Contact us about our DevOps services</a>.</p>\n<p>For more information, see <a href=\"https://techbeacon.com/how-secure-container-lifecycle\">How to secure the container\nlifecycle</a> and <a href=\"https://tech.fpcomplete.com/blog/2017/01/containerize-legacy-app/\">Containerizing\na legacy application: an\noverview</a>.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
"slug": "how-to-implement-containers-to-streamline-your-devops-workflow",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "How to Implement Containers to Streamline Your DevOps Workflow",
"description": "Many technology companies have been rapidly implementing Docker Containers to enhance their continuous delivery pipeline. However, implementing containers into your DevOps workflow can be difficult. Learn how to execute this process efficiently and securely here. ",
"updated": null,
"date": "2018-01-31T08:00:00Z",
"year": 2018,
"month": 1,
"day": 31,
"taxonomies": {
"tags": [
"devops",
"docker"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Emanuel Borsboom",
"blogimage": "/images/blog-listing/container.png"
},
"path": "/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
"components": [
"blog",
"how-to-implement-containers-to-streamline-your-devops-workflow"
],
"summary": null,
"toc": [
{
"level": 1,
"id": "what-are-docker-containers",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#what-are-docker-containers",
"title": "What are Docker Containers?",
"children": []
},
{
"level": 1,
"id": "containers-vs-virtual-machines-vms",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containers-vs-virtual-machines-vms",
"title": "Containers vs Virtual Machines (VMs)",
"children": []
},
{
"level": 1,
"id": "how-docker-containers-enhance-continuous-delivery-pipelines",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#how-docker-containers-enhance-continuous-delivery-pipelines",
"title": "How Docker Containers Enhance Continuous Delivery Pipelines",
"children": [
{
"level": 3,
"id": "builds",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#builds",
"title": "Builds",
"children": []
},
{
"level": 3,
"id": "deployment",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#deployment",
"title": "Deployment",
"children": []
}
]
},
{
"level": 1,
"id": "implementing-containers-into-your-devops-workflow",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#implementing-containers-into-your-devops-workflow",
"title": "Implementing Containers into Your DevOps Workflow",
"children": [
{
"level": 3,
"id": "requirements",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#requirements",
"title": "Requirements",
"children": []
},
{
"level": 3,
"id": "containerizing-the-build-environment",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containerizing-the-build-environment",
"title": "Containerizing the build environment",
"children": []
},
{
"level": 3,
"id": "containerizing-deployment",
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containerizing-deployment",
"title": "Containerizing deployment",
"children": []
}
]
}
],
"word_count": 1754,
"reading_time": 9,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/",
"title": "Controlling access to Nomad clusters"
}
]
},
{
"relative_path": "blog/signs-your-business-needs-a-devops-consultant.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/signs-your-business-needs-a-devops-consultant/",
"slug": "signs-your-business-needs-a-devops-consultant",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Signs Your Business Needs a DevOps Consultant",
"description": "Today’s business challenges cause issues with traditional deployment models. Find out why a DevOps consultant may be right for you. ",
"updated": null,
"date": "2018-01-18T15:06:00Z",
"year": 2018,
"month": 1,
"day": 18,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"insights",
"devops"
]
},
"authors": [],
"extra": {
"author": "Aaron Contorer",
"html": "hubspot-blogs/signs-your-business-needs-a-devops-consultant.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/signs-your-business-needs-a-devops-consultant/",
"components": [
"blog",
"signs-your-business-needs-a-devops-consultant"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/devops-value-how-to-measure-the-success-of-devops.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/devops-value-how-to-measure-the-success-of-devops/",
"slug": "devops-value-how-to-measure-the-success-of-devops",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps Value: How to Measure the Success of DevOps",
"description": "Faster time to market and lower failure rate are the beginning of the many benefits DevOps offers companies. Discover the measurable metrics and KPIs, as well as the true business value DevOps offers.",
"updated": null,
"date": "2018-01-04T13:51:00Z",
"year": 2018,
"month": 1,
"day": 4,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops",
"insights"
]
},
"authors": [],
"extra": {
"author": "Robert Bobbett",
"html": "hubspot-blogs/devops-value-how-to-measure-the-success-of-devops.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/devops-value-how-to-measure-the-success-of-devops/",
"components": [
"blog",
"devops-value-how-to-measure-the-success-of-devops"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/nat-gateways-in-amazon-govcloud.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/nat-gateways-in-amazon-govcloud/",
"slug": "nat-gateways-in-amazon-govcloud",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "NAT Gateways in Amazon GovCloud",
"description": "Since AWS GovCloud has no managed NAT gateways this task is left for you to set up. This post is the third in a series to explain how you can make it work.",
"updated": null,
"date": "2017-11-30T14:25:00Z",
"year": 2017,
"month": 11,
"day": 30,
"taxonomies": {
"categories": [
"devops",
"kube360"
],
"tags": [
"devops",
"aws",
"govcloud"
]
},
"authors": [],
"extra": {
"author": "Yghor Kerscher",
"html": "hubspot-blogs/nat-gateways-in-amazon-govcloud.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/nat-gateways-in-amazon-govcloud/",
"components": [
"blog",
"nat-gateways-in-amazon-govcloud"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager/",
"slug": "my-devops-journey-and-how-i-became-a-recovering-it-operations-manager",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "My DevOps Journey and How I Became a Recovering IT Operations Manager",
"description": "Learn how containerization and automated deployments laid the groundwork for what would become know as DevOps for a Fortune 500 IT company.",
"updated": null,
"date": "2017-11-15T13:30:00Z",
"year": 2017,
"month": 11,
"day": 15,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops",
"insights"
]
},
"authors": [],
"extra": {
"author": "Steve Bogdan",
"html": "hubspot-blogs/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager/",
"components": [
"blog",
"my-devops-journey-and-how-i-became-a-recovering-it-operations-manager"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/amazon-govcloud-has-no-route53-how-to-solve-this.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/amazon-govcloud-has-no-route53-how-to-solve-this/",
"slug": "amazon-govcloud-has-no-route53-how-to-solve-this",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Amazon GovCloud has no Route53! How to solve this?",
"description": "Since Route53 is not yet available on Amazon GovCloud you need to find a different way to create custom DNS records for your services. We tell you how. ",
"updated": null,
"date": "2017-11-08T14:12:00Z",
"year": 2017,
"month": 11,
"day": 8,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"aws",
"govcloud"
]
},
"authors": [],
"extra": {
"author": "Yghor Kerscher",
"html": "hubspot-blogs/amazon-govcloud-has-no-route53-how-to-solve-this.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/amazon-govcloud-has-no-route53-how-to-solve-this/",
"components": [
"blog",
"amazon-govcloud-has-no-route53-how-to-solve-this"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/intro-to-devops-on-govcloud.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/intro-to-devops-on-govcloud/",
"slug": "intro-to-devops-on-govcloud",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Intro to Devops on GovCloud",
"description": "If you have strict compliance criteria that require you to use AWS GovCloud, there are some obstacles you will encounter that we will help you address.",
"updated": null,
"date": "2017-10-26T11:02:00Z",
"year": 2017,
"month": 10,
"day": 26,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"govcloud"
]
},
"authors": [],
"extra": {
"author": "J Boyer",
"html": "hubspot-blogs/intro-to-devops-on-govcloud.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/intro-to-devops-on-govcloud/",
"components": [
"blog",
"intro-to-devops-on-govcloud"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/",
"title": "Cloud Deployment Models: Advantages and Disadvantages"
}
]
},
{
"relative_path": "blog/credstash.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2017/08/credstash/",
"slug": "credstash",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Manage Secrets on AWS with credstash and terraform",
"description": "Managing secrets is hard. Moving them around securely is even harder. Learn how to get secrets to the Cloud with terraform and credstash.",
"updated": null,
"date": "2017-08-28T15:00:00Z",
"year": 2017,
"month": 8,
"day": 28,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"aws"
]
},
"authors": [],
"extra": {
"author": "Alexey Kuleshevich",
"html": "hubspot-blogs/credstash.html",
"blogimage": "/images/blog-listing/aws.png"
},
"path": "/blog/2017/08/credstash/",
"components": [
"blog",
"2017",
"08",
"credstash"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/functional-programming-and-modern-devops.md",
"colocated_path": null,
"content": "<p>In this presentation, Aaron Contorer presents on how modern tools can\nbe used to reach the Engineering sweet spot.</p>\n<iframe width=\"100%\" height=\"315\"\nsrc=\"https://www.youtube.com/embed/ybSBCVhVWs8\" frameborder=\"0\"\nallow=\"accelerometer; autoplay; encrypted-media; gyroscope;\npicture-in-picture\" allowfullscreen></iframe>\n<br>\n<br>\n<h2 id=\"do-you-know-fp-complete\">Do you know FP Complete</h2>\n<p>At FP Complete, we do so many things to help companies it’s hard to\nencapsulate our impact in a few words. They say a picture is worth a\nthousand words, so a video has to be worth 10,000 words (at\nleast). Therefore, to tell all we can in as little time as possible,\ncheck out our explainer video. It’s only 108 seconds to get the full\nstory of FP Complete.</p>\n<iframe allowfullscreen=\n \"allowfullscreen\" height=\"315\" src=\n \"https://www.youtube.com/embed/JCcuSn_lFKs\"\n target=\"_blank\" width=\n \"100%\"></iframe>\n<br>\n<br>\n<p>Reach us to on <a href=\"mailto:[email protected]\">[email protected]</a> if you have suggestions or if\nyou would like to learn more about FP Complete and the services we\noffer.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/functional-programming-and-modern-devops/",
"slug": "functional-programming-and-modern-devops",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Functional Programming and Modern DevOps",
"description": "In this presentation, Aaron Contorer presents on how modern tools can be used to reach the Engineering sweet spot.",
"updated": null,
"date": "2017-08-11",
"year": 2017,
"month": 8,
"day": 11,
"taxonomies": {
"tags": [
"devops",
"haskell",
"insights"
],
"categories": [
"functional programming",
"devops"
]
},
"authors": [],
"extra": {
"author": "Aaron Contorer",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/functional-programming-and-modern-devops/",
"components": [
"blog",
"functional-programming-and-modern-devops"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "do-you-know-fp-complete",
"permalink": "https://tech.fpcomplete.com/blog/functional-programming-and-modern-devops/#do-you-know-fp-complete",
"title": "Do you know FP Complete",
"children": []
}
],
"word_count": 162,
"reading_time": 1,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/continuous-integration.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2017/03/continuous-integration/",
"slug": "continuous-integration",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Continuous Integration: an overview",
"description": "Continuous integration makes development teams more productive and releases less stressful. Catch regressions quickly and deploy applications automatically.",
"updated": null,
"date": "2017-03-03T17:11:00Z",
"year": 2017,
"month": 3,
"day": 3,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Emanuel Borsboom",
"html": "hubspot-blogs/continuous-integration.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/2017/03/continuous-integration/",
"components": [
"blog",
"2017",
"03",
"continuous-integration"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
"title": "How to Implement Containers to Streamline Your DevOps Workflow"
}
]
},
{
"relative_path": "blog/immutability-docker-haskells-st-type.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2017/02/immutability-docker-haskells-st-type/",
"slug": "immutability-docker-haskells-st-type",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Immutability, Docker, and Haskell's ST type",
"description": "Immutability in software development is a well known constant in functional programming but is relatively new in modern devops and the parallels are worth examining.",
"updated": null,
"date": "2017-02-13T15:24:00Z",
"year": 2017,
"month": 2,
"day": 13,
"taxonomies": {
"tags": [
"haskell",
"docker",
"devops"
],
"categories": [
"functional programming",
"devops"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"html": "hubspot-blogs/immutability-docker-haskells-st-type.html",
"blogimage": "/images/blog-listing/docker.png"
},
"path": "/blog/2017/02/immutability-docker-haskells-st-type/",
"components": [
"blog",
"2017",
"02",
"immutability-docker-haskells-st-type"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
"title": "Our history with containerization"
}
]
},
{
"relative_path": "blog/quickcheck.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2017/01/quickcheck/",
"slug": "quickcheck",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "QuickCheck and Magic of Testing",
"description": "Discover the power of random testing in Haskell with QuickCheck. Learn how to use function properties and software specification to write bug-free software.",
"updated": null,
"date": "2017-01-24T14:24:00Z",
"year": 2017,
"month": 1,
"day": 24,
"taxonomies": {
"tags": [
"haskell"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Alexey Kuleshevich",
"html": "hubspot-blogs/quickcheck.html",
"blogimage": "/images/blog-listing/qa.png"
},
"path": "/blog/2017/01/quickcheck/",
"components": [
"blog",
"2017",
"01",
"quickcheck"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/haskell/syllabus/",
"title": "Applied Haskell Syllabus"
}
]
},
{
"relative_path": "blog/containerize-legacy-app.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2017/01/containerize-legacy-app/",
"slug": "containerize-legacy-app",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Containerizing a legacy application: an overview",
"description": "Running your legacy apps in Docker containers takes the pain out of deployment and puts you on a path to modern practices. Learn what is involved in containerizing your app.",
"updated": null,
"date": "2017-01-12T15:45:00Z",
"year": 2017,
"month": 1,
"day": 12,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Emanuel Borsboom",
"html": "hubspot-blogs/containerize-legacy-app.html",
"blogimage": "/images/blog-listing/container.png"
},
"path": "/blog/2017/01/containerize-legacy-app/",
"components": [
"blog",
"2017",
"01",
"containerize-legacy-app"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
"title": "How to Implement Containers to Streamline Your DevOps Workflow"
},
{
"permalink": "https://tech.fpcomplete.com/blog/our-history-containerization/",
"title": "Our history with containerization"
}
]
},
{
"relative_path": "blog/devops-best-practices-multifaceted-testing.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-multifaceted-testing/",
"slug": "devops-best-practices-multifaceted-testing",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Devops best practices: Multifaceted Testing",
"description": ".",
"updated": null,
"date": "2016-11-28T18:00:00Z",
"year": 2016,
"month": 11,
"day": 28,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Aaron Contorer",
"html": "hubspot-blogs/devops-best-practices-multifaceted-testing.html",
"blogimage": "/images/blog-listing/qa.png"
},
"path": "/blog/2016/11/devops-best-practices-multifaceted-testing/",
"components": [
"blog",
"2016",
"11",
"devops-best-practices-multifaceted-testing"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/rust-at-fpco-2020/",
"title": "Rust at FP Complete, 2020 update"
},
{
"permalink": "https://tech.fpcomplete.com/platformengineering/security/",
"title": "Security in a DevOps World"
}
]
},
{
"relative_path": "blog/devops-best-practices-immutability.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/",
"slug": "devops-best-practices-immutability",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Devops best practices: Immutability",
"description": ".",
"updated": null,
"date": "2016-11-13T18:00:00Z",
"year": 2016,
"month": 11,
"day": 13,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Aaron Contorer",
"html": "hubspot-blogs/devops-best-practices-immutability.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/2016/11/devops-best-practices-immutability/",
"components": [
"blog",
"2016",
"11",
"devops-best-practices-immutability"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/",
"title": "Controlling access to Nomad clusters"
},
{
"permalink": "https://tech.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
"title": "How to Implement Containers to Streamline Your DevOps Workflow"
}
]
},
{
"relative_path": "blog/docker-demons-pid1-orphans-zombies-signals.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2016/10/docker-demons-pid1-orphans-zombies-signals/",
"slug": "docker-demons-pid1-orphans-zombies-signals",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Docker demons: PID-1, orphans, zombies, and signals",
"description": ".",
"updated": null,
"date": "2016-10-05T02:00:00Z",
"year": 2016,
"month": 10,
"day": 5,
"taxonomies": {
"tags": [
"devops",
"docker"
],
"categories": [
"devops"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"html": "hubspot-blogs/docker-demons-pid1-orphans-zombies-signals.html",
"blogimage": "/images/blog-listing/docker.png"
},
"path": "/blog/2016/10/docker-demons-pid1-orphans-zombies-signals/",
"components": [
"blog",
"2016",
"10",
"docker-demons-pid1-orphans-zombies-signals"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/rust/pid1/",
"title": "Implementing pid1 with Rust and async/await"
}
]
},
{
"relative_path": "blog/docker-split-images.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2015/12/docker-split-images/",
"slug": "docker-split-images",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "The split-image approach to building minimal runtime Docker images",
"description": ".",
"updated": null,
"date": "2015-12-15T00:00:00Z",
"year": 2015,
"month": 12,
"day": 15,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"docker"
]
},
"authors": [],
"extra": {
"author": "Emanuel Borsboom",
"html": "hubspot-blogs/docker-split-images.html",
"blogimage": "/images/blog-listing/docker.png"
},
"path": "/blog/2015/12/docker-split-images/",
"components": [
"blog",
"2015",
"12",
"docker-split-images"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/kubernetes.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2015/11/kubernetes/",
"slug": "kubernetes",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Kubernetes for Haskell Services",
"description": ".",
"updated": null,
"date": "2015-11-19T19:00:00Z",
"year": 2015,
"month": 11,
"day": 19,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"haskell",
"kubernetes"
]
},
"authors": [],
"extra": {
"author": "Tim Dysinger",
"html": "hubspot-blogs/kubernetes.html",
"blogimage": "/images/blog-listing/kubernetes.png"
},
"path": "/blog/2015/11/kubernetes/",
"components": [
"blog",
"2015",
"11",
"kubernetes"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/distributing-packages-without-sysadmin.md",
"colocated_path": null,
"content": "",
"permalink": "https://tech.fpcomplete.com/blog/2015/05/distributing-packages-without-sysadmin/",
"slug": "distributing-packages-without-sysadmin",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Distributing our packages without a sysadmin",
"description": ".",
"updated": null,
"date": "2015-05-13T00:00:00Z",
"year": 2015,
"month": 5,
"day": 13,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"insights",
"devops"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"html": "hubspot-blogs/distributing-packages-without-sysadmin.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "/blog/2015/05/distributing-packages-without-sysadmin/",
"components": [
"blog",
"2015",
"05",
"distributing-packages-without-sysadmin"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
}
],
"page_count": 52
},
{
"name": "devsecops",
"slug": "devsecops",
"path": "/categories/devsecops/",
"permalink": "https://tech.fpcomplete.com/categories/devsecops/",
"pages": [
{
"relative_path": "blog/cloud-native.md",
"colocated_path": null,
"content": "<p>You hear "go Cloud-Native," but if you're like many, you wonder, "what does that mean, and how can applying a Cloud-Native strategy help my company's Dev Team be more productive?"\nAt a high level, Cloud-Native architecture means adapting to the many new possibilities—but a very different set of architectural constraints—offered by the cloud compared to traditional on-premises infrastructure.</p>\n<p>Cloud-Native architecture optimizes systems and software for the cloud. This optimization creates an efficient way to utilize the platform by streamlining the processes and workflows. This is accomplished by harnessing the cloud's inherent strengths: </p>\n<ul>\n<li>its flexibility, </li>\n<li>on-demand infrastructure; and </li>\n<li>robust managed services. </li>\n</ul>\n<p>Cloud-native computing couples these strengths with cloud-optimized technologies such as microservices, containers, and continuous delivery. Cloud-Native takes advantage of the cloud's distributed, scalable and adaptable nature. By doing this, Cloud-Native will maximize your dev team's focus on writing code, reducing operational tasks, creating business value, and keeping your customers happy by building high-impact applications faster, without compromising on quality. You might even think you can’t do cloud-native without using one of the big cloud providers- this simply isn’t true, many of the benefits of cloud-native are the approaches and emphasis on better tooling around automation.</p>\n<h2 id=\"why-move-to-cloud-native-now\">Why Move to Cloud-Native Now?</h2>\n<p><em>#1 - High-Frequency Software Release</em></p>\n<p>Faster and more frequent updates and new features releases allow your organization to respond to user needs in near real-time, increasing user retention. For example, new software versions with novel features can be released incrementally and more often as they become available. 
In addition, Cloud-Native makes high-frequency software releases possible via continuous integration (CI) and continuous deployment (CD), where full version commits are no longer needed. Instead, one can modify, test, and commit just a few lines of code continuously and automatically to meet changing customer trends, thereby giving your organization an edge. </p>\n<p><em>#2 - Automatic Software Updates</em></p>\n<p>One of the most valuable Cloud-Native features is automation. For example, updates are deployed automatically without interfering with core applications or the user base. Automated redundancies for infrastructure can automatically move applications between data centers as needed with little to zero human intervention. Even scalability, testing, and resource allocation can be automated. There are many available automation tools in the marketplace, such as FP Complete Corporation's widely accepted tool, <a href=\"https://tech.fpcomplete.com/products/kube360/\">Kube360</a>.</p>\n<p><em>#3 - Greater Protection from Software Failures</em></p>\n<p>Isolation of containers is another important cloud-native feature. Software failures and bugs can be traced to a specific microservice version, rolled back, or fixed quickly. Software fixes can be tested in isolation without compromising the stability of the entire application. On the other hand, if there's a widespread failure, automation can restore the application to a previous stable state, minimizing downtime. Automated DevOps testing before code goes to production (for example, linting and software scrubbing) drives faster bug detection and resolution, reducing the risk of bugs in production.</p>\n<h2 id=\"wow-cloud-native-seems-perfect-what-s-the-catch\">WOW – Cloud-Native Seems Perfect – What's the Catch?</h2>\n<p>Switching over to Cloud-Native architecture requires a thorough assessment of your existing application setup. 
The biggest question you and your team need to ask before making any moves is, "should our business modernize our current applications, or should we build new applications from scratch and utilize Cloud-Native development practices?"</p>\n<p>If you choose to modernize your existing application, you will save time and money by capitalizing on the cloud's agility, flexibility, and scalability. Your dev team can retain existing application functionality and business logic, re-architect into a Cloud-Native app, and containerize to utilize the cloud platform's strengths.</p>\n<p>You can also build a net-new application using Cloud-Native development practices instead of upgrading your legacy applications. Building from scratch may make more sense from a corporate culture, risk management, and regulatory compliance standpoint. You keep running old application code unchanged while developing and phasing in a new platform. Building new applications also allows dev teams to develop applications free from prior architectural constraints, allowing developers to experiment and deliver innovation to users.</p>\n<h2 id=\"three-essential-tools-for-successful-cloud-native-architecture\">Three Essential Tools for Successful Cloud-Native Architecture</h2>\n<p>Whether you decide to create a new Cloud-Native application or modernize your existing ones, your dev team needs to use these three tools for successful implementation of Cloud-Native Architecture:</p>\n<ol>\n<li><em>Microservices Architecture</em>. </li>\n</ol>\n<p>A cloud-native microservice architecture is considered a "best practice" architectural approach for creating cloud applications because each application is composed of a set of services. Each service runs its own processes and communicates through clearly defined APIs, which provide good foundations for continuous delivery. 
With microservices, ideally each service is independently deployable. This architecture allows each service to be updated independently without interfering with another service. This results in:</p>\n<ul>\n<li>reduced downtime for users; </li>\n<li>simplified troubleshooting; and </li>\n<li>minimized disruptions even if a problem is identified, \nwhich allows for high-frequency updates and continuous delivery. </li>\n</ul>\n<ol start=\"2\">\n<li><em>Container-based Infrastructure Platform</em>.</li>\n</ol>\n<p>Now that your microservice architecture is broken down into individual container-based services, the next essential tool is a system to manage all those containers automatically, known as a ‘container orchestrator’. The most widely accepted platform is Kubernetes, an open-source system originally developed at Google and now maintained by the Cloud Native Computing Foundation. It runs containerized applications, automating deployment, storage, scaling, scheduling, load balancing, and updates, and it monitors containers across clusters of hosts. Kubernetes is supported by all major public cloud service providers, including Azure, AWS, Google Cloud Platform, and Oracle Cloud.</p>\n<ol start=\"3\">\n<li><em>CI/CD Pipeline</em>.</li>\n</ol>\n<p>A CI/CD Pipeline is the third essential tool for a cloud-native environment to work seamlessly. Continuous integration and continuous delivery embody a set of operating principles and a collection of practices that allow dev teams to deliver code changes more frequently and reliably. This implementation is known as the CI/CD Pipeline. By automating deployment processes, the CI/CD pipeline will allow your dev team to focus on:</p>\n<ul>\n<li>meeting business requirements; </li>\n<li>code quality; and </li>\n<li>security. \nCI/CD tools preserve the environment-specific parameters that must be included with each delivery. 
CI/CD automation then performs any necessary service calls to web servers, databases, and other services that may require a restart or follow other procedures when applications are deployed.</li>\n</ul>\n<h2 id=\"cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use\">Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?</h2>\n<p>As you can probably guess, countless tools make up a cloud-native architecture. Unfortunately, these tools are complex, require separate authentication, and frequently do not interact with each other. In essence, you are expected to integrate these cloud tools yourself as a user. We at FP Complete became frustrated with this approach. So, to save time and provide a turn-key solution, we created Kube360. Kube360 puts all necessary tools into one easy-to-use toolbox, accessed via a single sign-on, and operating as a fully integrated environment. Kube360 combines best practices, technologies, and processes into one complete package, and it has proven effective at multiple customer deployments. In addition, Kube360 supports multiple cloud providers and on-premises infrastructure. Kube360 is vendor agnostic, fully customizable, and has no vendor lock-in.</p>\n<p><strong>Kube360 - Centralized Management</strong>. Kube360 employs centralized management, which increases your dev team's productivity through:</p>\n<ul>\n<li>single sign-on functionality</li>\n<li>faster installation and setup</li>\n<li>quick access to all tools</li>\n<li>automation of logs, backups, and alerts</li>\n</ul>\n<p>This simplified administration hides frequent login complexities and allows single sign-on through existing company identity management. Kube360 also streamlines tool authentication and access, eliminating many standard security holes. 
In the background, Kube360 automatically runs everyday tasks such as backups, log aggregation, and alerts.</p>\n<p><strong>Kube360 - Automated Features</strong>. Kube360's automated features include:</p>\n<ul>\n<li>automatic backups of the etcd config;</li>\n<li>log aggregation and indexing of all services; and</li>\n<li>an integrated monitoring and alert framework.</li>\n</ul>\n<p><strong>Kube360 - Kubernetes Tooling Features</strong>. Kube360 simplifies Kubernetes management and allows you to take advantage of many cloud-native features such as:</p>\n<ul>\n<li>autoscaling, to stay cost-efficient as demands on your systems grow and shrink;</li>\n<li>high availability;</li>\n<li>health checks; and</li>\n<li>integrated secrets management.</li>\n</ul>\n<p><strong>Kube360 - Service Mesh</strong>.</p>\n<ul>\n<li>Mutual TLS-based encryption within the cluster</li>\n<li>Tracing tools</li>\n<li>Rerouting traffic</li>\n<li>Canary deployments</li>\n</ul>\n<p><strong>Kube360 - Integration</strong>.</p>\n<ul>\n<li>Integrates into existing AWS & Azure infrastructures</li>\n<li>Deploys into existing VPCs</li>\n<li>Leverages existing subnets</li>\n<li>Communicates with components outside of Kube360</li>\n<li>Supports multiple clusters per organization</li>\n<li>Installed by the FP Complete team or by the customer</li>\n</ul>\n<p>As you can see – Kube360 is one of the most comprehensive tools you can rely on for Cloud-Native architecture. Kube360 is your one-stop, fully integrated enterprise Kubernetes ecosystem. Kube360 standardizes containerization, software deployment, fault tolerance, auto-scaling, auto-healing, and security - by design. Kube360's modular, standardized architecture mitigates proprietary lock-in, high support costs, and obsolescence. In addition, Kube360 delivers a seamless deployment experience for you and your team.\nFind out how Kube360 can make your business more efficient, more reliable, and more secure, all in a fraction of the time. 
Speed up your dev team's productivity - <a href=\"https://tech.fpcomplete.com/contact-us/\">Contact us today!</a></p>\n",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/",
"slug": "cloud-native",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Confused about Cloud-Native? Want to speed up your dev team's productivity?",
"description": "Learn about Cloud-Native architecture.",
"updated": null,
"date": "2022-01-17",
"year": 2022,
"month": 1,
"day": 17,
"taxonomies": {
"tags": [
"kubernetes",
"cloud native"
],
"categories": [
"devsecops",
"devops"
]
},
"authors": [],
"extra": {
"author": "FP Complete",
"keywords": "devsecops, devops",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "/blog/cloud-native/",
"components": [
"blog",
"cloud-native"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "why-move-to-cloud-native-now",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/#why-move-to-cloud-native-now",
"title": "Why Move to Cloud-Native Now?",
"children": []
},
{
"level": 2,
"id": "wow-cloud-native-seems-perfect-what-s-the-catch",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/#wow-cloud-native-seems-perfect-what-s-the-catch",
"title": "WOW – Cloud-Native Seems Perfect – What's the Catch?",
"children": []
},
{
"level": 2,
"id": "three-essential-tools-for-successful-cloud-native-architecture",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/#three-essential-tools-for-successful-cloud-native-architecture",
"title": "Three Essential Tools for Successful Cloud-Native Architecture",
"children": []
},
{
"level": 2,
"id": "cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
"permalink": "https://tech.fpcomplete.com/blog/cloud-native/#cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
"title": "Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?",
"children": []
}
],
"word_count": 1482,
"reading_time": 8,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
}
],
"page_count": 1
},
{
"name": "functional programming",
"slug": "functional-programming",
"path": "/categories/functional-programming/",
"permalink": "https://tech.fpcomplete.com/categories/functional-programming/",
"pages": [
{
"relative_path": "blog/axum-hyper-tonic-tower-part4.md",
"colocated_path": null,
"content": "<p>This is the fourth and final post in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li>Today's post: How to combine Axum and Tonic services into a single service</li>\n</ol>\n<h2 id=\"single-port-two-protocols\">Single port, two protocols</h2>\n<p>That heading is a lie. Both an Axum web application and a gRPC server speak the same protocol: HTTP/2. It may be more fair to say they speak different dialects of it. But importantly, it's trivially easy to look at a request and determine whether it wants to talk to the gRPC server or not. gRPC requests will all include the header <code>Content-Type: application/grpc</code>. So our final step today is to write something that can accept both a gRPC <code>Service</code> and a normal <code>Service</code>, and return one unified service. Let's do it! For reference, complete code is in <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/server-hybrid.rs\"><code>src/bin/server-hybrid.rs</code></a>.</p>\n<p>Let's start off with our <code>main</code> function, and demonstrate what we want this thing to look like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n let axum_make_service = axum::Router::new()\n .route("/", axum::handler::get(|| async { "Hello world!" 
}))\n .into_make_service();\n\n let grpc_service = tonic::transport::Server::builder()\n .add_service(EchoServer::new(MyEcho))\n .into_service();\n\n let hybrid_make_service = hybrid(axum_make_service, grpc_service);\n\n let server = hyper::Server::bind(&addr).serve(hybrid_make_service);\n\n if let Err(e) = server.await {\n eprintln!("server error: {}", e);\n }\n}\n</code></pre>\n<p>We set up simplistic <code>axum_make_service</code> and <code>grpc_service</code> values, and then use the <code>hybrid</code> function to combine them into a single service. Notice the difference in those names, and the fact that we called <code>into_make_service</code> for the former and <code>into_service</code> for the latter. Believe it or not, that's going to cause us a lot of pain very shortly.</p>\n<p>Anyway, with that yet-to-be-explained <code>hybrid</code> function, spinning up a hybrid server is a piece of cake. But the devil's in the details!</p>\n<p>Also: there are simpler ways of going about the code below using trait objects. I avoided any type erasure techniques, since (1) I thought the code was a bit clearer this way, and (2) it turns into a nicer tutorial in my opinion. The one exception is that I <em>am</em> using a trait object for errors, since Hyper itself does so, and it simplifies the code significantly to use the same error representation across services.</p>\n<h1 id=\"defining-hybrid\">Defining <code>hybrid</code></h1>\n<p>Our <code>hybrid</code> function is going to return a <code>HybridMakeService</code> value:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn hybrid<MakeWeb, Grpc>(make_web: MakeWeb, grpc: Grpc) -> HybridMakeService<MakeWeb, Grpc> {\n HybridMakeService { make_web, grpc }\n}\n\nstruct HybridMakeService<MakeWeb, Grpc> {\n make_web: MakeWeb,\n grpc: Grpc,\n}\n</code></pre>\n<p>I'm going to be consistent and verbose with the type variable names throughout. 
Here, we have the type variables <code>MakeWeb</code> and <code>Grpc</code>. This reflects the difference between what Axum and Tonic provide from an API perspective. We'll need to provide Axum's <code>MakeWeb</code> with connection information in order to get the request-handling <code>Service</code>. With <code>Grpc</code>, we won't have to do that.</p>\n<p>In any event, we're ready to implement our <code>Service</code> for <code>HybridMakeService</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<ConnInfo, MakeWeb, Grpc> Service<ConnInfo> for HybridMakeService<MakeWeb, Grpc>\nwhere\n MakeWeb: Service<ConnInfo>,\n Grpc: Clone,\n{\n // ...\n}\n</code></pre>\n<p>We have the two expected type variables <code>MakeWeb</code> and <code>Grpc</code>, as well as <code>ConnInfo</code>, to represent whatever connection information we're given. <code>Grpc</code> won't care about that at all, but the <code>ConnInfo</code> must match up with what <code>MakeWeb</code> is receiving. Therefore, we have the bound <code>MakeWeb: Service<ConnInfo></code>. The <code>Grpc: Clone</code> bound will make sense shortly.</p>\n<p>When we receive an incoming connection, we'll need to do two things:</p>\n<ul>\n<li>Get a new <code>Service</code> from <code>MakeWeb</code>. Doing this may happen asynchronously, and may result in an error.\n<ul>\n<li><strong>SIDE NOTE</strong> If you remember the actual implementation of Axum, we know for a fact that neither of these is true. Getting a <code>Service</code> from an Axum <code>IntoMakeService</code> will always succeed, and never does any async work. 
But there are no APIs in Axum exposing this fact, so we're stuck behind the <code>Service</code> API.</li>\n</ul>\n</li>\n<li>Clone the <code>Grpc</code> we already have.</li>\n</ul>\n<p>Once we have the new <code>Web</code> <code>Service</code> and the cloned <code>Grpc</code>, we'll wrap these up into a new <code>struct</code>, <code>HybridService</code>. We're also going to need some help to perform the necessary async actions, so we'll create a new helper <code>Future</code> type. This all looks like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">type Response = HybridService<MakeWeb::Response, Grpc>;\ntype Error = MakeWeb::Error;\ntype Future = HybridMakeServiceFuture<MakeWeb::Future, Grpc>;\n\nfn poll_ready(\n &mut self,\n cx: &mut std::task::Context,\n) -> std::task::Poll<Result<(), Self::Error>> {\n self.make_web.poll_ready(cx)\n}\n\nfn call(&mut self, conn_info: ConnInfo) -> Self::Future {\n HybridMakeServiceFuture {\n web_future: self.make_web.call(conn_info),\n grpc: Some(self.grpc.clone()),\n }\n}\n</code></pre>\n<p>Note that we're deferring to <code>self.make_web</code> to say it's ready and passing along its errors. 
Let's tie this piece off by looking at <code>HybridMakeServiceFuture</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[pin_project]\nstruct HybridMakeServiceFuture<WebFuture, Grpc> {\n #[pin]\n web_future: WebFuture,\n grpc: Option<Grpc>,\n}\n\nimpl<WebFuture, Web, WebError, Grpc> Future for HybridMakeServiceFuture<WebFuture, Grpc>\nwhere\n WebFuture: Future<Output = Result<Web, WebError>>,\n{\n type Output = Result<HybridService<Web, Grpc>, WebError>;\n\n fn poll(self: Pin<&mut Self>, cx: &mut std::task::Context) -> Poll<Self::Output> {\n let this = self.project();\n match this.web_future.poll(cx) {\n Poll::Pending => Poll::Pending,\n Poll::Ready(Err(e)) => Poll::Ready(Err(e)),\n Poll::Ready(Ok(web)) => Poll::Ready(Ok(HybridService {\n web,\n grpc: this.grpc.take().expect("Cannot poll twice!"),\n })),\n }\n }\n}\n</code></pre>\n<p>We need to pull in <a href=\"https://lib.rs/crates/pin-project\"><code>pin_project</code></a> to allow us to project the pinned web future inside our <code>poll</code> implementation. (If you're not familiar with <code>pin_project</code>, don't worry, we'll describe things later on with <code>HybridFuture</code>.) When we poll <code>web_future</code>, we could end up in one of three states:</p>\n<ul>\n<li><code>Pending</code>: the <code>MakeWeb</code> isn't ready, so we aren't ready either</li>\n<li><code>Ready(Err(e))</code>: the <code>MakeWeb</code> failed, so we pass along the error</li>\n<li><code>Ready(Ok(web))</code>: the <code>MakeWeb</code> is successful, so package up the new <code>web</code> value with the <code>grpc</code> value</li>\n</ul>\n<p>There's some funny business with that <code>this.grpc.take()</code> to get the cloned <code>Grpc</code> value out of the <code>Option</code>. <code>Future</code>s have an invariant that, once they return <code>Ready</code>, they cannot be polled again. 
Therefore, it's safe to assume that <code>take</code> will only ever be called once. But all of this pain could be avoided if Axum exposed an <code>into_service</code> method instead.</p>\n<h2 id=\"hybridservice\"><code>HybridService</code></h2>\n<p>The previous types will ultimately produce a <code>HybridService</code>. Let's look at what that is:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct HybridService<Web, Grpc> {\n web: Web,\n grpc: Grpc,\n}\n\nimpl<Web, Grpc, WebBody, GrpcBody> Service<Request<Body>> for HybridService<Web, Grpc>\nwhere\n Web: Service<Request<Body>, Response = Response<WebBody>>,\n Grpc: Service<Request<Body>, Response = Response<GrpcBody>>,\n Web::Error: Into<Box<dyn std::error::Error + Send + Sync + 'static>>,\n Grpc::Error: Into<Box<dyn std::error::Error + Send + Sync + 'static>>,\n{\n // ...\n}\n</code></pre>\n<p>This <code>HybridService</code> will take <code>Request<Body></code> as input. The underlying <code>Web</code> and <code>Grpc</code> will also take <code>Request<Body></code> as input, but they'll produce slightly different output: either <code>Response<WebBody></code> or <code>Response<GrpcBody></code>. We're going to need to somehow unify those body representations. As mentioned above, we're going to use trait objects for error handling, so no unification there is necessary.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">type Response = Response<HybridBody<WebBody, GrpcBody>>;\ntype Error = Box<dyn std::error::Error + Send + Sync + 'static>;\ntype Future = HybridFuture<Web::Future, Grpc::Future>;\n</code></pre>\n<p>The associated <code>Response</code> type is going to be a <code>Response<...></code> as well, but its body is going to be the <code>HybridBody<WebBody, GrpcBody></code> type. We'll get to that later. 
Similarly, we have two different <code>Future</code>s that may get called, depending on the kind of request. We need to unify over that with a <code>HybridFuture</code> type.</p>\n<p>Next, let's look at <code>poll_ready</code>. We need to check for both <code>Web</code> and <code>Grpc</code> being ready for a new request. And each check can result in one of three cases: <code>Pending</code>, <code>Ready(Err)</code>, or <code>Ready(Ok)</code>. This function is all about pattern matching and unifying the error representation using <code>.into()</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn poll_ready(\n &mut self,\n cx: &mut std::task::Context<'_>,\n) -> std::task::Poll<Result<(), Self::Error>> {\n match self.web.poll_ready(cx) {\n Poll::Ready(Ok(())) => match self.grpc.poll_ready(cx) {\n Poll::Ready(Ok(())) => Poll::Ready(Ok(())),\n Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),\n Poll::Pending => Poll::Pending,\n },\n Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),\n Poll::Pending => Poll::Pending,\n }\n}\n</code></pre>\n<p>And finally, we can see <code>call</code>, where the real logic we're trying to accomplish lives. This is where we get to look at the request and determine where to route it:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn call(&mut self, req: Request<Body>) -> Self::Future {\n if req.headers().get("content-type").map(|x| x.as_bytes()) == Some(b"application/grpc") {\n HybridFuture::Grpc(self.grpc.call(req))\n } else {\n HybridFuture::Web(self.web.call(req))\n }\n}\n</code></pre>\n<p>Amazing. All of this work for essentially 5 lines of meaningful code!</p>\n<h2 id=\"hybridfuture\"><code>HybridFuture</code></h2>\n<p>That's it, we're at the end! The final type we're going to analyze in this series is <code>HybridFuture</code>. 
(There's also a <code>HybridBody</code> type, but it's similar enough to <code>HybridFuture</code> that it doesn't warrant its own explanation.) The <code>struct</code>'s definition is:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[pin_project(project = HybridFutureProj)]\nenum HybridFuture<WebFuture, GrpcFuture> {\n Web(#[pin] WebFuture),\n Grpc(#[pin] GrpcFuture),\n}\n</code></pre>\n<p>Like before, we're using <code>pin_project</code>. This time, let's explore why. The interface for the <code>Future</code> trait requires pinned pointers in memory. Specifically, the first argument to <code>poll</code> is <code>self: Pin<&mut Self></code>. Rust itself gives no guarantee that a value will stay at a fixed location in memory; pinning provides exactly that guarantee, which is critical for writing an async runtime system.</p>\n<p>The <code>poll</code> method on <code>HybridFuture</code> is therefore going to receive an argument of type <code>Pin<&mut HybridFuture></code>. The problem is that we need to call the <code>poll</code> method on the underlying <code>WebFuture</code> or <code>GrpcFuture</code>. Assuming we have the <code>Web</code> variant, the problem we face is that pattern matching on <code>HybridFuture</code> will give us a <code>&WebFuture</code> or <code>&mut WebFuture</code>. It won't give us a <code>Pin<&mut WebFuture></code>, which is what we need!</p>\n<p><code>pin_project</code> makes a projected data type, and provides a method <code>.project()</code> on the original that gives us those pinned mutable references instead. 
This allows us to implement the <code>Future</code> trait for <code>HybridFuture</code> correctly, like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<WebFuture, GrpcFuture, WebBody, GrpcBody, WebError, GrpcError> Future\n for HybridFuture<WebFuture, GrpcFuture>\nwhere\n WebFuture: Future<Output = Result<Response<WebBody>, WebError>>,\n GrpcFuture: Future<Output = Result<Response<GrpcBody>, GrpcError>>,\n WebError: Into<Box<dyn std::error::Error + Send + Sync + 'static>>,\n GrpcError: Into<Box<dyn std::error::Error + Send + Sync + 'static>>,\n{\n type Output = Result<\n Response<HybridBody<WebBody, GrpcBody>>,\n Box<dyn std::error::Error + Send + Sync + 'static>,\n >;\n\n fn poll(self: Pin<&mut Self>, cx: &mut std::task::Context) -> Poll<Self::Output> {\n match self.project() {\n HybridFutureProj::Web(a) => match a.poll(cx) {\n Poll::Ready(Ok(res)) => Poll::Ready(Ok(res.map(HybridBody::Web))),\n Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),\n Poll::Pending => Poll::Pending,\n },\n HybridFutureProj::Grpc(b) => match b.poll(cx) {\n Poll::Ready(Ok(res)) => Poll::Ready(Ok(res.map(HybridBody::Grpc))),\n Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),\n Poll::Pending => Poll::Pending,\n },\n }\n }\n}\n</code></pre>\n<p>We unify together the successful response bodies with the <code>HybridBody</code> <code>enum</code> and use a trait object for error handling. And now we're presenting a single unified type for both types of requests. Huzzah!</p>\n<h2 id=\"conclusions\">Conclusions</h2>\n<p>Thank you dear reader for getting through these posts. I hope it was helpful. I definitely felt more comfortable with the Tower/Hyper ecosystem after diving into these details like this. 
Let's sum up some highlights from this series:</p>\n<ul>\n<li>Tower provides a Rusty interface called <code>Service</code> for async functions from inputs to outputs, or requests to responses, which may fail\n<ul>\n<li>Don't forget, there are two levels of async behavior in this interface: checking whether the <code>Service</code> is ready and then waiting for it to complete processing</li>\n</ul>\n</li>\n<li>HTTP itself necessitates two levels of async functions: a <code>type InnerService = Request -> IO Response</code> for individual requests, and <code>type OuterService = ConnectionInfo -> IO InnerService</code> for the overall connection</li>\n<li>Hyper provides a concrete server implementation that can accept things that look like <code>OuterService</code> and run them\n<ul>\n<li>It uses a lot of traits, some of which are not publicly exposed, to generalize</li>\n<li>It provides significant flexibility in the request and response body representation</li>\n<li>The helper functions <code>service_fn</code> and <code>make_service_fn</code> are a common way to create the two levels of <code>Service</code> necessary</li>\n</ul>\n</li>\n<li>Axum is a lightweight framework sitting on top of Hyper, and exposing a lot of its interface</li>\n<li>gRPC is an HTTP/2-based protocol which can be hosted via Hyper using the Tonic library</li>\n<li>Dispatching between an Axum service and gRPC is conceptually easy: just check the <code>content-type</code> header to see if something is a gRPC request</li>\n<li>But to make that happen, we need a bunch of helper "hybrid" types to unify the different types between Axum and Tonic</li>\n<li>A lot of the time, you can get away with trait objects to enable type erasure, but hybridizing <code>Either</code>-style <code>enum</code>s works as well\n<ul>\n<li>While they're more verbose, they may also be clearer</li>\n<li>There's also a potential performance gain by avoiding dynamic dispatch</li>\n</ul>\n</li>\n</ul>\n<p>If you want to review 
it, remember that a complete project is available on GitHub at <a href=\"https://github.com/snoyberg/tonic-example\">https://github.com/snoyberg/tonic-example</a>.</p>\n<p>Finally, some more subjective takeaways from me:</p>\n<ul>\n<li>I'm overall liking Axum, and I'm already using it for a new client project.</li>\n<li>I do wish it was a little higher level, and that the type errors weren't quite as intimidating. I think there may be some room in this space for more aggressive type erasure-focused frameworks, exchanging a bit of runtime performance for significantly simpler ergonomics.</li>\n<li>I'm also looking at rewriting our Zehut product to leverage Axum. So far, it's gone pretty well, but other responsibilities have taken me off of that work for the foreseeable future. And there are some <a href=\"https://github.com/tokio-rs/axum/issues/200\">painful compilation issues</a> to be aware of.\n<ul>\n<li><strong>UPDATE January 23, 2022</strong> As <a href=\"https://twitter.com/rbtcollins/status/1484559351490744330?s=21\">pointed out on Twitter</a>, Axum has fixed this issue in newer versions. I've actually already used this improvement in other projects since then, but forgot to update the blog post. Thanks for the reminder Robert!</li>\n</ul>\n</li>\n<li>I do miss strongly typed routes, but overall I'd rather use something like Axum than push farther with <code>routetype</code>. In the future, though, I may look into providing some <code>routetype</code>/<code>axum</code> bridge.</li>\n</ul>\n<p>If this kind of content was helpful, and you're interested in more in the future, please consider <a href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\">subscribing to our blog</a>. 
Let me know (<a href=\"https://twitter.com/snoyberg\">on Twitter</a> or elsewhere) if you have any requests for additional content like this.</p>\n<p>If you're looking for more Rust content, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
"slug": "axum-hyper-tonic-tower-part4",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4",
"description": "Part 4 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
"updated": null,
"date": "2021-09-20",
"year": 2021,
"month": 9,
"day": 20,
"taxonomies": {
"tags": [
"rust"
],
"categories": [
"functional programming"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/axum-hyper-tonic-tower-part4.png",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "/blog/axum-hyper-tonic-tower-part4/",
"components": [
"blog",
"axum-hyper-tonic-tower-part4"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "single-port-two-protocols",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#single-port-two-protocols",
"title": "Single port, two protocols",
"children": []
},
{
"level": 1,
"id": "defining-hybrid",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#defining-hybrid",
"title": "Defining hybrid",
"children": [
{
"level": 2,
"id": "hybridservice",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#hybridservice",
"title": "HybridService",
"children": []
},
{
"level": 2,
"id": "hybridfuture",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#hybridfuture",
"title": "HybridFuture",
"children": []
},
{
"level": 2,
"id": "conclusions",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#conclusions",
"title": "Conclusions",
"children": []
}
]
}
],
"word_count": 2427,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1"
},
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2"
},
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3"
}
]
},
{
"relative_path": "blog/axum-hyper-tonic-tower-part3.md",
"colocated_path": null,
"content": "<p>This is the third of four posts in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li>Today's post: Demonstration of Tonic for a gRPC client/server</li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<h2 id=\"tonic-and-grpc\">Tonic and gRPC</h2>\n<p>Tonic is a gRPC client and server library. gRPC is a protocol that sits on top of HTTP/2, and therefore Tonic is built on top of Hyper (and Tower). I already mentioned at the beginning of this series that my ultimate goal is to be able to serve hybrid web/gRPC services over a single port. But for now, let's get comfortable with a standard Tonic client/server application. We're going to create an echo server, which provides an endpoint that will repeat back whatever message you send it.</p>\n<p>The full code for this is <a href=\"https://github.com/snoyberg/tonic-example\">available on GitHub</a>. 
The repository is structured as a single package with three different crates:</p>\n<ul>\n<li>A library crate providing the protobuf definitions and Tonic-generated server and client items</li>\n<li>A binary crate providing a simple client tool</li>\n<li>A binary crate providing the server executable</li>\n</ul>\n<p>The first file we'll look at is the protobuf definition of our service, located in <code>proto/echo.proto</code>:</p>\n<pre><code>syntax = "proto3";\n\npackage echo;\n\nservice Echo {\n rpc Echo (EchoRequest) returns (EchoReply) {}\n}\n\nmessage EchoRequest {\n string message = 1;\n}\n\nmessage EchoReply {\n string message = 1;\n}\n</code></pre>\n<p>Even if you're not familiar with protobuf, hopefully the example above is fairly self-explanatory. We need a <code>build.rs</code> file to use <code>tonic_build</code> to compile this file:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n tonic_build::configure()\n .compile(&["proto/echo.proto"], &["proto"])\n .unwrap();\n}\n</code></pre>\n<p>And finally, we have our mammoth <code>src/lib.rs</code> providing all the items we'll need for implementing our client and server:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">tonic::include_proto!("echo");\n</code></pre>\n<p>There's nothing terribly interesting about the client. It's a typical <code>clap</code>-based CLI tool that uses Tokio and Tonic. You can <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/client.rs\">read the source on GitHub</a>.</p>\n<p>Let's move onto the important part: the server.</p>\n<h2 id=\"the-server\">The server</h2>\n<p>The Tonic code we put into our library crate generates an <code>Echo</code> trait. We need to implement that trait on some type to make our gRPC service. This isn't directly related to our topic today. It's also fairly straightforward Rust code. 
I've so far found the experience of writing client/server apps with Tonic to be a real pleasure, specifically because of how easy these kinds of implementations are:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use tonic_example::echo_server::{Echo, EchoServer};\nuse tonic_example::{EchoReply, EchoRequest};\n\npub struct MyEcho;\n\n#[async_trait]\nimpl Echo for MyEcho {\n async fn echo(\n &self,\n request: tonic::Request<EchoRequest>,\n ) -> Result<tonic::Response<EchoReply>, tonic::Status> {\n Ok(tonic::Response::new(EchoReply {\n message: format!("Echoing back: {}", request.get_ref().message),\n }))\n }\n}\n</code></pre>\n<p>If you look in the <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/server.rs\">source on GitHub</a>, there are two different implementations of <code>main</code>, one of them commented out. That one's the more straightforward approach, so let's start with that:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() -> anyhow::Result<()> {\n let addr = ([0, 0, 0, 0], 3000).into();\n\n tonic::transport::Server::builder()\n .add_service(EchoServer::new(MyEcho))\n .serve(addr)\n .await?;\n\n Ok(())\n}\n</code></pre>\n<p>This uses Tonic's <code>Server::builder</code> to create a new <code>Server</code> value. It then calls <code>add_service</code>, which looks like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<L> Server<L> {\n pub fn add_service<S>(&mut self, svc: S) -> Router<S, Unimplemented, L>\n where\n S: Service<Request<Body>, Response = Response<BoxBody>>\n + NamedService\n + Clone\n + Send\n + 'static,\n S::Future: Send + 'static,\n S::Error: Into<crate::Error> + Send,\n L: Clone\n}\n</code></pre>\n<p>We've got another <code>Router</code>. 
This works like in Axum, but it's for routing gRPC calls to the appropriate named service. Let's talk through the type parameters and traits here:</p>\n<ul>\n<li><code>L</code> represents the <em>layer</em>, or the middlewares added to this server. It will default to <a href=\"https://docs.rs/tower/0.4.8/tower/layer/util/struct.Identity.html\"><code>Identity</code></a>, to represent the no middleware case.</li>\n<li><code>S</code> is the new service we're trying to add, which in our case is an <code>EchoServer</code>.</li>\n<li>Our service needs to accept the ever-familiar <code>Request<Body></code> type, and respond with a <code>Response<BoxBody></code>. (We'll discuss <code>BoxBody</code> on its own below.) It also needs to be <a href=\"https://docs.rs/tonic/0.5.2/tonic/transport/trait.NamedService.html\"><code>NamedService</code></a> (for routing).</li>\n<li>As usual, there are a bunch of <code>Clone</code>, <code>Send</code>, and <code>'static</code> bounds too, and requirements on the error representation.</li>\n</ul>\n<p>As complicated as all of that appears, the nice thing is that we don't really need to deal with those details in a simple Tonic application. Instead, we simply call the <code>serve</code> method and everything works like magic.</p>\n<p>But we're trying to go off the beaten path and get a better understanding of how this interacts with Hyper. So let's go deeper!</p>\n<h2 id=\"into-service\"><code>into_service</code></h2>\n<p>In addition to the <code>serve</code> method, Tonic's <code>Router</code> type also provides an <a href=\"https://docs.rs/tonic/0.5.2/tonic/transport/server/struct.Router.html#method.into_service\"><code>into_service</code> method</a>. I'm not going to go into all of its glory here, since it doesn't add much to the discussion but adds a lot to the reading you'll have to do. 
Instead, suffice it to say that</p>\n<ul>\n<li><code>into_service</code> returns a <code>RouterService<S></code> value</li>\n<li><code>S</code> must implement <code>Service<Request<Body>, Response = Response<ResBody>></code></li>\n<li><code>ResBody</code> is a type that Hyper can use for response bodies</li>\n</ul>\n<p>OK, cool? Now we can write our slightly more long-winded <code>main</code> function. First we create our <code>RouterService</code> value:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let grpc_service = tonic::transport::Server::builder()\n .add_service(EchoServer::new(MyEcho))\n .into_service();\n</code></pre>\n<p>But now we have a bit of a problem. Hyper expects a "make service" or an "app factory", and instead we just have a request handling service. So we need to go back to Hyper and use <code>make_service_fn</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_grpc_service = make_service_fn(move |_conn| {\n let grpc_service = grpc_service.clone();\n async { Ok::<_, Infallible>(grpc_service) }\n});\n</code></pre>\n<p>Notice that we need to clone a new copy of the <code>grpc_service</code>, and we need to play all the games with splitting up the closure and the async block, plus <code>Infallible</code>, that we saw before. 
But now, with <em>that</em> in place, we can launch our gRPC service:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let server = hyper::Server::bind(&addr).serve(make_grpc_service);\n\nif let Err(e) = server.await {\n eprintln!("server error: {}", e);\n}\n</code></pre>\n<p>If you want to play with this, you can clone <a href=\"https://github.com/snoyberg/tonic-example\">the tonic-example repo</a> and then:</p>\n<ul>\n<li>Run <code>cargo run --bin server</code> in one terminal</li>\n<li>Run <code>cargo run --bin client "Hello world!"</code> in another</li>\n</ul>\n<p>However, trying to open up http://localhost:3000 in your browser isn't going to work out too well. This server will only handle gRPC connections, not standard web browser requests, RESTful APIs, etc. We've got one final step now: writing something that can handle both Axum and Tonic services and route to them appropriately.</p>\n<h2 id=\"boxbody\"><code>BoxBody</code></h2>\n<p>Let's look into that <code>BoxBody</code> type in a little more detail. We're using the <code>tonic::body::BoxBody</code> <code>struct</code>, which is defined as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub type BoxBody = http_body::combinators::BoxBody<bytes::Bytes, crate::Status>;\n</code></pre>\n<p><code>http_body</code> itself provides its own <code>BoxBody</code>, which is parameterized over the <em>data</em> and <em>error</em>. Tonic uses the <code>Status</code> type for errors, and represents the different status codes a gRPC service can return. For those not familiar with <code>Bytes</code>, here's a quick excerpt from <a href=\"https://docs.rs/bytes/1.1.0/bytes/\">the docs</a></p>\n<blockquote>\n<p><code>Bytes</code> is an efficient container for storing and operating on contiguous slices of memory. 
It is intended for use primarily in networking code, but could have applications elsewhere as well.</p>\n<p><code>Bytes</code> values facilitate zero-copy network programming by allowing multiple <code>Bytes</code> objects to point to the same underlying memory. This is managed by using a reference count to track when the memory is no longer needed and can be freed.</p>\n</blockquote>\n<p>When you see <code>Bytes</code>, you can semantically think of it as a byte slice or byte vector. The underlying <code>BoxBody</code> from the <code>http_body</code> crate represents some kind of implementation of the <a href=\"https://docs.rs/http-body/0.4.3/http_body/trait.Body.html\"><code>http_body::Body</code></a> trait. The <code>Body</code> trait represents a streaming HTTP body, and contains:</p>\n<ul>\n<li>Associated types for <code>Data</code> and <code>Error</code>, corresponding to the type parameters to <code>BoxBody</code></li>\n<li><code>poll_data</code> for asynchronously reading more data from the body</li>\n<li>Helper <code>map_data</code> and <code>map_err</code> methods for manipulating the <code>Data</code> and <code>Error</code> associated types</li>\n<li>A <code>boxed</code> method for some type erasure, allowing us to get back a <code>BoxBody</code></li>\n<li>A few other helper methods around size hints and HTTP/2 trailing data</li>\n</ul>\n<p>The important thing to note for our purposes is that "type erasure" here isn't really complete type erasure. When we use <code>boxed</code> to get a trait object representing the body, we still have type parameters to represent the <code>Data</code> and <code>Error</code>. Therefore, if we end up with two different representations of <code>Data</code> or <code>Error</code>, they won't be compatible with each other. And let me ask you: do you think Axum will use the same <code>Status</code> error type to represent errors that Tonic does? (Hint: it doesn't.) 
So when we get to it next time, we'll have some footwork to do around unifying error types.</p>\n<h2 id=\"almost-there\">Almost there!</h2>\n<p>We'll wrap up next week with the final post in this series, tying together all the different things we've seen so far.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part4\">Read part 4 now</a></p>\n<p>If you're looking for more Rust content, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
"slug": "axum-hyper-tonic-tower-part3",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3",
"description": "Part 3 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
"updated": null,
"date": "2021-09-13",
"year": 2021,
"month": 9,
"day": 13,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"rust"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/axum-hyper-tonic-tower-part3.png",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "/blog/axum-hyper-tonic-tower-part3/",
"components": [
"blog",
"axum-hyper-tonic-tower-part3"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "tonic-and-grpc",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#tonic-and-grpc",
"title": "Tonic and gRPC",
"children": []
},
{
"level": 2,
"id": "the-server",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#the-server",
"title": "The server",
"children": []
},
{
"level": 2,
"id": "into-service",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#into-service",
"title": "into_service",
"children": []
},
{
"level": 2,
"id": "boxbody",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#boxbody",
"title": "BoxBody",
"children": []
},
{
"level": 2,
"id": "almost-there",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#almost-there",
"title": "Almost there!",
"children": []
}
],
"word_count": 1583,
"reading_time": 8,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1"
},
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2"
},
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4"
}
]
},
{
"relative_path": "blog/axum-hyper-tonic-tower-part2.md",
"colocated_path": null,
"content": "<p>This is the second of four posts in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li>Today's post: Understanding Hyper, and first experiences with Axum</li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<p>I recommend checking out the first post in the series if you haven't already.</p>\n<p class=\"text-center\" style=\"border: 1px solid #000;border-radius:1rem;padding:1rem;background-color:#f1f1f1\">\n <a class=\"btn btn-primary\" href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\" target=\"_blank\">\n Subscribe to our blog via email\n </a>\n <br>\n <small>Email subscriptions come from our <a target=\"_blank\" href=\"/feed/\">Atom feed</a> and are handled by <a target=\"_blank\" href=\"https://blogtrottr.com\">Blogtrottr</a>. 
You will only receive notifications of blog posts, and can unsubscribe any time.</small>\n</p>\n<h2 id=\"quick-recap\">Quick recap</h2>\n<ul>\n<li>Tower provides a <code>Service</code> trait, which is basically an asynchronous function from requests to responses</li>\n<li><code>Service</code> is parameterized on the request type, and has an associated type for <code>Response</code></li>\n<li>It also has an associated <code>Error</code> type, and an associated <code>Future</code> type</li>\n<li><code>Service</code> allows async behavior both in checking whether the service is ready to accept a request and in handling the request</li>\n<li>A web application ends up having two sets of async request/response behavior\n<ul>\n<li>Inner: a service that accepts HTTP requests and returns HTTP responses</li>\n<li>Outer: a service that accepts the incoming network connections and returns an inner service</li>\n</ul>\n</li>\n</ul>\n<p>With that in mind, let's look at Hyper.</p>\n<h2 id=\"services-in-hyper\">Services in Hyper</h2>\n<p>Now that we've got Tower under our belts a bit, it's time to dive into the specific world of Hyper. Much of what we saw above will apply directly to Hyper. But Hyper has a few additional curveballs to deal with:</p>\n<ul>\n<li>Both the <code>Request</code> and <code>Response</code> types are parameterized over the representation of the request/response bodies</li>\n<li>There are a bunch of additional traits and type parameters in the public API, some not appearing in the docs at all, and many that are unclear</li>\n</ul>\n<p>In place of the <code>run</code> function we had in our previous fake server example, Hyper follows a builder pattern for initializing HTTP servers. After providing configuration values, you create an active <code>Server</code> value from your <code>Builder</code> with the <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/struct.Builder.html#method.serve\"><code>serve</code></a> method. 
Just to get it out of the way now, this is the type signature of <code>serve</code> from the public docs:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub fn serve<S, B>(self, new_service: S) -> Server<I, S, E>\nwhere\n I: Accept,\n I::Error: Into<Box<dyn StdError + Send + Sync>>,\n I::Conn: AsyncRead + AsyncWrite + Unpin + Send + 'static,\n S: MakeServiceRef<I::Conn, Body, ResBody = B>,\n S::Error: Into<Box<dyn StdError + Send + Sync>>,\n B: HttpBody + 'static,\n B::Error: Into<Box<dyn StdError + Send + Sync>>,\n E: NewSvcExec<I::Conn, S::Future, S::Service, E, NoopWatcher>,\n E: ConnStreamExec<<S::Service as HttpService<Body>>::Future, B>,\n</code></pre>\n<p>That's a lot of requirements, and not all of them are clear from the docs. Hopefully we can bring some clarity to this. But for now, let's start off with something simpler: the "Hello world" example from <a href=\"https://hyper.rs\">the Hyper homepage</a>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::{convert::Infallible, net::SocketAddr};\nuse hyper::{Body, Request, Response, Server};\nuse hyper::service::{make_service_fn, service_fn};\n\nasync fn handle(_: Request<Body>) -> Result<Response<Body>, Infallible> {\n Ok(Response::new("Hello, World!".into()))\n}\n\n#[tokio::main]\nasync fn main() {\n let addr = SocketAddr::from(([127, 0, 0, 1], 3000));\n\n let make_svc = make_service_fn(|_conn| async {\n Ok::<_, Infallible>(service_fn(handle))\n });\n\n let server = Server::bind(&addr).serve(make_svc);\n\n if let Err(e) = server.await {\n eprintln!("server error: {}", e);\n }\n}\n</code></pre>\n<p>This follows the same pattern we established above:</p>\n<ul>\n<li><code>handle</code> is an async function from a <code>Request</code> to a <code>Response</code>, which may fail with an <code>Infallible</code> value.\n<ul>\n<li>Both <code>Request</code> and <code>Response</code> are parameterized 
with <code>Body</code>, a default HTTP body representation.</li>\n</ul>\n</li>\n<li><code>handle</code> gets wrapped up in <code>service_fn</code> to produce a <code>Service<Request<Body>></code>. This is like <code>app_fn</code> above.</li>\n<li>We use <code>make_service_fn</code>, like <code>app_factory_fn</code> above, to produce the <code>Service<&AddrStream></code> (we'll get to that <code>&AddrStream</code> shortly).\n<ul>\n<li>We don't care about the <code>&AddrStream</code> value, so we ignore it</li>\n<li>The return value from the function inside <code>make_service_fn</code> must be a <code>Future</code>, so we wrap with <code>async</code></li>\n<li>The output of that <code>Future</code> must be a <code>Result</code>, so we wrap with an <code>Ok</code></li>\n<li>We need to help the compiler out a bit and provide a type annotation of <code>Infallible</code>, otherwise it won't know the type of the <code>Ok(service_fn(handle))</code> expression</li>\n</ul>\n</li>\n</ul>\n<p>Using this level of abstraction for writing a normal web app is painful for (at least) three different reasons:</p>\n<ul>\n<li>Managing all of these <code>Service</code> pieces manually is a pain</li>\n<li>There's very little in the way of high-level helpers, like "parse the request body as a JSON value"</li>\n<li>Any kind of mistake in your types may lead to very large, non-local error messages that are difficult to diagnose</li>\n</ul>\n<p>So we'll be more than happy to move on from Hyper to Axum a bit later. But for now, let's continue exploring things at the Hyper layer.</p>\n<h2 id=\"bypassing-service-fn-and-make-service-fn\">Bypassing <code>service_fn</code> and <code>make_service_fn</code></h2>\n<p>What I found most helpful when trying to grok Hyper was implementing a simple app without <code>service_fn</code> and <code>make_service_fn</code>. So let's go through that ourselves here. We're going to create a simple counter app (I'm nothing if not predictable). 
We'll need two different data types: one for the "app factory", and one for the app itself. Let's start with the app itself:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct DemoApp {\n counter: Arc<AtomicUsize>,\n}\n\nimpl Service<Request<Body>> for DemoApp {\n type Response = Response<Body>;\n type Error = hyper::http::Error;\n type Future = Ready<Result<Self::Response, Self::Error>>;\n\n fn poll_ready(&mut self, _cx: &mut std::task::Context) -> Poll<Result<(), Self::Error>> {\n Poll::Ready(Ok(()))\n }\n\n fn call(&mut self, _req: Request<Body>) -> Self::Future {\n let counter = self.counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n let res = Response::builder()\n .status(200)\n .header("Content-Type", "text/plain; charset=utf-8")\n .body(format!("Counter is at: {}", counter).into());\n std::future::ready(res)\n }\n}\n</code></pre>\n<p>This implementation uses the <code>std::future::Ready</code> struct to create a <code>Future</code> which is immediately ready. In other words, our application doesn't perform any async actions. I've set the <code>Error</code> associated type to <code>hyper::http::Error</code>. This error would be generated if, for example, you provided invalid strings to the <code>header</code> method call, such as non-ASCII characters. 
As we've seen multiple times, <code>poll_ready</code> just advertises that it's always ready to handle another request.</p>\n<p>The implementation of <code>DemoAppFactory</code> isn't terribly different:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct DemoAppFactory {\n counter: Arc<AtomicUsize>,\n}\n\nimpl Service<&AddrStream> for DemoAppFactory {\n type Response = DemoApp;\n type Error = Infallible;\n type Future = Ready<Result<Self::Response, Self::Error>>;\n\n fn poll_ready(&mut self, _cx: &mut std::task::Context) -> Poll<Result<(), Self::Error>> {\n Poll::Ready(Ok(()))\n }\n\n fn call(&mut self, conn: &AddrStream) -> Self::Future {\n println!("Accepting a new connection from {:?}", conn);\n std::future::ready(Ok(DemoApp {\n counter: self.counter.clone()\n }))\n }\n}\n</code></pre>\n<p>We have a different parameter to <code>Service</code>, this time <code>&AddrStream</code>. I did initially find the naming here confusing. In Tower, a <code>Service</code> takes some <code>Request</code>. And with our <code>DemoApp</code>, the <code>Request</code> it takes is a Hyper <code>Request<Body></code>. But in the case of <code>DemoAppFactory</code>, the <code>Request</code> it's taking is a <code>&AddrStream</code>. Keep in mind that a <code>Service</code> is really just a generalization of fallible async functions from input to output. The input may be a <code>Request<Body></code>, or may be a <code>&AddrStream</code>, or something else entirely.</p>\n<p>Similarly, the "response" here isn't an HTTP response, but a <code>DemoApp</code>. 
I again find it easier to use the terms "input" and "output" to avoid the name overloading of request and response.</p>\n<p>Finally, our <code>main</code> function looks much the same as the original from the "Hello world" example:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n let factory = DemoAppFactory {\n counter: Arc::new(AtomicUsize::new(0)),\n };\n\n let server = Server::bind(&addr).serve(factory);\n\n if let Err(e) = server.await {\n eprintln!("server error: {}", e);\n }\n}\n</code></pre>\n<p>If you're looking to extend your understanding here, I'd recommend extending this example to perform some async actions within the app. How would you modify <code>Future</code>? If you use a trait object, how exactly do you pin?</p>\n<p>But now it's time to take a dive into a topic I've avoided for a while.</p>\n<h2 id=\"understanding-the-traits\">Understanding the traits</h2>\n<p>Let's refresh our memory from above on the signature of <code>serve</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub fn serve<S, B>(self, new_service: S) -> Server<I, S, E>\nwhere\n I: Accept,\n I::Error: Into<Box<dyn StdError + Send + Sync>>,\n I::Conn: AsyncRead + AsyncWrite + Unpin + Send + 'static,\n S: MakeServiceRef<I::Conn, Body, ResBody = B>,\n S::Error: Into<Box<dyn StdError + Send + Sync>>,\n B: HttpBody + 'static,\n B::Error: Into<Box<dyn StdError + Send + Sync>>,\n E: NewSvcExec<I::Conn, S::Future, S::Service, E, NoopWatcher>,\n E: ConnStreamExec<<S::Service as HttpService<Body>>::Future, B>,\n</code></pre>\n<p>Up until preparing this blog post, I have never tried to take a deep dive into understanding all of these bounds. So this will be an adventure for us all! (And perhaps it should end up with some documentation PRs by me...) Let's start off with the type variables. 
Altogether, we have four: two on the <code>impl</code> block itself, and two on this method:</p>\n<ul>\n<li><code>I</code> represents the incoming stream of connections.</li>\n<li><code>E</code> represents the executor.</li>\n<li><code>S</code> is the service we're going to run. Using our terminology from above, this would be the "app factory." Using Tower/Hyper terminology, this is the "make service."</li>\n<li><code>B</code> is the choice of response body the service returns (the "app", not the "app factory", using nomenclature above).</li>\n</ul>\n<h3 id=\"i-accept\"><code>I: Accept</code></h3>\n<p><code>I</code> needs to implement the <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/accept/trait.Accept.html\"><code>Accept</code></a> trait, which represents the ability to accept a new connection from some source. The only implementation out of the box is for <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/conn/struct.AddrIncoming.html\"><code>AddrIncoming</code></a>, which can be created from a <code>SocketAddr</code>. And in fact, that's exactly what <a href=\"https://docs.rs/hyper/0.14.12/src/hyper/server/server.rs.html#66-71\"><code>Server::bind</code> does</a>.</p>\n<p><code>Accept</code> has two associated types. <code>Error</code> must be something that can be converted into an error object, or <code>Into<Box<dyn StdError + Send + Sync>></code>. This is the requirement of (almost?) every associated error type we look at, so from now on I'll just skip over them. We need to be able to convert whatever error happened into a uniform representation.</p>\n<p>The <code>Conn</code> associated type represents an individual connection. In the case of <code>AddrIncoming</code>, the associated type is <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/conn/struct.AddrStream.html\"><code>AddrStream</code></a>. 
This type must implement <code>AsyncRead</code> and <code>AsyncWrite</code> for communication, <code>Send</code> and <code>'static</code> so it can be sent to different threads, and <code>Unpin</code>. The requirement for <code>Unpin</code> bubbles up from deeper in the stack, and I honestly don't know what drives it.</p>\n<h3 id=\"s-makeserviceref\"><code>S: MakeServiceRef</code></h3>\n<p><code>MakeServiceRef</code> is one of those traits that doesn't appear in the public documentation. This seems to be intentional. Reading the source:</p>\n<blockquote>\n<p>Just a sort-of "trait alias" of <code>MakeService</code>, not to be implemented by anyone, only used as bounds.</p>\n</blockquote>\n<p>Were you confused as to why we were receiving a reference with <code>&AddrStream</code>? This is the trait that powers that transformation. Overall, the trait bound <code>S: MakeServiceRef<I::Conn, Body, ResBody = B></code> means:</p>\n<ul>\n<li><code>S</code> must be a <code>Service</code></li>\n<li><code>S</code> will accept input of type <code>&I::Conn</code></li>\n<li>It will in turn produce a <em>new</em> <code>Service</code> as output</li>\n<li>That new service will accept <code>Request<Body></code> as input, and produce <code>Response<ResBody></code> as output</li>\n</ul>\n<p>And while we're talking about it: that <code>ResBody</code> has the restriction that it must implement <a href=\"https://docs.rs/hyper/0.14.12/hyper/body/trait.HttpBody.html\"><code>HttpBody</code></a>. As you might guess, the <code>Body</code> struct mentioned above implements <code>HttpBody</code>. There are a number of implementations too. When we get to Tonic and gRPC, we'll see that there are, in fact, other response bodies we have to deal with.</p>\n<h3 id=\"newsvcexec-and-connstreamexec\"><code>NewSvcExec</code> and <code>ConnStreamExec</code></h3>\n<p>The default value for the <code>E</code> parameter is <code>Exec</code>, which does not appear in the generated docs. 
But of course you can find it <a href=\"https://docs.rs/crate/hyper/0.14.12/source/src/common/exec.rs\">in the source</a>. The concept of <code>Exec</code> is to specify how tasks are spawned off. By default, it leverages <code>tokio::spawn</code>.</p>\n<p>I'm not entirely certain of how all of this plays out, but I believe the two traits in the heading allow for different handling of spawning for the connection service (app factory) versus the request service (app).</p>\n<h2 id=\"using-axum\">Using Axum</h2>\n<p>Axum is the new web framework that kicked off this whole blog post. Instead of dealing directly with Hyper like we did above, let's reimplement our counter web service using Axum. We'll be using <code>axum = "0.2"</code>. The <a href=\"https://docs.rs/axum/0.2.3/axum/index.html\">crate docs</a> provide a great overview of Axum, and I'm not going to try to replicate that information here. Instead, here's my rewritten code. We'll analyze a few key pieces below:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use axum::extract::Extension;\nuse axum::handler::get;\nuse axum::{AddExtensionLayer, Router};\nuse hyper::{HeaderMap, Server, StatusCode};\nuse std::net::SocketAddr;\nuse std::sync::atomic::AtomicUsize;\nuse std::sync::Arc;\n\n#[derive(Clone, Default)]\nstruct AppState {\n counter: Arc<AtomicUsize>,\n}\n\n#[tokio::main]\nasync fn main() {\n let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n let app = Router::new()\n .route("/", get(home))\n .layer(AddExtensionLayer::new(AppState::default()));\n\n let server = Server::bind(&addr).serve(app.into_make_service());\n\n if let Err(e) = server.await {\n eprintln!("server error: {}", e);\n }\n}\n\nasync fn home(state: Extension<AppState>) -> (StatusCode, HeaderMap, String) {\n let counter = state\n .counter\n .fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n let mut headers = HeaderMap::new();\n headers.insert("Content-Type", "text/plain; 
charset=utf-8".parse().unwrap());\n let body = format!("Counter is at: {}", counter);\n (StatusCode::OK, headers, body)\n}\n</code></pre>\n<p>The first thing I'd like to get out of the way is this whole <code>AddExtensionLayer</code>/<code>Extension</code> bit. This is how we're managing shared state within our application. It's not directly relevant to our overall analysis of Tower and Hyper, so I'll suffice with a <a href=\"https://docs.rs/axum/0.2.3/axum/index.html#sharing-state-with-handlers\">link to the docs demonstrating how this works</a>. Interestingly, you may notice that this implementation relies on middlewares, which does in fact leverage Tower, so it's not completely separate.</p>\n<p>Anyway, back to our point at hand. Within our <code>main</code> function, we're now using this <code>Router</code> concept to build up our application:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let app = Router::new()\n .route("/", get(home))\n .layer(AddExtensionLayer::new(AppState::default()));\n</code></pre>\n<p>This says, essentially, "please call the <code>home</code> function when you receive a request for <code>/</code>, and add a middleware that does that whole extension thing." The <code>home</code> function uses an extractor to get the <code>AppState</code>, and returns a value of type <code>(StatusCode, HeaderMap, String)</code> to represent the response. In Axum, any implementation of the appropriately named <a href=\"https://docs.rs/axum/0.2.3/axum/response/trait.IntoResponse.html\"><code>IntoResponse</code> trait</a> can be returned from handler functions.</p>\n<p>Anyway, our <code>app</code> value is now a <code>Router</code>. But a <code>Router</code> cannot be directly run by Hyper. Instead, we need to convert it into a <code>MakeService</code> (a.k.a. an app factory). Fortunately, that's easy: we call <code>app.into_make_service()</code>. 
Let's look at that method's signature:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<S> Router<S> {\n pub fn into_make_service(self) -> IntoMakeService<S>\n where\n S: Clone;\n}\n</code></pre>\n<p>And going down the rabbit hole a bit further:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub struct IntoMakeService<S> { /* fields omitted */ }\n\nimpl<S: Clone, T> Service<T> for IntoMakeService<S> {\n type Response = S;\n type Error = Infallible;\n // other stuff omitted\n}\n</code></pre>\n<p>The type <code>Router<S></code> is a value that can produce a service of type <code>S</code>. <code>IntoMakeService<S></code> will take some kind of connection info, <code>T</code>, and produce that service <code>S</code> asynchronously. And since <code>Error</code> is <code>Infallible</code>, we know it can't fail. But as much as we say "asynchronously", looking at the implementation of <code>Service</code> for <code>IntoMakeService</code>, we see a familiar pattern:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {\n Poll::Ready(Ok(()))\n}\n\nfn call(&mut self, _target: T) -> Self::Future {\n future::MakeRouteServiceFuture {\n future: ready(Ok(self.service.clone())),\n }\n}\n</code></pre>\n<p>Also, notice how that <code>T</code> value for connection info doesn't actually have any bounds or other information. <code>IntoMakeService</code> just throws away the connection information. (If you need it for some reason, see <a href=\"https://docs.rs/axum/0.2.3/axum/routing/struct.Router.html#method.into_make_service_with_connect_info\"><code>into_make_service_with_connect_info</code></a>.) 
In other words:</p>\n<ul>\n<li><code>Router<S></code> is a type that lets us add routes and middleware layers</li>\n<li>You can convert a <code>Router<S></code> into an <code>IntoMakeService<S></code></li>\n<li>But <code>IntoMakeService<S></code> is really just a fancy wrapper around an <code>S</code> to appease the Hyper requirements around app factories</li>\n<li>So the real workhorse here is just <code>S</code></li>\n</ul>\n<p>So where does that <code>S</code> type come from? It's built up by all the <code>route</code> and <code>layer</code> calls you make. For example, check out the <code>get</code> function's signature:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub fn get<H, B, T>(handler: H) -> OnMethod<H, B, T, EmptyRouter>\nwhere\n H: Handler<B, T>,\n\npub struct OnMethod<H, B, T, F> { /* fields omitted */ }\n\nimpl<H, B, T, F> Service<Request<B>> for OnMethod<H, B, T, F>\nwhere\n H: Handler<B, T>,\n F: Service<Request<B>, Response = Response<BoxBody>, Error = Infallible> + Clone,\n B: Send + 'static,\n{\n type Response = Response<BoxBody>;\n type Error = Infallible;\n // and more stuff\n}\n</code></pre>\n<p><code>get</code> returns an <code>OnMethod</code> value. And <code>OnMethod</code> is a <code>Service</code> that takes a <code>Request<B></code> and returns a <code>Response<BoxBody></code>. There's some funny business at play regarding the representations of bodies, which we'll eventually dive into a bit more. But with our newfound understanding of Tower and Hyper, the types at play here are no longer inscrutable. In fact, they may even be scrutable!</p>\n<p>And one final note on the example above. Axum works directly with a lot of the Hyper machinery. And that includes the <code>Server</code> type. While the <code>axum</code> crate reexports many things from Hyper, you can use those types directly from Hyper instead if so desired. 
In other words, Axum is pretty close to the underlying libraries, simply providing some convenience on top. It's one of the reasons I'm pretty excited to get a bit deeper into my experiments with Axum.</p>\n<p>So to sum up at this point:</p>\n<ul>\n<li>Tower provides an abstraction for asynchronous functions from input to output, which may fail. This is called a service.</li>\n<li>HTTP servers have two levels of services. The lower level is a service from HTTP requests to HTTP responses. The upper level is a service from connection information to the lower level service.</li>\n<li>Hyper has a lot of additional traits floating around, some visible, some invisible, which allow for more generality, and also make things a bit more complicated to understand.</li>\n<li>Axum sits on top of Hyper and provides an easier to use interface for many common cases. It does this by providing the same kind of services that Hyper is expecting to see. And it seems to be doing a bunch of fancy footwork around HTTP body representations.</li>\n</ul>\n<p>Next step on our journey: let's look at another library for building Hyper services. We'll follow up on this in our next post.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part3\">Read part 3 now</a></p>\n<p>If you're looking for more Rust content from FP Complete, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
"slug": "axum-hyper-tonic-tower-part2",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2",
"description": "Part 2 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
"updated": null,
"date": "2021-09-06",
"year": 2021,
"month": 9,
"day": 6,
"taxonomies": {
"tags": [
"rust"
],
"categories": [
"functional programming"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/axum-hyper-tonic-tower-part2.png",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "/blog/axum-hyper-tonic-tower-part2/",
"components": [
"blog",
"axum-hyper-tonic-tower-part2"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "quick-recap",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#quick-recap",
"title": "Quick recap",
"children": []
},
{
"level": 2,
"id": "services-in-hyper",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#services-in-hyper",
"title": "Services in Hyper",
"children": []
},
{
"level": 2,
"id": "bypassing-service-fn-and-make-service-fn",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#bypassing-service-fn-and-make-service-fn",
"title": "Bypassing service_fn and make_service_fn",
"children": []
},
{
"level": 2,
"id": "understanding-the-traits",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#understanding-the-traits",
"title": "Understanding the traits",
"children": [
{
"level": 3,
"id": "i-accept",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#i-accept",
"title": "I: Accept",
"children": []
},
{
"level": 3,
"id": "s-makeserviceref",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#s-makeserviceref",
"title": "S: MakeServiceRef",
"children": []
},
{
"level": 3,
"id": "newsvcexec-and-connstreamexec",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#newsvcexec-and-connstreamexec",
"title": "NewSvcExec and ConnStreamExec",
"children": []
}
]
},
{
"level": 2,
"id": "using-axum",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#using-axum",
"title": "Using Axum",
"children": []
}
],
"word_count": 3119,
"reading_time": 16,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1"
},
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3"
},
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4"
}
]
},
{
"relative_path": "blog/axum-hyper-tonic-tower-part1.md",
"colocated_path": null,
"content": "<p>I've played around with various web server libraries and frameworks in Rust, and found various strengths and weaknesses with them. Most recently, I put together an FP Complete solution called Zehut (which I'll blog about another time) that needed to combine a web frontend and gRPC server. I used Hyper, Tonic, and a minimal library I put together called <a href=\"https://github.com/snoyberg/routetype-rs\">routetype</a>. It worked, but I was left underwhelmed. Working directly with Hyper, even with the minimal <code>routetype</code> layer, felt too ad-hoc.</p>\n<p>When I recently saw the release of <a href=\"https://lib.rs/crates/axum\">Axum</a>, it seemed to be speaking to many of the needs I had, especially calling out Tonic support. I decided to make an experiment of replacing the direct Hyper+<code>routetype</code> usage I'd used with Axum. Overall the approach works, but (like the <code>routetype</code> work I'd already done) involved some hairy business around the Hyper and Tower APIs.</p>\n<p>I've been meaning to write some blog post/tutorial/experience report for Hyper+Tower for a while now. So I decided to take this opportunity to step through these four libraries (Tower, Hyper, Axum, and Tonic), with the specific goal in mind of creating hybrid web/gRPC apps. It turned out that there was more information here than I'd anticipated. 
To make for easier reading, I've split this up into a four part blog post series:</p>\n<ol>\n<li>Today's post: overview of Tower</li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<p>Let's dive in!</p>\n<p class=\"text-center\" style=\"border: 1px solid #000;border-radius:1rem;padding:1rem;background-color:#f1f1f1\">\n <a class=\"btn btn-primary\" href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\" target=\"_blank\">\n Subscribe to our blog via email\n </a>\n <br>\n <small>Email subscriptions come from our <a target=\"_blank\" href=\"/feed/\">Atom feed</a> and are handled by <a target=\"_blank\" href=\"https://blogtrottr.com\">Blogtrottr</a>. You will only receive notifications of blog posts, and can unsubscribe any time.</small>\n</p>\n<h2 id=\"what-is-tower\">What is Tower?</h2>\n<p>The first stop on our journey is the <a href=\"https://lib.rs/crates/tower\">tower crate</a>. To quote the docs, which state this succinctly:</p>\n<blockquote>\n<p>Tower provides a simple core abstraction, the <code>Service</code> trait, which represents an asynchronous function taking a request and returning either a response or an error. This abstraction can be used to model both clients and servers.</p>\n</blockquote>\n<p>This sounds fairly straightforward. To express it in Haskell syntax, I'd probably say <code>Request -> IO Response</code>, leveraging the fact that <code>IO</code> handles both error handling and asynchronous I/O. 
But the <code>Service</code> trait is necessarily more complex than that simplified signature:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub trait Service<Request> {\n type Response;\n type Error;\n\n // This is what it says in the generated docs\n type Future: Future;\n\n // But this more informative piece is in the actual source code\n type Future: Future<Output = Result<Self::Response, Self::Error>>;\n\n fn poll_ready(\n &mut self,\n cx: &mut Context<'_>\n ) -> Poll<Result<(), Self::Error>>;\n fn call(&mut self, req: Request) -> Self::Future;\n}\n</code></pre>\n<p><code>Service</code> is a trait, parameterized on the types of <code>Request</code>s it can handle. There's nothing specific about HTTP in Tower, so <code>Request</code>s may be lots of different things. And even within Hyper, an HTTP library leveraging Tower, we'll see that there are at least two different types of <code>Request</code> we care about.</p>\n<p>Anyway, two of the associated types here are straightforward: <code>Response</code> and <code>Error</code>. Combining the parameterized <code>Request</code> with <code>Response</code> and <code>Error</code>, we basically have all the information we care about for a <code>Service</code>.</p>\n<p>But it's <em>not</em> all the information Rust cares about. To provide for asynchronous calls, we need to provide a <code>Future</code>. And the compiler needs to know the type of the <code>Future</code> we'll be returning. This isn't really useful information to us as programmers, but there are <a href=\"https://lib.rs/crates/async-trait\">plenty of pain points already</a> around <code>async</code> code in traits.</p>\n<p>And finally, what about those last two methods? They are there to allow the <code>Service</code> itself to be asynchronous. It took me quite a while to fully wrap my head around this. 
We have two different components of async behavior going on here:</p>\n<ul>\n<li>The <code>Service</code> may not be immediately ready to handle a new incoming request. For example (coming from <a href=\"https://docs.rs/tower-service/0.3.1/src/tower_service/lib.rs.html#244-257\">the docs on <code>poll_ready</code></a>), the server may currently be at capacity. You need to check <code>poll_ready</code> to find out whether the <code>Service</code> is ready to accept a new request. Then, when it's ready, you use <code>call</code> to initiate handling of a new <code>Request</code>.</li>\n<li>The handling of the request itself is <em>also</em> async, returning a <code>Future</code>, which can be polled/awaited.</li>\n</ul>\n<p>Some of this complexity can be hidden away. For example, instead of giving a concrete type for <code>Future</code>, you can use a trait object (a.k.a. type erasure). Stealing again from the docs, the following is a perfectly valid associated type for <code>Future</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>>>>;\n</code></pre>\n<p>However, this incurs some overhead for dynamic dispatch.</p>\n<p>Finally, these two layers of async behavior are often unnecessary. Many times, our server is <em>always</em> ready to handle a new incoming <code>Request</code>. In the wild, you'll often see code that hard-codes the idea that a service is always ready. To quote from those docs for the final time in this section:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {\n Poll::Ready(Ok(()))\n}\n</code></pre>\n<p>This isn't saying that request handling is synchronous in our <code>Service</code>. 
It's saying that request <em>acceptance</em> always succeeds immediately.</p>\n<p>Going along with the two layers of async handling, there are similarly two layers of error handling. Both accepting the new request and processing it may fail. But as you can see in the code above, it's possible to hard-code something which always succeeds with <code>Ok(())</code>, which is fairly common for <code>poll_ready</code>. When processing the request itself also cannot fail, using <a href=\"https://doc.rust-lang.org/stable/std/convert/enum.Infallible.html\"><code>Infallible</code></a> (and eventually <a href=\"https://doc.rust-lang.org/stable/std/primitive.never.html\">the <code>never</code> type</a>) as the <code>Error</code> associated type is a good call.</p>\n<h2 id=\"fake-web-server\">Fake web server</h2>\n<p>That was all relatively abstract, which is part of the problem with understanding Tower (at least for me). Let's make it more concrete by implementing a fake web server and fake web application. My <code>Cargo.toml</code> file looks like:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[package]\nname = "learntower"\nversion = "0.1.0"\nedition = "2018"\n\n[dependencies]\ntower = { version = "0.4", features = ["full"] }\ntokio = { version = "1", features = ["full"] }\nanyhow = "1"\n</code></pre>\n<p>I've uploaded <a href=\"https://gist.github.com/snoyberg/c6c54ed38ec8fac966e362eb212ab421\">the full source code as a Gist</a>, but let's walk through this example. 
First we define some helper types to represent HTTP request and response values:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub struct Request {\n pub path_and_query: String,\n pub headers: HashMap<String, String>,\n pub body: Vec<u8>,\n}\n\n#[derive(Debug)]\npub struct Response {\n pub status: u32,\n pub headers: HashMap<String, String>,\n pub body: Vec<u8>,\n}\n</code></pre>\n<p>Next we want to define a function, <code>run</code>, which:</p>\n<ul>\n<li>Accepts a web application as an argument</li>\n<li>Loops infinitely</li>\n<li>Generates fake <code>Request</code> values</li>\n<li>Prints out the <code>Response</code> values it gets from the application</li>\n</ul>\n<p>The first question is: how do you represent that web application? It's going to be an implementation of <code>Service</code>, with the <code>Request</code> and <code>Response</code> types being those we defined above. We don't need to know much about the errors, since we'll simply print them. These parts are pretty easy:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub async fn run<App>(mut app: App)\nwhere\n App: Service<crate::http::Request, Response = crate::http::Response>,\n App::Error: std::fmt::Debug,\n</code></pre>\n<p>But there's one final bound we need to take into account. We want our fake web server to be able to handle requests concurrently. To do that, we'll use <code>tokio::spawn</code> to create new tasks for handling requests. Therefore, we need to be able to send the request handling to a separate task, which will require bounds of both <code>Send</code> and <code>'static</code>. 
There are at least two different ways of handling this:</p>\n<ul>\n<li>Cloning the <code>App</code> value in the main task and sending it to the spawned task</li>\n<li>Creating the <code>Future</code> in the main task and sending it to the spawned task</li>\n</ul>\n<p>There are different runtime impacts of making this decision, such as whether the main request accept loop will be blocked or not by the application reporting that it's not available for requests. I decided to go with the latter approach. So we've got one more bound on <code>run</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">App::Future: Send + 'static,\n</code></pre>\n<p>The body of <code>run</code> is wrapped inside a <code>loop</code> to allow simulating an infinitely running server. First we sleep for a bit and then generate our new fake request:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;\n\nlet req = crate::http::Request {\n path_and_query: "/fake/path?page=1".to_owned(),\n headers: HashMap::new(),\n body: Vec::new(),\n};\n</code></pre>\n<p>Next, we use the <code>ready</code> method (from the <code>ServiceExt</code> extension trait) to check whether the service is ready to accept a new request:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let app = match app.ready().await {\n Err(e) => {\n eprintln!("Service not able to accept requests: {:?}", e);\n continue;\n }\n Ok(app) => app,\n};\n</code></pre>\n<p>Once we know we can make another request, we get our <code>Future</code>, spawn the task, and then wait for the <code>Future</code> to complete:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let future = app.call(req);\ntokio::spawn(async move {\n match future.await {\n Ok(res) => println!("Successful 
response: {:?}", res),\n Err(e) => eprintln!("Error occurred: {:?}", e),\n }\n});\n</code></pre>\n<p>And just like that, we have a fake web server! Now it's time to implement our fake web application. I'll call it <code>DemoApp</code>, and give it an atomic counter to make things slightly interesting:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Default)]\npub struct DemoApp {\n counter: Arc<AtomicUsize>,\n}\n</code></pre>\n<p>Next comes the implementation of <code>Service</code>. The first few bits are relatively easy:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl tower::Service<crate::http::Request> for DemoApp {\n type Response = crate::http::Response;\n type Error = anyhow::Error;\n #[allow(clippy::type_complexity)]\n type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;\n\n // Still need poll_ready and call\n}\n</code></pre>\n<p><code>Request</code> and <code>Response</code> get set to the types we defined, we'll use the wonderful <code>anyhow</code> crate's <code>Error</code> type, and we'll use a trait object for the <code>Future</code>. We're going to implement a <code>poll_ready</code> which is always ready for a <code>Request</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn poll_ready(\n &mut self,\n _cx: &mut std::task::Context<'_>,\n) -> Poll<Result<(), Self::Error>> {\n Poll::Ready(Ok(())) // always ready to accept a connection\n}\n</code></pre>\n<p>And finally we get to our <code>call</code> method. We're going to implement some logic to increment the counter, fail 25% of the time, and otherwise echo back the request from the user, with an added <code>X-Counter</code> response header. 
Let's see it in action:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn call(&mut self, mut req: crate::http::Request) -> Self::Future {\n let counter = self.counter.clone();\n Box::pin(async move {\n println!("Handling a request for {}", req.path_and_query);\n let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n anyhow::ensure!(counter % 4 != 2, "Failing 25% of the time, just for fun");\n req.headers\n .insert("X-Counter".to_owned(), counter.to_string());\n let res = crate::http::Response {\n status: 200,\n headers: req.headers,\n body: req.body,\n };\n Ok::<_, anyhow::Error>(res)\n })\n}\n</code></pre>\n<p>With all that in place, running our fake web app on our fake web server is nice and easy:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n fakeserver::run(app::DemoApp::default()).await;\n}\n</code></pre>\n<h2 id=\"app-fn\"><code>app_fn</code></h2>\n<p>One thing that's particularly unsatisfying about the code above is how much ceremony it takes to write a web application. I need to create a new data type, provide a <code>Service</code> implementation for it, and futz around with all that <code>Pin<Box<Future>></code> business to make things line up. The core logic of our <code>DemoApp</code> is buried inside the <code>call</code> method. It would be nice to provide a helper of some kind that lets us define things more easily.</p>\n<p>You can check out <a href=\"https://gist.github.com/snoyberg/cb72a9cbefc608ec15e05ed70ced1a6b\">the full code as a Gist</a>. But let's talk through it here. We're going to implement a new helper <code>app_fn</code> function which takes a closure as its argument. That closure will take in a <code>Request</code> value, and then return a <code>Response</code>. But we want to make sure it asynchronously returns the <code>Response</code>. 
So we'll need our calls to look something like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">app_fn(|req| async { some_code(req).await })\n</code></pre>\n<p>This <code>app_fn</code> function needs to return a type which provides our <code>Service</code> implementation. Let's call it <code>AppFn</code>. Putting these two things together, we get:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">pub struct AppFn<F> {\n f: F,\n}\n\npub fn app_fn<F, Ret>(f: F) -> AppFn<F>\nwhere\n F: FnMut(crate::http::Request) -> Ret,\n Ret: Future<Output = Result<crate::http::Response, anyhow::Error>>,\n{\n AppFn { f }\n}\n</code></pre>\n<p>So far, so good. We can see with the bounds on <code>app_fn</code> that we'll accept a <code>Request</code> and return some <code>Ret</code> type, and <code>Ret</code> must be a <code>Future</code> that produces a <code>Result<Response, Error></code>. Implementing <code>Service</code> for this isn't too bad:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<F, Ret> tower::Service<crate::http::Request> for AppFn<F>\nwhere\n F: FnMut(crate::http::Request) -> Ret,\n Ret: Future<Output = Result<crate::http::Response, anyhow::Error>>,\n{\n type Response = crate::http::Response;\n type Error = anyhow::Error;\n type Future = Ret;\n\n fn poll_ready(\n &mut self,\n _cx: &mut std::task::Context<'_>,\n ) -> Poll<Result<(), Self::Error>> {\n Poll::Ready(Ok(())) // always ready to accept a connection\n }\n\n fn call(&mut self, req: crate::http::Request) -> Self::Future {\n (self.f)(req)\n }\n}\n</code></pre>\n<p>We have the same bounds as on <code>app_fn</code>, the associated types <code>Response</code> and <code>Error</code> are straightforward, and <code>poll_ready</code> is the same as it was before. The first interesting bit is <code>type Future = Ret;</code>. 
We previously went the route of a trait object, which was more verbose and less performant. This time, we already have a type, <code>Ret</code>, that represents the <code>Future</code> the caller of our function will be providing. It's really nice that we get to simply use it here!</p>\n<p>The <code>call</code> method leverages the function provided by the caller to produce a new <code>Ret</code>/<code>Future</code> value per incoming request and hand it back to the web server for processing.</p>\n<p>And finally, our <code>main</code> function can now embed our application logic inside it as a closure. This looks like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n let counter = Arc::new(AtomicUsize::new(0));\n fakeserver::run(util::app_fn(move |mut req| {\n // need to clone this from the closure before moving it into the async block\n let counter = counter.clone();\n async move {\n println!("Handling a request for {}", req.path_and_query);\n let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n anyhow::ensure!(counter % 4 != 2, "Failing 25% of the time, just for fun");\n req.headers\n .insert("X-Counter".to_owned(), counter.to_string());\n let res = crate::http::Response {\n status: 200,\n headers: req.headers,\n body: req.body,\n };\n Ok::<_, anyhow::Error>(res)\n }\n }))\n .await;\n}\n</code></pre>\n<h3 id=\"side-note-the-extra-clone\">Side note: the extra clone</h3>\n<p>From bitter experience, both my own and that of others I've spoken with, the <code>let counter = counter.clone();</code> line above is likely the trickiest piece of this code. 
It's all too easy to write code that looks something like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let counter = Arc::new(AtomicUsize::new(0));\nfakeserver::run(util::app_fn(move |_req| async move {\n let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n Err(anyhow::anyhow!(\n "Just demonstrating the problem, counter is {}",\n counter\n ))\n}))\n.await;\n</code></pre>\n<p>This looks perfectly reasonable. We move the <code>counter</code> into the closure and then use it. However, the compiler isn't too happy with us:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">error[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n --> src\\main.rs:96:57\n |\n95 | let counter = Arc::new(AtomicUsize::new(0));\n | ------- captured outer variable\n96 | fakeserver::run(util::app_fn(move |_req| async move {\n | _________________________________________________________^\n97 | | let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n | | -------\n | | |\n | | move occurs because `counter` has type `Arc<AtomicUsize>`, which does not implement the `Copy` trait\n | | move occurs due to use in generator\n98 | | Err(anyhow::anyhow!(\n99 | | "Just demonstrating the problem, counter is {}",\n100 | | counter\n101 | | ))\n102 | | }))\n | |_____^ move out of `counter` occurs here\n</code></pre>\n<p>It's a slightly confusing error message. In my opinion, it's confusing because of the formatting I've used. And I've used that formatting because (1) <code>rustfmt</code> encourages it, and (2) the Hyper docs encourage it. 
Let me reformat a bit, and then explain the issue:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let counter = Arc::new(AtomicUsize::new(0));\nfakeserver::run(util::app_fn(move |_req| {\n async move {\n let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n Err(anyhow::anyhow!(\n "Just demonstrating the problem, counter is {}",\n counter\n ))\n }\n}))\n</code></pre>\n<p>The issue is that, in the argument to <code>app_fn</code>, we have two different control structures:</p>\n<ul>\n<li>A move closure, which takes ownership of <code>counter</code> and produces a <code>Future</code></li>\n<li>An <code>async move</code> block, which takes ownership of <code>counter</code></li>\n</ul>\n<p>The issue is that there's only one <code>counter</code> value. It gets moved first into the closure. That means we can't use <code>counter</code> again outside the closure, which we don't try to do. All good. The second thing is that, when that closure is called, the <code>counter</code> value will be moved from the closure into the <code>async move</code> block. That's also fine, but it's only fine once. If you try to call the closure a second time, it would fail, because the <code>counter</code> has already been moved. Therefore, this closure is a <code>FnOnce</code>, not a <code>Fn</code> or <code>FnMut</code>.</p>\n<p>And that's the problem here. As we saw above, we need at least a <code>FnMut</code> as our argument to the fake web server. This makes intuitive sense: we will call our application request handling function multiple times, not just once.</p>\n<p>The fix for this is to clone the <code>counter</code> inside the closure body, but before moving it into the <code>async move</code> block. 
That's easy enough:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fakeserver::run(util::app_fn(move |_req| {\n let counter = counter.clone();\n async move {\n let counter = counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst);\n Err(anyhow::anyhow!(\n "Just demonstrating the problem, counter is {}",\n counter\n ))\n }\n}))\n</code></pre>\n<p>This is a really subtle point; hopefully this demonstration will help make it clearer.</p>\n<h2 id=\"connections-and-requests\">Connections and requests</h2>\n<p>There's a simplification in our fake web server above. A real HTTP workflow starts off with a new connection, and then handles a stream of requests off of that connection. In other words, instead of having just one service, we really need two services:</p>\n<ol>\n<li>A service like we have above, which accepts <code>Request</code>s and returns <code>Response</code>s</li>\n<li>A service that accepts connection information and returns one of the above services</li>\n</ol>\n<p>Again, leaning on some terse Haskell syntax, we'd want:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">type InnerService = Request -> IO Response\ntype OuterService = ConnectionInfo -> IO InnerService\n</code></pre>\n<p>Or, to borrow some beautiful Java terminology, we want to create a <em>service factory</em> which will take some connection information and return a request-handling service. Or, to use Tower/Hyper terminology, we have a <em>service</em>, and a <em>make service</em>. 
Which, if you've ever been confused by the Hyper tutorials like I was, may finally explain why "Hello World" requires both a <code>service_fn</code> and <code>make_service_fn</code> call.</p>\n<p>Anyway, it's too detailed to dive into all the changes necessary to the code above to replicate this concept, but I've <a href=\"https://gist.github.com/snoyberg/b574ef4ece5f23913c6c70b1f4f22ed5\">provided a Gist showing an <code>AppFactoryFn</code></a>.</p>\n<p>And with that... we've finally played around with fake stuff long enough that we can dive into real life Hyper code. Hurrah!</p>\n<h2 id=\"next-time\">Next time</h2>\n<p>Up until this point, we've only played with Tower. The next post in this series is available, where we try to <a href=\"https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">understand Hyper and experiment with Axum</a>.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part2\">Read part 2 now</a></p>\n<p>If you're looking for more Rust content from FP Complete, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
"slug": "axum-hyper-tonic-tower-part1",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1",
"description": "Part 1 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
"updated": null,
"date": "2021-08-30",
"year": 2021,
"month": 8,
"day": 30,
"taxonomies": {
"tags": [
"rust"
],
"categories": [
"functional programming"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/axum-hyper-tonic-tower-part1.png",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "/blog/axum-hyper-tonic-tower-part1/",
"components": [
"blog",
"axum-hyper-tonic-tower-part1"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-tower",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#what-is-tower",
"title": "What is Tower?",
"children": []
},
{
"level": 2,
"id": "fake-web-server",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#fake-web-server",
"title": "Fake web server",
"children": []
},
{
"level": 2,
"id": "app-fn",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#app-fn",
"title": "app_fn",
"children": [
{
"level": 3,
"id": "side-note-the-extra-clone",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#side-note-the-extra-clone",
"title": "Side note: the extra clone",
"children": []
}
]
},
{
"level": 2,
"id": "connections-and-requests",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#connections-and-requests",
"title": "Connections and requests",
"children": []
},
{
"level": 2,
"id": "next-time",
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#next-time",
"title": "Next time",
"children": []
}
],
"word_count": 3168,
"reading_time": 16,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2"
},
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3"
},
{
"permalink": "https://tech.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4"
},
{
"permalink": "https://tech.fpcomplete.com/blog/levana-nft-launch/",
"title": "Levana NFT Launch"
}
]
},
{
"relative_path": "blog/rust-asref-asderef.md",
"colocated_path": null,
"content": "<p>What's wrong with this program?</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let option_name: Option<String> = Some("Alice".to_owned());\n match option_name {\n Some(name) => println!("Name is {}", name),\n None => println!("No name provided"),\n }\n println!("{:?}", option_name);\n}\n</code></pre>\n<p>The compiler gives us a wonderful error message, including a hint on how to fix it:</p>\n<pre><code>error[E0382]: borrow of partially moved value: `option_name`\n --> src\\main.rs:7:22\n |\n4 | Some(name) => println!("Name is {}", name),\n | ---- value partially moved here\n...\n7 | println!("{:?}", option_name);\n | ^^^^^^^^^^^ value borrowed here after partial move\n |\n = note: partial move occurs because value has type `String`, which does not implement the `Copy` trait\nhelp: borrow this field in the pattern to avoid moving `option_name.0`\n |\n4 | Some(ref name) => println!("Name is {}", name),\n | ^^^\n</code></pre>\n<p>The issue here is that our pattern match on <code>option_name</code> moves the <code>Option<String></code> value into the match. We can then no longer use <code>option_name</code> after the <code>match</code>. But this is disappointing, because our usage of <code>option_name</code> and <code>name</code> inside the pattern match doesn't actually require moving the value at all! Instead, borrowing would be just fine.</p>\n<p>And that's exactly what the <code>note</code> from the compiler says. We can use the <code>ref</code> keyword in the <a href=\"https://doc.rust-lang.org/stable/reference/patterns.html#identifier-patterns\">identifier pattern</a> to change this behavior and, instead of <em>moving</em> the value, we'll borrow a reference to the value. Now we're free to reuse <code>option_name</code> after the <code>match</code>. 
That version of the code looks like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let option_name: Option<String> = Some("Alice".to_owned());\n match option_name {\n Some(ref name) => println!("Name is {}", name),\n None => println!("No name provided"),\n }\n println!("{:?}", option_name);\n}\n</code></pre>\n<p>For the curious, you can <a href=\"https://doc.rust-lang.org/std/keyword.ref.html\">read more about the <code>ref</code> keyword</a>.</p>\n<h2 id=\"more-idiomatic\">More idiomatic</h2>\n<p>While this is <em>working</em> code, in my opinion and experience, it's not idiomatic. It's far more common to put the borrow on <code>option_name</code>, like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let option_name: Option<String> = Some("Alice".to_owned());\n match &option_name {\n Some(name) => println!("Name is {}", name),\n None => println!("No name provided"),\n }\n println!("{:?}", option_name);\n}\n</code></pre>\n<p>I like this version more, since it's blatantly obvious that we have no intention of moving <code>option_name</code> in the pattern match. Now <code>name</code> still remains as a reference, <code>println!</code> can use it as a reference, and everything is fine.</p>\n<p>The fact that this code works, however, is a specifically added feature of the language. Before <a href=\"https://rust-lang.github.io/rfcs/2005-match-ergonomics.html\">RFC 2005 "match ergonomics" landed in 2016</a>, the code above would have failed. That's because we tried to match the <code>Some</code> constructor against a <em>reference</em> to an <code>Option</code>, and those types don't match up. 
To borrow the RFC's terminology, getting that code to work would require "a bit of a dance":</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let option_name: Option<String> = Some("Alice".to_owned());\n match &option_name {\n &Some(ref name) => println!("Name is {}", name),\n &None => println!("No name provided"),\n }\n println!("{:?}", option_name);\n}\n</code></pre>\n<p>Now all of the types really line up explicitly:</p>\n<ul>\n<li>We have an <code>&Option<String></code></li>\n<li>We can therefore match on a <code>&Some</code> variant or a <code>&None</code> variant</li>\n<li>In the <code>&Some</code> variant, we need to make sure we borrow the inner value, so we add a <code>ref</code> keyword</li>\n</ul>\n<p>Fortunately, with RFC 2005 in place, this extra noise isn't needed, and we can simplify our pattern match as above. The Rust language is better for this change, and the masses can rejoice.</p>\n<h2 id=\"introducing-as-ref\">Introducing as_ref</h2>\n<p>But what if we didn't have RFC 2005? Would we be required to use the awkward syntax above forever? Thanks to a helper method, no. The problem in our code is that <code>&option_name</code> is a reference to an <code>Option<String></code>. And we want to pattern match on the <code>Some</code> and <code>None</code> constructors, and capture a <code>&String</code> instead of a <code>String</code> (avoiding the move). RFC 2005 implements that as a direct language feature. But there's also a method on <code>Option</code> that does just this: <code>as_ref</code>.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<T> Option<T> {\n pub const fn as_ref(&self) -> Option<&T> {\n match *self {\n Some(ref x) => Some(x),\n None => None,\n }\n }\n}\n</code></pre>\n<p>This is another way of avoiding the "dance," by capturing it in the method definition itself. 
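For instance, here is the same example rewritten to use <code>as_ref</code> directly, with no <code>&</code> patterns and no <code>ref</code> keyword needed:

```rust
fn main() {
    let option_name: Option<String> = Some("Alice".to_owned());
    // as_ref converts (a reference to) Option<String> into Option<&String>,
    // so the pattern below binds name as a &String without moving anything.
    match option_name.as_ref() {
        Some(name) => println!("Name is {}", name),
        None => println!("No name provided"),
    }
    // option_name was only borrowed, so it is still usable here.
    println!("{:?}", option_name);
}
```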
But thankfully, there's a great language ergonomics feature that captures this pattern, and automatically applies this rule for us. Meaning that <code>as_ref</code> isn't really necessary any more... right?</p>\n<h2 id=\"side-rant-ergonomics-in-rust\">Side rant: ergonomics in Rust</h2>\n<p>I absolutely love the ergonomics features of Rust. There is no "but" in my love for RFC 2005. There is, however, a concern around learning and teaching a language with these kinds of ergonomics. These kinds of features work 99% of the time. But when they fail, as we're about to see, it can come as a large shock.</p>\n<p>I'm guessing most Rustaceans, at least those that learned the language after 2016, never considered the fact that there was something weird about being able to pattern match a <code>Some</code> from an <code>&Option<String></code> value. It feels natural. It <em>is</em> natural. But because you were never forced to confront this while learning the language, at some point in the distant future you'll crash into a wall when this ergonomic feature doesn't kick in.</p>\n<p>I kind of wish there was a <code>--no-ergonomics</code> flag that we could turn on when learning the language to force us to confront all of these details. But there isn't. I'm hoping blog posts like this help out. Anyway, </rant>.</p>\n<h2 id=\"when-rfc-2005-fails\">When RFC 2005 fails</h2>\n<p>We can fairly easily create a contrived example of match ergonomics failing to solve our problem. 
Let's "improve" our program above by factoring out the greet logic to its own helper function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn try_greet(option_name: Option<&String>) {\n match option_name {\n Some(name) => println!("Name is {}", name),\n None => println!("No name provided"),\n }\n}\n\nfn main() {\n let option_name: Option<String> = Some("Alice".to_owned());\n try_greet(&option_name);\n println!("{:?}", option_name);\n}\n</code></pre>\n<p>This code won't compile:</p>\n<pre><code>error[E0308]: mismatched types\n --> src\\main.rs:10:15\n |\n10 | try_greet(&option_name);\n | ^^^^^^^^^^^^\n | |\n | expected enum `Option`, found `&Option<String>`\n | help: you can convert from `&Option<T>` to `Option<&T>` using `.as_ref()`: `&option_name.as_ref()`\n |\n = note: expected enum `Option<&String>`\n found reference `&Option<String>`\n</code></pre>\n<p>Now we've bypassed any ability to use match ergonomics at the call site. With what we know about <code>as_ref</code>, it's easy enough to fix this. But, at least in my experience, the first time someone runs into this kind of error, it's a bit surprising, since most of us have never previously thought about the distinction between <code>Option<&T></code> and <code>&Option<T></code>.</p>\n<p>These kinds of errors tend to pop up when combining together other helper functions, such as <code>map</code>, which circumvent the need for explicit pattern matching.</p>\n<p>As an aside, you could solve this compile error pretty easily, without resorting to <code>as_ref</code>. Instead, you could change the type signature of <code>try_greet</code> to take a <code>&Option<String></code> instead of an <code>Option<&String></code>, and then allow the match ergonomics to kick in within the body of <code>try_greet</code>. One reason not to do this is that, as mentioned, this was all a contrived example to demonstrate a failure. 
But the other reason is more important: neither <code>&Option<String></code> nor <code>Option<&String></code> are good argument types. Let's explore that next.</p>\n<h2 id=\"when-as-ref-fails\">When as_ref fails</h2>\n<p>We're taught pretty early in our Rust careers that, when receiving an argument to a function, we should prefer taking references to slices instead of references to owned objects. In other words:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn greet_good(name: &str) {\n println!("Name is {}", name);\n}\n\nfn greet_bad(name: &String) {\n println!("Name is {}", name);\n}\n</code></pre>\n<p>And in fact, if you pass this code by <code>clippy</code>, it will tell you to change the signature of <code>greet_bad</code>. The <a href=\"https://rust-lang.github.io/rust-clippy/master/index.html#ptr_arg\">clippy lint description</a> provides a great explanation of this, but suffice it to say that <code>greet_good</code> is more general in what it accepts than <code>greet_bad</code>.</p>\n<p>The same logic applies to <code>try_greet</code>. Why should we accept <code>Option<&String></code> instead of <code>Option<&str></code>? And interestingly, clippy doesn't complain in this case like it did in <code>greet_bad</code>. 
To see why, let's change our signature like so and see what happens:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn try_greet(option_name: Option<&str>) {\n match option_name {\n Some(name) => println!("Name is {}", name),\n None => println!("No name provided"),\n }\n}\n\nfn main() {\n let option_name: Option<String> = Some("Alice".to_owned());\n try_greet(option_name.as_ref());\n println!("{:?}", option_name);\n}\n</code></pre>\n<p>This code no longer compiles:</p>\n<pre><code>error[E0308]: mismatched types\n --> src\\main.rs:10:15\n |\n10 | try_greet(option_name.as_ref());\n | ^^^^^^^^^^^^^^^^^^^^ expected `str`, found struct `String`\n |\n = note: expected enum `Option<&str>`\n found enum `Option<&String>`\n</code></pre>\n<p>This is another example of ergonomics failing. You see, when you call a function with an argument of type <code>&String</code>, but the function expects a <code>&str</code>, <a href=\"https://doc.rust-lang.org/book/ch15-02-deref.html#implicit-deref-coercions-with-functions-and-methods\">deref coercion</a> kicks in and will perform a conversion for you. This is a piece of Rust ergonomics that we all rely on regularly, and every once in a while it completely fails to help us. This is one of those times. The compiler will not automatically convert a <code>Option<&String></code> into an <code>Option<&str></code>.</p>\n<p>(You can also read more about <a href=\"https://doc.rust-lang.org/nomicon/coercions.html\">coercions in the nomicon</a>.)</p>\n<p>Fortunately, there's another helper method on <code>Option</code> that does this for us. <code>as_deref</code> works just like <code>as_ref</code>, but additionally performs a <code>deref</code> method call on the value. 
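For instance, the earlier <code>try_greet</code> mismatch can be fixed with a single <code>as_deref</code> call at the call site:

```rust
fn try_greet(option_name: Option<&str>) {
    match option_name {
        Some(name) => println!("Name is {}", name),
        None => println!("No name provided"),
    }
}

fn main() {
    let option_name: Option<String> = Some("Alice".to_owned());
    // as_deref goes from (a reference to) Option<String> to Option<&str>,
    // combining as_ref with a deref of String to str.
    try_greet(option_name.as_deref());
    println!("{:?}", option_name);
}
```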
Its implementation in <code>std</code> is interesting:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<T: Deref> Option<T> {\n pub fn as_deref(&self) -> Option<&T::Target> {\n self.as_ref().map(|t| t.deref())\n }\n}\n</code></pre>\n<p>But we can also implement it more explicitly to see the behavior spelled out:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::ops::Deref;\n\nfn try_greet(option_name: Option<&str>) {\n match option_name {\n Some(name) => println!("Name is {}", name),\n None => println!("No name provided"),\n }\n}\n\nfn my_as_deref<T: Deref>(x: &Option<T>) -> Option<&T::Target> {\n match *x {\n None => None,\n Some(ref t) => Some(t.deref())\n }\n}\n\nfn main() {\n let option_name: Option<String> = Some("Alice".to_owned());\n try_greet(my_as_deref(&option_name));\n println!("{:?}", option_name);\n}\n</code></pre>\n<p>And to bring this back to something closer to real world code, here's a case where combining <code>as_deref</code> and <code>map</code> leads to much cleaner code than you'd otherwise have:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn greet(name: &str) {\n println!("Name is {}", name);\n}\n\nfn main() {\n let option_name: Option<String> = Some("Alice".to_owned());\n option_name.as_deref().map(greet);\n println!("{:?}", option_name);\n}\n</code></pre>\n<h2 id=\"real-ish-life-example\">Real-ish life example</h2>\n<p>Like most of my blog posts, this one was inspired by some real world code. To simplify the concept down a bit, I was parsing a config file, and ended up with an <code>Option<String></code>. I needed some code that would either provide the value from the config, or default to a static string in the source code. 
Without <code>as_deref</code>, I could have used <code>STATIC_STRING_VALUE.to_string()</code> to get types to line up, but that would have been ugly and inefficient. Here's a somewhat intact representation of that code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use serde::Deserialize;\n\n#[derive(Deserialize)]\nstruct Config {\n some_value: Option<String>\n}\n\nconst DEFAULT_VALUE: &str = "my-default-value";\n\nfn main() {\n let mut file = std::fs::File::open("config.yaml").unwrap();\n let config: Config = serde_yaml::from_reader(&mut file).unwrap();\n let value = config.some_value.as_deref().unwrap_or(DEFAULT_VALUE);\n println!("value is {}", value);\n}\n</code></pre>\n<p>Want to learn more Rust with FP Complete? Check out these links:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training courses</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"/tags/rust/\">Rust tagged articles</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust homepage</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/",
"slug": "rust-asref-asderef",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Rust's as_ref vs as_deref",
"description": "A short analysis of when to use the Option methods as_ref and as_deref",
"updated": null,
"date": "2021-07-05",
"year": 2021,
"month": 7,
"day": 5,
"taxonomies": {
"tags": [
"rust"
],
"categories": [
"functional programming",
"rust"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/rust-asref-asderef.png"
},
"path": "/blog/rust-asref-asderef/",
"components": [
"blog",
"rust-asref-asderef"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "more-idiomatic",
"permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#more-idiomatic",
"title": "More idiomatic",
"children": []
},
{
"level": 2,
"id": "introducing-as-ref",
"permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#introducing-as-ref",
"title": "Introducing as_ref",
"children": []
},
{
"level": 2,
"id": "side-rant-ergonomics-in-rust",
"permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#side-rant-ergonomics-in-rust",
"title": "Side rant: ergonomics in Rust",
"children": []
},
{
"level": 2,
"id": "when-rfc-2005-fails",
"permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#when-rfc-2005-fails",
"title": "When RFC 2005 fails",
"children": []
},
{
"level": 2,
"id": "when-as-ref-fails",
"permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#when-as-ref-fails",
"title": "When as_ref fails",
"children": []
},
{
"level": 2,
"id": "real-ish-life-example",
"permalink": "https://tech.fpcomplete.com/blog/rust-asref-asderef/#real-ish-life-example",
"title": "Real-ish life example",
"children": []
}
],
"word_count": 1822,
"reading_time": 10,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/intermediate-training-courses.md",
"colocated_path": null,
"content": "<p>I'm happy to announce that over the next few months, FP Complete will be offering intermediate training courses on both Haskell and Rust. This is a follow-up to our previous beginner courses on both languages. I'm excited to get to teach both of these courses.</p>\n<p>More details below, but cutting to the chase: if you'd like to sign up, or just get more information on these courses, please <a href=\"mailto:[email protected]\">email [email protected]</a>.</p>\n<h2 id=\"overall-structure\">Overall structure</h2>\n<p>Each course consists of:</p>\n<ul>\n<li>Four sessions, held on Sundays at 1500 UTC (8am Pacific time, 5pm Central European time)</li>\n<li>Each session is three hours, with a ten-minute break</li>\n<li>Slides, exercises, and recordings will be provided to all participants</li>\n<li>A private Discord chat room, for interacting with other students and the teacher, is available to those interested and is kept open after the course finishes</li>\n</ul>\n<h2 id=\"dates\">Dates</h2>\n<p>We'll be holding these courses on the following dates:</p>\n<ul>\n<li>Haskell\n<ul>\n<li>June 13</li>\n<li>June 20</li>\n<li>July 11</li>\n<li>July 25</li>\n</ul>\n</li>\n<li>Rust\n<ul>\n<li>August 8</li>\n<li>August 15</li>\n<li>August 22</li>\n<li>August 29</li>\n</ul>\n</li>\n</ul>\n<h2 id=\"cost-and-signup\">Cost and signup</h2>\n<p>Each course costs $150 per participant. Please register and arrange payment (via PayPal or Venmo) by contacting <a href=\"mailto:[email protected]\">[email protected]</a>.</p>\n<h2 id=\"topics-covered\">Topics covered</h2>\n<p>Before the course begins, and throughout the course, I'll ask participants for feedback on additional topics to cover, and tune the course appropriately. 
Below is the basis of the course which we'll focus on:</p>\n<ul>\n<li>Haskell (based largely on our <a href=\"https://tech.fpcomplete.com/haskell/syllabus/\">Applied Haskell syllabus</a>)\n<ul>\n<li>Data structures (<code>bytestring</code>, <code>text</code>, <code>containers</code> and <code>vector</code>)</li>\n<li>Evaluation order</li>\n<li>Mutable variables</li>\n<li>Concurrent programming (<code>async</code> and <code>stm</code>)</li>\n<li>Exception safety</li>\n<li>Testing</li>\n<li>Data serialization</li>\n<li>Web clients and servers</li>\n<li>Streaming data</li>\n</ul>\n</li>\n<li>Rust\n<ul>\n<li>Error handling</li>\n<li>Closures</li>\n<li>Multithreaded programming</li>\n<li><code>async</code>/<code>.await</code> and Tokio</li>\n<li>Basics of <code>unsafe</code></li>\n<li>Macros</li>\n<li>Testing and benchmarks</li>\n</ul>\n</li>\n</ul>\n<h2 id=\"want-to-learn-more\">Want to learn more?</h2>\n<p>Not sure if this is right for you? Feel free to <a href=\"https://twitter.com/snoyberg\">hit me up on Twitter</a> for more information, or <a href=\"mailto:[email protected]\">contact [email protected]</a>.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/",
"slug": "intermediate-training-courses",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Intermediate Training Courses - Haskell and Rust",
"description": "Announcing two more training courses, covering intermediate Haskell and Rust topics. Sign up today!",
"updated": null,
"date": "2021-06-03",
"year": 2021,
"month": 6,
"day": 3,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"haskell",
"rust"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"blogimage": "/images/blog-listing/functional.png",
"image": "images/blog/thumbs/intermediate-training-courses.png"
},
"path": "/blog/intermediate-training-courses/",
"components": [
"blog",
"intermediate-training-courses"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "overall-structure",
"permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#overall-structure",
"title": "Overall structure",
"children": []
},
{
"level": 2,
"id": "dates",
"permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#dates",
"title": "Dates",
"children": []
},
{
"level": 2,
"id": "cost-and-signup",
"permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#cost-and-signup",
"title": "Cost and signup",
"children": []
},
{
"level": 2,
"id": "topics-covered",
"permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#topics-covered",
"title": "Topics covered",
"children": []
},
{
"level": 2,
"id": "want-to-learn-more",
"permalink": "https://tech.fpcomplete.com/blog/intermediate-training-courses/#want-to-learn-more",
"title": "Want to learn more?",
"children": []
}
],
"word_count": 312,
"reading_time": 2,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/tying-the-knot-haskell.md",
"colocated_path": null,
"content": "<p>This post has nothing to do with marriage. Tying the knot is, in my opinion at least, a relatively obscure technique you can use in Haskell to address certain corner cases. I've used it myself only a handful of times, one of which I'll reference below. I preface it like this to hopefully make clear: tying the knot is a fine technique to use in certain cases, but don't consider it a general technique that you should need regularly. It's not nearly as generally useful as something like <a href=\"https://tech.fpcomplete.com/haskell/library/stm/\">Software Transactional Memory</a>.</p>\n<p>That said, you're still interested in this technique, and are still reading this post. Great! Let's get started where all bad Haskell code starts: C++.</p>\n<h2 id=\"doubly-linked-lists\">Doubly linked lists</h2>\n<p>Typically I'd demonstrate imperative code in Rust, but <a href=\"https://rust-unofficial.github.io/too-many-lists/\">it's not a good idea for this case</a>. So we'll start off with a very simple doubly linked list implementation in C++. And by "very simple" I should probably say "very poorly written," since I'm out of practice.</p>\n<p><img src=\"/images/haskell/cpp-is-rusty.png\" alt=\"Rusty C++\" /></p>\n<p>Anyway, reading the entire code isn't necessary to get the point across. Let's look at some relevant bits. We define a node of the list like this, including a nullable pointer to the previous and next node in the list:</p>\n<pre data-lang=\"cpp\" class=\"language-cpp \"><code class=\"language-cpp\" data-lang=\"cpp\">template <typename T> class Node {\npublic:\n Node(T value) : value(value), prev(NULL), next(NULL) {}\n Node *prev;\n T value;\n Node *next;\n};\n</code></pre>\n<p>When you add the first node to the list, you set the new node's previous and next values to <code>NULL</code>, and the list's first and last values to the new node. The more interesting case is when you already have something in the list. 
To add a new node to the back of the list, you need some code that looks like the following:</p>\n<pre data-lang=\"cpp\" class=\"language-cpp \"><code class=\"language-cpp\" data-lang=\"cpp\">node->prev = this->last;\nthis->last->next = node;\nthis->last = node;\n</code></pre>\n<p>For those (like me) not fluent in C++, I'm making three mutations:</p>\n<ol>\n<li>Mutating the new node's <code>prev</code> member to point to the currently last node of the list.</li>\n<li>Mutating the currently last node's <code>next</code> member to point at the new node.</li>\n<li>Mutating the list itself so that its <code>last</code> member points to the new node.</li>\n</ol>\n<p>Point being in all of this: there's a lot of mutation going on in order to create a doubly linked list. Contrast that with singly linked lists in Haskell, which are immutable data structures and require no mutation at all.</p>\n<p>Anyway, I've written my annual quota of C++ at this point; it's time to go back to Haskell.</p>\n<h2 id=\"riih-rewrite-it-in-haskell\">RIIH (Rewrite it in Haskell)</h2>\n<p>Using <code>IORef</code>s and lots of <code>IO</code> calls everywhere, it's possible to reproduce the C++ concept of a mutable doubly linked list in Haskell. Full code is <a href=\"https://gist.github.com/snoyberg/5de410aba87a4208b7c701e954c61d9d\">available in a Gist</a>, but let's step through the important bits. 
Our core data types look quite like the C++ version, but with <code>IORef</code> and <code>Maybe</code> sprinkled in for good measure:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Node a = Node\n { prev :: IORef (Maybe (Node a))\n , value :: a\n , next :: IORef (Maybe (Node a))\n }\n\ndata List a = List\n { first :: IORef (Maybe (Node a))\n , last :: IORef (Maybe (Node a))\n }\n</code></pre>\n<p>And adding a new value to a non-empty list looks like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">node <- Node <$> newIORef (Just last') <*> pure value <*> newIORef Nothing\nwriteIORef (next last') (Just node)\nwriteIORef (last list) (Just node)\n</code></pre>\n<p>Notice that, like in the C++ code, we need to perform mutations on the existing node and the <code>last</code> member of the list.</p>\n<p>This certainly works, but it probably feels less than satisfying to a Haskeller:</p>\n<ul>\n<li>I don't love the idea of mutations all over the place.</li>\n<li>The code looks and feels ugly.</li>\n<li>I can't access the values of the list from pure code.</li>\n</ul>\n<p>So the challenge is: can we write a doubly linked list in Haskell in pure code?</p>\n<h2 id=\"defining-our-data\">Defining our data</h2>\n<p>I'll warn you in advance. Every single time I've written code that "ties the knot" in Haskell, I've gone through at least two stages:</p>\n<ol>\n<li>This doesn't make any sense, there's no way this is going to work, what exactly am I doing?</li>\n<li>Oh, it's done, how exactly did that work?</li>\n</ol>\n<p>It happened while writing the code below. You're likely to have the same feeling while reading this of "wait, what? I don't get it, huh?"</p>\n<p>Anyway, let's start off by defining our data types. We didn't like the fact that we had <code>IORef</code> all over the place. 
So let's just get rid of it!</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Node a = Node\n { prev :: Maybe (Node a)\n , value :: a\n , next :: Maybe (Node a)\n }\n\ndata List a = List\n { first :: Maybe (Node a)\n , last :: Maybe (Node a)\n }\n</code></pre>\n<p>We still have <code>Maybe</code> to indicate the presence or absence of nodes before or after our own. That translation is pretty easy. The problem is going to arise when we try to build such a structure, since we've seen that we need mutation to make it happen. We'll need to rethink our API to get going.</p>\n<h2 id=\"non-mutable-api\">Non-mutable API</h2>\n<p>The first change we need to consider is getting rid of the <em>concept</em> of mutation in the API. Previously, we had functions like <code>pushBack</code> and <code>popBack</code>, which were inherently mutating. Instead, we should be thinking in terms of immutable data structures and APIs.</p>\n<p>We already know all about singly linked lists, the venerable <code>[]</code> data type. Let's see if we can build a function that will let us construct a doubly linked list from a singly linked list. In other words:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList :: [a] -> List a\n</code></pre>\n<p>Let's knock out two easy cases first. An empty list should end up with no nodes at all. That clause would be:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList [] = List Nothing Nothing\n</code></pre>\n<p>The next easy case is a single value in the list. This ends up with a single node with no pointers to other nodes, and a <code>first</code> and <code>last</code> field that both point to that one node. 
Again, fairly easy, no knot tying required:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList [x] =\n let node = Node Nothing x Nothing\n in List (Just node) (Just node)\n</code></pre>\n<p>OK, that's too easy. Let's kick it up a notch.</p>\n<h2 id=\"two-element-list\">Two-element list</h2>\n<p>To get into things a bit more gradually, let's handle the two-element case next, instead of the general case of "2 or more", which is a bit more complicated. We need to:</p>\n<ol>\n<li>Construct a first node that points at the last node</li>\n<li>Construct a last node that points at the first node</li>\n<li>Construct a list that points at both the first and last nodes</li>\n</ol>\n<p>Step (3) isn't too hard. Step (2) doesn't sound too bad either, since presumably the first node already exists at that point. The problem appears to be step (1). How can we construct a first node that points at the second node, when we haven't constructed the second node yet? Let me show you how:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList [x, y] =\n let firstNode = Node Nothing x (Just lastNode)\n lastNode = Node (Just firstNode) y Nothing\n in List (Just firstNode) (Just lastNode)\n</code></pre>\n<p>If that code doesn't confuse or bother you, you've probably already learned about tying the knot. This seems to make no sense. I'm referring to <code>lastNode</code> while constructing <code>firstNode</code>, and referring to <code>firstNode</code> while constructing <code>lastNode</code>. This kind of makes me think of an <a href=\"https://en.wikipedia.org/wiki/Ouroboros\">Ouroboros</a>, or a snake eating its own tail:</p>\n<p><img src=\"/images/haskell/ouroboros.jpeg\" alt=\"Ouroboros\" /></p>\n<p>In a normal programming language, this concept wouldn't make sense. 
We'd need to define <code>firstNode</code> first with a null pointer for <code>next</code>. Then we could define <code>lastNode</code>. And then we could mutate <code>firstNode</code>'s <code>next</code> to point to the last node. But not in Haskell! Why? Because of <em>laziness</em>. Thanks to laziness, both <code>firstNode</code> and <code>lastNode</code> are initially created as thunks. Their contents need not exist yet. But thankfully, we can still create pointers to these not-fully-evaluated values.</p>\n<p>With those pointers available, we can then define an expression for each of these that leverages the pointer of the other. And we have now, successfully, tied the knot.</p>\n<h2 id=\"expanding-beyond-two\">Expanding beyond two</h2>\n<p>Expanding beyond two elements follows the exact same pattern, but (at least in my opinion) is significantly more complicated. I implemented it by writing a helper function, <code>buildNodes</code>, which (somewhat spookily) takes the previous node in the list as a parameter, and returns the next node and the final node in the list. Let's see all of this in action:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">buildList (x:y:ys) =\n  let firstNode = Node Nothing x (Just secondNode)\n      (secondNode, lastNode) = buildNodes firstNode y ys\n  in List (Just firstNode) (Just lastNode)\n\n-- | Takes the previous node in the list, the current value, and all following\n-- values. Returns the current node as well as the final node constructed in\n-- this list.\nbuildNodes :: Node a -> a -> [a] -> (Node a, Node a)\nbuildNodes prevNode value [] =\n  let node = Node (Just prevNode) value Nothing\n  in (node, node)\nbuildNodes prevNode value (x:xs) =\n  let node = Node (Just prevNode) value (Just nextNode)\n      (nextNode, lastNode) = buildNodes node x xs\n  in (node, lastNode)\n</code></pre>\n<p>Notice that in <code>buildList</code>, we're using the same kind of trick to use <code>secondNode</code> to construct <code>firstNode</code>, and <code>firstNode</code> is a parameter passed to <code>buildNodes</code> that is used to construct <code>secondNode</code>.</p>\n<p>Within <code>buildNodes</code>, we have two clauses. The first clause is one of those simpler cases: we've only got one value left, so we create a terminal node that points back at the previous node. No knot tying required. The second clause, however, once again uses the knot tying technique, together with a recursive call to <code>buildNodes</code> to build up the rest of the nodes in the list.</p>\n<p>The full code is <a href=\"https://gist.github.com/snoyberg/876ad1ad0f106c80239bf098a6965a53\">available as a Gist</a>. I recommend reading through the code a few times until you feel comfortable with it. When you have a good grasp on what's going on, try implementing it from scratch yourself.</p>\n<h2 id=\"limitation\">Limitation</h2>\n<p>It's important to understand a limitation of this approach versus both mutable doubly linked lists and singly linked lists. With singly linked lists, I can easily construct a new singly linked list by <code>cons</code>ing a new value to the front. Or I can drop a few values from the front and cons some new values in front of that new tail. In other words, I can construct new values based on old values as much as I want.</p>\n<p>Similarly, with mutable doubly linked lists, I'm free to mutate at will, changing my existing data structure.
This behaves slightly differently from constructing new singly linked lists, and falls into the same category of mutable-vs-immutable data structures that Haskellers know and love so well. If you want a refresher, check out:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/haskell/tutorial/data-structures/\">Data structures</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/library/vector/\">vector</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/tutorial/mutable-variables/\">Mutable variables</a></li>\n</ul>\n<p>None of these apply with a tie-the-knot approach to data structures. Once you construct this doubly linked list, it is locked in place. If you try to prepend a new node to the front of this list, you'll find that you cannot update the <code>prev</code> pointer in the old first node.</p>\n<p>There is a workaround. You can construct a brand new doubly linked list using the values in the original. A common way to do this would be to provide a conversion function back from your <code>List a</code> to a <code>[a]</code>. Then you could prepend a value to a doubly linked list with some code like:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">let oldList = buildList [2..10]\n    newList = buildList $ 1 : toSinglyLinkedList oldList\n</code></pre>\n<p>However, unlike singly linked lists, we lose all possibilities of data sharing, at least at the structure level (the values themselves can still be shared).</p>\n<h2 id=\"why-tie-the-knot\">Why tie the knot?</h2>\n<p>That's a cool trick, but is it actually useful? In some situations, absolutely! One example I've worked on is in the <a href=\"https://www.stackage.org/package/xml-conduit\">xml-conduit</a> package. Some people may be familiar with XPath, a pretty nice standard for XML traversals.
It allows you to say things like "find the first <code>ul</code> tag in the document, then find the <code>p</code> tag before that, and tell me its <code>id</code> attribute."</p>\n<p>A simple implementation of an XML data type in Haskell may look like this:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Element = Element Name (Map Name AttributeValue) [Node]\ndata Node\n  = NodeElement Element\n  | NodeContent Text\n</code></pre>\n<p>Using this kind of data structure, it would be pretty difficult to implement the traversal that I just described. You would need to write logic to keep track of where you are in the document, and then implement logic to say "OK, given that I was in the third child of the second child of the sixth child, what are all of the nodes that came before me?"</p>\n<p>Instead, in <code>xml-conduit</code>, we use knot tying to create a data structure called a <a href=\"https://www.stackage.org/haddock/nightly-2021-05-23/xml-conduit-1.9.1.1/Text-XML-Cursor.html#t:Cursor\"><code>Cursor</code></a>. A <code>Cursor</code> not only keeps track of its own contents, but also contains a pointer to its parent cursor, its predecessor cursors, its following cursors, and its child cursors. You can then traverse the tree with ease.
The traversal above would be implemented as:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">#!/usr/bin/env stack\n-- stack --resolver lts-17.12 script\n{-# LANGUAGE OverloadedStrings #-}\nimport qualified Text.XML as X\nimport Text.XML.Cursor\n\nmain :: IO ()\nmain = do\n doc <- X.readFile X.def "input.xml"\n let cursor = fromDocument doc\n print $ cursor $// element "ul" >=> precedingSibling >=> element "p" >=> attribute "id"\n</code></pre>\n<p>You can test this out yourself with this sample input document:</p>\n<pre data-lang=\"xml\" class=\"language-xml \"><code class=\"language-xml\" data-lang=\"xml\"><foo>\n <bar>\n <baz>\n <p id="hello">Something</p>\n <ul>\n <li>Bye!</li>\n </ul>\n </baz>\n </bar>\n</foo>\n</code></pre>\n<h2 id=\"should-i-tie-the-knot\">Should I tie the knot?</h2>\n<p><em>Insert bad marriage joke here</em></p>\n<p>Like most techniques in programming in general, and Haskell in particular, it can be tempting to go off and look for a use case to throw this technique at. The use cases definitely exist. I think <code>xml-conduit</code> is one of them. But let me point out that it's the <em>only</em> example I can think of in my career as a Haskeller where tying the knot was a great solution to the problem. There are similar cases out there that I'd include too (such as JSON document traversal).</p>\n<p>Is it worth learning the technique? Yeah, definitely. It's a mind-expanding move. It helps you internalize concepts of laziness just a bit better. It's really fun and mind-bending. But don't rush off to rewrite your code to use a relatively niche technique.</p>\n<p>If anyone's wondering, this blog post came out of a question that popped up during a Haskell training course. If you'd like to come learn some Haskell and dive into weird topics like this, come find out more about <a href=\"https://tech.fpcomplete.com/training/\">FP Complete's training programs</a>. 
We're gearing up for some intermediate Haskell and Rust courses soon, so add your name to the list if you want to get more information.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/",
"slug": "tying-the-knot-haskell",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Tying the Knot in Haskell",
"description": "An overview of a somewhat obscure technique in Haskell code, when you can use it, and its limitations.",
"updated": null,
"date": "2021-05-25",
"year": 2021,
"month": 5,
"day": 25,
"taxonomies": {
"tags": [
"haskell"
],
"categories": [
"functional programming"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"blogimage": "/images/blog-listing/functional.png",
"image": "images/blog/tying-the-knot-haskell.png"
},
"path": "/blog/tying-the-knot-haskell/",
"components": [
"blog",
"tying-the-knot-haskell"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "doubly-linked-lists",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#doubly-linked-lists",
"title": "Doubly linked lists",
"children": []
},
{
"level": 2,
"id": "riih-rewrite-it-in-haskell",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#riih-rewrite-it-in-haskell",
"title": "RIIH (Rewrite it in Haskell)",
"children": []
},
{
"level": 2,
"id": "defining-our-data",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#defining-our-data",
"title": "Defining our data",
"children": []
},
{
"level": 2,
"id": "non-mutable-api",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#non-mutable-api",
"title": "Non-mutable API",
"children": []
},
{
"level": 2,
"id": "two-element-list",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#two-element-list",
"title": "Two-element list",
"children": []
},
{
"level": 2,
"id": "expanding-beyond-two",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#expanding-beyond-two",
"title": "Expanding beyond two",
"children": []
},
{
"level": 2,
"id": "limitation",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#limitation",
"title": "Limitation",
"children": []
},
{
"level": 2,
"id": "why-tie-the-knot",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#why-tie-the-knot",
"title": "Why tie the knot?",
"children": []
},
{
"level": 2,
"id": "should-i-tie-the-knot",
"permalink": "https://tech.fpcomplete.com/blog/tying-the-knot-haskell/#should-i-tie-the-knot",
"title": "Should I tie the knot?",
"children": []
}
],
"word_count": 2453,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/haskell/syllabus/",
"title": "Applied Haskell Syllabus"
}
]
},
{
"relative_path": "blog/pains-path-parsing.md",
"colocated_path": null,
"content": "<p>I've spent a considerable amount of coding time getting into the weeds of path parsing and generation in web applications. First with <a href=\"https://www.yesodweb.com/\">Yesod in Haskell</a>, and more recently with a side project for <a href=\"https://github.com/snoyberg/routetype-rs\">routetypes in Rust</a>. (Side note: I'll likely do some blogging and/or videos about that project in the future, stay tuned.) My recent work reminded me of a bunch of the pain points involved here. And as so often happens, I was complaining to my wife about these pain points, and decided to write a blog post about it.</p>\n<p>First off, there are plenty of pain points I'm not going to address. For example, the insane world of percent encoding, and the different rules for what part of the URL you're in, is a constant source of misery and mistakes. Little things like required leading forward slashes, or whether query string parameters should differentiate between "no value provided" (e.g. <code>?foo</code>) versus "empty value provided" (e.g. <code>?foo=</code>). But I'll restrict myself to just one aspect: <strong>roundtripping path segments and rendered paths</strong>.</p>\n<h2 id=\"what-s-a-path\">What's a path?</h2>\n<p>Let's take this blog post's URL: <code>https://www.fpcomplete.com/blog/pains-path-parsing/</code>. We can break it up into four logical pieces:</p>\n<ul>\n<li><code>https</code> is the <em>scheme</em></li>\n<li><code>://</code> is a required part of the URL syntax</li>\n<li><code>www.fpcomplete.com</code> is the <em>authority</em>. You may be wondering: isn't it just the domain name? Well, yes. 
But the authority may contain additional information too, like port number, username, password</li>\n<li><code>/blog/pains-path-parsing/</code> is the path, including the leading and trailing forward slashes</li>\n</ul>\n<p>This URL doesn't include them, but URLs may also include query strings, like <code>?source=rss</code>, and fragments, like <code>#what-s-a-path</code>. But we just care about that <code>path</code> component.</p>\n<p>The first way to think of a path is as a string. And by string, I mean a sequence of characters. And by sequence of characters, I really mean Unicode code points. (See how ridiculously pedantic I'm getting? Yeah, that's important.) But that's not true at all. To demonstrate, here's some Rust code that uses Hebrew letters in the path:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let uri = http::Uri::builder().path_and_query("/hello/מיכאל/").build();\n println!("{:?}", uri);\n}\n</code></pre>\n<p>And while that looks nice and simple, it fails spectacularly with the error message:</p>\n<pre><code>Err(http::Error(InvalidUri(InvalidUriChar)))\n</code></pre>\n<p>In reality, according to <a href=\"https://tools.ietf.org/html/rfc3986#section-2\">the RFC</a>, paths are made up of a limited set of ASCII characters, represented as octets (raw bytes). And we somehow have to use percent encoding to represent other characters.</p>\n<p>But before we can really talk about encoding and representing, we have to ask another orthogonal question.</p>\n<h2 id=\"what-do-paths-represent\">What do paths represent?</h2>\n<p>While a path is technically a sequence of a reserved number of ASCII octets, that's not how our applications treat them. Instead, we <em>want</em> to be able to talk about the full range of Unicode code points. But it's more than just that. We want to be able to talk about <em>groupings</em> of <em>sequences</em>. We call these <em>segments</em> typically. 
The raw path <code>/hello/world</code> can be thought of as the segments <code>["hello", "world"]</code>. I would call this <em>parsing</em> the path. And, in reverse, we can <em>render</em> those segments back into the original raw path.</p>\n<p>With these kinds of parse/render pairs, it's always nice to have complete roundtripping abilities. In other words, <code>parse(render(x)) == x</code> and <code>render(parse(x)) == x</code>. Generally these rules fail for a variety of reasons, such as:</p>\n<ol>\n<li>Multiple valid representations. For example, with the percent encoding we'll mention below, <code>%2a</code> and <code>%2A</code> mean the same thing.</li>\n<li>Often unimportant whitespace details get lost during parsing. This applies to formats like JSON, where <code>[true, false]</code> and <code>[ true, false ]</code> have the same meaning.</li>\n<li>Parsing can fail, so that it's invalid to call <code>render</code> on <code>parse(x)</code>.</li>\n</ol>\n<p>Because of this, we often end up reducing our goals to something like: for all <code>x</code>, <code>parse(render(x))</code> is successful, and produces output identical to <code>x</code>.</p>\n<p>In path parsing, we definitely have problem (1) above (multiple valid representations). But by using this simplified goal, we no longer worry about that problem. Paths in URLs also don't have unimportant whitespace details (every octet has meaning), so (2) isn't a problem to be concerned with. Even if it was, our <code>parse(render(x))</code> step would end up "fixing" it.</p>\n<p>The final point is interesting, and is going to be crucial to our complete solution. What exactly does it mean for path parsing to fail? I can think of two ideas in basic path parsing:</p>\n<ul>\n<li>It includes an octet outside of the allowed range</li>\n<li>It includes a percent encoding which is invalid, e.g. 
<code>%@@</code></li>\n</ul>\n<p>Let's assume for the rest of this post, however, that those have been dealt with at a previous step, and we know for a fact that those error conditions will not occur. Are there any other ways for parsing to fail? In a basic sense: no. In a more sophisticated parsing: absolutely.</p>\n<h2 id=\"basic-rendering\">Basic rendering</h2>\n<p>The basic rendering steps are fairly straightforward:</p>\n<ul>\n<li>Perform percent encoding on each segment</li>\n<li>Interpolate the segments with a slash separator</li>\n<li>Prepend a slash to the entire string</li>\n</ul>\n<p>To allow roundtripping, we need to ensure that each <em>input</em> to the <code>render</code> function generates a unique output. Unfortunately, with these basic rendering steps, we immediately run into an error:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">render segs = "/" ++ interpolate '/' (map percentEncode segs)\n\nrender []\n = "/" ++ interpolate '/' (map percentEncode [])\n = "/" ++ interpolate '/' []\n = "/" ++ ""\n = "/"\n\nrender [""]\n = "/" ++ interpolate '/' (map percentEncode [""])\n = "/" ++ interpolate '/' [""]\n = "/" ++ ""\n = "/"\n</code></pre>\n<p>In other words, both <code>[]</code> and <code>[""]</code> encode to the same raw path, <code>/</code>. This may seem like a trivial corner case not worth addressing. In fact, even more generally, empty path segments seem like a corner case. One possibility would be to say "segments must be non-zero length". Then there's no potential <code>[""]</code> input to worry about.</p>\n<p>When this topic came up in Yesod, we decided to approach this differently. We actually <em>did</em> have some people who had use cases for empty path segments. We'll get back to this in normalized rendering.</p>\n<h2 id=\"percent-encoding\">Percent encoding</h2>\n<p>I mentioned originally the annoyances of percent encoding character sets. 
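As an aside, that collision between the empty list and a single empty segment is easy to reproduce in a few lines of Rust (a hypothetical `render`, with percent encoding elided since these example segments contain nothing to escape):

```rust
// Hypothetical basic render: join the segments with '/' and prepend '/'.
// Percent encoding is omitted here for brevity.
fn render(segments: &[&str]) -> String {
    format!("/{}", segments.join("/"))
}

fn main() {
    assert_eq!(render(&[]), "/");
    assert_eq!(render(&[""]), "/"); // same output as the empty list
    assert_eq!(render(&["foo", "bar"]), "/foo/bar");
}
```

Two distinct inputs, one output: roundtripping is already broken before percent encoding even enters the picture.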
I'm still not going to go deeply into details of it. But we do need to discuss it at a surface level. In the steps above, let's ask two related questions:</p>\n<ul>\n<li>Why did we percent encode <em>before</em> interpolating?</li>\n<li>Do we percent encode forward slashes?</li>\n</ul>\n<p>Let's try percent encoding <em>after</em> interpolating. And let's say we decide not to percent encode forward slashes. Then <code>render(["foo/bar"])</code> would turn into <code>/foo/bar</code>, which is identical to <code>render(["foo", "bar"])</code>. That's not what we want. And if we decide we're going to percent encode <em>after</em> interpolating and that we <em>will percent encode forward slashes</em>, both inputs result in <code>/foo%2Fbar</code> as output. Neither of those is any good.</p>\n<p>OK, going back to percent encoding before interpolating, let's say that we don't percent encode forward slashes. Then both <code>["foo/bar"]</code> and <code>["foo", "bar"]</code> will turn into <code>/foo/bar</code>, again bad. So by process of elimination, we're left with percent encoding before interpolating, and escaping the forward slashes in segments. With this configuration, we're left with <code>render(["foo/bar"]) == "/foo%2Fbar"</code> and <code>render(["foo", "bar"]) == "/foo/bar"</code>. Not only is this unique output (our goal here), but it also intuitively feels right, at least to me.</p>\n<h2 id=\"unicode-codepoint-handling\">Unicode codepoint handling</h2>\n<p>One detail we've glossed over here is Unicode, and the difference between codepoints and octets. It's time to rectify that. Percent encoding is a process that works on <em>bytes</em>, not characters. I can percent encode <code>/</code> into <code>%2F</code>, but only because I'm assuming an ASCII representation of that character. By contrast, let's go back to my favorite non-Latin alphabet example, Hebrew. How do you represent the Hebrew letter Alef <code>א</code> with percent encoding? 
The answer is that you can't, at least not directly. Instead, we need to represent that <a href=\"https://unicode-table.com/en/05D0/\">Unicode codepoint</a> (U+05D0) as bytes. And the most universally accepted way to do that is to use UTF-8. So our process is something like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let segment: &str = "א";\nlet segment_bytes: &[u8] = encode_utf8(segment); // b"\\xD7\\x90"\nlet encoded: &[u8] = percent_encode(segment_bytes); // b"%D7%90"\n</code></pre>\n<p>OK, awesome, we now have a way to take a sequence of non-empty Unicode strings and generate a unique path representation of that. What's next?</p>\n<h2 id=\"basic-parsing\">Basic parsing</h2>\n<p>How do we go <em>backwards</em>? Easy: we reverse each of the steps above. Let's see the render steps again:</p>\n<ul>\n<li>Percent encode each segment, consisting of:\n<ul>\n<li>UTF-8 encode the codepoints into bytes</li>\n<li>Percent encode all relevant octets, including the forward slash</li>\n</ul>\n</li>\n<li>Interpolate all of the segments together, separated by a forward slash\n<ul>\n<li>Technically, the "forward slash" here is the forward slash <em>octet</em> <code>\\x2F</code>. But because everyone basically assumes ASCII/UTF-8 encoding, we can typically be a little loose in our terminology.</li>\n</ul>\n</li>\n<li>Prepend a forward slash (octet).</li>\n</ul>\n<p>Basic parsing is exactly the same steps in reverse:</p>\n<ul>\n<li>Strip off the forward slash.\n<ul>\n<li>Arguably, if a forward slash is missing, you could consider this a parse error. Most parsers simply ignore it instead.</li>\n</ul>\n</li>\n<li>Split the raw path on each occurrence of a forward slash. We'll discuss some subtleties about this next.</li>\n<li>Percent decode each segment, consisting of:\n<ul>\n<li>Look for any <code>%</code> signs, and grab the next two hexadecimal digits.
In theory, you could treat an incorrect or missing digit as a parse error. In practice, many people end up using some kind of fallback.</li>\n<li>Take the percent decoded octets and UTF-8 decode them. Again, in theory, you could treat invalid UTF-8 data as a parse error, but many people simply use the <a href=\"https://en.wikipedia.org/wiki/Replacement_character\">Unicode replacement character</a>.</li>\n</ul>\n</li>\n</ul>\n<p>If implemented correctly, this should result in the goal we mentioned above: encoding and decoding a specific input will always give back the original value (ignoring the empty segment case, which we still haven't addressed). The one really tricky thing is making sure that our <em>split</em> and <em>interpolate</em> operations mirror each other correctly. There are actually <a href=\"https://www.stackage.org/package/split\">many different ways of splitting lists and strings</a>. Fortunately for my Rust interpolation, the <a href=\"https://doc.rust-lang.org/stable/std/primitive.str.html#method.split\">standard <code>split</code> method on <code>str</code></a> happens to implement exactly the behavior we want. You can check out the method's documentation for details (helpful even for non-Rustaceans!). Pay particular attention to the comments about contiguous separators, and think about how <code>["foo", "", "", "bar"]</code> would end up being interpolated and then parsed.</p>\n<p>OK, we're all done, right? Wrong!</p>\n<h2 id=\"normalization\">Normalization</h2>\n<p>I bet you thought I forgot about the empty segments. (Actually, given how many times I called them out, I bet you <em>didn't</em> think that.) Before, we saw exactly one problem with empty segments: the weird case of <code>[""]</code>. I want to first establish that empty segments are a much bigger problem than that.</p>\n<p>I gave a link above to a GitHub repository: <code>https://github.com/snoyberg/routetype-rs</code>. 
Let's change that URL ever so slightly, and add an extra forward slash in between <code>snoyberg</code> and <code>routetype-rs</code>: <code>https://github.com/snoyberg//routetype-rs</code>. Amazingly, you get the same page for both URLs. Isn't that weird?</p>\n<p>No, not really. Extra forward slashes are often ignored by web servers. "I know what you meant, and you didn't mean an empty path segment." This isn't just a "feature" of web servers. The same concept applies on my Linux command line:</p>\n<pre><code>$ cat /etc/debian_version\nbullseye/sid\n$ cat /etc///debian_version\nbullseye/sid\n</code></pre>\n<p>I've got two problems with the behavior GitHub is demonstrating above:</p>\n<ul>\n<li>What if I'm writing some web application and I really, truly want to be able to embed a <em>meaningful</em> empty segment in the path?</li>\n<li>Doesn't it feel wrong, and maybe even hurt SEO, to have two different URLs that resolve to the same content?</li>\n</ul>\n<p>In Yesod, we addressed the second issue with a class method called <code>cleanPath</code>, which analyzes the segments of an incoming path and sees if there's a more canonical representation of them. For the case above, <code>https://github.com/snoyberg//routetype-rs</code> would produce the segments <code>["snoyberg", "", "routetype-rs"]</code>, and <code>cleanPath</code> would decide that a more canonical representation would be <code>["snoyberg", "routetype-rs"]</code>. Then, Yesod would take the canonical representation and generate a redirect. In other words, if GitHub was written in Yesod, my request to <code>https://github.com/snoyberg//routetype-rs</code> would result in a redirect to <code>https://github.com/snoyberg/routetype-rs</code>.</p>\n<p><a href=\"https://github.com/yesodweb/yesod/issues/421\">Way back in 2012</a>, this led to a problem, however. Someone actually had empty path segments, and Yesod was automatically redirecting away from the generated URLs.
We came up with a solution back then that I'm still very fond of: dash prefixing. See the linked issue for the details, but the way it works is:</p>\n<ul>\n<li>When encoding, if a segment consists entirely of dashes, add one more dash to it.\n<ul>\n<li>By our definition of "consists entirely of dashes," the empty string counts too. So <code>dashPrefix "" == "-"</code>, and <code>dashPrefix "---" == "----"</code>.</li>\n</ul>\n</li>\n<li>When decoding:\n<ul>\n<li>Perform the split operation above.</li>\n<li>Next, perform the clean path check, and generate a redirect if there are any empty path segments.</li>\n<li>Once we know that there are no empty path segments, <em>then</em> undo dash prefixing. If a segment consists of only dashes, remove one of the dashes.</li>\n</ul>\n</li>\n</ul>\n<p>If you work this through enough, you can see that with this addition, every possible sequence of segments—even empty segments—results in a unique raw path after rendering. And every incoming raw path can either be parsed to a necessary redirect (if there are empty segments) or to a sequence of segments. And finally, each sequence of segments will successfully roundtrip back to the original sequence when parsing and rendering.</p>\n<p>I call this <em>normalized</em> parsing and rendering, since it is normalizing each incoming path to a single, canonical representation, at least as far as empty path segments are concerned. I suppose if someone wanted to be truly pedantic, they could also try to address variations in percent encoding behavior or invalid UTF-8 sequences. But I'd consider the former a non-semantic difference, and the latter garbage-in-garbage-out.</p>\n<h2 id=\"trailing-slashes\">Trailing slashes</h2>\n<p>There's one final point to bring up. What exactly causes an empty path segment to occur when parsing? One example is contiguous slashes, like our <code>snoyberg//routetype-rs</code> example above. 
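The dash-prefixing scheme described above fits in a few lines. Here is a sketch (the function names are mine, not Yesod's, and percent encoding is reduced to just `%` and `/` for brevity):

```rust
// Dash prefixing: a segment consisting entirely of dashes (including
// the empty string) gains one more dash on render. Decoding, run after
// empty segments have been redirected away, removes one dash from any
// non-empty all-dash segment.
fn dash_prefix(seg: &str) -> String {
    if seg.chars().all(|c| c == '-') {
        format!("-{}", seg)
    } else {
        seg.to_owned()
    }
}

// Minimal percent encoding: just enough to keep '/' and '%' inside a
// segment from colliding with the separator or the escape character.
fn encode(seg: &str) -> String {
    seg.bytes()
        .map(|b| match b {
            b'%' | b'/' => format!("%{:02X}", b),
            _ => (b as char).to_string(), // assumes ASCII input for brevity
        })
        .collect()
}

fn render(segments: &[&str]) -> String {
    let encoded: Vec<String> = segments.iter().map(|s| encode(&dash_prefix(s))).collect();
    format!("/{}", encoded.join("/"))
}

fn main() {
    // Every distinct input now gets a distinct rendering:
    assert_eq!(render(&[]), "/");
    assert_eq!(render(&[""]), "/-");
    assert_eq!(render(&["-"]), "/--");
    assert_eq!(render(&["foo", "", "bar"]), "/foo/-/bar");
    assert_eq!(render(&["foo/bar"]), "/foo%2Fbar");
}
```

Parsing mirrors these steps: split on `/`, redirect if any split segment is empty, then strip one dash from each all-dash segment.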
But there's a far more interesting and prevalent case: the trailing slash. Many web servers use trailing slashes, likely originating from the common pattern of having <code>index.html</code> files and accessing a page based on the containing directory name. In fact, this blog post is hosted on a statically generated site that uses that technique, which is why the URL has a trailing slash. And if you perform basic parsing on our path here, you'd get:</p>\n<pre><code>basic_parse("/blog/pains-path-parsing/") == ["blog", "pains-path-parsing", ""]\n</code></pre>\n<p>Whether to include trailing slashes in URLs is an old argument on the internet. Personally, because I consider the parsing-into-segments concept to be central to path parsing, I prefer excluding the trailing slash. And in fact, Yesod's default (and, at least for now, <code>routetype-rs</code>'s default) is to treat such a URL as non-canonical and redirect away from it. I felt even more strongly about that when I realized lots of frameworks have special handling for "final segments with filename extensions." For example, <code>/blog/bananas/</code> is good with a trailing slash, but <code>/images/bananas.png</code> should <em>not</em> have a trailing slash.</p>\n<p>However, since so many people like having trailing slashes, Yesod is configurable on this point, which is why <code>cleanPath</code> is a typeclass method that can be overridden. To each their own, I suppose.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hope this blog post gave a little more insight into the wild world of the web and how something as seemingly innocuous as paths actually hides some depth. If you're interested in learning more about the <code>routetype-rs</code> project, please let me know, and I'll try to prioritize some follow-ups on it.</p>\n<p>You may be interested in more <a href=\"https://tech.fpcomplete.com/rust/\">Rust</a> or <a href=\"https://tech.fpcomplete.com/haskell/\">Haskell</a> from FP Complete.
Also, check out <a href=\"https://tech.fpcomplete.com/blog/\">our blog</a> for a wide range of technical content.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/",
"slug": "pains-path-parsing",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "The Pains of Path Parsing",
"description": "A semi-deep dive into the finer points of path parsing and rendering in web applications",
"updated": null,
"date": "2021-04-26",
"year": 2021,
"month": 4,
"day": 26,
"taxonomies": {
"tags": [
"haskell",
"rust",
"web",
"devops"
],
"categories": [
"functional programming"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"blogimage": "/images/blog-listing/functional.png",
"image": "images/blog/pains-path-parsing.png"
},
"path": "/blog/pains-path-parsing/",
"components": [
"blog",
"pains-path-parsing"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-s-a-path",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#what-s-a-path",
"title": "What's a path?",
"children": []
},
{
"level": 2,
"id": "what-do-paths-represent",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#what-do-paths-represent",
"title": "What do paths represent?",
"children": []
},
{
"level": 2,
"id": "basic-rendering",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#basic-rendering",
"title": "Basic rendering",
"children": []
},
{
"level": 2,
"id": "percent-encoding",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#percent-encoding",
"title": "Percent encoding",
"children": []
},
{
"level": 2,
"id": "unicode-codepoint-handling",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#unicode-codepoint-handling",
"title": "Unicode codepoint handling",
"children": []
},
{
"level": 2,
"id": "basic-parsing",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#basic-parsing",
"title": "Basic parsing",
"children": []
},
{
"level": 2,
"id": "normalization",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#normalization",
"title": "Normalization",
"children": []
},
{
"level": 2,
"id": "trailing-slashes",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#trailing-slashes",
"title": "Trailing slashes",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/pains-path-parsing/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2670,
"reading_time": 14,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/captures-closures-async.md",
"colocated_path": null,
"content": "<p>This blog post is the second in the <a href=\"/tags/rust-quickies/\">Rust quickies</a> series. In my <a href=\"https://tech.fpcomplete.com/training/\">training sessions</a>, we often come up with quick examples to demonstrate some point. Instead of forgetting about them, I want to put short blog posts together focusing on these examples. Hopefully these will be helpful, enjoy!</p>\n<div class=\"alert alert-secondary text-center\">FP Complete is looking for Rust and DevOps engineers. Interested in working with us? <a href=\"/jobs/\">Check out our jobs page</a>.</div>\n<h2 id=\"hello-hyper\">Hello Hyper!</h2>\n<p>For those not familiar, <a href=\"https://hyper.rs/\">Hyper</a> is an HTTP implementation for Rust, built on top of Tokio. It's a low level library powering frameworks like <a href=\"https://crates.io/crates/warp\">Warp</a> and <a href=\"https://rocket.rs/\">Rocket</a>, as well as the <a href=\"https://lib.rs/crates/reqwest\">reqwest</a> client library. For most people, most of the time, using a higher level wrapper like these is the right thing to do.</p>\n<p>But sometimes we like to get our hands dirty, and sometimes working directly with Hyper is the right choice. And definitely from a learning perspective, it's worth doing so at least once. And what could be easier than following the example from Hyper's homepage? 
To do so, <code>cargo new</code> a new project, add the following dependencies:</p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">hyper = { version = "0.14", features = ["full"] }\ntokio = { version = "1", features = ["full"] }\n</code></pre>\n<p>And add the following to <code>main.rs</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::convert::Infallible;\nuse std::net::SocketAddr;\nuse hyper::{Body, Request, Response, Server};\nuse hyper::service::{make_service_fn, service_fn};\n\nasync fn hello_world(_req: Request<Body>) -> Result<Response<Body>, Infallible> {\n Ok(Response::new("Hello, World".into()))\n}\n\n#[tokio::main]\nasync fn main() {\n // We'll bind to 127.0.0.1:3000\n let addr = SocketAddr::from(([127, 0, 0, 1], 3000));\n\n // A `Service` is needed for every connection, so this\n // creates one from our `hello_world` function.\n let make_svc = make_service_fn(|_conn| async {\n // service_fn converts our function into a `Service`\n Ok::<_, Infallible>(service_fn(hello_world))\n });\n\n let server = Server::bind(&addr).serve(make_svc);\n\n // Run this server for... forever!\n if let Err(e) = server.await {\n eprintln!("server error: {}", e);\n }\n}\n</code></pre>\n<p>If you're interested, there's a <a href=\"https://hyper.rs/guides/server/hello-world/\">quick explanation</a> of this code available on Hyper's website. But our focus will be on making an ever-so-minor modification to this code. Let's go!</p>\n<h2 id=\"counter\">Counter</h2>\n<p>Remember the good old days of Geocities websites, where every page had to have a visitor counter? I want that. 
Let's modify our <code>hello_world</code> function to do just that:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::sync::{Arc, Mutex};\n\ntype Counter = Arc<Mutex<usize>>; // Bonus points: use an AtomicUsize instead\n\nasync fn hello_world(counter: Counter, _req: Request<Body>) -> Result<Response<Body>, Infallible> {\n let mut guard = counter.lock().unwrap(); // unwrap poisoned Mutexes\n *guard += 1;\n let message = format!("You are visitor number {}", guard);\n Ok(Response::new(message.into()))\n}\n</code></pre>\n<p>That's easy enough, and now we're done with <code>hello_world</code>. The only problem is rewriting <code>main</code> to pass in a <code>Counter</code> value to it. Let's take a first, naive stab at the problem:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let addr = SocketAddr::from(([127, 0, 0, 1], 3000));\nlet counter: Counter = Arc::new(Mutex::new(0));\n\nlet make_svc = make_service_fn(|_conn| async {\n Ok::<_, Infallible>(service_fn(|req| hello_world(counter, req)))\n});\n\nlet server = Server::bind(&addr).serve(make_svc);\n\nif let Err(e) = server.await {\n eprintln!("server error: {}", e);\n}\n</code></pre>\n<p>Unfortunately, this fails due to moving out of captured variables. 
(That's a topic we cover in detail in our closure training module.)</p>\n<pre><code>error[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n --> src\\main.rs:21:58\n |\n18 | let counter: Counter = Arc::new(Mutex::new(0));\n | ------- captured outer variable\n...\n21 | Ok::<_, Infallible>(service_fn(|req| hello_world(counter, req)))\n | ^^^^^^^ move occurs because `counter` has type `Arc<std::sync::Mutex<usize>>`, which does not implement the `Copy` trait\n\nerror[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n --> src\\main.rs:20:50\n |\n18 | let counter: Counter = Arc::new(Mutex::new(0));\n | ------- captured outer variable\n19 |\n20 | let make_svc = make_service_fn(|_conn| async {\n | __________________________________________________^\n21 | | Ok::<_, Infallible>(service_fn(|req| hello_world(counter, req)))\n | | -------------------------------\n | | |\n | | move occurs because `counter` has type `Arc<std::sync::Mutex<usize>>`, which does not implement the `Copy` trait\n | | move occurs due to use in generator\n22 | | });\n | |_____^ move out of `counter` occurs here\n</code></pre>\n<h2 id=\"clone\">Clone</h2>\n<p>That error isn't terribly surprising. We put our <code>Mutex</code> inside an <code>Arc</code> for a reason: we'll need to make multiple clones of it and pass those around to each new request handler. But we haven't called <code>clone</code> once yet! 
Again, let's do the most naive thing possible, and change:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">Ok::<_, Infallible>(service_fn(|req| hello_world(counter, req)))\n</code></pre>\n<p>into</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">Ok::<_, Infallible>(service_fn(|req| hello_world(counter.clone(), req)))\n</code></pre>\n<p>This is where the error messages begin to get more interesting:</p>\n<pre><code>error[E0597]: `counter` does not live long enough\n --> src\\main.rs:21:58\n |\n20 | let make_svc = make_service_fn(|_conn| async {\n | ____________________________________-------_-\n | | |\n | | value captured here\n21 | | Ok::<_, Infallible>(service_fn(|req| hello_world(counter.clone(), req)))\n | | ^^^^^^^ borrowed value does not live long enough\n22 | | });\n | |_____- returning this value requires that `counter` is borrowed for `'static`\n...\n29 | }\n | - `counter` dropped here while still borrowed\n</code></pre>\n<p>Both <code>async</code> blocks and closures will, by default, capture variables from their environment by reference, instead of taking ownership. Our closure needs to have a <code>'static</code> lifetime, and therefore can't hold onto a reference to data in our <code>main</code> function.</p>\n<h2 id=\"move-all-the-things\"><code>move</code> all the things!</h2>\n<p>The standard solution to this is to simply sprinkle <code>move</code>s on each <code>async</code> block and closure. This will force each closure to own the <code>Arc</code> itself, not a reference to it. Doing so looks simple:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| async move {\n Ok::<_, Infallible>(service_fn(move |req| hello_world(counter.clone(), req)))\n});\n</code></pre>\n<p>And this does in fact fix the error above. 
But it gives us a new error instead:</p>\n<pre><code>error[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n --> src\\main.rs:20:60\n |\n18 | let counter: Counter = Arc::new(Mutex::new(0));\n | ------- captured outer variable\n19 |\n20 | let make_svc = make_service_fn(move |_conn| async move {\n | ____________________________________________________________^\n21 | | Ok::<_, Infallible>(service_fn(move |req| hello_world(counter.clone(), req)))\n | | --------------------------------------------\n | | |\n | | move occurs because `counter` has type `Arc<std::sync::Mutex<usize>>`, which does not implement the `Copy` trait\n | | move occurs due to use in generator\n22 | | });\n | |_____^ move out of `counter` occurs here\n</code></pre>\n<h2 id=\"double-the-closure-double-the-clone\">Double the closure, double the clone!</h2>\n<p>Well, even <em>this</em> error makes a lot of sense. Let's understand better what our code is doing:</p>\n<ul>\n<li>Creates a closure to pass to <code>make_service_fn</code>, which will be called for each new incoming connection</li>\n<li>Within <em>that</em> closure, creates a new closure to pass to <code>service_fn</code>, which will be called for each new incoming request on an existing connection</li>\n</ul>\n<p>This is where the trickiness of working directly with Hyper comes into play. Each of those layers of closure need to own their own clone of the <code>Arc</code>. And in our code above, we're trying to move the <code>Arc</code> from the outer closure's captured variable into the inner closure's captured variable. If you squint hard enough, that's what the error message above is saying. Our outer closure is an <code>FnMut</code>, which must be callable multiple times. 
Therefore, we cannot move out of its captured variable.</p>\n<p>It seems like this should be an easy fix: just <code>clone</code> again!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| async move {\n let counter_clone = counter.clone();\n Ok::<_, Infallible>(service_fn(move |req| hello_world(counter_clone.clone(), req)))\n});\n</code></pre>\n<p>And this is the point at which we hit a real head scratcher: we get almost exactly the same error message:</p>\n<pre><code>error[E0507]: cannot move out of `counter`, a captured variable in an `FnMut` closure\n --> src\\main.rs:20:60\n |\n18 | let counter: Counter = Arc::new(Mutex::new(0));\n | ------- captured outer variable\n19 |\n20 | let make_svc = make_service_fn(move |_conn| async move {\n | ____________________________________________________________^\n21 | | let counter_clone = counter.clone();\n | | -------\n | | |\n | | move occurs because `counter` has type `Arc<std::sync::Mutex<usize>>`, which does not implement the `Copy` trait\n | | move occurs due to use in generator\n22 | | Ok::<_, Infallible>(service_fn(move |req| hello_world(counter_clone.clone(), req)))\n23 | | });\n | |_____^ move out of `counter` occurs here\n</code></pre>\n<h2 id=\"the-paradigm-shift\">The paradigm shift</h2>\n<p>What we need to do is to rewrite our code ever so slightly to reveal what the problem is. Let's add a bunch of unnecessary braces. 
We'll convert the code above:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| async move {\n let counter_clone = counter.clone();\n Ok::<_, Infallible>(service_fn(move |req| hello_world(counter_clone.clone(), req)))\n});\n</code></pre>\n<p>into this semantically identical code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| { // outer closure\n async move { // async block\n let counter_clone = counter.clone();\n Ok::<_, Infallible>(service_fn(move |req| { // inner closure\n hello_world(counter_clone.clone(), req)\n }))\n }\n});\n</code></pre>\n<p>The error message is basically identical, just slightly different source locations. But now I can walk through the ownership of <code>counter</code> more correctly. I've added comments to highlight three different entities in the code above that can take ownership of values via some kind of environment:</p>\n<ul>\n<li>The outer closure, which handles each connection</li>\n<li>An <code>async</code> block, which forms the body of the outer closure</li>\n<li>The inner closure, which handles each request</li>\n</ul>\n<p>In the original structuring of the code, we put <code>move |_conn| async move</code> next to each other on one line, which—at least for me—obfuscated the fact that the closure and <code>async</code> block were two completely separate entities. 
With that change in place, let's track the ownership of <code>counter</code>:</p>\n<ol>\n<li>We create the <code>Arc</code> in the <code>main</code> function; it's owned by the <code>counter</code> variable.</li>\n<li>We move the <code>Arc</code> from the <code>main</code> function's <code>counter</code> variable into the outer closure's captured variables.</li>\n<li>We move the <code>counter</code> variable out of the outer closure and into the <code>async</code> block's captured variables.</li>\n<li>Within the body of the <code>async</code> block, we create a clone of <code>counter</code>, called <code>counter_clone</code>. This does not move out of the <code>async</code> block, since the <code>clone</code> method only requires a reference to the <code>Arc</code>.</li>\n<li>We move the <code>Arc</code> out of the <code>counter_clone</code> variable and into the inner closure.</li>\n<li>Within the body of the inner closure, we clone the <code>Arc</code> (which, as explained in (4), doesn't move) and pass it into the <code>hello_world</code> function.</li>\n</ol>\n<p>Based on this breakdown, can you see where the problem is? It's at step (3). We don't want to move out of the outer closure's captured variables. We try to avoid that move by cloning <code>counter</code>. But we clone too late! By using <code>counter</code> from inside an <code>async move</code> block, we're forcing the compiler to move. Hurray, we've identified the problem!</p>\n<h2 id=\"non-solution-non-move-async\">Non-solution: non-move <code>async</code></h2>\n<p>It seems like we were simply over-ambitious with our "sprinkling <code>move</code>" attempt above. The problem is that the <code>async</code> block is taking ownership of <code>counter</code>. 
Let's try simply removing the <code>move</code> keyword there:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n async {\n let counter_clone = counter.clone();\n Ok::<_, Infallible>(service_fn(move |req| {\n hello_world(counter_clone.clone(), req)\n }))\n }\n});\n</code></pre>\n<p>Unfortunately, this isn't a solution:</p>\n<pre><code>error: captured variable cannot escape `FnMut` closure body\n --> src\\main.rs:21:9\n |\n18 | let counter: Counter = Arc::new(Mutex::new(0));\n | ------- variable defined here\n19 |\n20 | let make_svc = make_service_fn(move |_conn| {\n | - inferred to be a `FnMut` closure\n21 | / async {\n22 | | let counter_clone = counter.clone();\n | | ------- variable captured here\n23 | | Ok::<_, Infallible>(service_fn(move |req| {\n24 | | hello_world(counter_clone.clone(), req)\n25 | | }))\n26 | | }\n | |_________^ returns an `async` block that contains a reference to a captured variable, which then escapes the closure body\n |\n = note: `FnMut` closures only have access to their captured variables while they are executing...\n = note: ...therefore, they cannot allow references to captured variables to escape\n</code></pre>\n<p>The problem here is that the outer closure will return the <code>Future</code> generated by the <code>async</code> block. And if the <code>async</code> block doesn't <code>move</code> the <code>counter</code>, it will be holding a reference to the outer closure's captured variables. And that's not allowed.</p>\n<h2 id=\"real-solution-clone-early-clone-often\">Real solution: clone early, clone often</h2>\n<p>OK, undo the <code>async move</code> to <code>async</code> transformation, it's a dead end. 
It turns out that all we've got to do is clone the <code>counter</code> before we start the <code>async move</code> block, like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n let counter_clone = counter.clone(); // this moved one line earlier\n async move {\n Ok::<_, Infallible>(service_fn(move |req| {\n hello_world(counter_clone.clone(), req)\n }))\n }\n});\n</code></pre>\n<p>Now, we create a temporary <code>counter_clone</code> within the outer closure. This works by reference, and therefore doesn't move anything. We then move the new, temporary <code>counter_clone</code> into the <code>async move</code> block via a capture, and from there move it into the inner closure. With this, all of our closure captured variables remain unmoved, and therefore the requirements of <code>FnMut</code> are satisfied.</p>\n<p>And with that, we can finally enjoy the glory days of Geocities visitor counters!</p>\n<h2 id=\"async-closures\">Async closures</h2>\n<p>The formatting recommended by <code>rustfmt</code> hides away the fact that there are two different environments at play between the outer closure and the <code>async block</code>, by moving the two onto a single line with <code>move |_conn| async move</code>. That makes it feel like the two entities are somehow one and the same. But as we've demonstrated, they aren't.</p>\n<p>Theoretically this could be solved by having an async closure. I tested with <code>#![feature(async_closure)]</code> on <code>nightly-2021-03-02</code>, but couldn't figure out a way to use an async closure to solve this problem differently than I solved it above. 
But that may be my own lack of familiarity with <code>async_closure</code>.</p>\n<p>For now, the main takeaway is that closures and <code>async</code> blocks are two different entities, each with their own environment.</p>\n<p>If you liked this post you may also be interested in:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust home page</a></li>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/jobs/\">Jobs at FP Complete</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/",
"slug": "captures-closures-async",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Captures in closures and async blocks",
"description": "In this Rust Quickie, we'll cover a common mistake when writing async/await code, and how to more easily spot and fix it.",
"updated": null,
"date": "2021-03-03",
"year": 2021,
"month": 3,
"day": 3,
"taxonomies": {
"tags": [
"rust",
"rust-quickies"
],
"categories": [
"functional programming"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"blogimage": "/images/blog-listing/rust.png",
"image": "images/blog/rust-quickies/captures-closures-async.png"
},
"path": "/blog/captures-closures-async/",
"components": [
"blog",
"captures-closures-async"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "hello-hyper",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#hello-hyper",
"title": "Hello Hyper!",
"children": []
},
{
"level": 2,
"id": "counter",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#counter",
"title": "Counter",
"children": []
},
{
"level": 2,
"id": "clone",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#clone",
"title": "Clone",
"children": []
},
{
"level": 2,
"id": "move-all-the-things",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#move-all-the-things",
"title": "move all the things!",
"children": []
},
{
"level": 2,
"id": "double-the-closure-double-the-clone",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#double-the-closure-double-the-clone",
"title": "Double the closure, double the clone!",
"children": []
},
{
"level": 2,
"id": "the-paradigm-shift",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#the-paradigm-shift",
"title": "The paradigm shift",
"children": []
},
{
"level": 2,
"id": "non-solution-non-move-async",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#non-solution-non-move-async",
"title": "Non-solution: non-move async",
"children": []
},
{
"level": 2,
"id": "real-solution-clone-early-clone-often",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#real-solution-clone-early-clone-often",
"title": "Real solution: clone early, clone often",
"children": []
},
{
"level": 2,
"id": "async-closures",
"permalink": "https://tech.fpcomplete.com/blog/captures-closures-async/#async-closures",
"title": "Async closures",
"children": []
}
],
"word_count": 2199,
"reading_time": 11,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/short-circuit-sum-rust.md",
"colocated_path": null,
"content": "<p>This blog post is the first in a planned series I'm calling "Rust quickies." In my <a href=\"https://tech.fpcomplete.com/training/\">training sessions</a>, we often come up with quick examples to demonstrate some point. Instead of forgetting about them, I want to put short blog posts together focusing on these examples. Hopefully these will be helpful, enjoy!</p>\n<div class=\"alert alert-secondary text-center\">FP Complete is looking for Rust and DevOps engineers. Interested in working with us? <a href=\"/jobs/\">Check out our jobs page</a>.</div>\n<h2 id=\"short-circuiting-a-for-loop\">Short circuiting a <code>for</code> loop</h2>\n<p>Let's say I've got an <code>Iterator</code> of <code>u32</code>s. I want to double each value and print it. Easy enough:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) {\n for x in iter.into_iter().map(|x| x * 2) {\n println!("{}", x);\n }\n}\n\nfn main() {\n weird_function(1..10);\n}\n</code></pre>\n<p>And now let's say we hate the number 8, and want to stop when we hit it. That's a simple one-line change:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) {\n for x in iter.into_iter().map(|x| x * 2) {\n if x == 8 { return } // added this line\n println!("{}", x);\n }\n}\n</code></pre>\n<p>Easy, done, end of story. And for this reason, I <em>recommend</em> using <code>for</code> loops when possible. Even though, from a functional programming background, it feels overly imperative. However, some people out there want to be more functional, so let's explore that.</p>\n<h2 id=\"for-each-vs-map\">for_each vs map</h2>\n<p>Let's forget about the short-circuiting for a moment. And now we want to go back to the original version of the program, but <em>without</em> using a <code>for</code> loop. 
Easy enough with the method <code>for_each</code>. It takes a closure, which it runs for each value in the <code>Iterator</code>. Let's check it out:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) {\n iter.into_iter().map(|x| x * 2).for_each(|x| {\n println!("{}", x);\n })\n}\n</code></pre>\n<p>But why, exactly do we need <code>for_each</code>? That seems awfully similar to <code>map</code>, which <em>also</em> applies a function over every value in an <code>Iterator</code>. Trying to make that change, however, demonstrates the problem. With this code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) {\n iter.into_iter().map(|x| x * 2).map(|x| {\n println!("{}", x);\n })\n}\n</code></pre>\n<p>we get an error message:</p>\n<pre><code>error[E0308]: mismatched types\n --> src\\main.rs:2:5\n |\n2 | / iter.into_iter().map(|x| x * 2).map(|x| {\n3 | | println!("{}", x);\n4 | | })\n | |______^ expected `()`, found struct `Map`\n</code></pre>\n<p>Undaunted, I fix this error by sticking a semicolon at the end of that expression. That generates a warning of <code>unused `Map` that must be used</code>. And sure enough, running this program produces no output.</p>\n<p>The problem is that <code>map</code> doesn't drain the <code>Iterator</code>. Said another way, <code>map</code> is <em>lazy</em>. It adapts one <code>Iterator</code> into a new <code>Iterator</code>. But unless something comes along and <em>drains</em> or <em>forces</em> the <code>Iterator</code>, no actions will occur. By contrast, <code>for_each</code> will always drain an <code>Iterator</code>.</p>\n<p>One easy trick to force draining of an <code>Iterator</code> is with the <code>count()</code> method. 
This will perform some unnecessary work of counting how many values are in the <code>Iterator</code>, but it's not that expensive. Another approach would be to use <code>collect</code>. This one is a little trickier, since <code>collect</code> typically needs some type annotations. But thanks to a fun trick of how <code>FromIterator</code> is implemented for the unit type, we can collect a stream of <code>()</code>s into a single <code>()</code> value. Meaning, this code works:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) {\n iter.into_iter().map(|x| x * 2).map(|x| {\n println!("{}", x);\n }).collect()\n}\n</code></pre>\n<p>Note the lack of a semicolon at the end there. What do you think will happen if we add in the semicolon?</p>\n<h2 id=\"short-circuiting\">Short circuiting</h2>\n<p><strong>EDIT</strong> Enough people have asked "why not use <code>take_while</code>?" that I thought I'd address it. Yes, below, <code>take_while</code> will work for "short circuiting." It's probably even a good idea. But the main goal in this post is to explore some funny implementation approaches, not recommend a best practice. And overall, despite some good arguments for <code>take_while</code> being a good choice here, I still stand by the overall recommendation to prefer <code>for</code> loops for simplicity.</p>\n<p>With the <code>for</code> loop approach, stopping at the first 8 was a trivial, 1 line addition. Let's do the same thing here:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) {\n iter.into_iter().map(|x| x * 2).map(|x| {\n if x == 8 { return }\n println!("{}", x);\n }).collect()\n}\n</code></pre>\n<p>Take a guess at what the output will be. Ready? 
OK, here's the real thing:</p>\n<pre><code>2\n4\n6\n10\n12\n14\n16\n18\n</code></pre>\n<p>We <em>skipped</em> 8, but we didn't stop. It's the difference between a <code>continue</code> and a <code>break</code> inside the <code>for</code> loop. Why did this happen?</p>\n<p>It's important to think about the scope of a <code>return</code>. It will exit the current function. And in this case, the current function isn't <code>weird_function</code>, but the <em>closure inside the <code>map</code> call</em>. This is what makes short-circuiting inside <code>map</code> so difficult.</p>\n<p>The same exact comment will apply to <code>for_each</code>. The only way to stop a <code>for_each</code> from continuing is to panic (or abort the program, if you want to get really aggressive).</p>\n<p>But with <code>map</code>, we have some ingenious ways of working around this and short-circuiting. Let's see it in action.</p>\n<h2 id=\"collect-an-option\">collect an <code>Option</code></h2>\n<p><code>map</code> needs some draining method to drive it. We've been using <code>collect</code>. I've <a href=\"https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/\">previously discussed the intricacies of this method</a>. One cool feature of <code>collect</code> is that, for <code>Option</code> and <code>Result</code>, it provides short-circuit capabilities. We can modify our program to take advantage of that:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) -> Option<()> {\n iter.into_iter().map(|x| x * 2).map(|x| {\n if x == 8 { return None } // short circuit!\n println!("{}", x);\n Some(()) // keep going!\n }).collect()\n}\n</code></pre>\n<p>I put a return type of <code>weird_function</code>, though we could also use turbofish on <code>collect</code> and throw away the result. We just need some type annotation to say what we're trying to collect. 
Since collecting the underlying <code>()</code> values doesn't take up extra memory, this is even pretty efficient! The only cost is the extra <code>Option</code>. But that extra <code>Option</code> is (arguably) useful; it lets us know if we short-circuited or not.</p>\n<p>But the story isn't so rosy with other types. Let's say our closure within <code>map</code> returns the <code>x</code> value. In other words, replace the last line with <code>Some(x)</code> instead of <code>Some(())</code>. Now we need to somehow collect up those <code>u32</code>s. Something like this would work:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) -> Option<Vec<u32>> {\n iter.into_iter().map(|x| x * 2).map(|x| {\n if x == 8 { return None } // short circuit!\n println!("{}", x);\n Some(x) // keep going!\n }).collect()\n}\n</code></pre>\n<p>But that incurs a heap allocation that we don't want! And using <code>count()</code> from before is useless too, since it won't even short circuit.</p>\n<p>But we do have one other trick.</p>\n<h2 id=\"sum\">sum</h2>\n<p>It turns out there's another draining method on <code>Iterator</code> that performs short circuiting: <code>sum</code>. This program works perfectly well:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn weird_function(iter: impl IntoIterator<Item=u32>) -> Option<u32> {\n iter.into_iter().map(|x| x * 2).map(|x| {\n if x == 8 { return None } // short circuit!\n println!("{}", x);\n Some(x) // keep going!\n }).sum()\n}\n</code></pre>\n<p>The downside is that it's unnecessarily summing up the values. And maybe that could be a real problem if some kind of overflow occurs. But this mostly works. But is there some way we can stay functional, short circuit, and get no performance overhead? 
Sure!</p>\n<h2 id=\"short\">Short</h2>\n<p>The final trick here is to create a new helper type for summing up an <code>Iterator</code>. But this thing won't really sum. Instead, it will throw away all of the values, and stop as soon as it sees a <code>None</code>. Let's see it in practice:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Debug)]\nenum Short {\n Stopped,\n Completed,\n}\n\nimpl<T> std::iter::Sum<Option<T>> for Short {\n fn sum<I: Iterator<Item = Option<T>>>(iter: I) -> Self {\n for x in iter {\n if let None = x { return Short::Stopped }\n }\n Short::Completed\n }\n}\nfn weird_function(iter: impl IntoIterator<Item=u32>) -> Short {\n iter.into_iter().map(|x| x * 2).map(|x| {\n if x == 8 { return None } // short circuit!\n println!("{}", x);\n Some(x) // keep going!\n }).sum()\n}\n\nfn main() {\n println!("{:?}", weird_function(1..10));\n}\n</code></pre>\n<p>And voila! We're done!</p>\n<p><strong>Exercise</strong> It's pretty cheeky to use <code>sum</code> here. <code>collect</code> makes more sense. Replace <code>sum</code> with <code>collect</code>, and then change the <code>Sum</code> implementation into something else. Solution at the end.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>That's a lot of work to be functional. Rust has a great story around short circuiting. And it's not just with <code>return</code>, <code>break</code>, and <code>continue</code>. It's with the <code>?</code> try operator, which forms the basis of error handling in Rust. There are times when you'll want to use <code>Iterator</code> adapters, async streaming adapters, and functional-style code. But unless you have a pressing need, my recommendation is to stick to <code>for</code> loops.</p>\n<p>If you liked this post, and would like to see more Rust quickies, <a href=\"https://twitter.com/snoyberg\">let me know</a>. 
You may also like these other pages:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/\">FP Complete Rust home page</a></li>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://tech.fpcomplete.com/jobs/\">Jobs at FP Complete</a></li>\n</ul>\n<h2 id=\"solution\">Solution</h2>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::iter::FromIterator;\n\n#[derive(Debug)]\nenum Short {\n Stopped,\n Completed,\n}\n\nimpl<T> FromIterator<Option<T>> for Short {\n fn from_iter<I: IntoIterator<Item = Option<T>>>(iter: I) -> Self {\n for x in iter {\n if let None = x { return Short::Stopped }\n }\n Short::Completed\n }\n}\nfn weird_function(iter: impl IntoIterator<Item=u32>) -> Short {\n iter.into_iter().map(|x| x * 2).map(|x| {\n if x == 8 { return None } // short circuit!\n println!("{}", x);\n Some(x) // keep going!\n }).collect()\n}\n\nfn main() {\n println!("{:?}", weird_function(1..10));\n}\n</code></pre>\n",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/",
"slug": "short-circuit-sum-rust",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Short Circuit Sum in Rust",
"description": "A Rust quickie: how to short circuit a functional-style iterator pipeline without a heap allocation, using collect, sum, and a custom Sum or FromIterator implementation.",
"updated": null,
"date": "2021-02-15",
"year": 2021,
"month": 2,
"day": 15,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"rust",
"rust-quickies"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png",
"image": "images/blog/rust-quickies/short-circuit-sum.png"
},
"path": "/blog/short-circuit-sum-rust/",
"components": [
"blog",
"short-circuit-sum-rust"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "short-circuiting-a-for-loop",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#short-circuiting-a-for-loop",
"title": "Short circuiting a for loop",
"children": []
},
{
"level": 2,
"id": "for-each-vs-map",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#for-each-vs-map",
"title": "for_each vs map",
"children": []
},
{
"level": 2,
"id": "short-circuiting",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#short-circuiting",
"title": "Short circuiting",
"children": []
},
{
"level": 2,
"id": "collect-an-option",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#collect-an-option",
"title": "collect an Option",
"children": []
},
{
"level": 2,
"id": "sum",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#sum",
"title": "sum",
"children": []
},
{
"level": 2,
"id": "short",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#short",
"title": "Short",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#conclusion",
"title": "Conclusion",
"children": []
},
{
"level": 2,
"id": "solution",
"permalink": "https://tech.fpcomplete.com/blog/short-circuit-sum-rust/#solution",
"title": "Solution",
"children": []
}
],
"word_count": 1556,
"reading_time": 8,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/philosophies-rust-haskell.md",
"colocated_path": null,
"content": "<p>Rust is a systems programming language following fairly standard imperative approaches and a C-style syntax. Haskell is a purely functional programming language, innovating in areas such as type theory and effect management. Viewed that way, these languages are polar opposites.</p>\n<p>And yet, these two languages attract many of the same people, including the engineering team at FP Complete. Putting on a different set of lenses, both languages provide powerful abstractions, enforce different kinds of correctness via static analysis in the compiler, and favor powerful features over quick adoption.</p>\n<p>In this post, I want to look at some of the philosophical underpinnings that explain some of the similarities and differences in the languages. Some of these are inherent. Rust's status as a systems programming language essentially requires some different approaches to Haskell's purely functional nature. But some of these are not. It wasn't strictly necessary for both languages to converge on similar systems for Algebraic Data Types (ADTs) and ad hoc polymorphism (via traits/type classes).</p>\n<p>Keep in mind that in writing this post, I'm viewing it as a <em>consumer</em> of the languages, not a designer. The designers themselves may have different motivations than those I describe. It would certainly be interesting to see if others have different takes on this topic.</p>\n<h2 id=\"rust-ownership\">Rust: ownership</h2>\n<p>This is so obvious that I almost forgot to include it. If there's one thing that defines Rust versus any other language, it's ownership and the borrow checker. This speaks to two core pieces of Rust:</p>\n<ul>\n<li>The goal of serving as a systems programming language, where garbage collection is not an option</li>\n<li>The goal of providing a safe subset of the language, where undefined behavior cannot occur</li>\n</ul>\n<p>The concept of ownership achieves both of these. 
Many additions have been made to the language to make it easier to work with ownership overall. This hints at the concept of ergonomics, which is fundamental to Rust philosophy. But ownership and borrow checking are also known as the harder parts of the language. Putting it together, we see a philosophy of striving to meet our goals safely, while making the usage of the features as easy as possible. However, if there's a conflict between the goals and ease of use, the goals win out.</p>\n<p>All of this stands in stark contrast to Haskell, which is explicitly <em>not</em> a systems language, and does not attempt in any way to address those cases. Instead, it leverages garbage collection quite happily, with the trade-offs between performance and ease-of-use inherent in that choice.</p>\n<h2 id=\"haskell-purely-functional\">Haskell: purely functional</h2>\n<p>The underlying goal of Haskell is ultimately to create a purely functional programming language. Many of the most notable and unusual features of Haskell directly derive from this goal, such as using monads to explicitly track effects.</p>\n<p>Other parts of the language follow from this less directly. For example, Haskell strongly embraces Higher Order Functions, currying, and partial function application. This combination turns many common structures in other languages (like loops) into normal functions. But in order to make this feel natural, Haskell uses slightly odd (compared to other languages) syntax for function application.</p>\n<p>And this gets into a more fundamental piece of philosophy. Haskell is willing to be quite dramatically different from other programming languages in its pursuit of its goals. In my opinion, Rust has been less willing to diverge from mainstream approaches, veering away only out of absolute necessity.</p>\n<p>This results in a world where Haskell feels quite a bit more foreign to others, but has more freedom to innovate. 
Rust, on the other hand, has stuck to existing solutions when possible, such as eschewing monadic futures in favor of <code>async</code>/<code>.await</code> syntax.</p>\n<h2 id=\"expression-oriented\">Expression oriented</h2>\n<p>I undervalued how important this feature was for a while, but recently I've realized that it's one of the most important features in both languages for me.</p>\n<blockquote class=\"twitter-tweet\"><p lang=\"en\" dir=\"ltr\">I used to think that the reason I loved both Haskell and Rust so much was their shared strong typing, ADTs, and pattern matching combination.<br><br>After a recent discussion, I think it may be more about being expression-oriented languages.</p>— Michael Snoyman (@snoyberg) <a href=\"https://twitter.com/snoyberg/status/1348486654017855489?ref_src=twsrc%5Etfw\">January 11, 2021</a></blockquote> <script async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>\n<p>Instead of relying on declare-then-assign patterns, both languages allow conditionals and other constructs to evaluate to values. This reduces the frequency of seeing mutable assignment and avoids cases of uninitialized variables. By restricting mutable assignment to cases where it's actual mutation, we get to free up a lot of head space to focus on the trickier parts of programming.</p>\n<h2 id=\"type-system\">Type system</h2>\n<p>Rust and Haskell have very similar type systems. Both make it easy to create new types, provide for features like newtypes, provide type aliases, and offer a combination of product (<code>struct</code>) and sum (<code>enum</code>) types. Both allow labeling fields or accessing values positionally. Both offer <a href=\"https://tech.fpcomplete.com/blog/pattern-matching/\">pattern matching</a> constructs. Overall, the similarities between the two languages far outweigh the differences.</p>\n<p>I place a large part of the shared interest between these languages at the feet of the type system. 
Since I started using Haskell, I feel strongly hampered using any language without a rich, flexible, and powerful type system. Rust's embrace of Algebraic Data Types (ADTs) feels natural.</p>\n<p>There are some differences between the languages in these topics, but they are <em>mostly</em> superficial. For example, Haskell uses the single keyword <code>data</code> for introducing both product and sum types, while Rust uses <code>struct</code> and <code>enum</code>, respectively. Haskell will allow creation of partial field accessors in sum types, while Rust does not. Haskell allows for partial pattern matches (with an optional warning), and Rust does not.</p>\n<p>These are meaningful and affect the way you use the languages, but I don't see them as deeply philosophical. Instead, I see both languages embracing the idea that encouraging programmers to define and use strong typing mechanisms leads to better code. And it's a message I wholeheartedly endorse.</p>\n<h2 id=\"traits-and-type-classes\">Traits and type classes</h2>\n<p>In the wide world of inheritance and polymorphism, there are a lot of different approaches. Within that, Rust's traits and Haskell's type classes are far more similar than different. Both of them allow you to separate out functionality (methods) from data (<code>struct</code>/<code>data</code>). Both allow you to create new types or traits/classes yourself and add them on to existing types/traits/classes. Both of them support a concept of associated types, and multiple parameters (either via parameterized traits or multi-param type classes).</p>\n<p>There are some differences between the two. For one, Rust doesn't allow orphans. An implementation must appear in the same crate as either the type definition or the trait definition. (The fact that Rust treats an entire crate as a compilation unit instead of a single module makes this restriction less of an imposition.) 
Also, Haskell supports functional dependencies, but that's not terribly interesting, since that can be closely approximated with associated types. And there are other, more subtle differences, around issues like overlapping instances. Rust's lack of orphans allows it to make some closed world assumptions that Haskell cannot.</p>\n<p>Ultimately, the distinctions above don't lend themselves to a deep philosophical difference, but rather minor variations on a theme. There is, however, one major distinction in this area between the two languages: Higher Kinded Types (HKTs). In Haskell, HKTs provide the basis for such typeclasses as <code>Functor</code>, <code>Applicative</code>, <code>Monad</code>, <code>Foldable</code>, and <code>Traversable</code>. In Rust, implementing some kind of traits around these concepts is <a href=\"https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/\">a bit more complicated</a>.</p>\n<p>And this is one of the deeper philosophical differences between the two languages. Haskellers readily embrace concepts like HKTs. The Rust community has adamantly avoided embracing them, due to their perceived complexity. Instead, in Rust, alternative and arguably simpler approaches have been used to solve the same problems these typeclasses solve in Haskell. Which leads us to probably the biggest philosophical difference between the languages.</p>\n<h2 id=\"general-vs-specific\">General vs specific</h2>\n<p>Let's say I want to have early termination in the case of an error. Or asynchronous coding capabilities. Or the ability to pass information to the rest of a computation. How would I achieve this?</p>\n<p>In Haskell, the answer is <em>obviously</em> <code>Monad</code>s. <code>do</code>-notation is a general purpose "programmable semicolon." It generally solves all of these cases. And many, many more. Writing a parser? <code>Monad</code>. Concurrency? 
Maybe <code>Monad</code>, or maybe <code>Applicative</code> with <code>ApplicativeDo</code> turned on. But the common factor: we can express large classes of problems as <code>do</code>-notation.</p>\n<p>How about Rust? Well, if you want early termination for errors, you'll use a <code>Result</code> return type and the <code>?</code> try operator. Async? <code>async</code>/<code>.await</code> syntax. Pass in information? Maybe use method syntax, maybe use thread-local state, maybe something else.</p>\n<p>The point is that the Haskell community overall reaches for generalizing a solution as far as possible, usually along the lines of some abstract mathematical underpinning. There are huge advantages to this. We build out solutions to problems we didn't even know we had. We are able to rely on mathematical laws to guide our designs and ensure concepts compose nicely.</p>\n<p>The Rust community, instead, favors specific, ergonomic solutions. Error handling is <em>really</em> common, so give it a single character operator. Make sure that it handles common cases, like unifying error types via the <code>From</code> trait. Make sure error messages are as clear as possible. Optimize for the 95%, and don't worry about the 5% yet. (And see the next section for the 5%.)</p>\n<p>To me, this is the deepest non-inherent divide between the languages. Sure, ownership versus purity is huge, but it's right there on the label of the languages. <strong>This distinction ends up impacting how new language features are added, how people generally think about solutions, and how libraries are designed.</strong></p>\n<p>One final point. As much as I've implied that the Rust and Haskell communities are in two camps here, that's not quite fair. There are people in the Haskell community looking to make more specific solutions to some problems. (I'm probably one of them with things like <code>RIO</code>.) 
And while I can't think of a concrete Rust example to the contrary, I have no doubt that there are cases where people design general solutions when a more specific one would suffice.</p>\n<h2 id=\"code-generation-metaprogramming-macros\">Code generation/metaprogramming/macros</h2>\n<p>Haskell has metaprogramming via Template Haskell (TH). It's almost universally viewed as a necessary evil, but evil nonetheless. It screws up compilation in some cases via stage restrictions, it requires a language pragma to enable, and introduces awkward syntax. Features like deriving serialization instances are generally moving towards in-language features via the <code>Generic</code> typeclass.</p>\n<p>Rust's "Hello World" sticks a macro call on the second line via <code>println!</code>. The syntax for calling macros looks almost identical to function calls. Common libraries encourage macro usage all over the place. <code>serde</code> serialization deriving, <code>structopt</code> command line parsing, and <code>snafu</code>/<code>thiserror</code> error type creation all leverage macro attributes and deriving.</p>\n<p>This is a fascinating distinction to me. I've been on both sides of the TH divide. Yesod famously uses TH for a lot of code generation, which has earned the ire of many Haskellers. I've since generally avoided using TH when possible in the past few years. And when I picked up Rust, I studiously avoided learning how to create macros until relatively recently, lest I be tempted to slip back into my old, evil ways.</p>\n<p>Metaprogramming definitely complicates some things. It makes it harder to debug some problems. Rust does a pretty good job at making sure error messages can be comprehensible. But documentation on macro arguments and return types is still not as nice as functions and methods.</p>\n<p>I think I'm still mostly in the Haskell camp of avoiding unnecessary metaprogramming in my API design, but I'm beginning to be more free with it. 
And I have no reservations in Rust about <em>using</em> macros; they're wonderful. I do wonder if the main issue in Haskell isn't the overall concept of metaprogramming, but the specific implementation with Template Haskell.</p>\n<h2 id=\"backwards-compatibility\">Backwards compatibility</h2>\n<p>Rust overall has a more coherent and consistent story around backwards compatibility. It's almost always painless to upgrade to new versions of the Rust compiler. This puts an extra burden on the compiler team, and constrains changes that can be made to the language. And in one case (the module system update), it required a new <code>edition</code> system to allow for full backwards compatibility.</p>\n<p>The Haskell community overall cares less about backwards compatibility. New versions of the compiler regularly break code. New versions of libraries will get released to smooth out rough edges in the APIs. (I used to do this regularly, and now regret that. I've tried hard to keep backwards compatibility in my libraries.)</p>\n<p>Overall, I think the Rust community's approach here is better for producing production software. Arguably the Haskell approach allows for much more exploration and attainment of some higher level of beauty. Or as they say, "avoid (success at all costs)."</p>\n<h2 id=\"optimistic-optimizations\">Optimistic optimizations</h2>\n<p>GHC has a powerful rewrite rules system, which can rewrite less efficient combinations of functions to more optimized ones. This plays in a big way in the <code>vector</code> package, where rewrite rules implement stream fusion, allowing many classes of vector pipelines to completely avoid allocation. This is a massive optimization. At least when it works. As I've personally experienced, and many others have too, rewrite rules can be finicky. The Haskell approach is to be happy that our code sometimes gets much faster, and that we get to keep elegant, easy-to-understand code.</p>\n<p>The Rust approach is the polar opposite. 
Either code will <em>definitely</em> be fast or <em>definitely</em> be slow. I learned this a while ago when looking into recursive functions and tail call optimization (TCO). The Rust compiler will <em>not</em> perform a TCO, because it's so easy to accidentally change a TCO-able implementation into something that eats up stack space. There are plans to make explicit tail calls possible with the <code>become</code> keyword someday.</p>\n<p>More generally, Rust embraces the concept of zero-cost abstractions. The idea is that you should be able to abstract and simplify code, when we can guarantee that there is no cost. In the Haskell world, we tend to focus on the elegant abstraction, even if a cost will be involved.</p>\n<h2 id=\"learning-curve\">Learning curve</h2>\n<p>A short one here. Both languages have a higher-than-average learning curve compared with other languages. Both languages embrace their learning curves. As much as possible, we try to make learning and using the languages easy. But neither language shies away from powerful features, even if it will make the language a bit harder to learn.</p>\n<p>To quote a Perlism: you'll only learn the language once, you'll use it for the rest of your life.</p>\n<h2 id=\"explicitly-mark-things\">Explicitly mark things</h2>\n<p>Both languages embrace the idea of explicitly marking things. For example, both languages encourage (in Haskell's case) or enforce (in Rust's case) marking the type signature of all functions. But that's pretty common. Haskell goes further, and requires that you mark all effectful computations with the <code>IO</code> type (or something similar, like <code>MonadIO</code>). Rust requires that anything which may fail be marked with a <code>Result</code> return value.</p>\n<p>You may argue that these are actually <em>differences</em> in the languages, and to some extent that's true. But I think the difference is about what the language considers important. 
Haskell, for reasons of purity, values deeply the idea that an effect may be performed. It then lumps errors and exceptions into the contract of <code>IO</code> and the concept of laziness (for better or worse). Rust, on the other hand, doesn't care if you may perform an effect, but deeply cares about whether an error may occur.</p>\n<h2 id=\"type-enforce-everything\">Type enforce <em>everything</em>?</h2>\n<p>When I initially implemented Haskell's <code>monad-logger</code>, I provided an instance for <code>IO</code> which performed no output. I received many complaints that people would rather get a compile time error if they forgot to initialize the logging system, and I removed the <code>IO</code> instance. (Without getting into details: this was <em>definitely</em> the right decision for the API, regardless of the distinction with Rust.)</p>\n<p>That's why I was so amused when I first used the <code>log</code> crate in Rust, and realized that if you don't initialize the logging system, it produces no output. There's no runtime error, just silence.</p>\n<p>Similarly, many functions in the Tokio crate will fail at runtime if run from outside of the context of a Tokio runtime. But nothing in the type system enforces this idea.</p>\n<p>And finally, I've been bitten a few times by <code>actix-web</code>'s state management. If you mismatch the type of the state between your handlers and your service declaration, you'll end up with a runtime error instead of a compile time bug.</p>\n<p>In the Haskell world, the overall philosophy is generally to approach "if it compiles, it works." Haskellers love enforcing almost every invariant at the type level.</p>\n<p>I haven't discussed this much with Rustaceans, but it seems to me that the overall Rust philosophy here is slightly different. Instead, we like to express <em>tricky</em> invariants at the type level. 
But if something is so obviously going to fail or behave incorrectly in the most basic smoke testing, such as a Tokio function crashing, there's no need to develop type-level protections against it.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hope this laundry list comparison was interesting. I've been meaning to write it down for a while, so I kind of feel like I checked off a New Year's Resolution in doing so. I'd be curious to hear any other points of comparison people have, or disagreements about my assessments.</p>\n<p>You may also like:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/rust-at-fpco-2020/\">Rust at FP Complete, 2020 update</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/\">Collect in Rust, traverse in Haskell and Scala</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/error-handling-is-hard/\">Error handling is hard</a></li>\n<li><a href=\"https://tech.fpcomplete.com/training/\">Training at FP Complete</a></li>\n<li><a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"https://tech.fpcomplete.com/haskell/syllabus/\">Applied Haskell syllabus</a></li>\n</ul>\n",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
"slug": "philosophies-rust-haskell",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Philosophies of Rust and Haskell",
"description": "As regular users of both Rust and Haskell, the FP Complete engineering team often discusses the similarities and differences in these languages. In this post, we'll review some of the philosophical underpinnings of these languages.",
"updated": null,
"date": "2021-01-11",
"year": 2021,
"month": 1,
"day": 11,
"taxonomies": {
"tags": [
"rust",
"haskell",
"insights"
],
"categories": [
"functional programming"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png",
"image": "images/blog/philosophies-rust-haskell.png"
},
"path": "/blog/philosophies-rust-haskell/",
"components": [
"blog",
"philosophies-rust-haskell"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "rust-ownership",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#rust-ownership",
"title": "Rust: ownership",
"children": []
},
{
"level": 2,
"id": "haskell-purely-functional",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#haskell-purely-functional",
"title": "Haskell: purely functional",
"children": []
},
{
"level": 2,
"id": "expression-oriented",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#expression-oriented",
"title": "Expression oriented",
"children": []
},
{
"level": 2,
"id": "type-system",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#type-system",
"title": "Type system",
"children": []
},
{
"level": 2,
"id": "traits-and-type-classes",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#traits-and-type-classes",
"title": "Traits and type classes",
"children": []
},
{
"level": 2,
"id": "general-vs-specific",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#general-vs-specific",
"title": "General vs specific",
"children": []
},
{
"level": 2,
"id": "code-generation-metaprogramming-macros",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#code-generation-metaprogramming-macros",
"title": "Code generation/metaprogramming/macros",
"children": []
},
{
"level": 2,
"id": "backwards-compatibility",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#backwards-compatibility",
"title": "Backwards compatibility",
"children": []
},
{
"level": 2,
"id": "optimistic-optimizations",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#optimistic-optimizations",
"title": "Optimistic optimizations",
"children": []
},
{
"level": 2,
"id": "learning-curve",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#learning-curve",
"title": "Learning curve",
"children": []
},
{
"level": 2,
"id": "explicitly-mark-things",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#explicitly-mark-things",
"title": "Explicitly mark things",
"children": []
},
{
"level": 2,
"id": "type-enforce-everything",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#type-enforce-everything",
"title": "Type enforce everything?",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2999,
"reading_time": 15,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/cloning-reference-method-calls.md",
"colocated_path": null,
"content": "<p>This semi-surprising corner case came up in some recent <a href=\"https://tech.fpcomplete.com/training/\">Rust training</a> I was giving. I figured a short write-up may help some others in the future.</p>\n<p>Rust's language design focuses on ergonomics. The goal is to make common patterns easy to write on a regular basis. This overall works out very well. But occasionally, you end up with a surprising outcome. And I think this situation is a good example.</p>\n<p>Let's start off by pretending that method syntax doesn't exist at all. Let's say I've got a <code>String</code>, and I want to clone it. I know that there's a <code>Clone::clone</code> method, which takes a <code>&String</code> and returns a <code>String</code>. We can leverage that like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn uses_string(x: String) {\n println!("I consumed the String! {}", x);\n}\n\nfn main() {\n let name = "Alice".to_owned();\n let name_clone = Clone::clone(&name);\n uses_string(name);\n uses_string(name_clone);\n}\n</code></pre>\n<p>Notice that I needed to pass <code>&name</code> to <code>clone</code>, not simply <code>name</code>. If I did the latter, I would end up with a type error:</p>\n<pre><code>error[E0308]: mismatched types\n --> src\\main.rs:7:35\n |\n7 | let name_clone = Clone::clone(name);\n | ^^^^\n | |\n | expected reference, found struct `String`\n | help: consider borrowing here: `&name`\n</code></pre>\n<p>And that's because Rust won't automatically borrow a reference from function arguments. You need to explicitly say that you want to borrow the value. Cool.</p>\n<p>But now I've remembered that method syntax <em>is</em>, in fact, a thing. 
So let's go ahead and use it!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let name_clone = (&name).clone();\n</code></pre>\n<p>Remembering that <code>clone</code> takes a <code>&String</code> and not a <code>String</code>, I've gone ahead and helpfully borrowed from <code>name</code> before calling the <code>clone</code> method. And I needed to wrap up that whole expression in parentheses, otherwise it will be parsed incorrectly by the compiler.</p>\n<p>That all works, but it's clearly not the way we want to write code in general. Instead, we'd like to forgo the parentheses and the <code>&</code> symbol. And fortunately, we can! Most Rustaceans early on learn that you can simply do this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let name_clone = name.clone();\n</code></pre>\n<p>In other words, when we use method syntax, we can call <code>.clone()</code> on either a <code>String</code> <em>or</em> a <code>&String</code>. That's because with a <a href=\"https://doc.rust-lang.org/stable/reference/expressions/method-call-expr.html\">method call expression</a>, "the receiver may be automatically dereferenced or borrowed in order to call a method." Essentially, the compiler follows these steps:</p>\n<ul>\n<li>What's the type of <code>name</code>? OK, it's a <code>String</code></li>\n<li>Is there a method available that takes a <code>String</code> as the receiver? Nope.</li>\n<li>OK, try borrowing it. Is there a method available that takes a <code>&String</code> as the receiver? Yes. Use that!</li>\n</ul>\n<p>And, for the most part, this works exactly as you'd expect. Until it doesn't. Let's start off with a confusing error message. 
Let's say I've got a helper function to loudly clone a <code>String</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn clone_loudly(x: &String) -> String {\n println!("Cloning {}", x);\n x.clone()\n}\n\nfn uses_string(x: String) {\n println!("I consumed the String! {}", x);\n}\n\nfn main() {\n let name = "Alice".to_owned();\n let name_clone = clone_loudly(&name);\n uses_string(name);\n uses_string(name_clone);\n}\n</code></pre>\n<p>Looking at <code>clone_loudly</code>, I realize that I can easily generalize this to more than just a <code>String</code>. The only two requirements are that the type must implement <code>Display</code> (for the <code>println!</code> call) and <code>Clone</code>. Let's go ahead and implement that, accidentally forgetting about the <code>Clone</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::fmt::Display;\nfn clone_loudly<T: Display>(x: &T) -> T {\n println!("Cloning {}", x);\n x.clone()\n}\n</code></pre>\n<p>As you'd expect, this doesn't compile. However, the error message given may be surprising. If you're like me, you were probably expecting an error message about missing a <code>Clone</code> bound on <code>T</code>. In fact, we get something else entirely:</p>\n<pre><code>error[E0308]: mismatched types\n --> src\\main.rs:4:5\n |\n2 | fn clone_loudly<T: Display>(x: &T) -> T {\n | - this type parameter - expected `T` because of return type\n3 | println!("Cloning {}", x);\n4 | x.clone()\n | ^^^^^^^^^ expected type parameter `T`, found `&T`\n |\n = note: expected type parameter `T`\n found reference `&T`\n</code></pre>\n<p>Strangely enough, the <code>.clone()</code> seems to have succeeded, but returned a <code>&T</code> instead of a <code>T</code>. 
That's because the method call expression is following the same steps as above with <code>String</code>, namely:</p>\n<ul>\n<li>What's the type of <code>x</code>? OK, it's a <code>&T</code></li>\n<li>Is there a <code>clone</code> method available that takes a <code>&T</code> as the receiver? Nope, since we don't know that <code>T</code> implements the <code>Clone</code> trait.</li>\n<li>OK, try borrowing it. Is there a method available that takes a <code>&&T</code> as the receiver? <a href=\"https://doc.rust-lang.org/1.48.0/src/core/clone.rs.html#222-227\">Interestingly yes</a>.</li>\n</ul>\n<p>Let's dig in on that <code>Clone</code> implementation a bit. Removing a bit of noise so we can focus on the important bits:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<T> Clone for &T {\n fn clone(self: &&T) -> &T {\n *self\n }\n}\n</code></pre>\n<p>Since references are <code>Copy</code>able, derefing a reference to a reference results in copying the inner reference value. What I find fascinating, and slightly concerning, is that we have two orthogonal features in the language:</p>\n<ul>\n<li>Method call syntax automatically causing borrows</li>\n<li>The ability to implement traits for both a type and a reference to that type</li>\n</ul>\n<p>When combined, there's some level of ambiguity about <em>which</em> trait implementation will end up being used.</p>\n<p>In this example, we're fortunate that the code didn't compile. We ended up with nothing more than a confusing error message. I haven't yet run into a real life issue where this behavior can result in code which compiles but does the wrong thing. It's certainly theoretically possible, but seems unlikely to occur unintentionally. That said, if anyone has been bitten by this, I'd be very interested to hear the details.</p>\n<p>So the takeaway: autoborrowing and derefing as part of method call syntax is a great feature of the language. 
It would be a major pain to use Rust without it. I'm glad it's present. Having traits implemented for references is a great feature, and I wouldn't want to use the language without it.</p>\n<p>But every once in a while, these two things bite us. Caveat emptor.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/cloning-reference-method-calls/",
"slug": "cloning-reference-method-calls",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloning a reference and method call syntax in Rust",
"description": "A short example of a possibly surprising impact of how method resolution works in Rust",
"updated": null,
"date": "2020-12-28",
"year": 2020,
"month": 12,
"day": 28,
"taxonomies": {
"categories": [
"functional programming",
"rust"
],
"tags": [
"rust"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png",
"image": "images/blog/method-syntax-autoborrow-surprise.png",
"author_avatar": "/images/leaders/michael-snoyman.png"
},
"path": "/blog/cloning-reference-method-calls/",
"components": [
"blog",
"cloning-reference-method-calls"
],
"summary": null,
"toc": [],
"word_count": 975,
"reading_time": 5,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": []
},
{
"relative_path": "blog/pattern-matching.md",
"colocated_path": null,
"content": "<p>I first started writing Haskell about 15 years ago. My learning curve for the language was haphazard at best. In many cases, I learnt concepts by osmosis, and only later learned the proper terminology and details around them. One of the prime examples of this is pattern matching. Using a <code>case</code> expression in Haskell, or a <code>match</code> expression in Rust, always felt natural. But it took years to realize that patterns appeared in other parts of the languages than just these expressions, and what terms like <em>irrefutable</em> meant.</p>\n<p>It's quite possible most Haskellers and Rustaceans will consider this content obvious. But maybe there are a few others like me out there who never had a chance to realize how ubiquitous patterns are in these languages. This post may also be a fun glimpse into either Haskell or Rust if you're only familiar with one of the languages.</p>\n<h2 id=\"language-references\">Language references</h2>\n<p>Both Haskell and Rust have language references available online. The caveats are that the Rust reference is marked as incomplete, and the Haskell language reference is for Haskell2010, which GHC does not strictly adhere to. That said, both are readily understandable and complete enough to get a very good intuition. If you've never looked at either of these documents, I highly recommend having a peek.</p>\n<ul>\n<li><a href=\"https://www.haskell.org/onlinereport/haskell2010/haskellch3.html#x8-580003.17\">Haskell 2010 Language Report, section 3.17 Pattern Matching</a></li>\n<li><a href=\"https://doc.rust-lang.org/stable/reference/patterns.html#range-patterns\">Rust language reference, Patterns</a></li>\n</ul>\n<h2 id=\"case-and-match\">case and match</h2>\n<p>The first place most of us hear the term "pattern matching" is in Haskell's <code>case</code> expression, or Rust's <code>match</code> expression. And it makes perfect sense here. 
We can provide multiple <em>patterns</em>, typically based on a data constructor/variant, and the language will match the most appropriate one. Slightly tying in with <a href=\"https://tech.fpcomplete.com/blog/error-handling-is-hard/\">my previous post on errors</a>, let's look at a common example: pattern matching on an <code>Either</code> value in Haskell.</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">mightFail :: Either String Int\n\nmain =\n case mightFail of\n Left err -> putStrLn $ "Error occurred: " ++ err\n Right x -> putStrLn $ "Successful result: " ++ show x\n</code></pre>\n<p>Or a <code>Result</code> value in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn might_fail() -> Result<i32, String> { ... }\n\nfn main() {\n match might_fail() {\n Err(err) => println!("Error occurred: {}", err),\n Ok(x) => println!("Successful result: {}", x),\n }\n}\n</code></pre>\n<p>I think most programmers, even those unfamiliar with these languages, could intuit to some extent what these expressions do. <code>mightFail</code> and <code>might_fail()</code> return some kind of value. The value may be in multiple different "states." The patterns match, and we branch our behavior depending on which state. Easy enough.</p>\n<p>Already here, though, there's an important detail many of us gloss over. Or at least I did. Our patterns not only <em>match a constructor</em>, they also <em>bind a variable</em>. In the examples above, we bind the variables <code>err</code> and <code>x</code> to values contained by the data constructors. And that's pretty interesting, because both Haskell and Rust <em>also</em> use <code>let</code> bindings for defining variables. 
I wonder if there's some kind of connection there.</p>\n<p><em>Narrator: there was a connection</em></p>\n<h2 id=\"functions-in-haskell\">Functions in Haskell</h2>\n<p>Haskell immediately adds a curve ball (in a good way) to this story. Let's take a classic recursive definition of a factorial function (note: this isn't a <em>good</em> definition since it has a space leak).</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">fact :: Int -> Int\nfact i =\n case i of\n 0 -> 1\n _ -> i * fact (i - 1)\n</code></pre>\n<p>This feels a bit verbose. We capture the variable <code>i</code>, only to immediately pattern match on it. We also have a <em>new</em> kind of pattern, <code>_</code>. When I first learned Haskell, I thought of <code>_</code> as "a variable I don't care about." But it's actually more specialized than this: a wildcard pattern, something which matches anything. (We'll get into what variables match later.)</p>\n<p>Anyway, to make this kind of code a bit terser, Haskell offers a different way of writing this function:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">fact :: Int -> Int\nfact 0 = 1\nfact i = i * fact (i - 1)\n</code></pre>\n<p>These two versions of the code are identical. It's just a syntactic trick. 
Let's see another more interesting syntactic trick.</p>\n<h2 id=\"what-about-let\">What about let?</h2>\n<p>We use <code>let</code> expressions (and <code>let</code> bindings in <code>do</code>-notation) in Haskell to create new variables, e.g.:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main =\n let name = "Alice"\n in putStrLn $ "Hello, " ++ name\n</code></pre>\n<p>And we do the same with <code>let</code> statements in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let name = "Alice";\n println!("Hello, {}", name);\n}\n</code></pre>\n<p>But here's where we begin to get a bit fancy. We already saw that we can bind variables in <code>case</code> and <code>match</code> expressions. Does that mean we can do away with the <code>let</code>s? Yes we can!</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main =\n case "Alice" of\n name -> putStrLn $ "Hello, " ++ name\n</code></pre>\n<p>And</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n match "Alice" {\n name => println!("Hello, {}", name)\n }\n}\n</code></pre>\n<p>This isn't <em>good</em> code per se. In fact, <code>cargo clippy</code> will complain about it. But it does hint at the fact that there's a deeper connection between two constructs. And the connection is this: the left hand side of the equals sign in a <code>let</code> statement/expression/binding is a <em>pattern</em>.</p>\n<h2 id=\"ditch-the-case-ditch-the-match\">Ditch the case! Ditch the match!</h2>\n<p>Alright, so we can technically get rid of <code>let</code>s if we wanted to (which we don't). Can we get rid of the <code>case</code> expressions in Haskell? The real answer is "definitely not." 
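</p>\n<p>(A quick aside before answering: the pattern-on-the-left-of-<code>let</code> trick is genuinely useful when the pattern is irrefutable — everyday tuple destructuring is exactly this. A minimal sketch:)</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n // The left hand side of a let is a pattern, so a tuple\n // pattern binds both components at once.\n let (name, age) = ("Alice", 30);\n println!("{} is {} years old", name, age);\n}\n</code></pre>\n<p>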
But interestingly, this code compiles!</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">mightFail :: Either String Int\nmightFail = Left "It failed"\n\nmain :: IO ()\nmain =\n let Right x = mightFail\n in putStrLn $ "Successful result: " ++ show x\n</code></pre>\n<p>As mentioned, we can put a pattern on the left hand side of the equals sign. And we've done just that here. But what on Earth does this code <em>do</em>? As you can see, the <code>mightFail</code> expression will evaluate to a <code>Left</code> value. But our pattern only matches on <code>Right</code> values! Running this code gives us:</p>\n<pre><code>Main.hs:10:9-27: Non-exhaustive patterns in Right x\n</code></pre>\n<p>Haskell is a non-strict language. Performing this binding is allowed. But evaluating the result of this binding blows up.</p>\n<p>Rust, however, <strong>is</strong> a strict language. We can do something very similar in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let Ok(x) = might_fail();\n println!("Successful result: {}", x);\n}\n</code></pre>\n<p>But this code won't even compile:</p>\n<pre><code>error[E0005]: refutable pattern in local binding: `Err(_)` not covered\n...\n = note: `let` bindings require an "irrefutable pattern", like a `struct` or an `enum` with only one variant\n = note: for more information, visit https://doc.rust-lang.org/book/ch18-02-refutability.html\n = note: the matched value is of type `std::result::Result<i32, std::string::String>`\nhelp: you might want to use `if let` to ignore the variant that isn't matched\n</code></pre>\n<p>Let's dive into those "exhaustive" and "refutable" concepts, and then round out this post with a glance at where else patterns appear in these languages.</p>\n<p>Side note: it's true that the Haskell code above compiles. 
However, if you turn on the <a href=\"https://ghc.gitlab.haskell.org/ghc/doc/users_guide/using-warnings.html#ghc-flag--Wincomplete-uni-patterns\"><code>-Wincomplete-uni-patterns</code> warning</a>, you'll get a warning about this. I personally think this warning should be included in <code>-Wall</code>.</p>\n<h2 id=\"refutable-and-irrefutable-exhaustive-and-non-exhaustive\">Refutable and irrefutable, exhaustive and non-exhaustive</h2>\n<p>This topic is quite a bit more complicated in Haskell due to non-strictness. How matching works in the presence of "bottom" or undefined values is an entire extra wrench of complication. I'm going to ignore those cases entirely here. If you're interested in more information on this, my article <a href=\"https://tech.fpcomplete.com/haskell/tutorial/all-about-strictness/\">All about strictness</a> discusses some of these points.</p>\n<p>Some patterns will <em>always</em> match a value. The simplest example of this is a wildcard. In fact, that's basically its definition. Quoting the Rust reference:</p>\n<blockquote>\n<p>The <em>wildcard pattern</em> (an underscore symbol) matches any value.</p>\n</blockquote>\n<p>And fortunately for us, things behave exactly the same way in Haskell.</p>\n<p>Another pattern that matches any given value is a variable. <code>let x = blah</code> is a valid binding, regardless of what <code>blah</code> is. Both of these are known as <em>irrefutable</em> patterns.</p>\n<p>By contrast, some patterns are refutable. They are patterns that only match some possible cases of the value, not all. The simplest example is the one we saw before: matching on one of many data constructors/variants in a data type (Haskell) or enum (Rust).</p>\n<p>Contrasting yet again: if you have a <code>struct</code> in Rust, or a Rust <code>enum</code> with only one variant, or a Haskell <code>data</code> with only one data constructor, or a Haskell <code>newtype</code>, the pattern will always match. 
That is, of course, assuming any patterns nested within will <em>also</em> always match. To demonstrate, this pattern match is irrefutable:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Foo = Foo Bar\ndata Bar = Baz Int\n\nmain :: IO ()\nmain =\n let Foo (Baz x) = Foo (Baz 5)\n in putStrLn $ "x == " ++ show x\n</code></pre>\n<p>However, if I add another data constructor to <code>Bar</code>, it becomes refutable:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Foo = Foo Bar\ndata Bar = Baz Int | Bin Char\n\nmain :: IO ()\nmain =\n let Foo (Baz x) = Foo (Bin 'c')\n in putStrLn $ "x == " ++ show x\n</code></pre>\n<p>In both Haskell and Rust, tuples behave like data types with one constructor, and therefore as long as the patterns inside of them are irrefutable, they are irrefutable too.</p>\n<p>The final case I want to point out is <em>literal patterns</em>. Literal patterns are very much refutable. This code thankfully does not compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let 'x' = 'a';\n}\n</code></pre>\n<p>But the really interesting thing for someone not used to pattern matching is that you can do this at all! We've already done pattern matching on literal values above, in our definition of <code>fact</code>. It's very convenient to be able to build up complex case/match expressions using literal syntax (like list/slice syntax).</p>\n<p>Alright, let's see a few more examples of where patterns are used in these languages, then tie it up.</p>\n<h2 id=\"function-arguments\">Function arguments</h2>\n<p>Function arguments are patterns in both languages. In Haskell we saw that you can use <em>refutable</em> patterns, and provide multiple function clauses. The same doesn't apply to Rust functions. 
You'll need to use an irrefutable pattern in the function, and then do some pattern matching or other kind of branching in the body of the function. For example, the poorly written <code>fact</code> function can be rewritten in Rust as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn fact(i: u32) -> u32 {\n if i == 0 {\n 1\n } else {\n i * fact(i - 1)\n }\n}\n\nfn main() {\n println!("5! == {}", fact(5));\n}\n</code></pre>\n<p>Perhaps more interestingly, in both languages you can use a pattern that matches a data structure in the function argument. For example, in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct Person {\n name: String,\n age: u32,\n}\n\nfn greet(Person { name, age }: &Person) {\n println!("{} is {} years old", name, age);\n}\n\nfn main() {\n let alice = Person {\n name: "Alice".to_owned(),\n age: 30,\n };\n greet(&alice);\n}\n</code></pre>\n<p>Or in Haskell, using positional instead of named fields:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">data Person = Person String Int\n\ngreet :: Person -> IO ()\ngreet (Person name age) = putStrLn $ name ++ " is " ++ show age ++ " years old"\n\nmain :: IO ()\nmain = greet $ Person "Alice" 30\n</code></pre>\n<h2 id=\"closures-functions-and-lambdas\">Closures, functions, and lambdas</h2>\n<p>The arguments to closures (Rust) and lambdas (Haskell) are patterns. That means we can match on irrefutable things like tuples fairly easily:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let greet = |(name, age)| println!("{} is {} years old", name, age);\n greet(("Alice", 30));\n}\n</code></pre>\n<p>The big difference is that, in Rust, the pattern must be irrefutable. This is again due to strictness. 
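</p>\n<p>Irrefutable patterns richer than a tuple work in closure parameters too. Here's a small sketch (the <code>Person</code> struct mirrors the one above) destructuring a struct directly in the parameter; note that closure parameters usually need an explicit type annotation for this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct Person {\n name: String,\n age: u32,\n}\n\nfn main() {\n // A struct pattern has only one "shape", so it is irrefutable\n // and therefore allowed as a closure parameter.\n let greet = |Person { name, age }: Person| format!("{} is {} years old", name, age);\n assert_eq!(greet(Person { name: "Alice".to_owned(), age: 30 }), "Alice is 30 years old");\n}\n</code></pre>\n<p>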
The following code will compile in Haskell, but fail at runtime:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main :: IO ()\nmain =\n let mylambda = \\(Right x) -> putStrLn x\n in mylambda (Left "Error!")\n</code></pre>\n<p>Again, <code>-Wincomplete-uni-patterns</code> will warn about this. But again, it's not on by default.</p>\n<p>By contrast, in Rust, the equivalent code will fail to compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let myclosure = |Ok(x): Result<i32, &str>| println!("{}", x);\n myclosure(Err("Hello"));\n}\n</code></pre>\n<p>This produces:</p>\n<pre><code>error[E0005]: refutable pattern in function argument: `Err(_)` not covered\n</code></pre>\n<p>And if you're wondering: I needed to add the explicit <code>: Result<i32, &str></code> type annotation to help type inference get to that error message. Without it, it just complained that it couldn't infer the type of <code>x</code>.</p>\n<h2 id=\"if-let-while-let-and-for-rust\">if let, while let, and for (Rust)</h2>\n<p>The <code>if let</code> and <code>while let</code> expressions are all about refutable pattern matches. "Only do this if the pattern matches" and "keep doing this while the pattern matches." 
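</p>\n<p>(Relatedly, when all you need is a boolean answer to "does this refutable pattern match?", Rust's <code>matches!</code> macro — stable since 1.42 — does exactly that:)</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let result: Result<i32, String> = Err("boom".to_owned());\n // matches! evaluates to true if and only if the pattern matches.\n assert!(matches!(result, Err(_)));\n assert!(!matches!(result, Ok(_)));\n}\n</code></pre>\n<p>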
<code>if let</code> looks something like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let result: Result<(), String> = Err("Something happened".to_owned());\n if let Err(e) = result {\n eprintln!("Something went wrong: {}", e);\n }\n}\n</code></pre>\n<p>And with <code>while let</code>, you can make something close to a <code>for</code> loop:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let mut iter = 1..=10;\n while let Some(i) = iter.next() {\n println!("i == {}", i);\n }\n}\n</code></pre>\n<p>And speaking of <code>for</code> loops, the left hand side of the <code>in</code> keyword is a pattern. This can be really nice for cases like destructuring the tuple generated by the <code>enumerate()</code> method:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n for (idx, c) in "Hello, world!".chars().enumerate() {\n println!("{}: {}", idx, c);\n }\n}\n</code></pre>\n<p>The patterns in a <code>for</code> loop must be irrefutable. This code won't compile:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let array = [Ok(1), Ok(2), Err("something"), Ok(3)];\n for Ok(x) in &array {\n println!("x == {}", x);\n }\n}\n</code></pre>\n<p>Instead, if you want to exit the <code>for</code> loop at the first <code>Err</code> value, you would need to do something like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let array = [Ok(1), Ok(2), Err("something"), Ok(3)];\n for x in &array {\n match x {\n Ok(x) => println!("x == {}", x),\n Err(_) => break,\n }\n }\n}\n</code></pre>\n<h2 id=\"where-they-re-used\">Where they're used</h2>\n<p>This was not intended to be a complete explanation of all examples of patterns in these languages. 
However, for a bit of completeness, let me quote the Haskell language specification for where patterns are part of the language:</p>\n<blockquote>\n<p>Patterns appear in lambda abstractions, function definitions, pattern bindings, list comprehensions, do expressions, and case expressions. However, the first five of these ultimately translate into case expressions, so defining the semantics of pattern matching for case expressions is sufficient.</p>\n</blockquote>\n<p>And similarly for Rust:</p>\n<blockquote>\n<ul>\n<li>let declarations</li>\n<li>Function and closure parameters</li>\n<li>match expressions</li>\n<li>if let expressions</li>\n<li>while let expressions</li>\n<li>for expressions</li>\n</ul>\n</blockquote>\n<p>There are also more advanced examples of patterns that I haven't touched on at all. Reference patterns in Rust would be relevant here, as would lazy patterns in Haskell.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hoped this gave a little bit of insight into the value of patterns. For me, the important takeaway is:</p>\n<ul>\n<li>Patterns appear in lots of places</li>\n<li>The difference between refutable and irrefutable patterns</li>\n<li>There are some places where you must use irrefutable patterns</li>\n<li>There are some places where Haskell lets you use refutable patterns, but you shouldn't</li>\n<li>Variable binding is just one special case of patterns</li>\n</ul>\n<p>If you're interested in learning more about either Haskell or Rust, check out our <a href=\"https://tech.fpcomplete.com/haskell/syllabus/\">Haskell syllabus</a> or our <a href=\"https://tech.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a>. FP Complete also offers both corporate and public training classes on both Haskell and Rust. 
If you're interested in learning more, please <a href=\"https://tech.fpcomplete.com/contact-us/\">contact us for details</a>.</p>\n<div class=\"text-center\">\n<a class=\"button-coral\" href=\"/training/\">Learn about Rust training</a>\n</div>\n",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/",
"slug": "pattern-matching",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Pattern matching",
"description": "Pattern matching is a central feature of some programming languages, notably both Rust and Haskell. But patterns may be even more central than you realize. We'll look at some details in this post.",
"updated": null,
"date": "2020-12-14",
"year": 2020,
"month": 12,
"day": 14,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"rust",
"haskell",
"insights"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/functional.png"
},
"path": "/blog/pattern-matching/",
"components": [
"blog",
"pattern-matching"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "language-references",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#language-references",
"title": "Language references",
"children": []
},
{
"level": 2,
"id": "case-and-match",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#case-and-match",
"title": "case and match",
"children": []
},
{
"level": 2,
"id": "functions-in-haskell",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#functions-in-haskell",
"title": "Functions in Haskell",
"children": []
},
{
"level": 2,
"id": "what-about-let",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#what-about-let",
"title": "What about let?",
"children": []
},
{
"level": 2,
"id": "ditch-the-case-ditch-the-match",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#ditch-the-case-ditch-the-match",
"title": "Ditch the case! Ditch the match!",
"children": []
},
{
"level": 2,
"id": "refutable-and-irrefutable-exhaustive-and-non-exhaustive",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#refutable-and-irrefutable-exhaustive-and-non-exhaustive",
"title": "Refutable and irrefutable, exhaustive and non-exhaustive",
"children": []
},
{
"level": 2,
"id": "function-arguments",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#function-arguments",
"title": "Function arguments",
"children": []
},
{
"level": 2,
"id": "closures-functions-and-lambdas",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#closures-functions-and-lambdas",
"title": "Closures, functions, and lambdas",
"children": []
},
{
"level": 2,
"id": "if-let-while-let-and-for-rust",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#if-let-while-let-and-for-rust",
"title": "if let, while let, and for (Rust)",
"children": []
},
{
"level": 2,
"id": "where-they-re-used",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#where-they-re-used",
"title": "Where they're used",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2421,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
"title": "Philosophies of Rust and Haskell"
}
]
},
{
"relative_path": "blog/monads-gats-nightly-rust.md",
"colocated_path": null,
"content": "<p>This blog post was entirely inspired by reading the <a href=\"https://www.reddit.com/r/rust/comments/k4vzvp/gats_on_nightly/\">GATs on Nightly!</a> Reddit post by /u/C5H5N5O. I just decided to take things a little bit too far, and thought a blog post on it would be fun. I want to be clear from the start: I'm introducing some advanced concepts in this post that rely on unstable features in Rust. I'm not advocating their usage <em>at all</em>. I'm just exploring what may and may not be possible with GATs.</p>\n<p>Rust shares many similarities with Haskell at the type system level. Both have types, generic types, associated types, and traits/type classes (which are basically equivalent). However, Haskell has one important additional feature that is lacking in Rust: Higher Kinded Types (HKTs). This isn't an accidental limitation in Rust, or some gap that should be filled in. It's an intentional design decision, at least as far as I know. But as a result, until now some things couldn't really be implemented in Rust.</p>\n<p>Take, for instance, a <code>Functor</code> in Haskell. For all of its scary-sounding name, almost all developers today are familiar with the concept of a <code>Functor</code>. A <code>Functor</code> provides a general purpose interface for "map a function over this structure." Many different structures in Rust can provide such mapping functionality, including <code>Option</code>, <code>Result</code>, <code>Iterator</code>, and <code>Future</code>.</p>\n<p>However, it hasn't been possible to write a general purpose <code>Functor</code> trait that can be implemented by multiple types. Instead, individual types can implement mapping as a method on that type. 
For example, we can write our own custom <code>MyOption</code> and <code>MyResult</code> enums and provide <code>map</code> methods:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Debug, PartialEq)]\nenum MyOption<A> {\n Some(A),\n None,\n}\n\nimpl<A> MyOption<A> {\n fn map<F: FnOnce(A) -> B, B>(self, f: F) -> MyOption<B> {\n match self {\n MyOption::Some(a) => MyOption::Some(f(a)),\n MyOption::None => MyOption::None,\n }\n }\n}\n\n#[test]\nfn test_option_map() {\n assert_eq!(MyOption::Some(5).map(|x| x + 1), MyOption::Some(6));\n assert_eq!(MyOption::None.map(|x: i32| x + 1), MyOption::None);\n}\n\n#[derive(Debug, PartialEq)]\nenum MyResult<A, E> {\n Ok(A),\n Err(E),\n}\n\nimpl<A, E> MyResult<A, E> {\n fn map<F: FnOnce(A) -> B, B>(self, f: F) -> MyResult<B, E> {\n match self {\n MyResult::Ok(a) => MyResult::Ok(f(a)),\n MyResult::Err(e) => MyResult::Err(e),\n }\n }\n}\n\n#[test]\nfn test_result_map() {\n assert_eq!(MyResult::Ok(5).map(|x| x + 1), MyResult::Ok::<i32, ()>(6));\n assert_eq!(MyResult::Err("hello").map(|x: i32| x + 1), MyResult::Err("hello"));\n}\n</code></pre>\n<p>However, it hasn't been possible without GATs to define <code>map</code> as a trait method. Let's see why. 
Here's a naive approach to a "monomorphic functor" trait, and an implementation for <code>Option</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">/// Monomorphic functor trait\ntrait MonoFunctor {\n type Unwrapped; // value "contained inside"\n fn map<F>(self, f: F) -> Self\n where\n F: FnMut(Self::Unwrapped) -> Self::Unwrapped;\n}\n\nimpl<A> MonoFunctor for Option<A> {\n type Unwrapped = A;\n fn map<F: FnMut(A) -> A>(self, mut f: F) -> Option<A> {\n match self {\n Some(a) => Some(f(a)),\n None => None,\n }\n }\n}\n</code></pre>\n<p>In our trait definition, we define an associated type <code>Unwrapped</code>, for the value that lives "inside" the <code>MonoFunctor</code>. In the case of <code>Option<A></code>, that would be <code>A</code>. And herein lies the problem. We're hard-coding the <code>Unwrapped</code> to just one type, <code>A</code>. Usually with a <code>map</code> function, we want to change the type to <code>B</code>. But we have no way in current, stable Rust to say "I want a type that's associated with this <code>MonoFunctor</code>, but also a little bit different in what lives inside of it."</p>\n<p>That's where Generic Associated Types come in.</p>\n<h2 id=\"polymorphic-functor\">Polymorphic Functor</h2>\n<p>In order to get a polymorphic functor, we need to be able to say "here's how my type would look if I wrapped up a <em>different</em> type inside of it." For example, with <code>Option</code>, we'd like to say "hey, I've got <code>Option<A></code>, and it contains an <code>A</code> type, but if it contained a <code>B</code> type instead, it would be <code>Option<B></code>." 
To do this, we're going to use the generic associated type <code>Wrapped<B></code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Functor {\n type Unwrapped;\n type Wrapped<B>: Functor;\n\n fn map<F, B>(self, f: F) -> Self::Wrapped<B>\n where\n F: FnMut(Self::Unwrapped) -> B;\n}\n</code></pre>\n<p>So what we're saying is:</p>\n<ul>\n<li>Each functor has an associated type <code>Unwrapped</code>, which is the thing it contains</li>\n<li>When we know a functor, we can also figure out another, associated type <code>Wrapped<B></code> which is "like <code>Self</code>, but has a different wrapped up value underneath"</li>\n<li>Like before, <code>map</code> is a method that takes two parameters: <code>self</code> and a function</li>\n<li>The function parameter will map from the current underlying <code>Unwrapped</code> value to some new type <code>B</code></li>\n<li>And the output of <code>map</code> will be a <code>Wrapped<B></code></li>\n</ul>\n<p>That's a bit abstract. Let's see what this looks like for the <code>Option</code> type:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<A> Functor for Option<A> {\n type Unwrapped = A;\n type Wrapped<B> = Option<B>;\n\n fn map<F: FnMut(A) -> B, B>(self, mut f: F) -> Option<B> {\n match self {\n Some(x) => Some(f(x)),\n None => None,\n }\n }\n}\n\n#[test]\nfn test_option_map() {\n assert_eq!(Some(5).map(|x| x + 1), Some(6));\n assert_eq!(None.map(|x: i32| x + 1), None);\n}\n</code></pre>\n<p>And if you play with all of the type gymnastics, you'll see that this ends up being identical to the <code>map</code> method we special-cased for <code>MyOption</code> above (sans a difference between <code>FnOnce</code> and <code>FnMut</code>). Cool!</p>\n<h3 id=\"side-note-hkts\">Side note: HKTs</h3>\n<p>In Haskell, none of this generic associated type business is needed. 
In fact, Haskell <code>Functor</code>s don't use <em>any</em> associated types. The typeclass for <code>Functor</code> in Haskell far predates the presence of associated types in the language. For comparison, let's see what that looks like, renaming a bit to match up with Rust:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">class Functor f where\n map :: (a -> b) -> f a -> f b\ninstance Functor Option where\n map f option =\n case option of\n Some x -> Some (f x)\n None -> None\n</code></pre>\n<p>Or, to translate it into Rust-like syntax:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait HktFunctor {\n fn map<A, B, F: FnMut(A) -> B>(self: Self<A>, f: F) -> Self<B>;\n}\n\nimpl HktFunctor for Option {\n fn map<A, B, F: FnMut(A) -> B>(self: Option<A>, f: F) -> Option<B> {\n match self {\n Some(a) => Some(f(a)),\n None => None,\n }\n }\n}\n</code></pre>\n<p>But this isn't valid Rust! That's because we're trying to provide type parameters to <code>Self</code>. But in Rust, <code>Option</code> isn't a type. <code>Option</code> must be applied to a single type parameter before it becomes a type. <code>Option<i32></code> is a type. <code>Option</code> on its own is not.</p>\n<p>By contrast, in Haskell, <code>Maybe Int</code> is a type of <em>kind</em> <code>Type</code>. <code>Maybe</code> is a <em>type constructor</em>, of <em>kind</em> <code>Type -> Type</code>. But you can treat <code>Maybe</code> as a type of its own for purposes of creating type classes and instances. <code>Functor</code> in Haskell works on the kind <code>Type -> Type</code>. This is what we mean by "higher kinded types": we can have types whose <em>kind</em> is higher than just <code>Type</code>.</p>\n<p>For the examples below, GATs in Rust serve as a workaround for this lack of HKTs. But as we'll ultimately see, they are more brittle and more verbose. 
That's not to say that GATs are a Bad Thing, far from it. It <em>is</em> to say that trying to write Haskell in Rust is probably not a good idea.</p>\n<p>OK, now that we've thoroughly established that what we're about to do isn't a great idea... let's do it!</p>\n<h2 id=\"pointed\">Pointed</h2>\n<p>There's a controversial typeclass in Haskell called <code>Pointed</code>. It's controversial because it introduces a typeclass without any laws associated with it, which many people frown upon. But since I already told you this is all a bad idea, let's implement <code>Pointed</code>.</p>\n<p>The idea of <code>Pointed</code> is simple: wrap up a value into a <code>Functor</code>-like thing. So in the case of <code>Option</code>, it would be like wrapping it with <code>Some</code>. For a <code>Result</code>, it's <code>Ok</code>. And for a <code>Vec</code>, it would be a single-element vector. Unlike <code>Functor</code>, this will be a static method, since we don't have an existing <code>Pointed</code> value to change. Let's see it in action:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Pointed: Functor {\n fn wrap<T>(t: T) -> Self::Wrapped<T>;\n}\n\nimpl<A> Pointed for Option<A> {\n fn wrap<T>(t: T) -> Option<T> {\n Some(t)\n }\n}\n</code></pre>\n<p>What's particularly interesting about this is that we don't use the <code>A</code> type parameter in the <code>Option</code> implementation at all.</p>\n<p>There's one more thing worth noting. The result of calling <code>wrap</code> is a <code>Self::Wrapped<T></code> value. What exactly do we know about <code>Self::Wrapped<T></code>? Well, from the <code>Functor</code> trait definition, we know exactly one thing: that <code>Wrapped<T></code> must be a <code>Functor</code>. Interestingly, we have <em>lost the knowledge</em> here that <code>Self::Wrapped<T></code> is also a <code>Pointed</code>. 
That's going to be a recurring theme for the next few traits.</p>\n<p>But let me reiterate this a different way. When we're working with a general <code>Functor</code> trait implementation, we don't know <em>anything at all</em> about the <code>Wrapped</code> associated type except that it implements <code>Functor</code> itself. Logically, we know that for an <code>Option<A></code> implementation, we'd like <code>Wrapped</code> to be an <code>Option<B></code> kind of thing. But the GAT implementation does not enforce it. (By contrast, the HKT approach in Haskell <em>does</em> enforce this.) Nothing prevents us from writing a horrifically nonsensical implementation such as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<A> Functor for MyOption<A> {\n type Unwrapped = A;\n type Wrapped<B> = Result<B, String>; // wut?\n\n fn map<F: FnMut(A) -> B, B>(self, mut f: F) -> Result<B, String> {\n match self {\n MyOption::Some(a) => Ok(f(a)),\n MyOption::None => Err("Well this is weird, isn't it?".to_owned()),\n }\n }\n}\n</code></pre>\n<p>You may be thinking, "So what, no one would write something like that. And it's their own fault if they do." That's not the point here. The point is that the compiler can't know that there's a connection between <code>Self</code> and <code>Wrapped<B></code>. And since it can't know that, there are some things we can't get to type check. I'll show you one of those at the end.</p>\n<h2 id=\"applicative\">Applicative</h2>\n<p>When I give Haskell training, and I get to the <code>Functor</code>/<code>Applicative</code>/<code>Monad</code> section, most people are nervous about <code>Monad</code>s. In my experience, the really confusing part is <code>Applicative</code>. Once you understand that, <code>Monad</code> is, relatively speaking, easy.</p>\n<p>The <code>Applicative</code> typeclass in Haskell has two methods. 
<code>pure</code> is equivalent to the <code>wrap</code> that I put into <code>Pointed</code>, so we can ignore it. The other method is <code><*></code>, known as "apply," or "splat", or "the tie fighter." I originally implemented <code>Applicative</code> with a method called <code>apply</code> that matches that operator, but found that it was better to go a different route.</p>\n<p>Instead, there's an alternate way to define an <code>Applicative</code> typeclass, based on a different function called <code>liftA2</code> (or, in Rust, <code>lift_a2</code>). Here's the idea. Suppose I have two functions:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn birth_year() -> Option<i32> { ... }\nfn current_year() -> Option<i32> { ... }\n</code></pre>\n<p>I may not know the current year or the birth year, in which case I'll return <code>None</code>. But if I get a <code>Some</code> return for both of these function calls, then I can calculate the age. In normal Rust code, this may look like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn age() -> Option<i32> {\n let birth_year = birth_year()?;\n let current_year = current_year()?;\n Some(current_year - birth_year)\n}\n</code></pre>\n<p>But that's leveraging <code>?</code> and early return. A primary purpose of <code>Applicative</code> is to address the same problem. So let's rewrite this without any early return, and instead use some pattern matching:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn age() -> Option<i32> {\n match (birth_year(), current_year()) {\n (Some(birth_year), Some(current_year)) => Some(current_year - birth_year),\n _ => None,\n }\n}\n</code></pre>\n<p>This certainly works, but it's verbose. It also doesn't generalize to other cases, like a <code>Result</code>. 
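</p>\n<p>For instance, the analogous <code>Result</code> version has to be written out from scratch, with separate arms for each error position (my own illustration, using hypothetical stub values rather than anything from the original post):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn birth_year() -> Result<i32, String> { Ok(1990) }\nfn current_year() -> Result<i32, String> { Ok(2021) }\n\n// The match must be rewritten for each wrapper type:\n// the Option arms above don't carry over.\nfn age() -> Result<i32, String> {\n match (birth_year(), current_year()) {\n (Ok(by), Ok(cy)) => Ok(cy - by),\n (Err(e), _) => Err(e),\n (_, Err(e)) => Err(e),\n }\n}\n</code></pre>\n<p>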
And what about a really sophisticated case, like "I have a <code>Future</code> that will return the birth year, a <code>Future</code> that will return the current year, and I want to produce a <code>Future</code> that finds the difference." With async/await syntax, it's easy enough to do. But we can also do it with <code>Applicative</code>, using our <code>lift_a2</code> method.</p>\n<p>The point of <code>lift_a2</code> is: I've got two values wrapped up, perhaps both in an <code>Option</code>. I'd like to use a function to combine them together. Let's see what that looks like in Rust:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Applicative: Pointed {\n fn lift_a2<F, B, C>(self, b: Self::Wrapped<B>, f: F) -> Self::Wrapped<C>\n where\n F: FnMut(Self::Unwrapped, B) -> C;\n}\n\nimpl<A> Applicative for Option<A> {\n fn lift_a2<F, B, C>(self, b: Self::Wrapped<B>, mut f: F) -> Self::Wrapped<C>\n where\n F: FnMut(Self::Unwrapped, B) -> C\n {\n let a = self?;\n let b = b?;\n Some(f(a, b))\n }\n}\n</code></pre>\n<p>With this definition in place, we can now rewrite <code>age</code> as:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn age() -> Option<i32> {\n current_year().lift_a2(birth_year(), |cy, by| cy - by)\n}\n</code></pre>\n<p>Whether this is an improvement or not probably depends heavily on how much Haskell you've written in your life. 
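</p>\n<p>As an aside, for <code>Option</code> specifically, stable Rust already ships essentially this combinator: <code>zip</code> pairs two <code>Option</code>s (yielding <code>None</code> if either is <code>None</code>), and <code>map</code> then combines the pair. A quick sketch of my own, with hypothetical stub values:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn birth_year() -> Option<i32> { Some(1990) }\nfn current_year() -> Option<i32> { Some(2021) }\n\n// Option::zip followed by map is lift_a2 specialized to Option.\nfn age() -> Option<i32> {\n current_year().zip(birth_year()).map(|(cy, by)| cy - by)\n}\n</code></pre>\n<p>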
Again, I'm not advocating changing Rust here, but it's certainly interesting.</p>\n<p>We could also do the same kind of thing with <code>Result</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn birth_year() -> Result<i32, String> {\n Err("No birth year".to_string())\n}\n\nfn current_year() -> Result<i32, String> {\n Err("No current year".to_string())\n}\n\nfn age() -> Result<i32, String> {\n current_year().lift_a2(birth_year(), |cy, by| cy - by)\n}\n</code></pre>\n<p>Which raises the question: which of the two <code>Err</code> values do we take? Well, that depends on our implementation of <code>Applicative</code>, but typically we would prefer choosing the first:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<A, E> Applicative for Result<A, E> {\n fn lift_a2<F, B, C>(self, b: Self::Wrapped<B>, mut f: F) -> Self::Wrapped<C>\n where\n F: FnMut(Self::Unwrapped, B) -> C\n {\n match (self, b) {\n (Ok(a), Ok(b)) => Ok(f(a, b)),\n (Err(e), _) => Err(e),\n (_, Err(e)) => Err(e),\n }\n }\n}\n</code></pre>\n<p>But what if we wanted both? Here's a case where <code>Applicative</code> gives us power that <code>?</code> doesn't.</p>\n<h2 id=\"validation\">Validation</h2>\n<p>The <code>Validation</code> type from Haskell represents the idea "I'm going to try lots of things, some of them may fail, and I want to collect together all of the error results." A typical example of this would be web form parsing. If a user enters an invalid email address, invalid phone number, <em>and</em> forgets to click the "I agree" box, you'd want to generate all three error messages. You don't want to generate just one.</p>\n<p>To start off our <code>Validation</code> implementation, we need to introduce one more Haskell-y typeclass, this time for representing the concept of "combining together multiple values." 
We <em>could</em> just hard-code <code>Vec</code> in here, but where's the fun in that? Instead, let's introduce the strangely-named <code>Semigroup</code> trait. This doesn't even require any special GAT code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Semigroup {\n fn append(self, rhs: Self) -> Self;\n}\n\nimpl Semigroup for String {\n fn append(mut self, rhs: Self) -> Self {\n self += &rhs;\n self\n }\n}\n\nimpl<T> Semigroup for Vec<T> {\n fn append(mut self, mut rhs: Self) -> Self {\n Vec::append(&mut self, &mut rhs);\n self\n }\n}\n\nimpl Semigroup for () {\n fn append(self, (): ()) -> () {}\n}\n</code></pre>\n<p>With that in place, we can now define a new <code>enum</code> called <code>Validation</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(PartialEq, Debug)]\nenum Validation<A, E> {\n Ok(A),\n Err(E),\n}\n</code></pre>\n<p>The <code>Functor</code> and <code>Pointed</code> implementations are boring, let's skip straight to the meat with the <code>Applicative</code> implementation:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<A, E: Semigroup> Applicative for Validation<A, E> {\n fn lift_a2<F, B, C>(self, b: Self::Wrapped<B>, mut f: F) -> Self::Wrapped<C>\n where\n F: FnMut(Self::Unwrapped, B) -> C\n {\n match (self, b) {\n (Validation::Ok(a), Validation::Ok(b)) => Validation::Ok(f(a, b)),\n (Validation::Err(e), Validation::Ok(_)) => Validation::Err(e),\n (Validation::Ok(_), Validation::Err(e)) => Validation::Err(e),\n (Validation::Err(e1), Validation::Err(e2)) => Validation::Err(e1.append(e2)),\n }\n }\n}\n</code></pre>\n<p>Here, we're saying that the error type parameter must implement <code>Semigroup</code>. If both values are <code>Ok</code>, we apply the <code>f</code> function to them and wrap up the result in <code>Ok</code>. 
If only one of the values is <code>Err</code>, we return that error. But if <em>both</em> of them are errors, we leverage the <code>append</code> method of <code>Semigroup</code> to combine them together. This is something you can't get with <code>?</code>-style error handling.</p>\n<h2 id=\"monad\">Monad</h2>\n<p>At last, the dreaded monad rears its head! But in reality, at least for Rustaceans, monad isn't much of a surprise. You're already used to it: it's the <code>and_then</code> method. Almost any chain of statements that end with <code>?</code> in Rust can be reimagined as monadic binds. In my opinion, the main reason monad has the allure of the unknowable is a series of particularly bad tutorials that cemented this idea in people's minds.</p>\n<p>Anyway, since we're just trying to match the existing method signature of <code>and_then</code> on <code>Option</code>, I'm not going to spend much time motivating "why monad." Instead, let's just look at the definition of the trait:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait Monad : Applicative {\n fn bind<B, F>(self, f: F) -> Self::Wrapped<B>\n where\n F: FnMut(Self::Unwrapped) -> Self::Wrapped<B>;\n}\n\nimpl<A> Monad for Option<A> {\n fn bind<B, F>(self, f: F) -> Option<B>\n where\n F: FnMut(A) -> Option<B>,\n {\n self.and_then(f)\n }\n}\n</code></pre>\n<p>And just like that, we've got monadic Rust. Time to ride off into the sunset.</p>\n<p>But wait, there's more!</p>\n<h2 id=\"monad-transformers\">Monad transformers</h2>\n<img src=\"/images/blog/transformers-rust.jpg\">\n<p>I'm overall not a huge fan of monad transformers. I think they are drastically overused in Haskell, and lead to huge amounts of complication. I instead advocate the <a href=\"https://www.fpcomplete.com/blog/2017/06/readert-design-pattern/\">ReaderT design pattern</a>. 
But again, this post is definitely not about best practices.</p>\n<p>Typically, each monad instance provides some kind of additional functionality. <code>Option</code> means "it might not produce a value." <code>Result</code> means "it might fail with an error." If we provided it, <code>Future</code> means "it won't produce a value immediately, but it will eventually." And as a final example, the <code>Reader</code> monad means "I have read-only access to some environmental data."</p>\n<p>But what if we want to have two pieces of functionality? There's no obvious way to combine a <code>Reader</code> and a <code>Result</code>. In Rust, we <em>do</em> combine together <code>Result</code> and <code>Future</code> via <code>async</code> functions and <code>?</code>, but that had to have carefully designed language support. Instead, the Haskell approach to this problem would be: just provide <code>do</code> notation (syntactic sugar for monads), and then layer up your monad transformers to add together all of the functionality.</p>\n<p>I've considered writing a blog post on this philosophical difference for a while. (If people are interested in such a post, please let me know.) But for now, let's simply explore what it looks like to provide a monad transformer in Rust. We'll implement it for the most boring of all monad transformers, <code>IdentityT</code>. This is the transformer that doesn't do anything at all. (And if you're wondering "why have it," consider why Rust has 1-tuples. Sometimes, you need something that fits a certain shape to make some generic code work nicely.)</p>\n<p>Since <code>IdentityT</code> doesn't do anything, it's comforting to see that its type reflects that perfectly:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct IdentityT<M>(M);\n</code></pre>\n<p>I'm calling the type parameter <code>M</code>, because it's going to itself be an implementation of <code>Monad</code>. 
That's the idea here: every monad transformer sits on top of a "base monad."</p>\n<p>Next, let's look at a <code>Functor</code> implementation. The idea is to unwrap the <code>IdentityT</code> layer, leverage the underlying <code>map</code> method, and then rewrap <code>IdentityT</code>.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<M: Functor> Functor for IdentityT<M> {\n type Unwrapped = M::Unwrapped;\n type Wrapped<A> = IdentityT<M::Wrapped<A>>;\n\n fn map<F, B>(self, f: F) -> Self::Wrapped<B>\n where\n F: FnMut(M::Unwrapped) -> B\n {\n IdentityT(self.0.map(f))\n }\n}\n</code></pre>\n<p>For our associated types, we leverage the associated types of <code>M</code>. Inside <code>map</code>, we use <code>self.0</code> to get the underlying <code>M</code>, and wrap the result of the <code>map</code> method call with <code>IdentityT</code>. Cool!</p>\n<p>The <code>Pointed</code>, <code>Applicative</code>, and <code>Monad</code> implementations follow similar patterns, so I'll drop all of those in too:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl<M: Pointed> Pointed for IdentityT<M> {\n fn wrap<T>(t: T) -> IdentityT<M::Wrapped<T>> {\n IdentityT(M::wrap(t))\n }\n}\n\nimpl<M: Applicative> Applicative for IdentityT<M> {\n fn lift_a2<F, B, C>(self, b: Self::Wrapped<B>, f: F) -> Self::Wrapped<C>\n where\n F: FnMut(Self::Unwrapped, B) -> C\n {\n IdentityT(self.0.lift_a2(b.0, f))\n }\n}\n\nimpl<M: Monad> Monad for IdentityT<M> {\n fn bind<B, F>(self, mut f: F) -> Self::Wrapped<B>\n where\n F: FnMut(Self::Unwrapped) -> Self::Wrapped<B>\n {\n IdentityT(self.0.bind(|x| f(x).0))\n }\n}\n</code></pre>\n<p>And finally, we'll define one new trait: <code>MonadTrans</code>. <code>MonadTrans</code> captures the idea of "layering up" a base monad into the transformed monad. 
In Haskell, you'll often see code like <code>lift (readFile "foo.txt")</code>, where <code>readFile</code> works in the base monad, and we're sitting in a layer on top of that.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">trait MonadTrans {\n type Base: Monad;\n\n fn lift(base: Self::Base) -> Self;\n}\n\nimpl<M: Monad> MonadTrans for IdentityT<M> {\n type Base = M;\n\n fn lift(base: M) -> Self {\n IdentityT(base)\n }\n}\n</code></pre>\n<p>So is this useful? Not terribly on its own. We could arguably create an ecosystem of <code>ReaderT</code>, <code>WriterT</code>, <code>ContT</code>, <code>ConduitT</code>, and more, and start building up sophisticated systems. But I'm strongly of the opinion that we don't need that stuff in Rust, at least not yet. I'm happy to go this far in my implementation to explore the wonders of GATs, but let's not go crazy and try to make something useful just because we can.</p>\n<h2 id=\"join\">join</h2>\n<p>Alright, now the fun begins. We've seen GATs in practice. And it seems like Rust is keeping pace with Haskell pretty well. That's about to end.</p>\n<p>There's another method that goes along with <code>Monad</code>s in Haskell, called <code>join</code>. It's equivalent in power to the <code>bind</code> method we've already seen, but works differently. <code>join</code> "flattens" two layers of monads in Haskell. And a side note: there's already <a href=\"https://doc.rust-lang.org/stable/std/option/enum.Option.html#impl-10\">a method called <code>flatten</code></a> in Rust that does just this for <code>Option</code> and <code>Result</code>.</p>\n<p>The catch with <code>join</code>: the monads have to be the same. 
In other words, <code>join (Just (Just 5)) == Just 5</code>, but <code>join (Just (Right 6))</code> is a type error, since <code>Just</code> is a <code>Maybe</code> data constructor, and <code>Right</code> is an <code>Either</code> data constructor.</p>\n<p>Now we're in a bit of a quandary. In Haskell, where we have higher kinded types, it's easy to say "<code>Maybe</code> must be the same as <code>Maybe</code>":</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">join :: Monad m => m (m a) -> m a\njoin m = bind m (\\x -> x)\n</code></pre>\n<p>But I couldn't figure out a way to express the same idea with GATs in Rust and get the syntax accepted by the compiler. This is the closest I came:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn join<MOuter, MInner, A>(outer: MOuter) -> MOuter::Wrapped<A>\nwhere\n MOuter: Monad<Unwrapped = MInner>,\n MInner: Monad<Unwrapped = A, Wrapped = MOuter::Wrapped<A>>,\n{\n outer.bind(|inner| inner)\n}\n\n#[test]\nfn test_join() {\n assert_eq!(join(Some(Some(true))), Some(true));\n}\n</code></pre>\n<p>Unfortunately, this broke the compiler:</p>\n<pre><code>error: internal compiler error: compiler\\rustc_middle\\src\\ty\\subst.rs:529:17: type parameter `B/#1` (B/1) out of range when substituting, substs=[MInner]\n\nthread 'rustc' panicked at 'Box<Any>', /rustc/b7ebc6b0c1ba3c27ebb17c0b496ece778ef11e18\\compiler\\rustc_errors\\src\\lib.rs:904:9\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\n\nnote: the compiler unexpectedly panicked. 
this is a bug.\n\nnote: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md\n\nnote: rustc 1.50.0-nightly (b7ebc6b0c 2020-11-30) running on x86_64-pc-windows-msvc\n\nnote: compiler flags: -C embed-bitcode=no -C debuginfo=2 -C incremental --crate-type bin\n\nnote: some of the compiler flags provided by cargo are hidden\n</code></pre>\n<p>I think it's fair to say I was pushing the compiler to the limit here. In any event, I opened up <a href=\"https://github.com/rust-lang/rust/issues/79636\">a GitHub issue</a> for this.</p>\n<h2 id=\"mapm-traverse\">mapM/traverse</h2>\n<p>Already, we were stymied by <code>join</code>. How about another popular functional idiom: <code>traverse</code>. As I <a href=\"https://tech.fpcomplete.com/blog/collect-rust-traverse-haskell-scala/\">previously mentioned</a>, <code>traverse</code> is incredibly popular in Scala, and pretty common in Haskell. It functions very much like a <code>map</code>, except the result of each step through the <code>map</code> is wrapped in some <code>Applicative</code>, and the <code>Applicative</code> values are combined into an overall data structure.</p>\n<p>Sound confusing? Fair enough. As a simpler example: if you have a <code>Vec<A></code> value, and a function from <code>A</code> to <code>Option<B></code>, <code>traverse</code> can put these together into an <code>Option<Vec<B>></code>. 
Or using the <code>Validation</code> type we had above, you could combine <code>Vec<A></code> and <code>Fn(A) -> Validation<B, Vec<MyErr>></code> into a <code>Validation<Vec<B>, Vec<MyErr>></code>, returning either all of the successfully generated <code>B</code> values, or all of the errors that occurred along the way.</p>\n<p>Anyway, I ended up with this as a starting type signature for our function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn traverse<F, M, A, B, I>(iter: I, f: F) -> M::Wrapped<Vec<B>>\n</code></pre>\n<p>Then we have the following trait bounds:</p>\n<ul>\n<li><code>I: IntoIterator<Item = A></code>: <code>I</code> is an iterator of <code>A</code> values. To simplify, you can think of it as <code>Vec<A></code>.</li>\n<li><code>M: Applicative<Unwrapped = B></code>: <code>M</code> is some implementation of <code>Applicative</code> which unwraps to a <code>B</code>. In our example, this would be <code>Validation<B, Vec<MyErr>></code>.</li>\n<li><code>F: FnMut(A) -> M</code>: <code>F</code> is a function that takes the <code>A</code> values from the iterator and produces <code>M</code> values.</li>\n<li><code>M::Wrapped<Vec<B>>: Applicative<Unwrapped = Vec<B>></code>: wrapping up the result <code>Vec<B></code> in <code>M</code>'s wrapping produces a value which is also an <code>Applicative</code>.</li>\n</ul>\n<p>This last bullet shows one of the pain points I mentioned above. Since the <code>Wrapped</code> associated type itself tells us very little, we only get the <code>Functor</code> bound "for free". We need to explicitly say that it's also <code>Applicative</code>, and that unwrapping it again will get you back a <code>Vec<B></code>.</p>\n<p>In any event, I wasn't clever enough to figure out a way to make all of this compile. 
This was the final version of the code I came up with:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn traverse<F, M, A, B, I>(iter: I, f: F) -> M::Wrapped<Vec<B>>\nwhere\n F: FnMut(A) -> M,\n M: Applicative<Unwrapped = B>,\n I: IntoIterator<Item = A>,\n M::Wrapped<Vec<B>>: Applicative<Unwrapped = Vec<B>>,\n{\n let mut iter = iter.into_iter().map(f);\n\n let mut result: M::Wrapped<Vec<B>> = match iter.next() {\n Some(b) => b.map(|x| vec![x]),\n None => return M::wrap(Vec::new()),\n };\n\n for m in iter {\n result = result.lift_a2(m, |vec, b| {\n vec.push(b);\n vec\n });\n }\n\n result\n}\n</code></pre>\n<p>But this fails with the error messages:</p>\n<pre><code>error[E0308]: mismatched types\n --> src\\main.rs:448:33\n |\n433 | fn traverse<F, M, A, B, I>(iter: I, f: F) -> M::Wrapped<Vec<B>>\n | - this type parameter\n...\n448 | result = result.lift_a2(m, |vec, b| {\n | ^ expected associated type, found type parameter `M`\n |\n = note: expected associated type `<<M as Functor>::Wrapped<Vec<B>> as Functor>::Wrapped<_>`\n found type parameter `M`\n = note: you might be missing a type parameter or trait bound\n\nerror[E0308]: mismatched types\n --> src\\main.rs:448:18\n |\n433 | fn traverse<F, M, A, B, I>(iter: I, f: F) -> M::Wrapped<Vec<B>>\n | - this type parameter\n...\n448 | result = result.lift_a2(m, |vec, b| {\n | __________________^\n449 | | vec.push(b);\n450 | | vec\n451 | | });\n | |__________^ expected type parameter `M`, found associated type\n |\n = note: expected associated type `<M as Functor>::Wrapped<Vec<B>>`\n found associated type `<<M as Functor>::Wrapped<Vec<B>> as Functor>::Wrapped<Vec<B>>`\nhelp: consider further restricting this bound\n |\n436 | M: Applicative<Unwrapped = B> + Functor<Wrapped = M>,\n | ^^^^^^^^^^^^^^^^^^^^^^\n</code></pre>\n<p>Maybe this is a limitation in GATs. Maybe I'm just not clever enough to figure it out. But I thought this was a good point to call it quits. 
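</p>\n<p>For contrast, a version hard-coded to <code>Option</code> compiles without complaint, even on stable Rust, which suggests the trouble lies in the generic <code>Wrapped</code> bounds rather than in the algorithm itself. Here is my own monomorphic sketch (stable Rust's <code>FromIterator</code> impl for <code>Option</code> offers the same behavior via <code>collect</code>):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">/// Monomorphic traverse: apply f to each element,\n/// short-circuiting to None on the first failure.\nfn traverse_option<A, B, I, F>(iter: I, mut f: F) -> Option<Vec<B>>\nwhere\n I: IntoIterator<Item = A>,\n F: FnMut(A) -> Option<B>,\n{\n let mut result = Vec::new();\n for a in iter {\n result.push(f(a)?);\n }\n Some(result)\n}\n</code></pre>\n<p>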
If anyone knows a trick to make this work, let me know!</p>\n<h2 id=\"should-we-have-hkts-in-rust\">Should we have HKTs in Rust?</h2>\n<p>This was a fun adventure. GATs look like a nice extension to the trait system in Rust. I look forward to the feature stabilizing and landing. And it's certainly fun to play with all of this.</p>\n<p>But Rust is not Haskell. The ergonomics of GATs, in my opinion, will never compete with higher kinded types on Haskell's home turf. And I'm not at all convinced that it should. Rust is a wonderful language as is. I'm happy to write Rust style in a Rust codebase, and save my Haskell coding for my Haskell codebases.</p>\n<p>I hope others enjoyed this adventure as much as I have. A really ugly version of my code is available <a href=\"https://gist.github.com/snoyberg/91ae892199bc8a6687d3798343a9ee54\">as a Gist</a>. You'll need to use a recent nightly Rust build, but otherwise it has no dependencies.</p>\n<p>If you liked this post, you may be interested in some other Haskell/Rust hybrid posts:</p>\n<ul>\n<li><a href=\"https://tech.fpcomplete.com/blog/2017/07/iterators-streams-rust-haskell/\">Iterators and Streams in Rust and Haskell</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/07/streaming-utf8-haskell-rust/\">Streaming UTF-8 in Haskell and Rust</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/2018/10/is-rust-functional/\">Is Rust functional?</a></li>\n<li><a href=\"https://tech.fpcomplete.com/blog/async-exceptions-haskell-rust/\">Async Exceptions in Haskell, and Rust</a></li>\n</ul>\n<p>FP Complete offers training, consulting, and review services in both Haskell and Rust. Want to hear more? <a href=\"https://tech.fpcomplete.com/contact-us/\">Contact us to speak with one of our engineers about how we can help.</a></p>\n",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/",
"slug": "monads-gats-nightly-rust",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Monads and GATs in nightly Rust",
"description": "I saw a recent Reddit post on the advances in Generic Associated Types (GATs) in Rust, which allows for the definition of a Monad trait. In this post, I'm going to take it one step further: a monad transformer trait in Rust!",
"updated": null,
"date": "2020-12-07",
"year": 2020,
"month": 12,
"day": 7,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"rust"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png",
"image": "images/blog/transformers-rust.jpg"
},
"path": "/blog/monads-gats-nightly-rust/",
"components": [
"blog",
"monads-gats-nightly-rust"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "polymorphic-functor",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#polymorphic-functor",
"title": "Polymorphic Functor",
"children": [
{
"level": 3,
"id": "side-note-hkts",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#side-note-hkts",
"title": "Side note: HKTs",
"children": []
}
]
},
{
"level": 2,
"id": "pointed",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#pointed",
"title": "Pointed",
"children": []
},
{
"level": 2,
"id": "applicative",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#applicative",
"title": "Applicative",
"children": []
},
{
"level": 2,
"id": "validation",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#validation",
"title": "Validation",
"children": []
},
{
"level": 2,
"id": "monad",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#monad",
"title": "Monad",
"children": []
},
{
"level": 2,
"id": "monad-transformers",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#monad-transformers",
"title": "Monad transformers",
"children": []
},
{
"level": 2,
"id": "join",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#join",
"title": "join",
"children": []
},
{
"level": 2,
"id": "mapm-traverse",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#mapm-traverse",
"title": "mapM/traverse",
"children": []
},
{
"level": 2,
"id": "should-we-have-hkts-in-rust",
"permalink": "https://tech.fpcomplete.com/blog/monads-gats-nightly-rust/#should-we-have-hkts-in-rust",
"title": "Should we have HKTs in Rust?",
"children": []
}
],
"word_count": 4766,
"reading_time": 24,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
"title": "Philosophies of Rust and Haskell"
}
]
},
{
"relative_path": "blog/error-handling-is-hard.md",
"colocated_path": null,
"content": "<p>This blog post will use mostly Rust and Haskell code snippets to demonstrate its points. But I don't believe the core point is language-specific at all.</p>\n<p>Here's a bit of Rust code to read the contents of <code>input.txt</code> and print it to <code>stdout</code>. What's wrong with it?</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n let s = std::fs::read_to_string("input.txt").unwrap();\n println!("{}", s);\n}\n</code></pre>\n<p>If you're Rust-fluent, that <code>.unwrap()</code> may stick out to you like a sore thumb. You know it means "convert any error that occurred into a panic." And panics are a Bad Thing. It's not correct error handling. Instead, something like this is "better":</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() {\n match std::fs::read_to_string("input.txt") {\n Ok(s) => println!("{}", s),\n Err(e) => eprintln!("Unable to read from input.txt: {:?}", e),\n }\n}\n</code></pre>\n<p>The presence of <code>enum</code>s in Rust makes it really easy to ensure you properly handle all failure cases fully. The code above will not panic. If an I/O error occurs, such as file not found, permissions denied, or a hardware failure, it will print an error message to <code>stderr</code>. But this <em>still</em> isn't good error handling, for two reasons:</p>\n<ol>\n<li>The exit code of the program doesn't indicate an error occurred. We'd need to use something like <a href=\"https://doc.rust-lang.org/stable/std/process/fn.abort.html\"><code>abort</code></a> to fix that, which isn't too hard. But it's something else to remember.</li>\n<li>This is <em>very</em> verbose! 
We've got a trivial little program here, and we're obscuring the actual behavior of the program with all of this line noise around matching different <code>enum</code> variants.</li>\n</ol>\n<p>Fortunately, the Rust language is benevolent, and it makes it possible to do things <em>even better</em> than before. The <code>?</code> operator will try to do something, and automatically short-circuit if an error occurs. We now get to avoid those pesky panics without cluttering our code. And we get the proper exit code to boot!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() -> Result<(), std::io::Error> {\n let s = std::fs::read_to_string("input.txt")?;\n println!("{}", s);\n Ok(())\n}\n</code></pre>\n<p>All is good in the world, we can stop this post here and go home. The greatest marvel of error handling has arrived!</p>\n<h2 id=\"look-again\">Look again</h2>\n<p>So it turns out I forgot to create my <code>input.txt</code> file. Let's see the beautiful error message generated by my program:</p>\n<pre><code>Error: Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }\n</code></pre>\n<p>Huh... that's thoroughly unhelpful. In my 5-line program, it's trivial enough to figure out which file doesn't exist. But imagine a 5,000 line program. Or if the code in question is in a dependency. Or if you're a member of the ops team, have never written a line of Rust in your life, don't have access to the codebase, the production server is down at 2am, and you see this error message in your logs.</p>\n<h2 id=\"runtime-exceptions-to-the-rescue\">Runtime exceptions to the rescue?</h2>\n<p>Well, <em>obviously</em> this is just because Rust uses error returns instead of Good Ol' Runtime Exceptions. Obviously something like Haskell solves this problem better, right? Well, sort of. 
With this program, and no <code>input.txt</code>:</p>\n<pre data-lang=\"haskell\" class=\"language-haskell \"><code class=\"language-haskell\" data-lang=\"haskell\">main = do\n s <- readFile "input.txt"\n putStrLn s\n</code></pre>\n<p>I do in fact get a much nicer error message:</p>\n<pre><code>input.txt: openFile: does not exist (No such file or directory)\n</code></pre>\n<p>I didn't even need to include any error handling logic in the code; it's all implicit! But in reality, the clarity of this error message has little to do with exception handling semantics. It has to do with the construction of this specific error message. It contains enough information to help debug this.</p>\n<p>But there are plenty of counterexamples in Haskell. Calling <code>head</code> on an empty list provides a line number these days, but you used to just get an error that "oops, tried to <code>head</code> an empty list, somewhere, in one of your libraries. Good luck!" Some low-level network functionality still gives vague error messages.</p>\n<p>And even the glorious <code>does not exist</code> message above is only marginally useful. And that's because of...</p>\n<h2 id=\"context\">Context!</h2>\n<p>In a trivial 2-line program, the reality is that "file not found" without any additional information is perfectly reasonable. That's because I know <em>exactly</em> the context in which the error occurred. It either occurred on line 1, or line 2. By contrast, in a 500k SLOC codebase, knowing that <code>input.txt</code> doesn't exist is probably not nearly enough to debug things.</p>\n<ul>\n<li>What content is <code>input.txt</code> supposed to have?</li>\n<li>What part of the code was trying to read it?</li>\n<li>What was I going to do with the contents of the file?</li>\n</ul>\n<p>Similarly, knowing that I can't connect to IP address 255.813.20.1 may be sufficient in a small network test. 
But in a reasonably complicated program, I'd <em>much</em> rather get the context that I'm trying to make an HTTPS request to example.com proxied through a server with IP address 255.813.20.1, which was specified via the <code>HTTP_PROXY</code> environment variable. That last bit of information may shortcircuit days of debugging to point out "doh, I had a typo in my Kubernetes manifest file!"</p>\n<p>Stack traces are often a huge help here. They tell you a lot of useful context. And both Rust and Haskell are particularly weak at providing this context in their error representations. But it's still not a panacea. The ugly reality is that...</p>\n<h2 id=\"there-s-an-inherent-trade-off\">There's an inherent trade-off</h2>\n<p>Like so many other things, error handling ultimately is a trade-off. When we're writing our initial code, <strong>we don't want to think about errors</strong>. We code to the happy path. How productive would you be if you had to derail every line of code with thought processes around the myriad ways your code could fail?</p>\n<p>But then we're debugging a production issue, and <strong>we definitely want to think about errors</strong>. We curse our lazy selves for not handling an error case that <em>obviously</em> could have arisen. "Why did I decide to abort the process when the TCP connection failed? I should have retried! I should have logged the address I tried to connect to!"</p>\n<p>Then we flood our code with log messages, and are frustrated when we can't see the important bits.</p>\n<p>Finding the right balance is an art. And typically it's an art that we don't spend enough time thinking about. There are some well-established tools for this, like runtime-configurable log levels. That's a huge step in the right direction.</p>\n<p>Rust is such a great example of this. Explicit <code>match</code>ing on <code>Result</code> values really forces you to think through all of the different error cases and how to report them correctly. 
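As a minimal sketch of that style (hypothetical names, not taken from the original post), one <code>enum</code> variant per failure mode might look like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::io::Read;\n\n// Hypothetical example: one variant per failure case, so a caller or a\n// log reader knows exactly which step failed.\n#[derive(Debug)]\nenum ConfigError {\n Open(std::io::Error),\n Read(std::io::Error),\n}\n\nfn read_config(path: &str) -&gt; Result&lt;String, ConfigError&gt; {\n let mut file = std::fs::File::open(path).map_err(ConfigError::Open)?;\n let mut s = String::new();\n file.read_to_string(&mut s).map_err(ConfigError::Read)?;\n Ok(s)\n}\n</code></pre>\n<p>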
Complex custom <code>enum</code> error types allow you to define all of the different values you'd want reported. But all of this adds huge line noise compared to <code>?</code>. So <code>?</code> wins the day.</p>\n<h2 id=\"the-method-is-secondary\">The method is secondary</h2>\n<p>The Rust community accepts that panics are bad. The Haskell community constantly argues about whether runtime exceptions are a good or bad thing. Java is either loved or hated for checked exceptions. Golang is either lauded or mocked for <code>if err != nil</code>.</p>\n<p>I'm not at all arguing that those discussions are irrelevant. There are significant trade-offs to these various approaches. They affect performance, trackability of errors, and more.</p>\n<p>What I'm arguing here is that we spend a disproportionate amount of time on how we report and recover from errors, and far less on discussing what a good error actually contains.</p>\n<h2 id=\"my-ideal\">My ideal</h2>\n<p>These are evolving thoughts for me. So take them with a grain of salt. And I'm very interested to hear differing opinions.</p>\n<p>I've long held that in Haskell, we should use runtime exceptions. This has been interpreted by many as my <em>advocacy</em> of runtime exceptions. Instead, I would advocate: use the language's native mechanism. I don't pine for exceptions when writing Rust. Quite the opposite in fact. I overall prefer explicit error handling. But it's not worth fighting the battle against runtime exceptions when they are already ubiquitous.</p>\n<p>I think Rust and Haskell are both close to the sweet spot in error handling. There's relatively little verbosity around adding this handling. If you leverage libraries like <a href=\"https://crates.io/crates/anyhow\"><code>anyhow</code></a> in Rust, there's even less.</p>\n<p>My biggest concern with a library like <code>anyhow</code> is how easy it becomes to do the wrong thing. Take our broken example from above. 
It's trivial to &quot;upgrade&quot; it to use <code>anyhow</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">fn main() -&gt; anyhow::Result&lt;()&gt; {\n let s = std::fs::read_to_string(&quot;input.txt&quot;)?;\n println!(&quot;{}&quot;, s);\n Ok(())\n}\n</code></pre>\n<p>However, this still produces the same useless error message we started with. Instead, we need to be a bit more explicit with a <code>context</code> method call to get a nicer message:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use anyhow::Context;\n\nfn main() -&gt; anyhow::Result&lt;()&gt; {\n let s = std::fs::read_to_string(&quot;input.txt&quot;)\n .context(&quot;Failed to read input.txt&quot;)?;\n println!(&quot;{}&quot;, s);\n Ok(())\n}\n</code></pre>\n<p>Now we get the much more helpful error message:</p>\n<pre><code>Error: Failed to read input.txt\n\nCaused by:\n The system cannot find the file specified. (os error 2)\n</code></pre>\n<p>This is a good balance of concision and helpfulness. The downside is the lack of <em>enforcement</em>. Nothing forced me to add the <code>.context</code> call. I worry that in a large codebase, or under time pressure, people like me will end up forgetting to add the helpful context.</p>\n<p>Could we design a modified <code>anyhow</code> that <em>forces</em> a <code>context</code> call? Certainly. But:</p>\n<ol>\n<li>It will lose out on the current simple ergonomics.</li>\n<li>No tool can force the &quot;right&quot; level of context; that requires human insight and thought. And those are in short supply, and rarely spent on error messages.</li>\n</ol>\n<h2 id=\"advice\">Advice</h2>\n<p>I don't have an answer here. I would advise people to start by recognizing that good error handling is <em>difficult</em>. We like to think of it as a trivial but tedious task. It isn't. Doing this correctly requires real thought and design. 
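One example: the &quot;forced <code>context</code>&quot; idea from the previous section. As a hedged sketch (hypothetical types, not a published crate), it might start from an error type whose only constructor demands a context string:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::fmt;\n\n// Hypothetical sketch: an error type that cannot be constructed without\n// a human-written context string.\n#[derive(Debug)]\nstruct ContextError {\n context: String,\n source: std::io::Error,\n}\n\nimpl fmt::Display for ContextError {\n fn fmt(&self, f: &mut fmt::Formatter) -&gt; fmt::Result {\n write!(f, &quot;{}: {}&quot;, self.context, self.source)\n }\n}\n\ntrait WithContext&lt;T&gt; {\n // Converting an I/O result into our error type *requires* a message.\n fn context(self, msg: &str) -&gt; Result&lt;T, ContextError&gt;;\n}\n\nimpl&lt;T&gt; WithContext&lt;T&gt; for Result&lt;T, std::io::Error&gt; {\n fn context(self, msg: &str) -&gt; Result&lt;T, ContextError&gt; {\n self.map_err(|source| ContextError { context: msg.to_owned(), source })\n }\n}\n</code></pre>\n<p>Even then, such a tool only forces <em>some</em> context, not <em>good</em> context. 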
We're too quick to sweep it under the rug as the unimportant parts of our code.</p>\n<p>I'll continue with my general advice of using your language's preferred mechanisms for error handling. In Rust, that means using <code>Result</code> and avoiding panics. In Haskell, it means some mixture of explicit <code>Either</code> return values and runtime exceptions (the exact mixture very much up for debate). In Java, it's mostly checked exceptions, though there's plenty of added unchecked exceptions to gum up the works too.</p>\n<p>But consider spending a bit more time on thinking through not just <em>how</em> to report/raise/throw an error/exception, but what exactly you're reporting/raising/throwing. Think of the poor ops guy drinking his 7th cup of coffee at 4am trying to figure out what part of the codebase needs <code>input.txt</code>, or why in the world the program is trying to connect to an invalid IP address.</p>\n",
"permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/",
"slug": "error-handling-is-hard",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Error handling is hard",
"description": "Arguments rage over topics like explicit errors vs runtime exceptions, checked vs unchecked, and more. In this post, I want to reframe the discussion a bit. Good error handling is simply hard, and consists of conflicting goals.",
"updated": null,
"date": "2020-11-30",
"year": 2020,
"month": 11,
"day": 30,
"taxonomies": {
"tags": [
"rust",
"haskell",
"insights"
],
"categories": [
"functional programming"
]
},
"authors": [],
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "/blog/error-handling-is-hard/",
"components": [
"blog",
"error-handling-is-hard"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "look-again",
"permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#look-again",
"title": "Look again",
"children": []
},
{
"level": 2,
"id": "runtime-exceptions-to-the-rescue",
"permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#runtime-exceptions-to-the-rescue",
"title": "Runtime exceptions to the rescue?",
"children": []
},
{
"level": 2,
"id": "context",
"permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#context",
"title": "Context!",
"children": []
},
{
"level": 2,
"id": "there-s-an-inherent-trade-off",
"permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#there-s-an-inherent-trade-off",
"title": "There's an inherent trade-off",
"children": []
},
{
"level": 2,
"id": "the-method-is-secondary",
"permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#the-method-is-secondary",
"title": "The method is secondary",
"children": []
},
{
"level": 2,
"id": "my-ideal",
"permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#my-ideal",
"title": "My ideal",
"children": []
},
{
"level": 2,
"id": "advice",
"permalink": "https://tech.fpcomplete.com/blog/error-handling-is-hard/#advice",
"title": "Advice",
"children": []
}
],
"word_count": 1757,
"reading_time": 9,
"assets": [],
"draft": false,
"lang": "en",
"lower": null,
"higher": null,
"translations": [],
"backlinks": [
{
"permalink": "https://tech.fpcomplete.com/blog/pattern-matching/",
"title": "Pattern matching"
},
{
"permalink": "https://tech.fpcomplete.com/blog/philosophies-rust-haskell/",
"title": "Philosophies of Rust and Haskell"
}
]
},
{
"relative_path": "blog/ownership-puzzle-rust-async-hyper.md",
"colocated_path": null,
"content": "<p>Most of the web services I've written in Rust have used <code>actix-web</code>. Recently, I needed to write something that will provide some reverse proxy functionality. I'm more familiar with the hyper-powered HTTP client libraries (<code>reqwest</code> in particular). I decided this would be a good time to experiment again with hyper on the server side as well. The theory was that having matching <code>Request</code> and <code>Response</code> types between the client and server would work nicely. And it certainly did.</p>\n<p>In the process, I ended up with an interesting example of battling ownership through closures and async blocks. This is a topic I typically mention in my Rust training sessions as the hardest thing I had to learn when learning Rust. So I figure a blog post demonstrating one of these crazy cases would be worthwhile.</p>\n<p>Side note: If you're interested in learning more about Rust, we'll be offering a <a href=\"https://tech.fpcomplete.com/training/\">free Rust training course</a> in December. Sign up for more information.</p>\n<h2 id=\"cargo-toml\">Cargo.toml</h2>\n<p>If you want to play along, you should start off with a <code>cargo new</code>. I'm using the following <code>[dependencies]</code> in my <code>Cargo.toml</code></p>\n<pre data-lang=\"toml\" class=\"language-toml \"><code class=\"language-toml\" data-lang=\"toml\">[dependencies]\nhyper = "0.13"\ntokio = { version = "0.2", features = ["full"] }\nlog = "0.4.11"\nenv_logger = "0.8.1"\nhyper-tls = "0.4.3"\n</code></pre>\n<p>I'm also compiling with Rust version 1.47.0. If you'd like, you can add <code>1.47.0</code> to your <code>rust-toolchain</code>. 
And finally, my full <code>Cargo.lock</code> is <a href=\"https://gist.github.com/snoyberg/550a96c3888a2563f20afcec2c652801\">available as a Gist</a>.</p>\n<h2 id=\"basic-web-service\">Basic web service</h2>\n<p>To get started with a hyper-powered web service, we can use the example straight from the <a href=\"https://hyper.rs/\">hyper homepage</a>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use std::{convert::Infallible, net::SocketAddr};\nuse hyper::{Body, Request, Response, Server};\nuse hyper::service::{make_service_fn, service_fn};\n\nasync fn handle(_: Request<Body>) -> Result<Response<Body>, Infallible> {\n Ok(Response::new("Hello, World!".into()))\n}\n\n#[tokio::main]\nasync fn main() {\n let addr = SocketAddr::from(([127, 0, 0, 1], 3000));\n\n let make_svc = make_service_fn(|_conn| async {\n Ok::<_, Infallible>(service_fn(handle))\n });\n\n let server = Server::bind(&addr).serve(make_svc);\n\n if let Err(e) = server.await {\n eprintln!("server error: {}", e);\n }\n}\n</code></pre>\n<p>It's worth explaining this a little bit, since at least in my opinion the distinction between <code>make_service_fn</code> and <code>service_fn</code> wasn't clear. There are two different things we're trying to create here:</p>\n<ul>\n<li>A <code>MakeService</code>, which takes a <code>&AddrStream</code> and gives back a <code>Service</code></li>\n<li>A <code>Service</code>, which takes a <code>Request</code> and gives back a <code>Response</code></li>\n</ul>\n<p>This glosses over a number of details, such as:</p>\n<ul>\n<li>Error handling</li>\n<li>Everything is async (<code>Future</code>s are everywhere)</li>\n<li>Everything is expressed in terms of general purpose <code>trait</code>s</li>\n</ul>\n<p>To help us with that "glossing", hyper provides two convenience functions for creating <code>MakeService</code> and <code>Service</code> values, <code>make_service_fn</code> and <code>service_fn</code>. 
Each of these will convert a closure into their respective types. Then the <code>MakeService</code> closure can return a <code>Service</code> value, and the <code>MakeService</code> value can be provided to <code>hyper::server::Builder::serve</code>. Let's get even more concrete from the code above:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">async fn handle(_: Request<Body>) -> Result<Response<Body>, Infallible> {...}\nlet make_svc = make_service_fn(|_conn| async {\n Ok::<_, Infallible>(service_fn(handle))\n});\n</code></pre>\n<p>The <code>handle</code> function takes a <code>Request<Body></code> and returns a <code>Future<Output=Result<Response<Body, Infallible>>></code>. The <code>Infallible</code> is a nice way of saying "no errors can possibly occur here." The type signatures at play require that we use a <code>Result</code>, but morally <code>Result<T, Infallible></code> is equivalent to <code>T</code>.</p>\n<p><code>service_fn</code> converts this <code>handle</code> value into a <code>Service</code> value. This new value implements all of the appropriate traits to satisfy the requirements of <code>make_service_fn</code> and <code>serve</code>. We wrap up that new <code>Service</code> in its own <code>Result<_, Infallible></code>, ignore the input <code>&AddrStream</code> value, and pass all of this to <code>make_service_fn</code>. <code>make_svc</code> is now a value that can be passed to <code>serve</code>, and we have "Hello, world!"</p>\n<p>And if all of this seems a bit complicated for a "Hello world," you may understand why there are lots of frameworks built on top of hyper to make it easier to work with. Anyway, onwards!</p>\n<h2 id=\"initial-reverse-proxy\">Initial reverse proxy</h2>\n<p>Next up, we want to modify our <code>handle</code> function to perform a reverse proxy instead of returning the "Hello, World!" text. 
For this example, we're going to hard-code <code>https://www.fpcomplete.com</code> as the destination site for this reverse proxy. To make this happen, we'll need to:</p>\n<ul>\n<li>Construct a <code>Request</code> value, based on the incoming <code>Request</code>'s request headers and path, but targeting the <code>www.fpcomplete.com</code> server</li>\n<li>Construct a <code>Client</code> value from hyper with TLS support</li>\n<li>Perform the request</li>\n<li>Return the <code>Response</code> as the response from <code>handle</code></li>\n<li>Introduce error handling</li>\n</ul>\n<p>I'm also going to move over to the <code>env-logger</code> and <code>log</code> crates for producing output. I did this when working on the code myself, and switching to <code>RUST_LOG=debug</code> was a great way to debug things. (When I was working on this, I forgot I needed to create a special <code>Client</code> with TLS support.)</p>\n<p>So from the top! We now have the following <code>use</code> statements:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">use hyper::service::{make_service_fn, service_fn};\nuse hyper::{Body, Client, Request, Response, Server};\nuse hyper_tls::HttpsConnector;\nuse std::net::SocketAddr;\n</code></pre>\n<p>We next have three constants. The <code>SCHEME</code> and <code>HOST</code> are pretty self-explanatory: the hardcoded destination.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">const SCHEME: &str = "https";\nconst HOST: &str = "www.fpcomplete.com";\n</code></pre>\n<p>Next we have some HTTP request headers that should <em>not</em> be forwarded onto the destination server. This blacklist approach to HTTP headers in reverse proxies works well enough. It's probably a better idea in general to follow a whitelist approach. 
In any event, these six headers have the potential to change behavior at the transport layer, and therefore cannot be passed on from the client:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">/// HTTP headers to strip, a whitelist is probably a better idea\nconst STRIPPED: [&str; 6] = [\n "content-length",\n "transfer-encoding",\n "accept-encoding",\n "content-encoding",\n "host",\n "connection",\n];\n</code></pre>\n<p>And next we have a fairly boilerplate error type definition. We can generate a <code>hyper::Error</code> when performing the HTTP request to the destination server, and a <code>hyper::http::Error</code> when constructing the new <code>Request</code>. Arguably we should simply panic if the latter error occurs, since it indicates programmer error. But I've decided to treat it as its own error variant. So here's some boilerplate!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[derive(Debug)]\nenum ReverseProxyError {\n Hyper(hyper::Error),\n HyperHttp(hyper::http::Error),\n}\n\nimpl From<hyper::Error> for ReverseProxyError {\n fn from(e: hyper::Error) -> Self {\n ReverseProxyError::Hyper(e)\n }\n}\n\nimpl From<hyper::http::Error> for ReverseProxyError {\n fn from(e: hyper::http::Error) -> Self {\n ReverseProxyError::HyperHttp(e)\n }\n}\n\nimpl std::fmt::Display for ReverseProxyError {\n fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {\n write!(fmt, "{:?}", self)\n }\n}\n\nimpl std::error::Error for ReverseProxyError {}\n</code></pre>\n<p>With all of this in place, we can finally start writing our <code>handle</code> function:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">async fn handle(mut req: Request<Body>) -> Result<Response<Body>, ReverseProxyError> {\n}\n</code></pre>\n<p>We're going to mutate the incoming <code>Request</code> to have our new destination, and 
then pass it along to the destination server. This is where the beauty of using hyper for client <em>and</em> server comes into play: no need to futz around with changing body or header representations. The first thing we do is strip out any of the <code>STRIPPED</code> request headers:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let h = req.headers_mut();\nfor key in &STRIPPED {\n h.remove(*key);\n}\n</code></pre>\n<p>Next, we're going to construct the new request URI by combining:</p>\n<ul>\n<li>The hard-coded scheme (<code>https</code>)</li>\n<li>The hard-coded authority (<code>www.fpcomplete.com</code>)</li>\n<li>The path and query from the incoming request</li>\n</ul>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let mut builder = hyper::Uri::builder()\n .scheme(SCHEME)\n .authority(HOST);\nif let Some(pq) = req.uri().path_and_query() {\n builder = builder.path_and_query(pq.clone());\n}\n*req.uri_mut() = builder.build()?;\n</code></pre>\n<p>Panicking if <code>req.uri().path_and_query()</code> is <code>None</code> would be appropriate here, but as is my wont, I'm avoiding panics if possible. 
Next, for good measure, let's add in a little bit of debug output:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">log::debug!("request == {:?}", req);\n</code></pre>\n<p>Now we can construct our <code>Client</code> value to perform the HTTPS request:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let https = HttpsConnector::new();\nlet client = Client::builder().build(https);\n</code></pre>\n<p>And finally, let's perform the request, log the response, and return the response:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let response = client.request(req).await?;\nlog::debug!("response == {:?}", response);\nOk(response)\n</code></pre>\n<p>Our <code>main</code> function looks pretty similar to what we had before. I've added in initialization of <code>env-logger</code> with a default to <code>info</code> level output, and modified the program to <code>abort</code> if the server produces any errors:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();\n let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n let make_svc = make_service_fn(|_conn| async {\n Ok::<_, ReverseProxyError>(service_fn(handle))\n });\n\n let server = Server::bind(&addr).serve(make_svc);\n log::info!("Server started, bound on {}", addr);\n\n if let Err(e) = server.await {\n log::error!("server error: {}", e);\n std::process::abort();\n }\n}\n</code></pre>\n<p>The full code is <a href=\"https://gist.github.com/snoyberg/ab29c50671858e82ed5f6a88f8170449\">available as a Gist</a>. This program works as expected, and if I <code>cargo run</code> it and connect to <code>http://localhost:3000</code>, I see the FP Complete homepage. 
Yay!</p>\n<h2 id=\"wasteful-client\">Wasteful Client</h2>\n<p>The problem with this program is that it constructs a brand new <code>Client</code> value on every incoming request. That's expensive. Instead, we would like to produce the <code>Client</code> once, in <code>main</code>, and reuse it for each request. And herein lies the ownership puzzle. While we're at it, let's move away from using <code>const</code>s for the scheme and host, and instead bundle together the client, scheme, and host into a new <code>struct</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">struct ReverseProxy {\n scheme: String,\n host: String,\n client: Client&lt;HttpsConnector&lt;hyper::client::HttpConnector&gt;&gt;,\n}\n</code></pre>\n<p>Next, we'll want to change <code>handle</code> from a standalone function to a method on <code>ReverseProxy</code>. (We could equivalently pass in a reference to <code>ReverseProxy</code> for <code>handle</code>, but this feels more idiomatic):</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl ReverseProxy {\n async fn handle(&self, mut req: Request&lt;Body&gt;) -&gt; Result&lt;Response&lt;Body&gt;, ReverseProxyError&gt; {\n ...\n }\n}\n</code></pre>\n<p>Then, within <code>handle</code>, we can replace <code>SCHEME</code> and <code>HOST</code> with <code>&*self.scheme</code> and <code>&*self.host</code>. You may be wondering &quot;why <code>&*</code> and not <code>&</code>.&quot; Without <code>&*</code>, you'll get an error message:</p>\n<pre><code>error[E0277]: the trait bound `hyper::http::uri::Scheme: std::convert::From&lt;&std::string::String&gt;` is not satisfied\n --&gt; src\\main.rs:59:14\n |\n59 | .scheme(&self.scheme)\n | ^^^^^^ the trait `std::convert::From&lt;&std::string::String&gt;` is not implemented for `hyper::http::uri::Scheme`\n</code></pre>\n<p>This is one of those examples where the magic of deref coercion seems to fall apart. 
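To see the distinction in isolation, here is a tiny standalone snippet (hypothetical, not part of the proxy). Coercion from <code>&String</code> to <code>&str</code> happens at a call site that literally names <code>&str</code>, whereas a generic trait bound, like the <code>From</code> in the error above, does not trigger it:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">// Hypothetical illustration: all three calls compile, because the\n// parameter type is literally &str. A generic bound only sees the\n// concrete &String type, so there we must reborrow explicitly.\nfn takes_str(_: &str) {}\n\nfn main() {\n let scheme = String::from(&quot;https&quot;);\n takes_str(&scheme); // deref coercion kicks in here\n takes_str(&*scheme); // explicit: deref to str, then borrow\n takes_str(scheme.as_str()); // equivalent and more explicit\n}\n</code></pre>\n<p>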
Personally, I prefer using <code>self.scheme.as_str()</code> instead of <code>&*self.scheme</code> to be more explicit, but <code>&*self.scheme</code> is likely more idiomatic.</p>\n<p>Anyway, the final change within <code>handle</code> is to remove the <code>let https = ...;</code> and <code>let client = ...;</code> statements, and instead construct our response with:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let response = self.client.request(req).await?;\n</code></pre>\n<p>With that, our <code>handle</code> method is done, and we can focus our efforts on the true puzzle: the <code>main</code> function itself.</p>\n<h2 id=\"the-easy-part\">The easy part</h2>\n<p>The easy part of this is great: construct a <code>ReverseProxy</code> value, and provide the <code>make_svc</code> to <code>serve</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">#[tokio::main]\nasync fn main() {\n env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();\n let addr = SocketAddr::from(([0, 0, 0, 0], 3000));\n\n let https = HttpsConnector::new();\n let client = Client::builder().build(https);\n\n let rp = ReverseProxy {\n client,\n scheme: "https".to_owned(),\n host: "www.fpcomplete.com".to_owned(),\n };\n\n // here be dragons\n\n let server = Server::bind(&addr).serve(make_svc);\n log::info!("Server started, bound on {}", addr);\n\n if let Err(e) = server.await {\n log::error!("server error: {}", e);\n std::process::abort();\n }\n}\n</code></pre>\n<p>That middle part is where the difficulty lies. Previously, this code looked like:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(|_conn| async {\n Ok::<_, ReverseProxyError>(service_fn(handle))\n});\n</code></pre>\n<p>We no longer have a <code>handle</code> function. 
Working around that little enigma doesn't seem so bad initially. We'll create a closure as the argument to <code>service_fn</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(|_conn| async {\n Ok::<_, ReverseProxyError>(service_fn(|req| {\n rp.handle(req)\n }))\n});\n</code></pre>\n<p>While that looks appealing, it fails lifetimes completely:</p>\n<pre><code>error[E0597]: `rp` does not live long enough\n --> src\\main.rs:90:13\n |\n88 | let make_svc = make_service_fn(|_conn| async {\n | ____________________________________-------_-\n | | |\n | | value captured here\n89 | | Ok::<_, ReverseProxyError>(service_fn(|req| {\n90 | | rp.handle(req)\n | | ^^ borrowed value does not live long enough\n91 | | }))\n92 | | });\n | |_____- returning this value requires that `rp` is borrowed for `'static`\n...\n101 | }\n | - `rp` dropped here while still borrowed\n</code></pre>\n<p>Nothing in the lifetimes of these values tells us that the <code>ReverseProxy</code> value will outlive the service. We cannot simply borrow a reference to <code>ReverseProxy</code> inside our closure. Instead, we're going to need to move ownership of the <code>ReverseProxy</code> to the closure.</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(|_conn| async {\n Ok::<_, ReverseProxyError>(service_fn(move |req| {\n rp.handle(req)\n }))\n});\n</code></pre>\n<p>Note the addition of <code>move</code> in front of the closure. 
Unfortunately, this doesn't work, and gives us a confusing error message:</p>\n<pre><code>error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements\n --> src\\main.rs:90:16\n |\n90 | rp.handle(req)\n | ^^^^^^\n |\nnote: first, the lifetime cannot outlive the lifetime `'_` as defined on the body at 89:47...\n --> src\\main.rs:89:47\n |\n89 | Ok::<_, ReverseProxyError>(service_fn(move |req| {\n | ^^^^^^^^^^\nnote: ...so that closure can access `rp`\n --> src\\main.rs:90:13\n |\n90 | rp.handle(req)\n | ^^\n = note: but, the lifetime must be valid for the static lifetime...\nnote: ...so that the type `hyper::proto::h2::server::H2Stream<impl std::future::Future, hyper::Body>` will meet its required lifetime bounds\n --> src\\main.rs:94:38\n |\n94 | let server = Server::bind(&addr).serve(make_svc);\n | ^^^^^\n\nerror: aborting due to previous error\n</code></pre>\n<p>Instead of trying to parse that, let's take a step back, reassess, and then try again.</p>\n<h2 id=\"so-many-layers\">So many layers!</h2>\n<p>Remember way back to the beginning of this post. I went into some details around the process of having a <code>MakeService</code>, which would be run for each new incoming connection, and a <code>Service</code>, which would be run for each new request on an existing connection. The way we've written things so far, the first time we handle a request, that request handler will consume the <code>ReverseProxy</code>. That means that we would have a use-after-move for each subsequent request on that connection. We'd <em>also</em> have a use-after-move for each subsequent connection we receive.</p>\n<p>We want to share our <code>ReverseProxy</code> across multiple different <code>MakeService</code> and <code>Service</code> instantiations. 
Since this will occur across multiple system threads, the most straightforward way to handle this is to wrap our <code>ReverseProxy</code> in an <code>Arc</code>:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let rp = std::sync::Arc::new(ReverseProxy {\n client,\n scheme: "https".to_owned(),\n host: "www.fpcomplete.com".to_owned(),\n});\n</code></pre>\n<p>Now we're going to need to play around with <code>clone</code>ing this <code>Arc</code> at appropriate times. In particular, we'll need to clone twice: once inside the <code>make_service_fn</code> closure, and once inside the <code>service_fn</code> closure. This will ensure that we never move the <code>ReverseProxy</code> value out of the closure's environment, and that our closure can remain a <code>FnMut</code> instead of an <code>FnOnce</code>.</p>\n<p>And, in order to make <em>that</em> happen, we'll need to convince the compiler through appropriate usages of <code>move</code> to move ownership of the <code>ReverseProxy</code>, instead of borrowing a reference to a value with a different lifetime. This is where the fun begins! 
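</p>
<p>To see the sharing pattern in isolation, here is a minimal, std-only sketch (no hyper involved; the <code>Shared</code> struct and <code>run_simulation</code> function are hypothetical stand-ins). Each spawned thread plays the role of a connection, each loop iteration the role of a request, and the <code>Arc</code> is cloned once per layer so the underlying value is never moved:</p>

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for ReverseProxy: one shared value, never moved.
struct Shared {
    hits: AtomicUsize,
}

// Spawn one thread per simulated connection; each handles several requests.
fn run_simulation(conns: usize, reqs: usize) -> usize {
    let shared = Arc::new(Shared { hits: AtomicUsize::new(0) });
    let mut handles = Vec::new();
    for _conn in 0..conns {
        // First clone: one Arc handle per connection (like make_service_fn).
        let per_conn = Arc::clone(&shared);
        handles.push(thread::spawn(move || {
            for _req in 0..reqs {
                // Second clone: one Arc handle per request (like service_fn).
                let per_req = Arc::clone(&per_conn);
                per_req.hits.fetch_add(1, Ordering::SeqCst);
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    // Every clone pointed at the same underlying value.
    shared.hits.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(run_simulation(4, 10), 40);
}
```

<p>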
Let's go through a series of modifications until we get to our final mind-bender.</p>\n<h2 id=\"adding-move\">Adding move</h2>\n<p>To recap, we'll start with this code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let rp = std::sync::Arc::new(ReverseProxy {\n client,\n scheme: "https".to_owned(),\n host: "www.fpcomplete.com".to_owned(),\n});\n\nlet make_svc = make_service_fn(|_conn| async {\n Ok::<_, ReverseProxyError>(service_fn(|req| {\n rp.handle(req)\n }))\n});\n</code></pre>\n<p>The first thing I tried was adding an <code>rp.clone()</code> inside the first <code>async</code> block:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(|_conn| async {\n let rp = rp.clone();\n Ok::<_, ReverseProxyError>(service_fn(|req| {\n rp.handle(req)\n }))\n});\n</code></pre>\n<p>This doesn't work, presumably because I need to stick some <code>move</code>s on the initial closure and <code>async</code> block like so:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| async move {\n let rp = rp.clone();\n Ok::<_, ReverseProxyError>(service_fn(|req| {\n rp.handle(req)\n }))\n});\n</code></pre>\n<p>This unfortunately still doesn't work, and gives me the error message:</p>\n<pre><code>error[E0507]: cannot move out of `rp`, a captured variable in an `FnMut` closure\n --> src\\main.rs:88:60\n |\n82 | let rp = std::sync::Arc::new(ReverseProxy {\n | -- captured outer variable\n...\n88 | let make_svc = make_service_fn(move |_conn| async move {\n | ____________________________________________________________^\n89 | | let rp = rp.clone();\n | | --\n | | |\n | | move occurs because `rp` has type `std::sync::Arc<ReverseProxy>`, which does not implement the `Copy` trait\n | | move occurs due to use in generator\n90 | | Ok::<_, 
ReverseProxyError>(service_fn(|req| {\n91 | | rp.handle(req)\n92 | | }))\n93 | | });\n | |_____^ move out of `rp` occurs here\n</code></pre>\n<p>It took me a while to grok what was happening. And in fact, I'm not 100% certain I grok it yet. But I believe what is happening is:</p>\n<ul>\n<li>The closure grabs ownership of <code>rp</code> (good)</li>\n<li>The <code>async</code> block grabs ownership of <code>rp</code>, which seemed good, but isn't</li>\n<li>Inside the <code>async</code> block, we make a clone of <code>rp</code></li>\n<li>When the <code>async</code> block is dropped, its ownership of the original <code>rp</code> is dropped</li>\n<li>Since the <code>rp</code> was moved out of the closure, the closure is now an <code>FnOnce</code> and cannot be called a second time</li>\n</ul>\n<p>That's no good! It turns out the trick to fixing this isn't so difficult. Don't grab ownership in the <code>async</code> block. Instead, clone the <code>rp</code> in the closure, before the <code>async</code> block:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n let rp = rp.clone();\n async move {\n Ok::<_, ReverseProxyError>(service_fn(|req| {\n rp.handle(req)\n }))\n }\n});\n</code></pre>\n<p>Woohoo! One <code>clone</code> down. This code still doesn't compile, but we're closer. The next change to make is simple: stick a <code>move</code> on the inner closure:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n let rp = rp.clone();\n async move {\n Ok::<_, ReverseProxyError>(service_fn(move |req| {\n rp.handle(req)\n }))\n }\n});\n</code></pre>\n<p>This also fails, but going back to our description before, it's easy to see why. We still need a second <code>clone</code>, to make sure we aren't moving the <code>ReverseProxy</code> value out of the closure. 
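</p>
<p>The <code>FnMut</code>-versus-<code>FnOnce</code> distinction can be reproduced with plain closures, no hyper required. In this illustrative sketch (<code>call_twice</code> and the numeric payload are hypothetical), cloning the <code>Arc</code> inside the body is exactly what keeps the closure callable more than once:</p>

```rust
use std::sync::Arc;

// A closure that can be called repeatedly must be FnMut, not FnOnce.
fn call_twice<F: FnMut() -> usize>(mut f: F) -> usize {
    f() + f()
}

fn main() {
    // Hypothetical stand-in for Arc<ReverseProxy>.
    let rp = Arc::new(41usize);

    // Cloning the Arc inside the body leaves the captured rp intact,
    // so the closure stays FnMut. Moving rp itself out of the body
    // would downgrade the closure to FnOnce, and call_twice would
    // reject it at compile time.
    let handler = move || {
        let rp = Arc::clone(&rp); // the second clone, once per call
        *rp + 1
    };

    assert_eq!(call_twice(handler), 84);
}
```

<p>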
Making that change is easy, but unfortunately doesn't fully solve our problem. This code:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n let rp = rp.clone();\n async move {\n Ok::<_, ReverseProxyError>(service_fn(move |req| {\n let rp = rp.clone();\n rp.handle(req)\n }))\n }\n});\n</code></pre>\n<p>Still gives us the error message:</p>\n<pre><code>error[E0515]: cannot return value referencing local variable `rp`\n --> src\\main.rs:93:17\n |\n93 | rp.handle(req)\n | --^^^^^^^^^^^^\n | |\n | returns a value referencing data owned by the current function\n | `rp` is borrowed here\n</code></pre>\n<p>What's going on here?</p>\n<h2 id=\"did-your-future-borrow-my-reference\">Did your Future borrow my reference?</h2>\n<p>Again, referring to the introduction, I mentioned that the <code>service_fn</code> parameter had to return a <code>Future<Output...></code>. This is an example of the <code>impl Trait</code> approach. I've previously <a href=\"https://tech.fpcomplete.com/rust/ownership-and-impl-trait/\">blogged about ownership and impl trait</a>. There are some pain points around this combination. And we've hit one of them.</p>\n<p>The return type of our <code>handle</code> method doesn't indicate what underlying type is implementing <code>Future</code>. That underlying implementation <em>may</em> choose to hold onto references passed into the <code>handle</code> method. That would include references to <code>&self</code>. And that means if we return that <code>Future</code> outside of our closure, a reference may outlive the value.</p>\n<p>I can think of two ways to solve this problem, though there are probably more. The first one I'll show you isn't the one I prefer, but is the one that likely gets the idea across more clearly. Our <code>handle</code> method is taking a reference to <code>ReverseProxy</code>. 
But if it didn't take a reference, and instead received the <code>ReverseProxy</code> by move, there would be no references to accidentally end up in the <code>Future</code>.</p>\n<p>Cloning the <code>ReverseProxy</code> itself is expensive. Fortunately, we have another option: pass in the <code>Arc<ReverseProxy></code>!</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">impl ReverseProxy {\n async fn handle(self: std::sync::Arc<Self>, mut req: Request<Body>) -> Result<Response<Body>, ReverseProxyError> {\n ...\n }\n}\n</code></pre>\n<p>Without changing any code inside the <code>handle</code> method or the <code>main</code> function, this compiles and behaves correctly. But like I said: I don't like it very much. This is limiting the generality of our <code>handle</code> method. It feels like putting the complexity in the wrong place. (Maybe you'll disagree and say that this is the better solution. That's fine, I'd be really interested to hear people's thoughts.)</p>\n<p>Instead, another possibility is to introduce an <code>async move</code> inside <code>main</code>. This will take ownership of the <code>Arc<ReverseProxy></code>, and ensure that it lives as long as the <code>Future</code> generated by that <code>async move</code> block itself. This solution looks like this:</p>\n<pre data-lang=\"rust\" class=\"language-rust \"><code class=\"language-rust\" data-lang=\"rust\">let make_svc = make_service_fn(move |_conn| {\n let rp = rp.clone();\n async move {\n Ok::<_, ReverseProxyError>(service_fn(move |req| {\n let rp = rp.clone();\n async move { rp.handle(req).await }\n }))\n }\n});\n</code></pre>\n<p>We need to call <code>.await</code> inside the <code>async</code> block to ensure we don't return a future-of-a-future. But with that change, everything works. I'm not terribly thrilled with this. It feels like an ugly hack. 
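</p>
<p>The same ownership issue can be shown with plain closures standing in for futures. In this sketch (<code>ReverseProxyish</code> and <code>make_handler</code> are hypothetical names, not part of the real code), the returned value must own its <code>Arc</code> rather than borrow it, mirroring what the <code>async move</code> block does for the returned future:</p>

```rust
use std::sync::Arc;

struct ReverseProxyish {
    port: u16,
}

impl ReverseProxyish {
    fn handle(&self) -> u16 {
        self.port
    }
}

// Returning a closure that borrowed a local Arc would not compile, for
// the same reason the borrowed future could not escape service_fn. A
// move closure makes the returned value own its Arc, just as async move
// makes the future own the Arc<ReverseProxy>.
fn make_handler(rp: Arc<ReverseProxyish>) -> impl Fn() -> u16 {
    move || rp.handle()
}

fn main() {
    let rp = Arc::new(ReverseProxyish { port: 3000 });
    let handler = make_handler(Arc::clone(&rp));
    drop(rp); // the handler keeps its own Arc alive
    assert_eq!(handler(), 3000);
}
```

<p>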
I don't have any recommendations, but I hope there are improvements to the <code>impl Trait</code> ownership story in the future.</p>\n<h2 id=\"one-final-improvement\">One final improvement</h2>\n<p>One final tweak. We put <code>async move</code> after the first <code>rp.clone()</code> originally. This helped make the error messages more tractable. But it turns out that that <code>move</code> isn't doing anything useful. The <code>move</code> on the inner closure already forces a move of the cloned <code>rp</code>.