Cloud computing architecture refers to how components interact with one another. These components typically consist of a front-end platform (web application, desktop, mobile), back-end platforms (servers, storage), a cloud-based delivery model, and a network (Internet, intranet, intercloud).
Requirements to make visual cloud management effective and scalable
In the previous article about the future of cloud computing, we stated that “cloud management will eventually be done with visual representations of architectural concepts”. Today, several options exist for an engineer to design a cloud architecture visually. One can use a whiteboard with abstract representations of components. Alternatively, products like Draw.io allow the use of engineering diagrams. However, it turns out that while Infrastructure as Code (IaC) tools have been widely adopted, managing cloud resources visually is not yet considered a viable practice. Since a picture is worth a thousand words, we can legitimately ask ourselves why the industry sticks to thousands of lines of Domain-Specific Languages (DSLs). What does it take to let engineers deploy their infrastructures from a whiteboard? What are the limitations of the current solutions? This article goes through these questions and presents a new solution. Throughout, I’ll refer to the brain-to-cloud chain of actions. It designates all the steps required from the moment an engineer has an infrastructure need, up to the moment that need is addressed by an existing, deployed architecture on a cloud provider.
During my time in engineering school, a teacher showed the class two images and asked, “can you tell me which bar on the left is different from the others?”. Of course, everyone answered within a second. Right after, he asked, “now can you tell me where the typo is in this code?”. This time, people needed several minutes to identify the missing semicolon. Any linter would catch this problem in a simple C program. But when it comes to more complex programs, debugging can be tricky to achieve by just reading code.
As the teacher explained, that’s because code is not meant for humans to understand; it is designed for computers to execute. And even if languages do an impressive job of abstracting complexity and offering an “easy to use” API, they will never be able to compete with the visual power of our brains.
While Infrastructure as Code (IaC) has addressed many challenges related to distributed architectures, it is not a silver bullet. In fact, it still requires the user to have a profound understanding of computing principles like networking and system administration. Moreover, a new set of challenges has come up: one often spends hours in extensive documentation to identify the one tiny variable required just to run a simple application. With scale, template files become unreadable, and thousands of lines of YAML are error-prone. And, as we showed in the previous section, code files are simply not adapted to human reading. Just look at these two images and honestly state which one is better suited to understanding a cloud architecture.
As a result, IaC falls well short of easing the deployment of an architecture. Let’s take a moment to go through it step by step. First, an engineer wants to deploy a simple three-tier architecture with a frontend, a backend and a database. The drawing on a whiteboard takes three minutes: the time to draw a bucket for the frontend, a box for the backend server and a disk for the database (plus two arrows to represent data flows). Now, this clear and intuitive schema takes an hour to translate into an AWS-compliant Terraform file. And another hour to actually deploy something without hitting an unreadable error from the non-standard API you’re using (cloud-vendor and DSL specific). We’re now happy that it’s replicable on demand (though not on a different cloud provider, since the Terraform APIs are vendor-specific), but the process took time not because of design concerns, but because of friction all along the brain-to-cloud chain of actions. This friction arises at two moments. First, when the engineer translates agnostic components into a specific code API (generally a DSL, in the context of IaC). Because this translation requires mastering a complex language, it means going back and forth through the deep seas of extensive documentation to be able to write valid files. Second, when applying the code files, the specificities of the target cloud provider force the engineer to dive into yet another documentation to comply with the opinionated cloud API and features. As a result, IaC makes the brain-to-cloud chain time-consuming and ineffective.
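To make the friction concrete, here is a deliberately minimal Terraform sketch of what those three whiteboard boxes might become on AWS. All names and values below are illustrative, and a real deployment would need far more: a VPC, subnets, security groups and IAM rules, none of which appear on the whiteboard.

```hcl
# Frontend: the "bucket" drawn on the whiteboard
resource "aws_s3_bucket" "frontend" {
  bucket = "my-app-frontend" # illustrative name
}

# Backend: the "box" for the server
resource "aws_instance" "backend" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}

# Database: the "disk"
resource "aws_db_instance" "database" {
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "app"
  password            = var.db_password # must be declared and supplied separately
  skip_final_snapshot = true
}

# The two arrows (data flows) have no direct equivalent here: they must be
# encoded indirectly via security groups, subnets and endpoints.
```

Even this stripped-down sketch already requires knowing three resource types and their mandatory arguments; the arrows, the most intuitive part of the drawing, are the hardest part to express in code.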
Since I’m not the only one who prefers to work with visual objects, solutions exist to help our engineer design more or less complex architectures. These solutions range from the easiest to use to the most agnostic. On one side, CloudCraft lets you use existing AWS components. On the other, agnostic design tools like Draw.io offer a range of standardized diagrams along with cloud products and free-form drawing. But it turns out that these solutions do not tackle the pain across the entire brain-to-cloud chain.
Admittedly, these tools allow saving and potentially even versioning the visuals. But none integrates successfully with DevOps tools like Git. The schema is not linked to the documentation, and it does not necessarily represent what is actually deployed on a cloud provider. Moreover, the more cloud products you use, the less able you are to translate them into another cloud provider’s equivalents.
That’s because, so far, visual designers for cloud management have not solved any problem; they have only moved the whiteboard onto a computer. But what’s the point of moving a perfectly usable whiteboard to a computer if it doesn’t even integrate with the other tools? As of today, an engineer who uses graphical tools to design architectures will eventually have to translate the visual objects into a declarative, vendor-specific language. And we come back to the pains of IaC: vendor lock-in (even with Terraform), time-consuming documentation trawling and error-prone typing. Also, the architecture is not even deployed at this stage. And later, the schema will not be updated when a merge request changes the produced IaC files.
As we observed in the previous section, graphical tools cannot yet be considered an efficient way to manage cloud resources. So, what does it take to get there? What is missing so that engineers can design an architecture with visual, cloud-agnostic components, and deploy it without going through the visual-to-code step? In other words, we’re going to see which integrations visual tools require so that they can become the entry point of any engineering process regarding cloud architecture.
First, the power of visual designers will come the day we’re able to translate those diagrams into an actually deployed set of components in a cloud. Concretely, that means a single button click, just like the one we use to save files, should make the right API calls to bring the architecture into existence. This killer feature is the real MVP here, because it removes the pain of the brain-to-code and code-to-cloud steps. You no longer spend hours in documentation or on syntax problems, and you don’t spend time translating visual objects into their code equivalents. You just see, design & deploy.
Second, we want to integrate the visual diagrams into engineers’ workflows. Because computer science relies on tracking tools, automated pipelines and linked objects, whiteboards will not enable the management of cloud resources until they are fully integrated with the other technologies. That means we want a versioning system that leverages Git commits, branches, merges, tags and so on. This way, visual designers will move from being digital whiteboards to being a step in any engineering process.
Last, but not least, it is important that the design area of your visual tool offers cloud-agnostic components. That means we want a library of standard icons for S3-compatible APIs, computing resources (GPU, general purpose, etc.), and a generic way to link nodes at the network level. Many other cloud concepts are, in fact, redundant between vendors, and abstracting them to provide a generic set of components will allow visual tools to become the default approach whether you’re deploying on AWS or DigitalOcean.
Now that we’ve reviewed some of the most important features visual tools need in order to become part of the engineering process, I can introduce the one tool that actually comes closest to meeting those requirements.
To begin with a simple statement: Brainboard allows one-click deployment of an architecture on any cloud provider. That means this tool tackles the Infrastructure as Code pain points by removing the translation steps from the engineering process. The user designs an architecture on a whiteboard and can have it created on the selected cloud provider by going to the deployment tab and clicking the launch button.
What’s compelling behind the scenes is that nodes are not directly connected to a vendor API. Instead, the backend engine creates an abstract representation of the architecture and generates the associated Terraform files. And boom, now we’ve got the integration we needed. Instead of locking the user into the Brainboard platform, it integrates seamlessly with any environment, allowing one to export the generated files, but also to perform native Git versioning.
With those capabilities, it’s now easy to envision a generic cloud abstraction. In the upcoming months, the team will develop vendor-agnostic visual components, like object storage, memory-intensive computing or private networks. These objects will let users design infrastructure without selecting cloud products, using instead agnostic components that represent cloud computing concepts. When one is ready to deploy, selecting a cloud provider will trigger an automatic translation from abstract components to their closest equivalents in the selected vendor’s catalog. Then a button click will deploy the architecture.
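As a hypothetical illustration of that translation step (the mapping below is an assumption for the sake of the example, not Brainboard’s actual catalog), a single agnostic “object storage” node could resolve to a different Terraform resource depending on the provider selected at deploy time:

```hcl
# Agnostic node: "object storage", resolved at deploy time.

# If the user selects AWS, the closest equivalent is an S3 bucket:
resource "aws_s3_bucket" "assets" {
  bucket = "my-app-assets" # illustrative name
}

# If the user selects DigitalOcean, the closest equivalent is a
# Spaces bucket, which exposes an S3-compatible API:
resource "digitalocean_spaces_bucket" "assets" {
  name   = "my-app-assets"
  region = "nyc3"
}
```

Because both products speak the same S3-compatible protocol, the application code consuming the bucket would not need to change; only the generated resource block does.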
This article has shown you three important things. First, that visual representations are powerful in the context of managing cloud resources: they allow our brains to grasp complex information much more effectively, thus reducing errors and time consumption. Second, we described what current visual designers lack to allow proper cloud management: agnostic components, one-click deployment and versioning are three mandatory points to remove the burden of Infrastructure as Code. Third, we showed how Brainboard implements this solution in a very easy-to-use way. The tool doesn’t pretend to replace your habits or the tools you already use, but to integrate with them instead. Don’t hesitate to come test it; we’ll be grateful for any feedback and, most of all, would be happy to design it with you. For this, come join us on the Brainboard Community Slack. See you there!
I am not a DevOps engineer, nor a Cloud Architect.
Three weeks ago, I joined Brainboard (BB for short 😉) as a member of a squad building a fast-growing community around Brainboard, an all-in-one solution for the Multi-Cloud (Multi-verse). We have the opportunity to impact the Cloud Computing industry at large scale, and I intend to understand it well.
Trends are among the things we monitor here at Brainboard.
For the past few years, we have seen the Cloud shifting course towards a more sustainable long-term future. According to predictions from Gartner, global spending on cloud services is expected to reach over $482 billion in 2022, up from $313 billion in 2020. That’s promising for the industry!
Let’s dive deep into the trends that matter for the Cloud Computing industry and how they can help shape it over the next year or so.
The Cloud(1) has been around for 20 years now and has radically changed the way we run businesses, as it accelerates innovation and lowers a lot of risks for companies (like wasting money before finding a market). But most importantly, it triggered a paradigm shift in the IT world, as companies (and engineers) started thinking differently about how to access computing resources to run applications and businesses.