Government organizations and institutions have IT infrastructure requirements and goals similar to those of commercial enterprises: it must be flexible enough to adapt to the changing needs of the organization, easy to maintain and monitor, scalable to meet varying workload requirements, highly available and resilient to failures, and of course secure enough to protect the sensitive data such organizations must process. In addition, they must meet the requirements of various national and state-level regulations, like the Federal Risk and Authorization Management Program (FedRAMP), the Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG), the Federal Information Security Management Act (FISMA), and other legislation.
Read more...
A generation of system engineers has grown up never having had to visit a data center as part of their job. It’s easy to forget that while cloud computing offers an abstraction over physical servers, those servers still exist behind the scenes, and there are more of them than ever before. Hardly any companies still host all of their IT infrastructure on premises, but many enterprises have reasons to continue managing physical servers as part of their infrastructure.
Read more...
One of the core features of Banzai Cloud’s container management platform, Pipeline, is its ability to build hybrid clouds with ease. The most important reason behind the introduction of our own CNCF certified Kubernetes distribution, Pipeline Kubernetes Engine (PKE), was to provide our customers with a Kubernetes environment that behaves consistently across public cloud providers, on-premise environments, and combinations of the two. A single approach does not cover all use cases, so the Banzai Cloud Pipeline platform provides four different ways to build and use hybrid clouds.
Read more...
One of the key features of Pipeline, our hybrid cloud container management platform, is its ability to provision Kubernetes clusters across five different cloud providers (Alibaba, Amazon, Azure, Google, Oracle), private datacenters (VMware, bare metal, etc.), or any combination thereof. It does this by using either a cloud provider-managed Kubernetes service or our own CNCF certified Kubernetes distribution, PKE. Each cloud provider’s internal load balancer (LB) is different, and so is the way each is integrated with Kubernetes.
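From a user’s point of view, that integration surfaces through Kubernetes itself: a Service of type LoadBalancer asks the provider’s cloud controller to provision its native load balancer, and provider-specific behavior is tuned through annotations. The client-go sketch below is a hypothetical illustration (the Service name, selector, and namespace are made up; the annotation shown is the AWS-specific one for internal load balancers), not part of Pipeline’s code.

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "demo-lb", // hypothetical Service name
			Annotations: map[string]string{
				// Provider-specific behavior is driven by annotations; this one
				// asks the AWS cloud controller for an internal (VPC-only) LB.
				"service.beta.kubernetes.io/aws-load-balancer-internal": "true",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "demo"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}

	// Creating the Service triggers the provider integration to provision its LB.
	if _, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```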
Read more...
Here at Banzai Cloud, we provision and manage Kubernetes clusters on multiple cloud providers (Alibaba, Amazon, Azure, Google, Oracle) and on-premise (bare metal or VMware) with our container management platform, Pipeline. We support both cloud provider-managed K8s distributions (ACK, AKS, EKS, GKE, OKE) and our own lightweight, CNCF certified Kubernetes distribution, PKE. Both of these approaches have their pros and cons, though that’s not what we’ll be talking about today (we’ve blogged about this several times already, see Deploying Pipeline Kubernetes Engine (PKE) on Azure).
Read more...
At Banzai Cloud we blog mostly about our container management platform, Pipeline. We frequently gloss over the workhorse underneath Pipeline, our CNCF certified Kubernetes distribution, PKE. That’s because our customers usually install and manage PKE with Pipeline, and thus benefit from all the enterprise-grade features we’ve already made available on that platform. However, PKE is also available on its own, and is one of the simplest ways to kickstart a Kubernetes cluster across multiple supported environments.
Read more...
Every major cloud provider offers a managed Kubernetes service that aims to simplify the provisioning of Kubernetes clusters in its respective environment. The Banzai Cloud Pipeline platform has always supported these major providers - AWS, Azure, Google, Oracle, Alibaba Cloud - turning their managed K8s services into a single solution-oriented application platform that allows enterprises to develop, deploy and securely scale container-based applications in multi-cloud environments. While this was very appealing from the outset, we quickly realized that our enterprise users wanted to implement more sophisticated use cases than this initial approach allowed.
Read more...
In our last post about using Cadence workflows to spin up Kubernetes clusters, we outlined the basic concepts of Cadence and walked you through how to use the Cadence workflow engine. Let’s dive into the experiences and best practices associated with implementing complex workflows in Go. We will use the deployment of our PKE Kubernetes distribution from Pipeline to AWS EC2 as an example. Of course, you can deploy PKE independently, but Pipeline takes care of your cluster’s entire life-cycle, starting from nodepool and instance type recommendations, through infrastructure deployment, certificate management, and the opt-in deployment and configuration of our powerful monitoring, logging, service mesh, security scan, and backup/restore solutions, to the scaling or termination of your cluster.
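To give a flavor of what such a workflow looks like in Go, here is a minimal, hypothetical sketch using the Cadence Go client; the workflow and activity names are made up for illustration and do not mirror Pipeline’s actual workflow code.

```go
package workflows

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// CreateClusterWorkflow is a hypothetical sketch of a Cadence workflow that
// chains two made-up activities; real cluster workflows are far richer.
func CreateClusterWorkflow(ctx workflow.Context, clusterID string) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    10 * time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	// Each ExecuteActivity call returns a Future; Get blocks the workflow
	// (deterministically) until the activity completes or fails.
	if err := workflow.ExecuteActivity(ctx, "CreateInfrastructureActivity", clusterID).Get(ctx, nil); err != nil {
		return err
	}
	return workflow.ExecuteActivity(ctx, "InstallNodesActivity", clusterID).Get(ctx, nil)
}
```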
Read more...
A strong focus on security has always been a key part of the Banzai Cloud Pipeline platform. We incorporated security into our architecture early in the design process, and developed a number of supporting components that can be used easily and natively on Kubernetes. From secrets and certificates generated and stored in Vault, through secrets dynamically injected into pods and provider-agnostic authentication and authorization using Dex, to container vulnerability scans and lots more: the Pipeline platform handles all of these as default, tier-zero features.
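For context, reading a secret directly with the official Vault Go client looks roughly like the sketch below. The mount and secret path are hypothetical, and this is only an illustration of the Vault API involved, not how Pipeline itself stores or injects secrets.

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig picks up VAULT_ADDR and VAULT_TOKEN from the environment.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical path on a KV v2 engine mounted at "secret/".
	secret, err := client.Logical().Read("secret/data/demo/credentials")
	if err != nil {
		log.Fatal(err)
	}
	if secret == nil {
		log.Fatal("secret not found")
	}

	// KV v2 wraps the payload in a nested "data" field.
	data, ok := secret.Data["data"].(map[string]interface{})
	if !ok {
		log.Fatal("unexpected secret format")
	}
	fmt.Println("username:", data["username"])
}
```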
Read more...
One of the main goals of the Banzai Cloud Pipeline platform and the PKE Kubernetes distribution is to radically simplify the whole Kubernetes experience and execute complex operations on behalf of our users. These operations communicate with a number of different remote services (from cloud providers to on-prem virtualization or storage providers) whose behavior we have little or no way to influence: how long a call will take, whether it will ever succeed, and whether it will produce the desired result.
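A common defensive pattern for such calls is to wrap them in retries with exponential backoff under an overall deadline. The Go sketch below is a generic, hypothetical illustration of that idea, not Pipeline’s actual implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// withRetry is a hypothetical helper: it calls fn until it succeeds, the
// attempt budget is exhausted, or the context is cancelled, doubling the
// wait between attempts (exponential backoff).
func withRetry(ctx context.Context, attempts int, initialDelay time.Duration, fn func(context.Context) error) error {
	delay := initialDelay
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(ctx); err == nil {
			return nil
		}
		select {
		case <-time.After(delay):
			delay *= 2
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Bound the total time spent retrying with a context deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// A made-up "remote call" that always fails, to exercise the helper.
	err := withRetry(ctx, 5, 500*time.Millisecond, func(ctx context.Context) error {
		return fmt.Errorf("remote service unavailable")
	})
	fmt.Println(err)
}
```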
Read more...