These days it seems that everyone is using some sort of CI/CD solution for their software development projects, whether a third-party service or something written in house. Those of us working on the Banzai Cloud Pipeline platform are no different; our CI/CD solution is capable of creating Kubernetes clusters, running and testing builds, pulling secrets from Vault, packaging and deploying applications as Helm charts, and lots more. For quite a while now (since the end of 2017), we've been looking for Kubernetes-native solutions, but could not find many.

Read more...

At Banzai Cloud we run and deploy containerized applications to our PaaS, Pipeline. Java and JVM-based workloads are among the notable workloads deployed to Pipeline, so getting them right is pretty important for us and our users. Related posts:

Java/JVM based workloads on Kubernetes with Pipeline
Why my Java application is OOMKilled
Deploying Java Enterprise Edition applications to Kubernetes
A complete guide to Kubernetes Operator SDK
Spark, Zeppelin, Kafka on Kubernetes

Read more...

At Banzai Cloud we provision different applications and frameworks to Pipeline, the PaaS we built on Kubernetes. We practice what we preach, and our PaaS's control plane also runs on Kubernetes and requires a layer of data storage. It was therefore necessary to explore two different use cases: how to deploy and run a distributed, scalable and fully SQL-compliant database that covers both our clients' needs and our own internal ones.

Read more...

At Banzai Cloud we run and deploy containerized applications to Pipeline, our PaaS. Those of you who (like us) run Java applications inside Docker have probably already come across the problem of the JVM inaccurately detecting available memory when running inside a container: instead of detecting the memory available to the Docker container, the JVM sees the memory of the host machine. This can lead to cases in which applications running inside containers are killed whenever they try to use more memory than the Docker container's limit.
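A quick way to see the mismatch is to compare what the JVM reports with the limit the container's cgroup actually enforces. The sketch below is a minimal, hypothetical diagnostic (the class name is ours; it assumes a cgroup v1 host and Java 11+ for Files.readString), not part of Pipeline itself. On older JVMs without container awareness, the reported max heap is derived from the host's memory rather than the container limit.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical diagnostic: prints the memory the JVM thinks it can use
// versus the limit the container's cgroup actually allows.
public class ContainerMemoryCheck {

    // cgroup v1 memory limit file (assumption: on cgroup v2 hosts the
    // path is /sys/fs/cgroup/memory.max instead).
    private static final Path CGROUP_V1_LIMIT =
            Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes");

    public static void main(String[] args) throws IOException {
        Runtime runtime = Runtime.getRuntime();

        // Maximum heap the JVM will grow to: -Xmx if set, otherwise an
        // ergonomics default based on the memory the JVM detected.
        System.out.printf("JVM max heap:        %d MiB%n",
                runtime.maxMemory() / (1024 * 1024));
        System.out.printf("JVM visible CPUs:    %d%n",
                runtime.availableProcessors());

        // The memory the container is actually allowed to use; exceeding
        // this is what gets the process OOMKilled by the kernel.
        if (Files.exists(CGROUP_V1_LIMIT)) {
            long limitBytes = Long.parseLong(
                    Files.readString(CGROUP_V1_LIMIT).trim());
            System.out.printf("cgroup memory limit: %d MiB%n",
                    limitBytes / (1024 * 1024));
        } else {
            System.out.println("cgroup v1 memory limit file not found");
        }
    }
}
```

Run inside a container with, say, a 256 MiB limit, an older or unconfigured JVM will typically report a max heap far larger than the cgroup limit, which is exactly the condition that ends in an OOMKill.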

Read more...