Scaling a web app on Google Compute Engine (Get Cooking in Cloud)


SPEAKER 1: Welcome to Get Cooking in Cloud, where we share the best recipes to apply in your cloud kitchen. I'm Priyanka Vergadia. And in this episode, we will
talk about scaling a web application on Google
Compute Engine. As we learned in our
Compute Engine example, when web apps are
deployed on GCE instances, deployment and scaling are done automatically and seamlessly by using instance templates. When your website becomes popular and your users grow from 1 to 1 million, you need to add instances. And when the requests
go back down, you need to remove
those instances to keep the costs low. But how does adding and removing
instances automatically work? Compute Engine
autoscaling policies. Autoscaling is one of the
features of managed instance groups. Managed instance groups allow
you to operate applications on multiple identical
virtual machines based on instance templates. You can make your workload
scalable and highly available by taking advantage of
automated managed instance group services, including
autoscaling, autohealing, regional deployments,
and auto-updating. But wait. What is an instance template? An instance template is a specific, customized configuration of a GCE instance that managed instance groups reuse when they create new instances. This custom image then becomes the starting point for future deployments.
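To make that concrete, here is a minimal sketch (not from the episode) of creating an instance template and a managed instance group from it with the Google API Python client. The project ID, zone, custom image, and names like web-template and web-mig are placeholder assumptions.

    from googleapiclient import discovery

    PROJECT = "my-project"   # placeholder project ID
    ZONE = "us-central1-a"   # placeholder zone

    # Compute Engine API client using Application Default Credentials.
    compute = discovery.build("compute", "v1")

    # Instance template: the reusable configuration (machine type, boot disk
    # built from a custom image, network) that the group stamps out.
    compute.instanceTemplates().insert(
        project=PROJECT,
        body={
            "name": "web-template",
            "properties": {
                "machineType": "e2-medium",
                "disks": [{
                    "boot": True,
                    "autoDelete": True,
                    "initializeParams": {
                        # Hypothetical custom image with the web app baked in.
                        "sourceImage": f"projects/{PROJECT}/global/images/web-app-image",
                    },
                }],
                "networkInterfaces": [{"network": "global/networks/default"}],
            },
        },
    ).execute()

    # Managed instance group that creates identical VMs from the template.
    compute.instanceGroupManagers().insert(
        project=PROJECT,
        zone=ZONE,
        body={
            "name": "web-mig",
            "baseInstanceName": "web",
            "instanceTemplate": "global/instanceTemplates/web-template",
            "targetSize": 2,
        },
    ).execute()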
Now, back to autoscaling policies. Autoscaling policies provide a way to add or remove instances as needed. The impact of autoscaling
policies is twofold. One, your users get
a great experience using your application
because there are always enough resources
to meet the demand. And two, you maintain better
control over your costs because the autoscaler removes
the instances when demand falls below a specific threshold. To create an
autoscaler, you must specify the autoscaling policy
and a target utilization level that the autoscaler
uses to determine when to scale the group. You can choose to scale using
average CPU utilization, Stackdriver Monitoring
metrics, or HTTP load balancing serving capacity, which can
be based on either utilization or requests per second. The autoscaler continuously
collects usage information based on the policy, compares
the actual utilization to your desired
target utilization, and determines if the group
needs to be scaled up or down. For example, if you scale based on CPU utilization, you can set your target utilization level at 80%. The autoscaler will poll the CPU utilization and check whether it is above 80%. If so, it adds a new instance to the instance group. If the CPU utilization is below 80%, then it removes an instance from the group, making sure the capacity is always maintained.
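As a rough sketch using the same placeholder project, zone, and web-mig group as before, an autoscaler with an 80% CPU target could be created like this with the Google API Python client:

    from googleapiclient import discovery

    PROJECT = "my-project"   # placeholder
    ZONE = "us-central1-a"   # placeholder
    compute = discovery.build("compute", "v1")

    compute.autoscalers().insert(
        project=PROJECT,
        zone=ZONE,
        body={
            "name": "web-autoscaler",
            # The managed instance group this autoscaler controls.
            "target": f"zones/{ZONE}/instanceGroupManagers/web-mig",
            "autoscalingPolicy": {
                "minNumReplicas": 2,
                "maxNumReplicas": 10,
                "coolDownPeriodSec": 60,
                # Add instances when average CPU across the group exceeds 80%,
                # remove them when it falls below.
                "cpuUtilization": {"utilizationTarget": 0.8},
            },
        },
    ).execute()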
You can use autoscaling in conjunction with load balancing by setting up an autoscaler that scales based on the load of your instances. For example, assume the load balancing serving capacity of a managed instance group is defined as 100 requests per second per instance. If you create an autoscaler with an HTTP load balancing policy and set it to maintain a target
utilization level of 80%, the autoscaler will
add or remove instances from a managed instance
group to maintain 80% of the serving capacity,
or 80 requests per second per instance.
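A hypothetical sketch of the same autoscaler, but keyed to HTTP load balancing serving capacity instead of CPU (again with placeholder project, zone, and group names):

    from googleapiclient import discovery

    PROJECT = "my-project"   # placeholder
    ZONE = "us-central1-a"   # placeholder
    compute = discovery.build("compute", "v1")

    compute.autoscalers().insert(
        project=PROJECT,
        zone=ZONE,
        body={
            "name": "web-lb-autoscaler",
            "target": f"zones/{ZONE}/instanceGroupManagers/web-mig",
            "autoscalingPolicy": {
                "minNumReplicas": 2,
                "maxNumReplicas": 10,
                # Maintain 80% of the backend's configured serving capacity;
                # with 100 requests per second per instance, that is 80 RPS each.
                "loadBalancingUtilization": {"utilizationTarget": 0.8},
            },
        },
    ).execute()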
Now, our architecture needs to automatically replace instances that have failed or have become unavailable. And when a new instance comes online, it should understand its role in the system, configure itself automatically, discover its dependencies, and start handling requests automatically. To replace a failed
instance automatically, we can use several Compute
Engine components together. You could create
instance templates that use a public image
and a startup script to prepare the instance
after it starts running. But we recommend that you use deterministic instance templates, which minimize the risk of unexpected behavior from your instance templates.
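For illustration only, here is a sketch of an instance template that boots a public Debian image and configures itself with a startup script; the script contents and names are placeholder assumptions, and a deterministic template would instead reference an image with everything pre-installed.

    from googleapiclient import discovery

    PROJECT = "my-project"   # placeholder
    compute = discovery.build("compute", "v1")

    # Hypothetical startup script: install and start the web server on boot.
    STARTUP_SCRIPT = """#!/bin/bash
    apt-get update
    apt-get install -y nginx
    systemctl start nginx
    """

    compute.instanceTemplates().insert(
        project=PROJECT,
        body={
            "name": "web-template-startup",
            "properties": {
                "machineType": "e2-medium",
                "disks": [{
                    "boot": True,
                    "autoDelete": True,
                    "initializeParams": {
                        # Public Debian image family.
                        "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
                    },
                }],
                "networkInterfaces": [{"network": "global/networks/default"}],
                "metadata": {
                    # Compute Engine runs this script each time an instance boots,
                    # so a freshly created replacement configures itself.
                    "items": [{"key": "startup-script", "value": STARTUP_SCRIPT}],
                },
            },
        },
    ).execute()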
Thanks to managed instance groups, we now have a system that can replace unhealthy instances with new ones. But we still have a challenge: how are we going to know which instance to replace? Well, for that, we need to define
what is an unhealthy instance. And to do that, we
use health checks. We recommend that you use
separate health checks for load balancing and for autohealing.
Autohealing health checks are set up at the
managed instance group level. You create a health check that
looks for a response on port 80 and that can tolerate some failures before it marks instances as unhealthy and causes them to be recreated. In this example, an instance is marked as healthy if it responds successfully two times in a row, and it is marked as unhealthy if it responds unsuccessfully three consecutive times.
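A minimal sketch of that setup with the Google API Python client, reusing the placeholder project, zone, and web-mig group from earlier; the check interval, timeout, and initial delay values are assumptions, while the port and thresholds match the example above.

    from googleapiclient import discovery

    PROJECT = "my-project"   # placeholder
    ZONE = "us-central1-a"   # placeholder
    compute = discovery.build("compute", "v1")

    # Health check: probe port 80; 2 consecutive successes mark an instance
    # healthy, 3 consecutive failures mark it unhealthy.
    compute.healthChecks().insert(
        project=PROJECT,
        body={
            "name": "autohealing-check",
            "type": "HTTP",
            "httpHealthCheck": {"port": 80},
            "checkIntervalSec": 10,
            "timeoutSec": 5,
            "healthyThreshold": 2,
            "unhealthyThreshold": 3,
        },
    ).execute()

    # Attach the health check to the managed instance group as an
    # autohealing policy so unhealthy instances get recreated.
    compute.instanceGroupManagers().patch(
        project=PROJECT,
        zone=ZONE,
        instanceGroupManager="web-mig",
        body={
            "autoHealingPolicies": [{
                "healthCheck": "global/healthChecks/autohealing-check",
                # Give new instances time to boot before health checking starts.
                "initialDelaySec": 300,
            }],
        },
    ).execute()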
We learned some tricks to make your web application resilient by spinning
up new instances and taking them down if
compute resources fail or your traffic grows. That’s all for today on
Get Cooking in Cloud. Here’s hoping you can
whip up something great. Join us next time where we will
see the journey of a startup from a few users to
thousands of users, and how they evolved
their web application architecture to meet the
growing and changing needs. If you like this
video, then check out the previous episodes too. And to see more
such content, don’t forget to like and
subscribe to our channel.
