Prezi has a global audience that depends on fast and reliable access to its content. In this article, we look at how Prezi serves content from a network perspective.
Treat this article as a general overview of how content can be served at a global scale. The approach described here is one way to do it, not the only one and probably not the ultimate one.
The overall flow of this is depicted in the following image. Prezi runs on AWS and uses AWS services to offer customer-facing internet endpoints.
The DNS zones and records are managed in Route 53. Customer traffic goes through AWS Global Accelerator to decrease latency before it is filtered by AWS Web Application Firewall (WAF). The traffic is then terminated at the Application Load Balancer (ALB) and forwarded into the cloud environment, where most of the workload runs inside Elastic Kubernetes Service (EKS).
Some of the customer traffic goes to AWS CloudFront, which is used to deliver media assets that benefit from being cached closer to the customer.
The rest of this article goes over these components, looks at what they do, and discusses the benefits they offer.
Having customers worldwide and offering services over the internet poses multiple challenges. One of them is reducing latency. Cloudflare defines latency as the “amount of time it takes for a data packet to go from one place to another. Lowering latency is an important part of building a good user experience.” (source: https://www.cloudflare.com/en-gb/learning/performance/glossary/what-is-latency/)
A big part of that challenge is the heterogeneity of the network most people simply call “the internet”.
When we look at the internet's topology at the lower network layers, we can see many different networks peered together.
The following image shows part of the peering connections in Latin America that form the internet's backbone. For a data packet, going from South America to Miami means traversing multiple networks, and every network adds a little to the total travel time.
Going back to the challenge of controlling latency for customers, there are, generally speaking, two options:
- Offer services close to the customer to avoid long network journeys.
- Offer a fast path from the customer to the place where the services run.
The best path for most of the world
Prezi uses the second option by offering a fast path to services via AWS Global Accelerator. This service routes customer traffic over the global AWS network instead of the public internet for most of the journey.
This routing reduces latency. In experiments from my local machine, accelerated requests completed 200 ms faster than non-accelerated ones: the total time until I got an answer went down from 800 ms to 600 ms.
Loading the Prezi dashboard when logged in currently takes roughly 150 individual requests, all of which benefit from this 25% decrease in latency.
Please keep in mind that the real percentage of acceleration depends on multiple factors, such as location and the current routing situation.
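If you want to reproduce such a comparison yourself, a small script is enough. Below is a minimal Python sketch that measures the average request round-trip time for two endpoints; the URLs are placeholders, not Prezi's real accelerated and non-accelerated endpoints.

```python
import time
import urllib.request

# Placeholder endpoints: one reached via an accelerator, one hitting the
# origin directly. Swap in real hosts to run the comparison yourself.
ENDPOINTS = {
    "accelerated": "https://accelerated.example.com/",
    "direct": "https://origin.example.com/",
}

def average_latency_ms(url: str, runs: int = 10) -> float:
    """Average wall-clock time (ms) for a full request/response cycle."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()  # include the body transfer in the measurement
        total += time.perf_counter() - start
    return total / runs * 1000

for name, url in ENDPOINTS.items():
    print(f"{name}: {average_latency_ms(url):.0f} ms on average")
```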
Whenever a customer sends requests to prezi.com, those requests are routed to the closest AWS network endpoint and then transferred inside this global network.
And the best path for inhabitants of Virginia
As the heading of this chapter suggests, most Prezi customers go through Global Accelerator, except those who reside in Virginia. Those customers are already close enough to the service endpoint and are routed directly to the components that follow.
Note: the network diagram above does not show this route, to keep it from getting too complex.
To achieve this, Prezi leverages geolocation-based DNS routing in Route 53, so that different IP addresses are returned depending on the client's location.
The following screenshot shows a practical example. The first lookup is executed from a local machine in Europe, and the second one through a VPN with an exit in Virginia.
The first DNS query returns the endpoints for the Global Accelerator, and the second query from Virginia returns the endpoints of an AWS load balancer (see the following chapter).
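You can reproduce this observation with a plain DNS lookup from different vantage points. The Python sketch below uses only the standard library and reports whatever the local resolver returns, so the result depends on where (or through which VPN) you run it.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses the local resolver hands out for a host."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    addresses = []
    for family, socktype, proto, canonname, sockaddr in infos:
        ip = sockaddr[0]
        if ip not in addresses:
            addresses.append(ip)
    return addresses

# Run once from Europe and once through a Virginia VPN exit to see
# Route 53 return different endpoints for the same name.
print(resolve("prezi.com"))
```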
Alternatives
The alternative to this network-based approach is to move the offered services closer to the customer. This can be achieved, for example, by deploying instances into selected cloud regions. To achieve this, the whole application stack needs to be deployed in each region, and some backend synchronization is needed, since the Prezi suite enables collaboration between multiple users.
Serving from a single region reduces this complexity and streamlines deployment.
While the internet is a wonderful place to connect, collaborate, be creative, and a lot more, it is at the same time a place that attracts bad actors. Public and well-known endpoints are common targets of distributed denial-of-service (DDoS) attacks. Prezi leverages the combination of AWS Web Application Firewall (WAF) and AWS Shield to protect the downstream infrastructure from these threat vectors.
Every request that wants to reach Prezi infrastructure is evaluated by these components. Certain endpoints are protected by specific rate limits to make sure they are not hammered.
For example, it does not make sense for a legitimate client to send many requests to the login endpoint within a short amount of time. To protect such sensitive endpoints, AWS WAF can respond with HTTP 429 Too Many Requests (https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429).
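On the client side, a well-behaved consumer should back off when it receives such a response. Here is a minimal Python sketch of that behavior, using a placeholder URL rather than Prezi's real login endpoint; it honors the Retry-After header when the server sends a numeric one.

```python
import time
import urllib.error
import urllib.request

def request_with_backoff(url: str, max_attempts: int = 5) -> bytes:
    """Fetch a URL and back off whenever the server answers HTTP 429."""
    delay = 1.0
    for _ in range(max_attempts):
        try:
            with urllib.request.urlopen(url) as response:
                return response.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise
            # Honor the server's Retry-After hint when it is a plain number
            # of seconds; otherwise fall back to exponential backoff.
            retry_after = err.headers.get("Retry-After")
            wait = float(retry_after) if retry_after and retry_after.isdigit() else delay
            time.sleep(wait)
            delay *= 2
    raise RuntimeError(f"still rate-limited after {max_attempts} attempts")

# Placeholder URL, not Prezi's real login endpoint.
data = request_with_backoff("https://example.com/login")
```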
See the following screenshot of how a triggered rate limit looks in the browser console:
On a bigger scale, the traffic flow is monitored by AWS Shield; when it detects a DDoS attack, the offending traffic sources are blocked, even if the attack comes from many sources at once.
Alternatives
Offering services over the internet without any protection is a bad idea. Any public-facing IP attracts traffic, and once a company reaches a certain scale, it attracts bad actors. There are alternative vendors, such as Cloudflare or Akamai, that can offer the same protection service. As we run our workload on AWS, the natural choice is AWS WAF, because the integration is easy.
Requests that are allowed to enter reach the AWS-managed load balancer fleet, which takes care of routing them into the VPC environment that hosts the actual workload. The load balancer terminates customers' HTTPS connections using our public TLS certificate (TLS offloading).
The Application Load Balancer (ALB) is used for routing based on the HTTP Host header. This means that, depending on the domain used, the ALB can forward traffic into the correct isolated workload environment.
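Conceptually, a host-header rule maps a domain to a target. The tiny Python sketch below mimics that decision; the domain names and target names are invented for illustration and are not Prezi's actual listener rules.

```python
# Invented host-to-target mapping, mimicking ALB listener rules.
HOST_RULES = {
    "www.example.com": "target-group-web",
    "api.example.com": "target-group-api",
}

def pick_target(headers: dict[str, str]) -> str:
    """Choose a target group from the HTTP Host header, as an ALB rule would."""
    host = headers.get("Host", "").split(":")[0].lower()  # strip any port
    return HOST_RULES.get(host, "target-group-default")

print(pick_target({"Host": "api.example.com"}))  # -> target-group-api
```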
Running inside the Kubernetes fleet is a self-written API gateway. The purpose of this component is to build more detailed routes based on request paths or other identifiers. Most of the backends are written in Python and Scala; those pods run inside AWS's managed Kubernetes offering, Elastic Kubernetes Service (EKS).
Traffic is routed into these pods either by a WSGI-compliant application server for the Python services or directly by the JVM for the Scala services.
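To make the WSGI side concrete, here is a minimal sketch of a WSGI-compliant application, i.e., the callable interface such an application server invokes for every request. It is purely illustrative and not Prezi's actual service code.

```python
def application(environ, start_response):
    """Minimal WSGI app: the application server calls this per request."""
    path = environ.get("PATH_INFO", "/")
    body = f"hello from {path}".encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

if __name__ == "__main__":
    # Any WSGI server (gunicorn, uWSGI, ...) can host this callable;
    # the standard library's reference server works for a quick test.
    from wsgiref.simple_server import make_server
    make_server("127.0.0.1", 8000, application).serve_forever()
```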
As the mentioned API gateway also runs inside Kubernetes, it can forward traffic to the target backend services within the cluster network based on different routing rules. The API gateway gives developers the flexibility to configure advanced routing to their microservices.
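The core idea behind such path-based routing can be sketched in a few lines. The route table and in-cluster service names below are made up for illustration; a real gateway would additionally handle timeouts, retries, authentication, and so on.

```python
# Hypothetical route table: the longest matching path prefix wins.
ROUTES = {
    "/api/presentations": "http://presentation-svc.default.svc.cluster.local",
    "/api/users": "http://user-svc.default.svc.cluster.local",
    "/": "http://frontend-svc.default.svc.cluster.local",
}

def pick_backend(path: str) -> str:
    """Return the in-cluster backend responsible for a request path."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    raise LookupError(f"no route for {path}")

print(pick_backend("/api/users/42"))  # -> the user service
```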
Thinking back to the scope of the AWS WAF usage described above, there was no check for malicious content in the requests. We use a different web application firewall to check for bad requests and to protect against cross-site scripting, injections, and other attacks that might harm Prezi or our customers.
Prezi’s main purpose is to deliver amazing presentations, which most of the time contain visuals like images and GIFs. These can be served via a content delivery network (CDN) that caches content closer to the customer.
Loading resources from a CDN decreases the time the user waits for those resources to appear.
Cost is another factor: it is cheaper to serve content from CloudFront than to serve it from the backend every time. This applies especially to assets like images that do not change often.
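How long an asset may live in the CDN cache is usually steered through response headers that CloudFront honors. The sketch below picks a Cache-Control value per asset type; the concrete values are illustrative assumptions, not Prezi's actual caching policy.

```python
# Illustrative cache policies: long-lived, immutable media assets versus
# HTML documents that should always be revalidated with the origin.
CACHE_POLICIES = {
    ".png": "public, max-age=31536000, immutable",  # cache at the edge for a year
    ".gif": "public, max-age=31536000, immutable",
    ".html": "no-cache",                            # revalidate on every request
}

def cache_control_for(path: str) -> str:
    """Pick a Cache-Control value based on the asset's file extension."""
    for extension, policy in CACHE_POLICIES.items():
        if path.endswith(extension):
            return policy
    return "no-store"  # conservative default for unknown asset types

print(cache_control_for("/assets/logo.png"))
```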
Due to the deep integration into the AWS ecosystem, in our setup there is no real choice other than CloudFront. Technically, it should also be doable with Cloudflare or any other CDN vendor.
The article above describes the architecture Prezi uses to serve content to a global audience.
There are multiple different ways to serve traffic, even when running on AWS.