To address these high-availability and vendor-proofing requirements, many enterprises deploy application stacks across multiple cloud regions, multiple cloud providers, and data centers. In doing so, they encounter a performance penalty: modern applications tend to rely on API or microservices gateways to carry out user authentication and traffic-policing functions. Gateways are usually packaged as virtual machines and deployed in the public subnet of a VPC in a single public cloud region (or in the DMZ of a data center). If an application has microservices deployed across multiple cloud regions, end-user traffic enters the application environment through the gateway and is then routed to the public cloud region or data center where the relevant microservice runs. This “traffic tromboning” adds network latency and degrades overall application response times.
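To make the latency cost concrete, here is a minimal back-of-the-envelope sketch of the tromboned path versus a direct path. All round-trip times below are hypothetical figures chosen for illustration, not measurements from any real deployment.

```python
# Hypothetical round-trip times in milliseconds (illustrative only).
RTT_USER_TO_GATEWAY = 20     # user -> region hosting the central gateway
RTT_GATEWAY_TO_SERVICE = 60  # gateway region -> region hosting the microservice
RTT_USER_TO_SERVICE = 30     # user -> service region, if routed directly

def tromboned_latency() -> int:
    # Request enters through the central gateway, then hairpins
    # across regions to reach the microservice.
    return RTT_USER_TO_GATEWAY + RTT_GATEWAY_TO_SERVICE

def direct_latency() -> int:
    # Routing and policing decided near the user; the request
    # goes straight to the region hosting the microservice.
    return RTT_USER_TO_SERVICE

overhead = tromboned_latency() - direct_latency()
print(f"tromboned: {tromboned_latency()} ms, "
      f"direct: {direct_latency()} ms, "
      f"tromboning overhead: {overhead} ms")
```

Even with modest per-hop figures like these, the hairpin through the gateway region more than doubles the network portion of the response time; the gap grows with geographic spread between regions.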
Instead, what if all intra-application routing and policing decisions could be made closer to endpoints or end users, allowing application owners to curtail traffic tromboning and reduce application response times?