NGINX Interview: Enterprise Adoption of Software Load Balancing, API Gateways, and Service Meshes
InfoQ recently sat down with Rob Whiteley, Sidney Rabsatt, and Liam Crilly from NGINX Inc to discuss their views on the future of networking and data center communication. The company aims to be a “trusted advisor” and provide an “easy on-ramp” for enterprises looking to leverage software load balancers, ingress gateways, and service meshes, as appropriate to their current technology landscape and goals.
Building upon the success of the open source and commercial NGINX proxy and web server implementations, Rabsatt, VP of Product Management, stated that NGINX now offer a comprehensive product suite that enables effective control and observability across the networking and API gateway domains, and increasingly across the service mesh space too. NGINX are focusing on providing solutions that allow “freedom and flexibility” for engineers, combined with the ability to implement guide rails where appropriate. The NGINX team aspire to be seen as a “trusted advisor” that can guide large enterprises on their adoption of new technologies within the domains of application delivery controllers (ADCs) and networking.
Whiteley, CMO at NGINX, and Crilly, Director of Product Management, stated that due to the changing deployment fabric (cloud, containers, and Kubernetes), they believe that the interesting developments within the networking space are moving from hardware to software. The enterprise adoption of this new fabric is changing the role of API gateways (and ingress in general), and Whiteley mused that technologies within this space are still “crossing the chasm” in regard to the diffusion of innovations. The service mesh space, although important, is still nascent, and best practices within the enterprise are still emerging.
Rabsatt continued by discussing how NGINX are increasingly seeing customers attempting to manage growing operational complexity within their software architectures, which is partly driven by the adoption of architectural styles such as microservices and function-as-a-service (FaaS), which have more moving parts. Although some enterprise organisations are experimenting with using NGINX within a service mesh-like configuration, this is very much at the vanguard. However, customers are aware of and interested in this space, and are looking for guidance in order to map out a journey or roadmap from their current networking solutions to this new style of communication.
Approximately 40% of NGINX customers use the product to implement an API gateway solution, and Rabsatt discussed that this is an area that is important to NGINX. Many other API gateway solutions are built using NGINX, for example the open source and commercial Kong API gateway, and the open source OpenResty, and this further validates the applicability and strength of the core NGINX technology within this space. Rabsatt cautioned that teams adopting an API gateway must consider this in relation to the overall networking and communication journey the organisation will be taking, and recommended that technical leaders consider the “completeness of vision” for any products they adopt.
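As a concrete illustration of the API gateway use case discussed above, the following sketch shows how NGINX can route API traffic to backend services and apply basic rate limiting. The service names, addresses, and limits here are hypothetical, not taken from the interview; the `limit_req_zone` directive belongs in the `http` context of a full configuration.

```nginx
# Hypothetical API gateway sketch: define a shared rate-limit zone
# keyed on client IP (place this in the http context).
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

# Hypothetical upstream service groups.
upstream inventory_service {
    server 10.0.0.10:8080;
}
upstream orders_service {
    server 10.0.0.11:8080;
}

server {
    listen 443 ssl;
    server_name api.example.com;

    # Route each API path prefix to its backing service,
    # allowing short bursts above the steady rate.
    location /api/inventory/ {
        limit_req zone=api_limit burst=20;
        proxy_pass http://inventory_service;
    }
    location /api/orders/ {
        limit_req zone=api_limit burst=20;
        proxy_pass http://orders_service;
    }
}
```

Routing, authentication, and rate limiting at a single ingress point like this is the pattern that gateway products built on NGINX, such as Kong and OpenResty, extend with richer plugin ecosystems.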
When asked about the role the NGINX Unit polyglot web and application server will play within the product suite, Whiteley replied that this will help to balance the competing requirements between development and operations. Developers want support for more language runtimes in order to allow the best language to be chosen for the given requirements, and operations want reduced runtime implementation and management complexity. The ability of NGINX Unit to support multiple language platforms while offering the same abstraction and control interface can go some way to mitigating the friction between development and operations requirements.
NGINX Unit can also be run for multiple use cases, Crilly discussed. Within a typical microservices-based architecture, many Units can be deployed, one for each service, or a single large Unit can be deployed to support multiple microservice-like components that are bound together at runtime. This choice gives engineering teams the freedom to work independently on components at the level of granularity they require, without being constrained by the deployment and operation model. The tight integration of the NGINX proxy functionality within Unit may also provide improved performance over sidecar-based deployments of proxies within a typical service mesh implementation.
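The polyglot, single-control-interface model described above can be sketched with a NGINX Unit configuration that hosts two applications in different languages side by side. The application names, paths, and ports are illustrative assumptions, not from the interview; Unit accepts this JSON via its control API rather than a static config file.

```json
{
    "listeners": {
        "*:8300": { "pass": "applications/python_app" },
        "*:8400": { "pass": "applications/php_app" }
    },
    "applications": {
        "python_app": {
            "type": "python",
            "path": "/www/python_app",
            "module": "wsgi"
        },
        "php_app": {
            "type": "php",
            "root": "/www/php_app"
        }
    }
}
```

A configuration like this is typically uploaded at runtime over Unit's control socket (for example with `curl -X PUT` against the `/config` endpoint), which is what allows the same control interface to manage multiple language runtimes without restarts.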
The discussion concluded with the participants agreeing that it was important for enterprise organisations to constantly review and refine their journey towards the emerging best practices in the networking and ADC space. The NGINX team aim to provide an “easy on-ramp” for enterprises looking to leverage software load balancers, API gateways and service meshes as appropriate to their current technology landscape and goals. It is important that organisations have “scope to innovate within their own domain”, and for this to occur there needs to be both choice and cohesion across the various components that are needed for a complete networking and application delivery solution.