Mobile Monitoring Solutions


How MongoDB is driving growth in ANZ – Computer Weekly

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Open-source database company MongoDB has been driving skills development and investing in local manpower in a bid to grow its business in Australia and New Zealand (ANZ).

In an interview with Computer Weekly, Anoop Dhankhar, regional vice-president at MongoDB, said the company has been growing its business at around 20% in ANZ, with high customer retention levels.

It has more than 1,200 local customers and a staff strength of over 150 people in ANZ, which is 50% more than this time last year, Dhankhar said.

Plans are afoot to continue recruiting in 2023, and – unlike some companies – its local operation is more than a sales organisation: it has a big team of developers and software engineers and runs graduate and cadet programmes to build technical skills.

MongoDB appeals to organisations looking for rapid development times and a database system that is secure, reliable and available in the cloud.

While that includes startups such as My Muscle Chef and Humanitix, other customers include established companies such as Ticketek and financial services businesses such as Bendigo and Adelaide Bank and Macquarie.

The latter is “going all in on MongoDB”, according to Dhankhar, who notes that the database supports many of the Australian Prudential Regulation Authority standards for operational risk management.

How MongoDB is used

Bendigo and Adelaide Bank started using MongoDB around 2014 when it needed a database to support a project that used the MeteorJS development framework to help it deliver modern cloud-native solutions more quickly, said Ash Austin, platforms practice lead at the bank.

The community edition wasn’t appropriate for a production system, so the bank used Compose’s hosted MongoDB. This provided a great start, he said, and was used to deliver several applications.

In 2020, a transformation plan to optimise and simplify applications – including an API (application programming interface) strategy based on MongoDB and Node.js – meant it was time to go bigger, said Austin, so the bank decided to make MongoDB its preferred database for new developments and switched to Atlas, MongoDB’s multi-cloud database service.

One example of its use is for recording customer entitlements. For example, an individual might be a signatory to an account held by a local sports club in addition to their personal account. A document structure is a better way of dealing with this than a series of bits indicating access rights, observed Austin, especially with Consumer Data Right (CDR) being exercised more frequently.
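
To make that concrete, here is a minimal, hypothetical sketch of how such an entitlements document might look, using PyMongo against a local MongoDB instance; the collection and field names are illustrative assumptions, not the bank’s actual schema.

```python
# Hypothetical sketch: a customer's entitlements modelled as one document
# rather than rows of access-right flags. Names are illustrative only.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
entitlements = client["bank"]["entitlements"]

entitlements.insert_one({
    "customerId": "C-1042",
    "accounts": [
        {"accountId": "A-001", "role": "owner"},                                # personal account
        {"accountId": "A-877", "role": "signatory", "org": "Local Sports Club"},
    ],
})

# Find every account the customer can operate
doc = entitlements.find_one({"customerId": "C-1042"})
print([a["accountId"] for a in doc["accounts"]])
```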

Bendigo is still using other databases where they are suitable, but MongoDB and Atlas are great for banking applications, he said, especially where several different datasets are involved. An example of this is an application that reviews the progress of customers’ applications, as it involves different datasets built up over time.

Benefits of the migration to Atlas included a better than 50% reduction in total cost of ownership, faster provisioning (thanks to automation and Terraform integration, this is now done in minutes rather than hours or even days), a 20-30% uplift in performance, and database compression resulting in a more than 50% reduction in total storage requirements. These factors are “highly important in the current climate”, Austin said.

Looking ahead, he said financial year 2023 is “the year of the API” at the bank and exposing the right APIs will be fundamental to accelerating the bank’s digitisation and offering more simplified products. When it comes to CDR, everything must be API-enabled, and MongoDB “is best of breed” for a range of purposes including operational data stores and prototyping.

Atlas’s support for multicloud clusters across Google Cloud and Amazon Web Services (AWS) is significant, and Austin is keen to take advantage of the system’s ability to replicate across clouds when an appropriate use case comes up.

MongoDB “is a really flexible product”, he said, adding: “It’s not MongoDB’s modus operandi to lock customers into a proprietary system – instead they prefer to ensure the quality of their solutions deliver everything customers expect.” He also noted that MongoDB “has set the standard for others to follow”.

Besides financial services, Dhankhar said MongoDB has a “reasonable footprint” in government.

The company is undergoing IRAP (Information Security Registered Assessors Program) assessment, which should lead to a bigger addressable market in the government sector from mid-2023.

IRAP certification, which covers some 700 controls, is “a ticket to the dance”, Dhankhar said. At present, government customers only use MongoDB on-premises, and certification could make a “huge” difference to the company in Australia. “Government is the biggest sector in the country,” said Dhankhar, accounting for around 50% of Queensland’s IT, for example.

Comprehensive features

Importantly for some customers, MongoDB clusters can be spun up as required, supporting agility and the ability to expand into new geographies, while data can be constrained to its local cluster to meet data sovereignty requirements.

The database even makes provision for the “right to be forgotten” that is part of Europe’s GDPR (General Data Protection Regulation) and similar legislation, Dhankhar said. This provides a degree of future proofing in other jurisdictions, such as Australia, in case it gets introduced. “All we’ve got to do is switch it on,” he added.

MongoDB’s multicloud replication feature is unique, Dhankhar said. It allows a system running on AWS to replicate to Google Cloud or Azure to allow continued operation in the event of a catastrophe.

Another attractive feature is that everything is included in the MongoDB developer platform – there are no optional extras. “We look at it from an agility perspective,” he explained, so features such as encryption and the “right to be forgotten” support can simply be turned on with no need to redevelop the application.

Selling points for MongoDB include its suitability for high volume, highly resilient transaction monitoring applications; for consumer-oriented businesses, especially those seeking international growth as Atlas is available in 97 regions around the world; and for applications that must be highly reliable, such as banking and other assets of national resilience.

More generally, MongoDB is ideal for microservices architectures – for example, when a large retailer implements inventory, price check, address check and other microservices. “That’s our bread and butter [at] MongoDB,” said Dhankhar.

Platform for growth

Max Ferguson, co-founder and CEO of PDF collaboration specialist Lumin, wrote the first version of his company’s software in 2014 when working on a construction site where paper documents kept getting lost and the online system did not allow annotations. He built Lumin PDF on MongoDB “as the developer experience was fantastic”, partly because of its flexible data model.

Lumin PDF uses MongoDB to store every change made in a PDF document and then sync it with other users of that document.

The product scaled to one million users in a year, and in 2019 Lumin switched to Atlas because “MongoDB took a lot of the maintenance effort off our hands”, said Ferguson.

Today, the New Zealand company has 75 million users around the world and has employees in several countries including the Philippines and Ukraine.

The MongoDB developer experience is an important factor at Lumin. Ferguson chose the product because he liked it, and eight years later there are plenty of developers who know MongoDB and how to use it, so “it’s very easy for us to get them up and running quickly”.

Scalability is another consideration, with “MongoDB able to scale with us” from fewer than one million users to 75 million, said Ferguson.

The company “took on too much responsibility” in terms of managing the software, he added, saying it was “a huge roadblock”, but moving to Atlas took away that load, allowing Lumin to concentrate on the application itself.

The availability of Atlas in multiple countries is relevant to companies such as Lumin that want to grow internationally. Not only do customers benefit from better performance but keeping data locally, or at least in an acceptable region, is important to some customers, especially larger ones, he said.

Furthermore, Atlas provides 99.995% uptime, and takes steps to reduce the risk of data loss.

Plans for Lumin include providing an improved document search facility (something it will be working on with MongoDB), and – in the slightly longer term – scaling to one billion customer documents per year, which he expects MongoDB to handle.

Lumin is preparing to launch an e-signing product provisionally called Lumin Send and Sign and hopes it will “commoditise the signing market.”

According to Ferguson, existing products – such as those from DocuSign and Adobe – are expensive and difficult to use, and customers are spending too much on multiple products that aren’t as well designed as they expected.

Lumin Send and Sign, currently in beta after just over a year in development, is “100% built on MongoDB” and will work with a range of other products including Salesforce and Google Workspace.

Skills development

Ferguson and Austin both speak highly of MongoDB University, the company’s online training and certification programme which was recently enhanced.

Ferguson said the programme fits in with developers’ preference for learning new things without having to invest too much time at once. It also suits the company’s practice of training newly hired graduates in-house, in part by having them build a series of small applications of increasing complexity on their own.

Austin said “it’s a hot market” for developers, and it’s easier to keep staff when they are working with good technology, especially when on-demand training such as MongoDB University is available.

Furthermore, MongoDB is “always willing to turn up” for in-house events such as hackathons. Together, these things help developers feel empowered, involved and enthusiastic, he added.



RBC Cuts Price Target on MongoDB to $235 From $350, Maintains Outperform Rating

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Delayed Nasdaq quote, 2022-11-29 03:47 pm EST: 142.55 USD, -0.66%

11/28/2022 | 06:50am EST

© MT Newswires 2022

Financials (USD)

Sales 2023: 1,207 M
Net income 2023: -421 M
Net cash 2023: 701 M
P/E ratio 2023: -24.0x
Yield 2023: –
Capitalization: 9,859 M
EV / Sales 2023: 7.59x
EV / Sales 2024: 5.91x
Number of employees: 4,240
Free float: 96.2%


Technical analysis trends MONGODB, INC.

Short term: Neutral
Mid-term: Bearish
Long term: Bearish


Mean consensus: Buy
Number of analysts: 26
Last close price: $143.50
Average target price: $304.33
Spread / average target: 112%


Article originally posted on mongodb google news. Visit mongodb google news



Best Diversified Telecommunication Stocks To Buy Now in 2022

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

The telecommunication industry has been and continues to be among the most volatile, dynamic, and innovative sectors in the global economy. In the past decade in particular, we have witnessed rapid changes and transformations across the entire value chain of the industry. With a reduction in switching costs for software services, companies can now focus on developing software as a service (SaaS) applications that serve specific needs of end users or businesses. The rise of cloud-based telecommunication services will continue to reshape how people and companies interact with one another and manage their communications from virtually anywhere at any time. This article dives into some of the best diversified telecommunication stocks you can buy now to capitalize on this upcoming opportunity in 2022 and beyond.

CenturyLink

CenturyLink is a leading communications and information services company in North America. It provides high-speed Internet, data, voice, video, and managed services to consumers, businesses, government agencies, and other organizations. The company also offers advanced data and cloud computing services, as well as network access and other services to end users, such as communications providers and other Internet service providers. It serves residential and business customers across the United States. CenturyLink’s main growth strategy is to continue to invest in its network and product offerings to provide high-quality and innovative products and services to customers. Its long-term strategy is to generate sustainable long-term revenue and earnings growth, which it believes can be achieved by increasing the penetration of its products and services with existing customers and by expanding its customer base. The company’s strategic objectives include increasing revenue, growing its customer base, expanding margins, and increasing cash flow.

Cisco Systems

Cisco Systems, Inc. (CSCO) is a global provider of products, services, and solutions to accelerate the digital transformation of organizations. The company’s products and services are used in the Internet of Things (IoT), cloud, the Internet of Services (IoS), and industries such as agriculture, energy and utilities, health care, manufacturing, public services, retail, transportation, and more. Cisco’s long-term strategy is to deliver secure, intelligent, and open networks that will harmonize with the global Internet economy. The company aims to expand its market share in the network-based services and software-defined networking markets. Cisco’s strategic objectives include increasing revenue and cash flow, growing its customer base, and expanding margins.

Intel Corp.

Intel Corporation (INTC) is a semiconductor company that manufactures computer chips. The company operates in two segments, Client Computing Group and Data Center Group. The company’s products are used in computers, computing clouds, and data centers. Intel’s long-term strategy is to create a world that is secure and powered by bold new technologies. The company aims to lead in key technologies such as artificial intelligence, autonomous vehicles, and quantum computing. Intel’s strategic objectives include growing its customer base, expanding margins, and increasing cash flow.

Microsoft

Microsoft Corporation (MSFT) is a provider of computer software, services, and cloud computing. The company’s products include operating systems, language software, server software, tools, and training, as well as various online services, such as Bing, Microsoft Azure, Microsoft Office, and Microsoft Dynamics. The company’s long-term strategy is to lead in a digital world. Microsoft’s strategic objectives include increasing revenue, growing its customer base, expanding margins, and increasing cash flow.

Qualcomm

Qualcomm Incorporated (QCOM) is a diversified multinational telecommunications equipment and software company. It designs and develops wireless communication products and services for the access network and adjacent markets, including semiconductors and systems for telecommunications networks and the Internet. The company’s long-term strategy is to lead in 5G, artificial intelligence, and IoT markets. Qualcomm’s strategic objectives include increasing revenue, growing its customer base, expanding margins, and increasing cash flow.

Summing up

Telecommunication services are essential for modern life. The industry is experiencing rapid changes and transformations in the business environment. This article discusses some of the best diversified telecommunication stocks you can buy now to capitalize on this upcoming opportunity in 2022 and beyond. These stocks are expected to deliver strong returns over the long term.

Article originally posted on mongodb google news. Visit mongodb google news



Webinar – Data modernisation: A key element of business transformat… – Finextra

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Many financial institutions face issues in using legacy platforms that no longer keep up with their customer and internal requirements in real time. So how can businesses evolve their data processing to become more timely, agile, and scalable while controlling infrastructure costs?

Data modernisation is facilitating business transformation by moving from outdated legacy systems to a modern, multi-cloud, data platform to enable personalised customer experiences, new revenue opportunities, and a faster time-to-market.

The transition from legacy to modern platforms can lead to multiple benefits for financial institutions. While reducing TCO, these transformations increase developer productivity and reduce data fragmentation and duplication, offering more value from data. For example, organisations can unlock data to support novel onboarding processes by using existing operational data to score new customers.

Sign up for this Finextra webinar, hosted in association with MongoDB, to join our panel of industry experts as they discuss the following areas:

  • How will data modernisation solve legacy issues faced by financial institutions and businesses?
  • What strategies of data modernisation are in use?
  • How will data modernisation impact customers?
  • What trends are we seeing in the industry in terms of data analytics?
  • What are some examples of use cases in which data modernisation has been successfully applied?

Speakers:

  • Gary Wright – Head of Research, Finextra [Moderator]
  • Joerg Schmuecker – Director, Financial Services Industry Solutions, MongoDB
  • Prabhakaran Pitchandi – Vice President and Global Head of Analytics and Insights, TCS

Article originally posted on mongodb google news. Visit mongodb google news



Data Engineer Skills: What to Learn to Master the Role – Dice Insights

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Data engineering is a critical job at a growing number of companies. Data engineers must construct and maintain vast repositories of data crucial to business operations; data scientists and data analysts depend on this work to find the right data and perform effective analyses. For an organization of any size, data engineer skills are critical for long-term survival.

If you’re ready to become a data engineer, there are crucial things to keep in mind—and core skills to learn. For starters, you need to understand the many specialties that fall under the bigger umbrella of data engineering. Some are more IT-focused, such as managing and running database systems distributed throughout a cloud platform. Others are more developer-oriented, with a focus on writing applications that integrate data.

Data engineers may also assist data scientists in retrieving, analyzing, and even presenting data. With that in mind, let’s break down the skills necessary to become an effective data engineer.  

Learning a Programming Language

First and foremost, you must become proficient in at least one programming language. A handful of languages dominate data engineering, and Python is used most often. As such, you will want to learn many of the computation and data libraries available to Python.
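
The article’s own list of libraries is not reproduced here, but NumPy and pandas are typical examples of the computation and data libraries meant; a minimal sketch:

```python
# Minimal illustration of the kind of Python libraries a data engineer leans on.
# NumPy for vectorised maths, pandas for tabular data; both are common choices.
import numpy as np
import pandas as pd

orders = pd.DataFrame({
    "region": ["ANZ", "ANZ", "EMEA"],
    "amount": [120.0, 80.0, 200.0],
})

print(np.mean(orders["amount"]))                   # overall average
print(orders.groupby("region")["amount"].sum())    # totals per region
```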

Data Querying

As a data engineer, you need to fundamentally understand how data is stored and managed, whether it’s hundreds of pieces of data… or billions. Start small. Learn how to store data in a small MySQL database on your own computer; how to manage the data and its indexes; how to query the data in SQL; and how to create views and stored procedures. Learn how to group and total data, such as sums, averages, and so on.
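
As a rough sketch of those basics, the example below uses SQLite purely so it is self-contained; the SQL itself (grouping, sums, averages) is essentially what you would write against MySQL.

```python
# Sketch of grouping and totalling data with SQL, using SQLite for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("ANZ", 120.0), ("ANZ", 80.0), ("EMEA", 200.0)])

# Group and total data: sums and averages per region
for row in conn.execute(
        "SELECT region, SUM(amount), AVG(amount) FROM sales GROUP BY region"):
    print(row)
```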

Then, learn how to do the same in a NoSQL database such as MongoDB. MongoDB uses a completely different way of storing data compared to MySQL. Learn the difference between relational tables that MySQL uses and collections that MongoDB uses. Learn how to transform data with aggregations.
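
For comparison, here is a minimal aggregation sketch with PyMongo, assuming a local MongoDB instance; the collection and field names are made up for illustration.

```python
# The MongoDB equivalent of the SQL GROUP BY above, using an aggregation pipeline.
from pymongo import MongoClient

sales = MongoClient("mongodb://localhost:27017")["shop"]["sales"]
sales.insert_many([
    {"region": "ANZ", "amount": 120.0},
    {"region": "ANZ", "amount": 80.0},
    {"region": "EMEA", "amount": 200.0},
])

pipeline = [{"$group": {"_id": "$region",
                        "total": {"$sum": "$amount"},
                        "avg": {"$avg": "$amount"}}}]
for doc in sales.aggregate(pipeline):
    print(doc)
```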

For both systems, you’ll want to at least be familiar with IT concepts such as what replica sets are and how data is sharded. As a data engineer, you might not be the one managing this (depending on your team and company), but it’s good to have some idea of what’s going on.

Then move up to bigger systems geared towards “Big Data.” Examples here include Google Datastore and AWS DynamoDB.

Data Analysis and Statistics

Data engineering goes far beyond simple queries. Even once you’ve become comfortable doing select statements and joins in a database, you need to develop a much deeper understanding of statistics and the different types of data analysis. You’ll want to study sites on data analysis and statistics, and even consider purchasing a book on the subject.

IDE and Platforms

For data engineers at every step of their careers, becoming comfortable with tools and IDEs is critical. One important tool is Jupyter Notebook, which is popular among data engineers who use Python. Jupyter Notebook lets you run Python code interactively and have charts and diagrams appear right alongside the code. (It’s much more than that, but that’s a brief description.)

Data Visualization

Data visualization is essential for data engineers, especially those who work with data scientists on generating analyses, results, and presentations. People need to see the information you’re presenting, and you want it in a way they can easily digest. That includes charts, graphs, tables, and so on.

First, you’ll want to learn different types of charts and graphs, such as scatter plots, normal distributions, histograms, density plots, and so on. Most of these graphs and charts are produced by libraries; for Python, that means learning how to use the plotting library Matplotlib.
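
A tiny Matplotlib sketch of one of the chart types mentioned above, a histogram of normally distributed values; it runs as a script or inside a Jupyter Notebook.

```python
# Histogram of 1,000 samples drawn from a normal distribution.
import numpy as np
import matplotlib.pyplot as plt

values = np.random.normal(loc=0.0, scale=1.0, size=1_000)

plt.hist(values, bins=30)
plt.title("Sample histogram")
plt.xlabel("value")
plt.ylabel("count")
plt.show()
```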

Other Tools and Concepts

As you progress in your education, you’ll eventually want to explore more advanced topics. These aren’t required for landing an entry-level job; instead, these are topics typical of more advanced positions:

Hadoop: This set of tools lets you manage files across multiple physical computers. It provides the foundation for a Big Data system.

Spark: This data analytics engine works well with Hadoop.

Hive: This data warehousing engine, built on top of Hadoop, is meant for managing huge amounts of data scattered across multiple physical computers and drives.

ETL: This is a concept, not a tool. Data engineers often manage data by extracting large volumes of data from multiple sources, then combining it, manipulating it, and transforming it; then the resulting set of data derived from the first is loaded into a different database. This process is known as an Extract, Transform, and Load pipeline.
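
A deliberately small, hypothetical ETL sketch in Python: extract from two CSV sources, transform (join and clean), and load the derived records into a separate SQLite database. The file and table names are assumptions for illustration.

```python
# Minimal extract-transform-load pipeline over two hypothetical CSV sources.
import csv
import sqlite3

def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(customers, orders):
    totals = {}
    for o in orders:
        totals[o["customer_id"]] = totals.get(o["customer_id"], 0.0) + float(o["amount"])
    # Combine the two sources into one derived record per customer
    return [{"id": c["id"], "name": c["name"].strip().title(),
             "total_spend": totals.get(c["id"], 0.0)} for c in customers]

def load(rows, db_path="warehouse.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS customer_spend (id TEXT, name TEXT, total_spend REAL)")
    conn.executemany("INSERT INTO customer_spend VALUES (:id, :name, :total_spend)", rows)
    conn.commit()

if __name__ == "__main__":
    load(transform(extract("customers.csv"), extract("orders.csv")))
```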

MapReduce: Similar to ETL, this is a concept where you read data from several sources, and “reduce” it down to a smaller set of data.
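
The same idea in miniature, in plain Python; real MapReduce frameworks distribute the map, shuffle and reduce phases across many machines, but the shape of the computation is the same.

```python
# MapReduce in miniature: map records to (key, value) pairs, group by key, reduce.
from collections import defaultdict
from functools import reduce

records = [("ANZ", 120.0), ("ANZ", 80.0), ("EMEA", 200.0)]

# Map phase: emit (key, value) pairs
mapped = [(region, amount) for region, amount in records]

# Shuffle phase: group values by key
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: collapse each key's values to a single result
reduced = {key: reduce(lambda a, b: a + b, values) for key, values in grouped.items()}
print(reduced)  # {'ANZ': 200.0, 'EMEA': 200.0}
```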

Amazon Web Services EMR: This is AWS’s set of tools for managing the above, including Hadoop, Spark, and Hive. (Be careful practicing with this one, as it can get very expensive because you’re creating clusters, which means you’re creating several servers!)

Conclusion

Becoming a data engineer is no small feat. It requires a lot of training. But people who work in data engineering tend to really enjoy their jobs. When it comes to acquiring new skills, take it slowly; have patience and practice. There will always be something new to learn.

Article originally posted on mongodb google news. Visit mongodb google news



7 Most Popular Real-Time Databases – Analytics India Magazine

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Listen to this story

While relational databases have been around for a long time, real-time databases have been on the rise because they allow data to be stored and synced in real time with almost zero latency. Streaming platforms like Netflix and Prime Video, along with companies that manage a large influx of data, rely on databases that can be monitored and that provide strong security and encryption; to that end, real-time databases are the preferred choice.

Check out the top seven real-time databases being used by big organisations; some have a small enough architecture to suit startups as well.

Redis

One of the most popular and reliable real-time databases for speed and simplicity, Redis has a highly scalable caching layer for the best enterprise performance. It can also identify data using AI-based transaction scoring for easier fraud detection. It is most commonly used for caching, as a message broker, and for deploying databases across clouds and hybrid environments.

Redis boasts latency of around 200 microseconds on a 1 GB/s network. It runs on macOS, Linux, Windows, and BSD, and is supported in virtually all languages.
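
As an illustration of the cache use case mentioned above, here is a minimal cache-aside sketch with the redis-py client, assuming a Redis server on localhost; the key names and the stand-in database lookup are hypothetical.

```python
# Cache-aside pattern with redis-py: check the cache, fall back to a slow lookup.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_profile(user_id):
    cached = r.get(f"profile:{user_id}")
    if cached is not None:
        return cached                                 # cache hit
    profile = f"profile-for-{user_id}"                # stand-in for a slow database lookup
    r.set(f"profile:{user_id}", profile, ex=300)      # cache for five minutes
    return profile

print(get_profile("42"))
```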

Firebase

A Google Developers product, Firebase is the rising star for real-time storing and syncing of data. It is a cloud-hosted NoSQL database that enables querying of app data at a global scale. It ships with mobile and web SDKs for building serverless applications, and users can also execute backend code in response to events triggered by the database.

Along with offering strong security with authentication, Firebase is also optimised for offline use: data is stored in a local cache and then uploaded and synchronised when the device reconnects. Compared with Redis and others, Firebase has a higher latency of around 100 milliseconds.

Aerospike

Enabling organisations to work across billions of transactions in seconds, Aerospike is another popular NoSQL real-time database. It is built for multi-cloud environments, large-scale JSON document workloads and SQL use cases. Owing to its patented Hybrid Memory Architecture, it has a very small footprint.

For storing 2 TB of data, Aerospike has less than 1 millisecond of latency, though it does not hold the data in memory. Compared with other real-time databases, it requires 80% less infrastructure, which makes it attractive for smaller organisations.

RethinkDB

The open-source, scalable RethinkDB makes building apps dramatically easier and takes much of the pain out of managing data. It allows you to query JSON documents from dozens of languages and to build modern apps with technologies like Socket.io or SignalR.

The intuitive web UI allows you to scale your app clusters in just a few clicks, with a simple API for precise control. A 16-node RethinkDB cluster has delivered latency of around 3 milliseconds, making it faster than many of its competitors.

Apache Kafka

Apache Kafka is an open-source distributed event streaming platform for high-performance pipelines, streaming analytics, and data integration. With built-in stream processing, Kafka can work with Postgres, JMS, Elasticsearch, AWS, and many more.

With latency of less than 2 milliseconds across clusters of machines, Kafka is extremely scalable and can be integrated with various other real-time databases, such as Hazelcast and RethinkDB.
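
A minimal producer sketch with the kafka-python client, assuming a broker on localhost:9092 and a topic named "events"; both are assumptions for illustration.

```python
# Publish one message to a Kafka topic; flush() blocks until it is acknowledged.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", key=b"order-1", value=b'{"status": "created"}')
producer.flush()
```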

AWS Kinesis

Amazon’s Kinesis makes it easier to process and analyse both collected and real-time data. It is fully managed on AWS, which underpins its scalability, and it allows data to be buffered while streaming applications run without servers to manage. The most notable application for this database is building video analytics applications.

Kinesis is very efficient for building application monitoring, fraud detection, and showcasing live leaderboards. 

Hazelcast

Hazelcast, a real-time data stream processing platform, enables you to build applications and take action immediately. It can be coded in languages such as Java, Node.js, Python, C++, and Go. Hazelcast can be used for many use cases, such as retail banking, AIOps and supply-chain logistics, among many other data management applications.

It is cloud-agnostic, and its average latency is around 2 milliseconds at a throughput of 18k/s.



Podcast: The Future of Service Mesh with Jim Barton

MMS Founder
MMS Jim Barton

Article originally posted on InfoQ. Visit InfoQ

Subscribe on:






Introduction [00:01]

Thomas Betts: Hi everyone. Today’s episode features a speaker from our QCon International Software Development Conferences. The next QCon Plus is online from November 30th to December 8th. At QCon, you’ll find practical inspiration and best practices on how to implement emerging software trends directly from senior software developers at early adopter companies to help you adopt the right patterns and practices. I’ll be hosting the modern APIs track so I hope to see you online. Learn more at qconferences.com.

When building a distributed system, we have to consider many aspects in the network. This has led to many tools to help software developers improve performance, optimize requests, or increase observability. Service meshes, sidecars, eBPF, layer three, layer four, layer seven, it can all be a bit overwhelming. Luckily, today I am at QCon San Francisco and I’m talking with Jim Barton, who’s presenting a talk called Sidecars, eBPF and the Future of Service Mesh.

Jim Barton is a field engineer for Solo.io whose enterprise software career spans 30 years. He’s enjoyed roles as a project engineer, sales and consulting engineer, product development manager, and executive leader of Techstars. Prior to Solo, he spent a decade architecting, building, and operating systems based on enterprise open source technologies at the likes of Red Hat, Amazon, and Zappos.

And I’m Thomas Betts, cohost of the InfoQ Podcast, lead editor for architecture and design at InfoQ, and an application architect at Blackbaud. Jim, welcome to the InfoQ Podcast.

Jim Barton: Thank you.

Fundamentals of service mesh [01:17]

Thomas Betts: Let’s start with setting the context. Your talk’s about the future of service mesh, but before we get to the future, can you bring us up to the present state of service mesh and what are we dealing with?

Jim Barton: Sure. So service mesh technology has been around for a few years now. We’ve gained a lot of experience over that time and experimented with some different architectures, and are really getting to a point now where I think we can make some intelligent observations based on the basic experience we have and how to take that forward.

Thomas Betts: How common are they in the industry? Are everybody using them? Is it just a few companies out there? What’s the adoption like?

Jim Barton: When I started at Solo a few years ago, it was definitely in the early adopter phase, and I think now what we’re seeing is we’re beginning to get more into, I’m going to say, mid adopters. It’s not strictly the avant garde who are willing to cut themselves and bleed all over their systems, but it’s beginning to be more mainstream.

The use of sidecars in service mesh [02:08]

Thomas Betts: So sidecars was another subheading on your talk. Where are they at? Are they just a concept like a container? It’s just where you run the code next to your application. And are there common products and packages that are intended for deployment as a sidecar?

Jim Barton: If you survey the history of service mesh technology, you’ll see that sidecars are a feature of just about all of the architectures that are out there today. Basically, what that means is we have certain crosscutting concerns that we’d like the application to be able to carry out. For example, very commonly security: being able to secure all intra-service communication using mTLS. And so sidecars are a very natural way to do that because we don’t want each application to have to do the heavy lifting associated with building those capabilities in. So if we can factor that capability out into a sidecar, it gives us the ability to remove that undifferentiated heavy lifting from the application developers and externalize that into infrastructure. So sidecars are probably the most common way that’s been implemented since the “dawn of service mesh technology” a few years ago.

Thomas Betts: They really go hand in hand. Most of the service meshes have the sidecar, and that’s just part of what you’re going to see and part of your architecture?

Jim Barton: Absolutely. I’d say if you survey the landscape, the default architecture is the sidecar injection based architecture.

What do application developers need to know about service mesh? [03:26]

Thomas Betts: I’ve had a few roles in my career, but network engineer was never one of them. So I know that layer seven is the application layer and layer one is the wire, but I have to be honest, I would have to look up on Google whatever is in between. You talked about how the whole point of the service mesh is to handle the things that the engineers and the application developers shouldn’t have to deal with. If they have the service mesh in place, what do they need to know versus what do they not need to know?

Jim Barton: And that’s part of the focus of this talk is beginning to change the equation of what they need to know versus what they don’t need to know. Ideally, we want the service mesh technology to be as transparent to application developers as possible. Of course, that’s an objective that we aspire to but we may never 100% reach, but it’s definitely something where we’re moving strongly in that direction.

Thomas Betts: And the other part of my intro with the layers is where does all this technology operate? Where does the service mesh and the sidecar fit in? Where does eBPF fit into this? And does the fact that they operate at different levels affect what the engineers need to know?

Jim Barton: Wow, there’s a lot to unpack in that question there. So I would say transparency, as I said before, is an ultimate goal here. We want to get to the point where application developers know as little as possible about the details of the mesh. Certainly, if you look at the way that developers would have solved these kinds of requirements five years ago, there would’ve been an awful lot of work required to integrate the various components. Let’s take mTLS as an example to integrate the components into each individual application to make that happen, so there was a very high degree of developer knowledge of this infrastructure piece. And then, as we’ve begun to separate out the cross-cutting concerns from the application workloads themselves, we’ve been able to, over time, decrease the amount of knowledge that developers have to have about each one of those layers. There’s still a fair amount, and that’s one of the things that we’re addressing in this talk and in the service mesh community in general is getting to a point where the networking components, whether it’s layer four or layer seven, will be isolated more distinctly from the services themselves.

mTLS is the most common starting point for service mesh adoption [05:32]

Thomas Betts: You mentioned mTLS a few times. What are some other common use cases you’re seeing?

Jim Barton: Sure, yeah. I mentioned mTLS just because we see that as probably the most common one that people come to a service mesh with. You look at regulated industries like financial services and telco, you look at government agencies with the Biden administration’s mandate for Zero Trust architectures, and mTLS security, and so forth. So we see people coming to us with that requirement an awful lot but yeah, certainly there are a lot of others as well.

The three pillars of service mesh are connect, secure, and observe, and so we typically see things falling into one of those three buckets. So security is a central pillar and, as I said before, is probably the most common initial use case that users come to us with.

But also just think about the challenges of observability in a container orchestration framework like Kubernetes. You have hundreds of applications, you have thousands of individual deployments that are out there, something goes wrong. How do you understand what’s happening in that maze of applications and interconnections? Well, in the old world of monolithic software, it wasn’t that difficult to do. You attach a debugger and you can see what’s happening inside the application stack. But in a distributed system, every debugging problem becomes a murder mystery where we have to figure out who did it, and where they were, and why. It just adds completely new sets of dimensions to the problem when you’re throwing networking into the mix at potentially every hop along the way. So I’d say observability is also a very common use case that people come to us with, with respect to service mesh technology. What’s going on? How can I see it? How can I understand that? And so service meshes are all about the business of producing that information in a form that can be analyzed and acted upon.

Also, from a connectivity standpoint, so the third pillar, connect, secure, observe, from a connectivity standpoint, there’s all kinds of complexities that can be managed in the mesh that in the past, certainly when my career started, you would handle those things in the application layer themselves. Things like application timeouts, failovers, fault injection for testing, and so forth. Even say at the edge of the application network, being able to handle more, let’s say, advanced protocols, maybe things like gRPC, or OpenAPI, or GraphQL, or SOAP, those are all pretty challenging connectivity problems that can be addressed within service mesh.

Service mesh and observability [08:01]

Thomas Betts: So going back to the observability, since this is something that’s at the network layer, is it just observability of the network or are we able to listen for things that are relevant to the application’s behavior? Is that still inside the app code to say I need this type of observability, this method was called, or can you detect all that?

Jim Barton: With the underlying infrastructure that’s used within the service mesh, just particularly if you look at things like Envoy Proxy, there are a whole lot of network and even some application level metrics that are produced as well. And so there’s a lot of that information that you really do get for free just out of the mesh itself. But of course, there’s always a place for individual applications publishing their own process-specific, application-specific metrics, and including that in the mix of things that you analyze.

Thomas Betts: And is that where things like OpenTelemetry, that you’d want to have all that stuff linked together so that you can say, “This network request, I can see these stats from service mesh and my trace gets called in with this application.”?

Jim Barton: Absolutely. Those kinds of standards are really critical, I think, for taking what can be a raft of raw data and assembling that into something that’s actually actionable.

The role of eBPF in service mesh [09:01]

Thomas Betts: So the observability, I think, is where eBPF, at least as I understand it, has really come into play, is that we’re able to now listen at the wire effectively and say, “I want to know what’s going on.” What exactly does that mean? Does it eliminate the need for the sidecar? Does it augment it? And does it solve a different problem?

Jim Barton: eBPF has gotten a ton of hype recently in the enterprise IT space, and I think a lot of that is well deserved. If you look at open source projects like Cilium at the CNI networking layer, it really does add a ton of value and a lot of efficiency, additional security capabilities, observability capabilities, that sort of thing. When we think about that from a service mesh standpoint, certainly there’s added value there, but most of what we see is eBPF being not so much a revolution as an evolution and an improvement in things that we already do.

Just to give you an example, things like short circuiting certain network connections. Whereas before, without eBPF, for a call going from A to B I might have to traverse a stack, go out over the network, traverse another layer seven stack, and then send that to the application. With eBPF, there are certain cases where I can short circuit some of that, particularly if it’s intra-node communication, and can actually make that quite a bit more efficient. So we definitely see enhancements from a networking standpoint as well as an observability and a security standpoint.

Roles of developers and operations [10:29]

Thomas Betts: When it comes to implementing the eBPF or a service mesh in general, is that really something that’s just for the infrastructure and ops teams to handle? Developers need to be involved in that? Is that part of the devops overlapping term or is there a good separation of what you need to know and what you should be capable of doing?

Jim Barton: That’s the great insight and a great observation. I think certainly our goal is to make the infrastructure as transparent as possible to the application. We don’t want to require developers, let’s say an application developer who’s using service mesh infrastructure to be required to understand the details of eBPF. eBPF is a pretty low level set of capabilities. You’re actually loading programs into the kernel and operating those in a sandbox within the kernel. So there’s a lot of gory details there that a typical application owner, developer would rather not have to be exposed to, and so one of our goals is to provide the goodness that that kind of infrastructure can deliver, but without surfacing it all the way up to the application level.

Thomas Betts: So are the ops teams able to then just implement service mesh without having to talk to the dev teams at all, or does it change the design decisions of the application to say because we have the service mesh in place and we know that it can handle these things, I can now reduce some of the code that I need to write, but they need to work hand in hand to make those decisions?

Jim Barton: Definitely ops teams and app dev teams, they obviously still need to communicate. I see a lot of different platform teams, ops teams in our business and I think the ones that are the most effective, it’s a little bit like an official at a sports event, at a basketball game, they’re most effective when you’re not aware of their presence. And so I think a good platform team operates in a similar way, they want to provide the app dev teams the tools they need to do what they do as efficiently as possible within the context of the values of the organization that they represent, but the good ones don’t want to get in the way. They want to make the app dev teams be as effective, and efficient, and as free flowing as possible without interfering.

Thomas Betts: Building on that, how does this affect the code that we write, if at all?

Jim Barton: There are some cases today where occasionally the service mesh can impact what you do in the application itself, but those are the kinds of use cases that the community is moving as aggressively as it can to root those out, to make those unnecessary anymore. And I think with a lot of the advances that are coming with the ambient mesh architecture that we’re going to talk about a fair amount in the talk tomorrow, that plays a big role in removing some of those cases where there does have to be a greater degree of infrastructure awareness on the application side.

Thomas Betts: Yeah, I think that’s always one of those trade offs that back when you were just doing a monolith and there was probably small development team, you had to know everything. When I started out, I had to know how to build the server, and write the code, and support it. And as you got to larger companies, larger projects, bigger applications, distributed applications, you started handing off responsibilities, but it also changed your architectural decisions sometimes, that you got to microservices because you needed independent teams to work independently. This is a platform service. You don’t have a service mesh for just one node of your Kubernetes cluster, this seems to be something that has to solve everyone’s problems. So who gets involved in that? Is it application architects? Is it developers on each of the teams to decide here’s how our service mesh path is going to go forward?

Jim Barton: So how do we separate responsibilities across the implementation of a service mesh? I would definitely say in the most effective teams that I see, the platform team frequently takes the lead. They’re often the ones who are involved earliest on because they’re the ones who have to actually lay down the bits, and set down the processes and the infrastructure that are going to allow the development teams to be effective in their jobs. That being said, there’s clearly a point where the application teams need to start onboarding onto the mesh, and so the goal is to make that process, and the operational process, as easy as possible. There definitely needs to be a healthy communication channel between the two, but I would say, from my experience at least, if the platform team does its job well, it should be a little bit like an official at a basketball game that’s being well officiated: they shouldn’t be at the top of my consciousness every day as an app developer.

Who makes decisions regarding service mesh configuration? [14:39]

Thomas Betts: I think the good deployment situations I’ve run into personally have been I just write my code, check it in, the build runs, it deploys, and look, there’s something running over there, and there’s a bunch of little steps in the build pipeline and the release pipeline that I don’t need to understand. And I know that I can call out to another service because I know how to write my code to call their services. How does that little piece change? Because I have my microservices and I know I have all these other dependencies. Is that another one of those who gets to decide how those names and basically the DNS problem, the DNS is always the problem, that we’re offloading the DNS problem to the service mesh in some ways to embrace that third pillar of connectivity. Where does that come into play? Is it just another decision that has to be made by the infrastructure team?

Jim Barton: I would say a lot of those decisions, at least in my experience, it can vary from team to team where those kinds of decisions get made, but definitely there are abstractions in the service mesh world that allow you to specify, “Hey, here is a virtual destination that represents a target service that I want to be able to invoke.” From an application client standpoint, it’s just a name. I’ve used an API, I’ve laid down some bits that basically abstracts out all that complexity. Here’s a name I can invoke, very easy, very straightforward way to invoke a service. Now, behind the scenes of that virtual destination, there may be all kinds of complexity. I may have multiple instances that are active at any point in time within my cluster, I may have another cluster that gets failed over to. If cluster one fails, I may fail over to another cluster that has that same service operational. In that case, there’s a ton of complexity behind the scenes that the service mesh ideally will hide from the application developer client. At least that’s the place that the people I work with want to get to.

OpenAPI and gRPC with service mesh [16:22]

Thomas Betts: And then I want to flip this around. So I deal with OpenAPI Swagger specs all the time, and that’s how I know how to call that other service. We’ve talked about how my code should be unaware of the service mesh underneath. Does the service mesh benefit if it knows anything about how my API is shaped? And is there a place for that?

Jim Barton: I would say ideally, not really. There are definitely a variety of API contract standards out there that have different levels of support, depending on what service mesh platform you’re using. OpenAPI, to take probably the most common example in today’s world, is generally pretty well supported. I certainly know that certainly in our context with Istio, that’s a very common pattern. And so there are others. Some of the more emerging standards, for example GraphQL, I think the support for that is less mature because it’s a newer technology and it’s more complex. It’s more about, whereas OpenAPI tends to take a service or a set of services and provide an interface on top of that, GraphQL, when you get to a large-scale deployment of that, potentially provides a single graph interface that can span multiple suites of applications that are deployed all over the place. So there’s definitely an additional level of complexity as we move to a standard like GraphQL that’s less well understood than say something that’s been around for a while, like OpenAPI.

Thomas Betts: The third one that’s usually thrown around with those two is gRPC. And that’s where gRPC relies on something a little bit different; it’s HTTP/2, and it’s got a few more network requirements. So just saying that you’re going to use gRPC already implies that you have built some knowledge of the network into your application. You’re not just relying on HTTP over port 80 or 443; everything’s different underneath that. Does that change again, or is that still in the wheelhouse of the service mesh: it just handles that, it’s fine, no big deal?

Jim Barton: I see gRPC used a lot internally within service meshes. I don’t see it used so much from an external facing standpoint, at least in the circles where I travel. So you’re absolutely correct, it does imply some underlying network choices with gRPC. I’d say because it is a newer, perhaps slightly less common standard than OpenAPI in practice, there’s probably not quite as pervasive support for it in the service mesh community, but certainly in a lot of cases it’s supported well and doesn’t really change the equation, from my standpoint, vis-a-vis something like OpenAPI.

Sustainability and maintainability [18:49]

Thomas Betts: So with all of the service mesh technology, does this help application long term sustainability and maintainability? Having good separation of concerns is always a key factor. This seems like it’s a good way to say, “I can be separated from the network because I don’t have to think about connectivity and retries because that’s no longer my application’s problem.” Is that something you’re seeing in the industry?

Jim Barton: Absolutely. I’d say being able to externalize those kinds of concerns from the application itself is really part and parcel of what service mesh brings to the table, specifically the connect pillar of service mesh value. So yeah, absolutely. Being able to externalize those kinds of concerns into a declarative policy, being able to express that in YAML, store it in a repo, manage it with a GitOps technology like an Argo or a Flux, and be able to ensure that policy is consistently applied throughout a mesh, there’s a ton of value there.

Service mesh infrastructure-as-code [19:43]

Thomas Betts: Your last point there about having the consistency, that seems to be very important. Microservices architecture, one of the jokes but it’s true, is that you went from having one problem to having this distributed problem, and your murder mystery is correct. You don’t even know that there is another house that the body could be buried in. So is this a place where you can see that service mesh comes in because it is here’s how the service mesh is implemented, all of our services take care of these things, we don’t have to worry about them getting out of sync, and that consistency is really important to have a good microservices architecture?

Jim Barton: Yeah, let me tell you a story. I spent many years at Red Hat and we did some long-term consulting engagements with a number of clients. One of them was a large public sector client whose name you would recognize, and we were working with them on a long-term, high touch, multi-week engagement. And we walked in one Monday morning and something happened, I still don’t know what it was, but our infrastructure was gone. Basically, it was completely hosed. And so in the old world, we would’ve been in a world of hurt. We’d have had to go debug, and so we start going down this path of all right, what happened? That’s our first instinct as engineers, we want to know what happened, what was the problem?

And we investigated that for a while and finally someone made the brilliant observation, “We don’t necessarily have to care about this. We’ve built this project right, we have all of the proper devops principles in place here, all we really have to do is go press this button. We can regenerate the infrastructure and by lunchtime we can be back on our development path again.” And so that was the day I became a believer, a true dyed-in-the-wool believer in devops technology, and it was just that it was the ability to produce that infrastructure consistently based on a specification, and it’s just invaluable.

Thomas Betts: I’ve seen that personally from one service, one server, one VM, but having it across every node in your network, across your Kubernetes, all of that being defined, and infrastructure as code is so important for all those reasons. That applies as well to the service mesh because you’re saying the service mesh is still managed in YAML stored in repo.

Jim Barton: It’s basically the same principles you would apply to a single service and simply adopting them at scale, pushing them out at scale.

Ambient Mesh [21:53]

Thomas Betts: We spent a lot of time talking about the present. I do want to take the last few minutes and talk about what’s the future? So what is coming? You mentioned ambient mesh architecture. Let’s just start with that. What’s an ambient mesh?

Jim Barton: Ambient mesh is the result of a process that Google and Solo have collaborated on over the past year, and it’s something we are really, really excited about at Solo. And so we identified a number of issues, again, from our experience working with clients who are actually implementing service mesh, things that we would like to improve, things that we would like to do better to make it more efficient, more repeatable, to make upgrades easier, just a whole laundry list of things. And so over the past year, we’ve been collaborating on this, both Google and Solo are leaders in the Istio community, and just released in the past month, the first down payment, I’m going to say, an experimental version of Istio ambient mesh into the Istio community. And so what that does is it basically takes the old sidecar model that’s the traditional service mesh model, and gives you the option of replacing that with a newer data plane architecture that’s based on a set of shared infrastructure resources as opposed to resources that are actually attached to the application workloads, which is how sidecar operates.

Benefits of shared service mesh infrastructure [23:12]

Thomas Betts: What are the benefits of having that shared infrastructure? Is it you only need to deploy 1 thing instead of 10 copies of the same thing?

Jim Barton: The benefits that we see are threefold, really. One is just from a cost of ownership standpoint, replacing a set of infrastructure that is attached to each workload, and being able to factor that out into shared infrastructure. There are a lot of efficiencies there. There’s a blog post out there that we produce based on some hopefully reasonable assumptions that estimate reduction in cost and so forth on the order of 75%. Obviously, your mileage varies depending on how you have things deployed, but there’s some significant savings from just an infrastructure resource consumption standpoint. So that’s one, is just cost of ownership.

A second benefit is just operational impact. And so we see, particularly for customers who do service mesh at scale, let’s pick a common one, let’s say that there’s an Envoy CVE at layer seven, and that requires you to upgrade all of the proxies across your service mesh. Well, that can become a fairly challenging task.

If you're operating at scale, you have to schedule some kind of downtime that can accommodate this rolling update across your entire service network so that you can apply those Envoy changes incrementally. That's a fairly costly, fairly disruptive process, again, when you're operating at scale. And so by separating that functionality out from the application sidecar into shared infrastructure, it makes that process a whole lot easier. Now the applications don't have to be recycled when you're doing an upgrade like that, it's just a matter of replacing the infrastructure that needs to be replaced, and the applications are none the wiser, they just continue to operate. It goes back to the transparency we were talking about before. We want the mesh to be as transparent as possible to the applications that are living in it.

We talked about cost of ownership, we talked about operational improvements, and third, we should talk about performance. One of the things we see when operating service meshes at scale is that all of these sidecar proxies add latency into the equation. And the larger your service network is and the more complex the service interactions are, the more latency gets added into the process with each service call. Part of that is because each one of those sidecars is a full layer-seven Envoy proxy, so it typically takes a couple of milliseconds to traverse the TCP stack. And if you're doing that both at the sending end and the receiving end on every service call, you can do the math on how that latency adds up.

And so by factoring things out with the new ambient mesh architecture, we see that for a lot of cases we can cut those numbers pretty dramatically. Let's take an mTLS use case, because, again, that's a very common one to start with. We can reduce that overhead from two layer-seven Envoy traversals at two milliseconds each to two layer-four traversals of a secure proxy tunnel at half a millisecond each. And so you do the math on a high-scale service and it cuts the additional latency and improves performance numbers quite a bit.

Thomas Betts: And obviously, that's what you're getting at, it makes sense at scale. This isn't a concern that you have with a small cluster; going from two milliseconds to half a millisecond makes a difference when you're talking about tens of thousands of calls.

Jim Barton: The customers we work with, I mean we work with a variety of customers. We work with small customers, we work with large customers. Obviously, the larger enterprises are the more demanding ones, and they're the ones who really care about these sorts of issues, the issues of operating at scale. And so we think that by partnering with those people, we're driving those issues out as early in this process as possible, so that when people come along with more modest requirements, they're simply non-issues for them.

Future enhancements for GraphQL and service mesh [27:06]

Thomas Betts: Was there anything else in the future of service mesh besides the ambient mesh?

Jim Barton: First of all, the innovation within the ambient mesh itself is not done. Solo and Google wanted to get something out there as quickly as possible that people could get their hands on, put through its paces, and give us feedback on.

But we already know there are innovations within that space that need to continue. For example, we discussed eBPF a bit earlier: optimizing some of the new proxies in this architecture to use things like eBPF, as opposed to just using a standard Envoy proxy implementation, to really drive the performance of those components as hard as we possibly can. So there are definitely some, I'm going to say, incremental changes, not architectural changes, that I think are going to drive the performance of ambient mesh.

So that’s certainly one thing, but we also see a lot of other innovations that are coming along the data plane of the service mesh as well. One that we are really excited about at Solo has to do with GraphQL technology. GraphQL to date has typically required a separate server component that’s independent of the rest of API gateway technology that you might have deployed in your enterprise. And we think fundamentally that’s a curious architecture. I have an API gateway today, it knows how to serve up OpenAPI, it knows how to serve SOAP, it knows how to serve gRPC maybe. Why is GraphQL this separate unicorn that needs its own separate server infrastructure that has to be managed? Two API gateway moving parts instead of one.

And so that's something that we've been working on a lot at Solo, and we have GraphQL solutions out there that allow you to absorb the specialized GraphQL server capabilities into just another Envoy filter on your API gateway, as opposed to having it be this separate unicorn that has to be managed separately. That's something that we see, again in the data plane of the service mesh, becoming even more important, because, my crystal ball, I don't know if I brought it with me today, but if I look into my crystal ball, GraphQL's adoption curve looks to be moving up pretty quickly. And we're excited about that, and we think there are some opportunities for really improving on that from an operational and an infrastructure standpoint.

Thomas Betts: So the interesting question about that is that most of the cases I hear about GraphQL, it's that front-end serving layer. I want my mobile app to be as responsive as possible, so it makes one call to the GraphQL server to do all the things and then the data's handled. That's why it's got that funny architecture you described. But I see service mesh as mostly on the backend, between the servers. Is this a service mesh at the edge then?

Jim Barton: Service mesh has a number of components. With the ambient mesh component that we've been talking about, we're talking, to a large extent, about the internal operations of the service mesh. But an important component of service mesh, and in fact one of the places where we see the industry converging, is the convergence of traditional standalone API gateway technology and service mesh technology. For example, let's take Istio in the service mesh space: there's always been a modest gateway component, the Istio ingress gateway, that ships with Istio.

It's not that full-featured; it's not something that, at least on its own as it comes out of the box, people would typically use to solve complex API gateway sorts of use cases. And so Solo has actually been in this space for a while and has brought a lot of its standalone API gateway expertise into the service mesh. In fact, we recently announced a product offering, part of the overall service mesh platform, called Gloo Gateway that addresses these very issues: being able to add a sophisticated, full-featured API gateway layer at the north-south boundary of the service mesh.

Thomas Betts: Well, I think that’s about it for our time. I want to thank Jim Barton for joining me today and I hope you’ll join us again for another episode with the InfoQ podcast.




Hashicorp’s Boundary Now Generally Available on HCP

MMS Founder
MMS Matt Saunders

Article originally posted on InfoQ. Visit InfoQ

Following a successful beta trial, HashiCorp has announced the general availability of Boundary on its cloud platform, HCP. This adds a key new element to HashiCorp's managed solution for zero-trust security.

First released in October 2020, Boundary is an open-source project providing identity-based access to an organization's applications and critical systems with fine-grained authorization. It does this without requiring users to manage credentials and without exposing an organization's network externally.

Many organizations moving to the cloud are adopting zero-trust security postures, with the new default being to trust nothing and no-one, and to authenticate and authorize everything. Existing software-defined perimeter solutions such as VPNs and privileged access management (PAM) tools tend to be IP-driven and laboriously manual, whereas Boundary gives users and hosts transparent, fine-grained access. HashiCorp's approach caters to multiple clouds, on-premises and hybrid environments, reducing attack vectors and protecting data at every stage. HCP Boundary provides the third leg of HashiCorp's zero-trust security solution – alongside HCP Consul and HCP Vault – providing automated workflows for users to access critical cloud-based infrastructure. Seamless onboarding of new users and managed resources also means that the day-to-day configuration overhead of a growing cloud infrastructure is minimized.

“At HashiCorp, we have always believed that identity is the foundation for zero trust security for applications, networks, and users. With HCP Boundary, companies now have a modern solution for privileged access management, securing access in dynamic, ephemeral environments for their workforce. We think we’ve reached an important milestone for our customers by delivering a security solution built for today’s threat and infrastructure landscape.” – Armon Dadgar, co-founder and CTO, HashiCorp

Boundary has been available in HCP (HashiCorp Cloud Platform) as a beta since June 2022, and now provides a production-ready managed service for remote access to cloud-based installations. HCP Boundary builds on the open-source version by adding enterprise functionality, such as dynamic credential injection through an integration with Vault. This allows users to access Boundary-managed hosts with single-use, passwordless authentication, minimizing the possibility of credential leaks.

Boundary integrates with existing identity providers that support OIDC, such as Azure Active Directory and Okta. It can automatically discover services using dynamic service catalogs from Microsoft Azure and AWS, and it can integrate with Terraform to discover resources to manage. Finally, sessions authorized with Boundary are logged and auditable, providing usage insights into sessions and events.

As HCP Boundary is a fully managed service, HashiCorp takes care of maintaining, managing and scaling production Boundary deployments, relieving admins of this responsibility. HCP Boundary is available for signup now.




Article: CloudFormation or Terraform: Which IaC Platform Is the Best Fit for You?

MMS Founder
MMS Omer Hamerman

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • While both CloudFormation and Terraform have the concept of modules, Terraform’s is better defined
  • State storage in Terraform requires special care as the state file is needed to understand the desired state and can contain sensitive information
  • CloudFormation excels at deploying AWS infrastructure whereas Terraform is best suited for dynamic workloads residing in multiple deployment environments where you want to control additional systems beyond the cloud
  • Many organizations choose to use Terraform for databases and high-level infrastructure and CloudFormation for application deployment
  • Since Terraform uses HCL, it can create a beneficial segregation between Ops and Dev, as developers may be less familiar with the language.

While both CloudFormation and Terraform are robust IaC platforms that offer efficient configuration and automation of infrastructure provisioning, there are a few key differences in the way they operate. CloudFormation is an AWS tool, making it ideal for AWS users looking for a managed service. Terraform, on the other hand, is an open-source tool created by Hashicorp which provides the full flexibility, adaptability, and community that the open-source ecosystem has to offer. These differences can be impactful depending on your specific environment, use cases, and several other key factors.  

In this post, I'll compare CloudFormation and Terraform based on important criteria such as vendor neutrality, modularity, state management, pricing, configuration workflow, and use cases to help you decide which one is the best fit for you.

But first, I’ll provide a bit of background on each platform and highlight the unique benefits that each of them brings to the table.

What is CloudFormation?

AWS CloudFormation is an Infrastructure as Code (IaC) service that enables AWS cloud teams to model and set up related AWS and third-party resources in a testable and reproducible format. 

The platform helps cloud teams focus on the application by abstracting away the complexities of provisioning and configuring resources. You declare resources in templates; CloudFormation then uses these templates to organize and automate the configuration of resources as well as AWS applications. It supports the various services of the AWS ecosystem, making it efficient for both startups and enterprises looking to continually scale up their infrastructure.

Key features of CloudFormation include:

  • Declarative Configuration with JSON/YAML
  • The ability to preview environment changes
  • Stack management actions for dependency management
  • Cross-region and cross-account stack management

What is Terraform?

Terraform is Hashicorp’s open-source infrastructure-as-code solution. It manages computing infrastructure lifecycles using declarative, human-readable configuration files, enabling DevOps teams to version, share, and reuse resource configurations. This allows teams to conveniently commit the configuration files to version-control tools for safe and efficient collaboration across departments. 

Terraform leverages plugins, also called providers, to connect with other cloud providers, external APIs, or SaaS providers. Providers help standardize, provision, and manage the infrastructure deployment workflow by defining individual units of infrastructure as resources.
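To make the provider and resource concepts concrete, here is a minimal, illustrative Terraform configuration, not taken from any particular project; the AWS region and bucket name are assumptions chosen for the example.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# The provider connects Terraform to a platform's API, in this case AWS.
provider "aws" {
  region = "ap-southeast-2"
}

# A resource declares an individual unit of infrastructure managed by that provider.
resource "aws_s3_bucket" "app_assets" {
  bucket = "example-app-assets-bucket"
}

Running terraform plan against a configuration like this shows the proposed changes before anything is created, which is what enables the versioned, reviewable workflow described above.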

Key features of Terraform include:

  • Declarative configurations via Hashicorp Configuration Language (HCL)
  • Support for local and remote execution modes
  • Default version control integration
  • Private Registry
  • Ships with a full API

Comparing CloudFormation and Terraform

Vendor Neutrality

The most well-known difference between CloudFormation and Terraform is the association with AWS. While you can access both tools for free, as an AWS product, CloudFormation is only built to support AWS services. Consequently, it is only applicable for deployments that rely on the AWS ecosystem of services. This is great for users who run exclusively on AWS, as they can leverage CloudFormation as a managed service for free and, at the same time, get support for new AWS services as soon as they're released.

In contrast, Terraform is open source and works coherently with almost all cloud service providers, such as Azure, AWS, and Google Cloud Platform. As a result, organizations using Terraform can provision, deploy, and manage resources on any cloud or on-premises infrastructure, making it an ideal choice for multi-cloud or hybrid-cloud users. Furthermore, because it is open source and modular, you can create a provider to wrap any kind of API or simply use an existing one. Many such providers have already been implemented by various vendors, which gives users far more flexibility and convenience and makes Terraform suitable for a greater variety of use cases.
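As a hedged sketch of that multi-cloud point, a single Terraform configuration can declare providers and resources for more than one cloud side by side; the regions, project ID, and resource names below are placeholders.

provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "example-gcp-project"   # placeholder GCP project ID
  region  = "us-central1"
}

# One configuration can manage comparable resources on both clouds.
resource "aws_sqs_queue" "events" {
  name = "events-queue"
}

resource "google_pubsub_topic" "events" {
  name = "events-topic"
}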

However, on the downside, Terraform often lags behind CloudFormation with regard to support for new cloud releases. As a result, Terraform users have to play catch-up when they adopt new cloud services.

Modularity

Modules exist for reusing and sharing common configurations, which keeps complex configurations simple and readable. Both CloudFormation and Terraform offer modules; however, CloudFormation's offering is newer and therefore not as mature as Terraform's.

CloudFormation has always offered ways to build module-like templates, and as of 2020 it offers out-of-the-box support for modules as well. Traditionally, CloudFormation leverages nested stacks, which allow users to import and export commonly used configuration settings. Over the past few years, however, CloudFormation has launched both public and private registries. As opposed to Terraform, CloudFormation offers its private registry out of the box, which enables users to manage their own code privately without the risk of others gaining access to it. Furthermore, CloudFormation's public registry offers a wide array of extensions such as MongoDB, Datadog, JFrog, Check Point, Snyk, and more.

Despite the fact that CloudFormation has come a long way in its modularity, Terraform has natively supported modularity from the get-go, making its registry more robust and easier to use. The Terraform registry contains numerous open-source modules that can be repurposed and combined to build configurations, saving time and reducing the risk of error. Terraform additionally offers native support for many third-party modules, which can be consumed by adding providers or plugins that support the resource type to the configuration.
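As an illustration of how registry modules are consumed, the snippet below pulls in the widely used terraform-aws-modules VPC module; the inputs shown are a small, assumed subset of what the module accepts, and the names and CIDR ranges are invented for the example.

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name            = "demo-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["ap-southeast-2a", "ap-southeast-2b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}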

State Management

One of the benefits of CloudFormation is that it can provision resources automatically and consistently perform drift detection on them. It bundles AWS resources and their dependencies in resource stacks, which it then uses to offer free, built-in support for state management. 

In contrast, Terraform stores state locally on disk by default. Remote storage is also an option, but the state file, which is written in a custom JSON format outlining the modelled infrastructure, must be carefully managed and configured. If you do not manage state storage properly, it can have disastrous repercussions.

Such repercussions include the inability to perform disaster recovery because drift has gone undetected, leading to extended downtime. This occurs when the state is impaired and the code cannot be run, which means recovery has to be done manually or started from scratch. Another negative repercussion is the state file unexpectedly becoming public. Since state files often store secrets, such as database keys or login details, this information is dangerous to your organization if it gets into the wrong hands; if attackers find state files, it is easier for them to attack your resources. This is an easy mistake to make since, generally speaking, Terraform users who manage their own state on AWS store the files in an S3 bucket, which is one way state files can be exposed publicly.

To combat this challenge, Terraform supports an efficient self-managed setup that takes care of state by leveraging AWS S3 and DynamoDB. In addition, users can use Terraform Cloud's remote state management to have state files maintained as a service.
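A minimal sketch of that self-managed pattern looks like the following; the bucket and DynamoDB table names are hypothetical, and both must already exist before the backend is initialized.

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"        # private, versioned bucket
    key            = "prod/network/terraform.tfstate"
    region         = "ap-southeast-2"
    encrypt        = true                              # encrypt the state object at rest
    dynamodb_table = "terraform-state-locks"           # table used for state locking
  }
}

Keeping the bucket private and versioned, and enabling encryption and locking, addresses the exposure and corruption risks described above.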

Pricing, License & Support

CloudFormation is a free service within AWS and is supported by all AWS pricing plans. The only cost associated with CloudFormation is that of the provisioned AWS service. 

Terraform is open source and free, but HashiCorp also offers a paid service called Terraform Cloud, which has several support plans, such as Team, Governance, and Business, that enable further collaboration. Terraform Cloud offers additional features like team management, policy enforcement, a self-hosted option, and custom concurrency. Pricing depends on the features used and the number of users.

Language

CloudFormation templates are built using JSON or YAML, while Terraform configuration files are written in HCL syntax. Although both are human-readable, YAML is widely used in modern automation and configuration platforms, which can make CloudFormation easier to adopt.

On the other hand, HCL enables more flexibility in configuration, but the language takes some getting used to.

It's also worth mentioning some IaC alternatives for those who prefer to use general-purpose programming languages. For example, in addition to CloudFormation, AWS offers the CDK, which enables users to provision resources using their preferred programming language. Terraform users can enjoy similar benefits with CDK for Terraform (CDKTF), which allows you to define configurations in Python, TypeScript, Java, C#, and Go. Alternatively, Pulumi offers an open-source IaC platform that can be used with a variety of familiar programming languages.

Configuration Workflow

With CloudFormation, templates are stored locally or in an AWS S3 bucket by default, and are then deployed using the AWS CLI or the AWS Console to build the resource stack.

Terraform uses a straightforward workflow that only relies on the Terraform CLI tool to deploy resources. Once configuration files are written, Terraform loads these files as modules, creates an execution plan, and applies the changes once the plan is approved. 

Use Cases

While both CloudFormation and Terraform can be used for most standard use cases, there are some situations in which one might be more ideal than the other. 

Being a robust, closed-source platform that is built to work seamlessly with other AWS services, CloudFormation is considered most suitable in a situation where organizations prefer to run deployments entirely on AWS and achieve full state management from the get-go. 

CloudFormation makes it easy to provision AWS infrastructure. Plus, you can more easily take advantage of new AWS services as soon as, or shortly after, they’re launched due to the native support, compliance, and integration between all AWS services. In addition, if you’re working with developers, the YAML language tends to be more familiar, making CloudFormation much easier to use. 

In contrast, Terraform is best suited for dynamic workloads residing in multiple deployment environments where you want to control additional systems beyond the cloud. Terraform offers providers specifically for this purpose whereas CloudFormation requires you to wrap them with your own code. Hybrid cloud environments are also better suited for Terraform. This is because it can be used with any cloud (not exclusively AWS) and can, therefore, integrate seamlessly with an array of cloud services from various providers–whereas this is almost impossible to do with CloudFormation. 

Furthermore, because it’s open source, Terraform is more agile and extendable, enabling you to create your own resources and providers for various technologies you work with or create.

Your given use case may also benefit from implementing both–for example, with multi-cloud deployments that include AWS paired with other public/private cloud services. 

Another example where both may be used in tandem is with serverless architecture. For example, at Zesty we use the Serverless Framework, which uses CloudFormation under the hood. However, we also use Terraform for infrastructure. Since we work with companies using different clouds, we want the ability to use the same technology to deploy infrastructure across multiple cloud providers, which makes Terraform the obvious choice for us. Another unplanned benefit is the natural segregation between Dev and Ops: because Ops tend to be more familiar with HCL, it creates boundaries that make it more difficult for another team to make a mistake or leave code open to attack.

In general, many organizations choose to use Terraform for databases and high-level infrastructure and CloudFormation for application deployment. They often do this because it helps to distinguish the work of Dev and Ops. It's easier for developers to start from scratch with CloudFormation because it uses YAML or JSON, formats every developer knows, whereas Terraform requires you to learn a different syntax. The benefit of creating these boundaries between Dev and Ops is that one team cannot interfere with another team's work, which makes it harder for human error or attacks to occur.

It’s worth noting that even if you don’t currently use both IaC platforms, it’s ideal to learn the syntax of each in case you wind up using one or the other in the future and need to know how to debug them. 

Summary

As we have seen, both CloudFormation and Terraform offer powerful IaC capabilities, but it is important to consider your workload, team composition, and infrastructure needs when selecting your IaC platform.

Because I’m partial to open-source technologies, Terraform is my IaC of choice. It bears the Hashicorp name, which has a great reputation in the industry as well as a large and thriving community that supports it. I love knowing that if I’m not happy with the way something works in Terraform, I can always write code to fix it and then contribute it back to the community. In contrast, because CloudFormation is a closed system, I can’t even see the code, much less change something within it. 

Another huge plus for Terraform is that it uses the HCL language which I prefer to work with over JSON/YAML. The reason for this is that HCL is an actual language whereas JSON and YAML are formats. This means when I want to run things programmatically, like running in a loop or adding conditionals, for example, I end up with far more readable and writable code. When code is easier to read, it’s easier to maintain. And since I’m not the only one maintaining this code, it makes everyone’s life easier. 
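For instance, here is a small, illustrative HCL fragment showing the kind of loop and conditional being referred to; the variable, local values, and bucket names are invented for the example.

variable "environment" {
  type    = string
  default = "dev"
}

locals {
  purposes = ["logs", "artifacts", "backups"]
}

# for_each creates one bucket per entry in the list; the conditional
# expression switches the naming scheme for production environments.
resource "aws_s3_bucket" "per_purpose" {
  for_each = toset(local.purposes)

  bucket = var.environment == "prod" ? "prod-${each.key}" : "dev-${each.key}"
}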

Another reason I prefer to use Terraform is due to our extensive use of public modules, which we needed to leverage prior to CloudFormation’s public registry offering.

While CloudFormation may be quicker to adopt new AWS features and manage the state for you for free, all things considered, I prefer the freedom that comes with open source, making Terraform a better choice for my use case.

Hope you found this comparison helpful!




.NET Upgrade Assistant Now Migrates WCF Services to CoreWCF

MMS Founder
MMS Edin Kapic

Article originally posted on InfoQ. Visit InfoQ

Sam Spencer, Microsoft's .NET Core team program manager, announced on November 4th, 2022 that the .NET Upgrade Assistant tool now includes a preview of an extension that migrates WCF (Windows Communication Foundation) service code from .NET Framework to .NET 6 and later versions. The WCF code is migrated to the CoreWCF library, an open-source port of WCF to .NET Core.

The WCF migration extension for the Upgrade Assistant can convert single ServiceHost instances. It reads the original configuration file and creates a new one for CoreWCF, moves the System.ServiceModel class references to CoreWCF ones, and migrates the ServiceHost instance to the ASP.NET Core hosting model with the UseServiceModel extension method.

Windows Communication Foundation (WCF) was launched with .NET Framework 3.0 in 2006 to unify the existing SOAP and RPC communication stacks into a single .NET programming model. Original WCF code usually relies heavily on XML configuration files for setting the endpoints, protocols, and communication attributes.

The migrated CoreWCF code moves those configuration settings into the startup code. The configuration file for CoreWCF is minimal, stating only the correspondence between the service contract and implementation classes.

Sample CalculatorService console application Main method after migration:

using System;
using CoreWCF;
using CoreWCF.Configuration;
using CoreWCF.Description;
using CoreWCF.Security;
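// Note: the snippet below also assumes an ASP.NET Core (.NET 6+) project where
// implicit usings bring in namespaces such as Microsoft.AspNetCore.Builder and
// Microsoft.Extensions.DependencyInjection.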

// Host the service within console application.
public static async Task Main() {
  var builder = WebApplication.CreateBuilder();

  // Set up port (previously this was done in configuration,
  // but CoreWCF requires it be done in code)
  builder.WebHost.UseNetTcp(8090);
  builder.WebHost.ConfigureKestrel(options => {
    options.ListenAnyIP(8080);
  });

  // Add CoreWCF services to the ASP.NET Core app's DI container
  builder.Services.AddServiceModelServices()
   .AddServiceModelConfigurationManagerFile("wcf.config")
   .AddServiceModelMetadata()
   .AddTransient<CalculatorService>();

  var app = builder.Build();
  // Enable getting metadata/wsdl
  var serviceMetadataBehavior =
   app.Services.GetRequiredService<ServiceMetadataBehavior>();
  serviceMetadataBehavior.HttpGetEnabled = true;
  serviceMetadataBehavior.HttpGetUrl = new Uri("http://localhost:8080/CalculatorSample/metadata");

  // Configure CoreWCF endpoints in the ASP.NET Core hosts
  app.UseServiceModel(serviceBuilder => {
    serviceBuilder.AddService<CalculatorService>(
     serviceOptions => {
       serviceOptions.DebugBehavior.IncludeExceptionDetailInFaults = true;
     });
    
    serviceBuilder.ConfigureServiceHostBase<CalculatorService>(
     serviceHost => {});
  });

  await app.StartAsync();
  Console.WriteLine("The service is ready.");
  Console.WriteLine("Press to terminate service.");
  Console.WriteLine();
  Console.ReadLine();
  await app.StopAsync();
}

.NET Upgrade Assistant is an automated command-line tool that migrates .NET Framework project files to the latest versions of .NET. It supports C# and Visual Basic projects and migrates different legacy technologies such as ASP.NET MVC, Windows Forms, Windows Presentation Foundation (WPF), Universal Windows Platform (UWP), and Xamarin.Forms. It can be run in analysis or upgrade mode. However, Microsoft warns that manual steps will still be needed to migrate the application code fully and recommends that users of the tool be familiar with the official .NET porting guidance.

The CoreWCF project was officially released in April 2022, although work on it began back in 2019. Its aim is to provide a subset of the most frequently used functionality of WCF services on the modern .NET platform. It is .NET Standard 2.0 compatible, allowing code to be migrated in place on .NET Framework 4.6.2 or above. It covers the HTTP and TCP transport protocols with the mainstream WCF bindings.

CoreWCF is not to be confused with the WCF client libraries project, which provides support for consuming WCF services from .NET applications, implementing the client side of the original WCF framework.

The features that aren't currently implemented in CoreWCF are the named pipes transport protocol (there are indications that it will be released later), legacy bindings such as WS2007HttpBinding or NetPeerTcpBinding, advanced message security configuration, and support for queues and distributed transactions. Due to the multi-platform nature of modern .NET, it is understandable that the Windows-specific functionality of WCF is not migrated, except for support for the Windows Authentication mechanism.

While Microsoft now supports CoreWCF as part of the general .NET platform, it recommends using modern communication protocols such as gRPC for new projects and leaving CoreWCF with the task of modernising server applications running on .NET Framework that have strong dependencies on WCF and SOAP.

Matt Connew, a member of the CoreWCF team, states that the long-term goal of CoreWCF is to become an umbrella communication framework, incorporating gRPC, Protobuf, and other protocols while shedding SOAP-specific architectural decisions from the library core. It seems to reflect a belief among .NET developers that WCF promised to be a generic communication framework supporting different protocols, but was bogged down by the complexity of its configuration and tooling support.

